
Every skin flake, hair follicle, eyelash, and spit drop cast from your body contains instructions written in a chemical code, one that is unique to you.

According to a new study, technology has advanced to the point that it’s now possible to sift scraps of human DNA out of the air, water, or soil and decipher personal details about the individuals who dropped them.

As useful as this might seem, the study’s authors warn society might not be prepared for the consequences.

Distributed denial-of-service (DDoS) attacks are growing in frequency and sophistication, thanks to the number of attack tools available for a couple of dollars on the Dark Web and criminal marketplaces. Numerous organizations became victims in 2022, from the Port of London Authority to Ukraine’s national postal service.

Security leaders are already combating DDoS attacks by monitoring network traffic patterns, implementing firewalls, and using content delivery networks (CDNs) to distribute traffic across multiple servers. But putting more security controls in place can also produce more DDoS false positives: legitimate traffic mistakenly flagged as an attack, which analysts must investigate and clear before the mitigation itself causes service disruptions and brand damage.

Rate limiting is often considered the best method for efficient DDoS mitigation: URL-specific rate limiting prevents 47% of DDoS attacks, according to Indusface’s “State of Application Security Q4 2022” report. The reality, however, is that few engineering leaders know how to use it well. Here’s how to employ rate limiting effectively while avoiding false positives.
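As a rough illustration of the idea (not any vendor’s actual implementation), URL-specific rate limiting is often built on a per-endpoint token bucket: each client/URL pair gets its own allowance, so a naturally busy endpoint doesn’t trip limits tuned for a sensitive one. The rates and burst sizes below are made-up examples.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Token bucket: permits short bursts of legitimate traffic while
    throttling sustained floods (the typical DDoS signature)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # burst allowance before throttling
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per (client, URL) pair: endpoint-specific limits reduce
# false positives, since a homepage and a login form see very different
# legitimate traffic levels. Values here are illustrative only.
buckets = defaultdict(lambda: TokenBucket(rate=5, capacity=10))

def handle_request(client_ip, url):
    """Return True if the request should be served, False if throttled."""
    return buckets[(client_ip, url)].allow()
```

Keeping buckets keyed by both client and URL is what makes the limiting “URL-specific”: one misbehaving client hammering one endpoint is throttled without affecting other clients or other endpoints.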


Amazon, the online retail behemoth, has long been quiet about its plans for conversational artificial intelligence, even as its rivals Google and Microsoft make strides in developing and deploying chatbots that can interact with users and answer their queries.

But a new pair of job postings may have just offered a glimpse into Amazon’s ambitions. The job postings, which were first discovered and reported by Bloomberg, described a new search functionality for Amazon’s web store that would feature a chat interface powered by a technology similar to ChatGPT, one of the world’s leading natural language AI systems.


AI technology is exploding, and industries are racing to adopt it as fast as possible. Before your enterprise dives headfirst into a confusing sea of opportunity, it’s important to explore how generative AI works, what red flags enterprises need to consider, and how to evolve into an AI-ready enterprise.

One of the most common and powerful approaches to generative AI is the large language model (LLM), such as GPT-4 or Google’s Bard. These are neural networks trained on vast amounts of text data from sources such as books, websites, social media, and news articles. They learn the patterns and probabilities of language by predicting the next word in a sequence. For example, given the input “The sky is,” the model might predict “blue,” “clear,” “cloudy” or “falling.”
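The next-word objective described above can be shown at miniature scale. The sketch below is not how an LLM is actually built (real models are large neural networks trained on enormous corpora); it is a toy bigram counter over a made-up corpus that illustrates the same idea of predicting the most probable next word.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only). A real LLM trains on billions of words.
corpus = (
    "the sky is blue . the sky is clear . "
    "the sky is cloudy . the sky is blue ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word, k=3):
    """Return the k most probable next words after `word`, with probabilities."""
    total = sum(follows[word].values())
    return [(nxt, count / total) for nxt, count in follows[word].most_common(k)]

# "blue" appears after "is" most often in this corpus, so it ranks first.
print(predict("is"))
```

Swapping this frequency table for a neural network that generalizes across contexts, and the toy corpus for trillions of tokens, is essentially the jump from this sketch to GPT-4-class models.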

A Google AI document leaked a few days ago contains a striking admission: the tech giant acknowledges it is being outpaced by open-source AI. This video takes you through the details of the leak, highlighting how open-source solutions are rapidly closing the quality gap and becoming more capable, faster, and more private than the models developed by industry leaders like Google and OpenAI. We delve into what this means for the future of AI development, focusing on the role of open-source models, LoRA (Low-Rank Adaptation), and the growing influence of public involvement.

The full article can be found here:
https://natural20.com/google-ai-documents-leak/

The feature image you see above was generated by an AI text-to-image model called Stable Diffusion, which typically runs in the cloud via a web browser, driven by data center servers with big power budgets and a ton of silicon horsepower. The image above, however, was generated by Stable Diffusion running on a smartphone in airplane mode, with no connection to that cloud data center and no connectivity whatsoever. The model was powered by a Qualcomm Snapdragon 8 Gen 2 mobile chip on a device that operates at roughly 7 watts or less.

It took Stable Diffusion only a few short phrases and 14.47 seconds to render this image.


This is an example of a 540p input image being upscaled to 4K resolution, which results in much cleaner lines, sharper textures, and a better overall experience. Qualcomm already ships a non-AI version of this today, called Snapdragon GSR, but someday mobile enthusiast gamers will be treated to even better levels of image quality, without sacrificing battery life and at even higher frame rates.
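For scale, 540p to 4K is a clean 4x factor in each dimension (540 × 4 = 2160 lines, 960 × 4 = 3840 columns). The sketch below is only the crudest possible baseline, nearest-neighbor replication, which simply copies each pixel into a 4×4 block; Snapdragon GSR and ML super-resolution exist precisely because they reconstruct far cleaner edges and textures than this does.

```python
def upscale_nearest(pixels, factor):
    """Nearest-neighbor upscale of a 2D grid of pixel values.
    Each source pixel becomes a factor x factor block in the output.
    This is the baseline that smarter upscalers improve upon."""
    out = []
    for row in pixels:
        # Repeat each pixel horizontally...
        wide = [p for p in row for _ in range(factor)]
        # ...then repeat the widened row vertically (as independent copies).
        out.extend(list(wide) for _ in range(factor))
    return out

# Tiny 2x2 grayscale checkerboard, upscaled 4x (the same ratio as 540p -> 4K).
tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny, 4)
```

Blowing a 540x960 frame up this way yields blocky stair-stepped edges; algorithmic filters like GSR and learned super-resolution models trade a little compute for the much cleaner result described above.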

This is just one example of gaming and media enhancement with pre-trained, quantized machine learning models, but it’s easy to imagine a myriad of applications that could benefit greatly, from recommendation engines to location-aware guidance to computational photography techniques and more.