
This Is How AI Can Use Your Wi-Fi As A Camera To Spy On You

In recent years, artificial intelligence researchers have been finding remarkable new uses for technology that is already in our homes. One intriguing example is using WiFi routers as virtual cameras that map a home and detect the presence and locations of the people inside, somewhat as an MRI machine maps the body. The technique pairs AI algorithms with ordinary WiFi signals to monitor human presence in indoor spaces without installing a single camera. In this article, we will look at how the technology works, what it can do, and what it may mean for the future of smart homes and security.

The Foundation of WiFi Imaging: WiFi imaging, also known as radio frequency (RF) sensing, leverages the signals a WiFi router is already emitting. These signals interact with the surrounding environment, reflecting off objects and people within range. AI algorithms then process the alterations in the received signals to form an image of the indoor space, revealing the occupants and their movements. Unlike a traditional camera, WiFi imaging can sense through walls and obstructions, and because it records no photographic detail it is often presented as a way to monitor presence without capturing identifiable images.
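
To make the sensing idea concrete, here is a minimal sketch of the core signal-processing step, assuming access to per-subcarrier channel state information (CSI) amplitudes from a commodity router; the function name, the threshold, and the synthetic data are illustrative, not from any published system.

```python
import numpy as np

def detect_motion(csi_amplitudes: np.ndarray, threshold: float = 0.1) -> bool:
    """Toy presence detector over WiFi channel state information (CSI).

    csi_amplitudes has shape (time_steps, subcarriers): the received
    amplitude of each OFDM subcarrier over a short window. A static room
    yields nearly constant amplitudes; a moving person perturbs the
    multipath reflections and raises their variance over time.
    """
    # Normalize each subcarrier so the statistic is scale-free.
    normalized = csi_amplitudes / (csi_amplitudes.mean(axis=0) + 1e-9)
    # Average the temporal standard deviation across subcarriers.
    activity = normalized.std(axis=0).mean()
    return bool(activity > threshold)

# A quiet room versus one with a simulated moving reflector.
rng = np.random.default_rng(0)
quiet = 1.0 + 0.01 * rng.standard_normal((200, 64))
moving = quiet + 0.2 * np.sin(np.linspace(0, 20, 200))[:, None]
print(detect_motion(quiet), detect_motion(moving))  # False True
```

Real systems go far beyond this, to localization and even pose estimation, but the principle is the same: a person moving through the multipath field leaves a measurable statistical fingerprint in signals the router was sending anyway.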

AI Algorithms in WiFi Imaging: The heart of this technology lies in the powerful AI algorithms that interpret the fluctuations in WiFi signals and translate them into meaningful data. Machine learning techniques, such as neural networks, play a pivotal role in recognizing patterns, identifying individuals, and discerning between static objects and moving entities. As the AI model continuously learns from the WiFi data, it enhances its accuracy and adaptability, making it more proficient in detecting and tracking people over time.
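
As a hedged illustration of the learning step, the toy below trains a logistic-regression classifier, a stand-in for the far larger neural networks the article describes, on synthetic CSI-variance features; all data and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented training data: per-window CSI variance features (as in the
# sketch above), labeled 1 when a person was present during the window.
X = np.vstack([0.01 + 0.005 * rng.random((100, 8)),   # empty room
               0.10 + 0.050 * rng.random((100, 8))])  # person moving
y = np.array([0] * 100 + [1] * 100)

# Plain logistic regression trained by gradient descent; published
# systems use much larger networks, but the learning loop is alike.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted presence probability
    w -= 0.5 * X.T @ (p - y) / len(y)     # gradient step on the weights
    b -= 0.5 * (p - y).mean()             # gradient step on the bias

accuracy = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```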

Researcher Claims to Crack RSA-2048 With Quantum Computer

A scientist claims to have developed an inexpensive system for using quantum computing to crack RSA, which is the world’s most commonly used public key algorithm.


The response from multiple cryptographers and security experts is: Sounds great if true, but can you prove it? “I would be very surprised if RSA-2048 had been broken,” Alan Woodward, a professor of computer science at England’s University of Surrey, told me.
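
For readers wondering what "breaking RSA-2048" would actually mean: whoever can factor the public modulus can rebuild the private key. The toy below, with deliberately tiny primes, shows the whole pipeline; the only thing protecting real RSA is that the factoring step is hopeless at 2048 bits on classical hardware.

```python
# Toy RSA with deliberately tiny primes. At 2048 bits the factoring step
# below is infeasible classically; Shor's algorithm on a large,
# fault-tolerant quantum computer could do it, which is why claims like
# this one draw such scrutiny.
p, q = 61, 53                    # secret primes (tiny for illustration)
n, e = p * q, 17                 # public key: modulus and exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)  # anyone can encrypt with the public key
assert pow(ciphertext, d, n) == message

# The "attack": factor n by brute force, then rebuild the private key.
f = next(k for k in range(2, n) if n % k == 0)
d_recovered = pow(e, -1, (f - 1) * (n // f - 1))
print(pow(ciphertext, d_recovered, n) == message)  # True
```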

Late not great—imperfect timekeeping places significant limit on quantum computers

New research from a consortium of quantum physicists, led by Trinity College Dublin’s Dr. Mark Mitchison, shows that imperfect timekeeping places a fundamental limit on quantum computers and their applications. The team claims that even tiny timing errors add up to significantly degrade any large-scale algorithm, posing another problem that must eventually be solved if quantum computers are to fulfill the lofty aspirations that society has for them.
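
A rough intuition for the claim can be had with a toy model, assuming (simplistically) that clock jitter turns each gate's rotation angle into a slightly wrong one and that the rotations share an axis so their errors simply add; none of this is the paper's actual formalism.

```python
import numpy as np

rng = np.random.default_rng(2)

def fidelity_after(num_gates: int, jitter: float, trials: int = 200) -> float:
    """Average fidelity after many X-rotations whose durations jitter.

    Each gate should rotate by pi/2 but actually rotates by
    pi/2 * (1 + eps), with eps drawn from a Gaussian clock-noise model.
    Same-axis rotations commute, so only the summed angle error matters.
    """
    theta = np.pi / 2
    fids = []
    for _ in range(trials):
        net_error = theta * jitter * rng.standard_normal(num_gates).sum()
        # Overlap of actual and ideal final states for a qubit from |0>.
        fids.append(np.cos(net_error / 2) ** 2)
    return float(np.mean(fids))

for n in (10, 100, 1000, 10000):
    print(n, round(fidelity_after(n, jitter=1e-2), 4))
# Fidelity drifts from ~1.0 toward 0.5 (a coin flip) as gates accumulate.
```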

The paper is published in the journal Physical Review Letters.

It is difficult to imagine modern life without clocks to help organize our daily schedules; with a digital clock in every person’s smartphone or watch, we take precise timekeeping for granted—although that doesn’t stop people from being late.

Like Humans, This Breakthrough AI Makes Concepts Out of the Words It Learns

Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a “dog” or what it means to “jump” or “skip.” These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, “But that’s not a dog!”

Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.
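
A toy version of the kind of task used to probe this, with invented words standing in for the study's stimuli (the actual benchmark and model are considerably richer):

```python
# Invented primitive words map to actions; function words recombine them.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}
MODIFIERS = {"twice": 2, "thrice": 3}

def interpret(phrase: str, lexicon: dict) -> list:
    tokens = phrase.split()
    action = lexicon[tokens[0]]
    repeat = MODIFIERS[tokens[1]] if len(tokens) > 1 else 1
    return [action] * repeat

print(interpret("dax twice", PRIMITIVES))   # ['RED', 'RED']

# Generalization: one exposure to a new word ("zup") is enough to reuse
# every construction already known.
lexicon = {**PRIMITIVES, "zup": "YELLOW"}
print(interpret("zup thrice", lexicon))     # ['YELLOW', 'YELLOW', 'YELLOW']
```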

When pitted against adult humans in a language task for generalization, the model matched their performance. It also beat GPT-4, the AI model behind ChatGPT.

Research claims novel algorithm can exactly compute information rate for any system

75 years ago, Claude Shannon, the “father of information theory,” showed how information transmission can be quantified mathematically, namely via the so-called information transmission rate.

Yet, until now this quantity could only be computed approximately. AMOLF researchers Manuel Reinhardt and Pieter Rein ten Wolde, together with a collaborator from Vienna, have now developed a simulation technique that—for the first time—makes it possible to compute the information rate exactly for any system. The researchers have published their results in the journal Physical Review X.

To calculate the information rate exactly, the AMOLF researchers developed a novel simulation algorithm. It works by representing a complex physical system as an interconnected network that transmits the information via connections between its nodes. The researchers hypothesized that by looking at all the different paths the information can take through this network, it should be possible to obtain the information rate exactly.
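
To see what quantity is at stake, here is a small illustration, emphatically not the AMOLF algorithm: for a binary symmetric channel the information transmitted per symbol has a closed form, and a Monte Carlo estimate from simulated transmissions converges to it. The new method's point is to obtain exact answers for systems where no such formula exists.

```python
import numpy as np

def h(p: float) -> float:
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Closed-form rate for a binary symmetric channel with flip chance 0.1
# and uniform inputs: 1 - h(p) bits per symbol.
p_flip = 0.1
analytic = 1 - h(p_flip)

# Monte Carlo estimate of the same mutual information from simulation.
rng = np.random.default_rng(3)
x = rng.integers(0, 2, 1_000_000)
y = np.where(rng.random(x.size) < p_flip, 1 - x, x)  # noisy transmission
joint = np.histogram2d(x, y, bins=2)[0] / x.size
px, py = joint.sum(axis=1), joint.sum(axis=0)
mi = sum(joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
         for i in range(2) for j in range(2) if joint[i, j] > 0)

print(f"analytic: {analytic:.4f} bits, sampled: {mi:.4f} bits")
```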

Memes, Genes, and Brain Viruses

Go to https://brilliant.org/EmergentGarden to get a 30-day free trial + the first 200 people will get 20% off their annual subscription.

Welcome to my meme complication video lolz rolf!!1! if you laugh you lose and you have to start the video over!! or if you blink or utilize any facial muscles! failing to restart or share the video will result in infinite suffering, success will result in infinite bliss.
☻/
/ ▌ copy me.
/ \

My Links
Patreon: https://www.patreon.com/emergentgarden
Discord: https://discord.gg/ZsrAAByEnr
Music: https://youtube.com/@acolyte-compositions

Sources:
Richard Dawkins TED: https://www.youtube.com/watch?v=mvJVseKdlJA
Orange Justice: https://www.youtube.com/watch?v=i0E10FASz2c
Starling imitating camera: https://www.youtube.com/watch?v=S4IDaGo6a40
Baby imitating clapping: https://www.youtube.com/watch?v=uq1eZ8VZpSc
Cinnamon challenge guy: https://www.youtube.com/watch?v=IqmhuYtldd0
Dawkins “It works…bitches”: https://www.youtube.com/watch?v=0OtFSDKrq88
Virus reproduction animation: https://www.youtube.com/watch?v=QHHrph7zDLw
DNA replication animation: https://www.youtube.com/watch?v=7Hk9jct2ozY
Old bicycle: https://www.youtube.com/watch?v=p--RKyDFIAk
Charlie bit me: https://www.youtube.com/watch?v=0EqSXDwTq6U
Vsauce fingers: https://www.youtube.com/watch?v=UMwEbsbLw3k
Unknown Video (do not watch): https://www.youtube.com/watch?v=dQw4w9WgXcQ
The Internette: https://www.youtube.com/watch?v=Y5BZkaWZAAA
Beirut 2020 explosion: https://www.youtube.com/watch?v=LNDhIGR-83w
Rotating Duck: https://www.youtube.com/watch?v=ggDbDRUauzo
Free Real Estate: https://www.youtube.com/watch?v=cd4-UnU8lWY
Do I look like I know what a jpeg is: https://www.youtube.com/watch?v=QEzhxP-pdos
Free Real Estate asmr meme: https://www.youtube.com/watch?v=CCSE6x8T0jc
Silly red bird: https://www.youtube.com/watch?v=kUJw2eVYznw
Silly siamangs: https://www.youtube.com/watch?v=-JUhUI_KvUI
Strong vsauce: https://www.youtube.com/shorts/crVjtJyc-r4
Jazz Frog TikTok: https://www.youtube.com/watch?v=0BWiMF8_v6M
Barry Algorithm: https://www.youtube.com/watch?v=ktAbh39aoU8
AI Pizza Ad: https://www.youtube.com/watch?v=qSewd6Iaj6I
Dawkins weird meme song: https://www.youtube.com/watch?v=2tIwYNioDL8
I could not realistically credit everything.


New AI Model Counters Bias In Data With A DEI Lens

AI has exploded onto the scene in recent years, bringing both promise and peril. Systems like ChatGPT and Stable Diffusion showcase the tremendous potential of AI to enhance productivity and creativity. Yet they also reveal a dark reality: the algorithms often reflect the same systemic prejudices and societal biases present in their training data.

While the corporate world has quickly capitalized on integrating generative AI systems, many experts urge caution, considering the critical flaws in how AI represents diversity. Whether it’s text generators reinforcing stereotypes or facial recognition exhibiting racial bias, the ethical challenges cannot be ignored.
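
One simple, widely used check, shown here with synthetic data and not tied to any particular product's method, is to compare a model's positive-outcome rates across demographic groups:

```python
import numpy as np

# A minimal bias audit: compare approval rates between two groups.
# Group labels, scores, and the injected bias are synthetic placeholders.
rng = np.random.default_rng(4)
group = rng.integers(0, 2, 10_000)          # 0 / 1: two demographic groups
score = rng.random(10_000) + 0.05 * group   # scores skewed toward group 1
approved = score > 0.5

rates = [approved[group == g].mean() for g in (0, 1)]
print(f"approval rates: {rates[0]:.3f} vs {rates[1]:.3f}, "
      f"demographic parity gap: {abs(rates[0] - rates[1]):.3f}")
```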

AI-ready architecture doubles power with FeFETs

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature Communications (“First demonstration of in-memory computing crossbar using multi-level Cell FeFET”), the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms and robotic applications.

  • The new architecture enables both data storage and calculations to be carried out on the same transistors, boosting efficiency and reducing heat (a toy sketch of the idea follows this list).
  • The chip performs at 885 TOPS/W, significantly outperforming current CMOS chips, which operate in the range of 10–20 TOPS/W, making it well suited to applications like real-time drone calculations, generative AI, and deep learning algorithms.
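
Here is the toy sketch promised above of the in-memory multiply-accumulate idea: in a crossbar, stored conductances play the role of weights and an entire matrix-vector product happens in one analog step. The numbers are placeholders, not FeFET device values.

```python
import numpy as np

# Toy model of an in-memory crossbar multiply-accumulate: stored
# conductances G (the weights) multiply applied voltages V (the inputs),
# and currents sum along each column by Kirchhoff's current law.
rng = np.random.default_rng(5)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductance of each cell (weights)
V = rng.uniform(0.0, 0.2, size=4)        # input voltages on the rows

I = V @ G   # column currents = analog matrix-vector product, in one step

# A conventional chip computes the same result with 4*8 multiplies and
# adds, shuttling weights in from memory each time; the crossbar does not.
print(np.allclose(I, sum(V[i] * G[i] for i in range(4))))   # True
```
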
Atom Computing is the first to announce a 1,000+ qubit quantum computer

How many qubits would a quantum computer need, and how widely available would it have to be, before we truly have something sci-fi worthy?


Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180-qubit quantum computer and will be making it available to customers next year. The system represents a major step forward for the company, which had only built one prior system based on neutral atom qubits—a system that operated using only 100 qubits.

The error rate for individual qubit operations is high enough that it won’t be possible to run an algorithm that relies on the full qubit count without it failing due to an error. But it does back up the company’s claims that its technology can scale rapidly and provides a testbed for work on quantum error correction. And, for smaller algorithms, the company says it’ll simply run multiple instances in parallel to boost the chance of returning the right answer.
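
The parallel-instances trick rests on simple probability: if one noisy run succeeds with probability p and a candidate answer can be checked, then k independent runs fail only if all of them do. A back-of-envelope sketch:

```python
# If one noisy run gives the correct, checkable answer with probability
# p, then k independent parallel runs all fail with probability (1-p)^k.
def success_probability(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

for k in (1, 10, 100):
    print(k, round(success_probability(0.05, k), 3))  # 0.05, 0.401, 0.994
```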

Computing with atoms

Atom Computing, as its name implies, has chosen neutral atoms as its qubit of choice (there are other companies that are working with ions). These systems rely on a set of lasers that create a series of locations that are energetically favorable for atoms. Left on their own, atoms will tend to fall into these locations and stay there until a stray gas atom bumps into them and knocks them out.

Eureka: With GPT-4 overseeing training, robots can learn much faster

On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model for designing training goals (called “reward functions”) to enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly using massively parallel simulations that run through trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.”
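
Schematically, and with stubs standing in for GPT-4 and the Isaac Gym simulator (none of the function names below are Nvidia's APIs), the Eureka-style loop looks something like this:

```python
import random

def llm_propose_rewards(feedback: str, n: int = 4):
    """Stub standing in for GPT-4 writing n candidate reward functions."""
    return [lambda s, w=random.random(): -abs(s - w) for _ in range(n)]

def evaluate_in_sim(reward_fn) -> float:
    """Stub standing in for massively parallel simulated rollouts."""
    return sum(reward_fn(random.random()) for _ in range(100)) / 100

feedback = "initial task description"
best_score = float("-inf")
for _ in range(3):                                       # evolutionary rounds
    candidates = llm_propose_rewards(feedback)           # propose rewards
    scores = [evaluate_in_sim(fn) for fn in candidates]  # score in simulation
    best_score = max(best_score, max(scores))
    feedback = f"best score so far: {best_score:.3f}"    # reflect, then repeat
print(feedback)
```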