
A programmable surface plasmonic neural network to detect and process microwaves

AI tools based on artificial neural networks (ANNs) are being introduced in a growing number of settings, helping humans to tackle many problems faster and more efficiently. While most of these algorithms run on conventional digital devices and computers, electronic engineers have been exploring the potential of running them on alternative platforms, such as diffractive optical devices.

A research team led by Prof. Tie Jun Cui at Southeast University in China has recently developed a new programmable neural network based on a so-called spoof surface plasmon polariton (SSPP), a surface electromagnetic wave that propagates along planar interfaces. This newly proposed surface plasmonic neural network (SPNN) architecture, introduced in a paper in Nature Electronics, can detect and process microwaves, which could be useful for wireless communication and other technological applications.

“In digital hardware research for the implementation of ANNs, optical neural networks and diffractive deep neural networks recently emerged as promising solutions,” Qian Ma, one of the researchers who carried out the study, told Tech Xplore. “Previous research focusing on optical neural networks showed that simultaneous high-level programmability and nonlinear computing can be difficult to achieve. Therefore, these ONN devices usually have been limited to fixed designs without programmability, or only applied to simple recognition tasks (i.e., linear problems).”

Google Quantum AI Braids Non-Abelian Anyons — A Breakthrough That Could Revolutionize Quantum Computing

In a paper published in the journal Nature on May 11, researchers at Google Quantum AI announced that they had used one of their superconducting quantum processors to observe the peculiar behavior of non-Abelian anyons for the first time ever. They also demonstrated how this phenomenon could be used to perform quantum computations. Earlier this week the quantum computing company Quantinuum released another study on the topic, complementing Google’s initial discovery. These new results open a new path toward topological quantum computation, in which operations are achieved by winding non-Abelian anyons around each other like strings in a braid.

Trond I. Andersen, Google Quantum AI team member and first author of the manuscript, says, “Observing the bizarre behavior of non-Abelian anyons for the first time really highlights the type of exciting phenomena we can now access with quantum computers.”

Imagine you’re shown two identical objects and then asked to close your eyes. Open them again, and you see the same two objects. How can you determine if they have been swapped? Intuition says that if the objects are truly identical, there is no way to tell.
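For ordinary particles that intuition holds: an exchange at most multiplies the quantum state by a number, so the order of swaps never matters. Non-Abelian anyons break it: an exchange acts as a matrix on a degenerate state space, and matrices need not commute. The following is a minimal numerical sketch, not taken from the Google paper; the two 2×2 unitaries are a commonly quoted braid-group representation from Ising-anyon models, with global phases dropped for simplicity.

```python
import numpy as np

# Abelian case (bosons/fermions): each swap is a scalar phase,
# so swap order can never be detected.
boson_phase, fermion_phase = 1.0, -1.0
assert boson_phase * fermion_phase == fermion_phase * boson_phase

# Non-Abelian case: each swap is a unitary matrix acting on a
# degenerate state space (here 2-dimensional).
s1 = np.diag([1.0, 1.0j])                         # braid particles 1 and 2
s2 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)  # braid particles 2 and 3

# The order of braids changes the final state: s1 @ s2 != s2 @ s1.
# This non-commutativity is what lets braiding implement quantum gates.
print(np.allclose(s1 @ s2, s2 @ s1))  # False
```

Because the final state depends on the braiding history rather than on fine microscopic details, topological quantum computation is expected to be intrinsically robust to local noise.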

Google Unveils Plan to Demolish the Journalism Industry Using AI

At its annual I/O conference in San Francisco this week, the search giant finally lifted the lid on its vision for AI-integrated search, and that vision, apparently, involves cutting digital publishers off at the knees.

Google’s new AI-powered search interface, dubbed “Search Generative Experience,” or SGE for short, involves a feature called “AI Snapshot.” Basically, it’s an enormous top-of-the-page summarization feature. Ask, for example, “why is sourdough bread still so popular?” — one of the examples that Google used in its presentation — and, before you get to the blue links that we’re all familiar with, Google will provide you with a large language model (LLM)-generated summary. Or, we guess, snapshot.

“Google’s normal search results load almost immediately,” The Verge’s David Pierce explains. “Above them, a rectangular orange section pulses and glows and shows the phrase ‘Generative AI is experimental.’ A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more.”

The Unexpected Truth About Google AI // 100k Giveaway

In this video, I discuss whether the future of AI is in open source.

The Leaked Document: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither.
The Book: When The Heavens Went on Sale: https://amzn.to/3Il3rNF
Vicuna Chatbot: https://chat.lmsys.org.

00:00 — Google Has No Moat
02:01 — A Brief History of Open-Source AI
04:03 — Is the Future Open Source?
05:52 — Risks of Open Source
07:41 — Linux Is a Good Example
08:54 — Soon OpenAI Won’t Matter
10:14 — Giveaway

Support me at Patreon: https://www.patreon.com/AnastasiInTech.

To participate in the GIVEAWAY:
Step 1 — Subscribe to my newsletter https://anastasiintech.substack.com.
Step 2 — leave a comment below
I will select 3 winners

Tesla Bot Update: The humanoid is getting better and is learning faster than humans

The company hasn’t set a date for the robot’s actual deployment in its own factories.

Elon Musk’s electric-car and solar company, Tesla, has one more product in the pipeline aimed at wooing customers: the Tesla Bot. Musk announced the bipedal robot in 2021, and the company recently provided an update on its progress through a 65-second video.

Regarding bipedal robots, the benchmark is relatively high, with Boston Dynamics’ Atlas capable of doing flips and somersaults. Musk, however, never said that Tesla was looking to entertain people with its robots’ antics.


(Image credit: Tesla/YouTube)

Meet Phoenix: The new humanoid robot built for general-purpose tasks

A robot is just a tool; the real star is the AI control system.

At five feet seven inches (5′7″) and 155 pounds (70 kg), Phoenix, the humanoid robot, is just about the height of an average human. What it aims to do is also something that humans can do casually: general tasks in an environment, and that is a tough ask of a robot.

While humanoid assistants are a fixture of science-fiction stories, translating them to the real world has been challenging. Companies like Tesla have been looking to make them part of households for a few years, but robots have historically been good only at specific tasks.

OpenAI to soon release a new open-source AI model

The move is a significant development in the world of artificial intelligence.

In what seems like a response to the growing competition in the open-source large language model (LLM) space, OpenAI will soon release a new open-source AI model to the public, reported The Information.

OpenAI hasn’t released an open-source model since 2019, and although the news is exciting, the new model might not be as sophisticated as, or in direct competition with, its proprietary GPT models.

How is human behaviour impacted by an unfair AI? A game of Tetris reveals all

A team of researchers puts a spin on Tetris and observes people as they play the game.

We live in a world run by machines. They make important decisions for us, such as whom to hire, who gets approved for a loan, or what content users see on social media. Machines and computer programs have an increasing influence over our lives, now more than ever, with artificial intelligence (AI) making inroads in new ways. And this influence extends far beyond the person directly interacting with the machine.


A Cornell University-led experiment in which two people play a modified version of Tetris revealed that players who get fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.

OpenAI’s Sam Altman To Congress: Regulate Us, Please!

In a wide-ranging and historic congressional hearing Tuesday, the creator of the world’s most powerful artificial intelligence called on the government to regulate his industry.

“There should be limits on what a deployed model is capable of and then what it actually does,” declared Sam Altman, CEO and cofounder of OpenAI, referring to the underlying AI which powers such products as ChatGPT.

He called on Congress to establish a new agency to license large-scale AI efforts, create safety standards, and carry out independent audits to ensure compliance with safety thresholds.


The hearing marked a significant step toward comprehensive understanding and governance of the AI landscape as AI continues to evolve and become an increasingly integral part of our lives. But it also highlighted the lack of understanding, even among AI researchers themselves, of how the most powerful generative systems do what they do.

“We need to know more about how the models work,” said AI researcher Gary Marcus, who also testified at the hearing.