
Quantum processor reveals bound states of photons hold strong even in the midst of chaos

Researchers have used a quantum processor to make microwave photons uncharacteristically sticky. They coaxed the photons to clump together into bound states, then found that these photon clusters survived in a regime where they were expected to dissolve into their usual, solitary states. The discovery was made first on a quantum processor, underscoring the growing role these platforms play in studying quantum dynamics.

Photons—quantum packets of electromagnetic radiation like light or microwaves—typically don’t interact with one another. Two crossed flashlight beams, for example, pass through one another undisturbed. But in an array of superconducting qubits, microwave photons can be made to interact.

In “Formation of robust bound states of interacting photons,” published today in Nature, researchers at Google Quantum AI describe how they engineered this unusual situation. They studied a ring of 24 superconducting qubits that could host microwave photons. By applying quantum gates to pairs of neighboring qubits, they let photons travel around the ring, hopping between neighboring sites and interacting with nearby photons.
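The underlying physics can be illustrated with a toy model. The sketch below is a simplified stand-in, not the paper's actual Floquet-circuit experiment: the ring size matches, but the hopping J and interaction U are assumed values. It diagonalizes the two-photon sector of a ring of hard-core photons with nearest-neighbor hopping and interaction; with a strong enough interaction (here U = 4J), a band of bound-pair levels splits off above the two-photon continuum, which is the kind of binding the experiment probes.

```python
import numpy as np
from itertools import combinations

N = 24   # ring sites, matching the 24-qubit ring (otherwise a toy model)
J = 1.0  # nearest-neighbor hopping amplitude (assumed units)
U = 4.0  # nearest-neighbor interaction, strong enough to bind photon pairs

# Two-photon basis: pairs of occupied sites on the ring (hard-core photons)
basis = list(combinations(range(N), 2))
index = {state: k for k, state in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for (i, j), k in index.items():
    # Interaction energy when the two photons sit on neighboring sites
    if (j - i) % N in (1, N - 1):
        H[k, k] += U
    # Hopping: either photon moves to an empty neighboring site
    for site, other in ((i, j), (j, i)):
        for step in (1, -1):
            new = (site + step) % N
            if new != other:
                H[index[tuple(sorted((new, other)))], k] -= J

energies = np.linalg.eigvalsh(H)  # ascending
# With U = 4J, the top N levels form a bound-pair band (energies above 4J),
# split off from the two-photon scattering continuum in [-4J, 4J].
print("continuum top:", energies[-N - 1])
print("bound-pair band:", energies[-N:])
```

Rerunning this sketch with smaller U shows the bound band merging back into the continuum, the usual fate the experiment found the real bound states avoiding.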

Good Morning 2033

Good Morning, 2033 — A Sci-Fi Short Film.

What will your average morning look like in 2033? And who hacked us?

This sci-fi short film explores a number of near-future predictions for the 2030s.

Sleep with a brain-sensing sleep mask that determines when to wake you. Wake up with gentle stimulation. Drink enhanced water with the nutrients, vitamins, and supplements you need. Slide on the smart glasses you wear all day. Do yoga and stretching on a smart scale that senses you, and get tips from a virtual trainer. Help yourself wake up with a 99-CRI, 500,000-lumen light. Go for a walk while your glasses scan your brain; live neurofeedback helps you meditate. Your kitchen uses biodata to figure out the ideal healthy meal, and a kitchen robot makes it for you. You work in VR, AR, MR, and XR in the metaverse. You communicate with the world through your AI assistant and AI avatar. You enter a high-tech bathroom that uses UV light and robotics to clean your body for you. Ubers come in the form of flying cars, eVTOL aircraft that move at 300 km/h. Cities become a single color as every inch of road and building is covered in photovoltaic materials.

Creator: Cayden Pierce — https://caydenpierce.com.

How did you make this sci-fi short film?

Talking to Robots in Real Time

A grand vision in robot learning, going back to the SHRDLU experiments of the late 1960s, is that of helpful robots that inhabit human spaces and follow a wide variety of natural language commands. Over the last few years, there have been significant advances in the application of machine learning (ML) to instruction following, both in simulation and in real-world systems. Recent PaLM-SayCan work has produced robots that leverage language models to plan long-horizon behaviors and reason about abstract goals. Code as Policies has shown that code-generating language models combined with pre-trained perception systems can produce language-conditioned policies for zero-shot robot manipulation. Despite this progress, an important missing property of current “language in, actions out” robot learning systems is real-time interaction with humans.

Ideally, robots of the future would react in real time to any relevant task a user could describe in natural language. Particularly in open human environments, it may be important for end users to customize robot behavior as it is happening, offering quick corrections (“stop, move your arm up a bit”) or specifying constraints (“nudge that slowly to the right”). Furthermore, real-time language could make it easier for people and robots to collaborate on complex, long-horizon tasks, with people iteratively and interactively guiding robot manipulation with occasional language feedback.
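To make the missing property concrete, here is a minimal sketch of such a real-time loop, with the caveat that `Robot` and `policy` are hypothetical stand-ins invented for illustration, not interfaces from the work cited above. The point is structural: commands are picked up asynchronously, so a correction can redirect the policy mid-task instead of waiting for the current behavior to finish.

```python
import queue
import threading
import time

# Hypothetical sketch of a real-time "language in, actions out" loop.
# Robot and policy are illustrative stand-ins, not actual research APIs.

class Robot:
    def observe(self):
        return {"image": None, "joints": [0.0] * 7}  # placeholder sensor data

    def act(self, action):
        print("executing:", action)

def policy(observation, instruction):
    # Stand-in for a language-conditioned policy network.
    return f"low-level action for '{instruction}'"

commands = queue.Queue()

def listen():
    while True:  # in practice: speech-to-text or a chat interface
        commands.put(input("command> "))

threading.Thread(target=listen, daemon=True).start()

robot, instruction = Robot(), "wait"
while True:
    try:
        # Corrections ("stop, move your arm up a bit") take effect right away,
        # without waiting for the current behavior to finish.
        instruction = commands.get_nowait()
    except queue.Empty:
        pass
    robot.act(policy(robot.observe(), instruction))
    time.sleep(0.2)  # ~5 Hz control loop
```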

Computing with Chemicals Makes Faster, Leaner AI

How far away could an artificial brain be? Perhaps a very long way off still, but a working analogue to the essential element of the brain’s networks, the synapse, appears closer at hand now.

That’s because a device that draws inspiration from batteries now appears surprisingly well suited to run artificial neural networks. Called electrochemical RAM (ECRAM), it is giving traditional transistor-based AI an unexpected run for its money—and is quickly moving toward the head of the pack in the race to develop the perfect artificial synapse. Researchers recently reported a string of advances at this week’s IEEE International Electron Device Meeting (IEDM 2022) and elsewhere, including ECRAM devices that use less energy, hold memory longer, and take up less space.

The artificial neural networks that power today’s machine-learning algorithms are software that models a large collection of electronics-based “neurons,” along with their many connections, or synapses. Instead of representing neural networks in software, researchers think that faster, more energy-efficient AI would result from representing the components, especially the synapses, with real devices. This concept, called analog AI, requires a memory cell that combines a whole slew of difficult-to-obtain properties: it needs to hold a large enough range of analog values, switch between different values reliably and quickly, hold its value for a long time, and be amenable to manufacturing at scale.
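The appeal is easiest to see in a toy crossbar model. In the sketch below (illustrative numbers, not measured ECRAM characteristics), a weight matrix is stored as cell conductances, and a matrix-vector multiply happens "for free" as currents summing on the array's output lines; the quantization and programming noise hint at why a large, reliable range of analog states matters.

```python
import numpy as np

# Toy model of an analog crossbar: a weight matrix stored as conductances G,
# input voltages V, and output currents I = G @ V via Ohm's and Kirchhoff's
# laws. Level counts and noise scales are illustrative, not ECRAM data.

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))      # target synaptic weights

levels = 64                            # analog states per cell (assumed)
g_max = np.abs(weights).max()
step = g_max / (levels // 2)
conductance = np.round(weights / step) * step   # quantized programming targets

# Programming noise: each cell lands near, not exactly on, its target state
conductance += rng.normal(scale=0.01 * g_max, size=conductance.shape)

voltages = rng.normal(size=8)          # a layer's input, encoded as voltages
currents = conductance @ voltages      # one-step analog multiply-accumulate

print("ideal :", weights @ voltages)
print("analog:", currents)
```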

Bio-circuitry mimics synapses and neurons in a step toward sensory computing

Researchers at the Department of Energy’s Oak Ridge National Laboratory, the University of Tennessee and Texas A&M University demonstrated bio-inspired devices that accelerate routes to neuromorphic, or brain-like, computing.

Results published in Nature Communications report the first example of a lipid-based “memcapacitor,” a charge storage component with memory that processes information much like synapses do in the brain. Their discovery could support the emergence of computing networks modeled on biology for a sensory approach to machine learning.
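A memcapacitor can be modeled generically as a capacitor whose capacitance is a state variable driven by the applied-voltage history. The toy model below is a generic illustration with assumed parameters, not the ORNL lipid device: plotting stored charge against voltage traces the pinched hysteresis loop that gives the element its synapse-like memory.

```python
import numpy as np

# Toy memcapacitor: capacitance is a state variable that drifts with the
# applied-voltage history, so stored charge q = C(x) * v depends on past
# inputs. Generic illustration with assumed parameters, not the lipid device.

dt = 1e-4
t = np.arange(0.0, 0.2, dt)
v = np.sin(2 * np.pi * 10 * t)       # 10 Hz sinusoidal drive

c_min, c_max, tau = 1.0, 3.0, 0.02   # capacitance range, state time constant
x = 0.0                              # internal memory state in [0, 1]
q = np.empty_like(t)
for k, vk in enumerate(v):
    target = 1.0 if vk > 0 else 0.0  # state relaxes up under positive bias
    x += (target - x) * dt / tau
    q[k] = (c_min + (c_max - c_min) * x) * vk

# Plotting q against v traces a pinched hysteresis loop: the same voltage
# yields different charge depending on history, i.e. activity-dependent memory.
```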

“Our goal is to develop materials and computing elements that work like biological synapses and neurons—with vast interconnectivity and flexibility—to enable computing networks that operate differently than current computing devices and offer new functionality and learning capabilities,” said Joseph Najem, a recent postdoctoral researcher at ORNL’s Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, and current assistant professor of mechanical engineering at Penn State.

Daily Crunch: Lensa AI can transform Photoshopped fakes into nonconsensual pornography

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

Why, hello there, and welcome to your Tuesday Daily Crunch. I’ll be your host this week while Haje works from an undisclosed location where day is night and night is day. If you aren’t enjoying today’s Found podcast about tampons, we hope you at least saw stars at the TC Sessions: Space event. Let’s dig into some news! — Christine.

Amplifying human creativity: Adobe Stock defines new guidelines for content made with generative AI

The new guidelines provide restrictions and regulations for creators submitting art.

Adobe has now started accepting AI-generated stock images on its platform, subject to rules laid out in its updated guidelines.


Image credit: Left: Adobe Stock / Art Master; Middle: Adobe Stock / Robert Kneschke; Right: Adobe Stock / Forest Spirit.

Adobe Stock, a global marketplace with over 320 million creative assets, has defined new guidelines for submissions of illustrations developed with generative AI — expanding how customers enhance their creative projects. Early generative AI technologies have raised questions about how they should be used properly. Adobe has considered these questions deeply and implemented a new submission policy that we believe will ensure creators and customers alike use AI technology responsibly.

Generative AI is a major leap forward for creators, leveraging machine learning’s incredible power to ideate faster by developing imagery using words, sketches, and gestures. Adobe Stock contributors are using AI tools and technologies to diversify their portfolios, expand their creativity, and increase their earning potential. Going forward, these submissions must meet our guidelines for AI-generated content, notably our requirement that contributors label generative AI submissions.

An innovative method allows researchers to move objects using ultrasound waves

It could be especially useful in the robotics and manufacturing industries.

Researchers from the University of Minnesota, Twin Cities, have used ultrasound waves to move objects hands-free, according to an institutional press release.

Previous studies have shown that objects can be manipulated with light and sound waves as well. But the objects in question were always far smaller than the wavelengths of light or sound, on the order of millimeters to nanometers.
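A quick back-of-the-envelope calculation shows the scale constraint (the frequencies below are typical illustrative values, not from the study):

```python
# Wavelength of ultrasound in air at a few illustrative frequencies.
c_air = 343.0  # speed of sound in air, m/s

for f_khz in (40, 100, 1000):
    wavelength_mm = c_air / (f_khz * 1e3) * 1e3
    print(f"{f_khz:>5} kHz -> wavelength ~ {wavelength_mm:.2f} mm")
```

At 40 kHz, a frequency common in acoustic levitation work, the wavelength is about 8.6 mm, so conventional acoustic trapping is limited to particles at the millimeter scale and below, as the text notes.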