
My AI Girlfriend won’t talk to me unless I renew my annual Netflix subscription.

— You in five years

Everyone has written about the dangers of AI and the uncertain future of humanity, and many of these worries focus on large-scale issues like disinformation, democracy, and wartime decision-making by computers. However, it is the small, personal changes to human life that tend to create the biggest effects down the line. If we assume that a sizeable portion of the population will, at some point, have some form of AI assistant, friend, or companion, and that these assistants are designed by for-profit companies to press our psychological buttons perfectly, then we are in serious danger of handing ourselves over to the whims of those companies, or of governments.

Vibrating tiny robots could revolutionize research.

Individual robots can work collectively to create major advances in everything from construction to surveillance, and microrobots' small scale makes them ideal for drug delivery, disease diagnosis, and even surgery.

Despite their potential, microrobots' small size often limits their sensing, communication, motility, and computation abilities, but new research from the Georgia Institute of Technology enhances their ability to collaborate efficiently. The work offers a new system that lets a swarm of 300 3-millimeter microbristle robots (microbots) aggregate and disperse controllably, without onboard sensing.

A new and more efficient way of modeling and designing power electronic converters using artificial intelligence (AI) has been created by a team of experts from Cardiff University and the Compound Semiconductor Applications (CSA) Catapult.

The method has reduced design times for the technology by up to 78% compared with traditional approaches and was used to create a device with an efficiency of over 98%.

The team’s findings have been published in the IEEE Open Journal of Power Electronics and IEEE Transactions on Power Electronics.


Elon Musk doesn't follow the same standards that most entrepreneurs do. He's different, and he likes to be different!

And when you're different, and you're not afraid to be, it's okay to try a cigar (or should I say "joint"?) of tobacco mixed with marijuana on Joe Rogan's famous podcast. But if you look closely, Elon was just being polite and followed Rogan's elaborate script. Before trying it, Musk even asked whether it was legal.

Then all those facial expressions of Musk's, which photojournalists love to capture, went viral, as if he were promoting some soft drug, or spreading the idea that his office at Tesla (or SpaceX) sits enveloped in a large cloud of smoke.

Quite the opposite. His expressions spoke for themselves, as if to say, "This is nothing special, Joe. Why do you waste my time with these scenes?" Musk even claimed that weed is not good for productivity at all, though he has nothing against it (nor do I, by the way).

At 5:20 a.m. EST, NASA astronaut Nicole Mann, with NASA astronaut Josh Cassada acting as backup, captured Northrop Grumman’s Cygnus spacecraft using the International Space Station’s Canadarm2 robotic arm. Mission control in Houston will actively command the arm to rotate Cygnus to its installation orientation and then to guide it in for installation on the station’s Unity module Earth-facing port.

NASA Television, the NASA app, and the agency's website will provide live coverage of the spacecraft's installation beginning at 7:15 a.m.

The Cygnus spacecraft launched Monday at 5:32 a.m. on an Antares rocket from NASA's Wallops Flight Facility in Virginia. This is Northrop Grumman's 18th commercial resupply mission to the International Space Station for NASA. Cygnus is carrying 8,200 pounds of scientific investigations and cargo to the orbiting laboratory.

Human Brain Project researchers have trained a large-scale model of the primary visual cortex of the mouse to solve visual tasks in a highly robust way. The model provides the basis for a new generation of neural network models. Due to their versatility and energy-efficient processing, these models can contribute to advances in neuromorphic computing.

Modeling the brain can have a massive impact on artificial intelligence (AI): since the brain processes images far more energy-efficiently than artificial networks do, scientists take inspiration from neuroscience to create neural networks that function similarly to biological ones and thereby significantly save energy.

In that sense, brain-inspired neural networks are likely to shape future technology by serving as blueprints for more energy-efficient neuromorphic hardware. Now, a study by Human Brain Project (HBP) researchers from the Graz University of Technology (Austria) has shown how a large data-based model can reproduce a number of the brain's visual processing capabilities in a versatile and accurate way. The results were published in the journal Science Advances.

With mathematical modeling, a research team has now succeeded in better understanding how the optimal working state of the human brain, called criticality, is achieved. Their results mark an important step toward biologically inspired information processing and new, highly efficient computer technologies, and have been published in Scientific Reports.
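Criticality is often pictured with a toy branching process, a deliberately simplified model (not the team's actual one, and all parameter values here are illustrative): each active neuron excites two downstream neurons, each with probability p, so the branching ratio is m = 2p. At m = 1 the network is critical; below it, cascades of activity ("avalanches") die out quickly, with mean total size 1/(1 − m).

```python
import random

def avalanche_size(p, max_generations=10_000):
    """Run one avalanche: each active unit excites 2 others with prob. p."""
    active, size = 1, 1
    for _ in range(max_generations):
        if active == 0:
            break
        # Each of the 2*active potential downstream units fires with prob. p
        nxt = sum(1 for _ in range(2 * active) if random.random() < p)
        size += nxt
        active = nxt
    return size

def mean_size(p, trials=3000):
    return sum(avalanche_size(p) for _ in range(trials)) / trials

random.seed(1)
sub = mean_size(0.25)            # branching ratio 0.5: small avalanches
near_critical = mean_size(0.45)  # branching ratio 0.9: much larger ones
```

As p approaches 0.5, avalanche sizes blow up toward a power-law distribution, which is one signature of a system poised at criticality.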

"In particular tasks, supercomputers are better than humans, for example in the field of artificial intelligence. But they can't manage the variety of tasks in everyday life: driving a car first, then making music and telling a story at a get-together in the evening," explains Hermann Kohlstedt, professor of nanoelectronics. Moreover, today's computers and smartphones still consume an enormous amount of energy.

"These are not sustainable technologies, while our brain consumes just 25 watts in everyday life," Kohlstedt continues. The aim of their interdisciplinary research network, "Neurotronics: Bio-inspired Information Pathways," is therefore to develop new electronic components for more energy-efficient computer architectures. For this purpose, the alliance of engineering, life and natural sciences investigates how the brain works and how it has developed.

Artificial intelligence has long been a hot topic: a computer algorithm "learns" by being taught with examples of what is "right" and what is "wrong." Unlike a computer algorithm, the human brain works with neurons, the cells of the brain. These are trained and pass signals on to other neurons. This complex network of neurons and its connecting pathways, the synapses, controls our thoughts and actions.
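That "taught by examples of right and wrong" idea can be sketched with a toy perceptron, the simplest learning algorithm of this kind (illustrative code, not from any of the studies mentioned here): the algorithm is shown labeled points, and every wrong answer nudges its weights toward the correct answer.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a linear decision rule from labeled examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred           # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1    # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Made-up, linearly separable data: label 1 whenever x1 + x2 > 1.5
samples = [(0, 0), (0, 1), (1, 0), (1, 1), (0.2, 0.3), (0.9, 0.8)]
labels = [0, 0, 0, 1, 0, 1]
w, b = train_perceptron(samples, labels)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the learned rule classifies the examples it was "taught" correctly; the brain, of course, does something far richer than this single linear unit.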

Biological signals are much more diverse than those in conventional computers. For instance, neurons in a biological neural network communicate with ions, biomolecules and neurotransmitters. More specifically, neurons communicate either chemically, by emitting messenger substances such as neurotransmitters, or electrically, via so-called "action potentials" or "spikes."
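The electrical "spikes" can be illustrated with a minimal leaky integrate-and-fire neuron, a standard textbook simplification (the parameter values below are arbitrary, not fitted to biology): the membrane potential integrates incoming current, leaks back toward rest, and emits an all-or-nothing spike, then resets, whenever it crosses a threshold.

```python
def simulate_lif(current, threshold=1.0, leak=0.1, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0                          # membrane potential, starts at rest
    spikes = []
    for t, i_in in enumerate(current):
        v += dt * (i_in - leak * v)  # integrate input, leak toward rest
        if v >= threshold:           # action potential: all-or-nothing
            spikes.append(t)
            v = 0.0                  # reset after the spike
    return spikes

# A constant drive produces a regular spike train
spike_times = simulate_lif([0.3] * 20)
```

Information is then carried not by the potential itself but by the timing and rate of these spikes, which is exactly the coding scheme neuromorphic hardware tries to exploit for energy efficiency.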

Artificial neurons are a current area of research. Here, efficient communication between biology and electronics requires the realization of artificial neurons that realistically emulate the function of their biological counterparts. This means artificial neurons capable of processing the diversity of signals that exists in biology. Until now, most artificial neurons have emulated their biological counterparts only electrically, without taking into account the wet biological environment that consists of ions, biomolecules and neurotransmitters.

A new type of material can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper by my colleagues and me.

The new material is a type of architected material, which gets its properties mainly from the geometry and specific traits of its design rather than what it is made out of. Take hook-and-loop fabric closures like Velcro, for example. It doesn’t matter whether it is made from cotton, plastic or any other substance. As long as one side is a fabric with stiff hooks and the other side has fluffy loops, the material will have the sticky properties of Velcro.

My colleagues and I based our new material’s architecture on that of an artificial neural network—layers of interconnected nodes that can learn to do tasks by changing how much importance, or weight, they place on each connection. We hypothesized that a mechanical lattice with physical nodes could be trained to take on certain mechanical properties by adjusting each connection’s rigidity.
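The idea of treating connection stiffness as a trainable weight can be sketched with a deliberately oversimplified stand-in (a one-dimensional chain of springs in series, not our actual lattice): the chain's end displacement under a unit force is the "output," and gradient descent on each spring's stiffness drives that output toward a target, just as a neural network adjusts its weights.

```python
def displacement(stiffnesses, force=1.0):
    """Springs in series: compliances (1/k) add up."""
    return force * sum(1.0 / k for k in stiffnesses)

def train(stiffnesses, target, steps=2000, lr=0.5):
    """Tune each stiffness so the end displacement matches the target."""
    k = list(stiffnesses)
    for _ in range(steps):
        err = displacement(k) - target   # too compliant if err > 0
        for i in range(len(k)):
            dx_dk = -1.0 / k[i] ** 2     # d(displacement)/dk_i at unit force
            k[i] -= lr * err * dx_dk     # gradient step on the squared error
            k[i] = max(k[i], 0.1)        # stiffness must stay positive
    return k

# Train the chain to deflect by exactly 1.0 under a unit force
k = train([1.0, 2.0, 4.0], target=1.0)
```

A real architected lattice has many interconnected nodes rather than a single chain, but the principle is the same: the material's mechanical response is programmed by where, and how much, each connection stiffens or softens.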