Improvements in the performance of large language models such as ChatGPT are more predictable than they appear.

Previously, researchers have used implants surgically placed in the brain or bulky, expensive machines to translate brain activity into text. The new approach, presented at this week’s NeurIPS conference by researchers from the University of Technology Sydney, is impressive for its use of a non-invasive EEG cap and the potential to generalize beyond one or two people.
The team built an AI model called DeWave that’s trained on brain activity and language and linked it up to a large language model—the technology behind ChatGPT—to help convert brain activity into words. In a preprint posted on arXiv, the model beat previous top marks for EEG thought-to-text translation with an accuracy of roughly 40 percent. Chin-Teng Lin, corresponding author on the paper, told MSN they’ve more recently upped the accuracy to 60 percent. The results are still being peer-reviewed.
Though there’s a long way to go in terms of reliability, the result shows progress in non-invasive methods of reading thoughts and translating them into language. The team believes their work could give voice to those who can no longer communicate due to injury or disease, or be used to direct machines, like walking robots or robotic arms, with thoughts alone.
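To make the idea concrete, here is a rough, generic sketch of such a pipeline: raw EEG windows are encoded into discrete tokens that a language model can then decode into text. Everything in it (the class name, layer sizes, and codebook quantization) is a placeholder chosen for illustration, not DeWave’s published design.

```python
import torch
import torch.nn as nn

# Generic sketch of the kind of pipeline described above: an EEG encoder
# that produces discrete "brain tokens" which a pretrained language model
# can then decode into text. Illustration only; not DeWave's architecture.

class EEGEncoder(nn.Module):
    def __init__(self, n_channels=64, d_model=256, codebook_size=512):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # learnable codebook used to discretize the EEG representation
        self.codebook = nn.Embedding(codebook_size, d_model)

    def forward(self, eeg):                       # eeg: (batch, time, channels)
        z = self.encoder(self.proj(eeg))          # (batch, time, d_model)
        # nearest-codebook-entry lookup -> discrete token ids
        dists = ((z.unsqueeze(-2) - self.codebook.weight) ** 2).sum(-1)
        return dists.argmin(dim=-1)               # (batch, time) token ids

encoder = EEGEncoder()
fake_eeg = torch.randn(1, 128, 64)                # one 128-step, 64-channel window
brain_tokens = encoder(fake_eeg)
print(brain_tokens.shape)                         # torch.Size([1, 128])
# In a full system, these ids would condition a language model (trained
# jointly with the encoder) that generates the output sentence.
```

The point of the discrete tokens is that they give the language model something it already knows how to handle, letting its learned knowledge of grammar and word statistics fill in what the noisy EEG signal cannot.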
Skyline Robotics is disrupting the century-old practice of window washing with new technology that the startup hopes will redefine a risky industry.
Its window-washing robot, Ozmo, is now operational in Tel Aviv and New York, and has worked on major Manhattan buildings such as 10 Hudson Yards, 383 Madison, 825 3rd Avenue and 7 World Trade Center in partnership with Platinum, the city’s largest commercial window cleaner, and real estate giant The Durst Organization.
The machine is suspended from the side of a high-rise. A robotic arm with a brush attached to the end cleans the window following instructions from a LiDAR camera, which uses laser technology to map 3D environments. The camera maps the building’s exterior and identifies the parameters of the windows.
When the theoretical physicist Leonard Susskind encountered a head-scratching paradox about black holes, he turned to an unexpected place: computer science. In nature, most self-contained systems eventually reach thermodynamic equilibrium… but not black holes. The interior volume of a black hole appears to forever expand without limit. But why? Susskind had a suspicion that a concept called computational complexity, which underpins everything from cryptography to quantum computing to the blockchain and AI, might provide an explanation.
He and his colleagues believe that the complexity of quantum entanglement continues to evolve inside a black hole long past the point of what’s called “heat death.” Now Susskind and his collaborator, Adam Brown, have used this insight to propose a new law of physics: the second law of quantum complexity, a quantum analogue of the second law of thermodynamics.
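For a rough quantitative picture (our summary of the standard Brown-Susskind framing, not formulas from the video): for a chaotic system of K qubits, entropy saturates quickly, but circuit complexity is conjectured to keep growing linearly for an exponentially long time before it tops out.

```latex
% Schematic statement (requires amsmath); all relations are conjectural.
\begin{align*}
  S(t) &\;\to\; S_{\max} \sim K   && \text{after a short (polynomial) time,}\\
  \mathcal{C}(t) &\;\sim\; K\,t   && \text{for } t \lesssim e^{K},\\
  \mathcal{C}_{\max} &\;\sim\; e^{K}.
\end{align*}
```

Under the holographic “complexity equals volume” proposal, the complexity of the boundary state is identified with the interior volume of the black hole, which is why the interior can keep growing long after the hole has reached thermal equilibrium.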
Also appearing in the video: Xie Chen of CalTech, Adam Bouland of Stanford and Umesh Vazirani of UC Berkeley.
00:00 Intro to a second law of quantum complexity.
01:16 Entropy drives most closed systems to thermal equilibrium. Why are black holes different?
03:34 History of the concept of “entropy” and “heat death.”
05:01 Quantum complexity and entanglement might explain black holes.
07:32 A turn to computational circuit complexity to describe black holes.
08:47 Using a block cipher and cryptography to test the theory.
10:16 A new law of physics is proposed.
11:23 Embracing a quantum universe leads to new insights.
12:20 When quantum complexity reaches an end…the universe begins again.
Thumbnail / title card image designed by Olena Shmahalo.
Artificial Intelligence is our best bet to understand the nature of our mind, and how it can exist in this universe.
Joscha Bach, Ph.D., is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany. He is especially interested in the philosophy of AI, and in using computational models and conceptual tools to understand our minds and what makes us human.
Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin, the Institute for Cognitive Science at Osnabrück, and the MIT Media Lab, and authored the book “Principles of Synthetic Intelligence” (Oxford University Press).
This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
An interview with J. Storrs Hall, author of the epic book “Where is My Flying Car — A Memoir of Future Past”: “The book starts as an examination of the technical limitations of building flying cars and evolves into an investigation of the scientific, technological, and social roots of the economic…
J. Storrs Hall or Josh is an independent researcher and author.
He was the founding Chief Scientist of Nanorex, which is developing a CAD system for nanomechanical engineering.
His research interests include molecular nanotechnology and the design of useful macroscopic machines using the capabilities of molecular manufacturing. His background is in computer science, particularly parallel processor architectures, and in artificial intelligence, particularly agoric and genetic algorithms.
Advancements in deep learning have influenced a wide variety of scientific and industrial applications of artificial intelligence. Common examples of the complex sequential data processing tasks involved include natural language processing, conversational AI, time-series analysis, and even indirect sequential formats such as images and graphs. Recurrent Neural Networks (RNNs) and Transformers are the most common approaches, and each has advantages and disadvantages. RNNs have a lower memory requirement, especially when dealing with lengthy sequences, but they are hard to scale because of issues like the vanishing gradient problem and the fact that training cannot be parallelized across the time dimension.
As an effective alternative, Transformers can handle both short- and long-term dependencies and enable parallelized training. In natural language processing, models like GPT-3, ChatGPT, LLaMA, and Chinchilla demonstrate the power of Transformers. However, the quadratic complexity of the self-attention mechanism makes it computationally and memory intensive, and thus poorly suited to tasks with limited resources or very long sequences.
A group of researchers addressed these issues by introducing the Receptance Weighted Key Value (RWKV) model, which combines the best features of RNNs and Transformers while avoiding their major shortcomings. While preserving the expressive qualities of the Transformer, such as parallelized training and robust scalability, RWKV eliminates the memory bottleneck and quadratic scaling common to Transformers, replacing them with efficient linear scaling.
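The core of that linear scaling is the WKV operator, which replaces attention’s pairwise token comparisons with an exponentially decaying weighted average that can be computed as a recurrence. Below is a minimal single-channel sketch in NumPy; the function name and the simplification to one channel, without the numerical-stability tricks of the real implementation, are ours, and the paper’s actual version is vectorized over channels and fused on the GPU.

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Single-channel sketch of an RWKV-style WKV recurrence.

    k, v : length-T arrays of per-step "key" and "value" activations
    w    : positive decay rate applied to the past (learned per channel
           in the real model)
    u    : bonus weight applied to the current step
    Runs in O(T) time with O(1) recurrent state, in contrast to the
    O(T^2) cost of full self-attention.
    """
    T = len(k)
    out = np.empty(T)
    num, den = 0.0, 0.0              # running weighted sums (the recurrent state)
    for t in range(T):
        cur = np.exp(u + k[t])       # extra weight for the current token
        out[t] = (num + cur * v[t]) / (den + cur)
        # decay the accumulated past and fold in the current token
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out

# toy usage on random activations
rng = np.random.default_rng(0)
print(wkv_recurrence(rng.standard_normal(8), rng.standard_normal(8), w=0.5, u=0.1))
```

Because each step only updates two running sums, memory stays constant in sequence length, which is where the linear scaling comes from; during training the same quantity can also be computed in a parallel, Transformer-like form.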
A central challenge for systems neuroscience and artificial intelligence is to understand how cognitive behaviors arise from large, highly interconnected networks of neurons. Digital simulation bridges this gap in our understanding by linking cognitive behavior to neural activity, but at great expense in time and electricity. A hybrid analog-digital approach, whereby slow analog circuits, operating in parallel, emulate graded integration of synaptic currents by dendrites while a fast digital bus, operating serially, emulates all-or-none transmission of action potentials by axons, may improve simulation efficiency. Due to the latter’s serial operation, this approach has not scaled beyond millions of synaptic connections (per bus). This limit was broken by following design principles the neocortex uses to minimize its wiring. The resulting hybrid analog-digital platform, Neurogrid, scales to billions of synaptic connections among up to a million neurons, and simulates cortical models in real time using a few watts of electricity. Here, we demonstrate that Neurogrid simulates cortical models spanning five levels of experimental investigation: biophysical, dendritic, neuronal, columnar, and area. Bridging these five levels with Neurogrid revealed a novel way active dendrites could mediate top-down attention.
K.B. and N.N.O. are co-founders and equity owners of Femtosense Inc.
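As a loose illustration of the analog/digital split the abstract describes (a toy model written for exposition, not Neurogrid’s hardware or code), here is a tiny network in which graded integration is updated for all neurons at once, while spikes are delivered as discrete events over a serial loop:

```python
import numpy as np

# Toy hybrid-style simulation: graded synaptic/dendritic integration happens
# for every neuron "in parallel" each step (the analog part), while spikes
# are all-or-none events gathered into a list and delivered one by one
# (the serial digital bus). Illustration only.

def simulate(weights, i_ext, steps=200, dt=1.0, tau=20.0, v_th=1.0):
    """Leaky integrate-and-fire network with event-driven spike delivery.

    weights : (N, N) synaptic weights, weights[j, i] = strength of i -> j
    i_ext   : (N,) constant external drive
    """
    n = len(i_ext)
    v = np.zeros(n)                  # membrane potentials (graded state)
    syn = np.zeros(n)                # synaptic current from delivered spikes
    events = []
    for t in range(steps):
        # "analog" phase: all neurons integrate their inputs simultaneously
        v += dt / tau * (-v + syn + i_ext)
        syn *= np.exp(-dt / tau)     # synaptic current decays between events
        # "digital" phase: threshold crossings become all-or-none spikes
        spiking = np.where(v >= v_th)[0]
        v[spiking] = 0.0             # reset after a spike
        for s in spiking:            # events routed serially, like a shared bus
            syn += weights[:, s]
            events.append((t, int(s)))
    return events

rng = np.random.default_rng(0)
events = simulate(weights=0.05 * rng.random((50, 50)),
                  i_ext=2.0 * rng.random(50))
print(len(events), "spike events")
```

The reason the hybrid split pays off is visible even in this toy: the graded, continuously varying part maps naturally onto parallel analog circuits, while only the comparatively rare, all-or-none spike events need to travel over the shared digital bus.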
As much as I’d love to see it, not until someone solves human-level hands (which I believe will cost about 10+ billion USD), and a battery that can run 8 to 12 hours and be changed or recharged in under 15 minutes.
HOUSTON/AUSTIN, Texas, Dec 27 (Reuters) — Standing at 6 feet 2 inches (188 centimeters) tall and weighing 300 pounds (136 kilograms), NASA’s humanoid robot Valkyrie is an imposing figure.
Valkyrie, named after a female figure in Norse mythology and being tested at the Johnson Space Center in Houston, Texas, is designed to operate in “degraded or damaged human-engineered environments,” like areas hit by natural disasters, according to NASA.
But robots like her could also one day operate in space.