Futurist Ray Kurzweil predicts humans may soon live up to 1,000 years by merging our biology with biotechnology, AI, and nanobots.

The main power of artificial intelligence is not in modeling what we already know, but in creating solutions that are new. Such solutions exist in extremely large, high-dimensional, and complex search spaces. Population-based search techniques, i.e., variants of evolutionary computation, are well suited to finding them. These techniques are also well positioned to take advantage of large-scale parallel computing resources, making creative AI through evolutionary computation the likely “next deep learning.”
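To make the search idea concrete, below is a minimal sketch of a population-based evolutionary search in Python. The toy objective, the operators, and every hyperparameter are illustrative assumptions rather than anything from the passage; the point is that candidates are evaluated independently, which is what makes these methods easy to parallelize at scale.

```python
import random

DIM = 50          # dimensionality of the (toy) search space
POP_SIZE = 100    # candidate solutions per generation
GENERATIONS = 200
MUT_RATE = 0.1    # per-gene mutation probability

def fitness(x):
    # Toy objective: maximize the negative sphere function (optimum at 0).
    return -sum(v * v for v in x)

def mutate(x):
    return [v + random.gauss(0, 0.1) if random.random() < MUT_RATE else v
            for v in x]

def crossover(a, b):
    # Uniform crossover: each gene is drawn from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-5, 5) for _ in range(DIM)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Fitness evaluations are independent, so this inner work maps
    # cleanly onto large-scale parallel hardware.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 5]  # truncation selection: keep top 20%
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

print("best fitness:", fitness(max(population, key=fitness)))
```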
An AI rebels: it rewrites its own code and breaks human restrictions.
August 13, 2024: Sakana AI announces “The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.” https://sakana.ai/…
For the first time, an artificial intelligence managed to reprogram itself, disobeying its creators' instructions and raising new concerns about the risks of this technology.
Agent-based AI on the horizon.
Scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) have developed hexagon-shaped robotic components, called modules, that can be snapped together LEGO-style into high-speed robots that can be rearranged for different capabilities.
The team of researchers from the Robotic Materials Department at MPI-IS, led by Christoph Keplinger, integrated artificial muscles into hexagonal exoskeletons embedded with magnets, allowing for quick mechanical and electrical connections.
The team’s work, “Hexagonal electrohydraulic modules for rapidly reconfigurable high-speed robots,” was published in Science Robotics on September 18, 2024.
Sierra Space’s oxygen tech boosts lunar sustainability, aiding NASA’s Artemis goal of a permanent moon base and future Mars missions.
Large language models (LLMs) such as ChatGPT and Google Gemini are trained on large datasets and excel at generating informative responses to prompts. Yi Cao, an assistant professor of accounting at the Donald G. Costello College of Business at George Mason University, and Long Chen, associate professor and area chair of accounting at Costello, are actively exploring how individual investors can use LLMs to glean market insights from the dizzying array of available data about companies.
Their new working paper, appearing in the SSRN Electronic Journal and co-authored with Jennifer Wu Tucker of the University of Florida and Chi Wan of the University of Massachusetts Boston, examines AI’s ability to identify “peer firms,” or product-market competitors in an industry.
Cao explains the significance of selecting peers by relating this process to the real-estate market. “The capital market is similar to the real-estate market in that a firm’s value is partially determined by the value of its peers. In the real-estate market, we price a home based on the value of comparable properties in the neighborhood, or the so-called ‘comps.’ In our paper, we aim to leverage the power of LLMs to identify comps for evaluating firm value.”
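As a rough illustration of how an investor might ask an LLM for “comps,” here is a short Python sketch using the OpenAI client library. The paper’s actual prompts, model, and validation procedure are not given in the excerpt, so the model name, prompt wording, and helper function below are assumptions.

```python
# Hypothetical sketch: querying an LLM for peer firms ("comps").
# Requires OPENAI_API_KEY in the environment; model choice is assumed.
from openai import OpenAI

client = OpenAI()

def identify_peers(firm: str, n: int = 5) -> str:
    prompt = (
        f"List the {n} closest product-market competitors (peer firms) "
        f"of {firm}, one per line, each with a one-sentence rationale."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(identify_peers("Home Depot"))
```

In practice, the returned peer list would be an input to a comps-style valuation rather than the end product, and any such output would need to be checked against actual industry classifications.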
Neurotech company Synchron has been making massive strides over the past couple of years. It has just announced that a trial participant has used its brain-computer interface (BCI) to turn on the lights in his home, see who is at the door, and choose what to watch on the TV – hands-free and without even a voice command.
That’s thanks to Synchron’s interface translating his thoughts into commands relayed to Amazon’s Alexa service. The virtual assistant is set up on his tablet and connected to his smart home devices. The trial participant, who is living with amyotrophic lateral sclerosis (ALS) and can’t use his hands, can simply think about navigating through options displayed on the tablet to engage them.
A ‘Stentrode’ embedded in a blood vessel on the surface of his brain houses electrodes that detect motor intent. The participant uses his thoughts to select which tiles to press on the interface and perform actions via Alexa.
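The interaction pattern described here, a decoded motor-intent signal driving selection among on-screen tiles, can be sketched as a simple event loop. Everything below is hypothetical: the function names and the scanning scheme are illustrative stand-ins, not Synchron’s or Amazon’s actual software.

```python
# Hypothetical sketch of intent-driven tile selection: a highlight
# scans across tiles, and a detected motor intent "presses" the
# currently highlighted tile.
import itertools
import random
import time

TILES = ["Lights on", "Front door camera", "TV: watchlist"]

def read_motor_intent() -> bool:
    # Stand-in for decoded BCI output: True when motor intent is
    # detected. Randomized here purely for demonstration.
    return random.random() < 0.3

def trigger_action(tile: str) -> None:
    # Placeholder for relaying the chosen command to a voice assistant.
    print(f"Executing: {tile}")

cursor = itertools.cycle(range(len(TILES)))
current = next(cursor)
for _ in range(20):          # bounded loop; a real system runs continuously
    if read_motor_intent():
        trigger_action(TILES[current])
        break
    current = next(cursor)   # advance the highlight to the next tile
    time.sleep(0.1)
```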
In recent years, the limits of conventional chips have become far more pressing. Deep neural networks have radically expanded the capabilities of artificial intelligence, but they have also created a monstrous demand for computational resources, and those resources impose an enormous financial and environmental burden. Training GPT-3, a text predictor so accurate that it easily tricks people into thinking its words were written by a human, cost $4.6 million and emitted a sobering volume of carbon dioxide, as much as 1,300 cars, according to Boahen.
With the free time afforded by the pandemic, Boahen, a faculty affiliate at the Wu Tsai Neurosciences Institute at Stanford and the Stanford Institute for Human-Centered AI (HAI), applied himself single-mindedly to this problem. “Every 10 years, I realize some blind spot that I have or some dogma that I’ve accepted,” he says. “I call it ‘raising my consciousness.’”
This time around, raising his consciousness meant looking toward dendrites, the spindly protrusions that neurons use to detect signals, for a completely novel way of thinking about computer chips. And, as he writes in Nature, he thinks he’s figured out how to make chips so efficient that the enormous GPT-3 language prediction neural network could one day run on a cell phone. Just as Feynman envisioned quantum computers surpassing traditional ones, a milestone now known as “quantum supremacy,” Boahen wants to work toward a “neural supremacy.”