A research team at Stanford’s Wu Tsai Neurosciences Institute has made a major stride in using AI to replicate how the brain organizes sensory information to make sense of the world, opening up new frontiers for virtual neuroscience.

The Kia EV3 — the new all-electric compact SUV revealed Thursday — illustrates a growing appetite among global automakers to bring generative AI into their vehicles.
The automaker said the Kia EV3 will feature a new voice assistant built on ChatGPT, the text-generating AI chatbot developed by OpenAI. The Kia EV3 and its AI assistant will first come to market in Korea in July 2024, followed by Europe in the second half of the year. Kia expects to expand sales of the vehicle into other regions after the European launch. It will eventually come to the United States, although the automaker did not provide a date.
This isn’t, however, a pure OpenAI affair: Kia had a hand in developing the voice assistant as well.
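Kia has not published implementation details, so the sketch below only illustrates what "built on ChatGPT" typically means in practice: transcribed speech is sent to OpenAI's chat API and the reply is spoken back to the driver. The model choice, system prompt, and helper names here are illustrative assumptions, not Kia's actual integration.

```python
# Minimal sketch of one turn of a ChatGPT-backed voice assistant.
# Assumptions: model name, prompt wording, and function names are
# illustrative only; Kia's real pipeline is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_driver(transcribed_utterance: str) -> str:
    """Send the transcribed voice command to the chat model and return a short reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": "You are an in-car assistant. Keep answers brief and easy to read aloud.",
            },
            {"role": "user", "content": transcribed_utterance},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_driver("Find a charging station near my route."))
```

In a real vehicle this text turn would sit between a speech-to-text front end and a text-to-speech output stage, plus whatever in-house logic Kia layered on top.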
New technology is shaping the toy industry by making manufacturing more efficient and the play experience more immersive.
Modern smart toys, designed to provide a more immersive experience, often feature artificial intelligence (AI), Bluetooth connectivity, and sensors. Examples include educational tablets that adapt to a child’s learning pace and robotic animals that respond to voice commands.
Current AI training methods burn colossal amounts of energy to learn, but the human brain sips just 20 W. Swiss startup FinalSpark is now selling access to cyborg biocomputers, running up to four living human brain organoids wired into silicon chips.
The human brain communicates within itself and with the rest of the body mainly through electrical signals; sights, sounds and sensations are all converted into electrical pulses before our brains can perceive them. This makes brain tissue highly compatible with silicon chips, at least for as long as you can keep it alive.
For FinalSpark’s Neuroplatform, brain organoids comprising about 10,000 living neurons are grown from stem cells. These little balls, about 0.5 mm (0.02 in) in diameter, are kept in incubators at around body temperature, supplied with water and nutrients and protected from bacterial or viral contamination, and they’re wired into an electrical circuit with a series of tiny electrodes.
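FinalSpark has not been described here in terms of code, but the basic loop the description implies, stimulate an electrode and record the organoid's electrical response, can be sketched with a toy simulation. Everything below is hypothetical and illustrative; it is not FinalSpark's Neuroplatform API.

```python
# Hypothetical stimulate-and-record loop, using a toy simulated organoid.
# Class and method names are illustrative only; NOT FinalSpark's API.
import random


class SimulatedOrganoid:
    """Toy stand-in for an organoid wired to a multi-electrode array."""

    def __init__(self, n_electrodes: int = 8) -> None:
        self.n_electrodes = n_electrodes
        # Each electrode site gets a random baseline excitability.
        self._excitability = [random.uniform(0.1, 1.0) for _ in range(n_electrodes)]

    def stimulate(self, electrode: int, amplitude_ua: float) -> None:
        """Deliver a current pulse on one electrode (here it just raises excitability)."""
        self._excitability[electrode] = min(
            1.0, self._excitability[electrode] + 0.05 * amplitude_ua
        )

    def record(self, window_ms: float) -> list[float]:
        """Return simulated spike times (ms) observed across all electrodes."""
        spikes: list[float] = []
        for excitability in self._excitability:
            n_spikes = int(excitability * window_ms / 20)
            spikes.extend(random.uniform(0, window_ms) for _ in range(n_spikes))
        return sorted(spikes)


organoid = SimulatedOrganoid()
organoid.stimulate(electrode=3, amplitude_ua=2.0)
print(f"{len(organoid.record(window_ms=200.0))} spikes recorded in 200 ms")
```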
Despite their performance, current AI models have major weaknesses: they consume enormous resources, and their inner workings are difficult to interpret. Help may be on the way.
ChatGPT has triggered an onslaught of artificial intelligence hype. The arrival of OpenAI’s chatbot, powered by a large language model (LLM), forced leading tech companies to follow suit with similar applications as quickly as possible. The race to develop ever more powerful AI models continues: Meta released its LLM, Llama, at the beginning of 2023, and Google presented its Bard model (now called Gemini) that same year. Other providers, such as Anthropic, have also delivered impressive AI applications.
The illusion of AI consciousness: why GPT-4o and other chatbots are not conscious.
• Shannon Vallor, an AI expert and contributor to DeepMind, discusses the latest developments in generative AI, particularly OpenAI’s GPT-4o model, and warns of the dangers of the illusion of artificial consciousness.
…
AI expert and DeepMind contributor Shannon Vallor explores OpenAI’s latest GPT-4o model, drawing on the ideas of her new book, ‘The AI Mirror’. Despite only modest intellectual improvements, the model’s human-like behaviour raises serious ethical concerns, but, as Vallor argues, AI today presents only the illusion of consciousness.
LoGAH: Predicting 774-Million-Parameter Transformers using Graph HyperNetworks with 1/100 Parameters.
https://huggingface.co/papers/2405.
A good initialization of deep learning models is essential, since it can help them converge faster and to better solutions.
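The paper's own code is not reproduced here; the sketch below only illustrates the general hypernetwork idea behind parameter prediction, in which a small "hyper" network emits the weights used to initialize a larger target layer. The shapes, names, and architecture are illustrative assumptions, not LoGAH's graph hypernetwork.

```python
# Minimal sketch of the hypernetwork idea: a small MLP predicts the weight
# matrix of a larger target layer, which is then used as its initialization.
# Shapes and names are illustrative assumptions, not LoGAH's architecture.
import torch
import torch.nn as nn


class TinyHyperNet(nn.Module):
    """Maps a layer embedding to a full weight matrix for one target layer."""

    def __init__(self, embed_dim: int, target_in: int, target_out: int) -> None:
        super().__init__()
        self.target_shape = (target_out, target_in)
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, target_out * target_in),
        )

    def forward(self, layer_embedding: torch.Tensor) -> torch.Tensor:
        # Predict a flat weight vector, then reshape it to the target layer's shape.
        return self.generator(layer_embedding).view(self.target_shape)


# Use the predicted weights to initialize a target linear layer.
hyper = TinyHyperNet(embed_dim=16, target_in=128, target_out=256)
layer_embedding = torch.randn(16)  # stand-in for a graph-derived layer embedding
predicted_weight = hyper(layer_embedding)

target_layer = nn.Linear(128, 256)
with torch.no_grad():
    target_layer.weight.copy_(predicted_weight)  # predicted initialization
```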