Researchers call on technology companies to test their systems for consciousness and create AI welfare policies.

Researchers have developed a device that can simultaneously measure six markers of brain health. The sensor, which is inserted through the skull into the brain, pulls off this feat thanks to an artificial intelligence (AI) system that disentangles the six signals in real time.
Being able to continuously monitor biomarkers in patients with traumatic brain injury could improve outcomes by catching swelling or bleeding early enough for doctors to intervene. But most existing devices measure just one marker at a time. They also tend to be made with metal, so they can’t easily be used in combination with magnetic resonance imaging.
Simultaneous access to measurements could improve outcomes for brain injuries.
Originally published on Towards AI.
AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly “understanding” the information they’re presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem — they break trust and sometimes lead to serious mistakes.
So why do these models, which seem so advanced, get things so wrong? The reason isn't only bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess, and guess wrong. Interestingly, there is a historical parallel that helps explain this limitation. In 1931, the mathematician Kurt Gödel made a groundbreaking discovery: he showed that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proven within that system. His findings revealed that even the most rigorous systems have limits, things they simply cannot handle.
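To make the probabilistic point concrete, here is a minimal Python sketch of how next-token sampling can produce fluent but wrong output. The prompt and the probability values are invented for illustration; real models assign learned scores to vocabularies of tens of thousands of tokens.

```python
import random

# Toy next-token distribution a model might assign after the prompt
# "The capital of Australia is". The probabilities are invented for
# illustration only.
next_token_probs = {
    "Canberra":  0.55,  # correct
    "Sydney":    0.30,  # fluent but wrong: a plausible-sounding guess
    "Melbourne": 0.10,
    "Auckland":  0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sampling means the wrong-but-plausible token is sometimes emitted.
# In this toy setup, roughly 45% of completions would be "hallucinations",
# even though every one of them reads as a fluent sentence.
for _ in range(5):
    print(random.choices(tokens, weights=weights, k=1)[0])
```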
Meta has unveiled plans for a new data center in Richland Parish, Louisiana, marking a significant expansion of its global infrastructure. The $10 billion investment will be the company's largest data center to date, spanning 4 million square feet. This state-of-the-art facility will be a crucial component in the company's ongoing efforts to support the rapid growth of artificial intelligence (AI) technologies. Meta did not disclose an estimated completion or operational date for the facility.
The new Richland Parish data center will create over 500 full-time operational jobs, providing a substantial boost to the local economy. During peak construction, the project is expected to employ more than 5,000 workers.
Meta’s decision to build in Richland Parish was driven by several factors, including the region’s robust infrastructure, reliable energy grid, and business-friendly environment. The company also cited the strong support from local community partners, which played a critical role in facilitating the development of the data center.
Soukaina Filali Boubrahimi and Shah Muhammad Hamdi lead NSF-funded research to craft a comprehensive solar flare catalog of images and data, along with data-driven machine learning models poised for deployment in an operational environment.
The dream of building a practical, fault-tolerant quantum computer has taken a significant step forward.
In a breakthrough study recently published in Nature, researchers from Google DeepMind and Google Quantum AI said they have developed an AI-based decoder, AlphaQubit, which drastically improves the accuracy of quantum error correction—a critical challenge in quantum computing.
“Our work illustrates the ability of machine learning to go beyond human-designed algorithms by learning from data directly, highlighting machine learning as a strong contender for decoding in quantum computers,” researchers wrote.
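For readers unfamiliar with decoding, here is a toy Python sketch of what a decoder does, using the classic 3-qubit bit-flip repetition code rather than the surface codes AlphaQubit targets. The hand-built lookup table below is exactly the kind of human-designed rule that, per the study, a learned decoder aims to outperform.

```python
# A minimal illustration of quantum error decoding, using the 3-qubit
# bit-flip repetition code (not AlphaQubit's surface-code setting).
# Two parity checks yield a "syndrome"; the decoder maps each syndrome
# to its most likely single-qubit error. AlphaQubit replaces this kind
# of hand-built rule with a neural network trained on measurement data.

SYNDROME_TO_CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip on qubit 0
    (1, 1): 1,     # flip on qubit 1
    (0, 1): 2,     # flip on qubit 2
}

def measure_syndrome(qubits):
    """Parity checks: q0 XOR q1 and q1 XOR q2."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def decode(qubits):
    """Apply the correction that the measured syndrome points to."""
    correction = SYNDROME_TO_CORRECTION[measure_syndrome(qubits)]
    if correction is not None:
        qubits[correction] ^= 1
    return qubits

# Encode logical 0 as [0, 0, 0], inject a bit flip on qubit 1, decode.
state = [0, 0, 0]
state[1] ^= 1          # noise
print(decode(state))   # -> [0, 0, 0]: the error is corrected
```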
AI has advanced remarkably, with state-of-the-art models now powering chatbots, humanoid robots, self-driving cars, and more. It has become one of the fastest-growing technologies, and object detection and object classification are among its most popular applications.
In this blog post, we will cover the complete steps of building and training an image classification model from scratch using a Convolutional Neural Network. We will use the publicly available CIFAR-10 dataset to train the model. This dataset is useful because it contains images of everyday objects such as cars, aeroplanes, dogs, and cats. By training the neural network on these objects, we can develop intelligent systems that classify such things in the real world. The dataset contains 60,000 images of size 32×32 across 10 classes. By the end of this tutorial, you will have a model that can identify an object from its visual features.
Learn how to build and train your first image classification model with Keras and TensorFlow using a Convolutional Neural Network.
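As a preview, here is a minimal Keras sketch of the kind of model the tutorial walks through; the layer sizes, epoch count, and optimizer are illustrative choices, not the tutorial's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load CIFAR-10: 50,000 training and 10,000 test images, 32x32 RGB,
# across 10 classes (airplane, automobile, bird, cat, deer, dog,
# frog, horse, ship, truck).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small CNN: convolution + pooling blocks, then a dense classifier.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10),  # one logit per class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# A short training run; the full tutorial tunes these choices.
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
model.evaluate(x_test, y_test)
```

The convolutional layers learn visual features such as edges and textures, while the final dense layers map those features to the 10 class scores.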
Through MIT OpenCourseWare, MITx, and MIT xPRO, learn about machine learning, computational thinking, deepfakes, and more, all for free.
With the rise of artificial intelligence, the job landscape is changing — rapidly. MIT Open Learning offers online courses and resources straight from the MIT classroom that are designed to empower learners and professionals across industries with the competencies essential for succeeding in an increasingly AI-powered world.