
The efforts of Jeff Hawkins and Numenta to understand how the brain works started over 30 years ago and culminated in the last two years with the publication of the Thousand Brains Theory of Intelligence. Since then, we’ve been thinking about how to apply our insights about the neocortex to artificial intelligence. As the theory describes, the brain works on principles fundamentally different from those of current AI systems. To build the kind of efficient and robust intelligence that we know humans are capable of, we need to design a new type of artificial intelligence. This is what the Thousand Brains Project is about.

In the past, Numenta has been very open with its research, posting meeting recordings, making code open source, and building a large community around its algorithms. We are happy to announce that we are returning to this practice with the Thousand Brains Project. With funding from the Gates Foundation, among others, we are significantly expanding our internal research efforts and also calling on researchers around the world to follow, or even join, this exciting project.

Today we are releasing a short technical document describing the core principles of the platform we are building. To be notified when the code and other resources are released, please sign up for the newsletter below. If you have a specific inquiry, please email us at [email protected].

For over five decades, futurist Raymond Kurzweil has shown a propensity for understanding how computers can change our world. Now he’s ready to anoint nanorobots as the key to allowing humans to transcend life’s ~120-year threshold.

As he wrote, both in the upcoming book The Singularity Is Nearer (set for release on June 25) and in an essay published in Wired, the merging of biotechnology with artificial intelligence will lead to nanotechnology helping “overcome the limitations of our biological organs altogether.”

As our cells reproduce over and over, our bodies accumulate errors, and those errors invite damage. Young bodies can repair that damage quickly, but the capacity to do so declines as age piles up.

Today I’m thrilled to announce BrainBridge, the world’s first concept for a head transplant system, which integrates advanced robotics and artificial intelligence to execute complete head and face transplantation procedures. This state-of-the-art system offers new hope to patients suffering from untreatable conditions such as stage-4 cancer, paralysis, and neurodegenerative diseases like Alzheimer’s and Parkinson’s.

Official website: https://brainbridge.tech/

A Stanford Medicine study reveals six subtypes of depression, identified through brain imaging and machine learning. These subtypes exhibit unique brain activity patterns, helping predict which patients will benefit from specific antidepressants or behavioral therapies. This approach aims to personalize and improve depression treatment efficacy.

In the not-too-distant future, a quick brain scan during a screening assessment for depression could identify the best treatment.

According to a new study led by researchers at Stanford Medicine, brain imaging combined with a type of AI called machine learning can reveal subtypes of depression and anxiety. The study, to be published today (June 17) in the journal Nature Medicine, sorts depression into six biological subtypes, or “biotypes,” and identifies treatments that are more likely or less likely to work for three of these subtypes.
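At a high level, discovering “biotypes” like this is a clustering problem over imaging-derived features. The sketch below is purely illustrative, not the study’s actual method: the synthetic data, feature count, and choice of k-means are all my assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for imaging features (e.g., region-wise activity
# measures) for 300 hypothetical patients; a real study would use
# carefully curated functional-imaging features instead.
features = rng.normal(size=(300, 12))

# Standardize features, then group patients into six clusters,
# loosely mirroring the idea of six biological subtypes.
X = StandardScaler().fit_transform(features)
model = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# Each patient is assigned a cluster label in 0..5.
print(model.labels_[:10])
```

In practice the interesting work is upstream of this step: extracting meaningful brain-activity features and validating that the clusters predict treatment response, which is what the study reports.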

From mundane chores to complex interactions, RoboCasa trains robots to navigate the real world:


Researchers have developed a large-scale simulation framework for training general-purpose robots in diverse, everyday settings.

The framework, called RoboCasa, provides thousands of 3D assets spanning more than 150 object categories, as well as dozens of interactable furniture items and appliances.

A range of generative AI tools are used to increase realism and diversity, including text-to-3D models for object assets and text-to-image models for environmental textures.

Delicious.


Science publisher Springer Nature has developed two new AI tools to detect fake research and duplicate images in scientific papers, helping to protect the integrity of published studies.

The growing number of cases of fake research is already putting a strain on the scientific publishing industry, according to Springer Nature. Following a pilot phase, the publisher is now rolling out two AI tools to identify papers with AI-generated fake content and problematic images — both red flags for research integrity issues.

The first tool, called “Geppetto,” detects AI-generated content, a telltale sign of “paper mills” producing fake research papers. The tool divides the paper into sections and uses its own algorithms to check the consistency of the text in each section.
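Springer Nature has not published Geppetto’s internals, but the section-wise approach it describes can be sketched roughly: score each section independently with a detector and flag outliers. Everything below is hypothetical, and the detector is a toy stub standing in for a trained classifier.

```python
def detector_score(text: str) -> float:
    """Stub for an AI-generated-text classifier returning a score in [0, 1].
    A real system would call a trained model here; this toy heuristic just
    counts a few filler words for illustration."""
    fillers = {"moreover", "furthermore", "notably", "overall"}
    words = text.lower().split()
    return sum(w.strip(".,") in fillers for w in words) / max(len(words), 1)

def flag_paper(sections: dict[str, str], threshold: float = 0.05) -> list[str]:
    """Score each section independently and return the names of sections
    above the threshold, mirroring a per-section consistency check."""
    scores = {name: detector_score(body) for name, body in sections.items()}
    return [name for name, score in scores.items() if score > threshold]

paper = {
    "Introduction": "Moreover, notably, overall, this field is growing. "
                    "Furthermore it matters.",
    "Methods": "We measured samples at 300 K using standard spectroscopy.",
}
print(flag_paper(paper))  # prints ['Introduction']
```

Checking sections separately rather than the whole paper at once means a single suspicious section (say, a templated introduction from a paper mill) stands out even when the rest of the text looks ordinary.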

Researchers from Tohoku University and Kyoto University have successfully developed a DNA-based molecular controller that autonomously directs the assembly and disassembly of molecular robots. This pioneering technology marks a significant step towards advanced autonomous molecular systems with potential applications in medicine and nanotechnology.

Details of the breakthrough were published in the journal Science Advances (“Autonomous assembly and disassembly of gliding molecular robots regulated by a DNA-based molecular controller”).

“Our newly developed molecular controller, composed of artificially designed DNA molecules and enzymes, coexists with molecular robots and controls them by outputting specific DNA molecules,” points out Shin-ichiro M. Nomura, an associate professor at Tohoku University’s Graduate School of Engineering and co-author of the study. “This allows the molecular robots to self-assemble and disassemble automatically, without the need for external manipulation.”

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place.

In the meantime, they are running out of training data to build even bigger, more powerful models, and to Meta, your posts are a gold mine.

If you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models in perpetuity, consider opting out. Although Meta does not guarantee it will allow this, it does say it will “review objection requests in accordance with relevant data protection laws.”

From King’s College London, Carnegie Mellon, and the University of Birmingham.

LLM-driven robots risk enacting discrimination, violence, and unlawful actions.

Rumaisa Azeem, Andrew Hundt, Masoumeh Mansouri, Martim Brandão, June 2024.
Paper: https://arxiv.org/abs/2406.08824
Code: https://github.com/SepehrDehdashtian/

