
A lightweight, customized mouse that fits snugly into your palm and your palm alone, delivering maximum comfort and peak performance.

In this day and age, when we spend hours hunched over a computer, there is a case for making everything ergonomic.

Into this niche steps Formify, a team based out of Toronto with the belief that individualized design should be accessible to everyone.

No human intervention is required.

A research team led by Yan Zeng, a scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), has built a new material research laboratory where robots do the work and artificial intelligence (AI) can make routine decisions. This allows work to be conducted around the clock, thereby accelerating the pace of research.

Research facilities and instrumentation have come a long way over the years, but the nature of research remains the same. At the center of each experiment is a human doing the measurements, making sense of the data, and deciding the next steps to take. At the A-Lab set up at Berkeley, the researchers led by Zeng want to break through the current pace of research by using robotics and AI.

Scientists have long studied neurostimulation to treat paralysis and sensory deficits caused by strokes and spinal cord injuries, which affect some 380,000 people in Canada.

A new study published in the journal Cell Reports Medicine demonstrates the possibility of autonomously optimizing the stimulation parameters of prostheses implanted in the brains of animals, without human intervention.

The work was done at Université de Montréal by neuroscience professors Marco Bonizzato, Numa Dancause and Marina Martinez, in collaboration with mathematics professor and Mila researcher Guillaume Lajoie.

In parallel to recent developments in machine learning like GPT-4, a group of scientists has recently proposed the use of neural tissue itself, carefully grown to recreate the structures of the animal brain, as a computational substrate. After all, if AI is inspired by neurological systems, what better medium to do computing than an actual neurological system? Gathering developments from the fields of computer science, electrical engineering, neurobiology, electrophysiology, and pharmacology, the authors propose a new research initiative they call “organoid intelligence.”

OI is a collective effort to promote the use of brain organoids, tiny spherical masses of brain tissue grown from stem cells, for computation, for drug research, and as a model to study at a small scale how a complete brain may function. In other words, organoids provide an opportunity to better understand the brain, and OI aims to use that knowledge to develop neurobiological computational systems that learn from less data and with less energy than silicon hardware.

The development of organoids has been made possible by two bioengineering breakthroughs: induced pluripotent stem cells and 3D cell culturing techniques.

In March, Orion announced it had set out on a four-year project to build a cutting-edge ecosystem for pharmaceutical research in Finland.

Consisting of companies, universities and research institutes, the ecosystem will utilise artificial intelligence and machine learning in order to reduce the time required for studying and developing pharmaceutical products.

“Utilising data with the help of artificial intelligence is a competitive advantage for developing new innovative medicines because it expedites development and significantly increases the probability of success,” said Outi Vaarala, director of innovative medicines at Orion.

Scientists have begun using artificial intelligence to help them communicate with animals — and they’re starting small with bats and bees.

AI allows humans to use breakthrough techniques to decode and observe how animals communicate so we can try to speak back to them.

Scientific American spoke with Professor Karen Bakker, author of the new book The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants.

NOTE: This article was written by GPT-4 based on the code base. For more info, read this.

Abstract:

In this research, we propose a novel task-driven autonomous agent that leverages OpenAI’s GPT-4 language model, Pinecone vector search, and the LangChain framework to perform a wide range of tasks across diverse domains. Our system is capable of completing tasks, generating new tasks based on completed results, and prioritizing tasks in real-time. We discuss potential future improvements, including the integration of a security/safety agent, expanding functionality, generating interim milestones, and incorporating real-time priority updates. The significance of this research lies in demonstrating the potential of AI-powered language models to autonomously perform tasks within various constraints and contexts.
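The agent loop the abstract describes (execute a task, generate follow-up tasks from the result, reprioritize the queue) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the three helper functions are hypothetical stand-ins for the LLM and vector-store calls (e.g. GPT-4 and Pinecone) that a real system would make.

```python
from collections import deque

def execute_task(objective: str, task: str) -> str:
    # Stand-in for an LLM call that actually performs the task.
    return f"result of {task!r} toward {objective!r}"

def create_new_tasks(result: str, done: str, pending: deque) -> list[str]:
    # Stand-in for an LLM call that proposes follow-up tasks
    # based on the result just obtained.
    return [f"follow-up to {done}"] if not pending else []

def prioritize(tasks: deque, objective: str) -> deque:
    # Stand-in for an LLM call that reorders the task list by
    # relevance to the objective; here we just sort alphabetically.
    return deque(sorted(tasks))

def run_agent(objective: str, first_task: str, max_iterations: int = 3):
    tasks = deque([first_task])
    results = []
    for _ in range(max_iterations):
        if not tasks:
            break
        task = tasks.popleft()              # take the top-priority task
        result = execute_task(objective, task)
        results.append((task, result))      # a vector store could index this
        tasks.extend(create_new_tasks(result, task, tasks))
        tasks = prioritize(tasks, objective)
    return results

if __name__ == "__main__":
    for task, result in run_agent("write a report", "outline the report"):
        print(task, "->", result)
```

The `max_iterations` cap stands in for the "constraints" the abstract mentions; a production agent would replace it with cost budgets and the proposed safety agent.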