
For scientists searching for the brain’s ‘control room’, an area called the claustrum has emerged as a compelling candidate. This little-studied deep brain structure is thought to be the place where multiple senses are brought together, attention is controlled, and consciousness arises. Observations in mice now support the role of the claustrum as a hub for coordinating activity across the brain. New research from the RIKEN Center for Brain Science (CBS) shows that slow-wave brain activity, a characteristic of sleep and resting states, is controlled by the claustrum. By synchronizing silent and active states across large parts of the brain, these slow waves could contribute to consciousness.

A serendipitous discovery led Yoshihiro Yoshihara, team leader at CBS, to investigate the claustrum. His lab normally studies the sense of smell and the detection of pheromones, but the team chanced upon a genetically engineered mouse strain with a specific population of brain cells present only in the claustrum. These neurons could be turned on using optogenetic technology or selectively silenced, enabling the study of what turned out to be a vast, claustrum-controlled network. The study by Yoshihara and colleagues was published in Nature Neuroscience on May 11.

They started out by mapping the claustrum’s inputs and outputs and found that many higher-order brain areas send connections to the claustrum, such as those involved in sensation and motor control. Outgoing connections from the claustrum were broadly distributed across the brain, reaching numerous brain areas such as prefrontal, orbital, cingulate, motor, insular, and entorhinal cortices. “The claustrum is at the center of a widespread brain network, covering areas that are involved in cognitive processing,” says co-first author Kimiya Narikiyo. “It essentially reaches all higher brain areas and all types of neurons, making it a potential orchestrator of brain-wide activity.”

The table also shows the average normalized rank of transfer learning approaches. Hyperparameter transfer learning uses evaluation data from past HPO tasks in order to warmstart the current HPO task, which can result in significant speed-ups in practice.

Syne Tune supports transfer-learning-based HPO via an abstraction that maps a scheduler and transfer learning data to a warmstarted instance of that scheduler. We consider bounding-box and quantile-based variants of ASHA, referred to as ASHA-BB and ASHA-CTS respectively. We also consider a zero-shot approach (ZS), which greedily selects hyperparameter configurations that complement previously selected ones, based on historical performance; and RUSH, which warmstarts ASHA with the best configurations found on previous tasks. As expected, we find that transfer learning approaches accelerate HPO.
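As a rough illustration of the zero-shot idea (a minimal sketch in plain Python, not the Syne Tune API; the error matrix below is made up), configurations can be ordered greedily so that each new pick most improves the best error achieved so far, averaged over past tasks:

```python
# Greedy zero-shot selection: repeatedly pick the configuration that most
# lowers the average best-so-far error across historical tasks, so chosen
# configurations complement one another.

def zero_shot_order(errors, num_select):
    """errors[t][c] = validation error of config c on past task t."""
    num_tasks = len(errors)
    num_configs = len(errors[0])
    best = [float("inf")] * num_tasks  # best error achieved per task so far
    chosen = []
    for _ in range(num_select):
        def avg_error_if_added(c):
            return sum(min(best[t], errors[t][c]) for t in range(num_tasks)) / num_tasks
        candidate = min(
            (c for c in range(num_configs) if c not in chosen),
            key=avg_error_if_added,
        )
        chosen.append(candidate)
        for t in range(num_tasks):
            best[t] = min(best[t], errors[t][candidate])
    return chosen

errors = [
    [0.9, 0.2, 0.5],  # past task 0: config 1 is best
    [0.2, 0.8, 0.5],  # past task 1: config 0 is best
]
print(zero_shot_order(errors, 2))  # → [1, 0]
```

Configurations that win on tasks where the already-chosen ones do poorly are favored, which is what makes the resulting ordering complementary rather than a plain ranking by average performance.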

Our experiments show that Syne Tune makes research on automated machine learning more efficient, reliable, and trustworthy. By making simulation on tabulated benchmarks a first-class citizen, it opens hyperparameter optimization to researchers without massive computation budgets. By supporting advanced use cases, such as hyperparameter transfer learning, it helps practitioners solve real tuning problems more effectively.
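The appeal of tabulated benchmarks is easy to see in a toy sketch (hypothetical numbers, not a real benchmark or the Syne Tune API): the tuner replaces each training run with a table lookup, so experiments that would cost GPU-days replay in seconds:

```python
# table[(config, epoch)] = validation error, precomputed once offline.
table = {
    ("lr=0.1", 1): 0.40, ("lr=0.1", 2): 0.30,
    ("lr=0.01", 1): 0.50, ("lr=0.01", 2): 0.25,
}

def simulate(config, epoch):
    """Stands in for an expensive training run: just a dictionary lookup."""
    return table[(config, epoch)]

# A trivial "tuner": evaluate every config at 2 epochs, keep the best.
best = min({c for c, _ in table}, key=lambda c: simulate(c, 2))
print(best)  # → lr=0.01
```

Because every evaluation is deterministic and instantaneous, the same HPO run can be repeated exactly, which is what makes simulated benchmarking reliable for comparing methods.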

Summary: Brain mapping study identifies important neural networks and their connections that appear to enhance the conscious experience.

Source: University of Tokyo

Science may be one step closer to understanding where consciousness resides in the brain. A new study shows the importance of certain types of neural connections in supporting consciousness.

The research, published in Cerebral Cortex, was led by Jun Kitazono, a corresponding author and a project researcher in the Department of General Systems Studies at the University of Tokyo.

RIYADH (BLOOMBERG) — Saudi Arabia wants to build a gigantic megastructure that contains a city for 9 million people, its crown prince announced on Monday (July 25).

The design takes the shape of two parallel buildings with mirrored surfaces, rising 500m above sea level — taller than the Empire State Building — and stretching horizontally for more than 100km.

They’re part of the prince’s US$500 billion (S$693 billion) Neom project, a plan to turn an expanse of desert the size of Belgium into a high-tech region.

Designed for precision agriculture and environmental management use cases, the P4 Multispectral drone combines data from six separate sensors to measure the health of crops. It can be used to monitor everything from individual plants to entire fields, as well as weeds, insects, and a variety of soil conditions.

The P4 Multispectral drone is compatible with standard industry workflows including flight programming, mapping, and analytics software from DJI and other leading providers. Using the DJI GS Pro application, you can create automated and repeatable missions including flight planning, mission execution, and flight data management. Data collected can be easily imported into DJI Terra or a suite of third-party software including Pix4D Mapper and DroneDeploy, for analysis and to generate additional vegetation index maps.

The drone was first announced in 2019.

Normally, robotic arms are controlled by a GUI running on a host PC, or with some kind of analog system that maps human inputs to various degrees of rotation. However, Maurizio Miscio was able to build a custom robotic arm that is completely self-contained — thanks to a companion mobile app that resides on an old smartphone housed inside a control box.

Miscio started his project by making 3D models of each piece, most of which were 3D-printed. These included the gripper, various joints that each provide a single axis of rotation, and a large circular base that acts as a stable platform on which the arm can spin. He then set to work attaching a servo motor to each of the five rotational axes, along with a single SG90 micro servo for the gripper. These motors were connected to an Arduino Uno that also had an HC-05 Bluetooth® serial module for external communication.

To operate the arm, Miscio developed a mobile app with the help of MIT App Inventor, which presents the user with a series of buttons that rotate a particular servo motor to the desired angle. The app even lets a series of motions be recorded and “played back” to the Uno over Bluetooth for repeated, accurate movements.
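The record-and-replay idea can be sketched in a few lines (illustrative Python, not Miscio's actual App Inventor code; the command format is an assumption): every command sent to the arm is logged so the whole sequence can later be resent verbatim over the Bluetooth link:

```python
# Hypothetical record/"play back" logic: commands sent to the arm are
# appended to a recording, which can then be replayed exactly.

class MotionRecorder:
    def __init__(self, send):
        self.send = send         # callable that writes one command to the HC-05
        self.recording = []

    def move(self, servo_id, angle):
        cmd = f"{servo_id}:{angle}"  # assumed format, e.g. "3:90" = servo 3 to 90°
        self.recording.append(cmd)   # remember the command...
        self.send(cmd)               # ...and send it immediately

    def play_back(self):
        for cmd in self.recording:   # resend the sequence for repeatable motion
            self.send(cmd)

sent = []                        # stand-in for the Bluetooth serial link
arm = MotionRecorder(sent.append)
arm.move(0, 45)
arm.move(1, 120)
arm.play_back()
print(sent)  # → ['0:45', '1:120', '0:45', '1:120']
```

Replaying the exact command sequence is what makes the arm's repeated movements accurate: the servos receive the same angles in the same order every time.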

Director plans in the latent space of a world model learned from pixels. The world model first maps images to model states and predicts future model states given future actions. From the predicted trajectories of model states, Director optimizes two policies: every fixed number of steps, the manager selects a new goal, and the worker learns to reach those goals using primitive actions. Choosing goals directly in the high-dimensional continuous representation space of the world model would pose a difficult control problem for the manager, so Director instead learns a goal autoencoder that compresses model states into smaller discrete codes. The manager selects goals in this code space, and the goal autoencoder decodes the chosen codes back into model states, which are passed as goals to the worker.
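The resulting control flow can be sketched as follows (an illustrative toy in Python; in the real system the manager, worker, and goal autoencoder are all learned neural networks, and every name and value here is an assumption made for the sketch):

```python
K = 4  # assumed horizon: the manager acts every K steps

def run_episode(env_step, manager, decode, worker, state, num_steps):
    """Roll out one episode: the manager picks goals, the worker pursues them."""
    trace = []
    goal = None
    for t in range(num_steps):
        if t % K == 0:                # manager selects a new objective...
            code = manager(state)     # ...as a discrete code, not a raw state
            goal = decode(code)       # goal autoencoder decodes it to a model state
        action = worker(state, goal)  # worker chooses a primitive action
        state = env_step(state, action)
        trace.append((t, goal, action, state))
    return trace

# Toy 1-D instance: states are integers, codes index a few goal positions.
goals = [0, 5, 10]                      # decoded goal state per discrete code
decode = lambda code: goals[code]
manager = lambda s: min(2, s // 4 + 1)  # crude rule: aim further as state grows
worker = lambda s, g: 1 if g > s else (-1 if g < s else 0)  # step toward goal
env_step = lambda s, a: s + a

trace = run_episode(env_step, manager, decode, worker, state=0, num_steps=8)
print(trace[-1][3])  # → 8 (the agent marches toward successive goals)
```

The key property the sketch preserves is temporal abstraction: the manager only decides every K steps, so it reasons over a much coarser timescale than the worker.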

Advances in deep reinforcement learning have accelerated the study of decision-making in artificial agents. In contrast to generative ML models like GPT-3 and Imagen, artificial agents can actively affect their environment, for example by moving a robot arm based on camera inputs or clicking a button in a web browser. Although such agents have the potential to aid humans more and more, existing approaches are limited by the need for precise, frequently provided reward feedback in order to learn effective behaviors. For instance, even powerful systems like AlphaGo, despite access to massive computing resources, must play many moves before receiving their next reward.

In contrast, complex activities like preparing a meal require decision-making at every level, from planning the menu, to navigating to the shop to buy supplies, to correctly executing the fine motor skills needed at each step along the way, all based on high-dimensional sensory inputs. Hierarchical reinforcement learning (HRL) automatically breaks complicated tasks down into achievable subgoals, letting artificial agents complete tasks more independently under sparse rewards. Research on HRL has been difficult, however, because there is no general solution, and existing approaches rely on manually defined goal spaces or subtasks.

Astronomers from MIT report today that they have discovered a mysterious signal with a pattern akin to a heartbeat, emanating from a far-off galaxy billions of light-years from Earth. Exactly what the source of this regular pulse of radio waves may be remains a mystery, as this is the first time such a signal has been recorded.

They have identified the signal as a fast radio burst (FRB), which is typically an intensely strong burst of radio waves of unknown astrophysical origin that lasts only a few milliseconds at most. This new signal, labelled FRB 20191221A, is unusual, because it persists for up to three seconds, which is about 1,000 times longer than the average FRB. Within this time, there are shorter bursts of radio waves that repeat every 0.2 seconds in a clear periodic pattern, similar to that of a beating heart.

Since the first FRB was discovered in 2007, hundreds of similar radio flashes have been detected across the universe, most recently by the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, an interferometric radio telescope that is located at the Dominion Radio Astrophysical Observatory in British Columbia, Canada. CHIME is designed to pick up radio waves emitted by hydrogen in the very earliest stages of the universe, but the telescope is also sensitive to fast radio bursts. Since it began observing the sky in 2018, CHIME has detected hundreds of FRBs emanating from different parts of the sky.