In the exercise, an engineer wearing a set of virtual reality (VR) goggles orchestrates the robot's actions.

Nadia, a cutting-edge humanoid robot, is engineered for a remarkable power-to-weight ratio and an extensive range of motion, made possible by innovative mechanisms and advanced composite materials.

“This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process.”


Researchers at the University of Sydney Nano Institute have introduced a compact silicon semiconductor chip that seamlessly integrates electronics with photonic components. The innovation promises to significantly expand radio-frequency (RF) bandwidth and to improve control over the information flowing within the chip.

The chip, built using cutting-edge silicon photonics technology, can integrate diverse systems on a semiconductor less than 5 millimeters wide. In a statement, Pro-Vice-Chancellor (Research) Professor Ben Eggleton likened the process to assembling Lego building blocks, with new materials integrated through advanced packaging of electronic ‘chiplets’.

Lego-style components in chips aren’t new, however. In 2022, researchers at MIT designed a Lego-like reconfigurable AI chip consisting of alternating layers of sensing and processing elements, allowing the chip’s layers to communicate optically.

GPT-4 and other models rely on transformers. With StripedHyena, researchers present an alternative to the widely used architecture.

With StripedHyena, the Together AI team presents a family of language models with 7 billion parameters. What makes it special: StripedHyena uses a new set of AI architectures that aim to improve training and inference performance compared to the widely used transformer architecture found in models such as GPT-4.

The release includes StripedHyena-Hessian-7B (SH 7B), a base model, and StripedHyena-Nous-7B (SH-N 7B), a chat model. These models are designed to be faster, more memory efficient, and capable of processing very long contexts of up to 128,000 tokens. Researchers from HazyResearch, hessian.AI, Nous Research, MILA, HuggingFace, and the German Research Centre for Artificial Intelligence (DFKI) were involved.
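The efficiency gains come from replacing most attention layers with gated long convolutions that can be evaluated with fast Fourier transforms. As a rough illustration of that idea, here is a minimal sketch of a gated long-convolution block; it is an assumption-laden toy, not the released StripedHyena code, and all names and sizes are invented:

```python
# Sketch of a gated long-convolution block in the spirit of Hyena-style
# architectures (illustrative only; not the StripedHyena implementation).
import torch
import torch.nn as nn

class GatedLongConvBlock(nn.Module):
    def __init__(self, dim: int, seq_len: int):
        super().__init__()
        # One learned filter per channel, as long as the sequence itself,
        # so each output position can depend on the entire context.
        self.filter = nn.Parameter(torch.randn(dim, seq_len) * 0.02)
        self.in_proj = nn.Linear(dim, 2 * dim)   # produces value and gate
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        v, gate = self.in_proj(x).chunk(2, dim=-1)
        L = x.shape[1]
        # FFT-based convolution: O(L log L) instead of attention's O(L^2).
        v_f = torch.fft.rfft(v.transpose(1, 2), n=2 * L)
        k_f = torch.fft.rfft(self.filter, n=2 * L)
        y = torch.fft.irfft(v_f * k_f, n=2 * L)[..., :L].transpose(1, 2)
        y = y * torch.sigmoid(gate)              # element-wise gating
        return self.out_proj(y)
```

Because the convolution is applied globally in O(L log L) time rather than through attention's quadratic pairwise comparisons, memory and compute grow far more gently with sequence length, which is one reason such architectures can handle contexts of up to 128,000 tokens.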

Researchers from Tsinghua University, Shanghai Artificial Intelligence Laboratory, and 01.AI have developed a new framework called OpenChat to improve open-source language models with mixed data quality.

Open-source language models such as LLaMA and LLaMA2, which allow anyone to inspect and understand the program code, are often refined and optimized using special techniques such as supervised fine-tuning (SFT) and reinforcement learning fine-tuning (RLFT).

However, these techniques assume that all training data is of the same quality, whereas in practice a data set typically consists of a mixture of optimal and relatively poor data. This can hurt the performance of language models.
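One way to make fine-tuning robust to mixed quality, in the spirit of OpenChat's approach, is to attach a coarse quality label to each example, condition the model on that label, and weight the loss accordingly. The sketch below illustrates the idea under those assumptions; the model name, tag format, and weights are invented and do not reflect OpenChat's actual recipe:

```python
# Sketch of quality-conditioned fine-tuning: each example carries a
# coarse quality label, the model is conditioned on it via a textual
# tag, and higher-quality examples receive a larger loss weight.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

QUALITY_WEIGHT = {"expert": 1.0, "mixed": 0.5}   # assumed weighting scheme

def quality_conditioned_loss(prompt: str, answer: str, quality: str):
    # Condition on the quality class by prepending a tag; at inference
    # time, one would generate with the "expert" tag to steer the model
    # toward its highest-quality behavior.
    text = f"<{quality}> {prompt} {answer}"
    batch = tokenizer(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    return QUALITY_WEIGHT[quality] * out.loss
```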

To move effectively in their surroundings and tackle everyday tasks, robots should be able to perform complex motions, coordinating the movement of individual limbs. Roboticists and computer scientists have thus been trying to develop computational techniques that can artificially replicate the process through which humans plan, execute, and coordinate the movements of different body parts.

A research group based at Intel Labs (Germany), University College London (UCL, UK), and VERSES Research Lab (US) recently set out to explore motor control using hierarchical generative models: computational techniques that organize the variables in data into different levels, or hierarchies, which can then be used to mimic specific processes.

Their paper, published in Nature Machine Intelligence, demonstrates the effectiveness of these models for enabling human-inspired motor control in autonomous robots.
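To see what "hierarchical" means here, consider a toy two-level controller: a high level plans coarse waypoints toward a goal, and a low level turns each waypoint into fine motor commands. This is only a sketch of the organizing principle, not the generative model from the paper; all classes and gains are invented:

```python
# Toy two-level hierarchical controller (illustrative assumption, not
# the paper's model): coarse planning on top, fine execution below.
import numpy as np

class HighLevel:
    """Plans a coarse sequence of waypoints toward a goal."""
    def plan(self, position, goal, steps=5):
        return [position + (goal - position) * (i + 1) / steps
                for i in range(steps)]

class LowLevel:
    """Turns each waypoint into fine-grained motor commands."""
    def __init__(self, gain=0.5):
        self.gain = gain

    def command(self, position, waypoint):
        # Simple proportional controller standing in for limb dynamics.
        return self.gain * (waypoint - position)

position, goal = np.zeros(2), np.array([1.0, 2.0])
high, low = HighLevel(), LowLevel()
for waypoint in high.plan(position, goal):
    while np.linalg.norm(waypoint - position) > 1e-2:
        position = position + low.command(position, waypoint)
```

The appeal of such a hierarchy is that each level only has to solve a simpler problem at its own timescale, rather than planning every joint movement from scratch.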

Recent advances allow imaging of neurons inside freely moving animals. However, to decode circuit activity, these imaged neurons must be computationally identified and tracked. This becomes particularly challenging when the brain itself moves and deforms inside an organism’s flexible body, e.g. in a worm. Until now, the scientific community has lacked the tools to address the problem.

Now, a team of scientists from EPFL and Harvard has developed a pioneering AI method to track neurons inside moving and deforming animals. The study, now published in Nature Methods, was led by Sahand Jamal Rahi at EPFL’s School of Basic Sciences.

The new method is based on a convolutional neural network (CNN), a type of AI trained to recognize patterns in images. It works through a process called “convolution”, which examines small parts of the picture – edges, colors, or shapes – one at a time and then combines that information to identify objects or patterns.
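Here is a minimal sketch of that idea: a small filter slides over the image, responding to local patterns such as edges, and stacking such layers lets the network combine local evidence into larger detections. The layer sizes and the two-class output are illustrative assumptions, not the authors' network:

```python
# Tiny CNN illustrating convolution: local filters first, then a
# classifier that combines their responses (sizes are illustrative).
import torch
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # detect local edges/blobs
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine into larger motifs
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # pool evidence over the image
    nn.Flatten(),
    nn.Linear(16, 2),                            # e.g. neuron vs. background
)

image = torch.randn(1, 1, 64, 64)                # one grayscale frame
print(tiny_cnn(image).shape)                     # torch.Size([1, 2])
```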