
A high school robotics team has built the world’s smallest and cheapest network switch.

The device was created by Murex Robotics, a team formed by students at Phillips Exeter Academy in New Hampshire.

The team built it because they could not find an affordable embedded Ethernet switch for a remotely operated vehicle (ROV) they were entering in an underwater drone competition.

Reconstructing a scene using a single-camera viewpoint is challenging. Researchers have deployed generative artificial intelligence (AI) to achieve this. However, the models can hallucinate objects when determining what is obscured.

An alternative approach is to use shadows in a color image to infer the shape of the hidden object. However, this method falls short when the shadows are hard to see.

To overcome these limitations, the MIT researchers used a single-photon LiDAR. A LiDAR emits pulses of light, and the time these signals take to bounce back to the sensor is used to build a 3D map of a scene.
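The time-of-flight principle behind LiDAR ranging can be sketched in a few lines: a pulse travels to the target and back, so the one-way distance is the speed of light times the round-trip time, divided by two. This is a generic illustration of the ranging math, not the MIT team's processing pipeline.

```python
# LiDAR ranging by time of flight: a light pulse travels out to the target
# and back, so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Convert a round-trip photon travel time into a one-way distance in meters."""
    return C * round_trip_seconds / 2.0

# A photon returning after 20 nanoseconds implies a target about 3 m away.
print(distance_from_time_of_flight(20e-9))
```

Repeating this measurement across many pulse directions yields the point cloud that forms the 3D map.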

Large language models have emerged as a transformative technology, revolutionizing AI with their ability to generate human-like text with unprecedented fluency and apparent comprehension. Trained on vast datasets of human-generated text, LLMs have unlocked innovations across industries, from content creation and language translation to data analytics and code generation. Recent developments, like OpenAI’s GPT-4o, showcase multimodal capabilities, processing text, vision, and audio inputs in a single neural network.

Despite their potential for driving productivity and enabling new forms of human-machine collaboration, LLMs are still in their nascent stage. They face limitations such as factual inaccuracies, biases inherited from training data, lack of common-sense reasoning, and data privacy concerns. Techniques like retrieval augmented generation aim to ground LLM knowledge and improve accuracy.
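The retrieval-augmented generation idea mentioned above can be illustrated with a minimal sketch: before querying a language model, retrieve the most relevant passage from a document store and prepend it to the prompt so the model can ground its answer in that context. The word-overlap scoring here is a toy stand-in (real systems use dense vector embeddings), and all function names are illustrative.

```python
# Minimal RAG sketch: retrieve the best-matching document, then build a
# grounded prompt. Scoring uses Jaccard word overlap as a toy similarity.

def overlap_score(query: str, doc: str) -> float:
    """Jaccard similarity between the word sets of query and document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda doc: overlap_score(query, doc))

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved context."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "GPT-4o processes text, vision, and audio in a single neural network.",
    "Edible robots could reduce electronic waste.",
]
print(build_grounded_prompt("What inputs does GPT-4o process?", docs))
```

Because the model answers from retrieved text rather than parametric memory alone, this pattern reduces (but does not eliminate) factual inaccuracies.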

To explore these issues, I spoke with Amir Feizpour, CEO and founder of AI Science, an expert-in-the-loop business workflow automation platform. We discussed the transformative impacts, applications, risks, and challenges of LLMs across different sectors, as well as the implications for startups in this space.

As a supplement to optical super-resolution microscopy techniques, computational super-resolution methods have demonstrated remarkable results in alleviating the spatiotemporal imaging trade-off. However, they commonly suffer from low structural fidelity and universality. Therefore, we herein propose a deep-physics-informed sparsity framework designed holistically to synergize the strengths of physical imaging models (image blurring processes), prior knowledge (continuity and sparsity constraints), a back-end optimization algorithm (image deblurring), and deep learning (an unsupervised neural network). Owing to the utilization of a multipronged learning strategy, the trained network can be applied to a variety of imaging modalities and samples to enhance the physical resolution by a factor of at least 1.67 without requiring additional training or parameter tuning.

Robots and food have long been distant worlds: Robots are inorganic, bulky, and non-disposable; food is organic, soft, and biodegradable. Yet, research that develops edible robots has progressed recently and promises positive impacts: Robotic food could reduce electronic waste, help deliver nutrition and medicines to people and animals in need, monitor health, and even pave the way to novel gastronomical experiences. But how far are we from having a fully edible robot for lunch or dessert? And what are the challenges?

Scientists from the RoboFood project, based at EPFL, address these and other questions in a new perspective article in the journal Nature Reviews Materials (“Towards edible robots and robotic food”).

“Bringing robots and food together is a fascinating challenge,” says Dario Floreano, director of the Laboratory of Intelligent Systems at EPFL and first author of the article. In 2021, Floreano joined forces with Remko Boom from Wageningen University, The Netherlands, Jonathan Rossiter from the University of Bristol, UK, and Mario Caironi from the Italian Institute of Technology, to launch the project RoboFood.

• Relightable Gaussian Codec Avatars ➡️ https://go.fb.me/gdtkjm
• URHand: Universal Relightable Hands ➡️ https://go.fb.me/1lmv7o
• RoHM: Robust Human Motion Reconstruction via Diffusion ➡️ https://www.youtube.com/embed/hX7yO2c1hEE


RoHM is a novel diffusion-based motion model that, conditioned on noisy and occluded input data, reconstructs complete, plausible motions in consistent global coordinates. Given the complexity of the problem — requiring one to address different tasks (denoising and infilling) in different solution spaces (local and global motion) — we decompose it into two sub-tasks and learn two models, one for global trajectory and one for local motion. To capture the correlations between the two, we then introduce a novel conditioning module, combining it with an iterative inference scheme. We apply RoHM to a variety of tasks — from motion reconstruction and denoising to spatial and temporal infilling. Extensive experiments on three popular datasets show that our method outperforms state-of-the-art approaches qualitatively and quantitatively, while being faster at test time.

A series of advances in materials and design have enabled manufacturers to work at scales of around a billionth of a meter, creating devices and objects of nanoscopic dimensions. This is nanotechnology, which, although relatively new, already produces materials and technologies used in mass production.

The European Commission defines a nanomaterial as any material in which at least 50% of the particles are between one and one hundred nanometers in size (a nanometer is one billionth of a meter, or one-millionth of a millimeter). Nanomaterials differ from conventional materials in their unique properties, such as higher electrical conductivity and mechanical strength; these properties enable applications in sensor technologies and biomedicine, and in coatings that make surfaces more hydrophobic or self-cleaning.

The widespread use of nanotechnology is relatively new. Since 2000, nanomaterials have been used industrially as new research and experimental designs have made their effectiveness in different sectors clear. For example, in the health field, nanotechnology helps to reduce diagnostic errors and to develop nanobots (nanoscale robots) that repair and replace intracellular structures or repair DNA molecules; in the chemical sector, it facilitates coating devices with nanoparticles to improve their smoothness and heat resistance; in manufacturing, materials developed with nanotechnology enhance the performance of the final product by improving heat resistance, strength, durability, and electrical conductivity.

As one of the Department of Defense’s 14 critical technology areas, artificial intelligence has taken center stage in the organization’s research and development endeavors.

According to Matt Turek, deputy director of the Defense Advanced Research Projects Agency’s Information Innovation Office, approximately 70 percent of the agency’s programs now use AI and machine learning. Its priorities are not only to develop systems for U.S. warfighters, but also to prevent “strategic surprise” from adversary AI systems.