Paper page: https://huggingface.co/papers/2401.14403
Project page: https://open-world-mobilemanip.github.io/
Deploying robots in open-ended, unstructured environments such as homes has been a long-standing research problem.
The Rail Bus, a pioneering mode of transportation originating in Zhuzhou, China, is a notable development in urban transit. Introduced by the Chinese manufacturer CRRC, this self-driving vehicle, which resembles a train but runs without tracks, completed its inaugural journey in 2017. The Rail Bus seeks to revolutionise traditional concepts of buses, trains, and trams. Its design was presented to the public in June 2017, and remarkably, within fewer than five months, CRRC began testing on October 30, 2017. Covering a 3-kilometer route with stops at four stations in Zhuzhou, this marked a significant milestone in transportation.
Current artificial intelligence models use billions of trainable parameters to tackle challenging tasks. However, this large number of parameters comes at a hefty cost: training and deploying these huge models requires immense memory and computing capability that can only be provided by hangar-sized data centers, in processes that consume energy equivalent to the electricity needs of mid-sized cities.
The research community is working to rethink both the underlying computing hardware and the machine learning algorithms so that artificial intelligence can continue developing at its current pace sustainably. Optical implementations of neural network architectures are a promising avenue because the connections between units can be realised at very low power.
New research reported in Advanced Photonics combines light propagation inside multimode fibers with a small number of digitally programmable parameters, and achieves the same performance on image classification tasks as fully digital systems with more than 100 times as many programmable parameters. This computational framework streamlines memory requirements and reduces the need for energy-intensive digital processing, while achieving the same level of accuracy on a variety of machine learning tasks.
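The paper's optical setup obviously cannot be reproduced in software, but the general idea it describes, pairing a large fixed transform with a small set of digitally programmable parameters, can be sketched in a few lines of Python. Everything below (the tanh nonlinearity standing in for fiber propagation, the layer sizes, the ridge-regression readout) is an illustrative assumption, not the authors' actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the optical front end: a fixed (non-trainable), high-dimensional,
# nonlinear transform, loosely analogous to light propagation through a multimode fiber.
D_IN, D_HIDDEN, N_CLASSES = 28 * 28, 4096, 10
W_fixed = rng.standard_normal((D_IN, D_HIDDEN)) / np.sqrt(D_IN)

def fiber_like_transform(x):
    """Fixed nonlinear mixing of the input; contributes no trainable parameters."""
    return np.tanh(x @ W_fixed)

def train_readout(x_train, y_onehot, reg=1e-2):
    """Train only the small digital readout, via closed-form ridge regression."""
    h = fiber_like_transform(x_train)
    return np.linalg.solve(h.T @ h + reg * np.eye(D_HIDDEN), h.T @ y_onehot)

# Synthetic usage, just to show the training path end to end.
x_demo = rng.standard_normal((256, D_IN))
y_demo = np.eye(N_CLASSES)[rng.integers(0, N_CLASSES, size=256)]
readout = train_readout(x_demo, y_demo)
pred = fiber_like_transform(x_demo) @ readout

# Only the readout weights are digitally programmable; the "optical" weights stay frozen.
print("trainable parameters:", readout.size)       # 40,960
print("frozen front-end weights:", W_fixed.size)   # 3,211,264
```

In the published work the heavy fixed transform is carried out physically by light in the fiber rather than as a matrix multiplication, which is where the memory and energy savings come from; only the small readout has to live in digital hardware.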
Lumiere, for its part, addresses this gap by using a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, in a single pass through the model, leading to more realistic and coherent motion.
“By deploying both spatial and (importantly) temporal down- and up-sampling and leveraging a pre-trained text-to-image diffusion model, our model learns to directly generate a full-frame-rate, low-resolution video by processing it in multiple space-time scales,” the researchers noted in the paper.
The video model was trained on a dataset of 30 million videos, along with their text captions, and is capable of generating 80 frames at 16 fps. The source of this data, however, remains unclear at this stage.
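Lumiere's code is not public here, so the following is only a minimal sketch of what joint space-time down- and up-sampling can look like, written in PyTorch with 3D convolutions. The module name, channel counts, kernel sizes, and strides are illustrative assumptions, not the model's actual Space-Time U-Net.

```python
import torch
import torch.nn as nn

class SpaceTimeDownUp(nn.Module):
    """Toy block that compresses a video jointly in time and space,
    then restores the original resolution. Illustrative only."""
    def __init__(self, channels: int = 16):
        super().__init__()
        # A strided 3D convolution halves the temporal and spatial resolution together.
        self.down = nn.Conv3d(channels, channels * 2, kernel_size=3, stride=2, padding=1)
        # A transposed 3D convolution restores the original (frames, height, width) shape.
        self.up = nn.ConvTranspose3d(channels * 2, channels, kernel_size=4, stride=2, padding=1)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, channels, frames, height, width)
        return self.up(torch.relu(self.down(video)))

# An 80-frame clip (the frame count reported above) in (B, C, T, H, W) layout.
clip = torch.randn(1, 16, 80, 64, 64)
out = SpaceTimeDownUp(channels=16)(clip)
print(out.shape)  # torch.Size([1, 16, 80, 64, 64])
```

The point of operating on the whole clip at once, rather than generating keyframes and interpolating between them, is that temporal coherence is handled inside a single network pass instead of being stitched together afterwards.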
ICYMI: INTRODUCING MORPHEUS-1, the world’s first multi-modal generative ultrasonic transformer designed to induce and stabilize lucid dreams, according to Prophetic #AI. Available for beta users in Spring 2024.
Startup company Prophetic is set to unveil the “Halo” device to induce lucid dreaming, Fortune reports.
The iCub3 robot avatar system has been designed to let human operators embody a humanoid robot, covering locomotion, manipulation, voice, and facial expressions, with comprehensive sensory feedback spanning visual, auditory, haptic, weight, and touch modalities.
The iCub3 avatar system consists primarily of the iCub3 humanoid robot, an evolved version of IIT’s humanoid robot first developed two decades ago, and innovative wearable technologies named iFeel.
A team of researchers at Facebook’s parent company Meta has come up with a new benchmark to gauge the abilities of AI assistants like OpenAI’s large language model GPT-4.
And judging by that benchmark, OpenAI’s current crop of AI models is… still pretty stupid.
The team, which includes “AI godfather” and Meta chief scientist Yann LeCun, came up with an exam called GAIA that’s made up of 466 questions that “are conceptually simple for humans yet challenging for most advanced AIs,” per a yet-to-be-peer-reviewed paper.