
Originally published on Towards AI, the world’s leading AI and technology news and media company.

The model is able to transfer knowledge between a simulated environment and real-world settings.

Researchers at MIT’s Center for Bits and Atoms are working on an ambitious project, designing robots that effectively self-assemble. The team admits that the goal of an autonomous self-building robot is still “years away,” but the work has thus far demonstrated positive results.

At the system’s center are voxels (a term borrowed from computer graphics), which carry power and data that can be shared between pieces. The pieces form the foundation of the robot, grabbing and attaching additional voxels before moving across the grid for further assembly.
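The grid-based, voxel-by-voxel assembly idea can be illustrated with a toy 2D sketch. This is purely hypothetical code, not the paper’s actual algorithm: it just shows an assembler expanding a structure one adjacent voxel at a time.

```python
# Toy sketch of voxel-by-voxel assembly on a 2D grid (illustration only;
# the MIT system's actual control algorithms are not described here).
def assemble(target_cells):
    """Place voxels one at a time, each adjacent to the structure so far."""
    placed = set()
    frontier = [min(target_cells)]  # hypothetical seed voxel
    while frontier:
        cell = frontier.pop()
        if cell in placed:
            continue
        placed.add(cell)  # the robot attaches a voxel here
        x, y = cell
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nbr in target_cells and nbr not in placed:
                frontier.append(nbr)
    return placed

# Build a 3x2 block of voxels, starting from a single seed.
shape = {(x, y) for x in range(3) for y in range(2)}
print(assemble(shape) == shape)  # True: every target cell receives a voxel
```

The key property mirrored from the text is locality: each new voxel attaches adjacent to the structure built so far, so large constructions grow from small machines rather than requiring a larger builder.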

The researchers note in an associated paper published in Nature, “Our approach challenges the convention that larger constructions need larger machines to build them, and could be applied in areas that today either require substantial capital investments for fixed infrastructure or are altogether unfeasible.”

MIT researchers have devised an algorithm that uses voxel-based robotic devices to build anything from houses to planes to cars, and even other robots, relying on a grid system that transfers knowledge to determine what to build, when to build it, and when to build additional robot builders. A new Google DeepMind video-game AI develops agents that can talk, listen, ask questions, navigate, search and retrieve information, control objects, and perform a range of other intelligent tasks in real time. A new non-invasive brain-computer interface transmits information through the optic nerve, positioning it as a competitor to Neuralink’s BCI.

Tech News Timestamps:
0:00 Robotics Breakthrough Builds Anything — Even Robots
2:44 New Google DeepMind Video Game AI
5:25 New Neuralink BCI Competitor

#robot #ai #neuralink

The combined force of these disruptive technologies (AI and 5G) enables fast, secure, and ubiquitous connectivity of cost-efficient smart networks and IoT (Internet-of-Things) devices. This convergence point is essential to concepts like intelligent wireless edge.

5G and AI, the connected digital edge

Artificial intelligence and 5G are two of the most critical elements empowering futuristic innovations, and these cutting-edge technologies are inherently synergistic. Rapid advancements in AI significantly improve the performance and efficiency of the entire 5G ecosystem, while the proliferation of 5G-connected devices drives new improvements in AI-based learning and inference. Moreover, the transformation toward a connected, intelligent edge has begun as on-device intelligence gains phenomenal traction, and this transformation is critical to realizing the full potential of 5G. Together, these technologies hold enough potential to transform every industry. Here’s how the combination of AI and 5G has been reshaping industries.

If we can analyze the organization of neural circuits, it will play a crucial role in better understanding the process of thinking. This is where maps come into play. Maps of the nervous system contain information about the identity of individual cells, such as their type and subcellular components, and about the connectivity of the neurons.
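A minimal sketch of how such a cell-identity and connectivity map might be represented in code. The structure, field names, and values here are invented for illustration and do not come from any real connectomics tool:

```python
# Illustrative only: a neural-circuit map as an annotated directed graph
# of cells (with type and subcellular annotations) and synaptic edges.
cells = {
    "n1": {"type": "pyramidal", "compartments": ["soma", "axon", "dendrite"]},
    "n2": {"type": "interneuron", "compartments": ["soma", "axon"]},
}
# Each edge: (presynaptic cell, postsynaptic cell, synapse count).
synapses = [("n1", "n2", 3)]

def out_degree(cell_id):
    """Number of distinct cells this cell connects onto."""
    return sum(1 for pre, _, _ in synapses if pre == cell_id)

print(out_degree("n1"))  # 1
```

Even this toy version shows why the maps are valuable: identity (type, compartments) and connectivity live in one queryable structure.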

But how do we obtain these maps?

Volumetric nanometer-resolution imaging of brain tissue is a technique that provides the raw data needed to build these maps. But inferring all the relevant information is a laborious and challenging task because of the multiple scales of brain structures (e.g., nm for a synapse vs. mm for an axon). It requires hours of manual ground truth labeling by expert annotators.

On September 14, 1956, IBM announced the 305 and 650 RAMAC (Random Access Method of Accounting and Control) “data processing machines,” incorporating the first-ever disk storage product. The 305 came with fifty 24-inch disks for a total capacity of 5 megabytes, weighed 1 ton, and could be leased for $3,200 per month.
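A quick back-of-the-envelope calculation from the figures above puts the RAMAC’s numbers in perspective:

```python
# Figures taken from the text: 5 MB across 50 disks, leased at $3,200/month.
total_mb = 5
disks = 50
lease_per_month = 3200

per_disk_kb = total_mb * 1000 / disks           # capacity per 24-inch platter
cost_per_mb_month = lease_per_month / total_mb  # monthly lease cost per megabyte

print(per_disk_kb)        # 100.0 (KB per disk)
print(cost_per_mb_month)  # 640.0 (dollars per MB per month)
```

Roughly 100 KB per two-foot platter, at $640 per megabyte per month — a useful yardstick for the data explosion the following paragraphs describe.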

In 1953, Arthur J. Critchlow, a young member of IBM’s advanced technologies research lab in San Jose, California, was assigned the task of finding a better data storage medium than punch-cards.


The information explosion (a term first used in 1941, according to the Oxford English Dictionary) has turned into the big digital data explosion. And the data explosion enabled deep learning, an advanced data analysis method, to perform today’s AI breakthroughs in image identification and natural language processing.

The RAMAC became obsolete within a few years of its introduction as the vacuum tubes powering it were replaced by transistors. Today, disk drives still serve as the primary containers for digital data, but solid-state drives (flash memory), first used in mobile devices, are fast replacing disk drives even in today’s successors of the RAMAC, supporting large-scale business operations.

Whatever form the storage takes, in 1956 IBM created new markets and businesses based on fast access to digital data. As Seagate’s Mark Kryder asserted in 2006: “Instead of Silicon Valley, they should call it Ferrous Oxide Valley. It wasn’t the microprocessor that enabled the personal video recorder, it was storage. It’s enabling new industries.”

Just as the car created jobs for drivers and the computer created jobs for data entry operators, robots will also create new types of high-paying jobs.


For decades, the arrival of robots in the workplace has been a source of public anxiety over fears that they will replace workers and create unemployment.

Now that more sophisticated and humanoid robots are actually emerging, the picture is changing, with some seeing robots as promising teammates rather than unwelcome competitors.

‘Cobot’ colleagues

Take Italian industrial-automation company Comau. It has developed a robot that can collaborate with – and enhance the safety of – workers in strict cleanroom settings in the pharmaceutical, cosmetics, electronics, food and beverage industries. The innovation is known as a “collaborative robot”, or “cobot”.

Vitaly Vanchurin, physicist and cosmologist at the University of Minnesota Duluth, speaks to Luis Razo Bravo of EISM about the world as a neural network, machine learning, theories of everything, interpretations of quantum mechanics, and long-term human survival.

Timestamp of the conversation:

00:00 — Opening quote by Vanchurin.
00:53 — Introduction to Vanchurin.
03:17 — Vanchurin’s thoughts about human extinction.
05:56 — Brief background on Vanchurin’s research interests.
10:24 — How Vanchurin became interested in neural networks.
12:31 — How quantum mechanics can be used to understand neural networks.
18:56 — How and where does gravity fit into Vanchurin’s model?
20:39 — Does Vanchurin incorporate holography (AdS/CFT) into his model?
24:14 — Maybe the entirety of physics is an “emergent” neural network.
28:08 — Maybe there are forms of life that are more fit to survive than humans.
28:58 — Maldacena’s “principle of Maximal life.”
29:28 — Theories of Everything.
31:06 — Why Vanchurin’s framework is potentially a true TOE (politics, ethics, etc.)
34:07 — Why physicists don’t like to talk to philosophers and ask big questions.
36:45 — Why the growing number of theories of everything?
39:11 — Apart from his own, does Vanchurin have a favorite TOE?
41:26 — Bohmian mechanics and Aharonov’s two-time approach to quantum mechanics.
43:53 — How has Vanchurin’s recent paper been received? Beliefs about peer review.
46:03 — Connecting Vanchurin’s work to machine learning and recommendations.
49:21 — Leonard Susskind, quantum information theory, and complexity.
51:23 — Maybe various proposals are looking at the same thing from different angles.
52:17 — How to follow Vanchurin’s work and connect to him.

Vanchurin’s paper on the world as a NN: https://arxiv.org/abs/2008.01540
Vanchurin on a theory of machine learning: https://arxiv.org/abs/2004.

Vanchurin’s website and research interests: https://www.d.umn.edu/cosmology/

Learn more about EISM at www.eism.eu.

Image and video editing are two of the most popular applications for computer users. With the advent of Machine Learning (ML) and Deep Learning (DL), image and video editing have been progressively studied through several neural network architectures. Until very recently, most DL models for image and video editing were supervised and, more specifically, required the training data to contain pairs of input and output data to be used for learning the details of the desired transformation. Lately, end-to-end learning frameworks have been proposed, which require as input only a single image to learn the mapping to the desired edited output.

Video matting is a specific task within video editing. The term “matting” dates back to the 19th century, when glass plates covered in matte paint were set in front of a camera during filming to create the illusion of an environment that was not present at the filming location. Nowadays, the composition of multiple digital images follows a similar procedure: a compositing formula shades the intensity of the foreground and background of each image, expressing each pixel as a linear combination of the two components.
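The linear combination mentioned above is standard alpha compositing: each output value is the foreground weighted by the matte value alpha, plus the background weighted by its complement. A minimal sketch (per-pixel intensities, no image library assumed):

```python
# Standard alpha-compositing formula:
#   composite = alpha * foreground + (1 - alpha) * background,
# where alpha in [0, 1] is the per-pixel matte value.
def composite(fg, bg, alpha):
    """Blend a foreground intensity over a background intensity."""
    return alpha * fg + (1.0 - alpha) * bg

# An opaque matte returns the foreground; a transparent one, the background.
print(composite(1.0, 0.0, 1.0))  # 1.0
print(composite(1.0, 0.0, 0.0))  # 0.0
print(composite(1.0, 0.0, 0.5))  # 0.5
```

In practice the same formula is applied per channel across whole frames; the matting problem is precisely the inverse task of recovering alpha (and the layers) from the composite.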

Although very powerful, this process has some limitations. It requires an unambiguous factorization of the image into foreground and background layers, which are then assumed to be treatable independently. In situations like video matting, where a sequence of temporally and spatially dependent frames must be handled, this layer decomposition becomes a complex task.