
The combined force of these disruptive technologies (AI and 5G) enables fast, secure, and ubiquitous connectivity of cost-efficient smart networks and IoT (Internet-of-Things) devices. This convergence is essential to concepts like the intelligent wireless edge.

5G and AI, the connected digital edge

Artificial intelligence and 5G are two of the most critical elements empowering futuristic innovations, and the two technologies are inherently synergistic. Rapid advances in AI significantly improve the performance and efficiency of the entire 5G ecosystem, while the proliferation of 5G-connected devices drives unparalleled intelligence and new improvements in AI-based learning and inference. Moreover, the transformation of the connected, intelligent edge has commenced as on-device intelligence has gained phenomenal traction; this transformation is critical to realizing the full potential of 5G. Together, these technologies hold the potential to transform every industry. Here’s how the combination of AI and 5G has been reshaping industries.

Analyzing the organization of neural circuits will play a crucial role in better understanding the process of thinking. This is where maps come into play. Maps of the nervous system contain information about the identity of individual cells, such as their type, their subcellular components, and their connectivity to other neurons.

But how do we obtain these maps?

Volumetric nanometer-resolution imaging of brain tissue is a technique that provides the raw data needed to build these maps. But inferring all the relevant information is a laborious and challenging task because of the multiple scales of brain structures (e.g., nm for a synapse vs. mm for an axon). It requires hours of manual ground truth labeling by expert annotators.

On September 14, 1956, IBM announced the 305 and 650 RAMAC (Random Access Memory Accounting) “data processing machines,” incorporating the first-ever disk storage product. The 305 came with fifty 24-inch disks for a total capacity of 5 megabytes, weighed 1 ton, and could be leased for $3,200 per month.

In 1953, Arthur J. Critchlow, a young member of IBM’s advanced technologies research lab in San Jose, California, was assigned the task of finding a better data storage medium than punch-cards.


The information explosion (a term first used in 1941, according to the Oxford English Dictionary) has turned into the big digital data explosion. And the data explosion enabled deep learning, an advanced data analysis method, to achieve today’s AI breakthroughs in image identification and natural language processing.

The RAMAC became obsolete within a few years of its introduction as the vacuum tubes powering it were replaced by transistors. Today, disk drives still serve as the primary containers for digital data, but solid-state drives (flash memory), first used in mobile devices, are fast replacing disk drives even in today’s successors of the RAMAC, supporting large-scale business operations.

Whatever form storage takes, in 1956 IBM created new markets and businesses based on fast access to digital data. As Seagate’s Mark Kryder asserted in 2006: “Instead of Silicon Valley, they should call it Ferrous Oxide Valley. It wasn’t the microprocessor that enabled the personal video recorder, it was storage. It’s enabling new industries.”

Just as the car created jobs for drivers and the computer created jobs for data entry operators, robots will also create new types of high-paying jobs.


For decades, the arrival of robots in the workplace has been a source of public anxiety over fears that they will replace workers and create unemployment.

Now that more sophisticated and humanoid robots are actually emerging, the picture is changing, with some seeing robots as promising teammates rather than unwelcome competitors.

‘Cobot’ colleagues

Take Italian industrial-automation company Comau. It has developed a robot that can collaborate with – and enhance the safety of – workers in strict cleanroom settings in the pharmaceutical, cosmetics, electronics, food and beverage industries. The innovation is known as a “collaborative robot”, or “cobot”.

Vitaly Vanchurin, physicist and cosmologist at the University of Minnesota Duluth, speaks to Luis Razo Bravo of EISM about the world as a neural network, machine learning, theories of everything, interpretations of quantum mechanics, and long-term human survival.

Timestamps of the conversation:

00:00 — Opening quote by Vanchurin.
00:53 — Introduction to Vanchurin.
03:17 — Vanchurin’s thoughts about human extinction.
05:56 — Brief background on Vanchurin’s research interests.
10:24 — How Vanchurin became interested in neural networks.
12:31 — How quantum mechanics can be used to understand neural networks.
18:56 — How and where does gravity fit into Vanchurin’s model?
20:39 — Does Vanchurin incorporate holography (AdS/CFT) into his model?
24:14 — Maybe the entirety of physics is an “emergent” neural network.
28:08 — Maybe there are forms of life that are more fit to survive than humans.
28:58 — Maldacena’s “principle of maximal life.”
29:28 — Theories of Everything.
31:06 — Why Vanchurin’s framework is potentially a true TOE (politics, ethics, etc.)
34:07 — Why physicists don’t like to talk to philosophers and ask big questions.
36:45 — Why the growing number of theories of everything?
39:11 — Apart from his own, does Vanchurin have a favorite TOE?
41:26 — Bohmian mechanics and Aharonov’s two-time approach to quantum mechanics.
43:53 — How has Vanchurin’s recent paper been received? Beliefs about peer review.
46:03 — Connecting Vanchurin’s work to machine learning and recommendations.
49:21 — Leonard Susskind, quantum information theory, and complexity.
51:23 — Maybe various proposals are looking at the same thing from different angles.
52:17 — How to follow Vanchurin’s work and connect to him.

Vanchurin’s paper on the world as a NN: https://arxiv.org/abs/2008.01540
Vanchurin on a theory of machine learning: https://arxiv.org/abs/2004.

Vanchurin’s website and research interests: https://www.d.umn.edu/cosmology/

Learn more about EISM at www.eism.eu.

Image and video editing are two of the most popular applications for computer users. With the advent of Machine Learning (ML) and Deep Learning (DL), image and video editing have been progressively studied through several neural network architectures. Until very recently, most DL models for image and video editing were supervised and, more specifically, required the training data to contain pairs of input and output data to be used for learning the details of the desired transformation. Lately, end-to-end learning frameworks have been proposed, which require as input only a single image to learn the mapping to the desired edited output.
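As a rough illustration of that paired setup, here is a minimal PyTorch sketch (not taken from any specific paper) of one supervised training step, where each (source, edited target) pair defines the transformation to be learned; the tiny convolutional model and L1 loss are placeholder choices:

```python
import torch
import torch.nn as nn

# Placeholder editing network; any image-to-image model could stand in here.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(source, edited_target):
    """One supervised step: the (input, output) pair teaches the desired edit."""
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(source), edited_target)
    loss.backward()
    opt.step()
    return loss.item()

# Random tensors as stand-ins for one training pair of 64x64 RGB images.
src = torch.rand(1, 3, 64, 64)
tgt = torch.rand(1, 3, 64, 64)
print(train_step(src, tgt))
```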

Video matting is a specific task within video editing. The term “matting” dates back to the 19th century, when glass plates painted with matte paint were set in front of the camera during filming to create the illusion of an environment that was not present at the filming location. Nowadays, the composition of multiple digital images follows a similar procedure: a compositing formula blends the intensity of the foreground and background of each image, expressed as a linear combination of the two components.
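This compositing formula is the standard alpha-matting equation: each output pixel is C = αF + (1 − α)B, where the matte α weights the foreground F against the background B. A minimal NumPy sketch, assuming images and matte are normalized to [0, 1]:

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background with a per-pixel alpha matte.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1].
    alpha: float array of shape (H, W, 1) in [0, 1]; 1.0 = pure foreground.
    """
    return alpha * foreground + (1.0 - alpha) * background

# Toy example: white foreground over black background with a 50% matte.
fg = np.ones((2, 2, 3))
bg = np.zeros((2, 2, 3))
alpha = np.full((2, 2, 1), 0.5)
print(composite(fg, bg, alpha))  # mid-gray everywhere
```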

Although powerful, this process has limitations. It requires an unambiguous factorization of the image into foreground and background layers, which are then assumed to be independently treatable. In situations like video matting, where the input is a sequence of temporally and spatially dependent frames, the layer decomposition becomes a complex task.

The methods currently used to correct systematic issues in NLP models are either fragile or time-consuming and prone to shortcuts. Humans, on the other hand, frequently correct one another using natural language. This inspired recent research on natural language patches: declarative statements that enable developers to deliver corrective feedback at the appropriate level of abstraction, either by modifying the model’s behavior or by adding information the model may be missing.

Instead of relying solely on labeled examples, there is a growing body of research on using language to provide instructions, supervision, and even inductive biases to models, such as building neural representations from language descriptions (Andreas et al., 2018; Murty et al., 2020; Mu et al., 2020), or language-based zero-shot learning (Brown et al., 2020; Hanjie et al., 2022; Chen et al., 2021). For corrective purposes, when the user interacts with an existing model to enhance it, language has yet to be properly utilized.

The neural language patching model has two heads: a gating head that determines whether a patch should be applied, and an interpreter head that predicts results based on the information in the patch. The model is trained in two steps: first on a labeled dataset, and then through task-specific fine-tuning, in which a set of patch templates is used to create patches and synthetic labeled examples.
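As a rough sketch of that two-headed design (the class and variable names here are hypothetical, and the paper’s actual encoder, heads, and training procedure differ in detail), the gating head can softly blend a patch-conditioned prediction with the base model’s prediction:

```python
import torch
import torch.nn as nn

class PatchingHeads(nn.Module):
    """Gating head + interpreter head over pre-computed text embeddings."""
    def __init__(self, hidden_dim, num_labels):
        super().__init__()
        self.gate = nn.Linear(hidden_dim * 2, 1)                  # does the patch apply here?
        self.interpreter = nn.Linear(hidden_dim * 2, num_labels)  # predict using patch info
        self.base = nn.Linear(hidden_dim, num_labels)             # unpatched prediction

    def forward(self, x_emb, patch_emb):
        pair = torch.cat([x_emb, patch_emb], dim=-1)
        g = torch.sigmoid(self.gate(pair))                 # gating probability
        patched = self.interpreter(pair)                   # interpreter head output
        return g * patched + (1.0 - g) * self.base(x_emb)  # soft blend of the two heads

# Usage with random embeddings standing in for a real text encoder's output:
heads = PatchingHeads(hidden_dim=768, num_labels=2)
x = torch.randn(4, 768)  # batch of input embeddings
p = torch.randn(4, 768)  # embedding of a natural language patch
logits = heads(x, p)     # shape (4, 2)
```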

This 2023 video is about boosting your brain 150% with an AI chip from Elon Musk.

Remember the movie Limitless? Now you can do it with a computer chip.

How often have you thought of eating all your notes so you could memorize them? Or how many times have you wished you could read other people’s minds or take a photo with your eyes? Thanks to Elon Musk, some of these childhood wishes might come to life.

For this, you will only need a small brain chip!

What is this AI chip?

It can be described as a brain-computer interface, or BCI. BCIs can create both internal and external brain connections: they are able to read brain activity, convert it into information, and then relay that information back to the brain or to the outside world.

Researchers at the Electronics and Telecommunications Research Institute (ETRI) in Korea have recently developed a deep learning-based model that could help to produce engaging nonverbal social behaviors, such as hugging or shaking someone’s hand, in robots. Their model, presented in a paper pre-published on arXiv, can actively learn new context-appropriate social behaviors by observing interactions among humans.

“Deep learning techniques have produced interesting results in areas such as computer vision and natural language processing,” Woo-Ri Ko, one of the researchers who carried out the study, told TechXplore. “We set out to apply deep learning to social robotics, specifically by allowing robots to learn from human-human interactions on their own. Our method requires no prior knowledge of human behavior models, which are usually costly and time-consuming to implement.”

The artificial neural network (ANN)-based architecture developed by Ko and his colleagues combines the Seq2Seq (sequence-to-sequence) model introduced by Google researchers in 2014 with generative adversarial networks (GANs). The new architecture was trained on the AIR-Act2Act dataset, a collection of 5,000 human-human interactions occurring in 10 different scenarios.
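As a loose illustration of how a Seq2Seq generator and a GAN-style discriminator might fit together for behavior generation (the module names, GRU choice, and dimensions below are illustrative assumptions, not the paper’s exact architecture):

```python
import torch
import torch.nn as nn

class Seq2SeqGenerator(nn.Module):
    """GRU encoder-decoder: maps an observed pose sequence to a response sequence."""
    def __init__(self, pose_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, observed, response_len):
        _, h = self.encoder(observed)  # summarize the partner's motion
        step = observed[:, -1:, :]     # seed decoding with the last observed pose
        frames = []
        for _ in range(response_len):
            y, h = self.decoder(step, h)
            step = self.out(y)         # next predicted pose frame
            frames.append(step)
        return torch.cat(frames, dim=1)

class Discriminator(nn.Module):
    """Judges whether a behavior sequence looks like real human motion."""
    def __init__(self, pose_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.cls = nn.Linear(hidden_dim, 1)

    def forward(self, seq):
        _, h = self.rnn(seq)
        return torch.sigmoid(self.cls(h[-1]))  # probability the sequence is "real"

gen = Seq2SeqGenerator(pose_dim=30, hidden_dim=128)
disc = Discriminator(pose_dim=30, hidden_dim=128)
obs = torch.randn(2, 20, 30)      # 2 clips, 20 frames of 30-D poses
fake = gen(obs, response_len=20)  # generated response behavior
print(disc(fake).shape)           # torch.Size([2, 1])
```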


When it comes to artificial intelligence (AI), the past year has been aspirational, but ultimately unsuccessful, says Athina Kanioura, who was named PepsiCo’s first chief strategy and transformation officer in September 2020. But she is optimistic about 2023.

“Think of how we started with the metaverse and the use of AI, suddenly it crumbled into pieces,” she told VentureBeat. “In AI, we tend to see what doesn’t work the first time, then we lose hope — but I think 2023 should be a year of hope and focus for AI.”