
She has also published two children’s books for geeky kids, “The Internet of Mysterious Things” and “A Robot Story.”

VentureBeat: First off, how would you define digital twins, and why is it essential to think about them as distinct from other tools for organizing data, like APIs, data fabrics, data warehouses, and enterprise software tools?

Lisa Seacat DeLuca: We define digital twins broadly as a digital representation of any physical object. You might picture certain use cases like manufacturing equipment or a generator, but really, anything can be a digital twin if it has a digital counterpart, which opens the door for a number of possibilities of what we can do with them.
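The definition above — a software object that mirrors the state of a physical counterpart — can be made concrete with a minimal sketch. This is a hypothetical illustration, not any vendor's actual digital twin API: the `DigitalTwin` class and its fields are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DigitalTwin:
    """A minimal digital twin: mirrors the last known state of a physical asset."""
    device_id: str
    state: dict = field(default_factory=dict)
    last_update: Optional[datetime] = None

    def apply_telemetry(self, reading: dict) -> None:
        """Merge a sensor reading into the twin's mirrored state."""
        self.state.update(reading)
        self.last_update = datetime.now(timezone.utc)

# Usage: mirror a generator's sensors, then query the twin
# instead of polling the physical asset.
twin = DigitalTwin("generator-42")
twin.apply_telemetry({"rpm": 1800, "temp_c": 74.5})
twin.apply_telemetry({"temp_c": 75.1})
print(twin.state)  # {'rpm': 1800, 'temp_c': 75.1}
```

The point of the pattern is that analytics, simulation, or monitoring code reads the twin's `state`, decoupling those consumers from the device itself.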

Artificial neural networks modeled on real brains can perform cognitive tasks.

A new study shows that artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.

By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern, and applied it to an artificial neural network (ANN). An ANN is a computing system consisting of multiple input and output units, much like the biological brain. A team of researchers from The Neuro (Montreal Neurological Institute-Hospital) and the Quebec Artificial Intelligence Institute trained the ANN to perform a cognitive memory task and observed how it worked to complete the assignment.

Artificial intelligence (AI) will fundamentally change medicine and healthcare: Diagnostic patient data, e.g. from ECG, EEG, or X-ray images, can be analyzed with the help of machine learning, so that diseases can be detected at a very early stage based on subtle changes. However, implanting AI within the human body is still a major technical challenge. TU Dresden scientists at the Chair of Optoelectronics have now succeeded for the first time in developing a bio-compatible implantable AI platform that classifies healthy and pathological patterns in biological signals, such as heartbeats, in real time. It detects pathological changes even without medical supervision. The research results have now been published in the journal Science Advances.

In this work, the research team led by Prof. Karl Leo, Dr. Hans Kleemann, and Matteo Cucchi demonstrates an approach for real-time classification of healthy and diseased bio-signals based on a biocompatible AI chip. They used polymer-based fiber networks that structurally resemble the human brain and enable the neuromorphic AI principle of reservoir computing. The random arrangement of polymer fibers forms a so-called "recurrent network," which allows it to process data analogously to the human brain. The nonlinearity of these networks makes it possible to amplify even the smallest signal changes, which, in the case of the heartbeat, for example, are often difficult for doctors to evaluate.
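The reservoir-computing principle described above can be sketched in software. This is not the TU Dresden team's hardware: it is an illustrative echo-state-style sketch in NumPy, with toy "regular vs. irregular" signals standing in for bio-signals. The key idea it demonstrates is that the recurrent network stays fixed and random, and only a simple linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "reservoir": a fixed, randomly connected recurrent network
# (analogous to the random arrangement of polymer fibers).
n_inputs, n_reservoir = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius < 1 for stability

def run_reservoir(signal):
    """Drive the reservoir with a 1-D signal; return its final state."""
    x = np.zeros(n_reservoir)
    for u in signal:
        x = np.tanh(W_in @ [u] + W @ x)  # nonlinear recurrent update
    return x

# Toy stand-ins for bio-signals: a regular rhythm vs. an irregular one.
t = np.linspace(0, 8 * np.pi, 200)
regular = [np.sin(t + rng.normal(0, 0.05)) for _ in range(20)]
irregular = [np.sin(t * rng.uniform(0.7, 1.3)) + rng.normal(0, 0.3, t.size)
             for _ in range(20)]

# Only the linear readout is trained (least squares on reservoir states);
# the recurrent weights W and W_in are never touched.
states = np.array([run_reservoir(s) for s in regular + irregular])
labels = np.array([0] * 20 + [1] * 20)
readout, *_ = np.linalg.lstsq(states, labels, rcond=None)

preds = (states @ readout > 0.5).astype(int)
print("training accuracy:", (preds == labels).mean())
```

Training only the readout is what makes the scheme attractive for hardware: the physical network's random dynamics do the nonlinear feature extraction for free.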

In trials, the AI was able to distinguish healthy heartbeats from three common arrhythmias with 88% accuracy. In the process, the polymer network consumed less energy than a pacemaker. The potential applications for implantable AI systems are manifold: For example, they could be used to monitor cardiac arrhythmias or complications after surgery and report them to both doctors and patients via smartphone, allowing for swift medical assistance.

BeingAI is creating virtual beings with artificial intelligence. And its first AI being is a virtual character named Zbee.

Zbee can exist on different platforms and so can interact with people anytime, anywhere, to bring humanness and gamification into digital experiences. Zbee will come with an engaging personality and personal stories, offering friendship, entertainment, and mentorship, much like the AI in the movie Her.

The Hong Kong-based BeingAI is the brainchild of founders Jeanne Lim (CEO), Lee Chapman (president), and Amit Kumar Pandey (chief technology officer). Zbee can autonomously interact with people in real time across devices and media platforms, as part of an engaging narrative experience. Most importantly, Zbee has human-defined values that steer toward positive behavior.

China’s goal of beating the USA in the race to create the best and smartest artificial intelligence in the world has finally come to fruition with the Wu Dao 2.0 AI model. This new NLP AI is claimed to be far superior to OpenAI’s GPT-3 model, which was released last year. Among Wu Dao’s abilities: speaking multiple languages (Chinese and English), learning new things, writing poems, doing medical research, and creating art.

It’s unlikely that the USA will take this lying down and forfeit the AI race. They’ll likely answer with even bigger AI models very soon and then the race to Artificial Intelligence supremacy will continue with the rate of innovation increasing exponentially.

Every day is a day closer to the Technological Singularity. Experience Robots learning to walk & think, humans flying to Mars and us finally merging with technology itself. And as all of that happens, we at AI News cover the absolute cutting edge best technology inventions of Humanity.

If you enjoyed this video, please consider rating it and subscribing to our channel for more frequent uploads. Thank you!

TIMESTAMPS:
00:00 A new player in the field of AI
02:01 What is an AI Language Model?
04:30 What can these AIs actually do?
07:36 Last Words.

#ai #openai #wudao

Baidu’s autonomous driving unit, Apollo, has developed a new vehicle capable of Level 5 vehicle autonomy, meaning the car requires no human intervention during operation. Notably, it has no steering wheel, gas pedal, or brake pedal, signifying that drivers are completely unnecessary.

The “robocar,” as Baidu founder and CEO Robin Li called it, was showcased during a livestream event on Wednesday. It is equipped with two passenger seats, a large curved screen, an intelligent console, and electrochromic glass with varying tints based on natural brightness. This follows Apollo’s showcase of its Moon model in June.

During the event, Apollo indicated that the new vehicle will incorporate machine learning to analyze passengers’ needs and respond to verbal commands. In some scenarios, the system may even anticipate demands made by people in the vehicle.



At Tesla’s AI Day event, Elon Musk unveiled the Tesla Bot — a humanoid robot that uses much of the tech found in Tesla’s car to perform such tasks as getting groceries or attaching a bolt to a car with a wrench. Oh, and a prototype is set to be ready next year.

The Tesla Bot will stand at 5’8” and will weigh approximately 125 pounds. Fortunately, for those who fear a possible robot uprising, the team at Tesla is building the Tesla Bot in a way that “you can run away from it… and most likely overpower it.”

The crew of Shenzhou-12 has conducted the second spacewalk of the mission, and the second spacewalk of the new Chinese Space Station’s lifetime. The extravehicular activity (EVA) comes two months into their planned 90-day mission in low Earth orbit.

Mission commander Nie Haisheng and first operator Liu Boming exited the Tianhe core module at 00:38 UTC on Friday, August 20. The goals of the EVA included the installation of a new panoramic camera (known as Panoramic Camera D) as well as a backup thermal control pump. Second operator Tang Hongbo stayed inside the station to support the two spacewalkers, similar to how crew onboard the International Space Station support American and Russian spacewalks.

Nie and Liu exited the depressurized docking node of the Tianhe module, which is being used as an EVA airlock until the Wentian lab module, equipped with its own airlock for crewmembers, arrives in the spring of 2022. Panoramic Camera D was successfully installed, and the station was prepared for future EVAs and module installations. To that end, the taikonauts finished installing additional foot restraints onto the station as well as a work platform on the station’s robotic arm.