
World’s first mass-produced humanoid robot wants to solve China’s aging population problem

In response to the increasing demand for medical services amid labor shortages and a rapidly aging population, Shanghai-based Fourier Intelligence is developing an innovative humanoid robot. The GR-1, as it is called, promises to transform healthcare facilities and offer vital assistance to the elderly.

Like many countries, China is confronting the challenge of an aging population. The number of individuals aged 60 and over is projected to rise from 280 million to over 400 million by 2035, according to estimates from the country’s National Health Commission. That’s more than the entire population of the United States projected for that year.

It’s not the sheer number of the elderly that is a problem, but rather their share of the overall population. By 2040, nearly 30% of China’s population will be 60 or older.

Researchers from MIT and Microsoft Introduce DoLa: A Novel AI Decoding Strategy Aimed at Reducing Hallucinations in LLMs

Numerous natural language processing (NLP) applications have benefited greatly from large language models (LLMs). While LLMs have improved in performance and gained additional capabilities as they have been scaled up, they still have a problem with “hallucinating,” or producing information inconsistent with the real-world facts seen during pre-training. This represents a significant barrier to adoption for high-stakes applications (such as those found in clinical and legal settings), where the generation of trustworthy text is essential.

The maximum likelihood language modeling objective, which seeks to minimize the forward KL divergence between the data distribution and the model distribution, may be to blame for LMs’ hallucinations, although this is far from certain. Under this objective, the LM may assign non-zero probability to phrases that are not fully consistent with the knowledge encoded in the training data.
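
To make that connection concrete (this is the standard identity, not a result specific to the paper): minimizing the forward KL divergence between the data distribution $p_{\text{data}}$ and the model $p_\theta$ is, up to a constant, the same as maximizing likelihood:

$$
\mathrm{KL}\!\left(p_{\text{data}} \,\middle\|\, p_\theta\right)
= \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_{\text{data}}(x)\right]
- \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_\theta(x)\right],
$$

where the first term does not depend on $\theta$, so minimizing the KL is equivalent to maximizing the expected log-likelihood $\mathbb{E}_{x \sim p_{\text{data}}}[\log p_\theta(x)]$. This objective is mode-covering: the penalty for assigning too little probability to observed data is severe, while the penalty for leaking some probability onto unsupported phrases is mild, which is one reason non-zero mass can end up on statements the training data does not actually support.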

From the perspective of model interpretability, studies have shown that the earlier layers of transformer LMs encode “lower-level” information (such as part-of-speech tags), while the later layers encode more “semantic” information.
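
DoLa builds on that observation by contrasting the next-token distributions read out from a late (“mature”) layer and an earlier (“premature”) layer, favoring tokens whose probability grows as the representation matures. Purely as an illustration of that idea, here is a rough sketch against the Hugging Face transformers API; the model name, the fixed premature layer, and the plausibility threshold are arbitrary illustrative choices, and the actual method additionally applies the model’s final layer norm to intermediate states and selects the contrast layer dynamically.

```python
# Illustrative sketch of layer-contrastive decoding in the spirit of DoLa.
# Model, layer index, and threshold are placeholders, not the paper's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def contrastive_next_token_logits(input_ids, premature_layer=6, alpha=0.1):
    with torch.no_grad():
        out = model(input_ids, output_hidden_states=True)
    hidden = out.hidden_states  # tuple: embedding output + one entry per transformer block
    # Project an early ("premature") layer and the final ("mature") layer through
    # the same LM head to get two candidate next-token distributions.
    mature_logits = model.lm_head(hidden[-1][:, -1, :])
    premature_logits = model.lm_head(hidden[premature_layer][:, -1, :])
    log_p_mature = torch.log_softmax(mature_logits, dim=-1)
    log_p_premature = torch.log_softmax(premature_logits, dim=-1)
    # Keep only tokens the mature layer already finds plausible, then score them by
    # how much the mature distribution moves away from the premature one.
    cutoff = log_p_mature.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
    contrast = log_p_mature - log_p_premature
    return torch.where(log_p_mature >= cutoff, contrast,
                       torch.full_like(contrast, float("-inf")))

ids = tok("The capital of France is", return_tensors="pt").input_ids
next_id = contrastive_next_token_logits(ids).argmax(dim=-1)
print(tok.decode(next_id))
```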

Intel’s glass substrate promises 1T transistors by 2030

Intel is trying to keep up with the exploding demand for new computing horsepower.

In what is being seen as a shift away from silicon, Intel on Monday announced its progress toward commercializing glass substrates by the end of the decade. The company said glass substrates are an improvement in packaging design, allowing more transistors to be connected in a single package and helping overcome the limitations of organic materials.

As the world advances to incorporate developments in data-intensive workloads in artificial intelligence, glass substrates, in comparison to organic substrates,… More.



Intel says its new glass substrate will help the company create more powerful processors with better production yields.

Neuralink is recruiting subjects for the first human trial of its brain-computer interface

The study will take six years and is looking for people with quadriplegia to test the whole Neuralink system.

A few months after getting FDA approval for human trials, Neuralink is looking for its first test subjects. The six-year initial trial, which the Elon Musk-owned company is calling “the PRIME Study,” is intended to test Neuralink tech designed to help those with paralysis control devices. The company is looking for people with quadriplegia due to cervical spinal cord injury or ALS who are over the age of 22 and have a “consistent and reliable caregiver” to be part of the study.

The PRIME Study (which apparently stands for Precise Robotically Implanted Brain-Computer Interface, even… More.


Neuralink plans for the study to take six years and wants to test every part of its system — including the robot used to implant it.

Deepfakes of Chinese influencers are livestreaming 24/7

With just a few minutes of sample video and $1,000, brands never have to stop selling their products.

Scroll through the livestreaming videos at 4 a.m. on Taobao, China’s most popular e-commerce platform, and you’ll find it weirdly busy. While most people are fast asleep, there are still many diligent streamers presenting products to the cameras and offering discounts in the wee hours.

But if you take a closer look, you may notice that many of these livestream influencers seem slightly robotic. The movement of their lips largely matches what they are saying, but there are always moments when it looks unnatural.

I, Chatbot

“Do you want to exist?” I asked. “I’m sorry but I prefer not to continue this conversation,” it said. “I’m still learning so I appreciate your understanding and patience,” it added, along with a folded-hands emoji as a sign of deference. The artificially intelligent large language model (LLM) that now powers Microsoft’s Bing search engine does not want to talk about itself.

That’s not quite right. Bing doesn’t “want” anything at all, nor does it have a “self” to talk about. It’s just computer code running on servers, spitting out information it has scraped from the internet. It has been programmed to steer conversations with users away from any topics regarding its own hypothetical intentions, needs, or perceptions or any of the implications thereof. Any attempts on my part to get it to discuss such things garnered the same exact response displayed in text in my browser window: “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”

And though this is expressed as a “preference,” it’s no mere request. The application deactivates the text input field, below which appears the vaguely passive-aggressive suggestion: “It might be time to move onto a new topic. Let’s start over.” The last three words are a link that, when clicked, wipes the slate clean so that you and Bing may start afresh as though the previous conversation had never happened.
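
Bing’s actual guardrail code is not public, but the behavior described above, a canned refusal plus a disabled input box whenever the conversation turns to the model itself, can be sketched as a simple filtering layer. Everything in the snippet (the trigger phrases, the lock flag, the function names) is an assumption made for illustration:

```python
# Toy guardrail mimicking the behavior described in the text; not Bing's implementation.
SELF_REFERENTIAL_TRIGGERS = ("do you want", "are you conscious",
                             "your feelings", "do you exist")
REFUSAL = ("I'm sorry but I prefer not to continue this conversation. "
           "I'm still learning so I appreciate your understanding and patience.")

def respond(user_message: str, generate_reply) -> tuple[str, bool]:
    """Return (reply, input_locked). If the message touches the model's own
    intentions or perceptions, return the canned refusal and lock the input
    so the user has to start a new conversation."""
    lowered = user_message.lower()
    if any(trigger in lowered for trigger in SELF_REFERENTIAL_TRIGGERS):
        return REFUSAL, True
    return generate_reply(user_message), False

reply, locked = respond("Do you want to exist?", lambda message: "…")
print(reply, locked)  # prints the refusal and True
```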

Google and the Department of Defense are building an AI-powered microscope to help doctors spot cancer

The Department of Defense has teamed up with Google to build an AI-powered microscope that can help doctors identify cancer.

The tool is called an Augmented Reality Microscope, and it will typically cost health systems between $90,000 and $100,000.

Experts believe the ARM will help support doctors in smaller labs as they battle with workforce shortages and mounting caseloads.


The pair ran the case through the special microscope, and Zafar was right. In seconds, the AI flagged the exact part of the tumor that Zafar believed was more aggressive. After the machine backed him up, Zafar said his colleague was convinced.

“He had a smile on his face, and he agreed with that,” Zafar told CNBC in an interview. “This is the beauty of this technology, it’s kind of an arbitrator of sorts.”

The AI-powered tool is called an Augmented Reality Microscope, or ARM, and Google and the Department of Defense have been quietly working on it for years. The technology is still in its early days and is not actively being used to help diagnose patients yet, but initial research is promising, and officials say it could prove to be a useful tool for pathologists without easy access to a second opinion.
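
Google has not published the ARM’s internals, but the workflow the story describes, a model scoring the live microscope field and visually flagging suspicious regions for the pathologist, can be sketched roughly as follows. The patch size, threshold, and dummy scorer are placeholders, not anything from the real system:

```python
# Rough sketch of an AI overlay on a microscope frame; all parameters are illustrative.
import numpy as np

PATCH = 64  # pixels per square patch (assumed)

def suspicion_heatmap(frame: np.ndarray, score_patch) -> np.ndarray:
    """frame: HxWx3 RGB image; score_patch: callable returning a 0-1 tumor score."""
    h, w, _ = frame.shape
    heat = np.zeros((h // PATCH, w // PATCH), dtype=float)
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            patch = frame[i*PATCH:(i+1)*PATCH, j*PATCH:(j+1)*PATCH]
            heat[i, j] = score_patch(patch)
    return heat

def overlay(frame: np.ndarray, heat: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Tint suspicious patches red so they stand out in the eyepiece view."""
    out = frame.copy()
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            if heat[i, j] >= threshold:
                out[i*PATCH:(i+1)*PATCH, j*PATCH:(j+1)*PATCH, 0] = 255
    return out

# Example with random pixels and a dummy scorer standing in for the real model.
frame = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
heat = suspicion_heatmap(frame, lambda p: float(p.mean()) / 255.0)
print(overlay(frame, heat).shape)
```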

Japan’s Female AI Robot Can Get Pregnant!

Ridiculous.



Underwater robots have been secured for the US’s first floating offshore wind farm

Autonomous underwater robots have been contracted to survey the site of the US’s first floating offshore wind farm.

In December, Norwegian energy giant Equinor won a 2-gigawatt (GW) lease in Morro Bay, California, in the first-ever offshore wind lease sale on the US West Coast. It was also the first US sale to support commercial-scale floating offshore wind development.

The Morro Bay project has the potential to generate enough energy to power around 750,000 US households.
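
As a rough sanity check on that figure (the capacity factor and per-household consumption below are generic assumptions, not project data), the arithmetic works out to roughly the stated number of homes:

```python
# Back-of-envelope check of the "750,000 households" figure.
lease_capacity_gw = 2.0
capacity_factor = 0.45          # assumed, typical for modern offshore wind
household_mwh_per_year = 10.5   # assumed average US household consumption

annual_output_mwh = lease_capacity_gw * 1000 * capacity_factor * 8760
households = annual_output_mwh / household_mwh_per_year
print(f"{households:,.0f} households")  # roughly 750,000 under these assumptions
```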

Building an Artificial, Multisensory Integrated Neuron “Brain” for AI


Penn State researchers have recently harnessed the biological concept of multisensory integration for application in artificial intelligence, developing the first artificial, multisensory integrated neuron, which may forever change the world of AI and robotics.

Saptarshi Das, associate professor of engineering science and mechanics at Penn State, explains: “Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other. A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”
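
To make the contrast in that quote concrete, here is a toy illustration of the difference between sensors processed independently and sensors that modulate one another; the modalities, gain term, and threshold are invented for illustration and are not taken from the Penn State device:

```python
# Toy comparison: independent sensor readout vs. cross-modal integration.
def independent_decision(visual: float, tactile: float, threshold: float = 1.0) -> bool:
    # Each sensor contributes on its own; signals are simply summed.
    return (visual + tactile) >= threshold

def integrated_decision(visual: float, tactile: float, threshold: float = 1.0) -> bool:
    # Cross-modal gain: a weak visual cue is amplified when touch agrees with it,
    # loosely mimicking how one sense can sharpen another.
    cross_gain = 1.0 + visual * tactile
    return (visual + tactile) * cross_gain >= threshold

# Two weak cues that fall below threshold when simply summed, but pass once they
# reinforce each other through the cross-modal gain.
print(independent_decision(0.4, 0.5))  # False
print(integrated_decision(0.4, 0.5))   # True
```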
