
Boston Dynamics integrates GPT-4 with Spot and discovers emerging capabilities

What I’d really like to see is the super-realistic robot head put on Atlas, equipped with a super-advanced talking LLM, but maybe people aren’t ready for it yet. It’s definitely technically possible.


Robotics company Boston Dynamics has integrated OpenAI’s GPT-4 into its Spot robot dog, showcasing its emerging capabilities.

To build the talking and interactive robot dog, Boston Dynamics added a Bluetooth speaker and microphone to Spot’s body, in addition to a camera-equipped arm that serves as its neck and head. Spot’s grasping hand mimics a talking mouth by opening and closing. This gives the robot a form of body language.

For language and image processing, the upgraded Spot uses OpenAI’s latest GPT-4 model, as well as Visual Question Answering (VQA) datasets and OpenAI’s Whisper speech recognition software to enable realistic conversations with humans.
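
At a high level, such an integration is a simple loop: capture audio, transcribe it with Whisper, pass the transcript to GPT-4, and voice the reply while the gripper “mouth” opens and closes. The sketch below shows that loop using OpenAI’s published Python client; Boston Dynamics has not released its code, so the tour-guide persona is an assumption, and the robot-side audio capture and playback are left as comments.

```python
# A minimal sketch of the listen -> transcribe -> reason -> reply loop,
# assuming OpenAI's standard Python client. The persona prompt is an
# illustrative assumption, not Boston Dynamics' actual integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def converse_once(audio_path: str) -> str:
    # 1. Speech-to-text with Whisper.
    with open(audio_path, "rb") as f:
        heard = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Language reasoning with GPT-4.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are Spot, a friendly robot tour guide."},
            {"role": "user", "content": heard.text},
        ],
    )

    # 3. On the real robot, this text would be sent to a text-to-speech engine
    #    and played over the Bluetooth speaker while the gripper opens and closes.
    return reply.choices[0].message.content
```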

AI Can Screen for Diabetes

Roughly 40 million Americans have diabetes, and about 95% of them have type 2 diabetes. Type 2 diabetes occurs when the body cannot properly process sugar to fuel its cells. More specifically, the body does not produce enough insulin, or does not use it effectively, so cells cannot take up glucose for energy. In this case, treatment includes insulin shots or a pump, in addition to a strict diet excluding sweets and high-fat meals. These treatment limitations disrupt patients’ quality of life. Some researchers have been working on better AI-based detection of diabetic retinopathy, but research on detecting diabetes itself is limited. Thus, many researchers are working to detect diabetes early and to discover better treatments.

Klick Labs, with offices in multiple cities around the world, is trying to detect type 2 diabetes by having a patient speak into a microphone for 10 seconds. Klick Labs believes this technology can catch diabetes sooner and help patients get treatment earlier. The study, published in Mayo Clinic Proceedings: Digital Health, details how these 10-second voice recordings, combined with health data including age, sex, height, and weight, were used to create an AI model that discerns whether a person has type 2 diabetes. In further tests, the scientists determined the model is 89% accurate for women and 86% accurate for men.

In the study, Klick Labs collected voice recordings from 267 people, each either non-diabetic or type 2 diabetic. Participants were asked to record a phrase on their smartphones six times a day for two weeks. More than 18,000 recordings were analyzed to identify 14 acoustic features that helped distinguish non-diabetic from type 2 diabetic individuals. The research highlights specific vocal variations in pitch and intensity that could change how the medical community screens for early-onset diabetes. Major barriers to early detection include the time, travel, and cost of testing, which many people cannot spare. Voice-based diagnosis could eliminate those barriers and improve detection and treatment for diabetic patients.
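
The article does not describe the paper’s exact methodology, but the general shape of such a model is a classifier over acoustic features plus demographics. The sketch below uses scikit-learn on synthetic stand-in data matching the study’s dimensions (267 participants; 14 acoustic features plus age, sex, height, and weight); the feature values, model choice, and printed accuracy are illustrative only.

```python
# A hedged sketch: train a classifier on 14 acoustic features plus 4 health
# variables per participant. The data here is random stand-in data, so the
# printed accuracy says nothing about the study's 86-89% results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(267, 18))    # 14 acoustic + 4 demographic features
y = rng.integers(0, 2, size=267)  # 0 = non-diabetic, 1 = type 2 diabetic

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```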

Exclusive: Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI

OpenAI’s CEO, Sam Altman, spent a good part of the summer on a weeks-long outreach tour, glad-handing politicians and speaking to packed auditoriums around the world. But Sutskever is much less of a public figure, and he doesn’t give a lot of interviews.

He is deliberate and methodical when he talks. There are long pauses when he thinks about what he wants to say and how to say it, turning questions over like puzzles he needs to solve. He does not seem interested in talking about himself. “I lead a very simple life,” he says. “I go to work; then I go home. I don’t do much else. There are a lot of social activities one could engage in, lots of events one could go to. Which I don’t.”

But when we talk about AI, and the epochal risks and rewards he sees down the line, vistas open up: “It’s going to be monumental, earth-shattering. There will be a before and an after.”

Is AI Mimicking Consciousness or Truly Becoming Aware?

Summary: AI’s remarkable abilities, like those seen in ChatGPT, often seem conscious due to their human-like interactions. Yet, researchers suggest AI systems lack the intricacies of human consciousness. They argue that these systems don’t possess the embodied experiences or the neural mechanisms humans have. Therefore, equating AI’s abilities to genuine consciousness might be an oversimplification.

New AI Model Counters Bias In Data With A DEI Lens

AI has exploded onto the scene in recent years, bringing both promise and peril. Systems like ChatGPT and Stable Diffusion showcase the tremendous potential of AI to enhance productivity and creativity. Yet they also reveal a dark reality: the algorithms often reflect the same systemic prejudices and societal biases present in their training data.

While the corporate world has quickly capitalized on integrating generative AI systems, many experts urge caution, considering the critical flaws in how AI represents diversity. Whether it’s text generators reinforcing stereotypes or facial recognition exhibiting racial bias, the ethical challenges cannot be ignored.



China’s tech giants race to secure Nvidia’s last AI chips amid US ban

The latest round of restrictions has dealt a huge blow to China’s AI aspirations, according to reports.

China’s tech firms are reportedly racing to secure Nvidia’s crucial graphics processing units (GPUs) after the latest US embargo on the components that power AI technology.

The latest round of restrictions has dealt a huge blow to China’s AI aspirations, leaving companies struggling to secure key components, according to a news report by South China Morning Post (SCMP) on Friday.

ChatGPT-like AI can be tricked into producing malicious code, enabling cyber attacks

Researchers demonstrate how Text-to-SQL systems can lead to cyber attacks.

A team of researchers from the University of Sheffield has demonstrated that six popular artificial intelligence applications, including OpenAI’s ChatGPT, can be manipulated to produce potentially harmful Structured Query Language (SQL) commands and exploited to attack computer systems in the real world.

The applications they used in their study included BAIDU-UNIT, ChatGPT, AI2SQL, AIHELPERBOT, Text2SQL, and ToolSKE.
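
The underlying risk is that such applications execute model-generated SQL directly against a live database. The sketch below illustrates the vulnerability class with a stubbed text_to_sql() function standing in for a real model, plus one common mitigation (allow-listing read-only statements); it is an illustration, not the paper’s exact attack.

```python
# Illustration of why executing LLM-generated SQL verbatim is dangerous.
# text_to_sql() is a hypothetical stub standing in for a Text-to-SQL model.
import sqlite3

def text_to_sql(question: str) -> str:
    # A crafted request can steer a real model toward a destructive statement.
    canned = {
        "How many users are there?": "SELECT COUNT(*) FROM users;",
        "Show users; also tidy up old tables": "DROP TABLE users;",
    }
    return canned[question]

def run_readonly(db: sqlite3.Connection, sql: str):
    # Defensive check: refuse anything that is not a SELECT statement.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError(f"refusing to execute generated SQL: {sql!r}")
    return db.execute(sql).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER)")
db.executemany("INSERT INTO users VALUES (?)", [(1,), (2,)])

print(run_readonly(db, text_to_sql("How many users are there?")))  # [(2,)]
try:
    run_readonly(db, text_to_sql("Show users; also tidy up old tables"))
except ValueError as err:
    print(err)  # the DROP TABLE never reaches the database
```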

Apple’s $1 billion standoff against AI rivals Microsoft, Google, OpenAI

No AI announcements expected at Apple event on Monday.

Apple is reportedly spending a billion dollars a year in a major push for artificial intelligence. Over the last year, the AI boom has seen Apple’s tech rivals invest billions of dollars into large language models (LLMs) and conversational platforms.

Although the iPhone maker is quiet about what is cooking in its AI laboratory, Interesting Engineering reported earlier that the company may be looking to revamp Siri with generative AI capabilities. iPhone users could then interact with Siri much as users of OpenAI’s ChatGPT (Plus and Enterprise) now generate content from voice commands.

UBC, Honda researchers develop robot arm with human skin-like sensors

“As sensors continue to evolve to be more skin-like, there is a need for robots to be smarter. Developments in sensors and artificial intelligence will need to go hand in hand.”

In a press release, scientists at the University of British Columbia and Honda’s research institute have revealed a soft sensor that mimics human skin. This highly sensitive, smart, and stretchable sensor is poised to reshape how machines interact with the world.

Offering a myriad of applications, the soft sensor takes its cues from human skin in both sensitivity and texture. Applied to the surface of a prosthetic or robotic arm, it makes delicate actions such as picking up a piece of soft fruit possible.
