
Elon Musk Claims Google Co-Founder Is Building a “Digital God”

In a bombastic interview with none other than Tucker freakin’ Carlson, Elon Musk made a bold claim about Google co-founder Larry Page that, we have to admit, isn’t entirely implausible.

During the newly released Fox News interview, Musk alleged that back when he and the Google co-founder “used to be close friends” and he’d stay at the techster’s Palo Alto house, they’d get into lengthy discussions about “AI safety” — and that what Page told him led to his co-founding of OpenAI.

In characteristic confused-puppy fashion, Carlson asked Musk what Page had said about AI.

Driverless cars creating traffic jams in San Francisco

In San Francisco, where two major companies are testing driverless taxis, some local officials are reporting that the vehicles have caused a number of issues, including rolling into fire scenes and running over hoses. NBC News’ Jake Ward reports.


Study observes the interactions between live fish and fish-like robots

In recent decades, engineers have created a wide range of robotic systems inspired by animals, including four-legged robots as well as systems inspired by snakes, insects, squid, and fish. Studies exploring the interactions between these robots and their biological counterparts, however, are still relatively rare.

Researchers at Peking University and China Agricultural University recently set out to explore what happens when live fish are placed in the same environment as a robotic fish. Their findings, published in Bioinspiration & Biomimetics, could both inform the development of fish-inspired robots and shed some new light on the behavior of real fish.

“Our research team has been focusing on the development of self-propelled robotic fish for a considerable amount of time,” Dr. Junzhi Yu, one of the researchers who carried out the study, told Tech Xplore. “During our research, we observed an exciting phenomenon where live fish followed the swimming robotic fish. We are eager to further explore the underlying principles behind this phenomenon and gain a deeper understanding of this ‘fish following’ behavior.”

Should We Stop Developing AI For The Good Of Humanity?

Almost 30,000 people have signed a petition calling for an “immediate pause” to the development of more powerful artificial intelligence (AI) systems. The interesting thing is that these aren’t Luddites with an inherent dislike of technology. Names on the petition include Apple co-founder Steve Wozniak, Tesla and SpaceX CEO (and Twitter owner) Elon Musk, and Turing Award winner Yoshua Bengio.

Others speaking out about the dangers include Geoffrey Hinton, widely regarded as “the godfather of AI.” In a recent interview with the BBC to mark his retirement from Google at the age of 75, he warned that “we need to worry” about the speed at which AI is becoming smarter.


Many high-profile tech figures, including Steve Wozniak and Elon Musk, are calling for a pause in the development of AI over concerns about its potential to cause harm, whether intentionally or unintentionally. Is the speed of advancement outpacing the ability to put in place adequate safeguards?

Elon Musk wants to develop TruthGPT, ‘a maximum truth-seeking AI’

On Tuesday, Elon Musk said in an interview with Fox News’ Tucker Carlson that he wants to develop his own chatbot called TruthGPT, which will be “a maximum truth-seeking AI” — whatever that means.

The Twitter owner said that he wants to create a third option, alongside OpenAI and Google, that aims to “create more good than harm.”

“I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe. And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe,” Musk said during the interview.

AI Models Misjudge Rule Violations: Human Versus Machine Decisions

Summary: Researchers found AI models often fail to accurately replicate human decisions regarding rule violations, tending towards harsher judgments. This is attributed to the type of data these models are trained on: it is often labeled descriptively rather than normatively, which leads to differing interpretations of rule violations.

The discrepancy could result in serious real-world consequences, such as stricter judicial sentences. Therefore, the researchers suggest improving dataset transparency and matching the training context to the deployment context for more accurate models.
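The descriptive-versus-normative gap can be seen in a toy sketch (the rule, scores, and labels below are invented for illustration and are not from the study). Labelers answering the descriptive question (“is the feature present?”) flag borderline cases that labelers answering the normative question (“is the rule actually violated?”) would let pass, so a model fit to descriptive labels learns a harsher decision threshold:

```python
# Hypothetical dress-code example: the same photos labeled two ways.
# Descriptive question: "Does the photo show athletic shorts?"   (feature present)
# Normative question:   "Does the photo violate the dress code?" (judgment call)
samples = [
    # (evidence score 0..1, descriptive label, normative label)
    (0.9, 1, 1),  # obvious case: both labeler groups say yes
    (0.6, 1, 0),  # borderline: "shorts visible" but "no real violation"
    (0.5, 1, 0),
    (0.3, 0, 0),
    (0.1, 0, 0),
]

def best_threshold(labels):
    """Pick the score threshold that best reproduces the given labels."""
    candidates = [0.0, 0.2, 0.4, 0.55, 0.7, 1.0]
    def errors(t):
        return sum((score >= t) != bool(y)
                   for (score, _, _), y in zip(samples, labels))
    return min(candidates, key=errors)

desc_labels = [d for _, d, _ in samples]
norm_labels = [n for _, _, n in samples]

t_desc = best_threshold(desc_labels)  # lower threshold: flags borderline cases
t_norm = best_threshold(norm_labels)  # higher threshold: tolerates them

borderline = 0.55
print("descriptive-trained model flags borderline case:", borderline >= t_desc)
print("normative-trained model flags borderline case:  ", borderline >= t_norm)
```

The fitted thresholds differ even though both label sets describe the same underlying photos, which is the mechanism the researchers point to: a model trained on descriptive labels judges the deployment-time borderline case as a violation, while the normatively trained model does not.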

Sundar Pichai says ethicists and philosophers need to be involved in the development of AI to make sure it is moral, and doesn’t do things like lie

As generative AI gains traction and companies rush to incorporate it into their operations, concerns have mounted over the ethics of the technology. Deepfake images have circulated online, such as ones showing former President Donald Trump being arrested, and some testers have found that AI chatbots will give advice related to criminal activities, such as tips for how to murder people.

AI is known to sometimes hallucinate — make up information and continuously insist that it’s true — creating fears that it could spread false information. It can also develop bias and in some cases has argued with users. Some scammers have also used AI voice-cloning software in attempts to pose as relatives.

“How do you develop AI systems that are aligned to human values, including morality?” Pichai said. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on.”