
Almost 30,000 people have signed a petition calling for an “immediate pause” to the development of more powerful artificial intelligence (AI) systems. The interesting thing is that these aren’t Luddites with an inherent dislike of technology. Names on the petition include Apple co-founder Steve Wozniak; Tesla, Twitter, and SpaceX CEO Elon Musk; and Turing Award winner Yoshua Bengio.

Others speaking out about the dangers include Geoffrey Hinton, widely credited as “the godfather of AI.” In a recent interview with the BBC to mark his retirement from Google at the age of 75, he warned that “we need to worry” about the speed at which AI is becoming smarter.


Many high-profile tech figures, including Steve Wozniak and Elon Musk, are calling for a pause in the development of AI over concerns about its potential to cause harm, whether intentionally or unintentionally. Is the speed of advancement outpacing the ability to put in place adequate safeguards?

On Tuesday, Elon Musk said in an interview with Fox News’ Tucker Carlson that he wants to develop his own chatbot called TruthGPT, which will be “a maximum truth-seeking AI” — whatever that means.

The Twitter owner said that he wants to create a third option alongside OpenAI and Google, with an aim to “create more good than harm.”

“I’m going to start something which you call TruthGPT or a maximum truth-seeking AI that tries to understand the nature of the universe. And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe,” Musk said during the interview.

Summary: Researchers found AI models often fail to accurately replicate human decisions regarding rule violations, tending towards harsher judgments. This is attributed to the type of data these models are trained on: it is often labeled descriptively rather than normatively, which leads to differing interpretations of rule violations.

The discrepancy could result in serious real-world consequences, such as stricter judicial sentences. Therefore, the researchers suggest improving dataset transparency and matching the training context to the deployment context for more accurate models.
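The descriptive-versus-normative gap is easy to see in a toy example. This is a hypothetical sketch, not the researchers’ actual data: the same items are labeled once descriptively (“does the text contain aggressive language?”) and once normatively (“does it violate the rule?”), and since annotators applying a rule tend to give the benefit of the doubt, a model fit to the descriptive labels ends up flagging more items.

```python
# Hypothetical data: the same four items labeled two ways.
# "descriptive" answers: does the text contain aggressive words?
# "normative" answers: does the text actually violate the community rule?
items = [
    {"descriptive": 1, "normative": 1},  # clearly a violation either way
    {"descriptive": 1, "normative": 0},  # flagged descriptively, excused normatively
    {"descriptive": 1, "normative": 0},  # flagged descriptively, excused normatively
    {"descriptive": 0, "normative": 0},  # benign either way
]

def violation_rate(label_key):
    """Fraction of items a model trained on these labels would learn to flag."""
    return sum(item[label_key] for item in items) / len(items)

print(violation_rate("descriptive"))  # 0.75
print(violation_rate("normative"))    # 0.25
```

The model trained on descriptive labels inherits a 0.75 flag rate instead of 0.25, i.e. systematically harsher judgments, which is the mismatch the researchers describe.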

As generative AI gains traction and companies rush to incorporate it into their operations, concerns have mounted over the ethics of the technology. Deepfake images have circulated online, such as ones showing former President Donald Trump being arrested, and some testers have found that AI chatbots will give advice related to criminal activities, such as tips for how to murder people.

AI is known to sometimes hallucinate — make up information and continuously insist that it’s true — creating fears that it could spread false information. It can also develop bias and in some cases has argued with users. Some scammers have also used AI voice-cloning software in attempts to pose as relatives.

“How do you develop AI systems that are aligned to human values, including morality?” Google CEO Sundar Pichai said. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on.”

Additionally, with the boom of artificial intelligence (AI) and large language models, Astro’s capabilities will only continue to improve in solving increasingly challenging queries and requests. Amazon is investing billions of dollars into its SageMaker platform as a means to “Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.” Furthermore, the company’s Bedrock platform enables the “development of generative AI applications using [foundation models] through an API, without managing infrastructure.” Undoubtedly, Amazon has the resources and technical prowess to make significant strides in generative AI and machine learning, and will increasingly do so in the coming years.

However, it is important to note that Astro is not the only gladiator in the arena. AI enthusiast and Tesla CEO Elon Musk announced last year that Tesla is actively working on developing a humanoid robot named “Optimus.” The stated goal of the project is to “Create a general purpose, bi-pedal, autonomous humanoid robot capable of performing unsafe, repetitive or boring tasks. Achieving that end goal requires building the software stacks that enable balance, navigation, perception and interaction with the physical world.” Musk has also said that the bot will be powered by Tesla’s advanced AI technology, meaning that it will be an intelligent, self-teaching bot that can respond to second-order queries and commands. Again, with enough time and testing, this technology can be leveraged in a positive way for healthcare-at-home needs and many other potential uses.

This is certainly an exciting and unprecedented time across multiple industries, including artificial intelligence, advanced robotics, and healthcare. The coming years will assuredly push the bounds of this technology and its applications. This advancement will undoubtedly bring with it certain challenges; however, if done correctly, it may also empower the means to benefit millions of people globally.

In case anyone is wondering how advances like ChatGPT are possible while Moore’s Law is dramatically slowing down, here’s what is happening:

Nvidia’s latest chip, the H100, can do 34 teraFLOPS of FP64, the 64-bit format that supercomputer rankings are based on. The same chip can do 3,958 teraFLOPS with its FP8 Tensor Cores. FP8 uses one-eighth the bits of FP64, trading precision for throughput. Tensor Cores also accelerate matrix operations, particularly matrix multiplication and accumulation, which deep learning calculations use extensively.

So by specializing in the operations AI cares about, throughput is increased by over 100 times!
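As a quick sanity check on that “over 100 times” claim, here is the arithmetic using the two throughput figures quoted above (note that 3,958 teraFLOPS is Nvidia’s peak FP8 figure, which assumes sparsity):

```python
# Peak throughput figures for Nvidia's H100, as quoted in the text above.
fp64_tflops = 34      # FP64, the precision supercomputer rankings use
fp8_tflops = 3958     # FP8 on the Tensor Cores (peak, with sparsity)

# The headline speedup from dropping to the narrower, specialized format:
speedup = fp8_tflops / fp64_tflops
print(f"{speedup:.0f}x")  # 116x, i.e. "over 100 times"

# The formats themselves differ 8x in width (64 bits vs 8 bits):
print(64 // 8)            # 8
```

The 116x figure combines two effects: the 8x narrower data format and the Tensor Cores’ hardware specialization for matrix math.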


A massive leap in accelerated compute.

Spotify ramps up policing after complaints of ‘artificial streaming.’

Spotify, the world’s most popular music streaming subscription service, has reportedly pulled down tens of thousands of songs that were uploaded by the AI music company Boomy, after they came under suspicion of ‘artificial streaming.’

Spotify took down around 7% of the AI-generated tracks uploaded by Boomy, whose users have, to date, created a total of 14,591,095 songs, which the company claims amounts to 13.95% of the world’s recorded music.