

ChatGPT creator OpenAI has confirmed a data breach caused by a bug in an open source library, just as a cybersecurity firm noticed that a recently introduced component is affected by an actively exploited vulnerability.

OpenAI said on Friday that it had taken the chatbot offline earlier in the week while it worked with the maintainers of the Redis data platform to patch a flaw that resulted in the exposure of user information.

The issue was related to ChatGPT’s use of redis-py, an open source Redis client library for Python, and it was introduced by a change OpenAI made on March 20.
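Public reports attributed the exposure to cancelled requests leaving stale replies on a shared connection, so the next caller could receive another user’s data. The following is a minimal, hypothetical sketch of that general class of bug, not OpenAI’s or redis-py’s actual code; the class and names here are invented for illustration:

```python
import queue

# Hypothetical sketch of the bug class (NOT the real redis-py code):
# a pooled connection whose replies are read strictly in order. If a
# request is cancelled after its reply is queued but before it is
# read, the stale reply is handed to the next caller -- another user.

class SharedConnection:
    """Stand-in for a pooled client connection with in-order replies."""
    def __init__(self):
        self._replies = queue.Queue()

    def send(self, user):
        # The server (stand-in for Redis) queues that user's cached data.
        self._replies.put(f"data-for:{user}")

    def read_reply(self):
        return self._replies.get()

conn = SharedConnection()

conn.send("user-A")
# ... user A's request is cancelled here; its reply is never consumed ...

conn.send("user-B")
reply_seen_by_b = conn.read_reply()
print(reply_seen_by_b)  # -> "data-for:user-A": user B sees user A's data
```

The fix for this class of bug is to discard or re-synchronize a connection whose in-flight request was cancelled, rather than returning it to the pool.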

FallenKingdomReads’ list of five must-read science fiction books for fans of The Matrix.

This dystopian novel, Philip K. Dick’s Do Androids Dream of Electric Sheep?, is the basis for the classic film Blade Runner, and it explores a world devastated by nuclear war where bounty hunter Rick Deckard must retire six rogue androids.

In this novel, Richard K. Morgan’s Altered Carbon, human consciousness is digitized and can be transferred between bodies, leading to an interstellar conspiracy that draws the protagonist Takeshi Kovacs into a thrilling adventure.

Take a journey through the years 2023–2030 as artificial intelligence develops increasing levels of consciousness, becomes an indispensable partner in human decision-making, and even leads key areas of society. But as the line between man and machines becomes blurred, society grapples with the moral and ethical implications of sentient machines, and the question arises: which side of history will you be on?


One question for David Krakauer, president of the Santa Fe Institute for complexity science, where he explores the evolution of intelligence and stupidity on Earth.

Does GPT-4 really understand what we’re saying?

Yes and no, is the answer to that. In my new paper with computer scientist Melanie Mitchell, we surveyed AI researchers on the idea that large pretrained language models, like GPT-4, can understand language. When they say these models understand us, or that they don’t, it’s not clear that we’re agreeing on our concept of understanding. When Claude Shannon was inventing information theory, he made it very clear that the part of information he was interested in was communication, not meaning: You can have two messages that are equally informative, with one having loads of meaning and the other none.
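Shannon’s point can be made concrete: his measure scores a message by the statistics of its symbols, not by what it says. The sketch below (illustrative, not from the paper) computes the character-level entropy of a meaningful sentence and of the same characters randomly scrambled; both are equally "informative" by this measure, yet only one carries meaning.

```python
import math
import random
from collections import Counter

def entropy_bits(msg):
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat"

# Same multiset of characters, randomly reordered: identical entropy,
# zero meaning.
chars = list(meaningful)
random.shuffle(chars)
scrambled = "".join(chars)

print(entropy_bits(meaningful))   # the two values are identical
print(entropy_bits(scrambled))
```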

The Near–Ultrasound Invisible Trojan, or NUIT, was developed by a team of researchers from the University of Texas at San Antonio and the University of Colorado Colorado Springs as a technique to secretly convey harmful orders to voice assistants on smartphones and smart speakers.

If you watch YouTube videos on your smart TV, that television must have a speaker, right? According to Guinevere Chen, associate professor and co-author of the NUIT paper, “the sound of NUIT harmful orders will [be] inaudible, and it may attack your mobile phone as well as connect with your Google Assistant or Alexa devices.” The same can happen during Zoom meetings: “During the meeting, if someone were to unmute themself, they would be able to implant the attack signal that would allow them to hack your phone, which was placed next to your computer.”

The attack works by playing sounds close to, but not exactly at, ultrasonic frequencies, so they can still be reproduced by off-the-shelf hardware, using either the speaker already built into the target device or one nearby. If the first malicious command silences the device’s responses, subsequent actions, such as opening a door or disabling an alarm system, can then be triggered without warning.
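The key enabler is that standard consumer audio hardware can reproduce frequencies just below the ~20 kHz edge of human hearing. The sketch below (purely illustrative; the 19 kHz carrier is an assumed example, not a value from the paper) synthesizes a short near-ultrasound tone at the standard 44.1 kHz sample rate, whose Nyquist limit of 22.05 kHz comfortably covers that band.

```python
import math

SAMPLE_RATE = 44_100   # standard consumer-audio sample rate
CARRIER_HZ = 19_000    # illustrative near-ultrasound carrier, inaudible to most adults

# An ordinary speaker/sound card can render this tone because the
# carrier sits below the Nyquist limit (SAMPLE_RATE / 2 = 22.05 kHz).
assert CARRIER_HZ < SAMPLE_RATE / 2, "carrier must stay below the Nyquist limit"

duration_s = 0.01
samples = [
    math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * duration_s))
]
print(len(samples))  # 441 samples of a tone most humans cannot hear
```

The real attack modulates voice commands onto such a carrier; this fragment shows only why commodity speakers can emit the band at all.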

The ability to be creative has always been a big part of what separates human beings from machines. But today, a new generation of “generative” artificial intelligence (AI) applications is casting doubt on how wide that divide really is.

Tools like ChatGPT and Dall-E give the appearance of being able to carry out creative tasks — such as writing a poem or painting a picture — in a way that’s often indistinguishable from what we can do ourselves.

Does this mean that computers are truly being creative? Or are they merely giving the impression of creativity while, in reality, following a set of pre-programmed or probabilistic rules provided by us?
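Those “probabilistic rules” can be illustrated in a few lines: a generative text model repeatedly assigns probabilities to candidate next tokens and samples one. The toy distribution below is invented for illustration; a real model learns its probabilities from training data.

```python
import random

# Invented toy distribution over possible next words, given some context.
# A real language model computes such a distribution for every step.
next_token_probs = {
    "sky": 0.05,
    "sea": 0.10,
    "moon": 0.15,
    "stars": 0.70,
}

def sample_next_token(probs):
    """Sample one token according to its probability (weighted choice)."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print("The poet wrote about the", sample_next_token(next_token_probs))
```

Whether repeatedly rolling these weighted dice counts as “creativity” is exactly the question the article poses.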

What is Creativity?


Generative AI, the technology behind ChatGPT, is going supernova, as astronomers say, outshining other innovations for the moment. But despite alarmist predictions of AI overlords enslaving mankind, the technology still requires human handlers and will for some time to come.

While AI can generate content and code at a blinding pace, it still requires humans to oversee the output, which can be low quality or simply wrong. Whether writing a report or a computer program, the technology cannot be trusted to deliver the accuracy that humans rely on. It’s getting better, but even that process of improvement depends on an army of humans painstakingly correcting the AI model’s mistakes in an effort to teach it to ‘behave.’

“Human in the loop” is an old concept in AI. It refers to the practice of involving human experts in training and refining AI systems to ensure that they perform correctly and meet the desired objectives.
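A minimal sketch of the pattern, with the threshold, function names, and the confidence score all invented for illustration: outputs the model is confident about pass through, while low-confidence outputs are escalated to a human reviewer.

```python
# Hypothetical human-in-the-loop gate: auto-approve confident model
# outputs, route everything else to a human for correction.
CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff, tuned per application

def review(output, confidence, human_reviewer):
    """Return the output directly if confident, else escalate to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return output
    return human_reviewer(output)

def human_reviewer(output):
    # Stand-in for a real review queue; here the human "corrects" it.
    return f"[human-corrected] {output}"

print(review("summary A", 0.97, human_reviewer))  # published as-is
print(review("summary B", 0.42, human_reviewer))  # escalated to a human
```

The corrections gathered this way are also what feeds the retraining loop the article describes.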