AI isn’t nearly as popular with the global populace as its boosters would have you believe.
As Axios reports based on a new poll of 32,000 global respondents from the consultancy firm Edelman, public trust is already eroding less than 18 months into the so-called “AI revolution” that popped off with OpenAI’s release of ChatGPT in November 2022.
“Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Justin Westcott, the global technology chair for the firm, told Axios. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
When will AI match and then surpass human capability? In short, when will we have AGI, or artificial general intelligence: the kind of intelligence that can teach itself and grow into a vastly greater intellect than any individual human?
According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.
We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror the complexity of human intelligence.
We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet’s evolution, and envision the role of humans in a world where AI surpasses human intelligence.
A massive Microsoft data center in Goodyear, Arizona is guzzling the desert town’s water supply to support its cloud computing and AI efforts, The Atlantic reports.
A source familiar with Microsoft’s Goodyear facility told The Atlantic that it was specifically designed for use by Microsoft and the heavily Microsoft-funded OpenAI. In response to this allegation, both companies declined to comment.
Powering AI demands an incredible amount of energy, and worsening AI’s massive environmental footprint is the fact that it also consumes a mind-boggling amount of water. AI workloads draw so much electricity that data center servers risk overheating, so to mitigate that risk, engineers use water to cool the servers back down.
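For a sense of scale, here is a rough back-of-the-envelope sketch of how evaporative cooling turns energy use into water use. The facility load and water usage effectiveness (WUE) figures below are illustrative assumptions for the sake of the sketch, not numbers from The Atlantic’s report.

```python
# Illustrative back-of-the-envelope estimate of data center cooling water use.
# Both inputs are assumptions, not reported figures for the Goodyear facility.

IT_LOAD_MW = 50        # assumed IT load of the facility, in megawatts
WUE_L_PER_KWH = 1.8    # assumed water usage effectiveness, in liters per kWh

kwh_per_day = IT_LOAD_MW * 1_000 * 24          # MW -> kWh consumed per day
liters_per_day = kwh_per_day * WUE_L_PER_KWH   # water evaporated for cooling

print(f"~{liters_per_day / 1e6:.1f} million liters per day")  # ~2.2 million liters
```

Under those assumed numbers, a single large facility would evaporate on the order of a couple of million liters of water per day, which is why siting such data centers in desert towns draws scrutiny.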
AI news timestamps:
0:00 Everyone’s wrong.
1:57 Future of money.
3:04 The economy of AGI.
5:35 Brain-computer interface transactions.
6:26 Methods of income.
Brooks himself is among the philosophers who have previously said giving AI sensory and motor skills to engage with the world may be the only way to create true artificial intelligence. A good deal of human creativity, after all, comes from physical self-preservation — a caveman need only cut himself once on sharpened bone to see its use in hunting. And what is art if not a hope that our body-informed memories may outlive the body with which we formed them?
If you want to get even more mind-bent, consider thinkers like Lars Ludwig, who proposed that memory isn’t even something we can hold exclusively in our bodies anyway. Rather, to be human has always meant sharing consciousness with technology to “extend artificial memory,” from a handprint on a cave wall to the hard drive in your laptop. Thus, human cognition and memory could be considered to take place not just in the human brain, nor just in human bodily instinct, but also in the physical environment itself.
India has waded into the global AI debate by issuing an advisory that requires “significant” tech firms to get government permission before launching new models.
India’s Ministry of Electronics and IT issued the advisory to firms on Friday. The advisory, which has not been made public but a copy of which TechCrunch has reviewed, also asks tech firms to ensure that their services or products “do not permit any bias or discrimination or threaten the integrity of the electoral process.”
Though the ministry admits the advisory is not legally binding, India’s IT Deputy Minister Rajeev Chandrasekhar says the notice is “signalling that this is the future of regulation.” He adds: “We are doing it as an advisory today asking you to comply with it.”
Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku.
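For readers who want to try the new family, here is a minimal sketch of sending the same prompt to each tier through the Anthropic Python SDK. The dated model IDs were the launch-time identifiers and may have been superseded, so treat them as assumptions and check the current documentation.

```python
# Minimal sketch: querying each Claude 3 tier with the Anthropic Python SDK.
# Model IDs are launch-time identifiers (assumed; verify against current docs).
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

models = [
    "claude-3-opus-20240229",
    "claude-3-sonnet-20240229",
    "claude-3-haiku-20240307",
]

for model in models:
    reply = client.messages.create(
        model=model,
        max_tokens=200,
        messages=[{"role": "user", "content": "In one sentence, what are you best suited for?"}],
    )
    print(f"{model}: {reply.content[0].text}")
```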
Let’s take a close look together at the use of generative AI to generate astrological horoscopes. It turns out that some worry this has mental health implications.
Unitree is a publicly traded robot company with about $5 billion in market value. It has sped up its humanoid robot to a human jogging speed of 3.3 meters per second, or about 7.4 miles per hour. Because the robot would not tire, it could cover a 10-kilometer race in roughly 50 minutes, given enough battery power (a quick check of these numbers follows below).
It can lift boxes, climb and descend stairs, and jump vertically. It has hand attachments, though these currently lack finger and grasping motions.
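As promised above, a quick arithmetic check of those figures; the only input taken from the announcement is the 3.3 m/s speed.

```python
# Sanity-check the speed and 10 km race-time figures quoted above.

speed_m_per_s = 3.3                      # reported humanoid jogging speed
mph = speed_m_per_s * 3600 / 1609.344    # convert m/s to miles per hour
race_min = 10_000 / speed_m_per_s / 60   # minutes to cover 10 km at constant pace

print(f"{mph:.1f} mph")        # ~7.4 mph
print(f"{race_min:.1f} min")   # ~50.5 minutes
```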
Researchers have developed a computer “worm” that can spread from one computer to another using generative AI, a warning sign that the tech could be used to develop dangerous malware in the near future — if it hasn’t already.
As Wired reports, the worm can attack AI-powered email assistants to obtain sensitive data from emails and blast out spam messages that infect other systems.
“It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” Cornell Tech researcher Ben Nassi, coauthor of a yet-to-be-peer-reviewed paper about the work, told Wired.