“It’s the single largest capital investment that has ever been made in the state of Mississippi – by a lot.”
On Thursday, January 25, Amazon Web Services (AWS) announced plans for a monumental $10 billion investment in Mississippi, the single largest capital investment in the state's history. The investment will fund the construction of two data centers and is expected to create jobs and foster community development and sustainability.
Purpose-built robots can free humans from taking on those tedious, and potentially dangerous, jobs, but they also mean manufacturers need to build or buy a new robot every time they find a new task they want to automate.
General-purpose robots, ones that can handle many tasks, would be far more useful, but developing a bot with anywhere near the versatility of a human worker has thus far proven out of reach.
What’s new? Figure thinks it has cracked the code — in March 2023, it unveiled Figure 1, a machine it said was “the world’s first commercially viable general purpose humanoid robot.”
A team of MIT researchers has found that in many instances, replacing human workers with AI is still more expensive than sticking with the people, a conclusion that flies in the face of current fears over the technology taking our jobs.
As detailed in a new paper, the team examined the cost-effectiveness of 1,000 “visual inspection” tasks across 800 occupations, such as inspecting food to see whether it’s gone bad. They discovered that just 23 percent of workers’ total wages “would be attractive to automate,” mainly because of the “large upfront costs of AI systems” — and that’s if the automatable tasks could even “be separated from other parts” of the jobs.
That said, they admit, those economics may well change over time.
Will AI automate human jobs, and — if so — which jobs and when?
That’s the trio of questions a new research study from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), out this morning, tries to answer.
There have been many attempts to extrapolate and project how the AI technologies of today, like large language models, might impact people’s livelihoods, and whole economies, in the future.
A report by the Massachusetts Institute of Technology (MIT) has revealed that it is still cheaper to use humans for certain jobs than artificial intelligence (AI).
This comes amid concerns that AI will replace many jobs currently handled by humans. The report suggests that AI cannot replace the majority of jobs in cost-effective ways at present.
In a study addressing fears about AI replacing humans across industries, MIT found that substituting AI for human workers is currently profitable in only a few of them.
The nature of work is evolving at an unprecedented pace. The rise of generative AI has accelerated data analysis, expedited the production of software code and even simplified the creation of marketing copy.
Those benefits have not come without concerns over job displacement, ethics and accuracy.
At the 2024 Consumer Electronics Show (CES), IEEE experts from industry and academia participated in a panel discussion on how the new tech landscape is changing the professional world, and how universities are educating students to thrive in it.
Just after filming this video, Sam Altman, CEO of OpenAI, published a blog post about the governance of superintelligence in which he, along with Greg Brockman and Ilya Sutskever, outlines their thinking about how the world should prepare for superintelligences. And just before filming, Geoffrey Hinton quit his job at Google so that he could speak more openly about his concerns over the imminent arrival of an artificial general intelligence, an AGI that could soon get beyond our control if it became superintelligent. So the basic idea is moving from sci-fi speculation to a plausible scenario, but how powerful will these systems be, and which of the concerns about superintelligent AI are reasonably founded? In this video I explore the ideas around superintelligence with Nick Bostrom’s 2014 book, Superintelligence, as one of our guides and Geoffrey Hinton’s interviews as another, to try to unpick which aspects are plausible and which are more like speculative sci-fi. I explore the dangers, such as Eliezer Yudkowsky’s notion of a rapid ‘foom’ takeover of humanity, and also look briefly at the control problem and the alignment problem. At the end of the video I make a suggestion for how we could delay the arrival of superintelligence by withholding the algorithms’ ability to improve themselves, withholding what you could call meta-level agency.
▬▬ Chapters ▬▬
00:00 — Questing for an Infinity Gauntlet
01:38 — Just human-level AGI
02:27 — Intelligence explosion
04:10 — Sparks of AGI
04:55 — Geoffrey Hinton is concerned
06:14 — What are the dangers?
10:07 — Is ‘foom’ just sci-fi?
13:07 — Implausible capabilities
14:35 — Plausible reasons for concern
15:31 — What can we do?
16:44 — Control and alignment problems
18:32 — Currently no convincing solutions
19:16 — Delay intelligence explosion
19:56 — Regulating meta-level agency
▬▬ Other videos about AI and Society ▬▬
AI wants your job | Which jobs will AI automate? | Reports by OpenAI and Goldman Sachs