And an AI could generate a picture of a person from scratch if it wanted or needed to. It's only a matter of time before someone puts it all together. 1. AI writes a script. 2. AI generates pictures of a cast (face and body). 3. AI animates pictures of the cast into scenes. 4. It can't create voices from scratch yet, but a 10-second audio sample of a voice is enough for it to make that voice say anything; AI voices all the dialog. And, voilà, you've reduced TV and movie production costs by 99.99%. Will take place by 2030.
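The four steps above can be sketched as a pipeline. Everything below is hypothetical: each function is a stand-in for a real generative model (a script-writing LLM, an image generator, an image-to-video model, a voice cloner), and none of the names refer to actual APIs.

```python
# Hypothetical "AI studio" pipeline sketch. All functions are placeholders
# standing in for calls to real generative models.

def write_script(premise):
    # Stand-in for a language model generating a screenplay.
    return [{"character": "Ava", "line": f"Scene about {premise}."}]

def generate_cast(script):
    # Stand-in for an image model creating face/body references per character.
    return {scene["character"]: "face_and_body.png" for scene in script}

def animate_scenes(script, cast):
    # Stand-in for an image-to-video model animating the cast into scenes.
    return [f"clip of {scene['character']}" for scene in script]

def voice_dialog(script, sample_seconds=10):
    # Stand-in for a voice-cloning model seeded with ~10 s of reference audio.
    return [f"{scene['character']} says: {scene['line']}" for scene in script]

def produce(premise):
    # Chain the four stages: script -> cast -> animation -> voiced dialog.
    script = write_script(premise)
    cast = generate_cast(script)
    video = animate_scenes(script, cast)
    audio = voice_dialog(script)
    return list(zip(video, audio))

movie = produce("a heist")
```

The point of the sketch is only that the four stages compose: the output of each model is the input of the next.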
Google’s PHORHUM AI shows how impressive 3D avatars can be created from just a single photo.
Until now, however, such models have relied on complex automatic scanning by a multi-camera system, manual creation by artists, or a combination of both. Even the best camera systems still produce artifacts that must be cleaned up manually.
With the massive degree of progress in AI over the last decade or so, it’s natural to wonder about its future – particularly the timeline to achieving human (and superhuman) levels of general intelligence. Ajeya Cotra, a senior researcher at Open Philanthropy, recently (in 2020) put together a comprehensive report seeking to answer this question (actually, it answers the slightly different question of when transformative AI will appear, mainly because an exact definition of impact is easier than one of intelligence level), and over 169 pages she lays out a multi-step methodology to arrive at her answer. The report has generated a significant amount of discussion (for example, see this Astral Codex Ten review), and seems to have become an important anchor for many people’s views on AI timelines. On the whole, I found the report added useful structure around the AI timeline question, though I’m not sure its conclusions are particularly informative (due to the wide range of timelines across different methodologies). This post will provide a general overview of her approach (readers who are already familiar can skip the next section), and will then focus on one part of the overall methodology – specifically, the upper bound she chooses – and will seek to show that this bound may be vastly understated.
Part 1: Overview of the Report
In her report, Ajeya takes the following steps to estimate transformative AI timelines:
Aulos Biosciences is now recruiting cancer patients in Australian medical centers for a trial of the world’s first antibody drug designed by a computer.
The antibody, known as AU-007, was computationally designed by the artificial intelligence platform of Israeli biotech company Biolojic Design, based in Rehovot, to target a protein in the human body known as interleukin-2 (IL-2).
The goal is for the IL-2 pathway to activate the body’s immune system and attack the tumors.
Using nanotechnology, scientists have created a newly designed neuromorphic electronic device that endows microrobotics with colorful vision.
Researchers at Georgia State University have successfully designed a new type of artificial vision device that incorporates a novel vertical stacking architecture and allows for greater depth of color recognition and micro-level scaling. The new research study was published on April 18, 2022, in the top journal ACS Nano.
“This work is the first step toward our final destination – to develop a micro-scale camera for microrobots,” says assistant professor of Physics Sidong Lei, who led the research. “We illustrate the fundamental principle and feasibility to construct this new type of image sensor with emphasis on miniaturization.”
An automated system called Guardian is being developed by the Toyota Research Institute to amplify human control in a vehicle, as opposed to removing it.
Here’s the scenario: A driver falls asleep at the wheel. But their car is equipped with a dashboard camera that detects the driver’s eye condition, activating a safety system that promptly guides the vehicle to a secure halt.
That’s not just an idea on the drawing board. The system, called Guardian, is being refined at the Toyota Research Institute (TRI), where MIT Professor John Leonard is helping steer the group’s work, while on leave from MIT. At the MIT Mobility Forum, Leonard and Avinash Balachandran, head of TRI’s Human-Centric Driving Research Department, presented an overview of their work.
The presenters offered thoughts on multiple levels about automation and driving. Leonard and Balachandran discussed particular TRI systems while also suggesting that — after years of publicity about the possibility of fully automated vehicles — a more realistic prospect might be the deployment of technology that aids drivers, without replacing them.
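The drowsiness-detection scenario above can be illustrated with a minimal sketch, under the assumption that the dashboard camera yields a per-frame "eye openness" score between 0 and 1. A real system like TRI's Guardian is far more sophisticated; the frame streams and threshold values here are invented purely for illustration.

```python
# Minimal drowsiness-detection sketch: intervene if the driver's eyes stay
# closed for too many consecutive camera frames. All numbers are illustrative.

CLOSED_THRESHOLD = 0.2   # openness below this counts as "eyes closed"
MAX_CLOSED_FRAMES = 30   # roughly 1 second at 30 fps before intervening

def should_intervene(openness_stream):
    """Return True if eyes stay closed for MAX_CLOSED_FRAMES in a row."""
    closed_run = 0
    for openness in openness_stream:
        # Extend the run of closed-eye frames, or reset it on an open frame.
        closed_run = closed_run + 1 if openness < CLOSED_THRESHOLD else 0
        if closed_run >= MAX_CLOSED_FRAMES:
            return True  # hand control to the safety system for a guided stop
    return False

# Simulated camera streams: an alert driver, and one who dozes off.
alert_driver = [0.9] * 100
dozing_driver = [0.9] * 50 + [0.05] * 40
```

The design choice worth noting is the consecutive-frame counter: requiring a sustained run of closed-eye frames, rather than reacting to a single frame, keeps ordinary blinks from triggering the safety system.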
Stuart Russell warns about the dangers involved in the creation of artificial intelligence, particularly artificial general intelligence, or AGI. The idea of an artificial intelligence that might one day surpass human intelligence has been captivating and terrifying us for decades now. Many envision what it would be like if we had the ability to create a machine that could think like a human, or even surpass us in cognitive abilities. As with many novel technologies, however, there are problems with building an AGI. But what if we succeed? What would happen should our quest to create artificial intelligence bear fruit? How do we retain power over entities that are more intelligent than us? The answer, of course, is that nobody knows for sure. But there are some logical conclusions we can draw from examining the nature of intelligence and what kind of entities might be capable of it.
Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He outlines the definition of AI and the risks and benefits it poses for the future. According to him, AGI is the most important intellectual problem to work on.
An AGI could be used for many good and evil purposes. Although there are huge benefits to creating an AGI, there are also downsides to doing so. If we create and deploy an AGI without understanding the risks it poses for humans and other beings in our world, we could be contributing to great disasters.
Russell also postulates that we should focus on developing a machine that learns what each of the eight billion people on Earth would like the future to be like.
Papers referenced in the video: Dietary oxalate to calcium ratio and incident cardiovascular events: a 10-year follow-up among an Asian population. https://pubmed.ncbi.nlm.nih.gov/35346210/
Association between low-density lipoprotein cholesterol and cardiovascular mortality in statin non-users: a prospective cohort study in 14.9 million Korean adults. https://pubmed.ncbi.nlm.nih.gov/35218344/
Where is all the new physics? In the decade since the Higgs boson’s discovery, there have been no statistically significant hints of new particles in data from the Large Hadron Collider (LHC). Could they be sneaking past the standard searches? At the recent Rencontres de Moriond conference, the ATLAS collaboration at the LHC presented several results of novel types of searches for particles predicted by supersymmetry.
Supersymmetry, or SUSY for short, is a promising theory that gives each elementary particle a “superpartner”, thus solving several problems in the current Standard Model of particle physics and even providing a possible candidate for dark matter. ATLAS’s new searches targeted charginos and neutralinos – the heavy superpartners of force-carrying particles in the Standard Model – and sleptons – the superpartners of Standard Model matter particles called leptons. If produced at the LHC, these particles would each transform, or “decay”, into Standard Model particles and the lightest neutralino, which does not further decay and is taken to be the dark-matter candidate.
ATLAS’s newest search for charginos and sleptons studied a particle-mass region previously unexplored due to a challenging background of Standard Model processes that mimics the signals from the sought-after particles. The ATLAS researchers designed dedicated searches for each of these SUSY particle types, using all the data recorded from Run 2 of the LHC and looking at the particles’ decays into two charged leptons (electrons or muons) and “missing energy” attributed to neutralinos. They used new methods to extract the putative signals from the background, including machine-learning techniques and “data-driven” approaches.
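A toy illustration of the background-subtraction idea above: simulate a missing-energy spectrum where a steeply falling Standard Model background dominates, count events above a cut, and subtract the background expected in that region. The spectrum shapes, the 200 GeV cut, and all event counts are invented for illustration; a real ATLAS analysis estimates the background from control regions in data and uses far more elaborate statistical machinery.

```python
# Toy signal extraction: observed events above a missing-energy cut, minus
# an estimate of the Standard Model background in the same region.
import random

random.seed(0)

# Simulated missing-energy spectrum (GeV): falling background plus a small
# signal clustered at high missing energy. All parameters are invented.
background = [random.expovariate(1 / 50) for _ in range(100_000)]
signal = [random.gauss(300, 30) for _ in range(200)]
data = background + signal  # what the detector would actually record

CUT = 200  # illustrative signal-region threshold on missing energy (GeV)

observed = sum(1 for x in data if x > CUT)
# Stand-in for a data-driven background estimate (in practice, extrapolated
# from a background-dominated control region rather than taken from truth):
expected_background = sum(1 for x in background if x > CUT)

excess = observed - expected_background  # candidate signal events
```

The excess over the expected background is what a search like this would then test for statistical significance.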
“Britain moves closer to a self-driving revolution,” said a perky message from the Department for Transport that popped into my inbox on Wednesday morning. The purpose of the message was to let us know that the government is changing the Highway Code to “ensure the first self-driving vehicles are introduced safely on UK roads” and to “clarify drivers’ responsibilities in self-driving vehicles, including when a driver must be ready to take back control”.
The changes will specify that while travelling in self-driving mode, motorists must be ready to resume control in a timely way if they are prompted to, such as when they approach motorway exits. They also signal a puzzling change to current regulations, allowing drivers “to view content that is not related to driving on built-in display screens while the self-driving vehicle is in control”. So you could watch Gardeners’ World on iPlayer, but not YouTube videos of F1 races? Reassuringly, though, it will still be illegal to use mobile phones in self-driving mode, “given the greater risk they pose in distracting drivers as shown in research”.