Next, we aimed to determine whether the model type, i.e., a linear regression vs. a neural network, would significantly impact the performance. We, therefore, compared the aforementioned linear models with the neural network AltumAge using the same set of features. AltumAge outperformed the respective linear model with Horvath’s 353 CpG sites (MAE = 2.425 vs. 3.011, MSE = 32.732 vs. 46.867) and ElasticNet-selected 903 CpG sites (MAE = 2.302 vs. 2.621, MSE = 30.455 vs. 39.198). This result shows that AltumAge outperforms linear models given the same training data and set of features.
Lastly, to compare the effect of the different sets of CpG sites, we trained AltumAge with all 20,318 CpG sites available and compared the results with those from the smaller sets of CpG sites obtained above. Performance improves gradually as the feature set expands from Horvath’s 353 sites (MAE = 2.425, MSE = 32.732) to the 903 ElasticNet-selected CpG sites (MAE = 2.302, MSE = 30.455) to all 20,318 CpG sites (MAE = 2.153, MSE = 29.486). This result suggests that the expanded feature set improves performance, likely because relevant information in the epigenome is not entirely captured by the CpG sites selected by an ElasticNet model.
Overall, these results indicate that even though more data samples lower the prediction error, the gain from AltumAge’s architecture is greater than the effect of the additional data. Indeed, the lower error of AltumAge compared to the ElasticNet is robust to other data splits (Alpaydin’s combined 5x2cv F test p-value = 9.71e−5).
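For readers who want to run this kind of comparison on their own data, here is a minimal sketch of evaluating a linear ElasticNet model against a small neural network on the same feature matrix, reporting MAE and MSE. The feature matrix and ages below are random placeholders, and the network is a generic scikit-learn MLP rather than AltumAge's actual architecture; the same train/evaluate loop could be repeated over paired 2-fold splits to apply Alpaydin's combined 5x2cv F test.

```python
# Minimal sketch: compare an ElasticNet linear model with a small neural
# network on the same CpG-site features, reporting MAE and MSE.
# X and y are random placeholders standing in for methylation beta values
# (n_samples x n_CpG_sites) and chronological ages.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNetCV
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((500, 903))            # placeholder beta values
y = rng.uniform(0, 100, size=500)     # placeholder ages

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "ElasticNet": ElasticNetCV(cv=5, random_state=0),
    "NeuralNet": MLPRegressor(hidden_layer_sizes=(256, 128),
                              max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}, "
          f"MSE={mean_squared_error(y_te, pred):.3f}")
```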
Discovering a system’s causal relationships and structure is a crucial yet challenging problem in scientific disciplines ranging from medicine and biology to economics. While researchers typically adopt the graphical formalism of causal Bayesian networks (CBNs) to induce a graph structure that best describes these relationships, such unsupervised score-based approaches can quickly lead to prohibitively heavy computational burdens.
A research team from DeepMind, Mila – University of Montreal and Google Brain challenges the conventional causal induction approach in their new paper Learning to Induce Causal Structure, proposing a neural network architecture that learns the graph structure of observational and/or interventional data via supervised training on synthetic graphs. The team’s proposed Causal Structure Induction via Attention (CSIvA) method effectively makes causal induction a black-box problem and generalizes favourably to new synthetic and naturalistic graphs.
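To make the supervised framing concrete, here is a toy sketch of the training paradigm: sample synthetic DAGs with known structure, simulate observational data from them, and fit a classifier that predicts each edge from simple statistics of the data. The linear-Gaussian simulator and per-edge logistic regression are illustrative stand-ins, not CSIvA's attention-based architecture.

```python
# Toy illustration of supervised causal-structure induction:
# sample random DAGs, simulate linear-Gaussian data, and train a
# per-edge classifier to predict whether an edge is present.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_NODES, N_SAMPLES = 5, 200

def sample_dag():
    """Random upper-triangular adjacency matrix (a DAG in topological order)."""
    return np.triu(rng.random((N_NODES, N_NODES)) < 0.4, k=1).astype(float)

def simulate(adj, n=N_SAMPLES):
    """Linear-Gaussian structural equation model with unit edge weights."""
    X = np.zeros((n, N_NODES))
    for j in range(N_NODES):          # nodes are already topologically ordered
        X[:, j] = X @ adj[:, j] + rng.normal(size=n)
    return X

def edge_features(X):
    """One feature per ordered node pair: here, just the pairwise correlation."""
    corr = np.corrcoef(X, rowvar=False)
    pairs = [(i, j) for i in range(N_NODES) for j in range(N_NODES) if i != j]
    return np.array([[corr[i, j]] for i, j in pairs]), pairs

# Build a supervised training set from many synthetic graphs with known structure.
feats, labels = [], []
for _ in range(300):
    adj = sample_dag()
    f, pairs = edge_features(simulate(adj))
    feats.append(f)
    labels.append([adj[i, j] for i, j in pairs])
clf = LogisticRegression().fit(np.vstack(feats), np.concatenate(labels))

# "Induce" the structure of a new, unseen graph from its observational data.
test_adj = sample_dag()
f, pairs = edge_features(simulate(test_adj))
pred = clf.predict(f)
acc = np.mean(pred == np.array([test_adj[i, j] for i, j in pairs]))
print(f"Edge-prediction accuracy on a new synthetic graph: {acc:.2f}")
```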
The team summarizes their main contributions as:
And an AI could generate a picture of a person from scratch if it wanted or needed to. It's only a matter of time before someone puts it all together. 1. AI writes a script. 2. AI generates pictures of a cast (face/body). 3. AI animates pictures of the cast into scenes. 4. It can't create voices from scratch yet, but a 10-second audio sample of a voice is enough for it to make a voice say anything; AI voices all the dialogue. And, voilà, you've reduced TV and movie production costs by 99.99%. Will take place by 2030.
Google’s PHORUM AI shows that impressive 3D avatars can be created from just a single photo.
Until now, however, such models have relied on complex automatic scanning by a multi-camera system, manual creation by artists, or a combination of both. Even the best camera systems still produce artifacts that must be cleaned up manually.
With the massive degree of progress in AI over the last decade or so, it’s natural to wonder about its future – particularly the timeline to achieving human (and superhuman) levels of general intelligence. Ajeya Cotra, a senior researcher at Open Philanthropy, recently (in 2020) put together a comprehensive report seeking to answer this question (actually, it answers the slightly different question of when transformative AI will appear, mainly because an exact definition of impact is easier than one of intelligence level), and over 169 pages she lays out a multi-step methodology to arrive at her answer. The report has generated a significant amount of discussion (for example, see this Astral Codex Ten review), and seems to have become an important anchor for many people’s views on AI timelines. On the whole, I found the report added useful structure around the AI timeline question, though I’m not sure its conclusions are particularly informative (due to the wide range of timelines across different methodologies). This post will provide a general overview of her approach (readers who are already familiar can skip the next section), and will then focus on one part of the overall methodology – specifically, the upper bound she chooses – and will seek to show that this bound may be vastly understated.
Part 1: Overview of the Report
In her report, Ajeya takes the following steps to estimate transformative AI timelines:
Aulos Biosciences is now recruiting cancer patients in Australian medical centers for a trial of the world’s first antibody drug designed by a computer.
The computationally designed antibody, known as AU-007, was created by the artificial intelligence platform of Israeli biotech company Biolojic Design, based in Rehovot, to target a protein in the human body known as interleukin-2 (IL-2).
The goal is for the IL-2 pathway to activate the body’s immune system and attack the tumors.
Replacing motors with electrostatic brakes can boost the energy-efficiency of robot limbs, although the robots are slower.
Using nanotechnology, scientists have created a newly designed neuromorphic electronic device that endows microrobots with color vision.
Researchers at Georgia State University have successfully designed a new type of artificial vision device that incorporates a novel vertical stacking architecture and allows for greater depth of color recognition and micro-level scaling. The new research study was published on April 18, 2022, in the top journal ACS Nano.
“This work is the first step toward our final destination: to develop a micro-scale camera for microrobots,” says assistant professor of Physics Sidong Lei, who led the research. “We illustrate the fundamental principle and feasibility to construct this new type of image sensor with emphasis on miniaturization.”
An automated system called Guardian is being developed by the Toyota Research Institute to amplify human control in a vehicle, as opposed to removing it.
Here’s the scenario: A driver falls asleep at the wheel. But their car is equipped with a dashboard camera that detects the state of the driver’s eyes, activating a safety system that promptly guides the vehicle to a safe stop.
That’s not just an idea on the drawing board. The system, called Guardian, is being refined at the Toyota Research Institute (TRI), where MIT Professor John Leonard is helping steer the group’s work, while on leave from MIT. At the MIT Mobility Forum, Leonard and Avinash Balachandran, head of TRI’s Human-Centric Driving Research Department, presented an overview of their work.
The presenters offered thoughts on multiple levels about automation and driving. Leonard and Balachandran discussed particular TRI systems while also suggesting that — after years of publicity about the possibility of fully automated vehicles — a more realistic prospect might be the deployment of technology that aids drivers, without replacing them.
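As a rough illustration of the monitoring logic in the opening scenario, the sketch below watches a simulated eye-state signal and hands control to a safe-stop routine after a sustained closure. The threshold, the simulated camera feed, and both placeholder functions are hypothetical assumptions and are not drawn from TRI's Guardian implementation.

```python
# Toy driver-monitoring loop: if the camera reports closed eyes for too long,
# hand control to a safe-stop routine. All pieces are hypothetical stand-ins.
import time
from itertools import chain, repeat

CLOSED_EYES_LIMIT_S = 2.0  # assumed threshold before intervening

# Simulated camera output: eyes open for ~2 s, then closed indefinitely.
eye_states = chain([True] * 40, repeat(False))

def perceive_eye_state() -> bool:
    """Stand-in for a camera-based eye-openness classifier."""
    return next(eye_states)

def execute_safe_stop():
    """Stand-in for the maneuver that guides the vehicle to a halt."""
    print("Guiding vehicle to a controlled stop.")

def monitor_driver():
    closed_since = None
    while True:
        now = time.monotonic()
        if perceive_eye_state():
            closed_since = None              # driver attentive; reset the timer
        elif closed_since is None:
            closed_since = now               # eyes just closed; start the timer
        elif now - closed_since > CLOSED_EYES_LIMIT_S:
            execute_safe_stop()              # sustained closure: intervene
            return
        time.sleep(0.05)                     # ~20 Hz monitoring loop

monitor_driver()
```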
Stuart Russell warns about the dangers involved in the creation of artificial intelligence. Particularly, artificial general intelligence or AGI.
The idea of an artificial intelligence that might one day surpass human intelligence has been captivating and terrifying us for decades now. Many envision what it would be like if we had the ability to create a machine that could think like a human, or even surpass us in cognitive abilities. But, as with many novel technologies, there are a few problems with building an AGI. What if we succeed? What would happen should our quest to create artificial intelligence bear fruit? How do we retain power over entities that are more intelligent than us? The answer, of course, is that nobody knows for sure. But there are some logical conclusions we can draw from examining the nature of intelligence and what kind of entities might be capable of it.
Stuart Russell is a Professor of Computer Science at the University of California, Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He outlines the definition of AI and the risks and benefits it poses for the future. According to him, AGI is the most important intellectual problem to work on.
An AGI could be used for many good and evil purposes. Although there are huge benefits to creating an AGI, there are also downsides to doing so. If we create and deploy an AGI without understanding what risks it can cause for humans and other beings in our world, we could be contributing to great disasters.
Russell also postulates that we should focus on developing a machine that learns what each of the eight billion people on Earth would like the future to be like.