
Nov 12, 2022

Researchers At Stanford Have Developed An Artificial Intelligence (AI) Approach Called ‘MEND’ For Fast Model Editing At Scale

Posted by in category: robotics/AI

Large models have improved performance on a wide range of modern computer vision and, in particular, natural language processing problems. However, issuing patches to adjust model behavior after deployment is a significant challenge in deploying and maintaining such models. Because of the distributed nature of the model’s representations, when a neural network produces an undesirable output, making a localized update to correct its behavior for a single input or small number of inputs is difficult. For example, a large language model trained in 2019 might assign a higher probability to Theresa May than to Boris Johnson when prompted with “Who is the Prime Minister of the United Kingdom?”

An ideal model editing procedure would quickly update the model parameters to increase the relative likelihood of Boris Johnson without affecting the model’s output for unrelated inputs. Such a procedure would yield edits with reliability, successfully changing the model’s output on the problematic input (e.g., Who is the Prime Minister of the United Kingdom?); locality, leaving the model’s output for unrelated inputs unchanged (e.g., What sports team does Messi play for?); and generality, producing the correct output for inputs related to the edit input (e.g., Who is the UK’s PM?). The simplest way to make such an edit is to fine-tune with the new label on the single example to be corrected. However, fine-tuning on a single sample tends to overfit, even when the distance between the pre- and post-fine-tuning parameters is limited.

This overfitting causes both locality and generality failures. While fine-tuning on the edit example while continuing training on the original training set improves locality, their experiments show that it still lacks generality. Furthermore, it requires continuous access to the entire training set at test time and is more computationally demanding. Recent research has instead looked into methods for learning to make model edits. The researchers present a bi-level meta-learning objective for finding a model initialization for which standard fine-tuning on a single edit example yields useful modifications.
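The reliability/locality tension can be seen even in a toy setting. Below is a minimal numpy sketch (not MEND itself; the "model" is a linear scorer and the feature vectors are made up for illustration): naively fine-tuning on the single corrected example succeeds on the edit, but also shifts the output on an unrelated prompt that happens to share features — a locality failure.

```python
import numpy as np

# Toy stand-in for a language model: a linear layer scoring two answers
# ("Theresa May" = index 0, "Boris Johnson" = index 1).
# The edit prompt and an unrelated prompt deliberately share feature 1.
W = np.zeros((2, 3))                      # weights: (answers, features)
x_edit = np.array([1.0, 1.0, 0.0])        # "Who is the PM of the UK?"
x_unrelated = np.array([0.0, 1.0, 1.0])   # "What team does Messi play for?"

def probs(W, x):
    z = W @ x
    e = np.exp(z - z.max())               # stable softmax
    return e / e.sum()

before = probs(W, x_unrelated)

# Naive "edit": fine-tune on the single corrected example (label = 1).
for _ in range(100):
    p = probs(W, x_edit)
    grad = np.outer(p - np.array([0.0, 1.0]), x_edit)  # cross-entropy grad
    W -= 0.5 * grad

after = probs(W, x_unrelated)

print(probs(W, x_edit)[1])        # reliability: edit succeeded (near 1)
print(before[1], after[1])        # locality failure: unrelated output drifted
```

The edit is reliable, but because the two prompts overlap in feature space, the unrelated answer distribution drifts substantially — exactly the overfitting problem the meta-learning objective is designed to avoid.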

Nov 12, 2022

Metaverse ‘yada yada yada’ — Anything World raises $7.5m for its AI animation tool

Posted by in category: robotics/AI

The company’s technology essentially lets users take a pre-existing static 3D model and bring it to life. So, if you’re building a 3D forest in a virtual world, and you’ve got some 3D models of what you want the animals to look like, Anything World’s machine learning-powered tech will put a virtual skeleton in that animal, allowing it to move in a lifelike way.

The round comes amid waning interest in metaverse investments this year, according to data from Dealroom. Investment into startups tagged under “metaverse” on its platform dropped from a high of $2.8bn in Q2 to $446m in Q3, as low user interest affects previously hyped platforms and Mark Zuckerberg’s Meta lays off 11k employees.

Anything World cofounder Sebastian Hofer says that, while many investors have been seduced by the metaverse hype in the last year, his company is building a tool that’s also useful to clients who have no interest in jumping on the Zuckerberg bandwagon.

Nov 12, 2022

Google AI Researchers Propose An Artificial Intelligence-Based Method For Learning Perpetual View Generation of Natural Scenes Solely From Single-View Photos

Posted by in category: robotics/AI

Our earth is gorgeous, with majestic mountains, breathtaking seascapes, and tranquil forests. Picture yourself taking in this splendor as a bird might, flying past intricately detailed, three-dimensional landscapes. Is it possible for computers to learn to recreate this kind of visual experience? Current techniques that synthesize new perspectives from photos, however, typically allow only a small amount of camera motion: most earlier research can only extrapolate scene content within a constrained range of views corresponding to a subtle head movement.

In recent research, Google Research, Cornell Tech, and UC Berkeley presented a technique for learning to create unrestricted flythrough videos of natural scenes starting from a single view. This capacity is learned from a collection of single images, without the need for camera poses or even multiple views of each scene. At test time, the method can take a single image and construct long camera trajectories of hundreds of new views with realistic and varied content, despite never having seen a video during training. It contrasts with the most recent state-of-the-art supervised view generation techniques, which demand posed multi-view videos, and exhibits better performance and synthesis quality.

The fundamental concept is that they learn to generate flythroughs gradually. Using single-image depth prediction techniques, they first compute a depth map from a starting view, such as the first image in the figure below. They then use that depth map to render the image to a new camera viewpoint, as illustrated in the middle, and synthesize a new image and depth map from that viewpoint.
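The render-refine-repeat loop can be sketched as follows. This is a deliberately crude numpy illustration, not the paper's method: `predict_depth` and `refine` are trivial stand-ins for the learned networks, and the ramp depth map, disparity scaling, and left-fill hole filling are all made up for illustration.

```python
import numpy as np

def predict_depth(image):
    # Stand-in for a single-image depth network: a fixed left-to-right ramp.
    h, w = image.shape
    return np.tile(np.linspace(2.0, 4.0, w), (h, 1))

def render_to_new_view(image, depth, baseline=1.0):
    # Forward-warp pixels horizontally by a disparity ~ baseline / depth.
    h, w = image.shape
    out = np.full((h, w), np.nan)         # NaN marks disocclusion holes
    disparity = np.round(baseline * 4.0 / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

def refine(image):
    # Stand-in for the refinement network: fill holes with the nearest
    # valid pixel to the left.
    out = image.copy()
    for y in range(out.shape[0]):
        last = 0.0
        for x in range(out.shape[1]):
            if np.isnan(out[y, x]):
                out[y, x] = last
            else:
                last = out[y, x]
    return out

# Render-refine-repeat: each refined frame seeds the next step,
# so a long trajectory grows from a single starting view.
frame = np.arange(64, dtype=float).reshape(8, 8)
frames = [frame]
for _ in range(5):
    depth = predict_depth(frames[-1])
    warped = render_to_new_view(frames[-1], depth)
    frames.append(refine(warped))
print(len(frames))  # 6 frames generated from one input view
```

The key structural point survives even in this toy: because the refined output is a complete image-plus-depth pair, the loop can be iterated indefinitely, which is what allows trajectories of hundreds of views.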

Nov 12, 2022

Max Planck AI Researchers Have Developed Bio-Realistic Artificial Neurons That Can Work In A Biological Environment And Can Produce Diverse Spiking Dynamics

Posted by in categories: biological, chemistry, robotics/AI

The development of neuromorphic electronics depends on effectively mimicking neurons. But artificial neurons are not yet capable of operating in biological environments. Organic artificial neurons based on conventional circuit oscillators have been created, but they require many elements for their implementation. Here, an organic artificial neuron based on a compact nonlinear electrochemical element has been reported. This artificial neuron is sensitive to the concentration of biological species in its surroundings and can operate in a liquid. The system offers in situ operation, spiking behavior, and ion specificity in biologically relevant conditions, including normal physiological and pathological concentration ranges. While variations in ionic and biomolecular concentrations regulate the neuronal excitability, small-amplitude oscillations and noise in the electrolytic medium alter the dynamics of the neuron. A biohybrid interface is created in which an artificial neuron functions synergistically, in real time, with biological membranes and epithelial cells.

Neurons are the basic units of the nervous system, used to transmit and process electrochemical signals. They operate in a liquid electrolytic medium and communicate via gaps between the axons of presynaptic neurons and the dendrites of postsynaptic neurons. For effective brain-inspired computing, neuromorphic computing leverages hardware-based solutions that imitate the behavior of synapses and neurons. Neuron-like dynamics can be established with conventional microelectronics by using oscillatory circuit topologies to mimic neuronal behaviors. However, these approaches can mimic only specific aspects of neuronal behavior and require integrating many transistors and passive electronic components, resulting in a bulky biomimetic circuit unsuitable for direct in situ biointerfacing. Volatile and nonlinear devices based on spin torque oscillators or memristors can increase the integration density and emulate neuronal dynamics.
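To make the concentration dependence concrete, here is a minimal leaky integrate-and-fire sketch in which the input drive grows with the logarithm of an ion concentration (a Nernst-like dependence). All constants and the concentration-to-current mapping are hypothetical, chosen for illustration rather than taken from the paper.

```python
import numpy as np

def simulate_lif(ion_conc_mM, t_steps=2000, dt=1e-4):
    """Leaky integrate-and-fire neuron whose drive scales with the log
    of an ion concentration — a crude stand-in for the electrochemical
    element's ion sensitivity. Returns the spike count."""
    tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
    # Hypothetical mapping: input current grows with log10 concentration.
    i_in = 0.8 + 0.4 * np.log10(ion_conc_mM / 1.0)
    v, spikes = v_rest, 0
    for _ in range(t_steps):
        v += dt / tau * (-(v - v_rest) + 2.0 * i_in)  # leaky integration
        if v >= v_thresh:
            spikes += 1
            v = v_reset                                # fire and reset
    return spikes

# Higher extracellular concentration -> stronger drive -> higher spike rate,
# mirroring how excitability tracks ionic concentration in the device.
low = simulate_lif(ion_conc_mM=5.0)    # e.g., a normal physiological level
high = simulate_lif(ion_conc_mM=50.0)  # e.g., a pathological elevation
print(low, high)
```

The qualitative behavior (spike rate modulated by concentration across physiological and pathological ranges) is the point; the real device achieves it electrochemically rather than through an explicit equation.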

Nov 12, 2022

How GPT-3 Is Writing The Future Of Artificial Intelligence

Posted by in categories: education, robotics/AI

A new artificial intelligence tool called GPT-3 has recently been created, and it’s able to perform some tasks better than humans can. That’s because GPT-3 isn’t taught or trained by…

Nov 12, 2022

GPT-4 Rumors From Silicon Valley

Posted by in category: robotics/AI

But for two years OpenAI has been super shy about GPT-4—letting out info in dribs and drabs and remaining silent for the most part.

Not anymore.

People have been talking these months. What I’ve heard from several sources: GPT-4 is almost ready and will be released (hopefully) sometime December-February.

Nov 12, 2022

AI uses artificial sleep to learn new task without forgetting the last

Posted by in category: robotics/AI

Many AIs can only become good at one task, forgetting everything they know if they learn another. A form of artificial sleep could help stop this from happening.
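The forgetting-versus-rehearsal effect described here can be demonstrated with a deterministic toy: a one-parameter linear model trained on two conflicting tasks, sequentially versus with interleaved replay of stored task-A examples. Rehearsal is only one simple stand-in for the "artificial sleep" idea; the tasks and constants below are made up for illustration.

```python
def train(w, data, lr=0.1, epochs=200):
    # Plain SGD on squared error for a 1-D linear model y = w * x.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

task_a = [(1.0, 2.0)]    # task A: learn y = 2x
task_b = [(1.0, -2.0)]   # task B: learn y = -2x (conflicts with A)

# Sequential training: task B completely overwrites task A.
w_seq = train(train(0.0, task_a), task_b)

# Interleaved replay of stored task-A examples while learning task B.
w_replay = train(train(0.0, task_a), task_a + task_b)

err_a_seq = abs(w_seq * 1.0 - 2.0)       # error on task A after sequential
err_a_replay = abs(w_replay * 1.0 - 2.0) # error on task A with replay
print(err_a_seq, err_a_replay)
```

Sequential training drives the weight all the way to the task-B solution, wiping out task A; replay instead settles at a compromise, retaining far more of task A (the tasks here conflict directly, so no single weight can satisfy both exactly).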

Nov 12, 2022

AI Researchers from the Netherlands Propose a Machine Learning-based Method to Design New Complex Metamaterials with Useful Properties

Posted by in categories: chemistry, robotics/AI, solar power, space, sustainability

Combinatorial problems often arise in puzzles, origami, and metamaterial design. Such problems have rare collections of solutions that generate intricate and distinct boundaries in configuration space. Capturing these boundaries with standard statistical and numerical techniques is often quite challenging. One such combinatorial question: is it possible to flatten a 3D origami piece without causing damage? Because each fold needs to be consistent with flattening, such outcomes are difficult to predict simply by glancing at the design. Researchers at the UvA Institute of Physics and the research center AMOLF have shown that such questions can be answered more effectively and precisely using machine learning techniques.
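The combinatorial flavor of the flat-folding question can be illustrated with a classical local rule, Kawasaki's theorem: a single-vertex crease pattern can fold flat only if the alternating sums of its consecutive sector angles each equal 180 degrees. This is not the paper's method (which trains CNNs on metamaterial configurations), just a small concrete instance of the kind of hidden rule such problems obey.

```python
def kawasaki_flat_foldable(angles_deg, tol=1e-9):
    """Single-vertex flat-foldability test via Kawasaki's theorem:
    with an even number of creases whose sector angles tile the full
    circle, the alternating angle sum must vanish (equivalently, the
    odd- and even-indexed angles each sum to 180 degrees)."""
    if len(angles_deg) % 2 != 0:
        return False                      # odd vertex degree cannot fold flat
    if abs(sum(angles_deg) - 360.0) > tol:
        return False                      # angles must surround the vertex
    alt = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles_deg))
    return abs(alt) < tol

print(kawasaki_flat_foldable([90.0, 90.0, 90.0, 90.0]))    # True
print(kawasaki_flat_foldable([100.0, 80.0, 80.0, 100.0]))  # True
print(kawasaki_flat_foldable([120.0, 60.0, 90.0, 90.0]))   # False
```

A design can satisfy this check at every vertex and still fail globally, which is exactly why the solution set forms the intricate configuration-space boundaries the article describes.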

Despite employing severely undersampled training sets, Convolutional Neural Networks (CNNs) can learn to distinguish these boundaries for metamaterials in minute detail. This raises the possibility of complex material design, indicating that the network infers the underlying combinatorial rules from the sparse training set. The research team thinks this will facilitate the development of sophisticated, functional metamaterials with artificial intelligence. Their recent study examined how accurately the characteristics of these combinatorial mechanical metamaterials can be forecast using artificial intelligence. The work has been published in Physical Review Letters.

Metamaterials are engineered materials whose attributes are governed by their geometrical structure rather than their chemical makeup. Origami is one such metamaterial: the capacity of an origami piece to flatten is governed by how it is folded, i.e., its structure, and not by the sort of paper it is made of. More generally, clever design enables us to accurately regulate a metamaterial's bending, buckling, or bulging. This can be used for many different things, from satellite solar panels that unfurl to shock absorbers.

Nov 12, 2022

Scientists found a way for people with paralysis to walk again

Posted by in categories: biotech/medical, neuroscience

Scientists have managed to do what many might have thought impossible. According to new research published in the journal Nature, a group of researchers from the Swiss research group NeuroRestore was able to identify neurons that could restore the ability to walk in paralyzed individuals. The researchers published their findings back in September.

Nov 12, 2022

NASA’s Successful Launch, Deployment, and Retrieval of LOFTID — An Innovative Inflatable Heat Shield

Posted by in categories: government, satellites

On the morning of November 10, an Atlas V rocket launched JPSS-2, NOAA’s newest environmental satellite, into orbit. Hitching a ride on the rocket was NASA’s LOFTID, an inflatable heat shield demonstration.
