Blog

Nov 12, 2022

AI’s new frontier: Connecting grieving loved ones with the deceased

Posted by in categories: robotics/AI, virtual reality

How scientists are using virtual reality to create avatars, chatbots and even eternal digital entities.

Nov 12, 2022

The human touch: ‘Artificial General Intelligence’ is next phase of AI

Posted by in categories: military, robotics/AI

Artificial intelligence is rapidly transforming all sectors of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using AI.

For better or worse, the same is true of the very character of warfare. This is why the Department of Defense – like its counterparts in China and Russia – is investing billions of dollars to develop and integrate AI into defense systems. It's also why DoD is now embracing initiatives that envision future technologies, including the next phase of AI – artificial general intelligence.

AGI is the ability of an intelligent agent to understand or learn any intellectual task in the same way humans do. Unlike today's AI, which relies on ever-expanding datasets to perform more complex tasks, AGI will exhibit the same attributes as those associated with the human brain, including common sense, background knowledge, transfer learning, abstraction, and causality. Of particular interest is the human ability to generalize from scanty or incomplete input.

Nov 12, 2022

Imgur: The magic of the Internet

Posted by in category: internet

The magic of the Internet.

Nov 12, 2022

Researchers At Stanford Have Developed An Artificial Intelligence (AI) Approach Called ‘MEND’ For Fast Model Editing At Scale

Posted by in category: robotics/AI

Large models have improved performance on a wide range of modern computer vision and, in particular, natural language processing problems. However, issuing patches to adjust model behavior after deployment is a significant challenge in deploying and maintaining such models. Because of the distributed nature of a neural network's representations, when the model produces an undesirable output, it is difficult to make a localized update that corrects its behavior on a single input or a small number of inputs. For example, a large language model trained in 2019 might, when prompted with "Who is the Prime Minister of the United Kingdom?", assign a higher probability to Theresa May than to Boris Johnson.

An ideal model editing procedure would quickly update the model parameters to increase the relative likelihood of Boris Johnson without affecting the model's output for unrelated inputs. Such a procedure would yield edits with reliability, successfully changing the model's output on the problematic input (e.g., "Who is the Prime Minister of the United Kingdom?"); locality, leaving the model's output for unrelated inputs (e.g., "What sports team does Messi play for?") unchanged; and generality, producing the correct output for inputs related to the edit input (e.g., rephrasings such as "Who is the UK's PM?"). A seemingly simple way to make such an edit is to fine-tune with a new label on the single example to be corrected. However, fine-tuning on a single sample tends to overfit, even when the distance between the pre- and post-fine-tuning parameters is limited.

Overfitting causes both locality and generality failures. While fine-tuning on the edit example alongside continued training on the original training set improves locality, the researchers' experiments show that it still lacks generality. Furthermore, it requires continuous access to the entire training set at test time and is more computationally demanding. As an alternative, recent research has explored methods for learning to make model edits. The researchers present a bi-level meta-learning objective for determining a model initialization for which standard fine-tuning on a single edit example yields valuable modifications.
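To make the three criteria concrete, here is a deliberately tiny sketch in plain NumPy of a naive single-example fine-tuning edit. The classifier, inputs, and names are all invented for illustration; this is not the MEND method itself, only the setting it operates in:

```python
import numpy as np

# Toy sketch of the three editing criteria (reliability, locality,
# generality) under a naive single-example fine-tuning edit. The "model"
# is a tiny softmax classifier; this is NOT the MEND algorithm.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(W, x):
    return int(np.argmax(W @ x))

def finetune_single_example(W, x, y, lr=0.5, steps=50):
    """Naive edit: gradient steps on one (x, y) pair only."""
    W = W.copy()
    for _ in range(steps):
        p = softmax(W @ x)
        p[y] -= 1.0                  # gradient of cross-entropy wrt logits
        W -= lr * np.outer(p, x)
    return W

x_edit = np.array([1.0, 0.0, 0.2])   # the problematic input
x_para = np.array([0.9, 0.1, 0.2])   # a rephrasing (tests generality)
x_unrel = np.array([0.0, 1.0, 0.0])  # an unrelated input (tests locality)

W = rng.normal(size=(3, 3))
old_unrel = predict(W, x_unrel)

W_edited = finetune_single_example(W, x_edit, y=2)

print("reliability:", predict(W_edited, x_edit) == 2)
print("locality:", predict(W_edited, x_unrel) == old_unrel)
print("generality:", predict(W_edited, x_para) == 2)
```

In this toy setup locality happens to hold exactly because the unrelated input is orthogonal to the edited one; in a real network, the overlap of distributed representations is precisely what makes locality hard.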

Nov 12, 2022

Metaverse ‘yada yada yada’ — Anything World raises $7.5m for its AI animation tool

Posted by in category: robotics/AI

The company’s technology essentially lets users take a pre-existing static 3D model and bring it to life. So, if you’re building a 3D forest in a virtual world, and you’ve got some 3D models of what you want the animals to look like, Anything World’s machine learning-powered tech will put a virtual skeleton in that animal, allowing it to move in a lifelike way.
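As a rough illustration of what "putting a virtual skeleton in" a model involves: a rig is a hierarchy of bones whose transforms compose from parent to child, so posing one bone moves everything attached below it. This minimal sketch (invented names, rotations omitted for brevity) shows only the hierarchy idea, not Anything World's actual pipeline:

```python
# Minimal bone hierarchy: each bone stores an offset relative to its
# parent, and a world position is found by walking up the chain.

class Bone:
    def __init__(self, name, offset, parent=None):
        self.name = name
        self.offset = offset          # (x, y, z) relative to parent
        self.parent = parent

    def world_position(self):
        # Sum offsets up the hierarchy (rotations omitted for brevity).
        x, y, z = self.offset
        if self.parent is not None:
            px, py, pz = self.parent.world_position()
            x, y, z = x + px, y + py, z + pz
        return (x, y, z)

root = Bone("spine", (0.0, 1.0, 0.0))
leg = Bone("leg", (0.2, -0.5, 0.0), parent=root)
paw = Bone("paw", (0.0, -0.5, 0.0), parent=leg)

print(paw.world_position())
```

Moving the root bone's offset would carry the leg and paw with it, which is what lets a rigged model move "in a lifelike way" once animation curves drive the bones.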

The round comes amid waning interest in metaverse investments this year, according to data from Dealroom. Investment into startups tagged under “metaverse” on its platform dropped from a high of $2.8bn in Q2 to $446m in Q3, as low user interest affects previously hyped platforms and Mark Zuckerberg’s Meta lays off 11k employees.

Anything World cofounder Sebastian Hofer says that, while many investors have been seduced by the metaverse hype in the last year, his company is building a tool that’s also useful to clients who have no interest in jumping on the Zuckerberg bandwagon.

Nov 12, 2022

Google AI Researchers Propose An Artificial Intelligence-Based Method For Learning Perpetual View Generation of Natural Scenes Solely From Single-View Photos

Posted by in category: robotics/AI

Our earth is gorgeous, with majestic mountains, breathtaking seascapes, and tranquil forests. Picture yourself taking in this splendor as a bird might, flying past intricately detailed, three-dimensional landscapes. Can computers learn to recreate this kind of visual experience? Current techniques that synthesize new perspectives from photos typically allow only a small amount of camera motion: most earlier work can extrapolate scene content only within a limited range of views corresponding to a subtle head movement.

In recent research, Google Research, Cornell Tech, and UC Berkeley presented a technique for learning to generate unrestricted flythrough videos of natural scenes starting from a single view. This capability is learned from a collection of single photographs, without requiring camera poses or multiple views of each scene. At test time, the method can take a single image and generate long camera trajectories comprising hundreds of new views with realistic and varied content, despite never having seen a video during training. It contrasts with recent state-of-the-art supervised view-generation techniques, which require posed multi-view videos, and it exhibits better performance and synthesis quality.

The fundamental concept is that they learn to generate flythroughs step by step. Using a single-image depth prediction technique, they first compute a depth map from the starting view. They then use that depth map to render the image from a new camera viewpoint, producing a new image and depth map from that viewpoint, which in turn seed the next step.
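The render-and-repeat loop described above can be sketched with stub functions standing in for the learned components. The stubs and names here are assumptions for illustration; in the real system, depth prediction and refinement are neural networks:

```python
# Sketch of a render-refine-repeat flythrough loop. Each stub stands in
# for a learned component in the actual method.

def predict_depth(image):
    # Stub: constant depth per pixel; a real system runs a learned
    # single-image depth-prediction network here.
    return [1.0] * len(image)

def render_to_new_view(image, depth, shift=1):
    # Stub: a cyclic shift stands in for re-rendering the image from a
    # moved camera using the depth map.
    return image[shift:] + image[:shift]

def refine(rendered):
    # Stub: identity; a real system would inpaint disocclusions and
    # sharpen detail with a refinement network.
    return rendered

def fly_through(start_image, num_frames):
    frames = []
    image = list(start_image)
    for _ in range(num_frames):
        depth = predict_depth(image)
        image = refine(render_to_new_view(image, depth))
        frames.append(image)          # each output seeds the next step
    return frames

frames = fly_through([0, 1, 2, 3], num_frames=3)
print(frames[-1])
```

The key structural point is the feedback: each synthesized frame becomes the input for the next depth-and-render step, which is how a single photo can be extended into a trajectory of hundreds of views.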

Nov 12, 2022

Max Planck AI Researchers Have Developed Bio-Realistic Artificial Neurons That Can Work In A Biological Environment And Can Produce Diverse Spiking Dynamics

Posted by in categories: biological, chemistry, robotics/AI

The development of neuromorphic electronics depends on effectively mimicking neurons. But conventional artificial neurons aren't capable of operating in biological environments. Organic artificial neurons based on conventional circuit oscillators have been created, but they require many elements for their implementation. Here, an organic artificial neuron based on a compact nonlinear electrochemical element has been reported. This artificial neuron is sensitive to the concentration of biological species in its surroundings and can also operate in a liquid. The system offers in-situ operation, spiking behavior, and ion specificity under biologically relevant conditions, including normal physiological and pathological concentration ranges. While variations in ionic and biomolecular concentrations regulate the neuronal excitability, small-amplitude oscillations and noise in the electrolytic medium alter the dynamics of the neuron. A biohybrid interface is created in which an artificial neuron functions synergistically with biological membranes and epithelial cells in real time.

Neurons are the basic units of the nervous system, used to transmit and process electrochemical signals. They operate in a liquid electrolytic medium and communicate via gaps between the axons of presynaptic neurons and the dendrites of postsynaptic neurons. For effective brain-inspired computing, neuromorphic computing leverages hardware-based solutions that imitate the behavior of synapses and neurons. Neuron-like dynamics can be established with conventional microelectronics by using oscillatory circuit topologies to mimic neuronal behaviors. However, these approaches can mimic only specific aspects of neuronal behavior, and they do so by integrating many transistors and passive electronic components, resulting in a bulky biomimetic circuit unsuitable for direct in-situ biointerfacing. Volatile and nonlinear devices based on spin-torque oscillators or memristors can increase integration density and emulate neuronal dynamics.
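For readers unfamiliar with "spiking behavior", the standard textbook leaky integrate-and-fire model illustrates it: a membrane variable integrates input, leaks over time, and fires when it crosses a threshold. This is a generic didactic model, not the organic electrochemical neuron described above:

```python
# Leaky integrate-and-fire neuron: integrate input with leak, fire and
# reset on crossing threshold. A textbook illustration of spiking.

def lif_spikes(input_current, threshold=1.0, leak=0.9, steps=100):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = leak * v + input_current   # integrate with leak
        if v >= threshold:             # fire and reset
            spikes += 1
            v = 0.0
    return spikes

# Weak drive never reaches threshold; stronger drive fires periodically.
print(lif_spikes(0.05), lif_spikes(0.3))
```

Raising the drive current, loosely analogous to raising an ionic concentration around the artificial neuron, pushes the membrane variable over threshold more often and increases the firing rate, which is the kind of concentration-dependent excitability the article describes.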

Nov 12, 2022

How GPT-3 Is Writing The Future Of Artificial Intelligence

Posted by in categories: education, robotics/AI

A new artificial intelligence tool called GPT-3 has recently been created, and it’s able to perform some tasks better than humans can. That’s because GPT-3 isn’t taught or trained by…

Nov 12, 2022

GPT-4 Rumors From Silicon Valley

Posted by in category: robotics/AI

But for two years OpenAI has been super shy about GPT-4—letting out info in dribs and drabs and remaining silent for the most part.

Not anymore.

People have been talking these past months. What I've heard from several sources: GPT-4 is almost ready and will be released (hopefully) sometime between December and February.

Nov 12, 2022

AI uses artificial sleep to learn new task without forgetting the last

Posted by in category: robotics/AI

Many AIs can only become good at one task, forgetting everything they know if they learn another. A form of artificial sleep could help stop this from happening.
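The failure mode described here is known as catastrophic forgetting, and it can be shown with a deliberately tiny model: fit one parameter to task A, then to task B with no replay, and task A is lost. This sketch is purely didactic; the "artificial sleep" approach in the article is a proposed remedy and is not implemented here:

```python
# Catastrophic forgetting in one parameter: sequential training on task B
# overwrites what was learned for task A.

def train(w, pairs, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in pairs:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # target: y = 2x
task_b = [(1.0, -1.0), (2.0, -2.0)]  # target: y = -x

w = train(0.0, task_a)
err_a_before = abs(w * 1.0 - 2.0)    # near zero after task A training
w = train(w, task_b)                 # no replay of task A
err_a_after = abs(w * 1.0 - 2.0)     # task A error grows after task B
print(err_a_before, err_a_after)
```

With no mechanism to revisit task A, the single parameter converges to task B's solution, so performance on task A collapses; sleep-like replay methods interleave old experience to prevent exactly this.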