Archive for the ‘robotics/AI’ category: Page 53

Feb 28, 2024

Enter the gridworld: Using geometry to detect danger in AI environments

Posted in category: robotics/AI

Spacetime is a conceptual model that fuses the three dimensions of space (length, width, and height) with the fourth dimension of time, producing a single four-dimensional geometric object. Researchers have recently applied a similar way of thinking to AI environments, leading to a unique reframing of AI problems in geometric terms.

Dr. Thomas Burns, a Ph.D. graduate and Visiting Researcher at the Okinawa Institute of Science and Technology (OIST), and Dr. Robert Tang, a mathematician at Xi’an Jiaotong-Liverpool University and a former post-doctoral researcher at OIST, wanted to study AI systems from a geometric perspective to more accurately represent their properties.
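
As a rough illustration of the setting, the sketch below models a tiny gridworld whose cells can be flagged as hazardous and checks whether an agent's state borders danger; the grid layout, labels, and adjacency-based check are illustrative assumptions, not the geometric construction used in the paper.

```python
# Toy gridworld with hazardous cells (illustrative only).
GRID = [
    "....",
    ".X..",   # 'X' marks a hazardous cell
    "...X",
]

def neighbors(row: int, col: int):
    """Yield the orthogonally adjacent cells that lie inside the grid."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]):
            yield r, c

def is_near_danger(row: int, col: int) -> bool:
    """True if the cell itself or any of its neighbors is hazardous."""
    return any(GRID[r][c] == "X" for r, c in [(row, col), *neighbors(row, col)])

print(is_near_danger(0, 1))  # True: the cell directly below is hazardous
```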


Feb 28, 2024

China issues world’s 1st legally binding verdict on copyright infringement of AI-generated images

Posted in categories: business, internet, robotics/AI

A Chinese court has ruled on a case of copyright infringement by a generative AI service, the first effective ruling of its kind globally, providing a judicial answer to the question of whether content generated by AI service providers infringes copyright, media reported on Monday.

According to the 21st Century Business Herald, the Guangzhou Internet Court ruled that an AI company had infringed the plaintiff’s copyright and adaptation rights to the Ultraman works in the process of providing generative AI services, and should bear the corresponding civil liability.

At the center of the case was the well-known Ultraman IP. The copyright owner of the Ultraman works had exclusively licensed the copyright to the series’ images to the plaintiff, while the defendant company operated a website offering AI chat and AI image-generation services.

Feb 28, 2024

New AI image generator is 8 times faster than OpenAI’s best tool — and can run on cheap computers

Posted in category: robotics/AI

Scientists used “knowledge distillation” to condense Stable Diffusion XL into a much leaner, more efficient AI image generation model that can run on low-cost hardware.
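
As a rough sketch of the general technique, the snippet below shows output-matching knowledge distillation: a frozen teacher network produces targets that a smaller student learns to reproduce. The model interfaces and the mean-squared-error objective are assumptions chosen for illustration, not the actual procedure used to compress Stable Diffusion XL.

```python
# Minimal output-matching distillation step (illustrative; generic PyTorch models).
import torch
import torch.nn as nn

def distillation_step(teacher: nn.Module, student: nn.Module,
                      optimizer: torch.optim.Optimizer,
                      noisy_latents: torch.Tensor,
                      timesteps: torch.Tensor) -> float:
    """Run one training step in which the student mimics the frozen teacher."""
    with torch.no_grad():                       # the teacher is not updated
        target = teacher(noisy_latents, timesteps)
    prediction = student(noisy_latents, timesteps)
    loss = nn.functional.mse_loss(prediction, target)  # match the teacher's output
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Trained this way, the student keeps most of the teacher's behavior while being small enough to run on cheaper hardware.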

Feb 28, 2024

Your idea is Pika’s command

Posted in category: robotics/AI

Supposedly working on getting text-to-video AI characters to talk:


The idea-to-video platform that sets your creativity in motion.

Feb 28, 2024

AI video wars heat up as Pika adds Lip Sync powered by ElevenLabs

Posted in categories: media & arts, robotics/AI

While Pika’s AI-generated videos remain arguably lower in quality and less “realistic” than those shown off by OpenAI’s Sora, or even by rival AI video generation startup Runway, the addition of the new Lip Sync feature puts it ahead of both in offering capabilities disruptive to traditional filmmaking software.

With Lip Sync, Pika is addressing one of the last remaining barriers to AI being useful for creating longer narrative films. Most other leading AI video generators don’t yet offer a similar feature natively.


Feb 27, 2024

Enhancing Lunar Exploration: Realistic Simulation of Moon Dust for Robot Operation

Posted in categories: robotics/AI, space

Joe Louca: “Think of it like a realistic video game set on the Moon – we want to make sure the virtual version of moon dust behaves just like the actual thing, so that if we are using it to control a robot on the Moon, then it will behave as we expect.”


After Neil Armstrong took his first steps on the Moon, he described the lunar regolith by saying, “It’s almost like a powder.” Astronauts on later Apollo missions found working on the lunar surface cumbersome and tedious because lunar dust is much finer than soil on Earth. So what steps can be taken to better prepare future rovers and astronauts in NASA’s Artemis program to work on the lunar surface?

This is what a recent study published in Frontiers in Space Technologies hopes to address, as a team of researchers led by the University of Bristol developed virtual models of lunar regolith simulants that could provide cost-effective methods for preparing astronauts and robots to one day work on the lunar surface.
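
For a flavor of how granular material such as regolith is often simulated, the sketch below computes a spring-dashpot contact force between two spherical grains, a standard ingredient of discrete-element methods; the parameter values and the spherical-grain simplification are assumptions for illustration, not the virtual regolith models developed in the study.

```python
# Spring-dashpot normal contact force between two spherical grains (SI units).
import numpy as np

def contact_force(pos_i, pos_j, vel_i, vel_j,
                  radius=0.001, stiffness=1.0e4, damping=5.0):
    """Repulsive force acting on grain i due to contact with grain j."""
    delta = pos_i - pos_j
    dist = np.linalg.norm(delta)
    overlap = 2.0 * radius - dist          # interpenetration depth of the grains
    if overlap <= 0.0:
        return np.zeros(3)                 # grains are not touching
    normal = delta / dist                  # unit vector pointing from j toward i
    approach_speed = np.dot(vel_j - vel_i, normal)   # > 0 when grains close in
    return (stiffness * overlap + damping * approach_speed) * normal

# Example: two 1 mm grains pressed 0.1 mm into each other
force = contact_force(np.zeros(3), np.array([0.0019, 0.0, 0.0]),
                      np.zeros(3), np.zeros(3))
print(force)   # about [-1, 0, 0] N, pushing grain i away from grain j
```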


Feb 27, 2024

Facial Recognition Meets Mental Health: MoodCapture App Identifies Depression Early

Posted in categories: biotech/medical, health, mobile phones, neuroscience, robotics/AI

Can smartphone apps be used to monitor a user’s mental health? This is what a recently submitted study scheduled to be presented at the 2024 ACM CHI Conference on Human Factors in Computing Systems hopes to address, as a collaborative team of researchers from Dartmouth College has developed a smartphone app known as MoodCapture that is capable of evaluating signs of depression in a user via the phone’s front-facing camera. This study holds the potential to help scientists, medical professionals, and patients better understand how to identify signs of depression so that proper evaluation and treatment can be provided.

For the study, the researchers enlisted 177 participants for a 90-day trial in which their phones’ front-facing cameras captured facial images throughout their daily lives, including while the participants answered the survey prompt “I have felt down, depressed, or hopeless.” All participants consented to the images being taken at random times, not only when they used the camera to unlock their phones. During the study period, the researchers obtained more than 125,000 images and also accounted for the surrounding environment in their final analysis. In the end, the researchers found that MoodCapture identified early signs of depression with 75 percent accuracy.
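
As a loose sketch of what an image-based classifier and its accuracy figure might look like in code, the snippet below pairs a generic convolutional backbone with a two-class head and a simple accuracy metric; the backbone, label scheme, and input sizes are assumptions, not the MoodCapture model itself.

```python
# Generic two-class image classifier plus an accuracy metric (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # lightweight convolutional backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # e.g. "signs of depression" vs. not

def accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of images whose predicted class matches the self-report label."""
    return (logits.argmax(dim=1) == labels).float().mean().item()

# Dummy batch of eight 224x224 RGB face crops with random labels
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(accuracy(model(images), labels))
```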

“This is the first time that natural ‘in-the-wild’ images have been used to predict depression,” said Dr. Andrew Campbell, who is a professor in the Computer Science Department at Dartmouth and a co-author on the study. “There’s been a movement for digital mental-health technology to ultimately come up with a tool that can predict mood in people diagnosed with major depression in a reliable and non-intrusive way.”

Feb 27, 2024

India completes critical test for Gaganyaan flight crewed by humanoid robot later this year

Posted in categories: robotics/AI, space travel

“Vyomitra” will be the robotic astronaut aboard the first Gaganyaan test flight, scheduled for later this year.

Feb 27, 2024

Frontiers: Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems

Posted in categories: biotech/medical, information science, neuroscience, robotics/AI, supercomputing

And this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); and second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as the porting of deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses for achieving these specifications. Even the existing supercomputing platforms are unable to demonstrate full cortex simulation in real-time with the complex detailed neuron models. For example, for mouse-scale (2.5 × 10^6 neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10^10 neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10^18 flops) and as much power as a quarter-million households (0.5 GW).

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data processing requirements. Neuromorphic computing is an alternative solution that is inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles of the physics of neural computation that are fundamentally different from digital principles in traditional computing has initiated investigations in the field of neuromorphic engineering (NE) (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulations.
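
As a concrete example of the kind of simplified neuron that such silicon emulators implement, the sketch below simulates a leaky integrate-and-fire neuron in software; the membrane parameters and input current are illustrative assumptions rather than figures from any particular chip.

```python
# Leaky integrate-and-fire neuron simulated in discrete time (illustrative values).
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, r_m=1.0e7,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Integrate the membrane voltage and return the spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) / tau   # leak toward rest plus input drive
        v += dv * dt
        if v >= v_thresh:                         # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                           # reset the membrane voltage
    return spike_times

# Example: a constant 2 nA input for 100 ms produces a regular spike train
print(simulate_lif(np.full(1000, 2e-9)))
```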

Feb 27, 2024

Researchers develop powerful optical neuromorphic processor

Posted in categories: biological, robotics/AI, transportation

An international team of researchers, led by Swinburne University of Technology, demonstrated what it claimed is the world’s fastest and most powerful optical neuromorphic processor for artificial intelligence (AI). It operates at more than 10 trillion operations per second (10 TeraOPS) and is capable of processing ultra-large-scale data.

The researchers said this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general. It could benefit autonomous vehicles and data-intensive machine learning tasks such as computer vision.

Artificial neural networks can ‘learn’ and perform complex operations with wide applications. Inspired by the biological structure of the brain’s visual cortex system, artificial neural networks extract key features of raw data to predict properties and behaviour with unprecedented accuracy and simplicity.
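
To put a figure like 10 TeraOPS in perspective, the short calculation below estimates the operation count of a single dense neural-network layer; the layer width and input rate are hypothetical, chosen only to show how quickly such workloads reach the trillions-of-operations-per-second range.

```python
# Back-of-the-envelope operation count for one dense layer (hypothetical sizes).
inputs, outputs = 1024, 1024
macs_per_frame = inputs * outputs                        # one multiply-accumulate per weight
frames_per_second = 1_000_000                            # assumed input rate
ops_per_second = 2 * macs_per_frame * frames_per_second  # count each MAC as two operations
print(f"{ops_per_second / 1e12:.1f} TeraOPS")            # ~2.1 TeraOPS for this one layer
```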
