Archive for the ‘robotics/AI’ category: Page 1737

Mar 5, 2020

Researcher Develops a Machine to Allow Psychonauts to Explore the DMT Realm

Posted by in categories: robotics/AI, space

Feature image ‘Psychonaut’ courtesy of Tetramode.

Mar 5, 2020

AGI: How to Ensure Benevolence in Synthetic Superintelligence

Posted by in category: robotics/AI

Devising an effective AGI value-loading system should be of the utmost importance, especially if AGI is only years away. In the early stage of the transition to a radically superintelligent civilization, we might use a Naturalization Protocol Simulation to teach AGIs our human norms and values, and ultimately interlink with them to form the globally distributed Syntellect, a civilizational superintelligence. Chances are AGIs and postbiological humans will peacefully coexist and thrive, though I doubt we will be able to tell which is which.

Mar 5, 2020

Engineers develop miniaturized ‘warehouse robots’ for biotechnology applications

Posted by in categories: biotech/medical, robotics/AI

UCLA engineers have developed minuscule warehouse logistics robots that could help expedite and automate medical diagnostic technologies and other applications that move and manipulate tiny drops of fluid. The study was published in Science Robotics.

The robots are disc-shaped magnets about 2 millimeters in diameter, designed to work together to move and manipulate droplets of blood or other fluids with precision. For example, the robots can cleave one large droplet of fluid into smaller drops of equal volume for consistent testing. They can also move droplets into preloaded testing trays to check for signs of disease. The research team calls these robots “ferrobots” because they are powered by magnetism.

Mar 5, 2020

Stanford’s AI Index Report: How Much Is BS?

Posted by in categories: economics, engineering, health, information science, law, mobile phones, robotics/AI, sustainability, transportation

Another important question is the extent to which continued increases in computational capacity are economically viable. The Stanford Index reports a 300,000-fold increase in capacity since 2012. But in the same month that the Report was issued, Jerome Pesenti, Facebook’s AI head, warned that “The rate of progress is not sustainable…If you look at top experiments, each year the cost is going up 10-fold. Right now, an experiment might be in seven figures but it’s not going to go to nine or 10 figures, it’s not possible, nobody can afford that.”
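Pesenti's arithmetic is easy to check: at tenfold annual growth, a seven-figure experiment crosses into nine figures in just two years. The $1M starting cost below is an illustrative assumption, not a Facebook figure.

```python
# Sketch of Pesenti's warning: top-experiment costs growing 10x per year.
# The $1M starting point is a placeholder for "seven figures".
cost = 1_000_000  # seven figures
for year in range(1, 4):
    cost *= 10
    print(f"year {year}: ${cost:,} ({len(str(cost))} figures)")
# Two years of 10x growth already lands at $100,000,000 -- nine figures.
```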

AI has feasted on low-hanging fruit, like search engines and board games. Now comes the hard part — distinguishing causal relationships from coincidences, making high-level decisions in the face of unfamiliar ambiguity, and matching the wisdom and common sense that humans acquire by living in the real world. These are the capabilities that are needed in complex applications such as driverless vehicles, health care, accounting, law, and engineering.

Despite the hype, AI has had very little measurable effect on the economy. Yes, people spend a lot of time on social media and playing ultra-realistic video games. But does that boost or diminish productivity? Technology in general and AI in particular are supposed to be creating a new New Economy, where algorithms and robots do all our work for us, increasing productivity by unheard-of amounts. The reality has been the opposite. For decades, U.S. productivity grew by about 3% a year. Then, after 1970, it slowed to 1.5% a year, then 1%, now about 0.5%. Perhaps we are spending too much time on our smartphones.
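Because growth rates compound, the slowdown matters far more than the small percentages suggest. The doubling times implied by each rate follow directly from the standard formula:

```python
import math

def doubling_time(rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

# The productivity growth rates cited above: 3%, 1.5%, 1%, 0.5% per year.
for rate in (0.03, 0.015, 0.01, 0.005):
    print(f"{rate:.1%} growth -> output doubles in ~{doubling_time(rate):.0f} years")
# At 3% a year, output doubles in about 23 years; at 0.5%, it takes about 139.
```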

Mar 4, 2020

Robot uses artificial intelligence and imaging to draw blood

Posted by in categories: biotech/medical, health, robotics/AI

Rutgers engineers have created a tabletop device that combines a robot, artificial intelligence and near-infrared and ultrasound imaging to draw blood or insert catheters to deliver fluids and drugs.

Their most recent research results, published in the journal Nature Machine Intelligence, suggest that autonomous systems like this image-guided robot could outperform people on some complex medical tasks.

Medical robots could reduce injuries and improve the efficiency and outcomes of procedures, as well as carry out tasks with minimal supervision when resources are limited. This would allow medical professionals to focus more on other critical aspects of care and enable emergency medical providers to bring advanced interventions and resuscitation efforts to remote and resource-limited areas.

Mar 4, 2020

Google’s robot learns to walk in real world

Posted by in category: robotics/AI

The field of robotics took one step forward—followed by another, then several more—when a robot called Rainbow Dash recently taught itself to walk. The four-legged machine only required a few hours to learn to walk backward and forward, and turn right and left while doing so.

Researchers from Google, UC Berkeley and the Georgia Institute of Technology published a paper on the arXiv preprint server describing the deep reinforcement learning technique they used to produce this accomplishment, which is significant for several reasons.
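The trial-and-error structure behind such reinforcement learning can be caricatured in a few lines: the agent samples actions, is rewarded for forward progress, and shifts preference toward what worked. Everything below — the action names, reward values, and update rule — is an illustrative stand-in, not the paper's actual method, which trains neural-network policies on a physical robot.

```python
import random

random.seed(0)
actions = ["step_forward", "step_backward", "turn_left", "turn_right"]
prefs = {a: 1.0 for a in actions}  # sampling weights, all equal at the start

def reward(action):
    # Stub "environment": forward steps earn the most simulated progress.
    return {"step_forward": 1.0, "step_backward": 0.2,
            "turn_left": 0.1, "turn_right": 0.1}[action]

for episode in range(500):
    a = random.choices(actions, weights=list(prefs.values()))[0]
    prefs[a] += 0.1 * reward(a)  # reinforce in proportion to reward received

best = max(prefs, key=prefs.get)
print(best)
```

After a few hundred episodes the preference mass concentrates on the highest-reward action, which is the essence of learning a gait by trial and error.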

Mar 4, 2020

In-space Robotic Servicing Program Moves Forward with New Commercial Partner

Posted by in categories: life extension, robotics/AI, satellites

DARPA has established a new partnership with U.S. industry to jointly develop and deploy advanced robotic capabilities in space. The agency has signed an Other Transactions for Prototypes agreement with Space Logistics, LLC, a wholly-owned subsidiary of Northrop Grumman Corporation, as its commercial partner for the Robotic Servicing of Geosynchronous Satellites (RSGS) program.

The RSGS program’s objective is to create a dexterous robotic operational capability in geosynchronous orbit that can extend satellite life spans, enhance resilience, and improve reliability for current U.S. space infrastructure. The first step is the RSGS program’s development of a dexterous robotic servicer, which a commercial enterprise will then operate.

“DARPA remains committed to a commercial partnership for the execution of the RSGS mission,” said Dr. Michael Leahy, director of DARPA’s Tactical Technology Office. “Building upon the successes of the DARPA Orbital Express mission and the recent successful docking of Space Logistics’ Mission Extension Vehicle-1, the agency seeks to bring dexterous on-orbit servicing to spacecraft in geosynchronous orbit (GEO), and to establish that inspection, repair, life extension, and improvement of our valuable GEO assets can be made possible and even routine.”

Mar 4, 2020

Invisible Headlights

Posted by in categories: information science, robotics/AI, transportation

Autonomous and semi-autonomous systems need active illumination to navigate at night or underground. Switching on visible headlights or some other emitting system like lidar, however, has a significant drawback: It allows adversaries to detect a vehicle’s presence, in some cases from long distances away.

To eliminate this vulnerability, DARPA announced the Invisible Headlights program. The fundamental research effort seeks to discover and quantify information contained in ambient thermal emissions in a wide variety of environments and to create new passive 3D sensors and algorithms to exploit that information.

“We’re aiming to make completely passive navigation in pitch dark conditions possible,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “In the depths of a cave or in the dark of a moonless, starless night with dense fog, current autonomous systems can’t make sense of the environment without radiating some signal—whether it’s a laser pulse, radar or visible light beam—all of which we want to avoid. If it involves emitting a signal, it’s not invisible for the sake of this program.”

Mar 4, 2020

A new AI chip can perform image recognition tasks in nanoseconds

Posted by in categories: robotics/AI, transportation

The news: A new type of artificial eye, made by combining light-sensing electronics with a neural network on a single tiny chip, can make sense of what it’s seeing in just a few nanoseconds, far faster than existing image sensors.

Why it matters: Computer vision is integral to many applications of AI—from driverless cars to industrial robots to smart sensors that act as our eyes in remote locations—and machines have become very good at responding to what they see. But most image recognition needs a lot of computing power to work. Part of the problem is a bottleneck at the heart of traditional sensors, which capture a huge amount of visual data, regardless of whether or not it is useful for classifying an image. Crunching all that data slows things down.

A sensor that captures and processes an image at the same time, without converting or passing around data, makes image recognition much faster using much less power. The design, published in Nature today by researchers at the Institute of Photonics in Vienna, Austria, mimics the way animals’ eyes pre-process visual information before passing it on to the brain.
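The idea that the sensor *is* the network can be sketched in NumPy: treat each photodiode's responsivity as a trained weight, so the summed photocurrents are themselves the class scores and no image is ever read out and shipped to a separate processor. The array size and random weights below are illustrative assumptions, not the paper's device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 9, 2                        # toy 3x3 "retina", two classes
weights = rng.normal(size=(n_pixels, n_classes))  # stand-in for tuned responsivities

def sense_and_classify(light):
    """One exposure: weighted photocurrents sum directly into class scores."""
    scores = light @ weights   # the analog summation that happens on the chip
    return int(np.argmax(scores))

image = rng.random(n_pixels)   # incident light intensities at each photodiode
label = sense_and_classify(image)
```

Because classification happens in the same step as sensing, the per-image cost is a single weighted sum rather than a full readout followed by separate neural-network inference — which is where the nanosecond-scale speed comes from.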

Mar 4, 2020

#Robot protects elderly from #coronavirus

Posted by in categories: biotech/medical, robotics/AI

By allowing video calls.