The texture of an artist’s original work can now be reproduced with AI-controlled 3D printing.
Almost everything you hear about artificial intelligence today is thanks to deep learning. This category of algorithms works by using statistics to find patterns in data, and it has proved immensely powerful in mimicking human skills such as our ability to see and hear. To a very narrow extent, it can even emulate our ability to reason. These capabilities power Google’s search, Facebook’s news feed, and Netflix’s recommendation engine—and are transforming industries like health care and education.
Our study of 25 years of artificial-intelligence research suggests the era of deep learning is coming to an end.
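As a minimal illustration of the statistical pattern-finding the excerpt describes, here is a toy learning loop in Python: a tiny logistic-regression model (not a deep network, but the same gradient-descent idea scaled down) recovers a hidden rule from data. Everything in it is illustrative, not drawn from the article.

```python
import numpy as np

# A minimal sketch of the statistical idea behind deep learning:
# adjust parameters by gradient descent until the model's outputs
# match the patterns in the data. (Toy logistic regression, not a
# deep network, but the same learning loop scaled down.)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # hidden pattern to recover

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # descend the gradient
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"learned weights {w.round(2)}, accuracy {acc:.0%}")
```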
The world has not entered the age of the killer robot, at least not yet. Today’s autonomous weapons are mostly static systems to shoot down incoming threats in self-defence, or missiles fired into narrowly defined areas. Almost all still have humans “in the loop” (eg, remotely pulling the trigger for a drone strike) or “on the loop” (ie, able to oversee and countermand an action). But tomorrow’s weapons will be able to travel farther from their human operators, move from one place to another and attack a wider range of targets with humans “out of the loop”. Will they make war even more horrible? Will they threaten civilisation itself? It is time for states to think harder about how to control them.
A good approach is a Franco-German proposal that countries should share more information on how they assess new weapons; allow others to observe demonstrations of new systems; and agree on a code of conduct for their development and use. This will not end the horrors of war, or even halt autonomous weapons. But it is a realistic and sensible way forward. As weapons get cleverer, humans must keep up.
A movie montage for modern artificial intelligence might show a computer playing millions of games of chess or Go against itself to learn how to win. Now, researchers are exploring how the reinforcement learning technique that helped DeepMind’s AlphaZero conquer chess and Go could tackle an even more complex task—training a robotic knee to help amputees walk smoothly.
Computer algorithms help prosthetics wearers walk within minutes rather than requiring hours of training.
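The excerpt does not publish the tuning algorithm itself, but the general shape of learning a parameter from trial-and-error feedback is easy to sketch. Below, a single hypothetical stiffness value is adjusted by a simple trial-and-reward loop against a simulated gait-error signal; real prosthetic tuners adjust a dozen or more impedance parameters with full reinforcement-learning machinery, so treat this as a stripped-down stand-in.

```python
import numpy as np

# Hedged sketch of learning-based prosthetic-knee tuning. The TARGET
# value, the gait_error model, and the single stiffness parameter are
# all hypothetical; this is a stochastic hill-climb standing in for
# the actual reinforcement-learning tuner.

rng = np.random.default_rng(1)
TARGET = 7.0  # "ideal" stiffness for this wearer, unknown to the tuner

def gait_error(stiffness: float) -> float:
    """Simulated feedback: deviation from a smooth gait, plus sensor noise."""
    return (stiffness - TARGET) ** 2 + rng.normal(scale=0.1)

stiffness = 2.0   # initial clinician guess
step = 1.0        # exploration step size

for trial in range(40):
    candidate = stiffness + rng.choice([-step, step])
    # Keep the candidate only if it measurably reduces gait error.
    if gait_error(candidate) < gait_error(stiffness):
        stiffness = candidate
    step *= 0.95  # shrink exploration as tuning converges

print(f"tuned stiffness: {stiffness:.2f} (target {TARGET})")
```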
Law firms are under tremendous pressure to innovate, as clients demand more value for their legal dollars. Providing higher-value services in turn boosts firms’ competitiveness.
However, much of the day-to-day work of any legal office – whether it’s in-house counsel, a boutique firm or one of the largest legal powerhouses – is the tedious, repetitive work of reading and preparing answers to complaints. Larger firms may have armies of junior associates do much of this necessary but mundane case-preparation work. At smaller firms, partners and senior associates are often involved in all stages of litigation. Preparing responses is time-consuming, often taking several hours to a full day. Those are hours that both attorneys and firms would prefer to spend tackling more strategic legal work.
We asked ourselves, what if, instead of taking hours, those high-volume, repetitive tasks could take a couple of minutes?
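What that automation might look like can be sketched. The toy Python script below drafts paragraph-by-paragraph answers to a hypothetical complaint using simple keyword rules; a production system would classify each allegation with models trained on a firm’s past filings, but the workflow has the same shape. The complaint text and rules are invented for illustration.

```python
import re

# Hypothetical sketch of one repetitive litigation task: drafting
# paragraph-by-paragraph responses to a complaint. The allegations
# and decision rules below are invented; real systems would use
# trained models rather than keyword matching.

COMPLAINT = """
1. Defendant is a corporation organized under the laws of Delaware.
2. On March 3, Defendant breached the contract by failing to deliver.
3. Plaintiff suffered damages in an amount to be proven at trial.
"""

def draft_response(allegation: str) -> str:
    if re.search(r"organized under the laws", allegation, re.I):
        return "Admitted."
    if re.search(r"breached|damages", allegation, re.I):
        return "Denied."
    return ("Defendant lacks knowledge or information sufficient to form "
            "a belief as to the truth of the allegation, and therefore "
            "denies it.")

for line in COMPLAINT.strip().splitlines():
    num, allegation = line.split(". ", 1)
    print(f"{num}. {draft_response(allegation)}")
```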
Researchers from MIT and Microsoft have developed a system that identifies gaps in the knowledge of autonomous cars and robots. These gaps, referred to as “blind spots,” occur when the situations a system encounters differ significantly from its training examples, so that it fails to act as a human would: a driverless car, for example, may not detect the difference between a large white car and an ambulance with its sirens on, and thus not behave appropriately.
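The published method is richer than this, but the core comparison behind blind-spot discovery can be sketched in a few lines: replay situations to both the trained policy and a human reference, and flag the states where they disagree. All situation and action labels below are hypothetical.

```python
# Simplified sketch of blind-spot discovery: compare what the trained
# policy does in each situation with what a human would do, and flag
# the disagreements. The MIT/Microsoft system learns from several
# kinds of human feedback; this shows only the basic comparison.

policy_action = {
    "large_white_car_ahead": "maintain_speed",
    "ambulance_sirens_on":   "maintain_speed",  # the learned blind spot
    "pedestrian_crossing":   "brake",
}

human_action = {
    "large_white_car_ahead": "maintain_speed",
    "ambulance_sirens_on":   "pull_over",
    "pedestrian_crossing":   "brake",
}

blind_spots = [
    state for state in policy_action
    if policy_action[state] != human_action[state]
]
print("candidate blind spots:", blind_spots)  # ['ambulance_sirens_on']
```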
An artificial intelligence (AI) solution can accurately identify, in images of a woman’s cervix, precancerous changes that could require medical attention. Researchers from the National Institutes of Health and Global Good developed the computer algorithm, which is called automated visual evaluation.
Researchers created the algorithm by using more than 60,000 cervical images from a National Cancer Institute (NCI) archive of photos collected during a cervical cancer screening study that was carried out in Costa Rica in the 1990s.
More than 9,400 women participated in that population study, with follow-up that lasted up to 18 years. Because of the prospective nature of the study, the researchers said they gained nearly complete information on which cervical changes became precancers and which did not.
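The published model’s architecture and training pipeline are not described in this excerpt, but the general pattern of learning an image classifier from labeled screening photos looks like the hedged PyTorch sketch below, with random tensors standing in for the NCI archive.

```python
import torch
from torch import nn

# Hedged sketch of the "automated visual evaluation" idea: a small
# convolutional classifier mapping cervical images to a normal /
# precancer label. The published model, architecture, and data all
# differ; random tensors stand in for the NCI image archive here.

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                 # two classes: normal vs precancer
)

images = torch.randn(8, 3, 64, 64)    # stand-in for archive photos
labels = torch.randint(0, 2, (8,))    # stand-in for screening outcomes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```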
Prof Pugh is using motion-tracking sensors to test how trainee surgeons use the instruments, for example in a simulated hernia repair. Their performance is measured, videoed and compared with best practice at each stage, so they can understand where they need to improve.
“Like Olympic athletes, they can practise repeatedly until they understand the routine and where they need to improve. That is the goal in training surgeons.” The next step is to use sensors in real operations.
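The article does not say how performance is scored against best practice, but one plausible metric is the deviation between a trainee’s instrument trace and an expert reference, as in this illustrative sketch; the traces, resampling scheme, and scoring function are all assumptions.

```python
import numpy as np

# Hedged sketch of scoring a sensor trace against best practice:
# resample the trainee's instrument path and an expert reference to
# a common length, then report the average deviation. The real
# simulators' metrics are not specified in the article.

def resample(path: np.ndarray, n: int) -> np.ndarray:
    """Linearly resample a (T, 3) position trace to n samples."""
    t_old = np.linspace(0, 1, len(path))
    t_new = np.linspace(0, 1, n)
    return np.stack(
        [np.interp(t_new, t_old, path[:, k]) for k in range(3)], axis=1
    )

rng = np.random.default_rng(2)
expert = np.cumsum(rng.normal(size=(120, 3)), axis=0)  # stand-in expert trace
trainee = resample(expert, 90) + rng.normal(scale=0.5, size=(90, 3))

n = 100
deviation = np.linalg.norm(resample(expert, n) - resample(trainee, n), axis=1)
print(f"mean deviation from best practice: {deviation.mean():.2f}")
```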
Being able to measure pressure will help create better surgical robots, says Richard Trimlett, a cardiothoracic surgeon and head of mechanical support at the Royal Brompton and Harefield Trust, London.