In case you weren’t already terrified of robots that can jump over walls, fly or crawl, Army researchers are developing your next nightmare — a robot squid.
There is an enduring fear in the music industry that artificial intelligence will replace the artists we love, and end creativity as we know it.
As ridiculous as this claim may sound, it is grounded in real developments. Last December, an AI-composed song appeared on several New Music Friday playlists on Spotify, with full support from Spotify executives. An entire startup ecosystem is emerging around services that give artists automated songwriting recommendations, or that let the average internet user generate customized instrumental tracks at the click of a button.
But AI’s long-term impact on music creation isn’t so cut and dried. In fact, if we as an industry are already thinking so reductively and pessimistically about AI from the beginning, we’re sealing our own fates as slaves to the algorithm. Instead, if we take the long view on how technological innovation has made it progressively easier for artists to realize their creative visions, we can see AI’s genuine potential as a powerful tool and partner, rather than as a threat.
SAS® supports the creation of deep neural network models. Examples of these models include convolutional neural networks, recurrent neural networks, feedforward neural networks and autoencoder neural networks. Let’s examine in more detail how SAS creates deep learning models using SAS® Visual Data Mining and Machine Learning.
Deep learning models with SAS Cloud Analytic Services
SAS Visual Data Mining and Machine Learning takes advantage of SAS Cloud Analytic Services (CAS) to perform what are referred to as CAS actions. You use CAS actions to load data, transform data, compute statistics, perform analytics and create output. Each action is configured by specifying a set of input parameters. Running a CAS action processes the action’s parameters and data, which creates an action result. CAS actions are grouped into CAS action sets.
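That load-data, run-action, read-result cycle can also be driven from open-source clients. The snippet below is a minimal sketch, not SAS documentation, using SAS’s SWAT package for Python; the host, port, credentials and CSV file name are placeholders.

```python
# Minimal sketch of the CAS action pattern described above, using SAS's
# open-source SWAT package for Python. Host, port, credentials and the
# CSV file are assumptions, not real values.
import swat

# Connect to a running CAS server (placeholder host/port/credentials).
conn = swat.CAS('cas-host.example.com', 5570, 'userid', 'password')

# Actions are grouped into action sets; load one before using it.
conn.loadactionset('simple')

# Load data into an in-memory CAS table.
tbl = conn.read_csv('measurements.csv', casout={'name': 'measurements'})

# Run an action: it processes its input parameters and the data,
# and returns an action result.
result = conn.simple.summary(table={'name': 'measurements'})
print(result)

# The deep learning actions discussed above live in their own action
# set, loaded the same way (availability depends on licensing).
conn.loadactionset('deepLearn')

conn.close()
```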
About a year ago, Apple made the bold proclamation that it was zeroing in on a future where iPhones and MacBooks would be built entirely from recycled materials. It was, and still is, an ambitious goal. In a technologically charged world, many forget that nearly 100 percent of e-waste is recyclable. Apple didn’t.
Named “Daisy,” Apple’s new robot builds on its previous iteration, Liam, which Apple used to disassemble unneeded iPhones in an attempt to scrap or reuse the materials. Like her predecessor, Daisy can salvage the bulk of the materials needed to build brand-new iPhones. All told, the robot can extract parts from nine types of iPhone, and for every 100,000 devices it manages to recover 1,900 kg (4,188 pounds) of aluminum, 770 kg of cobalt, 710 kg of copper and 11 kg of rare earth elements, which also happen to be some of the hardest to source and most environmentally unfriendly materials required to build the devices.
Apple detailed these figures in its latest environmental progress report.
Researchers in artificial intelligence stand to make a ton of money. This week, we learned just how much some A.I. experts are being paid, and it’s a lot, even at a nonprofit.
OpenAI, a nonprofit research lab, paid its lead A.I. expert, Ilya Sutskever, more than $1.9 million in 2016, according to a recent public tax filing. Another researcher, Ian Goodfellow, made more than $800,000 that year, even though he was only hired in March, the New York Times reported.
As the publication points out, the figures are eye-opening and offer a bit of insight into how much A.I. researchers are being paid across the globe. Normally, this kind of data isn’t readily accessible. But because OpenAI is a nonprofit organization, it’s required by law to make these figures public.
That raises significant issues for universities and governments. They also need A.I. expertise, both to teach the next generation of researchers and to put these technologies into practice in everything from the military to drug discovery. But they could never match the salaries being paid in the private sector.
Tax forms filed by OpenAI provide insight into the enormous salaries and bonuses paid to artificial intelligence specialists across the world.
Robots are going to make life a lot easier for amputees and people with spinal cord injuries. Take HAL from CYBERDYNE, for instance: it is a robotic assistive limb that improves patients’ ability to walk. HAL uses sensors to detect signals from the patient’s body to assist with the desired movement.
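CYBERDYNE has not published HAL’s control software, but the sensor-driven idea can be sketched in a few lines: read a bio-electric signal, treat a threshold crossing as movement intent, and command a proportional assistive torque. Everything below (threshold, gain, the simulated signal) is illustrative and not taken from HAL.

```python
import numpy as np

ASSIST_THRESHOLD = 0.3   # arbitrary activation level (illustrative only)
ASSIST_GAIN = 5.0        # Nm of assistive torque per unit of activation

def assistive_torque(sample: float) -> float:
    """Map a rectified, normalized bio-electric sample to assistive torque."""
    activation = max(0.0, sample - ASSIST_THRESHOLD)
    return ASSIST_GAIN * activation

# Simulated sensor stream: low-level noise with a burst of intended movement.
rng = np.random.default_rng(0)
signal = np.abs(rng.normal(0.05, 0.02, 200))
signal[80:120] += 0.5  # the wearer tries to move

for sample in signal:
    torque = assistive_torque(sample)
    # A real controller would command the actuator here; we just log it.
    if torque > 0:
        print(f"assist: {torque:.2f} Nm")
```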
Humans are usually good at isolating a single voice in a crowd, but computers? Not so much — just ask anyone trying to talk to a smart speaker at a house party. Google may have a surprisingly straightforward solution, however. Its researchers have developed a deep learning system that can pick out specific voices by looking at people’s faces when they’re speaking. The team trained its neural network model to recognize individual people speaking by themselves, and then created virtual “parties” (complete with background noise) to teach the AI how to isolate multiple voices into distinct audio tracks.
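Google has not released its training pipeline, but the “virtual party” idea is straightforward to sketch: sum clean single-speaker waveforms, add scaled background noise, and keep the clean clips as the separation targets. The sketch below assumes plain NumPy arrays and a made-up signal-to-noise ratio.

```python
import numpy as np

def make_mixture(speaker_clips, noise, snr_db=10.0):
    """Build one synthetic "party" example from clean clips plus noise.

    speaker_clips: array of shape (num_speakers, num_samples), clean waveforms.
    noise: 1-D waveform of the same length.
    Returns the noisy mixture (network input) and the clean clips (targets).
    """
    mixture = np.sum(speaker_clips, axis=0)
    # Scale the noise so the mixture sits at the requested SNR.
    sig_power = np.mean(mixture ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return mixture + scale * noise, speaker_clips

# Two fake one-second clips at 16 kHz plus babble-like noise.
rng = np.random.default_rng(0)
clips = rng.standard_normal((2, 16000)) * 0.1
babble = rng.standard_normal(16000) * 0.05
mix, targets = make_mixture(clips, babble)
```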
The results are uncanny. Even when people are clearly trying to compete with each other (such as comedians Jon Dore and Rory Scovel in a Team Coco clip), the AI can generate a clean audio track for one person just by focusing on their face. That’s true even if the person partially obscures their face with hand gestures or a microphone.
Google is currently “exploring opportunities” to use this feature in its products, but there are more than a few prime candidates. It’s potentially ideal for video chat services like Hangouts or Duo, where it could help you understand someone talking in a crowded room. It could also be helpful for speech enhancement in video recording. And there are big implications for accessibility: it could lead to camera-linked hearing aids that boost the sound of whoever’s in front of you, and more effective closed captioning. There are potential privacy issues (this could be used for public eavesdropping), but it wouldn’t be too difficult to limit the voice separation to people who’ve clearly given their consent.
You might only know JPEG as the default image compression standard, but the group behind it has now branched out into something new: JPEG XS. JPEG XS is described as a new low-energy format designed to stream live video and VR, even over WiFi and 5G networks. It’s not a replacement for JPEG and the file sizes themselves won’t be smaller; it’s just that this new format is optimized specifically for lower latency and energy efficiency. In other words, JPEG is for downloading, but JPEG XS is more for streaming.
The new standard was introduced this week by the Joint Photographic Experts Group, which says that the aim of JPEG XS is to “stream the files instead of storing them in smartphones or other devices with limited memory.” So in addition to getting faster HD content on your large displays, the group also sees JPEG XS as a valuable format for faster stereoscopic VR streaming plus videos streamed by drones and self-driving cars.
“We are compressing less in order to better preserve quality, and we are making the process faster while using less energy,” says JPEG leader Touradj Ebrahimi in a statement. According to Ebrahimi, JPEG XS video compression will be less severe than with JPEG photos: while JPEG photos are compressed by a factor of 10, JPEG XS compresses by a factor of 6. The group promises “visually lossless” quality for JPEG XS images.
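To put those compression factors in rough perspective, here is a back-of-the-envelope comparison. The frame size, pixel format and frame rate are assumptions chosen only to illustrate 10:1 versus 6:1, not figures from the JPEG committee.

```python
# Assumes an uncompressed 1920x1080 frame at 8 bits per channel, RGB,
# streamed at 30 frames per second; real deployments use other pixel
# layouts, so treat this purely as an illustration of the two ratios.
WIDTH, HEIGHT, BYTES_PER_PIXEL, FPS = 1920, 1080, 3, 30

raw_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL  # bytes per uncompressed frame

for name, factor in (("JPEG (10:1)", 10), ("JPEG XS (6:1)", 6)):
    frame = raw_frame / factor
    mbps = frame * 8 * FPS / 1e6
    print(f"{name}: {frame / 1e6:.2f} MB per frame, ~{mbps:.0f} Mbit/s at {FPS} fps")
```

With these assumptions a raw frame is about 6.2 MB, so 10:1 JPEG yields roughly 0.62 MB per frame versus about 1.04 MB for 6:1 JPEG XS, which is why the XS files are larger but cheaper and faster to encode and decode.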