
One day, who knows when, artificial intelligence could hollow out the job market. But for now, it is generating relatively low-paying jobs. The market for data labeling passed $500 million in 2018, and it is expected to reach $1.2 billion by 2023, according to the research firm Cognilytica. This kind of work, the study showed, accounted for 80% of the time spent building AI technology.

Is the work exploitative? It depends on where you live and what you’re working on. In India, it is a ticket to the middle class. In New Orleans, it’s a decent enough job. For someone working as an independent contractor, it is often a dead end.

There are skills that must be learned — like spotting signs of a disease in a video or medical scan or keeping a steady hand when drawing a digital lasso around the image of a car or a tree. In some cases, when the task involves medical videos, pornography or violent images, the work turns grisly.

This is already happening on a small testing scale; the big rollout is coming in four or five years.


UPS has been delivering a new kind of automated mail — and it’s not via email.

The parcel delivery company has revealed a collaboration with TuSimple. In a statement, UPS said that since May the autonomous trucking company has been carrying UPS cargo on a 115-mile route between Phoenix and Tucson.

Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into.


And now, just two decades later, AI is bringing stories to life like we've never seen before.

Converging with the rise of virtual reality and colossal virtual worlds, AI has begun to create vastly detailed renderings of dead stars, generate complex supporting characters with intricate story arcs, and even bring your favorite stars—whether Marlon Brando or Amy Winehouse—back to the big screen and into a built environment.

While still in its nascent stages, AI has already been used to embody virtual avatars that you can converse with in VR, soon to be customized to your individual preferences.

A trio of physicists from the National Autonomous University of Mexico and Tec de Monterrey has solved a 2,000-year-old optical problem—the Wasserman-Wolf problem. In their paper published in the journal Applied Optics, Rafael González-Acuña, Héctor Chaparro-Romo, and Julio Gutiérrez-Vega outline the math involved in solving the puzzle, give some examples of possible applications, and describe the efficiency of the results when tested.

Over 2,000 years ago, Greek scientist Diocles recognized a problem with lenses: when looking through devices equipped with them, the edges appeared fuzzier than the center. In his writings, he proposed that the effect occurs because the lenses were spherical; light striking at an angle could not be focused because of differences in refraction. Isaac Newton was reportedly stumped in his efforts to solve the problem (which became known as spherical aberration), as was Gottfried Leibniz.

In 1949, Wasserman and Wolf devised an analytical means of describing the problem and gave it an official name: the Wasserman-Wolf problem. They suggested that the best approach to solving the problem would be to use two adjacent aspheric surfaces to correct aberrations. Since that time, researchers and engineers have come up with a variety of ways to fix the problem in specific applications, most notably cameras and telescopes. Most such efforts have involved creating aspherical lenses to counteract refraction problems. And while they have resulted in improvement, the solutions have generally been expensive and inadequate for some applications.
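To see the aberration numerically, here is a minimal sketch that traces parallel rays through a single spherical refracting surface and finds where each crosses the optical axis. It illustrates the problem itself, not the authors' closed-form solution, and the values (R = 1, glass with n = 1.5) are arbitrary assumptions.

```python
# Spherical aberration demo: rays at different heights refracted by one
# spherical surface do not share a focus. Illustrative values only.
import math

R, N1, N2 = 1.0, 1.0, 1.5   # surface radius; refractive indices (air -> glass)

def axis_crossing(h):
    """Trace a ray parallel to the axis at height h through a spherical
    surface (vertex at x = 0, center at x = R) and return where the
    refracted ray crosses the optical axis."""
    x = R - math.sqrt(R * R - h * h)          # intersection with the sphere
    nx, ny = (x - R) / R, h / R               # unit normal facing the ray
    cos_i = -nx                               # incident direction is (1, 0)
    r = N1 / N2
    cos_t = math.sqrt(1.0 - r * r * (1.0 - cos_i * cos_i))
    # Vector form of Snell's law gives the refracted direction:
    tx = r + (r * cos_i - cos_t) * nx
    ty = (r * cos_i - cos_t) * ny
    return x - h * tx / ty                    # x-coordinate where y returns to 0

# Paraxial focus is n2*R/(n2 - n1) = 3.0; marginal rays cross short of it.
for h in (0.01, 0.3, 0.6):
    print(f"h = {h:4.2f}: focus at x = {axis_crossing(h):.4f}")
```

Running it shows the crossing point sliding from 3.0 toward the lens as h grows, which is exactly the edge fuzziness Diocles described.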

Technology that translates cortical activity into speech would be transformative for people unable to communicate as a result of neurological impairment. Decoding speech from neural activity is challenging because speaking requires extremely precise and dynamic control of multiple vocal tract articulators on the order of milliseconds. Here, we designed a neural decoder that explicitly leverages the continuous kinematic and sound representations encoded in cortical activity to generate fluent and intelligible speech. A recurrent neural network first decoded direct cortical recordings into vocal tract movement representations, and then transformed those representations to acoustic speech output. Modeling the articulatory dynamics of speech significantly enhanced performance with limited data. Naïve listeners were able to accurately identify and transcribe decoded sentences. Additionally, speech decoding was not only effective for audibly produced speech, but also when participants silently mimed speech. These results advance the development of speech neuroprosthetic technology to restore spoken communication in patients with disabling neurological disorders.
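As a rough illustration of the two-stage architecture the abstract describes (neural activity to vocal-tract kinematics, then kinematics to acoustics), here is a minimal sketch in PyTorch. The layer sizes, feature counts, and class name are assumptions for illustration, not the authors' implementation.

```python
# Two-stage articulatory speech decoder, following the abstract's outline.
# All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    def __init__(self, n_electrodes=256, n_articulators=33, n_acoustic=32):
        super().__init__()
        # Stage 1: cortical recordings -> vocal tract movement representations
        self.neural_to_kinematics = nn.LSTM(
            n_electrodes, 128, batch_first=True, bidirectional=True)
        self.kin_proj = nn.Linear(2 * 128, n_articulators)
        # Stage 2: kinematics -> acoustic features for speech synthesis
        self.kinematics_to_acoustics = nn.LSTM(
            n_articulators, 128, batch_first=True, bidirectional=True)
        self.ac_proj = nn.Linear(2 * 128, n_acoustic)

    def forward(self, ecog):                  # ecog: (batch, time, electrodes)
        h, _ = self.neural_to_kinematics(ecog)
        kinematics = self.kin_proj(h)         # (batch, time, articulators)
        h2, _ = self.kinematics_to_acoustics(kinematics)
        acoustics = self.ac_proj(h2)          # (batch, time, acoustic features)
        return kinematics, acoustics
```

The point of the intermediate kinematic layer, per the abstract, is that modeling articulatory dynamics explicitly improves performance when training data is limited.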

Robots are about to go underground, at least for a competition.

The Defense Advanced Research Projects Agency (DARPA), the branch of the U.S. Department of Defense dedicated to developing new emerging technologies, is holding a challenge intended to develop technology for first responders and the military to map, navigate, and search underground. But the technology developed for the competition could also be used in future NASA missions to caves and lava tubes on other planets.

The DARPA Subterranean Challenge Systems Competition will be held August 15 – 22 in mining tunnels under Pittsburgh, and among the robots competing will be an entry from a team led by NASA’s Jet Propulsion Laboratory (JPL) that features wheeled rovers, drones, and climbing robots that can rise on pinball-flipper-shaped treads to scale obstacles.

This presentation was posted by Jason Mayes, senior creative engineer at Google, and was shared by many data scientists on social networks. Chances are you have seen it already. Below are a few of the slides. The presentation provides a list of machine learning algorithms and applications, in very simple words. It also explains the differences between AI, ML, and DL (deep learning).


Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.
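To make the "looking up potential reaction pathways" step concrete, here is a toy sketch of a recursive route lookup: expand a target molecule with reaction templates until only purchasable building blocks remain. This is purely illustrative and not the MIT system; the template table is a minimal stand-in for a real reaction database.

```python
# Toy retrosynthetic route lookup: recurse from a target to buyable inputs.
# The two templates below are real textbook reactions, used here only as
# stand-ins for a reaction database.
PURCHASABLE = {"aniline", "acetic anhydride", "benzaldehyde"}
TEMPLATES = {  # product -> precursors for one known reaction
    "acetanilide": ["aniline", "acetic anhydride"],
    "benzylideneaniline": ["aniline", "benzaldehyde"],
}

def plan(target):
    """Return a nested synthesis plan, or None if no route is found."""
    if target in PURCHASABLE:
        return target                        # base case: buy it
    if target not in TEMPLATES:
        return None                          # no known disconnection
    steps = [plan(p) for p in TEMPLATES[target]]
    if any(s is None for s in steps):
        return None
    return {target: steps}

print(plan("acetanilide"))
# {'acetanilide': ['aniline', 'acetic anhydride']}
```

In the system the article describes, this kind of search output would then be translated into operations for the robotic flow-chemistry platform to execute.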