
You have to admit it: some of the uses of artificial intelligence are simply fascinating. One of the more exciting aspects of the technology is seeing all the potential ways it can be applied to our daily lives, even if at times it seems a little creepy. We have seen artificial intelligence shape everything from medicine to art. But did you ever think AI would go on to shape the world of stock images?


Now, if you are familiar with people using AI to create portraits of people who do not exist, then surely this idea came to your mind at some point. Stock imagery is yet another industry influenced by AI.

A new technology using artificial intelligence detects depressive language in social media posts more accurately than current systems and uses less data to do it.

The technology, which was presented during the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, is the first of its kind to show that, to more accurately detect depressive language, small, high-quality data sets can be applied to deep learning, a commonly used AI approach that is typically data intensive.

Previous psycholinguistic research has shown that the words we use in interaction with others on a daily basis are a good indicator of our mental and emotional state.
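The psycholinguistic idea above can be illustrated with a toy sketch. This is not the researchers' deep learning model; the word lists below are illustrative stand-ins for the kinds of markers (first-person singular pronouns, negative-emotion terms) that psycholinguistic studies have associated with depressive language.

```python
# Toy illustration: score a post by the density of depressive-language
# markers. Word lists are hypothetical examples, not a clinical lexicon.
NEGATIVE = {"sad", "empty", "hopeless", "tired", "alone", "worthless"}
FIRST_PERSON = {"i", "me", "my", "myself"}

def marker_density(post: str) -> float:
    """Fraction of tokens that are known depressive-language markers."""
    tokens = [t.strip(".,!?").lower() for t in post.split()]
    if not tokens:
        return 0.0
    hits = sum(t in NEGATIVE or t in FIRST_PERSON for t in tokens)
    return hits / len(tokens)

print(marker_density("I feel so alone and hopeless lately."))   # high density
print(marker_density("Great hike with friends this weekend!"))  # low density
```

A real system would feed such linguistic features (or raw text) into a trained classifier; the point of the research is that a small, carefully labeled dataset can make even a data-hungry deep model accurate.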

This is the final part in a series of in-depth articles examining China’s efforts to build a stronger domestic semiconductor industry amid rising trade tensions.


Some in China see custom AI chips, which can offer superior performance to conventional integrated circuits even when manufactured using older processes, as helping the country loosen its dependence on the US in core technology.

Combining new classes of nanomembrane electrodes with flexible electronics and a deep learning algorithm could help disabled people wirelessly control an electric wheelchair, interact with a computer or operate a small robotic vehicle without donning a bulky hair-electrode cap or contending with wires.

By providing a fully portable, wireless brain-machine interface (BMI), the wearable system could offer an improvement over conventional electroencephalography (EEG) for measuring signals from visually evoked potentials. The system's ability to measure EEG signals for BMI has been evaluated with six human subjects, but it has not yet been studied with disabled individuals.

The project, conducted by researchers from the Georgia Institute of Technology, University of Kent and Wichita State University, was reported on September 11 in the journal Nature Machine Intelligence.



Welcome to our annual HUAWEI CONNECT in Shanghai from September 18 to 20. At this year’s event we will announce our latest cloud and AI solutions, and share what we’re doing to help our customers and partners go digital.

Robots aren’t going to take everyone’s jobs, but technology has already reshaped the world of work in ways that are creating clear winners and losers. And it will continue to do so without intervention, says the first report of MIT’s Task Force on the Work of the Future.


Widespread press reports of a looming "employment apocalypse" brought on by AI and automation are probably wide of the mark, according to the authors. Shrinking workforces as developed countries age, together with persistent limitations in what machines can do, mean we're unlikely to face a shortage of jobs.

But while unemployment is historically low, recent decades have seen a polarization of the workforce: the number of both high- and low-skilled jobs has grown at the expense of middle-skilled ones, driving growing income inequality and depriving the non-college-educated of viable careers.

This is at least partly attributable to the growth of digital technology and automation, the report notes, which are rendering obsolete many middle-skilled jobs based around routine work like assembly lines and administrative support.

Houston Mechatronics (HMI) unveiled Aquanaut at the NASA Neutral Buoyancy Laboratory, one year after the announcement of the platform concept.

Aquanaut is a revolutionary multi-mode transforming all-electric undersea vehicle. The vehicle is capable of efficient long-distance transit and data collection in ‘AUV’ (autonomous underwater vehicle) mode.

After transforming into 'ROV' (remotely operated vehicle) mode, the head of the vehicle pitches up, the hull separates, and two arms are activated so that Aquanaut can manipulate its environment.
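The two-mode operation described above can be pictured as a small state machine. The class and attribute names below are assumptions for illustration, not HMI's actual control software.

```python
# Illustrative state machine for the AUV/ROV transformation sequence
# described in the article. Names are hypothetical.
class Aquanaut:
    def __init__(self):
        self.mode = "AUV"            # starts in streamlined transit mode
        self.head_pitched_up = False
        self.hull_open = False
        self.arms_deployed = False

    def to_rov(self):
        """Transform for manipulation: pitch head up, open hull, deploy arms."""
        if self.mode == "AUV":
            self.head_pitched_up = True
            self.hull_open = True
            self.arms_deployed = True
            self.mode = "ROV"

    def to_auv(self):
        """Stow arms and close the hull for long-distance transit."""
        if self.mode == "ROV":
            self.arms_deployed = False
            self.hull_open = False
            self.head_pitched_up = False
            self.mode = "AUV"

sub = Aquanaut()
sub.to_rov()
print(sub.mode, sub.arms_deployed)  # ROV True
```

Guarding each transition on the current mode keeps the vehicle from, say, deploying arms while already configured for transit.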