
Scientists develop artificial intelligence system for high-precision recognition of hand gestures

Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed an Artificial Intelligence (AI) system that recognizes hand gestures by combining skin-like electronics with computer vision.

The recognition of human hand gestures by AI systems has been a valuable development over the last decade and has been adopted in high-precision surgical robots and health monitoring equipment.

AI recognition systems that were initially visual-only have been improved by integrating inputs from wearable sensors, an approach known as 'data fusion'. The wearable sensors recreate the skin's sensing ability, known as somatosensation.
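As a rough illustration of what 'data fusion' means in practice, the sketch below combines features from a camera frame with features from a skin-like stretch sensor before classifying a gesture. It is a minimal, hypothetical Python example: the feature extractors, shapes, and linear classifier are assumptions made for illustration, not the NTU system's actual design.

```python
# Minimal sketch of 'data fusion' for gesture recognition: features from a
# camera frame and from a wearable (somatosensory-style) sensor are combined
# before classification. All names, shapes, and the classifier are illustrative
# assumptions, not the published system's architecture.
import numpy as np


def extract_vision_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a vision model; here just a flattened, normalized frame."""
    return frame.astype(np.float32).ravel() / 255.0


def extract_sensor_features(stretch_signal: np.ndarray) -> np.ndarray:
    """Stand-in for processing a skin-like stretch sensor's time series."""
    return np.array([stretch_signal.mean(), stretch_signal.std(),
                     stretch_signal.max(), stretch_signal.min()],
                    dtype=np.float32)


def fuse_and_classify(frame: np.ndarray, stretch_signal: np.ndarray,
                      weights: np.ndarray) -> int:
    """Concatenate both feature vectors (early fusion) and apply a linear classifier."""
    fused = np.concatenate([extract_vision_features(frame),
                            extract_sensor_features(stretch_signal)])
    scores = weights @ fused          # one score per gesture class
    return int(np.argmax(scores))


# Toy usage: random inputs and weights for a 3-gesture classifier.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8))
signal = rng.normal(size=100)
weights = rng.normal(size=(3, 8 * 8 + 4))
print("predicted gesture:", fuse_and_classify(frame, signal, weights))
```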

New device delivers single cells in just one click

EPFL spin-off SEED Biosciences has developed a pipetting robot that can dispense individual cells one by one. Their innovation allows for enhanced reliability and traceability, and can save life-science researchers time and money.

The engineers at SEED Biosciences, an EPFL spin-off, have come up with a unique pipetting robot that can isolate single cells with the push of a button, without damaging the cells. Their device also records the cells' electrical signature so that they can be traced. While this innovation may seem trivial, it can save researchers several weeks of precious time and speed up development work in pharmaceuticals, cancer treatments and personalized medicine. The company began marketing its device this year.

Deepfakes declared top AI threat, biometrics and content attribution scheme proposed to detect them

Biometrics may be the best way to protect society against the threat of deepfakes, and new solutions are being proposed by the Content Authenticity Initiative and the AI Foundation.

Deepfakes are the most serious criminal threat posed by artificial intelligence, topping a list of 20 worries about criminal facilitation over the next 15 years in a new report funded by the Dawes Centre for Future Crime at University College London (UCL).

The study is published in the journal Crime Science, and ranks the 20 AI-enabled crimes based on the harm they could cause.

Scientists find vision relates to movement

To get a better look at the world around them, animals are constantly in motion. Primates and people use complex eye movements to focus their vision (as humans do when reading, for instance); birds, insects, and rodents do the same by moving their heads, and can even estimate distances that way. Yet how these movements play out in the elaborate circuitry of neurons that the brain uses to "see" is largely unknown. It is also a potential problem area as scientists create artificial neural networks that mimic how vision works in systems such as self-driving cars.

To better understand the relationship between movement and vision, a team of Harvard researchers looked at what happens in one of the brain's primary regions for analyzing imagery when animals are free to roam naturally. The results of the study, published Tuesday in the journal Neuron, suggest that image-processing circuits in the primary visual cortex not only are more active when animals move, but also receive signals from a movement-controlling region of the brain that is independent of the region that processes what the animal is looking at. In fact, the researchers describe two sets of movement-related patterns in the visual cortex that are based on head motion and whether an animal is in the light or the dark.

The movement-related findings were unexpected, since vision tends to be thought of as a feed-forward computation system in which information enters through the retina and travels along neural circuits that operate on a one-way path, processing the information piece by piece. What the researchers saw here is further evidence that the visual system has many more feedback components, where information can travel in the opposite direction, than had been thought.
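To make the feed-forward versus feedback distinction concrete, the toy sketch below contrasts a visual response computed only from retinal input with one that is additionally modulated by an independent head-motion signal. The weights, shapes, and mixing rule are illustrative assumptions, not a model of the circuits reported in the study.

```python
# Toy contrast between a purely feed-forward visual pathway and one that also
# receives a movement-related (feedback/motor) signal, as the study suggests.
# Shapes, names, and the mixing rule are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
W_retina_to_v1 = rng.normal(size=(16, 64))   # retina -> primary visual cortex
W_motor_to_v1 = rng.normal(size=(16, 3))     # head-motion signal -> visual cortex


def feed_forward_only(retinal_input: np.ndarray) -> np.ndarray:
    """One-way path: visual activity depends only on what the retina reports."""
    return np.tanh(W_retina_to_v1 @ retinal_input)


def with_movement_feedback(retinal_input: np.ndarray,
                           head_motion: np.ndarray) -> np.ndarray:
    """Visual activity is also modulated by an independent movement signal."""
    return np.tanh(W_retina_to_v1 @ retinal_input + W_motor_to_v1 @ head_motion)


retina = rng.normal(size=64)
motion = np.array([0.5, -0.2, 0.1])   # e.g. yaw, pitch, roll of the head
print("difference due to movement signal:",
      np.abs(with_movement_feedback(retina, motion) - feed_forward_only(retina)).mean())
```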

Artificial Intelligence And Data Privacy – Turning A Risk Into A Benefit

On the higher end, they work to ensure that development is open so the tools can run on multiple cloud infrastructures, giving companies confidence that portability exists.

That openness is also why deep learning is not yet part of the solution: the deep learning layers still lack the transparency needed to earn the trust that privacy requires. Instead, these systems aim to help manage information privacy for machine learning applications.
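As an example of the kind of tool meant here, one common way to manage privacy for data feeding machine learning systems is to strip direct identifiers and add calibrated noise to numeric fields before training. The sketch below is a generic, hypothetical illustration of that idea, not the specific product or method the article describes.

```python
# Illustrative sketch of one common privacy step in ML pipelines: drop direct
# identifiers and add calibrated Laplace noise to numeric fields before the
# data is handed to a model (a differential-privacy-style perturbation).
# This is a generic example, not any particular vendor's tool.
import numpy as np

rng = np.random.default_rng(42)


def privatize_records(records: list[dict], numeric_keys: list[str],
                      epsilon: float = 1.0, sensitivity: float = 1.0) -> list[dict]:
    """Remove 'name'/'email' identifiers and perturb numeric fields with Laplace noise."""
    scale = sensitivity / epsilon
    cleaned = []
    for rec in records:
        safe = {k: v for k, v in rec.items() if k not in ("name", "email")}
        for key in numeric_keys:
            if key in safe:
                safe[key] = float(safe[key]) + rng.laplace(0.0, scale)
        cleaned.append(safe)
    return cleaned


records = [{"name": "Alice", "email": "a@example.com", "age": 34, "spend": 120.0},
           {"name": "Bob", "email": "b@example.com", "age": 29, "spend": 80.0}]
print(privatize_records(records, numeric_keys=["age", "spend"], epsilon=0.5))
```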

Artificial intelligence applications are not open, and they can put privacy at risk. Adding good tools to address privacy for the data used by AI systems is an important early step in building trust into the AI equation.