But no, it’s not as smart as a high school student.
Circa 2018
Teaching artificial intelligence to code and create software has been a holy grail of the field. The new system, which you can see in action for yourself, is a step in that direction.
IBM HR Director Diane Gherson says that over the next three years, 120 million workers will need retraining as artificial intelligence continues to take jobs.
Artificial intelligence is clearly poised to reshape work. Over the next three years, roughly 120 million workers in the world's 12 largest economies may need retraining due to advances in artificial intelligence and intelligent automation, according to a study published on Friday by the IBM Institute for Business Value. Yet fewer than half of the CEOs surveyed by IBM said they had the resources needed to bridge the skills gap caused by these new technologies.
Concerns about how advances in AI will affect work are not new. Tesla and SpaceX CEO Elon Musk said last month that AI could make many jobs “pointless”. A report earlier this year found that robots could replace people in a quarter of US jobs by 2030.
Much like US corporations do now.
Debates about rights are frequently framed around the concept of legal personhood. Personhood is granted not just to human beings but also to some non-human entities, such as corporations or governments. Legal entities, aka legal persons, are granted certain privileges and responsibilities by the jurisdictions in which they are recognized, and many such rights are not available to non-person agents. Attempting to secure legal personhood is often seen as a potential pathway to get certain rights and protections for animals [1], fetuses [2], trees and rivers [3], and artificially intelligent (AI) agents [4].
It is commonly believed that a new law or judicial ruling is necessary to grant personhood to a new type of entity. But recent legal literature [5–8] suggests that loopholes in the current law may permit legal personhood to be granted to AI/software without the need to change the law or persuade a court.
For example, L. M. LoPucki [6] points out, citing Shawn Bayern’s work on conferring legal personhood on AI [7, 8]: “Professor Shawn Bayern demonstrated that anyone can confer legal personhood on an autonomous computer algorithm merely by putting it in control of a limited liability company (LLC). The algorithm can exercise the rights of the entity, making them effectively rights of the algorithm. The rights of such an algorithmic entity (AE) would include the rights to privacy, to own property, to enter into contracts, to be represented by counsel, to be free from unreasonable search and seizure, to equal protection of the laws, to speak freely, and perhaps even to spend money on political campaigns. Once an algorithm had such rights, Bayern observed, it would also have the power to confer equivalent rights on other algorithms by forming additional entities and putting those algorithms in control of them.” [6] (See Note 1.)
San Francisco-based startup Simbe Robotics is on a mission to transform retail with robots that scan store shelves for missing inventory.
EPFL scientists are developing new approaches for improved control of robotic hands—in particular for amputees—that combine individual finger control and automation for improved grasping and manipulation. This interdisciplinary proof of concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects. The results are published in today’s issue of Nature Machine Intelligence.
The technology merges two concepts from two different fields. Implementing them together had never been done before for robotic hand control, and the combination contributes to the emerging field of shared control in neuroprosthetics.
One concept, from neuroengineering, involves deciphering intended finger movement from muscular activity on the amputee’s stump, enabling individual finger control of the prosthetic hand, which had never before been done. The other, from robotics, allows the robotic hand to help take hold of objects and maintain contact with them for robust grasping.
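The core idea of shared control can be illustrated with a minimal sketch: blend the user's decoded finger command with an automated grasp-assist signal, shifting weight toward automation once contact with an object is detected. All names, values, and the blending rule below are illustrative assumptions, not the EPFL team's actual method.

```python
# Hypothetical sketch of "shared control" for a prosthetic hand.
# The weighting scheme and contact logic are invented for illustration.

def shared_control(decoded_intent, grasp_assist, contact_detected):
    """Return per-finger commands: follow the user's decoded intent
    freely, but once the hand touches an object, lean on automation
    to maintain a stable grip."""
    # Weight shifts toward the automated signal when contact is detected.
    w = 0.8 if contact_detected else 0.0
    return [(1 - w) * u + w * a for u, a in zip(decoded_intent, grasp_assist)]

# Before contact: the command follows the user's intent entirely.
free = shared_control([0.2, 0.5, 0.9], [1.0, 1.0, 1.0], contact_detected=False)

# After contact: the command is dominated by the grasp-maintenance signal.
held = shared_control([0.2, 0.5, 0.9], [1.0, 1.0, 1.0], contact_detected=True)
```

The point of the split is that the user retains fine intent-level control while the robot handles the fast, reactive part of grasping that is hard to decode from muscle signals alone.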
Would you consent to a surveillance system that watches without video and listens without sound?
If your knee-jerk reaction is “no!”, then “huh?”, I’m with you. In a new paper in Applied Physics Letters, a Chinese team is wading into the complicated balance between privacy and safety with computers that can echolocate. By training AI to sift through signals from arrays of acoustic sensors, the system can gradually learn to parse your movements—standing, sitting, falling—using only ultrasonic sound.
According to study author Dr. Xinhua Guo at the Wuhan University of Technology, the system may be more palatable to privacy advocates than security cameras. Because it relies on ultrasonic waves—the type that bats use to navigate dark spaces—it doesn’t capture video or audio. It’ll track your body position, but not you per se.
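The basic classification task can be sketched with a toy nearest-centroid rule: echo features such as delay and Doppler shift map onto coarse body states. The feature values, labels, and centroids below are made up for demonstration; the paper's actual model and features are not specified here.

```python
# Illustrative sketch (not the paper's method): classify coarse body
# states from ultrasonic echo features using a nearest-centroid rule.
import math

# Pretend each observation is (echo_delay_ms, doppler_shift_hz) averaged
# over the sensor array; these centroids stand in for a trained model.
CENTROIDS = {
    "standing": (4.0, 0.1),
    "sitting":  (6.5, 0.1),
    "falling":  (5.0, 3.0),  # large Doppler shift = fast motion
}

def classify(features):
    """Assign the state whose centroid is closest to the observation."""
    return min(CENTROIDS, key=lambda k: math.dist(features, CENTROIDS[k]))

print(classify((4.9, 2.8)))  # a fast-moving echo pattern → "falling"
```

A real system would learn such decision boundaries from labeled sensor recordings rather than hand-picked centroids, but the privacy argument is the same: only motion signatures, not images or speech, ever reach the classifier.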
A bio-inspired bot uses water from the environment to create a gas and launch itself from the water’s surface.
The robot, which can travel 26 metres through the air after take-off, could be used to collect water samples in hazardous and cluttered environments, such as during flooding or when monitoring ocean pollution.
Robots that can transition from water to air are desirable in these situations, but the launch requires a lot of power, which has been difficult to achieve in small robots.
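A back-of-envelope check gives a feel for the power problem. Assuming an ideal 45° ballistic launch with no air drag (a simplification; the real robot's trajectory will differ), the reported 26-metre flight implies a launch speed of roughly 16 m/s:

```python
# Back-of-envelope estimate under stated assumptions: ideal projectile
# motion, 45-degree launch angle, no drag.
import math

g = 9.81        # gravitational acceleration, m/s^2
range_m = 26.0  # reported flight distance

# Ideal projectile range: R = v^2 * sin(2*theta) / g; at theta = 45°, R = v^2 / g.
v = math.sqrt(g * range_m)
print(f"required launch speed ≈ {v:.1f} m/s")  # ≈ 16.0 m/s
```

Reaching that speed from a standstill on the water's surface in a fraction of a second is what demands a power density that small batteries and motors struggle to deliver, which is why a chemical gas-producing reaction is attractive.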
Tel Aviv-based Explorium, a developer of cloud-based data and AI model prep solutions, has raised $19 million in venture capital.