
DeepScribe, an AI-powered medical transcription platform, has raised $30 million in Series A funding led by Nina Achadjian at Index Ventures, with participation from Scale AI CEO Alexandr Wang, Figma CEO Dylan Field and existing investors Bee Partners, Stage 2 Capital and 1984 Ventures. The round follows the company’s $5.2 million seed round announced in May 2021. DeepScribe was founded in 2017 by Akilesh Bapu, Matthew Ko and Kairui Zeng with the aim of unburdening doctors from tedious data entry and allowing them to focus on their patients.

In 2019, DeepScribe launched its ambient voice AI technology, which summarizes natural patient-physician conversations. The idea for DeepScribe grew out of Bapu and Ko’s own experiences. Bapu’s father was an oncologist, and Bapu saw the toll that documentation took on his father’s work/life balance. Ko, meanwhile, saw how the burden of clinical documentation affected patients’ perception of care while he coordinated care for his mother after she was diagnosed with breast cancer.

Frustrated with the care his mother was receiving, Ko turned to Bapu and Bapu’s father for help. The pair came to understand the importance of clinical documentation and realized that recent breakthroughs in artificial intelligence and natural language processing were not being applied to the problem. They decided to build a platform to address it.

University of Utah engineers have built a robotic exoskeleton that gives people with prosthetic legs a power boost, making walking less difficult.

“It’s equivalent to taking off a 26-pound backpack [while walking],” lead researcher Tommaso Lenzi said in a press release. “That is a really big improvement.”

The challenge: About 220,000 people in the U.S. have had above-knee amputations, meaning their leg was amputated somewhere between the knee and hip.

As Malcolm Murdock, a machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, “AI doesn’t have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem.”

“We are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.” —Andrew Lohn, Georgetown University.

In interviews with AI experts, IEEE Spectrum has uncovered six real-world AI worst-case scenarios that are far more mundane than those depicted in the movies. But they’re no less dystopian. And most don’t require a malevolent dictator to bring them to full fruition. Rather, they could simply happen by default, unfolding organically—that is, if nothing is done to stop them. To prevent these worst-case scenarios, we must abandon our pop-culture notions of AI and get serious about its unintended consequences.


Robots could become crucial caregivers in the future, with new technologies constantly in development to help improve the quality of life for the globe’s aging population and for people with physical disabilities.

One example comes from Cornell University scientist Tapomayukh Bhattacharjee, who is developing a robotic arm to help feed people with spinal injuries, according to a press statement.

A robot as an extension of the body

Bhattacharjee, an assistant professor of computer science at Cornell, believes that robots have the potential to transform caregiving and that eating is one of the key areas where they could provide a helping robotic hand.

The roboticist was recently awarded a four-year, $1.5 million grant from the National Science Foundation’s National Robotics Initiative to help him and his EmPRISE Lab develop caregiving robotics solutions for people with physical disabilities.


In 1987, at the beginning of the IT-driven technological revolution, the Nobel Prize-winning economist Robert Solow famously quipped that “you can see the computer age everywhere but in the productivity statistics.”

More than 30 years later, another technological revolution seems imminent. In what is called “the Fourth Industrial Revolution,” attention is focused on automation and robots. Many have argued that robots may significantly transform corporations, leading to massive worker displacement and a significant increase in firms’ capital intensity. Yet, despite these omnipresent predictions, robots are hard to find, not only in aggregate productivity statistics but anywhere else.

While investment in robots has increased significantly in recent years, it remains a small share of total investment. The use of robots is almost zero in industries other than manufacturing, and even within manufacturing, robotization is very low in all but a few poster-child industries, such as automotive. In the manufacturing sector, for example, robots account for around 2.1% of total capital expenditures; for the economy as a whole, they account for about 0.3% of total investment in equipment. Moreover, recent increases in robot sales are driven mostly by China and other developing nations playing catch-up in manufacturing, rather than by increasing robotization in developed countries. These low levels of robotization cast doubt on doomsday projections in which robots cut demand for human employees.

But is it too early to assess the future of robots? Is it possible that robots are still in their infancy, and that current levels of adoption are not indicative of their future impact on the workplace? After all, Solow’s productivity paradox was ultimately resolved in subsequent decades, as investments in digital technologies paid off, transforming the world in the process.

Maybe, but maybe not. A decade after Solow’s observation, the economic impact of IT was evident. The same cannot be said about robotics.
