
How Studying Animal Sentience Could Help Solve the Ethical Puzzle of Sentient AI

As I've said before, we should at least attempt to reverse engineer the brains of mice, lab rats, crows, octopuses, pigs, and chimps, and end on the… human brain. It would be messy and expensive, and animal activists would be running around it.


Lurking just below the surface of these concerns is the question of machine consciousness. Even if there is “nobody home” inside today’s AIs, some researchers wonder if they may one day exhibit a glimmer of consciousness—or more. If that happens, it will raise a slew of moral and ethical concerns, says Jonathan Birch, a professor of philosophy at the London School of Economics and Political Science.

As AI technology leaps forward, ethical questions sparked by human-AI interactions have taken on new urgency. “We don’t know whether to bring them into our moral circle, or exclude them,” said Birch. “We don’t know what the consequences will be. And I take that seriously as a genuine risk that we should start talking about. Not really because I think ChatGPT is in that category, but because I don’t know what’s going to happen in the next 10 or 20 years.”

In the meantime, he says, we might do well to study other non-human minds—like those of animals. Birch leads the university’s Foundations of Animal Sentience project, a European Union-funded effort that “aims to try to make some progress on the big questions of animal sentience,” as Birch put it. “How do we develop better methods for studying the conscious experiences of animals scientifically? And how can we put the emerging science of animal sentience to work, to design better policies, laws, and ways of caring for animals?”

Paralyzed NY man can move and feel again — thanks to AI ‘miracle’ surgery

A Long Island man who was paralyzed in a diving accident has regained motion and feeling in his body after a breakthrough, machine learning-based surgery that successfully “connected a computer to his brain” through microelectrode implants.

Now, the successful case of Massapequa’s Keith Thomas, 45, is being heralded throughout the medical world as a “pioneer” case for AI-infused surgeries to treat or cure intractable illnesses like blindness, deafness, ALS, seizures, cerebral palsy and Parkinson’s, experts at Manhasset’s Feinstein Institutes for Medical Research say.

“This is the first time a paralyzed person is regaining movement and sensation by having their brain, body and spinal cord electronically linked together,” Chad Bouton, a professor at Feinstein’s Institute of Bioelectronic Medicine, told The Post.

Is AI up to snuff? Cardiac clinical trial points to yes

There’s a lot of talk about the potential for artificial intelligence in medicine, but few researchers have shown through well-designed clinical trials that it could be a boon for doctors, health care providers and patients.

Now, researchers at Stanford Medicine have conducted one such trial; they tested an artificial intelligence algorithm used to evaluate heart function. The algorithm, they found, improves evaluations of heart function from echocardiograms — movies of the beating heart, filmed with ultrasound waves, that show how efficiently it pumps blood.

“This blinded, randomized clinical trial is, to our knowledge, one of the first to evaluate the performance of an artificial intelligence algorithm in medicine. We showed that AI can help improve accuracy and speed of echocardiogram readings,” said James Zou, PhD, assistant professor of biomedical data science and co-senior author on the study. “This is important because heart disease is the leading cause of death in the world. There are over 10 million echocardiograms done each year in the U.S., and AI has the potential to add precision to how they are interpreted.”

‘Organoid Intelligence’ — how mini-brains could replace AI for supercomputing

While artificial intelligence has the ability to crunch huge amounts of data in a short span of time, it still falls behind when it comes to finding an energy-efficient way to make complex decisions. Researchers from Johns Hopkins University in the US are now proposing that 3D cell structures that mimic brain functions can be used to create biocomputers.


Researchers create electronics-free robotic gripper with 3D printing

According to the team at UC San Diego, the soft gripper can be put to use right after it comes off the 3D printer and is equipped with built-in gravity and touch sensors, which enable it to pick up, hold, and release objects. “It’s the first time such a gripper can both grip and release. All you have to do is turn the gripper horizontally. This triggers a change in the airflow in the valves, making the two fingers of the gripper release,” said a statement by the university.

New AI chatbot equips doctors with the latest research

Could this be the future of medicine?

In order for chatbots to be useful to doctors and other health professionals, they are going to need access to the latest research. But current models simply don’t have access to data beyond their latest update. Daniel Nadler has been working to resolve this issue with his new startup OpenEvidence.

He plans to achieve his lofty goal by “marrying these language models with a real-time firehose of clinical documents,” Nadler told Forbes on Thursday. He claims that his new model “can answer with an open book, as opposed to a closed book.”
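The “open book” idea Nadler describes is essentially retrieval augmentation: rank a stream of fresh documents against the question, then hand the best matches to the model as context. A minimal sketch of that ranking step follows; the document texts, the term-overlap scoring, and all names here are invented for illustration and are not OpenEvidence’s actual pipeline.

```python
# Toy retrieval step for an "open book" medical chatbot: rank documents
# by bag-of-words overlap with the question and build a prompt from the
# top matches. Real systems would use embeddings, not term counts.
from collections import Counter

DOCUMENTS = [
    "2023 trial: drug X reduced stroke risk in patients with atrial fibrillation.",
    "Review of statin therapy and LDL cholesterol targets.",
    "New guidance on anticoagulation for atrial fibrillation patients.",
]

def tokenize(text):
    return [w.strip(".,:").lower() for w in text.split()]

def score(question, doc):
    # Count terms shared between the question and the document.
    q, d = Counter(tokenize(question)), Counter(tokenize(doc))
    return sum(min(q[t], d[t]) for t in q)

def retrieve(question, k=2):
    ranked = sorted(DOCUMENTS, key=lambda d: score(question, d), reverse=True)
    return ranked[:k]

def build_prompt(question):
    # The retrieved passages become the model's "open book".
    context = "\n".join(retrieve(question))
    return f"Answer using the documents below.\n{context}\nQuestion: {question}"

print(build_prompt("What is the latest on atrial fibrillation treatment?"))
```

Because retrieval happens at query time, the knowledge cutoff moves with the document stream rather than with the model’s training date.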

Meet the Autonomous Lab of the Future

To accelerate development of useful new materials, researchers are building a new kind of automated lab that uses robots guided by artificial intelligence.

“Our vision is using AI to discover the materials of the future,” said Yan Zeng, a staff scientist leading the A-Lab at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab). The “A” in A-Lab is deliberately ambiguous, standing for artificial intelligence (AI), automated, accelerated, and abstracted, among others.

Scientists have computationally predicted hundreds of thousands of novel materials that could be promising for new technologies – but testing to see whether any of those materials can be made in reality is a slow process. Enter A-Lab, which can process 50 to 100 times as many samples as a human every day and use AI to quickly pursue promising finds.
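The loop described above — predict which candidates look promising, attempt synthesis, and feed the outcome back into the next choice — can be caricatured in a few lines. Everything in this sketch (the candidate list, the hidden “synthesizability” values, the noisy predictor) is invented for illustration; it is not the A-Lab’s software.

```python
# Toy closed-loop screening: repeatedly pick the untried candidate with
# the best predicted score, "synthesize" it, and record the result.
import random

random.seed(0)

# Hypothetical candidates, each with a hidden ease-of-synthesis value.
CANDIDATES = {f"material_{i}": random.random() for i in range(20)}

def predicted_score(name):
    # Stand-in for an ML model's prediction: hidden value plus noise.
    return CANDIDATES[name] + random.gauss(0, 0.1)

def synthesize(name):
    # Stand-in for a robotic synthesis attempt and its measurement.
    return CANDIDATES[name] > 0.7

results = {}
for _ in range(5):
    # Greedily try the most promising candidate not yet attempted.
    untried = [c for c in CANDIDATES if c not in results]
    best = max(untried, key=predicted_score)
    results[best] = synthesize(best)

print(sum(results.values()), "of", len(results), "attempts succeeded")
```

The speedup claimed for A-Lab comes from running this pick-try-record cycle around the clock, with the model steering effort toward candidates most likely to pan out.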

Artificial intelligence vs. evolving super-complex tumor intelligence: critical viewpoints

Recent developments in various domains have led to a growing interest in the potential of artificial intelligence to enhance our lives and environments. In particular, the application of artificial intelligence in the management of complex human diseases, such as cancer, has garnered significant attention. The evolution of artificial intelligence is thought to be influenced by multiple factors, including human intervention and environmental factors. Similarly, tumors, being heterogeneous and complex diseases, continue to evolve due to changes in the physical, chemical, and biological environment. Additionally, the concept of cellular intelligence within biological systems has been recognized as a potential attribute of biological entities. Therefore, it is plausible that the tumor intelligence present in cancer cells of affected individuals could undergo super-evolution due to changes in the pro-tumor environment. Thus, a comparative analysis of the evolution of artificial intelligence and super-complex tumor intelligence could yield valuable insights to develop better artificial intelligence-based tools for cancer management.

Tumor evolution refers to the changes that occur in a cancerous tumor over time as it grows and spreads (Hanahan and Weinberg, 2011; Lyssiotis and Kimmelman, 2017). These changes are the result of genetic mutations and changes in gene expression that can give rise to new subpopulations of cells within the tumor (Lyssiotis and Kimmelman, 2017; Balaparya and De, 2018). Over time, these subpopulations may accumulate subsequent mutations that confer enhanced survival and heightened proliferative capacity, thereby culminating in the emergence of a more formidable tumor exhibiting either heightened aggressiveness or treatment resistance (Balaparya and De, 2018; Gui and Bivona, 2022; Shin and Cho, 2023). Tumor evolution can have important implications for cancer diagnosis and treatment.

3D display could soon bring touch to the digital world

Imagine an iPad that’s more than just an iPad—with a surface that can morph and deform, allowing you to draw 3D designs, create haiku that jump out from the screen and even hold your partner’s hand from an ocean away.

That’s the vision of a team of engineers from the University of Colorado Boulder. In a new study, they’ve created a one-of-a-kind shape-shifting display that fits on a card table. The device is made from a 10-by-10 grid of soft robotic “muscles” that can sense outside pressure and pop up to create patterns. It’s precise enough to generate scrolling text and fast enough to shake a chemistry beaker filled with fluid.

It may also deliver something even rarer: the sense of touch in a digital age.
