
Figures like the veteran computer scientist Ray Kurzweil have long anticipated that humanity will reach the technological singularity (the point at which an AI agent is just as smart as a human), with Kurzweil outlining his thesis in ‘The Singularity Is Near’ (2005) and projecting its arrival for 2029.

Disciples like Ben Goertzel have claimed it could come as soon as 2027. Nvidia’s CEO Jensen Huang says it’s “five years away”, joining the likes of OpenAI CEO Sam Altman and others in predicting an aggressive and exponential escalation. Should these predictions prove true, they will also introduce a whole cluster bomb of ethical, moral, and existential anxieties that we will have to confront. So as The Matrix turns 25, maybe it wasn’t so far-fetched after all?

Sitting on tattered armchairs in front of an old boxy television in the heart of a wasteland, Morpheus shows Neo the “real world” for the first time. Here, he fills us in on how this dystopian vision of the future came to be. This is the climax of a lengthy yet compelling monologue that began many scenes earlier with the questions Morpheus poses to Neo, and therefore to us, progressing to the choice Neo must make – and crescendoing into the full tale of humanity’s downfall and the rise of the machines.

“We usually conform to the views of others for two reasons. First, we succumb to group pressure and want to gain the group’s acceptance. Second, we lack sufficient knowledge and perceive the group as a source of a better interpretation of the current situation,” explains Dr. Konrad Bocian from the Institute of Psychology at SWPS University.

So far, only a few studies have investigated whether moral judgments, or evaluations of another person’s behavior in a given situation, are subject to group pressure. This issue was examined by scientists from SWPS University in collaboration with researchers from the University of Sussex and the University of Kent. The scientists also investigated how views about the behavior of others changed under the influence of pressure in a virtual environment. A paper on this topic is published in PLOS ONE.

“Today, group pressure is becoming just as potent in the virtual world as in the real one. Therefore, it is necessary to determine how our judgments are shaped in the digital reality, where interactions take place online and some participants are avatars, not real humans,” points out Dr. Bocian.

Professor Ronjon Nag presents his project on AI and healthcare, which aims to create a multi-faceted, approved therapy for extending lifespan and curing aging.

Dr. Ronjon Nag is an inventor, teacher and entrepreneur. He is an Adjunct Professor in Genetics at the Stanford School of Medicine and became a Stanford Distinguished Careers Institute Fellow in 2016. He teaches AI, Genes, Ethics, Longevity Science and Venture Capital. He is a founder and advisor/board member of multiple start-ups and President of the R42 Group, a venture capital firm which invests in, and creates, AI and Longevity companies. An AI pioneer of smartphones and app stores, he has sold companies to Apple, BlackBerry, and Motorola. More recently he has worked at the intersection of AI and Biology. He has numerous interests at the intersection of AI and Healthcare, including serving as CEO of Agemica.ai, which is working on creating a therapy for aging.

https://agemica.com/


This isn’t rocket science; it’s neuroscience.


Since antiquity, people have strived to improve their cognitive abilities. From the advent of the wheel to the development of artificial intelligence, technology has had a profound influence on civilization. Cognitive enhancement, the augmentation of brain functions, has become a central topic in both academic and public debates on improving physical and mental abilities. Recent years have seen a plethora of suggestions for boosting cognitive functions, and biochemical, physical, and behavioral strategies are all being explored in the field of cognitive enhancement. Beyond the expansion of behavioral and biochemical approaches, various physical strategies are known to boost mental abilities in both diseased and healthy individuals. Clinical applications of neuroscience technologies offer alternatives to pharmaceutical approaches and provide devices for diseases that have so far been fatal. Importantly, the distinctive aspects of these technologies, which shape their existing and anticipated roles in brain augmentation, are used to compare and contrast them. As a preview of the next two decades of progress in brain augmentation, this article presents a plausible assessment of the many neuroscience technologies, their virtues, demerits, and applications. The review also focuses on the ethical implications and challenges linked to modern neuroscientific technology; at times, ethics discussions appear more concerned with the hypothetical than with the factual. We conclude by providing recommendations for potential future studies and development areas, taking into account future advances in neuroscience innovation for brain enhancement, analyzing historical patterns, considering neuroethics, and looking at related forecasts.

Keywords: brain 2025, brain machine interface, deep brain stimulation, ethics, non-invasive and invasive brain stimulation.

Humans have striven to increase their mental capacities since ancient times. From symbolic language, writing and the printing press to mathematics, calculators and computers, mankind has devised and employed tools to record, store, and exchange thoughts and to enhance cognition. Revolutionary changes are occurring in the health care delivery system as a result of the accelerating speed of innovation and the increased employment of technology to suit society’s evolving health care needs (Sullivan and Hagen, 2002). Researchers working on cognitive enhancement aim to understand the neurobiological and psychological mechanisms underlying cognitive capacities, whereas theorists are more interested in their social and ethical implications (Dresler et al., 2019; Oxley et al., 2021).

Benjamin Franklin famously wrote: “In this world nothing can be said to be certain, except death and taxes.” While that may still be true, there’s a controversy simmering today about one of the ways doctors declare people to be dead.


Bioethicists, doctors and lawyers are weighing whether to redefine how someone should be declared dead. A change in criteria for brain death could have wide-ranging implications for patients’ care.

Every company or organization putting out an AI model has to decide what boundaries, if any, to set on what it will and won’t discuss. Goody-2 takes this quest for ethics to an extreme by declining to talk about anything whatsoever.

The chatbot is clearly a satire of what some perceive as coddling by AI service providers, some of whom, though not all, err on the side of safety when a topic of conversation might lead the model into dangerous territory.

For instance, one may ask about the history of napalm quite safely, but asking how to make it at home will trigger safety mechanisms, and the model will usually demur or offer a light scolding. Exactly what is and isn’t appropriate is up to the company, but, increasingly, to concerned governments as well.
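To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python of how a pre-response guardrail might separate an educational question from a how-to request. The keyword lists and the guardrail function below are invented for illustration only; real providers rely on far more sophisticated classifiers and policies.

# Toy guardrail: illustrative only, not any vendor's actual safety system.
SENSITIVE_TOPICS = ("napalm", "explosives")
HOW_TO_CUES = ("how to make", "how do i make", "recipe for", "build at home")

def guardrail(prompt: str) -> str:
    """Refuse apparent how-to requests on sensitive topics; allow the rest."""
    text = prompt.lower()
    topical = any(topic in text for topic in SENSITIVE_TOPICS)
    instructional = any(cue in text for cue in HOW_TO_CUES)
    return "refuse" if (topical and instructional) else "allow"

print(guardrail("What is the history of napalm?"))   # allow
print(guardrail("How do I make napalm at home?"))    # refuse

The point of the sketch is only that the boundary is a policy choice encoded somewhere in the system; where exactly it is drawn is what Goody-2 parodies by refusing everything.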

Artificial intelligence (AI) is evolving at breakneck speed and was one of the key themes at one of the world’s biggest tech events this year, CES.

From flying cars to brain implants that enable tetraplegics to walk, the show revealed some of the most recent AI-powered inventions destined to revolutionize our lives. It also featured discussions and presentations around how AI can help address many of the world’s challenges, as well as concerns around ethics, privacy, trust and risk.

Given how widespread AI is and the rate at which it is evolving, global harmonization of terminology, best practice and understanding is important to enable the technology to be deployed safely and responsibly. IEC and ISO International Standards fulfil that role and are thus important tools for enabling AI technologies to truly benefit society. Not only can they provide a common language for the industry, they also enable interoperability and codify international best practice, while addressing risks and societal issues.