
What Would It Mean for AI to Become Conscious?

As artificial intelligence systems take on more tasks and solve more problems, it’s hard to say which is rising faster: our interest in them or our fear of them. Futurist Ray Kurzweil famously predicted that “By 2029, computers will have emotional intelligence and be convincing as people.”

We don’t know how accurate this prediction will turn out to be. Even if it takes more than 10 years, though, is it really possible for machines to become conscious? If the machines Kurzweil describes say they’re conscious, does that mean they actually are?

Perhaps a more relevant question at this juncture is: what is consciousness, and how do we replicate it if we don’t understand it?

Paralyzed man walks using brain-controlled robotic suit

A tetraplegic man has been able to move all four of his paralyzed limbs by using a brain-controlled robotic suit, researchers have said.

The 28-year-old man from Lyon, France, known as Thibault, was paralyzed from the shoulders down after falling 40 feet from a balcony, severing his spinal cord, the AFP news agency reported.

He had some movement in his biceps and left wrist, and was able to operate a wheelchair using a joystick with his left arm.

This AI writes a text adventure while you play it

It’s easy to imagine advances in AI will have an impact on strategy games and digital versions of board games like Chess and Go, but one of the most interesting implementations of AI technology I’ve seen so far is a text adventure.

AI Dungeon 2 by Nick Walton uses OpenAI to simulate an old-school text adventure of the Zork variety, only instead of having to read the designer’s mind to figure out what to type to use this thing on that thing, you write plain English and get results. It helps to start sentences with verbs but you’ll get a response to basically anything, and that response is likely to be surprising. I played a wizard exploring a ruin and within a handful of turns I’d found out I was responsible for the state of these ruins and confronted a younger version of myself.

Why AI Will Be the Best Tool for Extending Our Longevity

Dmitry Kaminskiy speaks as though he were trying to unload everything he knows about the science and economics of longevity—from senolytics research that seeks to stop aging cells from spewing inflammatory proteins and other molecules to the trillion-dollar life extension industry that he and his colleagues are trying to foster—in one sitting.

At the heart of the discussion with Singularity Hub is the idea that artificial intelligence will be the engine that drives breakthroughs in how we approach healthcare and healthy aging—a concept with little traction even just five years ago.

“At that time, it was considered too futuristic that artificial intelligence and data science … might be more accurate compared to any hypothesis of human doctors,” said Kaminskiy, co-founder and managing partner at Deep Knowledge Ventures, an investment firm that is betting big on AI and longevity.

Edward Snowden on the Dangers of Mass Surveillance and Artificial General Intelligence

AI is Pandora’s box, it’s true…

On the one hand we can’t close it and on the other hand our current direction is not good. And this is gonna get worse as AI starts taking its own ‘creative’ decisions… the human overlords will claim it has nothing to do with them if and when things go wrong.

The solution for commercialization is actually quite simple.

If my dog attacks a child on the street when off the leash who is responsible?

The owner, of course. Although I dislike that word when it comes to anything living; maybe “the dog’s human representative” is a better term.

Businesses must be held accountable for their AI dogs off the leash; if necessary, keep the leash on for longer. So there’s no need to stop commercialization — just reinstate clear accountability, something that seems to be lacking today.

And in my view, this would actually be more profitable… Everyone happy then.

The US is in danger of losing its global leadership in AI

Some argue that only a “Sputnik” moment will wake the American people and government to act with purpose, just as the 1957 Soviet launch of a satellite catalyzed new educational and technological investments. We disagree. We have been struck by the broad, bipartisan consensus in America to “get AI right” now. We are in a rare moment when challenge, urgency, and consensus may just align to generate the energy we need to extend our AI leadership and build a better future.


Congress asked us to serve on a bipartisan commission of tech leaders, scientists, and national security professionals to explore the relationship between artificial intelligence (AI) and national security. Our work is not complete, but our initial assessment is worth sharing now: in the next decade, the United States is in danger of losing its global leadership in AI and its innovation edge. That edge is a foundation of our economic prosperity, military power and ultimately the freedoms we enjoy.

As we consider the leadership stakes, we are struck by AI’s potential to propel us towards many imaginable futures. Some hold great promise; others are concerning. If past technological revolutions are a guide, the future will include elements of both.

Some of us have dedicated our professional lives to advancing AI for the benefit of humanity. AI technologies have been harnessed for good in sectors ranging from health care to education to transportation. Today’s progress only scratches the surface of AI’s potential. Computing power, large data sets, and new methods have led us to an inflection point where AI and its sub-disciplines (including machine vision, machine learning, natural language understanding, and robotics) will transform the world.

Simple machine learning scorecard for seizures is saving lives

Computer scientists from Duke University and Harvard University have joined with physicians from Massachusetts General Hospital and the University of Wisconsin to develop a machine learning model that can predict which patients are most at risk of having destructive seizures after suffering a stroke or other brain injury.

A point system they’ve developed helps determine which patients should receive expensive continuous electroencephalography (cEEG) monitoring. Implemented nationwide, the authors say, their approach could help hospitals monitor nearly three times as many patients, saving many lives as well as $54 million each year.
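The appeal of a point-based scorecard is that clinicians can apply it by hand: each risk factor carries a small integer weight, the points are summed, and monitoring is recommended above a threshold. A minimal sketch of that general shape — the feature names, point values, and threshold below are hypothetical illustrations, not the published model:

```python
# Hypothetical point-based risk scorecard, illustrating the general
# structure of an interpretable scoring model (weights are made up).
SCORECARD = {
    "prior_seizure": 2,
    "epileptiform_discharges": 1,
    "brain_lesion_on_imaging": 1,
}

def risk_points(patient: dict) -> int:
    """Sum the points for every risk factor the patient exhibits."""
    return sum(pts for factor, pts in SCORECARD.items() if patient.get(factor))

def needs_ceeg(patient: dict, threshold: int = 2) -> bool:
    """Recommend continuous EEG monitoring when the score meets the threshold."""
    return risk_points(patient) >= threshold

# Example: a patient with a prior seizure alone already meets the threshold.
print(needs_ceeg({"prior_seizure": True}))  # True
print(needs_ceeg({"brain_lesion_on_imaging": True}))  # False
```

Because every weight is a small integer attached to a named clinical feature, the model stays auditable — which is exactly the property the researchers emphasize over black-box alternatives.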

A paper detailing the methods behind the interpretable machine learning approach appeared online June 19 in the Journal of Machine Learning Research.

World must prepare for biological weapons that target ethnic groups based on genetics, says Cambridge University

Biological weapons could be built which target individuals in a specific ethnic group based on their DNA, a report by the University of Cambridge has warned.

Researchers from Cambridge’s Centre for the Study of Existential Risk (CSER) said the government was failing to prepare for ‘human-driven catastrophic risks’ that could lead to mass harm and societal collapse.

In recent years, advances in science and technology such as genetic engineering, artificial intelligence (AI), and autonomous vehicles have opened the door to a host of new threats.