Dr. Isaac Asimov was a prolific science fiction author, biochemist, and professor, best known for his science fiction and his popular science essays. Born in Russia in 1920 and brought to the United States by his family as a young child, he went on to become one of the most influential figures in the world of speculative fiction. He wrote hundreds of books on a variety of topics, but he’s especially remembered for series like the “Foundation” series and the “Robot” series.
Asimov’s science fiction often dealt with themes and ideas that pertained to the future of humanity.

The “Foundation” series, for example, introduced the idea of “psychohistory” – a mathematical framework for predicting the future from the statistical behavior of large populations. While we don’t have psychohistory as Asimov described it, his works did reflect the belief that societies operate on understandable and potentially predictable principles.

Asimov’s “Robot” series introduced the world to the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws have been influential in discussions about robot ethics and the future of AI, even though they are fictional constructs.
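The defining feature of the Three Laws is that they form a strict priority ordering: each law yields only where no higher law applies. As a toy illustration (this is not from Asimov's text, and the action descriptors are invented for the sketch), the precedence can be encoded as an ordered check:

```python
# Toy sketch: the Three Laws as an ordered priority check over a
# hypothetical action description. All field names are invented.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would injure a human, or allow harm through inaction
    ordered_by_human: bool = False   # a human has ordered this action
    endangers_robot: bool = False    # the action risks the robot's own existence

def permitted(action: Action) -> bool:
    """Return True if the action is allowed under the Laws' ordering."""
    # First Law: never harm a human — overrides everything below.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already known not to violate the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the higher laws are silent.
    return not action.endangers_robot

# A harmful order is refused; a risky but harmless order is still obeyed.
print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
```

The point of the ordering is visible in the second call: the Second Law's "except where such orders would conflict with the First Law" clause is what makes the checks sequential rather than independent.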

Like those of many futurists and speculative authors, Asimov’s predictions were a mix of hits and misses.

The US has relied on computer simulations to verify the performance of its nuclear stockpile since 1992, but it will soon get more realistic estimates.

Three US national defense labs are building a test site, one thousand feet underground in Albuquerque, New Mexico, that will use powerful X-rays to verify the reliability of the country’s nuclear stockpile, a press release said.

The US nuclear program heavily relied on actual testing of warheads to determine if its stockpile could serve as a deterrent when called upon. This, however, changed in 1992, after then-President George H.W. Bush signed a law calling for a moratorium on nuclear testing.

If you wanted to, you could access an “evil” version of OpenAI’s ChatGPT today—though it’s going to cost you. It also might not be legal, depending on where you live.

However, getting access is a bit tricky. You’ll have to find the right web forums with the right users. One of those users might have a post marketing a private and powerful large language model (LLM). You’ll connect with them on an encrypted messaging service like Telegram where they’ll ask you for a few hundred dollars in cryptocurrency in exchange for the LLM.

Once you have access to it, though, you’ll be able to use it for all the things that ChatGPT or Google’s Bard prohibit: having conversations about any illicit or ethically dubious topic under the sun, learning how to cook meth or build pipe bombs, or even fueling a cybercriminal enterprise by way of phishing schemes.

A new proposal spells out the very specific ways companies should evaluate AI security and enforce censorship in AI models.

Ever since the Chinese government passed a law on generative AI back in July, I’ve been wondering how exactly China’s censorship machine would adapt for the AI era.

Last week we got some clarity about what all this may look like in practice.

Conservation laws are central to our understanding of the universe, and now scientists have expanded our understanding of these laws in quantum mechanics.

A conservation law in physics describes the preservation of certain quantities or properties in isolated physical systems over time, such as mass-energy, momentum, and electric charge.

Conservation laws are fundamental to our understanding of the universe because they define the processes that can or cannot occur in nature. For example, the conservation of momentum reveals that within a closed system, the sum of all momenta remains unchanged before and after an event, such as a collision.
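The momentum claim above is easy to check numerically. As a minimal sketch with made-up masses and velocities, consider a one-dimensional perfectly inelastic collision, where the two bodies stick together afterward:

```python
# Sketch: conservation of momentum in a 1-D perfectly inelastic collision.
# The masses and velocities are illustrative numbers, not from the article.

m1, v1 = 2.0, 3.0    # kg, m/s
m2, v2 = 1.0, -1.5   # kg, m/s (moving toward the first body)

# Total momentum of the closed system before the collision: p = m1*v1 + m2*v2
p_before = m1 * v1 + m2 * v2

# After a perfectly inelastic collision the bodies move with one shared velocity,
# fixed by requiring the total momentum to be unchanged.
v_final = p_before / (m1 + m2)
p_after = (m1 + m2) * v_final

print(p_before, p_after)  # both 4.5 kg·m/s: the sum of momenta is unchanged
```

Note that kinetic energy is *not* conserved in this collision (some is lost to deformation and heat), which is exactly why conservation laws are useful: they tell you which quantities constrain the outcome regardless of the messy details of the event.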

There is evidence that some form of conscious experience is present by birth, and perhaps even in late pregnancy, an international team of researchers from Trinity College Dublin and colleagues in Australia, Germany and the U.S. has found.

The findings, published today in Trends in Cognitive Sciences, have important clinical, ethical and potentially legal implications, according to the authors.

In the study, titled “Consciousness in the cradle: on the emergence of infant experience,” the researchers argue that by birth the infant’s developing brain is capable of conscious experiences that can make a lasting imprint on their developing sense of self and understanding of their environment.