
Oh the things we can see and accomplish when time and death can no longer hinder us.


Immortality is eternal life: exemption from death, unending existence.
Human beings seem obsessed with the idea of immortality. But a study published in the journal Proceedings of the National Academy of Sciences argued, through a mathematical model, that it is impossible to stop ageing in multicellular organisms, including humans, potentially bringing the immortality debate to an end.
You probably don't want to die; most people don't. But death takes us all no matter what we want. In today's scenario, however, humans have found a way to attain that immortality. Watch the whole timeline video to find out how reaching immortality changes the world and the way we live.

DISCLAIMER: This Timeline/Comparison is based on public data, surveys, public comments and discussions, and approximate estimations that may be subject to some degree of error.


Every time a human or machine learns how to get better at a task, a trail of evidence is left behind. A sequence of physical changes — to cells in a brain or to numerical values in an algorithm — underlies the improved performance. But figuring out exactly which changes to make is no small feat. It's called the credit assignment problem, in which a brain or artificial intelligence system must pinpoint which pieces in its pipeline are responsible for errors and then make the necessary changes. Put more simply: it's a blame game to find who's at fault.

AI engineers solved the credit assignment problem for machines with a powerful algorithm called backpropagation, popularized in 1986 with the work of Geoffrey Hinton, David Rumelhart and Ronald Williams. It’s now the workhorse that powers learning in the most successful AI systems, known as deep neural networks, which have hidden layers of artificial “neurons” between their input and output layers. And now, in a paper published in Nature Neuroscience in May, scientists may finally have found an equivalent for living brains that could work in real time.
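The credit-assignment mechanics of backpropagation can be made concrete with a toy example. Below is a minimal NumPy sketch (an illustration of the general technique, not Hinton, Rumelhart and Williams' original formulation): a two-layer network learns XOR, and the backward pass uses the chain rule to assign each weight its share of the blame for the output error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 network trained on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)   # hidden layer
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: the chain rule propagates the error signal
    # from the output back through the hidden layer, assigning
    # each unit its share of the credit/blame.
    d_out = (out - y) * out * (1 - out)     # error at output units
    d_h = (d_out @ W2.T) * h * (1 - h)      # error assigned to hidden units

    # Gradient step on every weight and bias.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

The key point for the neuroscience analogy is the backward pass: the error computed at the output is pushed back through the same weights used in the forward pass, which is exactly the step that is hard to realize with biological neurons.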

A team of researchers led by Richard Naud of the University of Ottawa and Blake Richards of McGill University and the Mila AI Institute in Quebec revealed a new model of the brain’s learning algorithm that can mimic the backpropagation process. It appears so realistic that experimental neuroscientists have taken notice and are now interested in studying real neurons to find out whether the brain is actually doing it.

Nuclear Technology (2021), Vol. 207, No. 8, pp. 1163–1181.


In nuclear engineering applications, the nation's leading cybersecurity programs are developing digital solutions to support reactor control for both on-site and remote operation. Many of the advanced reactor technologies currently under development by the nuclear industry, such as small modular reactors and microreactors, require secure architectures for instrumentation, control, modeling, and simulation in order to meet their goals.[1] Thus, there is a strong need to develop communication solutions that enable the secure function of advanced control strategies and allow for an expanded use of data in operational decision making. This is important not only to avoid malicious attack scenarios focused on inflicting physical damage but also to counter covert attacks designed to introduce minor process manipulation for economic gain.[2]

These high-level goals necessitate many important functionalities: developing measures of trustworthiness of the code and simulation results against unauthorized access; developing measures of scientific confidence in the simulation results by carefully propagating and identifying dominant sources of uncertainty and by detecting software crashes early; and developing strategies to minimize computational resources in terms of memory usage, storage requirements, and CPU time. With these functionalities in place, the computer remains subservient to the programmer. The existing predictive modeling philosophy has generally relied on the programmer's ability to detect intrusion: giving the computer specific instructions for how to detect intrusion, keeping log files to track code changes, limiting access via perimeter defenses to prevent unauthorized access, and so on.

The last decade has witnessed a huge and impressive development of artificial intelligence (AI) algorithms in many scientific disciplines, which has prompted many computational scientists to explore how they can be embedded into predictive modeling applications. The reality, however, is that AI, premised since its inception on emulating human intelligence, is still very far from realizing its goal. Any human-emulating intelligence must be able to achieve two key tasks: the ability to store experiences and the ability to recall and process these experiences at will. Many of the existing AI advances have primarily focused on the latter goal and have accomplished efficient and intelligent data processing. Researchers on adversarial AI have shown over the past decade that any AI technique can be misled if presented with the wrong data.[3] Hence, this paper focuses on introducing a novel predictive paradigm, referred to as covert cognizance, or C2 for short, designed to enable predictive models to develop a secure, incorruptible memory of their execution, representing the first key requirement for a human-emulating intelligence. This memory, or self-cognizance, is key for a predictive model to be effective and resilient in both adversarial and nonadversarial settings. In our context, "memory" does not mean the dynamic or static memory allocated for a software execution; instead, it is a collective record of all its execution characteristics, including run-time information, the output generated in each run, the local variables rendered by each subroutine, etc.
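To make the notion of an execution "memory" concrete, here is a hypothetical Python sketch of a tamper-evident record of execution characteristics. This is not the paper's C2 technique (which embeds cognizance covertly in the model itself); it only illustrates the simpler idea of a hash-chained log of per-subroutine outputs, where any later manipulation of an entry breaks the chain. All names here are invented for illustration.

```python
import hashlib
import json

class ExecutionRecord:
    """Hash-chained log of a model's execution characteristics.

    Illustrative only: each entry stores a subroutine name, its
    outputs, and the hash of the previous entry, so tampering with
    any recorded value invalidates every later hash.
    """
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.prev_hash = self.GENESIS

    def record(self, subroutine, outputs):
        # Bind this entry to the whole history via prev_hash.
        entry = {"subroutine": subroutine,
                 "outputs": outputs,
                 "prev": self.prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.prev_hash = digest

    def verify(self):
        # Recompute every hash; any edited entry breaks the chain.
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("subroutine", "outputs", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

For example, after recording the outputs of a solver and a post-processing step, `verify()` returns `True`; silently editing a recorded output value makes it return `False`.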

In 2021, Instagram is the most popular social media platform. Recent statistics show that the platform now boasts over 1 billion monthly active users. With this many eyes on their content, influencers with a large enough following can reap great rewards through sponsored posts. The question for today then becomes: how do we effectively grow an Instagram account in the age of algorithmic bias? Instagram expert and AI growth specialist Faisal Shafique helps us answer this question, drawing on his experience growing his @fact account to about 8M followers while also helping major, edgy brands like Fashion Nova grow to over 20M.


Duke professor becomes second recipient of AAAI Squirrel AI Award for pioneering socially responsible AI.

Whether preventing explosions on electrical grids, spotting patterns among past crimes, or optimizing resources in the care of critically ill patients, Duke University computer scientist Cynthia Rudin wants artificial intelligence (AI) to show its work. Especially when it’s making decisions that deeply affect people’s lives.

While many scholars in the developing field of machine learning were focused on improving algorithms, Rudin instead wanted to use AI’s power to help society. She chose to pursue opportunities to apply machine learning techniques to important societal problems, and in the process, realized that AI’s potential is best unlocked when humans can peer inside and understand what it is doing.

At the outbreak of World War I, the French army was mobilized in the fashion of Napoleonic times. On horseback and equipped with swords, the cuirassiers wore bright tricolor uniforms topped with feathers—the same get-up as when they swept through Europe a hundred years earlier. The remainder of 1914 would humble tradition-minded militarists. Vast fields were filled with trenches, barbed wire, poison gas and machine gun fire—plunging the ill-equipped soldiers into a violent hellscape of industrial-scale slaughter.

Capitalism excels at revolutionizing war. Only three decades after World War I's first bayonet charges across no man's land, the US was able to incinerate entire cities with a single (nuclear) bomb blast. And since the destruction of Hiroshima and Nagasaki in 1945, our rulers' methods of war have been made yet more deadly and "efficient".

Today imperialist competition is driving a renewed arms race, as rival global powers invent new and technically more complex ways to kill. Increasingly, governments and military authorities are focusing their attention not on new weapons per se, but on computer technologies that can enhance existing military arsenals and capabilities. Above all is the race to master so-called artificial intelligence (AI).

In this post I outline my journey creating a dynamic NFT on the Ethereum blockchain with IPFS and discuss the possible use cases for scientific data. I do not cover algorithmic generation of static images (you should read Albert Sanchez Lafuente's neat step-by-step for that) but instead demonstrate how I used Cytoscape.js, Anime.js and genomic feature data to dynamically generate visualizations/art at run time when NFTs are viewed from a browser. I will also not be providing an overview of blockchain, but I highly recommend reading Yifei Huang's recent post: Why every data scientist should pay attention to crypto.

While stuck home during the pandemic, I was one of the 10 million who tried their hand at gardening, on our little apartment balcony in Brooklyn. The Japanese cucumbers were a hit with our neighbors and the tomatoes were a hit with the squirrels, but it was the peppers I most enjoyed watching grow. This is what set the objective for my first NFT: create a depiction of a pepper that ripens over time.

How much of the depiction is visualization and how much is art? Well, that's in the eye of the beholder. When you spend your days scrutinizing data points, worshiping best practices and optimizing everything from memory usage to lunch orders, it's nice to take some artistic license and make something just because you like it, which is exactly what I've done here. The depiction is authentically generated from genomic data features, but obviously this should not be viewed as any kind of serious biological analysis.
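The "ripens over time" idea boils down to mapping elapsed time since minting to a position along a colour gradient. The post's actual rendering runs in the browser with Cytoscape.js and Anime.js; as a language-agnostic sketch of just the time-to-ripeness mapping, here is a hypothetical Python version (function name, colours and the 30-day ripening window are all invented for illustration).

```python
def ripeness_color(mint_ts, now_ts, ripen_days=30):
    """Map elapsed seconds since minting to a green-to-red hex colour.

    Returns (hex_colour, ripeness_fraction), where the fraction is
    clamped to [0, 1] so the pepper stops changing once fully ripe.
    """
    elapsed = now_ts - mint_ts
    frac = min(max(elapsed / (ripen_days * 86400), 0.0), 1.0)
    green = (34, 139, 34)   # unripe pepper
    red = (178, 34, 34)     # fully ripe pepper
    # Linear interpolation per RGB channel.
    rgb = tuple(round(g + (r - g) * frac) for g, r in zip(green, red))
    return "#%02x%02x%02x" % rgb, frac
```

A viewer page would call this with the NFT's mint timestamp and the current time on each render, so the same token draws a greener or redder pepper depending on when it is viewed.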

According to this guy, the argument will be that AI is needed to make split-second decisions, and its role will gradually increase from there.


Retired U.S. Army General Stanley McChrystal joins ‘Influencers with Andy Serwer’ to share his biggest fears regarding artificial intelligence.

ANDY SERWER: I want to ask you about AI, artificial intelligence, because you wrote, "ceding the ability to manage relationships to an algorithm, we rolled a dangerous die." What are the specific uses of AI that concern you? And then we can talk about AI weapons, and that's really scary stuff. But let's talk about it generally first, and then specifically with regard to the military.

STANLEY MCCHRYSTAL: Let's start with something we all get. We call company X and we get this recording that says, if you're calling about so-and-so, hit one. If you're calling about so-and-so, hit two. And you go on for a while, and by the time you get to eight and they didn't cover your problem, you're furious. And you just want to talk to someone. You want somebody to take your problem for you.