
The Pentagon, the CIA, and the State Department are already using the technology.

Who can forget the attack on the Capitol last January 6th? For those who remember it well, there is an urgency to do something to keep it from ever happening again. One way to do that is to predict such events before they happen, much as we predict weather patterns.

Some data scientists believe they can achieve exactly that, according to The Washington Post. “We now have the data — and opportunity — to pursue a very different path than we did before,” said Clayton Besaw, who helps run CoupCast, a machine-learning-driven program based at the University of Central Florida that predicts coups for a variety of countries.

This type of predictive modeling has been around for a while but has mostly focused on countries where political unrest is far more common. Now, the hope is that it can be redirected to other nations to help prevent events like that of January 6th. And so far, the firms working in this field have been quite successful.
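As a rough illustration of how this kind of predictive modeling works in general, the sketch below trains a simple classifier on made-up country-level indicators. The feature names, data, and model choice are hypothetical stand-ins, not CoupCast’s actual methodology.

```python
# Illustrative sketch only: a toy unrest-risk classifier on synthetic
# country-level indicators. CoupCast's real features and models are not
# described in the article, so everything below is a hypothetical stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per country-year: GDP growth, years since last coup,
# polarization index, and an economic-shock indicator.
X = rng.normal(size=(500, 4))
# Synthetic labels: unrest is more likely with low growth and high polarization.
y = ((-1.5 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(size=500)) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new (equally hypothetical) country-year observation.
risk = model.predict_proba([[-0.8, 0.2, 1.1, 0.5]])[0, 1]
print(f"estimated unrest risk: {risk:.0%}")
```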

Today’s Google Doodle honors the late physicist Stephen Hawking on what would have been his 80th birthday. Hawking was a renowned cosmologist, and he spent his career theorizing about the origins of the universe, the underlying structure of reality, and the nature of black holes. But he became a household name for the way he communicated those ideas to the public through books and TV appearances.

“My goal is simple,” he once said. “It is a complete understanding of the universe, why it is as it is and why it exists at all.”

One of Hawking’s best-known ideas is that black holes slowly regurgitate information about all the matter they’ve swallowed — but it comes out in a jumbled form called Hawking radiation. In 1974, Hawking proposed that the event horizon of a black hole emits energy. Because energy can be converted into mass, and vice versa (that’s what Albert Einstein’s famous equation E=mc² tells us), emitting all that energy into space will shrink the black hole. Eventually, it will run out of mass and disappear.
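To put rough numbers on this, here is a small sketch that evaluates the standard textbook formulas for the Hawking temperature and evaporation time of a black hole of a given mass. The formulas are standard physics; the script itself is just an illustrative calculation, not code from the article.

```python
# Back-of-the-envelope Hawking radiation numbers for a black hole of mass M,
# using the standard textbook formulas.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
K_B  = 1.380649e-23      # Boltzmann constant, J/K

def hawking_temperature(mass_kg: float) -> float:
    """Temperature of the Hawking radiation: T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time(mass_kg: float) -> float:
    """Approximate lifetime: t = 5120*pi*G^2*M^3 / (hbar*c^4), in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

solar_mass = 1.989e30  # kg
print(f"temperature ~ {hawking_temperature(solar_mass):.2e} K")        # ~6e-8 K
print(f"lifetime ~ {evaporation_time(solar_mass) / 3.15e7:.2e} years")  # ~1e67 yr
```

A solar-mass black hole is far colder than the cosmic microwave background, which is why the evaporation Hawking described is negligible today and only matters over absurdly long timescales.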


A review of The Age of AI and Our Human Future by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher. Little, Brown and Company, 272 pages (November 2021).

Potential bridges across the menacing chasm of incompatible ideas are being demolished by a generation of wannabe autocrats presenting alternative facts as objective knowledge. This is not new. The twist is that modern network-delivery platforms can insert, at scale, absurd information into national discourse. In fact, sovereign countries intent on political mischief and social disruption already do this to their adversaries by manipulating the stories they see on the Internet.

About half the country gets its news from social media. These digital platforms dynamically tune the content they suggest according to age, gender, race, geography, family status, income, purchase history, and, of course, user clicks and cliques. We know that their algorithms demote perspectives that come from opposing viewpoints and amplify like-minded “us-against-them” stories, further exacerbating emotional responses on their websites and in the real world. Even long-established and once-reputable outlets capture attention with manufactured outrage and fabricated scandal. Advertising revenue pays for it all, but the real products here are the hundreds of millions of users who think they are getting a free service. Their profiles are sold by marketers to the highest corporate bidder.
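As a purely illustrative sketch of the dynamic described above (not the ranking code of any actual platform), a toy feed ranker that scores stories by predicted engagement will naturally push like-minded, outrage-inducing content to the top. The weights below are invented for the example.

```python
# Toy illustration only: a feed "ranker" that orders stories by predicted
# engagement. The scoring weights are invented; this is not any real
# platform's algorithm.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    agrees_with_user: float  # 0..1, similarity to the user's prior clicks
    outrage_score: float     # 0..1, how inflammatory the framing is

def predicted_engagement(story: Story) -> float:
    # Like-minded and outrage-inducing content gets clicked more often,
    # so an engagement-maximizing score rewards both.
    return 0.6 * story.agrees_with_user + 0.4 * story.outrage_score

feed = [
    Story("Measured policy analysis", agrees_with_user=0.5, outrage_score=0.1),
    Story("THEM vs US: the scandal they hid", agrees_with_user=0.9, outrage_score=0.9),
    Story("Opposing viewpoint explained", agrees_with_user=0.2, outrage_score=0.2),
]

for story in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(story):.2f}  {story.title}")
```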

About a decade ago, MIT researchers discovered a technique that speeds up physics modeling by a factor of 1,000. They spun it out into a new company, Akselos, which has been helping enterprises weave the technology into various kinds of digital twins used to improve shipping, refining, and wind power generation.

A digital twin is a virtual representation of an object or system that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning, and reasoning to help decision-making. Connected sensors on the physical asset collect data that can be mapped onto the virtual model.
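As a minimal sketch of that sensor-to-model loop (a hypothetical asset with a trivial physics update, not Akselos’ software), a digital twin can be as simple as a state object that ingests live readings and re-runs a simulation:

```python
# Minimal, hypothetical digital-twin loop: real deployments map many sensors
# onto a detailed physics model; here a single temperature sensor updates a
# toy thermal-wear model of one component.
from dataclasses import dataclass

@dataclass
class PumpTwin:
    temperature_c: float = 20.0   # last synced sensor reading
    wear_estimate: float = 0.0    # simulated degradation, 0..1

    def sync(self, sensor_reading_c: float) -> None:
        """Map a real-time sensor value onto the virtual model."""
        self.temperature_c = sensor_reading_c

    def simulate_step(self, hours: float) -> None:
        """Toy physics: wear accumulates faster above 60 C."""
        rate = 1e-4 * (1.0 + max(0.0, self.temperature_c - 60.0) / 10.0)
        self.wear_estimate = min(1.0, self.wear_estimate + rate * hours)

twin = PumpTwin()
for reading in [55.0, 68.0, 72.0]:   # pretend these arrive from field sensors
    twin.sync(reading)
    twin.simulate_step(hours=8)
    if twin.wear_estimate > 0.002:
        print(f"schedule inspection (wear={twin.wear_estimate:.4f})")
```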

The specific innovation improves the performance of finite element analysis (FEA) algorithms, which underpin most types of physics simulation. Akselos’ experience over the last decade can help executives explore the implications of the million-fold improvements in physics simulation that Nvidia is now demonstrating thanks to improvements in hardware, scalability, and new algorithms.
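For readers unfamiliar with FEA, the sketch below solves about the smallest possible example: a 1D elastic bar discretized into linear elements. This is a standard textbook formulation; none of the values relate to Akselos’ accelerated, reduced-order technique.

```python
# Textbook-style 1D finite element example: an elastic bar fixed at one end
# with a point load at the other. FEA assembles and solves systems like this,
# just at vastly larger scale.
import numpy as np

E, A, L = 210e9, 1e-4, 2.0        # steel bar: Young's modulus, area, length
n_elem = 10
h = L / n_elem
n_nodes = n_elem + 1

# Assemble the global stiffness matrix from identical element matrices.
K = np.zeros((n_nodes, n_nodes))
k_e = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e

# Load vector: 1 kN pulling on the free end.
f = np.zeros(n_nodes)
f[-1] = 1e3

# Apply the fixed boundary condition at node 0 and solve K u = f.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(f"tip displacement: {u[-1] * 1e3:.3f} mm")   # analytic answer: F*L/(E*A)
```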

Smart factories will be very useful in the metaverse: workers will be able to operate factory machines over the Internet.


As the idea of interconnected and intelligent manufacturing is gaining ground, competing in the world of Industry 4.0 can be challenging if you’re not on the very cusp of innovation.

Seeing the growing economic impact of IIoT around the globe, many professionals and investors have been asking themselves whether the industry is on the verge of a technological revolution. But judging from the numbers and predictions, there is concrete evidence that the idea of smart manufacturing has already burst into corporate consciousness. According to IDC, global spending on the Internet of Things in 2020 is projected to top $840 billion if it maintains its 12.6% compound annual growth rate. There is no doubt that a huge part of this expenditure will be devoted to introducing IoT into all types of industry, manufacturing in particular.

But forecasts and statistics are not the only signs that the Industrial Internet of Things is gaining traction across virtually all business sectors. Having already proven decisive in manufacturing, IIoT brings the reliability of machine-to-machine communication, the security of preventive maintenance, and the insight of big data analytics. In other words, the IIoT revolution has already begun.
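To make the preventive-maintenance piece concrete, here is a sketch of the kind of check an IIoT pipeline might run on streaming vibration readings. The thresholds, window size, and rolling-statistics approach are illustrative assumptions, not any specific vendor’s product.

```python
# Illustrative preventive-maintenance check: flag a machine when its vibration
# readings drift well above their recent baseline. Thresholds and window sizes
# are made up for this example.
from collections import deque
from statistics import mean, stdev

class VibrationMonitor:
    def __init__(self, window: int = 50, sigma_limit: float = 3.0):
        self.readings = deque(maxlen=window)
        self.sigma_limit = sigma_limit

    def ingest(self, value_mm_s: float) -> bool:
        """Return True if the new reading looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.readings) >= 10:
            mu, sd = mean(self.readings), stdev(self.readings)
            anomalous = sd > 0 and abs(value_mm_s - mu) > self.sigma_limit * sd
        self.readings.append(value_mm_s)
        return anomalous

monitor = VibrationMonitor()
stream = [2.1, 2.0, 2.2, 2.1, 2.3, 2.0, 2.2, 2.1, 2.2, 2.0, 2.1, 6.5]  # fake data
for t, reading in enumerate(stream):
    if monitor.ingest(reading):
        print(f"t={t}: schedule maintenance, vibration {reading} mm/s out of range")
```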

The first-generation AI systems did not address these needs, which led to a low adoption rate. The second-generation AI systems, by contrast, are focused on a single subject: improving patients’ clinical outcomes. Digital pills combine a personalised second-generation AI system with a branded or generic drug, improving patient response by increasing adherence and overcoming the loss of response to chronic medications. This improves the effectiveness of drugs, thereby reducing healthcare costs and increasing end-user adoption.

There are many examples of partial or complete loss of response to chronic medications. Cancer drug resistance is a major obstacle in the treatment of multiple malignancies; one-third of epileptics develop resistance to anti-epileptic drugs, and a similar percentage of patients with depression develop resistance to anti-depressants. Beyond the loss of response to chronic medications, low adherence is also a common problem for many non-communicable diseases (NCDs). A little less than 50% of severely asthmatic patients adhere to inhaled treatments, while 40% of hypertensive patients show non-adherence.

The second-generation systems are aimed at improving outcomes and reducing side effects. To overcome the hurdle of biases induced by big data, these systems implement an n = 1 concept in a personalised therapeutic regimen. This focus of the algorithm improves the clinically meaningful outcome for an individual subject. The personalised closed-loop system used by the second-generation system is designed to improve the end-organ function and overcome tolerance and loss of effectiveness.
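A minimal sketch of what such a personalised, n = 1 closed loop could look like in code follows. The dosing rule, response measure, and bounds are hypothetical illustrations, not the regimen of any actual digital-pill product.

```python
# Hypothetical n = 1 closed loop: adjust a single patient's regimen based only
# on that patient's measured response, within clinician-set safety bounds.
def adjust_regimen(current_dose_mg: float,
                   measured_response: float,
                   target_response: float,
                   min_dose_mg: float = 10.0,
                   max_dose_mg: float = 80.0,
                   step_mg: float = 5.0) -> float:
    """Nudge the dose toward the target response, one small step at a time."""
    if measured_response < target_response * 0.9:      # losing response
        proposed = current_dose_mg + step_mg
    elif measured_response > target_response * 1.1:    # over-responding
        proposed = current_dose_mg - step_mg
    else:                                               # within the target band
        proposed = current_dose_mg
    return max(min_dose_mg, min(max_dose_mg, proposed))

# Simulated weekly loop for one patient whose response slowly declines.
dose, responses = 40.0, [1.0, 0.95, 0.85, 0.80, 0.88, 0.95]
for week, response in enumerate(responses, start=1):
    dose = adjust_regimen(dose, response, target_response=1.0)
    print(f"week {week}: response={response:.2f} -> next dose {dose} mg")
```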

Artificial intelligence drug design company Iktos and South Korean clinical research biotech Astrogen announced today a collaboration with the goal of discovering small-molecule pre-clinical drug candidates for a specific, undisclosed marker of Parkinson’s disease (PD).

Under the terms of the agreement, whose value was not disclosed, Iktos will apply its generative learning algorithms, which seek to identify new molecular structures with the potential to address the target in PD. Astrogen, which focuses on the development of therapeutics for “intractable neurological diseases,” will provide in-vitro and in-vivo screening of lead compounds and pre-clinical compounds. While both companies will contribute to the identification of new small-molecule candidates, Astrogen will lead the drug development process from the pre-clinical stages.
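As a rough illustration of the kind of downstream filtering that follows generative molecular design, generated candidate structures can be screened for chemical validity and drug-likeness before any wet-lab work. The sketch below uses the open-source RDKit toolkit as a generic stand-in; it is not Iktos’s proprietary platform, and the cutoff is arbitrary.

```python
# Illustrative post-generation filter: keep only candidate SMILES strings that
# parse as valid molecules and look reasonably drug-like by QED score.
# RDKit is used as a generic open-source stand-in, not Iktos's platform.
from rdkit import Chem
from rdkit.Chem import QED

candidate_smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",               # aspirin, as a sanity check
    "CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12",   # chloroquine-like scaffold
    "not_a_molecule",                      # invalid string a generator might emit
]

shortlist = []
for smiles in candidate_smiles:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue                   # discard chemically invalid output
    score = QED.qed(mol)           # drug-likeness estimate, 0..1
    if score > 0.5:                # illustrative cutoff
        shortlist.append((smiles, round(score, 2)))

print(shortlist)
```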

“Our objective is to expedite drug discovery and achieve time and cost efficiencies for our global collaborators by using Iktos’s proprietary AI platform and know-how,” noted Yann Gaston-Mathé, president and CEO of Paris-based Iktos in a press release. “We are confident that together we will be able to identify promising novel chemical matter for the treatment of intractable neurological diseases. Our strategy has always been to tackle challenging problems alongside our collaborators where we can demonstrate value generation for new and on-going drug discovery projects.”

The research study by Spanish clinical neuropsychologist Gabriel G. De la Torre, “Does artificial intelligence dream of non-terrestrial techno-signatures?”, suggests that one of the “potential applications of artificial intelligence is not only to assist in big data analysis but to help to discern possible artificiality or oddities in patterns of either radio signals, megastructures or techno-signatures in general.”
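A sketch of what discerning oddities in patterns of radio signals could look like in practice follows: a generic outlier detector applied to synthetic spectra. The features, data, and model are illustrative assumptions, not the study’s method.

```python
# Illustrative anomaly search: flag synthetic radio spectra that look unlike
# the background population. Generic outlier-detection demo, not the cited
# study's methodology.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# 300 "ordinary" spectra: broadband noise across 64 frequency bins.
background = rng.normal(0.0, 1.0, size=(300, 64))

# One "odd" spectrum: the same noise plus a strong narrowband spike.
odd = rng.normal(0.0, 1.0, size=(1, 64))
odd[0, 20] += 12.0

spectra = np.vstack([background, odd])
detector = IsolationForest(contamination=0.01, random_state=0).fit(spectra)
flags = detector.predict(spectra)          # -1 marks outliers

# The injected spike (index 300) should be among the flagged samples.
print("flagged indices:", np.where(flags == -1)[0])
```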

“Our form of life and intelligence,” observed Silvano P. Colombano of NASA’s Ames Research Center, who was not involved in the study, “may just be a tiny first step in a continuing evolution that may well produce forms of intelligence that are far superior to ours and no longer based on carbon ‘machinery.’”

Artificial intelligence is unlike previous technology innovations in one crucial way: it’s not simply another platform to be deployed, but a fundamental shift in the way data is used. As such, it requires a substantial rethinking as to the way the enterprise collects, processes, and ultimately deploys data to achieve business and operational objectives.

So while it may be tempting to push AI into legacy environments as quickly as possible, a wiser course of action would be to adopt a more careful, thoughtful approach. One thing to keep in mind is that AI is only as good as the data it can access, so shoring up both infrastructure and data management and preparation processes will play a substantial role in the success or failure of future AI-driven initiatives.

According to Open Data Science, the need for vast amounts of high-quality data is paramount for AI to deliver successful outcomes. In order to deliver valuable insights and enable intelligent algorithms to continuously learn, AI must connect with the right data from the start. Not only should organizations develop sources of high-quality data before investing in AI, but they should also reorient their entire cultures so that everyone from data scientists to line-of-business knowledge workers understands the data needs of AI and how results are influenced by the type and quality of data being fed into the system.
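As a sketch of what connecting AI with the right data from the start can mean day to day, a minimal data-quality gate might run before any training job. The checks, column names, and thresholds below are illustrative assumptions, not a prescription from the cited piece.

```python
# Illustrative pre-training data-quality gate: refuse to train if the incoming
# table is missing required columns, has too many nulls, or is suspiciously
# small. Column names and thresholds are invented for this example.
import pandas as pd

REQUIRED_COLUMNS = {"customer_id", "signup_date", "monthly_spend", "churned"}
MAX_NULL_FRACTION = 0.05
MIN_ROWS = 1_000

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems; an empty list means safe to train."""
    problems = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if len(df) < MIN_ROWS:
        problems.append(f"too few rows: {len(df)} < {MIN_ROWS}")
    null_fractions = df.isna().mean()
    bad = null_fractions[null_fractions > MAX_NULL_FRACTION]
    if not bad.empty:
        problems.append(f"too many nulls in: {list(bad.index)}")
    return problems

# Example: a tiny frame with a missing column and a null-heavy field.
df = pd.DataFrame({"customer_id": [1, 2, 3],
                   "monthly_spend": [10.0, None, None],
                   "churned": [0, 1, 0]})
for problem in validate_training_data(df):
    print("blocked:", problem)
```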