
I'm still in the 2029 camp, but if it happens sooner, that would be great. And I still project ASI arriving five years after AGI.


The online forecasting community Metaculus currently predicts the arrival of weak human-like AI in January 2026. That is about 25 years earlier than the community's prediction back in early 2020.

Metaculus is a forecasting platform and community focused on compiling and refining predictions about various events, scientific advances, technological breakthroughs, and other topics. Participants submit their predictions to questions posed on the platform, and these predictions are combined into a consensus forecast.

Metaculus uses a scoring system to incentivize accurate predictions and to track participants’ performance over time. This motivates them to revisit and update their forecasts as new information becomes available. The platform is open to anyone who wants to participate.
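To make the scoring idea concrete, here is a minimal sketch of a proper scoring rule, the Brier score, in Python. The question, forecasts, and outcome are invented for illustration, and this is not Metaculus's actual scoring formula, just the general mechanism that rewards accurate, well-updated forecasts.

```python
# Minimal sketch of a proper scoring rule (Brier score). The forecasts and
# outcome below are invented; this is not Metaculus's actual scoring formula,
# only an illustration of the incentive to forecast accurately.

def brier_score(forecast_prob: float, outcome: int) -> float:
    """Lower is better: 0.0 for a confident correct forecast, 1.0 for a confident wrong one."""
    return (forecast_prob - outcome) ** 2

# Two hypothetical forecasters on a yes/no question that resolved "yes" (1).
forecasts = {"cautious": 0.60, "confident": 0.95}
outcome = 1

for name, prob in forecasts.items():
    print(f"{name}: Brier score = {brier_score(prob, outcome):.3f}")
# The confident (and correct) forecast earns the better, lower score, which is
# why participants are motivated to update as new information arrives.
```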

Shortly after stepping down as CEO of Twitter, Musk is expected to take on the role of CEO once again, this time at a brand-new AI startup.

The world’s richest person has announced a new startup called xAI. Elon Musk has assembled a team of highly experienced professionals in the realm of artificial intelligence (AI) to establish this enterprise. However, Musk’s Twitter statement merely offers limited information about the venture’s objectives, leaving its purpose ambiguous for the time being.

Musk’s recent foray into AI is not his initial involvement in the field. In 2015, he became an investor in OpenAI, a non-profit research laboratory dedicated to AI exploration. However, over time, Musk…



Announcing formation of @xAI to understand reality— Elon Musk (@elonmusk) July 12, 2023

July 17 — Around a decade after virtual assistants like Siri and Alexa burst onto the scene, a new wave of AI helpers with greater autonomy is raising the stakes, powered by the latest version of the technology behind ChatGPT and its rivals.

Experimental systems that run on GPT-4 or similar models are attracting billions of dollars of investment as Silicon Valley competes to capitalize on the advances in AI. The new assistants — often called “agents” or “copilots” — promise to perform more complex personal and work tasks when commanded to by a human, without needing close supervision.

“High level, we want this to become something like your personal AI friend,” said developer Div Garg, whose company MultiOn is beta-testing an AI agent.

Artificial Intelligence (AI) has transformed our world at an astounding pace. It’s like a vast ocean, and we’re just beginning to navigate its depths.

To appreciate its complexity, let’s embark on a journey through the seven distinct stages of AI, from its simplest forms to the mind-boggling prospects of superintelligence and singularity.

Picture playing chess against a computer. Every move it makes, every strategy it deploys, is governed by a predefined set of rules, its algorithm. This is the earliest stage of AI — rule-based systems. They are excellent at tasks with clear-cut rules, like diagnosing mechanical issues or processing tax forms. But their capacity to learn or adapt is nonexistent, and their decisions are only as good as the rules they’ve been given.
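To make the point concrete, here is a minimal sketch of a rule-based system in Python, using a toy mechanical-diagnosis example with invented rules. Real expert systems are far larger, but the principle is the same: fixed if-then rules, and no ability to learn beyond them.

```python
# Minimal sketch of a rule-based system: a toy mechanical-diagnosis example
# with invented rules. The system can only apply the rules it was given;
# it cannot learn new ones from data.

RULES = [
    (lambda s: not s["engine_cranks"] and s["battery_voltage"] < 11.8,
     "Likely dead battery"),
    (lambda s: s["engine_cranks"] and not s["engine_starts"],
     "Check fuel delivery or ignition"),
    (lambda s: s["engine_starts"] and s["warning_light"] == "oil",
     "Low oil pressure: stop driving and inspect"),
]

def diagnose(symptoms: dict) -> str:
    for condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion
    return "No matching rule: escalate to a human mechanic"

print(diagnose({"engine_cranks": False, "battery_voltage": 11.2,
                "engine_starts": False, "warning_light": None}))
```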

Can human-aware artificial intelligence help accelerate science? In this article, the authors incorporate the distribution of human expertise by training unsupervised models on simulated inferences cognitively accessible to experts, and they show that this not only substantially improves the models' predictions of future discoveries but also enables AI to generate high-value alternatives that complement human discoveries.

MIT researchers have developed PIGINet, a new system that aims to efficiently enhance the problem-solving capabilities of household robots, reducing planning time by 50–80 percent.

This is according to a press release by the institution published on Friday.

Under normal conditions, household robots follow predefined recipes for performing tasks, which isn’t always suitable for diverse or changing environments. PIGINet, as described by MIT, is a neural network that takes in “Plans, Images, Goal, and Initial facts,” then predicts the probability that a task plan can be refined to find feasible motion plans.
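MIT's announcement describes PIGINet's inputs and output but not every internal detail. The sketch below is a hypothetical, heavily simplified feasibility scorer in PyTorch that only illustrates the general idea of encoding a candidate plan together with scene features and emitting a probability; it is not the actual PIGINet architecture.

```python
# Hypothetical, heavily simplified sketch of a plan-feasibility predictor:
# encode a candidate task plan together with scene features (stand-ins for
# the "Plans, Images, Goal, and Initial facts" inputs) and output the
# probability that the plan can be refined into a feasible motion plan.
# NOT the actual PIGINet architecture, just an illustration of the idea.
import torch
import torch.nn as nn

class PlanFeasibilityScorer(nn.Module):
    def __init__(self, plan_dim=64, scene_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(plan_dim + scene_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, plan_embedding, scene_embedding):
        x = torch.cat([plan_embedding, scene_embedding], dim=-1)
        return torch.sigmoid(self.net(x))  # probability the plan is refinable

# Score a batch of candidate plans and try the most promising one first;
# ranking plans this way is how such a predictor can cut planning time.
scorer = PlanFeasibilityScorer()
plans = torch.randn(5, 64)    # placeholder plan embeddings
scene = torch.randn(5, 128)   # placeholder image/goal/initial-fact features
probs = scorer(plans, scene).squeeze(-1)
print(probs.argsort(descending=True))  # order in which to attempt the plans
```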

Singapore: A research paper, published in iScience, has described the development of a deep learning model for predicting hip fractures on pelvic radiographs (X-rays), even in the presence of metallic implants.

Yet Yen Yan of Changi General Hospital and colleagues at Duke-NUS Medical School, Singapore, developed the artificial intelligence (AI) algorithm using more than forty thousand pelvic radiographs from a single institution. The model demonstrated high specificity and sensitivity when applied to a test set of emergency department (ED) radiographs.

This study approximates the real-world application of a deep learning fracture detection model by including radiographs with suboptimal image quality, other non-hip fractures, and metallic implants, which were excluded from prior published work. The research team also explored the effect of ethnicity on model performance and the accuracy of the visualization algorithm for fracture localization.
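For readers unfamiliar with the metrics, sensitivity and specificity can be computed directly from a model's predictions on a labelled test set. The toy counts below are invented for illustration and are not the Singapore team's results.

```python
# How sensitivity and specificity are computed from a labelled test set.
# The labels and predictions below are invented for illustration; they are
# not the Changi General Hospital / Duke-NUS results.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true fractures detected
    specificity = tn / (tn + fp)   # fraction of normal radiographs cleared
    return sensitivity, specificity

y_true = [1, 1, 1, 0, 0, 0, 0, 1]   # 1 = hip fracture present
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]   # model outputs on the same cases
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```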

Science and Technology:

Hope they find a medicine to cure aging and make us immortal, able to live forever, still within “our” lifetime.


Insilico Medicine, a clinical stage generative artificial intelligence (AI)-driven drug discovery company, today announced that it combined two rapidly developing technologies, quantum computing and generative AI, to explore lead candidate discovery in drug development and successfully demonstrated the potential advantages of quantum generative adversarial networks in generative chemistry.

The study, published in the Journal of Chemical Information and Modeling, was led by Insilico’s Taiwan and UAE centers, which focus on pioneering and constructing breakthrough methods and engines with rapidly developing technologies, including generative AI and quantum computing, to accelerate drug discovery and development.

The research was supported by University of Toronto Acceleration Consortium director Alán Aspuru-Guzik, Ph.D., and scientists from the Hon Hai (Foxconn) Research Institute.

In today’s column, I am going to identify and explain the momentous pairing of generative AI and data science. These two realms are each monumental in their own right and worthy of rapt attention individually. On top of that, when you connect the dots and bring them together as a working partnership, you can anticipate big changes, especially as the two fields collaboratively reinvent data strategies.

This is entirely tangible and real-world, not merely something abstract or abstruse.


I will first do a quick overview of generative AI. If you are already well-versed in generative AI, feel free to skim this portion.

Foundations Of Generative AI

Generative AI is the latest and hottest form of AI and has caught our collective attention for being seemingly fluent at interactive online dialogue and at producing essays that appear to have been composed by a human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on the text and other content found on the Internet. For my detailed elaboration on how this works, see the link here.
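One rough way to see the pattern-matching framing is that these models are trained to predict the next token given the ones before it. The toy bigram example below, with a made-up corpus, illustrates that principle on a vastly smaller scale than actual generative AI models.

```python
# Toy illustration of next-token pattern matching: count which word tends to
# follow which in a tiny, made-up corpus, then "generate" by choosing the
# most likely continuation. Real generative AI models learn far richer
# patterns with neural networks, but the predict-the-next-token objective
# is the same in spirit.
from collections import Counter, defaultdict

corpus = "the model writes an essay and the model answers a question".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # -> "the model writes an essay"
```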