
Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex.

In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. They expected the results to show that biological neurons are more complex, but they were surprised at just how much more complex they actually are.

In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain's cortex.
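To make that scale concrete, here is a minimal sketch, in PyTorch, of a network in the reported range: seven hidden layers of 128 units each, roughly 900 artificial neurons in total. The input size and plain feed-forward structure are illustrative assumptions; the study's actual model maps a cortical neuron's time-varying synaptic inputs to its output and is considerably more involved.

```python
import torch
import torch.nn as nn

class NeuronSurrogate(nn.Module):
    """A deep network of roughly the size the study reports is needed
    to mimic one cortical neuron (illustrative, not the paper's model)."""
    def __init__(self, n_synapses: int = 1000, hidden: int = 128, depth: int = 7):
        super().__init__()
        layers, width = [], n_synapses
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, 1))  # predicted somatic output
        self.net = nn.Sequential(*layers)

    def forward(self, synaptic_input: torch.Tensor) -> torch.Tensor:
        return self.net(synaptic_input)

model = NeuronSurrogate()
x = torch.rand(1, 1000)   # one snapshot of synaptic activity (made up)
print(model(x).shape)     # torch.Size([1, 1])
```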

Showing how far AI engineering has come, a new aerospike engine burning oxygen and kerosene, capable of 1,100 lb (5,000 N) of thrust, has been successfully hot-fired for 11 seconds. It was designed from front to back using an advanced Large Computational Engineering Model.

Designing and developing advanced aerospace engines is generally a complicated affair taking years of modeling, testing, revision, prototyping, rinsing and repeating. With their ability to discern patterns, carry out complex analysis, create virtual prototypes, and run models thousands of times, engineering AIs are altering the aerospace industry in some surprising ways – provided, of course, they are properly programmed and trained.

Otherwise, it’s garbage in, garbage out, which has been the Golden Rule of computers since they ran on radio valves and electromechanical relays.

One recent study used AI to dream up a universe of potential CRISPR gene editors. Inspired by large language models, like those that gave birth to ChatGPT, the AI model in the study eventually designed a gene-editing system as accurate as existing CRISPR-based tools when tested on cells. Another AI designed circle-shaped proteins that reliably turned stem cells into different blood vessel cell types. Other AI-generated proteins directed protein "junk" into the lysosome, the acid-filled waste-treatment compartment that keeps cells neat and tidy.

Outside of medicine, AI designed mineral-forming proteins that, if integrated into aquatic microbes, could potentially soak up excess carbon and transform it into limestone. While still early, the technology could tackle climate change with a carbon sink that lasts millions of years.

It seems imagination is the only limit to AI-based protein design. But there are still a few cases that AI can’t yet fully handle. Nature has a comprehensive list, but these stand out.

In 2014, a team of Googlers (many of whom were former educators) launched Google Classroom as a “mission control” for teachers. With a central place to bring Google’s collaboration tools together, and a constant feedback loop with schools through the Google for Education Pilot Program, Classroom has evolved from a simple assignment distribution tool to a destination for everything a school needs to deliver real learning impact.

Introduction: The integration of ChatGPT, an advanced AI-powered chatbot, into educational settings has caused mixed reactions among educators. We therefore conducted a systematic review to explore the strengths and weaknesses of using ChatGPT and to discuss the opportunities and threats it presents for teaching and learning.

Methods: Following the PRISMA flowchart guidelines, 51 articles were selected from among 819 studies collected from the Scopus, ERIC, and Google Scholar databases over the period 2022–2023.

Results: The synthesis of data extracted from the 51 included articles revealed 32 topics: 13 strengths, 10 weaknesses, 5 opportunities, and 4 threats of using ChatGPT in teaching and learning. We used Biggs's Presage-Process-Product (3P) model of teaching and learning to categorize these topics under the model's three components.

Basically, ChatGPT, Gemini, and Apple Intelligence can all be great teaching tools that let you teach yourself nearly anything. Even college-level material can be worked through quickly with an AI like GPT-4, because it can carry out advanced reasoning in nearly any subject faster than humans can. One way to think of it: GPT-4 is like having a Neuralink without needing a physical device inside the brain. AI can augment us to become almost god-like, simply by letting us farm out hard mental labor to computers instead of our own brains.


I created a prompt chain that enables you to learn any complex concept from ChatGPT.
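As a rough illustration only, a chain like that might be wired up as follows with the OpenAI Python SDK; the three prompts below are illustrative stand-ins, not the author's actual chain.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, history: list) -> str:
    """Run one step of the chain, keeping earlier turns as context."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o", messages=history
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

concept = "backpropagation"  # swap in any concept you want to learn
steps = [
    f"Explain {concept} in plain language, with no jargon.",
    "Now give one concrete worked example of the idea above.",
    "Finally, quiz me with three questions that test my understanding.",
]

history = [{"role": "system", "content": "You are a patient tutor."}]
for step in steps:
    print(ask(step, history), "\n---")
```

Each step feeds the model's previous answer back in as context, which is what makes it a chain rather than three separate prompts.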

The melting point is one of the most important measurements of material properties, which informs potential applications of materials in various fields. Experimental measurement of the melting point is complex and expensive, but computational methods could help achieve an equally accurate result more quickly and easily.
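For a flavor of what such a calculation can look like, here is a minimal sketch using the ASE library: heat a crystal in molecular dynamics and watch for the jump in atomic mobility that marks melting. It uses the toy EMT potential on copper as a stand-in, since a real carbonitride study would need ab initio or machine-learned interatomic potentials and far more careful methodology.

```python
import numpy as np
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units

# 108-atom copper supercell with a simple analytic (EMT) potential.
atoms = bulk("Cu", "fcc", a=3.61, cubic=True) * (3, 3, 3)
atoms.calc = EMT()
lattice_sites = atoms.get_positions().copy()

# Ramp the temperature. A solid's mean squared displacement stays small
# (thermal vibration only), while a melt diffuses away from the lattice,
# so the MSD jumps sharply near the melting point.
for temperature in range(800, 2001, 200):  # kelvin
    MaxwellBoltzmannDistribution(atoms, temperature_K=temperature)
    dyn = Langevin(atoms, timestep=2 * units.fs,
                   temperature_K=temperature, friction=0.002)
    dyn.run(500)  # short equilibration at each temperature
    msd = np.mean(np.sum((atoms.get_positions() - lattice_sites) ** 2, axis=1))
    print(f"{temperature:5d} K  MSD = {msd:6.2f} A^2")
```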

A research group from Skoltech conducted a study to calculate the melting point of a high-entropy carbonitride: a compound of titanium, zirconium, tantalum, hafnium, and niobium with carbon and nitrogen.

The results, published in the journal Scientific Reports, indicate that high-entropy carbonitrides are promising materials for protective coatings on equipment operating under extreme conditions: high temperature, thermal shock, and chemical corrosion.

Large language models (LLMs), such as OpenAI's renowned conversational platform ChatGPT, have recently become increasingly widespread, with many internet users relying on them to find information quickly and produce texts for various purposes. Yet most of these models perform well only on powerful computers, due to the high computational demands associated with their size and data processing capabilities.

To tackle this challenge, computer scientists have been developing small language models (SLMs), which share a similar architecture but are far smaller. These models could be easier to deploy directly on smartphones, letting users consult ChatGPT-like tools more easily in their day-to-day lives.

Researchers at Beijing University of Posts and Telecommunications (BUPT) recently introduced PhoneLM, a new SLM architecture for smartphones designed to be both efficient and high-performing. The architecture, presented in a paper published on the arXiv preprint server, is tuned for near-optimal runtime efficiency before the model undergoes pre-training on text data.
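As a rough sketch of why SLMs suit this use case, the snippet below loads and queries a small causal language model with the Hugging Face transformers library. The checkpoint name is a placeholder, not a real release; any published SLM weights, PhoneLM's included if available, would slot in the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-small-lm"  # placeholder, not a real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit mobile-class memory
)

prompt = "What is a small language model?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A model small enough for a phone keeps the same generate-and-decode loop; only the parameter count, and therefore the memory and latency, changes.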