
Quarterly reports from the big US tech companies show their push to boost productivity with AI in the face of growing economic worries.

US tech giants such as Alphabet, Microsoft, Amazon, and Meta are increasing their investments in large language models (LLMs), a sign of their commitment to harnessing artificial intelligence (AI) even as they cut costs and jobs.

Since the launch of OpenAI’s ChatGPT chatbot in late 2022, these businesses have raced to scale up their AI models to compete in the market, CNBC reported on Friday.


A team of programmers has outfitted Boston Dynamics’ robot dog, Spot, with OpenAI’s ChatGPT and Google’s Text-to-Speech service.

Santiago Valdarrama, a machine learning engineer, shared the integration in a viral video on Twitter. The robot can now answer questions about its missions in real time, considerably speeding up data queries. “We can now ask the robots about past and future missions and get an answer in real-time,” Valdarrama wrote.
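The core of such a pipeline is turning the robot's structured mission logs into a prompt a language model can answer, then handing the reply to a text-to-speech service. A minimal sketch of the prompt-building step follows; everything here (`MissionEvent`, `build_mission_prompt`, the log contents) is invented for illustration and is not a Boston Dynamics or OpenAI identifier.

```python
# Hypothetical sketch of the log-to-prompt step. The resulting string would
# be sent to a chat model, and the model's reply passed to a text-to-speech
# service so the robot can answer out loud.
from dataclasses import dataclass

@dataclass
class MissionEvent:
    timestamp: str
    location: str
    note: str

def build_mission_prompt(events, question):
    """Flatten structured mission logs into a prompt an LLM can answer."""
    log_lines = "\n".join(
        f"- {e.timestamp} at {e.location}: {e.note}" for e in events
    )
    return (
        "You are the voice of an inspection robot. Answer using only the "
        "mission log below.\n\nMission log:\n"
        f"{log_lines}\n\nQuestion: {question}\nAnswer:"
    )

events = [
    MissionEvent("09:02", "valve station", "pressure gauge read 5.1 bar"),
    MissionEvent("09:15", "pump room", "detected audible anomaly"),
]
prompt = build_mission_prompt(events, "What did you see in the pump room?")
print(prompt)
```

Grounding the model in the actual mission log, rather than letting it answer freely, is what makes the spoken replies useful for real data queries.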

An astrophysicist and a neurosurgeon walked into a room.

It may sound like the start of a horrible joke, but what a group of Italian academics came up with is a truly galaxy brain take: the structures of the observable universe, they claim, are startlingly similar to the neural networks of the human brain.

In a study published in the journal Frontiers in Physics, University of Bologna astronomer Franco Vazza and University of Verona neurosurgeon Alberto Feletti describe unexpected similarities between the cosmic network of galaxies and the complex web of neurons in the human brain. According to the researchers, despite being nearly 27 orders of magnitude apart in scale, the human brain and the cosmic web exhibit similar levels of complexity and self-organization.

Neuroscientists have uncovered how exploratory actions enable animals to learn their spatial environment more efficiently. Their findings could help build better AI agents that can learn faster and require less experience.

Researchers at the Sainsbury Wellcome Centre and Gatsby Computational Neuroscience Unit at UCL found that the instinctive exploratory runs animals carry out are not random: these purposeful actions allow mice to learn a map of the world efficiently. The study, published today, April 28, in Neuron, describes how the neuroscientists tested their hypothesis that the specific exploratory actions animals undertake, such as darting quickly towards objects, are important in helping them learn to navigate their environment.

“There are a lot of theories in psychology about how performing certain actions facilitates learning. In this study, we tested whether simply observing obstacles in an environment was enough to learn about them, or if purposeful, sensory-guided actions help animals build a cognitive map of the world,” said Professor Tiago Branco, Group Leader at the Sainsbury Wellcome Centre and corresponding author on the paper.
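The link to AI agents is that purposeful exploration is already a standard trick in reinforcement learning. One common (and much simpler) analogue of the idea is a count-based novelty bonus, where the agent preferentially moves toward places it has visited least. The sketch below is not the study's model, just a minimal illustration of novelty-driven exploration covering a small gridworld.

```python
# Minimal sketch of count-based exploration: at each step the agent moves to
# the neighboring cell with the highest novelty bonus 1/sqrt(visit count + 1),
# so rarely visited cells are favored over familiar ones.
import math
import random
from collections import defaultdict

def explore(n_steps=200, size=5, seed=0):
    rng = random.Random(seed)
    visits = defaultdict(int)   # (x, y) -> visit count
    pos = (0, 0)
    visits[pos] += 1
    for _ in range(n_steps):
        moves = []
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                bonus = 1.0 / math.sqrt(visits[nxt] + 1)
                moves.append((bonus, rng.random(), nxt))
        pos = max(moves)[2]     # greedy on novelty, random tie-break
        visits[pos] += 1
    return visits

visits = explore()
print(f"cells covered: {len(visits)} / 25")
```

A purely random walk revisits familiar cells far more often, which is loosely the contrast the study draws between random wandering and purposeful exploratory runs.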

Protein-coding sequence differences have failed to fully explain the evolution of multiple mammalian phenotypes. This suggests that these phenotypes have evolved at least in part through changes in gene expression, meaning that their differences across species may be caused by differences in genome sequence at enhancer regions that control gene expression in specific tissues and cell types. Yet the enhancers involved in phenotype evolution are largely unknown. Sequence conservation–based approaches for identifying such enhancers are limited because enhancer activity can be conserved even when the individual nucleotides within the sequence are poorly conserved. This is due to an overwhelming number of cases where nucleotides turn over at a high rate, but a similar combination of transcription factor binding sites and other sequence features can be maintained across millions of years of evolution, allowing the function of the enhancer to be conserved in a particular cell type or tissue. Experimentally measuring the function of orthologous enhancers across dozens of species is currently infeasible, but new machine learning methods make it possible to make reliable sequence-based predictions of enhancer function across species in specific tissues and cell types.
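The key point, that enhancer activity can be conserved while individual nucleotides turn over, can be shown with a toy example. The motif and sequences below are invented for illustration (the motif happens to be an AP-1-like site, but nothing here comes from the study): two "orthologous" sequences disagree at most positions yet carry the same binding-site content.

```python
# Toy illustration of nucleotide turnover with conserved motif content:
# the two sequences have low per-base identity but the same number of
# copies of the (hypothetical) binding-site motif.

def motif_count(seq, motif="TGACTCA"):
    """Count occurrences of the motif in a DNA sequence."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

def identity(a, b):
    """Fraction of aligned positions that are identical."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

species_a = "AAATGACTCACCGGTTTGACTCAGGAA"
species_b = "CCGTGACTCATTAACGGTGACTCATTC"  # many substitutions, same motifs

print(identity(species_a, species_b))                  # low base-level identity
print(motif_count(species_a), motif_count(species_b))  # equal motif content
```

This is why conservation-based scans miss such enhancers, and why sequence-to-activity models, which can learn to respond to motif combinations rather than exact nucleotides, are a natural fit for predicting enhancer function across species.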

The Google employee who claimed last June that the company’s AI model could already be sentient, and who was later fired by the company, is still worried about the dangers of new AI-powered chatbots, even though he hasn’t tested them himself yet.

Blake Lemoine was let go from Google last summer for violating the company’s confidentiality policy after he published transcripts of several conversations he had with LaMDA, the large language model he worked on, which forms the AI backbone of Google’s upcoming search assistant, the chatbot Bard.

Lemoine told the Washington Post at the time that LaMDA resembled “a 7-year-old, 8-year-old kid that happens to know physics” and said he believed the technology was sentient, while urging Google to take care of it as it would a “sweet kid who just wants to help the world be a better place for all of us.”

A new planet outside the solar system has been discovered using artificial intelligence (AI), a notable success for a technology that has been making headlines.

Astronomers used machine learning to identify the planet, a result that also gives the technique a major boost.

Researchers at the University of Georgia said the previously unknown planet outside our solar system was found with the help of the technology.