
Google releases new Bard Gemini model that is on par with GPT-4 in human evaluation

Google’s Bard chatbot is powered by a new Gemini model. Early users rate it as similar to GPT-4.

Google’s head of AI, Jeff Dean, announced the new Gemini model on X. It is a model from the Gemini Pro family with the suffix “scale”.

Thanks to the Gemini updates, Bard is “much better” and has “many more capabilities” compared to the launch in March, according to Dean.

Large language models improve annotation of prokaryotic viral proteins

The latest in the intersection of large language models and life science: virus sequences, virus proteins, and their function.



Ocean viral proteome annotations are expanded by a machine learning approach that is not reliant on sequence homology and can annotate sequences not homologous to those seen in training.

Elon Musk’s Neuralink implants first brain chip in human

The ambition is to supercharge human capabilities, treat neurological disorders like ALS or Parkinson’s, and perhaps one day achieve a symbiotic relationship between humans and artificial intelligence.

“The first human received an implant from Neuralink yesterday and is recovering well,” Musk said in a post on X, formerly Twitter.

“Initial results show promising neuron spike detection,” he added.

As AI Destroys Search Results, Google Fires Workers in Charge of Improving It

Amid a massive wave of tech company layoffs in favor of AI, Google is firing thousands of contractors tasked with making its namesake search engine work better.

As Vice reports, news of the company ending its contract with Appen — a data training firm that employs thousands of poorly paid gig workers in developing countries to maintain, among other things, Google’s search algorithm — coincidentally comes a week after a new study found that the quality of its search engine’s results has indeed gotten much worse in recent years.

Back in late 2022, journalist Cory Doctorow coined the term “enshittification” to refer to the demonstrable worsening of all manner of online tools, which he said was by design as tech giants seek to extract more and more money out of their user bases. Google Search was chief among the writer’s examples of the enshittification effect in a Wired article published last January, and as the new study out of Germany found, that effect can be measured.

The most powerful AI processing supercomputer in the world is set to be built in Germany, and is planned to become operational within a mere year. Crikey

AI processing can take a huge amount of computing power, but by the looks of this latest joint project from the Jülich Supercomputing Center and French computing provider Eviden, power will not be in short supply.


“But can it run Crysis?” is an old gag, but I’m still going to see if I can get away with it.

New “Brainoware” hybrid computing system signals advancement of AI computing

Feng Guo, an associate professor of intelligent systems engineering at the Indiana University Luddy School of Informatics, Computing and Engineering, is addressing the technical limitations of artificial intelligence computing hardware by developing a new hybrid computing system—which has been…


A team of IU bioengineers is working at the intersection of brain organoids and artificial intelligence, which could potentially transform the performance and efficiency of advanced AI techniques.

Here Come the Cyborgs: Mating AI with Human Brain Cells

If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:

“‘Biocomputer’ combines lab-grown brain tissue with electronic hardware”

“A system that integrates brain cells into a hybrid machine can recognize voices”

China’s first natively built supercomputer goes online — the Central Intelligent Computing Center is liquid-cooled and built for AI

China Telecom claims it has built the country’s first supercomputer constructed entirely with Chinese-made components and technology (via ITHome). Based in Wuhan, the Central Intelligent Computing Center supercomputer is reportedly built for AI and can train large language models (LLMs) with trillions of parameters. Although China has built supercomputers with domestic hardware and software before, going entirely domestic is a new milestone for the country’s tech industry.

Exact details on the Central Intelligent Computing Center are scarce. What’s clear so far: The supercomputer is purportedly made with only Chinese parts; it can train AI models with trillions of parameters; and it uses liquid cooling. It’s unclear exactly how much performance the supercomputer has. A five-exaflop figure is mentioned in ITHome’s report, but to our eyes it seems that the publication was talking about the total computational power of China Telecom’s supercomputers, and not just this one.

Scientists use artificial intelligence to achieve the seemingly impossible with hurricane simulations: ‘It performs very well’

“It performs very well. Depending on where you’re looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one,” Pintar said.

However, the system isn’t without flaws. The data it is fed does not account for the potential effects of rising temperatures, and the simulated storms produced for areas with less data were not as plausible.

“Hurricanes are not as frequent in, say, Boston as in Miami, for example. The less data you have, the larger the uncertainty of your predictions,” NIST Fellow Emil Simiu said.
