
An experimental computing system physically modeled after the biological brain has “learned” to identify handwritten numbers with an overall accuracy of 93.4%. The key innovation in the experiment was a new training algorithm that gave the system continuous information about its success at the task in real time while it learned. The study was published in Nature Communications.

The algorithm outperformed a conventional machine-learning approach in which training was performed only after a batch of data had been processed; that conventional approach achieved 91.4% accuracy. The researchers also showed that memory of past inputs stored in the system itself enhanced learning. In contrast, other computing approaches store memory within software or hardware separate from a device’s processor.
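
The distinction between the two training regimes is straightforward to illustrate in software. The sketch below is a minimal, purely illustrative comparison in plain Python/NumPy, not the study’s hardware algorithm: an “online” rule applies a correction after every example, loosely analogous to the continuous real-time feedback described above, while a “batch” rule applies one aggregate correction only after the full dataset has been processed. The synthetic data, learning rate, and epoch count are assumptions made up for the example.

```python
# Illustrative contrast between per-sample ("online") and full-batch training.
# Not the study's hardware algorithm; all data and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 1,000 samples, 64 features each.
X = rng.normal(size=(1000, 64))
true_w = rng.normal(size=64)
y = (X @ true_w > 0).astype(float)

def train_online(X, y, lr=0.05, epochs=5):
    """Update weights after every sample, mimicking continuous real-time feedback."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 / (1.0 + np.exp(-xi @ w))  # logistic prediction
            w += lr * (yi - pred) * xi            # immediate correction
    return w

def train_batch(X, y, lr=0.05, epochs=5):
    """Update weights only after the whole batch has been processed."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - preds) / len(y)      # one aggregate correction per pass
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(float) == y)

print("online :", accuracy(train_online(X, y), X, y))
print("batch  :", accuracy(train_batch(X, y), X, y))
```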

For 15 years, researchers at the California NanoSystems Institute at UCLA, or CNSI, have been developing a new platform technology for computation. The technology is a brain-inspired system composed of a tangled-up network of wires containing silver, laid on a bed of electrodes. The system receives input and produces output via pulses of electricity. The individual wires are so small that their diameter is measured on the nanoscale, in billionths of a meter.

At Oak Ridge National Laboratory (ORNL), quantum biology, artificial intelligence, and bioengineering have collided to redefine the landscape of CRISPR-Cas9 genome-editing tools. This multidisciplinary approach, detailed in the journal Nucleic Acids Research, promises to elevate the precision and efficiency of genetic modifications in organisms, particularly microbes, paving the way for enhanced production of renewable fuels and chemicals.

CRISPR is adept at modifying genetic code, whether to enhance an organism’s performance or to correct mutations. CRISPR-Cas9 requires a guide RNA (gRNA) to direct the enzyme to its target site before those modifications can be made. However, existing computational models for predicting effective guide RNAs in CRISPR tools have shown limited efficiency when applied to microbes. ORNL’s Synthetic Biology group, led by Carrie Eckert, observed these disparities and set out to bridge the gap.

“A lot of the CRISPR tools have been developed for mammalian cells, fruit flies, or other model species. Few have been geared towards microbes where the chromosomal structures and sizes are very different,” explained Eckert.
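
To make the role of those prediction models concrete, here is a deliberately toy sketch of what guide-RNA scoring looks like in code. It is not ORNL’s model: real tools are trained on experimental editing data, whereas the features and thresholds below (GC content and a penalty for long single-base runs) are invented purely for illustration.

```python
# Hypothetical guide-RNA scoring sketch. Real predictors learn from experimental
# data; the heuristics and thresholds here are invented purely to show the idea.
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a candidate 20-nt guide."""
    return sum(base in "GC" for base in seq) / len(seq)

def longest_homopolymer(seq: str) -> int:
    """Length of the longest run of a single repeated base."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def score_guide(seq: str) -> float:
    """Toy efficiency score: favor moderate GC content, penalize long base runs."""
    gc = gc_content(seq)
    score = 1.0 - abs(gc - 0.5) * 2            # peak score at 50% GC
    if longest_homopolymer(seq) >= 4:           # e.g. a poly-T run is undesirable
        score -= 0.5
    return max(score, 0.0)

# Example usage with two made-up 20-nt candidate guides.
for guide in ["GACGTTAGCCTAGGATCCAA", "TTTTGGGGAAAACCCCTTTT"]:
    print(guide, round(score_guide(guide), 2))
```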

On Monday, Nvidia announced the HGX H200 Tensor Core GPU, which utilizes the Hopper architecture to accelerate AI applications. It is a follow-up to the H100 GPU, released last year and previously Nvidia’s most powerful AI GPU. If widely deployed, it could lead to far more powerful AI models (and faster response times for existing ones like ChatGPT) in the near future.

According to experts, lack of computing power (often called “compute”) has been a major bottleneck of AI progress this past year, hindering deployments of existing AI models and slowing the development of new ones. Shortages of powerful GPUs that accelerate AI models are largely to blame. One way to alleviate the compute bottleneck is to make more chips, but you can also make AI chips more powerful. That second approach may make the H200 an attractive product for cloud providers.

The “bounties” feature has mostly been used to recreate women (big surprise).

Civitai, an online marketplace for sharing AI models, just introduced a feature called “bounties” that encourages its community to create passable deepfakes of real people on request, as originally reported by 404 Media. The best submission to a bounty earns some fake money.

Risk is certainly an area of concern for CFOs when it comes to implementing generative AI.

However, Andrew McAfee, a principal research scientist at MIT, has a message for CFOs regarding the technology: “Risk tolerance needs to shift.”

“The risks are real, but they are manageable,” McAfee told a group of CFOs.