
First robotic liver transplant in U.S. performed by Washington University surgeons

A surgical team from Washington University School of Medicine in St. Louis recently performed the first robotic liver transplant in the U.S. The successful transplant, accomplished in May at Barnes-Jewish Hospital, extends to liver transplants the advantages of minimally invasive robotic surgery: a smaller incision resulting in less pain and faster recoveries, plus the precision needed to perform one of the most challenging abdominal procedures.

The patient, a man in his 60s who needed a transplant because of liver cancer and cirrhosis caused by the hepatitis C virus, is doing well and has resumed normal daily activities. Typically, liver transplant recipients require at least six weeks before they can walk without any discomfort. This patient was walking easily six weeks after surgery and was cleared to resume golfing and swimming seven weeks after the operation.


Groundbreaking surgery performed at Barnes-Jewish Hospital in St. Louis.

Multimedia platform releases ethics statement on generative AI

A blog post by Adobe’s chief trust officer, Dana Rao, highlighted the importance of ethics in developing new AI tools. Generative AI has come much more into mainstream awareness in recent months with the launch of ChatGPT and DALL-E, systems that can understand written and verbal requests and generate convincingly human-like text and images.

Artists have complained that training generative AIs on their work is tantamount to ‘ripping off’ their styles, and that the systems can produce discriminatory or explicit content from harmless inputs. Others have called into question how easily humans can pass off AI-generated prose as their own work.

Read more: Schools ban ChatGPT.

Brain Asymmetry Driven by Task Complexity

A mathematical model shows how increased intricacy of cognitive tasks can break the mirror symmetry of the brain’s neural network.

The neural networks of animal brains are partly mirror symmetric, with asymmetries thought to be more common in more cognitively advanced species. This assumption stems from a long-standing theory that the increasing complexity of cognitive tasks can turn mirror-symmetric neural circuits into circuits confined to one side of the brain. This hypothesis has now received support from a mathematical model developed by Luís Seoane at the National Center for Biotechnology in Spain [1]. The researcher’s findings could help explain how the brain’s architecture is shaped not only by cognitively demanding tasks but also by damage or aging.

A mirror-symmetric neural network is useful when controlling body parts that are themselves mirror symmetric, such as arms and legs. Moreover, the presence of duplicate circuits on each side of the brain can help increase computing accuracy and offer a replacement circuit if one becomes faulty. However, the redundancy created by such duplication can lead to increased energy consumption. This trade-off raises an important question: Does the optimal degree of mirror symmetry depend on the complexity of the cognitive tasks performed by the neural network?
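
To make this trade-off concrete, here is a toy Monte Carlo sketch in Python (an illustration only, not the model in Seoane’s paper [1]): it estimates how much duplicating a circuit across hemispheres improves the chance that at least one undamaged copy computes correctly, while the energy cost simply scales with the number of copies. The failure and error rates below are invented parameters.

# Toy sketch (illustrative only; not the model in [1]): reliability gained by
# duplicating a neural circuit across hemispheres vs. the energy cost of the
# extra copy. The failure_rate and error_rate values are invented.
import random

def reliability(n_copies, failure_rate, error_rate, n_trials=100_000, seed=0):
    """Probability that at least one undamaged copy computes the correct output."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        if any(rng.random() >= failure_rate and rng.random() >= error_rate
               for _ in range(n_copies)):
            successes += 1
    return successes / n_trials

for copies in (1, 2):  # 1 = circuit on one side only, 2 = mirror-symmetric pair
    r = reliability(copies, failure_rate=0.10, error_rate=0.05)
    print(f"copies={copies}: reliability ~ {r:.3f}, relative energy cost = {copies}")

In this caricature, the mirrored pair buys extra accuracy and fault tolerance at double the energy cost, which is precisely the tension the mathematical model explores.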

Meta releases big, new open-source AI large language model

Meta, better known to most of us as Facebook, has released a commercial version of Llama 2, its open-source large language model (LLM) that uses artificial intelligence (AI) to generate text and code.

The first version of the Large Language Model Meta AI (Llama) was publicly announced in February and was restricted to approved researchers and organizations. However, it was soon leaked online in early March for anyone to download and use.

I dislike Meta because of how Meta has treated me personally, not because Zuck bought something. However, that dislike will not affect my objectivity in posting what Meta is doing, as I support the open-source movement.


After Llama was open-sourced by accident (or was it?) back in March, Meta has now officially opened up Llama 2, its newest large language model.

Meta is developing a new, more powerful AI system — WSJ

Meta Platforms is working on a new artificial-intelligence system intended to be as powerful as the most advanced model offered by OpenAI, the Wall Street Journal reported on Sunday, citing people familiar with the matter.

The Facebook parent is aiming for its new AI model to be ready next year, the Journal said, adding that it will be several times more powerful than its current commercial version, dubbed Llama 2.

Llama 2 is Meta’s open-source AI language model, launched in July and distributed through Microsoft’s Azure cloud services to compete with OpenAI’s ChatGPT and Google’s Bard.
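
For readers who want to poke at the model itself, below is a minimal sketch of loading Llama 2 through the Hugging Face transformers library; it assumes you have requested and been granted access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint and have the transformers and accelerate packages installed. It is not an official Meta or Azure example.

# Minimal sketch: run Llama 2 locally via Hugging Face transformers.
# Assumes access to the gated meta-llama/Llama-2-7b-chat-hf repo has been
# granted and that transformers + accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated checkpoint; requires approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize what an open-source large language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same weights are also available through Microsoft’s Azure model catalog, per the distribution deal mentioned above, for those who would rather not run them locally.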

China’s 1.5 Exaflops Supercomputer Chases Gordon Bell Prize — Again

The Association for Computing Machinery has just put out the finalists for the Gordon Bell Prize, which will be awarded at the SC23 supercomputing conference in Denver, and as you might expect, some of the biggest iron assembled in the world is driving the advanced applications that have their eyes on the prize.

The ACM warns that the final system sizes and the final results of the simulations and models are not yet complete, but we can take a look at one of them because researchers at China’s National Supercomputing Center in Wuxi have already published the paper they will formally release in November ahead of the SC23 conference. That paper, Towards Exascale Computation for Turbomachinery Flows, describes work run on the “Oceanlite” supercomputing system. We first wrote about Oceanlite way back in February 2021; it won a Gordon Bell Prize in November 2021 for a quantum simulation across 41.9 million cores, and we speculated about its configuration back in March 2022, when Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and the Beijing Academy of Artificial Intelligence ran BaGuaLu, a pretrained machine learning model with 14.5 trillion parameters, across more than 37 million cores on the Oceanlite machine.

NASA tossed down a grand challenge nearly a decade ago to do a time-dependent simulation of a complete jet engine, with aerodynamics and heat transfer simulated, and the Wuxi team, with the help of engineering researchers at a number of universities in China, the United States, and the United Kingdom, has picked up the gauntlet. What we found interesting about the paper is that it confirmed many of our speculations about the Oceanlite machine.

AI-powered robot can reproduce artists’ paintings at scale

The machine generates nearly identical works of art with small discrepancies that make them unique.

Robots or automated systems that are built and programmed to generate different types of artistic creations are referred to as art robots. These robots, which come in a variety of shapes and have different capabilities, create artwork using a combination of hardware and software.

Among these machines are certain art robots engineered expressly to produce visual art, including drawings and paintings. These robots can use ink or paint to create an image on a canvas, applying the materials with tools such as pens and paintbrushes.

Swiss startup puts a new spin on the security robot market

Developed by a spinoff from ETH Zurich, the Ascento Guard is the newest kid on the block for autonomous security robots. It also happens to be very cute.

A Swiss startup called Ascento has recently unveiled its novel and adorable new security robot, the Ascento Guard. The autonomous outdoor security robot’s standout features are its wheeled “legs” and cartoon-esque, almost anthropomorphic “face.”


ETH Zurich / YouTube.

Cute autonomous security.
