The actor told an audience in London that AI was a “burning issue” for actors.

Brain Asymmetry Driven by Task Complexity
A mathematical model shows how increased intricacy of cognitive tasks can break the mirror symmetry of the brain’s neural network.
The neural networks of animal brains are partly mirror symmetric, with asymmetries thought to be more common in more cognitively advanced species. This assumption stems from a long-standing theory that increased complexity of neural tasks can turn mirror-symmetric neural circuits into circuits that exist on only one side of the brain. This hypothesis has now received support from a mathematical model developed by Luís Seoane at the National Center for Biotechnology in Spain [1]. The researcher’s findings could help explain how the brain’s architecture is shaped not only by cognitively demanding tasks but also by damage or aging.
A mirror-symmetric neural network is useful when controlling body parts that are themselves mirror symmetric, such as arms and legs. Moreover, the presence of duplicate circuits on each side of the brain can help increase computing accuracy and offer a replacement circuit if one becomes faulty. However, the redundancy created by such duplication can lead to increased energy consumption. This trade-off raises an important question: Does the optimal degree of mirror symmetry depend on the complexity of the cognitive tasks performed by the neural network?
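Seoane’s paper formalizes this with a network model; the details are in [1], but a toy cost function makes the intuition easy to play with. In the sketch below (an illustration of the trade-off only, not Seoane’s actual equations), a network keeps a fraction s of its circuits as mirrored duplicates: duplication buys reliability with diminishing returns and costs energy, while harder tasks reward spending that circuitry on one-sided specialization instead.

```python
import numpy as np

# Toy model of the symmetry trade-off (illustrative only, not Seoane's
# actual equations). `s` is the fraction of circuits kept as mirrored
# duplicates across hemispheres.
def cost(s, complexity, energy_weight=0.5):
    reliability_gain = np.log1p(s)                 # redundancy helps, but saturates
    specialization_gain = complexity * (1.0 - s)   # one-sided circuits add task capacity
    energy_cost = energy_weight * s                # duplicate circuits burn energy
    return energy_cost - reliability_gain - specialization_gain

s_grid = np.linspace(0.0, 1.0, 1001)
for complexity in (0.0, 0.2, 0.4):
    best = s_grid[np.argmin(cost(s_grid, complexity))]
    print(f"task complexity {complexity:.1f} -> optimal mirrored fraction {best:.2f}")
```

Even in this crude setup, the optimal mirrored fraction falls as task complexity rises (from fully mirrored at complexity 0.0 to about 0.1 at 0.4), which is the qualitative behavior the paper’s model supports.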

Meta releases big, new open-source AI large language model
Meta, better known to most of us as Facebook, has released a commercial version of Llama 2, its open-source large language model (LLM) that uses artificial intelligence (AI) to generate text and code.
The first version of the Large Language Model Meta AI (Llama) was publicly announced in February and was restricted to approved researchers and organizations. However, it was leaked online in early March for anyone to download and use.
- I dislike Meta because of how I have personally been treated by Meta, not because Zuck bought something. That dislike will not color my objectivity in posting what Meta is doing, however, as I support the open-source movement.
Open-sourced by accident back in March (or was it?), the model has now been officially released: Meta has opened up Llama 2, its newest large language model.
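For those who want to poke at it: the weights are gated behind Meta’s license click-through, but once access is granted, the model runs like any other causal LM. Here is a minimal sketch using the Hugging Face transformers library, assuming access to the gated "meta-llama/Llama-2-7b-hf" checkpoint and a machine with transformers, torch, and accelerate installed.

```python
# Minimal sketch of running the released Llama 2 weights via Hugging Face
# transformers. Assumes Meta has approved access to the gated
# "meta-llama/Llama-2-7b-hf" repo and that transformers, torch, and
# accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Open-source language models matter because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```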

Meta is developing a new, more powerful AI system, WSJ reports
Meta Platforms is working on a new artificial-intelligence system intended to be as powerful as the most advanced model offered by OpenAI, the Wall Street Journal reported on Sunday, citing people familiar with the matter.
The Facebook parent is aiming for its new AI model to be ready next year, the Journal said, adding that it will be several times more powerful than Meta’s current commercial model, Llama 2.
Llama 2 is Meta’s open-source AI language model launched in July and distributed through Microsoft’s Azure cloud service to compete with OpenAI’s ChatGPT and Google’s Bard.


China’s 1.5 Exaflops Supercomputer Chases Gordon Bell Prize — Again
The Association for Computing Machinery has just announced the finalists for the Gordon Bell Prize, which will be given out at the SC23 supercomputing conference in Denver, and as you might expect, some of the biggest iron assembled in the world is driving the advanced applications that have their eyes on the prize.
The ACM cautions that the final system sizes and the final results of the simulations and models are not yet complete, but we can take a look at one of them because researchers at China’s National Supercomputing Center in Wuxi have published a paper they will formally release in November ahead of the SC23 conference. That paper, Towards Exascale Computation for Turbomachinery Flows, describes work run on the “Oceanlite” supercomputing system. We first wrote about Oceanlite way back in February 2021; it won a Gordon Bell prize in November 2021 for a quantum simulation across 41.9 million cores, and we speculated about its configuration back in March 2022, when Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and the Beijing Academy of Artificial Intelligence ran a pretrained machine learning model called BaGuaLu, with more than 14.5 trillion parameters, across more than 37 million of the machine’s cores.
NASA tossed down a grand challenge nearly a decade ago to do a time-dependent simulation of a complete jet engine, with aerodynamics and heat transfer simulated, and the Wuxi team, with the help of engineering researchers at a number of universities in China, the United States, and the United Kingdom, has picked up the gauntlet. What we found interesting about the paper is that it confirmed many of our speculations about the Oceanlite machine.

AI-powered robot can reproduce artists’ paintings at scale
The machine generates nearly identical works of art with small discrepancies that make them unique.
Robots or automated systems that are built and programmed to generate different types of artistic creations are referred to as art robots. These robots, which come in a variety of shapes and have different capabilities, create artwork using a combination of hardware and software.
Among these machines are certain art robots engineered expressly to produce visual art, including drawings and paintings. These robots can use ink or paint to create an image on a canvas, applying the media with tools such as pens and paintbrushes.
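The article doesn’t reveal how the robot introduces its “small discrepancies,” but the idea is easy to picture: replay the same stroke plan with a little random jitter, and every copy comes out nearly identical yet unique. A hypothetical sketch (the stroke format and noise model here are invented for illustration):

```python
import random

# Hypothetical illustration, not the robot's actual software: replay a
# master stroke plan with tiny Gaussian jitter so each reproduction is
# nearly identical but never exactly the same.
def jitter_stroke(stroke, sigma_mm=0.5):
    """Offset each (x, y) waypoint of a stroke by a little noise, in mm."""
    return [(x + random.gauss(0, sigma_mm), y + random.gauss(0, sigma_mm))
            for x, y in stroke]

master_stroke = [(0.0, 0.0), (10.0, 5.0), (20.0, 3.0)]  # the "original" path
for copy_number in range(3):
    unique_copy = [(round(x, 2), round(y, 2)) for x, y in jitter_stroke(master_stroke)]
    print(f"copy {copy_number}: {unique_copy}")
```
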
Swiss startup puts a new spin on the security robot market
Developed by a spinoff from ETH Zurich, the Ascento Guard is the newest kid on the block for autonomous security robots. It also happens to be very cute.
A Swiss startup called Ascento has recently unveiled its adorable new security robot, the Ascento Guard. The autonomous outdoor security robot’s standout features are its wheeled “legs” and cartoon-esque, almost anthropomorphic “face.”
AI model speeds up high-resolution computer vision
An autonomous vehicle must rapidly and accurately recognize objects that it encounters, from an idling delivery truck parked at the corner to a cyclist whizzing toward an approaching intersection.
To do this, the vehicle might use a powerful computer vision model to categorize every pixel in a high-resolution image of this scene, so it doesn’t lose sight of objects that might be obscured in a lower-quality image. But this task, known as semantic segmentation, is complex and requires a huge amount of computation when the image has high resolution.
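Some rough arithmetic shows why resolution bites twice here: per-pixel labeling work grows linearly with the pixel count, while the softmax self-attention found in many vision transformers grows with the square of the number of image tokens. (Illustrative numbers only; the 16x16 patch size below is an assumption, not a figure from the MIT paper.)

```python
# Back-of-the-envelope cost of high-resolution semantic segmentation.
# Illustrative only: the 16x16 patch size is an assumption, not a figure
# from the MIT paper.
def pixels(width, height):
    return width * height  # one class label per pixel

def attention_pairs(width, height, patch=16):
    tokens = (width // patch) * (height // patch)  # one token per patch
    return tokens * tokens  # softmax attention compares every token pair

for w, h in [(512, 512), (1024, 1024), (2048, 2048)]:
    print(f"{w}x{h}: {pixels(w, h):>9,} pixels to label, "
          f"{attention_pairs(w, h):>12,} attention pairs")
```

Quadrupling the pixel count multiplies the attention pairs by sixteen, and that blowup is exactly what an efficient model has to sidestep.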
Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a more efficient computer vision model that vastly reduces the computational complexity of this task. Their model can perform semantic segmentation accurately in real time on a device with limited hardware resources, such as the on-board computers that enable an autonomous vehicle to make split-second decisions.
