
AI is creating an alternative reality where even Tom Hanks can’t be trusted

Like Tom Hanks, victims of AI-generated trickery will need help getting out the message that they are not the ones who generated the content that’s being ascribed to them. The value of human expertise and decision-making in crisis management will become even more evident.

The AI revolution will have permanent repercussions beyond anything we can imagine today. Organizations exploring and implementing AI will soon face unprecedented levels of risk.

Deepfake AI is now being used to create voice clones, convincing images, and video hoaxes, all of which can be used to destroy an individual’s reputation or livelihood.

AI and Emerging Tech Challenges Call for Collaborative Solutions

Artificial intelligence (AI) and emerging technologies have ushered in a new era, bringing unprecedented opportunities and challenges. In today’s rapidly evolving digital landscape, addressing these multifaceted challenges necessitates a collaborative effort spanning various sectors and calls for policy reforms while emphasizing global cooperation.

The rapid advancement of technologies, particularly artificial intelligence, has introduced transformative possibilities alongside a range of concerns. While AI holds the potential to revolutionize industries and enhance our daily lives, it also raises pressing issues related to data privacy, misinformation, and cybersecurity.

Experts have proposed adopting the “information environment” framework to address these multifaceted challenges, a framework comprising three essential components.

Google’s RT-2-X Generalist AI Robots: 500 Skills, 150,000 Tasks, 1,000,000+ Workflows

Google DeepMind and academic partners have unveiled an AI that trains robots for generalized tasks using the “Open X-Embodiment” dataset. ConceptGraphs, on the other hand, offers a new 3D scene representation, improving robot perception and planning by combining vision and language.

Microsoft spent hundreds of millions of dollars on a ChatGPT supercomputer

To build the supercomputer that powers OpenAI’s projects, Microsoft says it linked together thousands of Nvidia graphics processing units (GPUs) on its Azure cloud computing platform. In turn, this allowed OpenAI to train increasingly powerful models and “unlocked the AI capabilities” of tools like ChatGPT and Bing.

Scott Guthrie, Microsoft’s vice president of AI and cloud, said the company spent several hundred million dollars on the project, according to a statement given to Bloomberg. And while that may seem like a drop in the bucket for Microsoft, which recently extended its multiyear, multibillion-dollar investment in OpenAI, it demonstrates that the company is willing to throw even more money at the AI space.
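The article doesn’t detail the training setup, but the usual reason linking thousands of GPUs matters is data parallelism: each device computes gradients on its own shard of a batch, and the gradients are averaged (an all-reduce) before a shared weight update. A minimal single-process sketch, where the “workers” and the one-parameter toy model are illustrative stand-ins, not Azure specifics:

```python
# Toy data-parallel training step: several "workers" (stand-ins for GPUs)
# each compute a gradient on their own data shard; the gradients are
# averaged (an all-reduce) before one shared weight update.

def gradient(w, shard):
    # d/dw of mean squared error for the toy model y = w * x
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    grads = [gradient(w, s) for s in shards]  # run in parallel on real hardware
    avg = sum(grads) / len(grads)             # all-reduce: average across workers
    return w - lr * avg                       # identical update on every worker

# Synthetic data for y = 3x, split across 4 workers.
data = [(x, 3 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges toward 3.0
```

Averaging gradients makes the update mathematically equivalent to training on the full batch on one device, which is why adding GPUs scales batch size rather than changing the model.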

Researchers create a neural network for genomics that explains how it achieves accurate predictions

A team of New York University computer scientists has created a neural network that can explain how it reaches its predictions. The work reveals what accounts for the functionality of neural networks—the engines that drive artificial intelligence and machine learning—thereby illuminating a process that has largely been concealed from users.

The breakthrough centers on a specific use of neural networks that has become popular in recent years—tackling challenging biological questions. Among these are examinations of the intricacies of RNA splicing—the focal point of the study—which plays a role in transferring information from DNA to functional RNA and protein products.

“Many neural networks are black boxes—these algorithms cannot explain how they work, raising concerns about their trustworthiness and stifling progress into understanding the underlying biological processes of genome encoding,” says Oded Regev, a computer science professor at NYU’s Courant Institute of Mathematical Sciences and the senior author of the paper, which was published in the Proceedings of the National Academy of Sciences.
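The NYU architecture itself is specific to RNA splicing and isn’t described here, but the black-box-versus-interpretable contrast the quote draws can be illustrated with the simplest self-explaining predictor: a linear scorer whose output decomposes exactly into per-feature contributions, so each input’s effect can be read off directly. The feature names and weights below are invented for illustration:

```python
# A linear model is the simplest "self-explaining" predictor: its score is a
# sum of per-feature contributions, so the explanation is exact, not post hoc.

weights = {"motif_A": 1.8, "motif_B": -0.9, "gc_content": 0.4}  # illustrative values

def predict_with_explanation(features):
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = predict_with_explanation({"motif_A": 1.0, "motif_B": 2.0, "gc_content": 0.5})
# Contributions sum exactly to the score: 1.8 - 1.8 + 0.2 = 0.2
print(round(score, 2), {k: round(v, 2) for k, v in why.items()})
```

Interpretable-by-design networks generalize this idea: they constrain parts of the model so learned quantities map onto meaningful features, instead of explaining a black box after the fact.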

The emergent industrial metaverse

Annika Hauptvogel, head of technology and innovation management at Siemens, describes the industrial metaverse as “immersive, making users feel as if they’re in a real environment; collaborative in real time; open enough for different applications to seamlessly interact; and trusted by the individuals and businesses that participate”—far more than simply a digital world.

The industrial metaverse will revolutionize the way work is done, but it will also unlock significant new value for business and societies. By allowing businesses to model, prototype, and test dozens, hundreds, or millions of design iterations in real time and in an immersive, physics-based environment before committing physical and human resources to a project, industrial metaverse tools will usher in a new era of solving real-world problems digitally.

“The real world is very messy, noisy, and sometimes hard to really understand,” says Danny Lange, senior vice president of artificial intelligence at Unity Technologies, a leading platform for creating and growing real-time 3D content. “The idea of the industrial metaverse is to create a cleaner connection between the real world and the virtual world, because the virtual world is so much easier and cheaper to work with.”
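A toy analogue of testing many design iterations digitally before committing physical resources: sweep a design parameter through a cheap simulation and keep the best candidate. The projectile model here is a deliberately simple stand-in; real industrial-metaverse simulations are far richer, physics-based environments:

```python
import math

# Toy "digital prototyping": evaluate many design candidates in a cheap
# physics model (projectile range vs. launch angle) before building anything.

def simulated_range(angle_deg, speed=30.0, g=9.81):
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / g  # ideal flat-ground range

candidates = range(1, 90)                    # 89 design iterations, all virtual
best = max(candidates, key=simulated_range)
print(best)  # theory says 45 degrees maximizes range
```

The point of the sketch is the workflow, not the physics: because each virtual trial is cheap, the search can cover the whole design space before a single physical prototype is built.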

New technique based on 18th-century mathematics shows simpler AI models don’t need deep learning

Researchers from the University of Jyväskylä were able to simplify the most popular technique of artificial intelligence, deep learning, using 18th-century mathematics. They also found that classical training algorithms that date back 50 years work better than the more recently popular techniques. Their simpler approach advances green IT and is easier to use and understand.

The recent success of artificial intelligence is significantly based on the use of one core technique: deep learning. Deep learning refers to techniques where networks with a large number of data processing layers are trained using massive datasets and a substantial amount of computational resources.

Deep learning enables computers to perform tasks such as analyzing and generating images and music, playing digitized games and, most recently in connection with ChatGPT and other generative AI techniques, acting as a conversational agent that provides high-quality summaries of existing knowledge.
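The article doesn’t specify which 18th-century mathematics the researchers used, but a canonical example of centuries-old model fitting is ordinary least squares (Legendre and Gauss, around the turn of the 19th century): for a linear model it yields a closed-form solution with no iterative, deep-learning-style training at all. A minimal sketch for one feature plus an intercept, on synthetic data:

```python
# Closed-form ordinary least squares for y = a*x + b: a centuries-old
# alternative to iterative gradient-based training for simple models.

def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope, intercept

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # exactly y = 2x + 1
a, b = least_squares(xs, ys)
print(a, b)  # 2.0 1.0
```

The contrast with deep learning is the point: when a simple model suffices, the optimal parameters can be computed in one step, which is cheaper to run and trivial to understand.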
