
Artificial intelligence (AI) systems have long drawn inspiration from the intricacies of the human brain. Now, a groundbreaking branch of research led by Columbia University in New York seeks to unravel the workings of living brains and enhance their function by leveraging advancements in AI.

Designated by the National Science Foundation as the headquarters of one of seven new national AI research institutes, Columbia University received a $20 million grant to establish the AI Institute for Artificial and Natural Intelligence (ARNI). ARNI is a consortium of educational institutions and research groups, with Columbia at the helm. Its overarching goal is to forge connections between the remarkable progress achieved in AI systems and the ongoing revolution in our understanding of the brain.

Richard Zemel, a professor of computer science at Columbia, explained that the aim is to foster cross-disciplinary collaboration between leading AI and neuroscience researchers, yielding mutual benefits for AI systems and human beings alike. Zemel emphasized that knowledge flows in both directions: AI systems draw inspiration from the brain, and artificial neural networks, which bear a loose resemblance to the brain’s structure, can in turn inform our understanding of it.

A leading expert in artificial intelligence warns that the race to develop more sophisticated models is outpacing our ability to regulate the technology. Critics say his warnings overhype the dangers of new AI models like GPT, but MIT professor Max Tegmark says that without guardrails on their work, private companies risk leading the world into dangerous territory. His Future of Life Institute issued an open letter, signed by tech luminaries like Elon Musk, calling on Silicon Valley to immediately pause work on advanced AI for six months and unite on a safe way forward. Without that, Tegmark says, the consequences could be devastating for humanity.

One novel approach, which some experts say could actually work, is to use metadata, watermarks, and other technical systems to distinguish fake images from real ones. Companies like Google, Adobe, and Microsoft all support some form of labeling of AI-generated content in their products. Google, for example, said at its recent I/O conference that, in the coming months, it will attach a written disclosure, similar to a copyright notice, underneath AI-generated results on Google Images. OpenAI’s popular image generation technology DALL-E already adds a colorful stripe watermark to the bottom of all images it creates.
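
As a loose illustration of what a visible watermark involves (this is not OpenAI's actual method; the palette, square size, and file names below are invented for this sketch), a few lines of Python with the Pillow library can stamp a row of colored squares onto an image's corner:

```python
# Illustrative sketch of a visible stripe watermark, in the spirit of
# DALL-E's colored corner marker. Not OpenAI's implementation: the
# palette, square size, and placement are arbitrary assumptions.
from PIL import Image, ImageDraw

def add_stripe_watermark(path_in: str, path_out: str, square: int = 16) -> None:
    """Draw a row of colored squares along the image's bottom-right corner."""
    palette = ["#ffff66", "#42ffff", "#51da4c", "#ff6e3c", "#3c46ff"]
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    x0 = w - square * len(palette)  # right-align the stripe
    for i, color in enumerate(palette):
        draw.rectangle(
            [x0 + i * square, h - square, x0 + (i + 1) * square, h],
            fill=color,
        )
    img.save(path_out)

add_stripe_watermark("generated.png", "generated_marked.png")
```

A visible mark like this is trivial to crop out, which is one reason companies pair it with metadata-based approaches like the one described below.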

“We all have a fundamental right to establish a common objective reality,” said Andy Parsons, senior director of Adobe’s content authenticity initiative group. “And that starts with knowing what something is and, in cases where it makes sense, who made it or where it came from.”

To reduce confusion between fake and real images, the Content Authenticity Initiative developed a tool, now used by Adobe, called Content Credentials, which tracks when images are edited by AI. The company describes it as a nutrition label: information for digital content that stays with the file wherever it’s published or stored. For example, Photoshop’s latest feature, Generative Fill, uses AI to quickly create new content in an existing image, and Content Credentials can keep track of those changes.
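
To make the nutrition-label idea concrete, here is a minimal sketch assuming a toy JSON manifest rather than Adobe's actual Content Credentials / C2PA format; every field name is invented for illustration. The key idea is binding an edit log to the exact bytes of a file via a cryptographic hash:

```python
# Toy sketch of a content-credential "nutrition label": a manifest that
# binds an edit history to a file's exact bytes via a SHA-256 digest.
# This is not Adobe's Content Credentials or the C2PA format.
import hashlib
import json
from datetime import datetime, timezone

def make_manifest(image_path: str, edits: list) -> dict:
    """Record a file's hash plus a log of the edits applied to it."""
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "asset": image_path,
        "sha256": digest,  # ties the manifest to this exact file version
        "edits": edits,    # e.g. AI-assisted operations
        "issued": datetime.now(timezone.utc).isoformat(),
    }

manifest = make_manifest(
    "photo.jpg",
    edits=[{"tool": "Generative Fill", "type": "ai_generated_region"}],
)
print(json.dumps(manifest, indent=2))
```

If the file is altered without updating the manifest, the hash no longer matches, which is what lets a verifier detect undocumented changes; the real standard adds digital signatures so the manifest itself can't be forged.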

Stanford professor Zhenan Bao and her team have invented a multi-layer self-healing synthetic electronic skin.

This is according to a report by Fox News published on Friday.

When injured, the skin’s layers can recognize and realign with one another, allowing it to continue functioning while it heals. The material can self-heal in just 24 hours when warmed to 158°F (70°C), or in about a week at room temperature.

Scientists at the ETH Zurich spinoff company Tethys Robotics have developed an underwater robot that can be deployed in situations too dangerous for human divers.

This is according to a report by InceptiveMind published on Saturday.

The new machine is an autonomous underwater vehicle engineered specifically for challenging and dangerous environments such as turbid channels and rivers, making it ideal for search and rescue missions. When conventional search and rescue techniques fail, the Tethys robot is there to take over.

Machine learning has an “AI” problem. With new breathtaking capabilities from generative AI released every several months — and AI hype escalating at an even higher rate — it’s high time we differentiate most of today’s practical ML projects from those research advances. This begins by correctly naming such projects: Call them “ML,” not “AI.” Including all ML initiatives under the “AI” umbrella oversells and misleads, contributing to a high failure rate for ML business deployments. For most ML projects, the term “AI” goes entirely too far — it alludes to human-level capabilities. In fact, when you unpack the meaning of “AI,” you discover just how overblown a buzzword it is: If it doesn’t mean artificial general intelligence, a grandiose goal for technology, then it just doesn’t mean anything at all.

From “The AI Hype Cycle Is Distracting Companies” by Eric Siegel (Harvard Business Review, June 2023):

By focusing on sci-fi goals, they’re missing out on projects that create real value right now.

Researchers have trained a robotic ‘chef’ to watch and learn from cooking videos, and recreate the dish itself.

The researchers, from the University of Cambridge, programmed their robotic chef with a cookbook of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which one was being prepared and make it.

In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their results, reported in the journal IEEE Access, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.
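
One simplified way to picture the identification step, with an invented mini-cookbook and placeholder ingredient names rather than the study's actual method or data, is to score the ingredients detected across a video against each known recipe:

```python
# Toy sketch of the recipe-identification step: given the ingredients
# detected across a cooking video's frames, pick the cookbook recipe
# with the highest overlap. Recipes and detections are placeholders,
# not data from the Cambridge study.
COOKBOOK = {
    "greek_salad": {"tomato", "cucumber", "feta", "olive"},
    "caprese": {"tomato", "mozzarella", "basil"},
    "coleslaw": {"cabbage", "carrot", "mayonnaise"},
}

def identify_recipe(detected: set) -> str:
    """Return the recipe whose ingredients best match the detections."""
    def jaccard(a: set, b: set) -> float:
        # Jaccard similarity: size of intersection over size of union.
        return len(a & b) / len(a | b)
    return max(COOKBOOK, key=lambda name: jaccard(COOKBOOK[name], detected))

# Ingredients spotted across the video:
print(identify_recipe({"tomato", "basil", "mozzarella"}))  # -> "caprese"
```

In the same spirit, a combination of detected ingredients that matches no stored recipe well could be added as a new entry, which loosely mirrors how the robot incrementally expanded its cookbook to a ninth recipe.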