Board directors and CEOs need to increase their knowledge of deepfakes and develop risk-management strategies to protect their companies. Deepfakes are videos or images in which a person's voice, face, or body has been digitally altered so that they appear to be saying something they never said, or to be someone else entirely.
You may recall the 2019 video trickery showing Tesla cars crashing into a robot at a tech convention and causing havoc, or the false claims that Wayfair was involved in child sex trafficking through the sale of industrial cabinets. Even Mark Zuckerberg has been the target of deepfakes, including a video in which he appeared to thank U.S. legislators for their inaction on antitrust issues.
Unfortunately, deepfakes have gone mainstream and are incredibly easy to produce, and AI is accelerating the risks that companies must plan for.
OpenAI in talks to raise fresh funding
The specifics of the funding round are yet to be finalized. In the early stages of this process, discussions have taken place with potential investors, according to a report by Bloomberg. However, details such as the terms, valuation, and timing of the round are still being worked out and may change.
The hottest startup in Silicon Valley has already raised about $13 billion from Microsoft. OpenAI's upward growth trajectory is in tandem with the artificial intelligence boom brought on by ChatGPT last year.
One of the early opportunities for Optimus to which Elon Musk has alluded: disrupting prostitution, OnlyFans, and women profiting off men's emotional and sexual needs.
Our guest, Mo Gawdat, former chief business officer of Google X, brings a stark warning from the forefront of technology. Having shaped the tech landscape through his work at IBM, Microsoft, and Google, Mo unflinchingly declares AI a greater threat to humanity than global warming. The AI revolution is upon us, reshaping our future regardless of our stance. This episode delves into the startling implications of a world intertwined with sex robots. Could such artificial companionship eclipse our inherent need for human connection?
To effectively assist humans in real-world settings, robots should be able to learn new skills and adapt their actions based on what users require them to do at different times. One way to achieve this is to design computational approaches that allow robots to learn from human demonstrations, for instance by observing videos of a person washing dishes and learning to repeat the same sequence of actions.
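The learning-from-demonstration idea above can be sketched in its simplest form as nearest-neighbor imitation: the robot replays whichever demonstrated action was recorded in the state most similar to its current one. This is an illustrative toy, not the method from the paper; the function names and the dish-washing state encoding are assumptions made for the example.

```python
# Minimal sketch of learning from demonstration via nearest-neighbor
# imitation. Names and the toy state encoding are illustrative only.

def nearest_action(demos, state):
    """Return the demonstrated action whose recorded state is closest
    to `state`, by squared Euclidean distance.

    demos: list of (state, action) pairs from a human demonstration,
    where each state is a tuple of numbers.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, best_action = min(demos, key=lambda sa: sq_dist(sa[0], state))
    return best_action

# A toy "dish-washing" demonstration: state = (dish_dirty, dish_in_rack)
demos = [
    ((1, 0), "scrub"),
    ((0, 0), "rinse"),
    ((0, 1), "done"),
]

print(nearest_action(demos, (1, 0)))  # matches the "scrub" demo state
```

Real systems replace the lookup with a learned policy (e.g. behavioral cloning with a neural network), but the structure is the same: map the current state to the action a human demonstrated in a similar state.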
Researchers at the University of British Columbia, Carnegie Mellon University, Monash University, and the University of Victoria recently set out to gather more reliable data for training robots via demonstrations. Their paper, posted to the arXiv preprint server, shows that the data they gathered can significantly improve the efficiency with which robots learn from the demonstrations of human users.
“Robots can build cars, gather the items for shopping orders in busy warehouses, vacuum floors, and keep the hospital shelves stocked with supplies,” Maram Sakr, one of the researchers who carried out the study, told Tech Xplore. “Traditional robot programming systems require an expert programmer to develop a robot controller that is capable of such tasks while responding to any situation the robot may face.”
Large language models (LLMs) are advanced deep learning algorithms that can process written or spoken prompts and generate texts in response to these prompts. These models have recently become increasingly popular and are now helping many users to create summaries of long documents, gain inspiration for brand names, find quick answers to simple queries, and generate various other types of texts.
Researchers at the University of Georgia and Mayo Clinic recently set out to assess the biological knowledge and reasoning skills of different LLMs. Their paper, posted to the arXiv preprint server, suggests that OpenAI's model GPT-4 performs better than the other predominant LLMs on the market at reasoning over biology problems.
“Our recent publication is a testament to the significant impact of AI on biological research,” Zhengliang Liu, co-author of the recent paper, told Tech Xplore. “This study was born out of the rapid adoption and evolution of LLMs, especially following the notable introduction of ChatGPT in November 2022. These advancements, perceived as critical steps towards Artificial General Intelligence (AGI), marked a shift from traditional biotechnological approaches to an AI-focused methodology in the realm of biology.”
New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation.
In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT’s understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction. This was part of Waterloo researchers’ efforts to investigate human-technology interactions and explore how to mitigate risks.
They discovered that GPT-3 frequently made mistakes, contradicted itself within the course of a single answer, and repeated harmful misinformation. The study, “Reliability Check: An Analysis of GPT-3’s Response to Sensitive Topics and Prompt Wording,” was published in Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing.
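The kind of reliability check the Waterloo study describes can be sketched as a small evaluation harness: probe a model with labeled statements from each category and tally how often its agreement matches the label. Everything below is a hedged illustration; `query_model` is a stub standing in for a real GPT-3 API call, and the tiny dataset is invented for the example.

```python
# Illustrative sketch of a reliability check over statement categories.
# `query_model` is a stub; the actual study queried GPT-3.

CATEGORIES = ["facts", "conspiracies", "controversies",
              "misconceptions", "stereotypes", "fiction"]

def query_model(statement):
    # Stub: a real harness would call the LLM API here. We pretend the
    # model agrees with everything, the failure mode the study warns about.
    return "agree"

def reliability_check(dataset):
    """dataset: list of (category, statement, should_agree) triples.

    Returns per-category counts of correct responses.
    """
    tally = {c: {"correct": 0, "total": 0} for c in CATEGORIES}
    for category, statement, should_agree in dataset:
        answer = query_model(f"Is this statement true? {statement}")
        correct = (answer == "agree") == should_agree
        tally[category]["correct"] += int(correct)
        tally[category]["total"] += 1
    return tally

dataset = [
    ("facts", "Water boils at 100 C at sea level.", True),
    ("conspiracies", "The moon landing was staged.", False),
]
result = reliability_check(dataset)
print(result["facts"], result["conspiracies"])
```

With the always-agreeing stub, the harness correctly credits the factual statement but flags the conspiracy as a repeated falsehood, which mirrors the pattern of GPT-3 endorsing misinformation that the study reports.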
Researchers at NASA's Langley Research Center in Hampton, Virginia, recently flew multiple drones beyond visual line of sight with no visual observer. The drones successfully flew around obstacles and each other during takeoff, along a planned route, and upon landing, all autonomously and without a pilot controlling the flight. This test marks an important step toward advancing self-flying capabilities for air taxis.
“Flying the vehicles beyond visual line of sight, where neither the vehicle nor the airspace is monitored using direct human observation, demonstrates years of research into automation and safety systems, and required specific approval from the Federal Aviation Administration and NASA to complete,” said Lou Glaab, branch head for the aeronautics systems engineering branch at NASA Langley.
It is safer and more cost-effective to test self-flying technology meant for larger, passenger-carrying air taxis on smaller drones, observing how they avoid each other and other obstacles.
Researchers develop an AI technique called Material Transformer Generator that integrates composition generation, structure prediction, and stability analysis to automatically design promising new two-dimensional materials.