
Exclusive: Google in talks to invest in AI startup Character.AI

Nov 10 (Reuters) — Alphabet’s (GOOGL.O) Google is in talks to invest hundreds of millions of dollars in Character.AI, as the fast-growing artificial intelligence chatbot startup seeks capital to train models and keep up with user demand, two sources briefed on the matter told Reuters.

The investment, which could be structured as convertible notes according to a third source, would deepen the existing partnership Character.AI already has with Google, in which it uses Google’s cloud services and Tensor Processing Units (TPUs) to train its models.

Google and Character.AI did not respond to requests for comment.

The World Is Running Out of Data to Feed AI, Experts Warn

As artificial intelligence (AI) reaches the peak of its popularity, researchers have warned the industry might be running out of training data – the fuel on which powerful AI systems run.

This could slow down the growth of AI models, especially large language models, and may even alter the trajectory of the AI revolution.

But why is a potential lack of data an issue, considering how much data there is on the web? And is there a way to address the risk?

AI will transform Airbnb more than hotels in near term, CEO says

Airbnb CEO Brian Chesky said that the digital home-share company will reap the benefits of artificial intelligence more than hotels, at least in the near term.

“The reason we know this is because AI is mostly changing… the digital world a lot faster than the physical world,” Chesky told reporters during a meeting in New York City on Tuesday. “Because we have more of a digital product, we can actually adapt and change faster.”

Chesky also said that hotels are not going to be different five years from now because of AI, but that Airbnb “will be transformed.”

New algorithm finds failures and fixes in autonomous systems, from drone teams to power grids

From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.

Now, MIT engineers have developed an approach that can be paired with any autonomous system to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes for those failures and suggest repairs to avoid system breakdowns.

The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including a small and a large network, an aircraft collision-avoidance system, a team of rescue drones, and a robotic manipulator. In each system, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.
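To make the general idea concrete, here is a minimal, hypothetical sketch in Python of sampling-based failure discovery and repair. It is not the MIT team’s algorithm or code: the toy delivery-drone model, its numbers, and the candidate “repairs” (larger batteries) are all invented for illustration. The sketch randomly samples operating conditions, records which ones make the simulated mission fail, and then searches a small set of parameter changes for one that eliminates every failure found.

import random

# Hypothetical toy system (not the MIT model): a delivery drone on a 10 km
# round trip. Failure = the battery runs out before the trip is finished.
def mission_succeeds(payload_kg: float, headwind: float,
                     battery_wh: float = 60.0, cruise_speed: float = 12.0) -> bool:
    distance_m = 10_000.0
    # Power draw grows with payload and with flying into a headwind.
    power_w = 150.0 + 25.0 * payload_kg + 4.0 * (cruise_speed + headwind)
    flight_time_h = (distance_m / cruise_speed) / 3600.0
    return power_w * flight_time_h <= battery_wh

def sample_failures(n: int = 1000) -> list[tuple[float, float]]:
    """Randomly sample operating conditions; keep the ones that cause a failure."""
    failures = []
    for _ in range(n):
        payload = random.uniform(0.0, 5.0)    # kg
        headwind = random.uniform(0.0, 10.0)  # m/s
        if not mission_succeeds(payload, headwind):
            failures.append((payload, headwind))
    return failures

def suggest_repair(failures: list[tuple[float, float]]) -> float | None:
    """Search a small set of candidate repairs (bigger batteries) for one
    that makes every previously failing condition succeed."""
    for battery_wh in (70.0, 80.0, 90.0, 120.0):
        if all(mission_succeeds(p, w, battery_wh=battery_wh) for p, w in failures):
            return battery_wh
    return None

if __name__ == "__main__":
    random.seed(0)
    bad = sample_failures()
    print(f"{len(bad)} failing conditions out of 1000 samples")
    print("smallest candidate battery that repairs them all:", suggest_repair(bad), "Wh")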

What If We Became A Type I Civilization? 15 Predictions

This video explores what life would be like if we became a Type I Civilization. Watch this next video about the Technological Singularity: https://youtu.be/yHEnKwSUzAE
🎁 5 Free ChatGPT Prompts To Become a Superhuman: https://bit.ly/3Oka9FM
🤖 AI for Business Leaders (Udacity Program): https://bit.ly/3Qjxkmu
☕ My Patreon: https://www.patreon.com/futurebusinesstech
➡️ Official Discord Server: https://discord.gg/R8cYEWpCzK

SOURCES:
• https://www.futuretimeline.net
• The Singularity Is Near: When Humans Transcend Biology (Ray Kurzweil): https://amzn.to/3ftOhXI
• The Future of Humanity (Michio Kaku): https://amzn.to/3Gz8ffA
• Life 3.0: Being Human in the Age of Artificial Intelligence (Max Tegmark): https://amzn.to/3xrU351
___

💡 Future Business Tech explores the future of technology and the world.

Examples of topics I cover include:
• Artificial Intelligence & Robotics.
• Virtual and Augmented Reality.
• Brain-Computer Interfaces.
• Transhumanism.
• Genetic Engineering.

SUBSCRIBE: https://bit.ly/3geLDGO

Can’t quite develop that dangerous pathogen? AI may soon be able to help

AI tools are close to being able to do a host of dangerous things, such as walking people through the mistakes they made in a failed attempt to create a dangerous pathogen and guiding them toward a better protocol. There needs to be a durable way to probe how these systems might be misused, even as newer and more powerful technologies are continuously released.

AI Industry Insider Claims They Can No Longer Tell Apart Real and Fake

The people building the next iteration of AI technology are growing concerned about how lifelike generative AI content has already become.

In an interview with Axios, an unnamed “leading AI architect” said that in private tests, experts can no longer tell whether AI-generated imagery is real or fake, which nobody expected to be possible this soon.

According to the report, AI insiders expect this kind of technology to be available for anyone to use or purchase in 2024 — even as social media companies weaken their disinformation policies and slash the departments that work to enforce them.
