Tesla CEO Elon Musk thinks the automaker’s market capitalization is directly tied to whether the company can solve autonomous driving.
It seems that Google doesn’t trust any AI chatbot, including its own Bard. In an update to its security measures, Alphabet Inc., Google’s parent company, has asked its employees to keep sensitive data away from public AI chatbots, including Bard itself.
According to sources familiar with the matter, Alphabet Inc., Google’s parent organization, is advising its employees to be cautious when using chatbots, including its own program, Bard, even as it continues to promote the software globally.
The company has updated a longstanding policy on protecting confidential information, instructing employees not to input sensitive materials into AI chatbots such as Bard…
Current artificial intelligence systems like ChatGPT do not have human-level intelligence and are not even as smart as a dog, Meta’s AI chief Yann LeCun said. LeCun discussed the limitations of generative AI systems such as ChatGPT, arguing that they are not very intelligent because they are trained solely on language.
Meta’s LeCun said that, in the future, there will be machines that are more intelligent than humans, which should not be seen as a threat.
Current artificial intelligence systems like ChatGPT do not have human-level intelligence and are barely smarter than a dog, Meta’s AI chief said, as the debate over the dangers of the fast-growing technology rages on.
Meta’s AI chief said the company is working on training AI on video, rather than just on language, which is a tougher task.
In another example of current AI limitations, he said a five-month-old baby would watch an object float and not think much of it. A nine-month-old baby, however, would look at the same object and be surprised, because it has learned that objects shouldn’t float.
LeCun said we have “no idea how to reproduce this capacity with machines today. Until we can do this, we are not going to have human-level intelligence, we are not going to have dog level or cat level [intelligence].”
A video worth watching: an amazingly detailed deep dive into Sam Altman’s interviews, paired with a high-level look at large language models.
Missed by much of the media, Sam Altman (and colleagues) revealed at least 16 surprising things over his world tour. From AIs designing AIs to ‘unstoppable open source’, the ‘customisation’ leak (with a new 16k ChatGPT and a ‘steerable’ GPT-4), AI and religion, and possible regrets over having ‘pushed the button’.
I’ll bring in all of this and eleven other insights, together with a new and highly relevant paper just released this week on ‘dual-use’. Whether you are interested in ‘solving climate change by telling AIs to do it’, ‘staring extinction in the face’ or just a deepfake Altman, this video touches on it all, ending with comments from Brockman in Seoul.
I watched over ten hours of interviews to bring you this footage from Jordan, India, Abu Dhabi, UK, South Korea, Germany, Poland, Israel and more.
Altman in Abu Dhabi, HUB71, ‘change its architecture’: https://youtu.be/RZd870NCukg.
Fast-moving subatomic particles called muons have been used to wirelessly navigate underground for the first time. Using muon-detecting ground stations synchronized with an underground muon-detecting receiver, researchers at the University of Tokyo were able to calculate the receiver’s position in the basement of a six-story building.
As GPS cannot penetrate rock or water, this new technology could be used in future search and rescue efforts, to monitor undersea volcanoes, and guide autonomous vehicles underground and underwater. The findings are published in the journal iScience.
GPS, the Global Positioning System, is a well-established navigation tool and offers an extensive list of positive applications, from safer air travel to real-time location mapping. However, it has limitations. GPS signals are weaker at higher latitudes and can be jammed or spoofed (where a counterfeit signal replaces an authentic one). Signals can also be reflected off surfaces such as walls, interfered with by trees, and cannot pass through buildings, rock, or water.
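The underlying geometry resembles GPS: because muons travel at nearly the speed of light, the delay between a surface station and the underground receiver detecting the same muon yields a station–receiver distance, and distances to several stations at known positions pin down the receiver’s 3-D location. Below is a minimal, illustrative least-squares trilateration sketch (not the researchers’ actual code; station layout, muon speed constant, and function names are assumptions for the demo):

```python
# Illustrative sketch: recover an underground receiver's position from
# muon time-of-flight measurements to surface stations at known positions.
import numpy as np

# Cosmic-ray muons travel at roughly 0.997c (assumed value for this demo).
C_MUON = 0.997 * 299_792_458.0  # m/s

def trilaterate(stations, delays):
    """Least-squares position from station coords (N, 3) and delays (N,) in s."""
    stations = np.asarray(stations, dtype=float)
    d = C_MUON * np.asarray(delays, dtype=float)  # station-receiver distances, m
    # Subtract station 0's range equation from the others to linearize
    # ||x - s_i||^2 = d_i^2 into A @ x = b.
    A = 2.0 * (stations[1:] - stations[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(stations[1:] ** 2, axis=1) - np.sum(stations[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four non-coplanar surface stations around a building; receiver in the basement.
stations = [(0, 0, 0), (30, 0, 5), (0, 30, 3), (30, 30, 0)]
truth = np.array([12.0, 9.0, -8.0])
# Synthetic noise-free delays derived from the true position.
delays = [np.linalg.norm(truth - np.array(s)) / C_MUON for s in stations]
print(trilaterate(stations, delays))  # approximately [12.  9. -8.]
```

Note the stations must not be coplanar: with all stations at the same height, the linearized system loses the vertical coordinate entirely, which is why the sketch spreads them across different elevations.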
There is tremendous apprehension about the potential of generative AI—technologies that can create new content such as text, images, and video—to replace people in many jobs. But one of the biggest opportunities generative AI offers is to augment human creativity and overcome the challenges of democratizing innovation.
Over the past two decades, companies have used crowdsourcing and idea competitions to involve outsiders in the innovation process, but many businesses have struggled to capitalize on these contributions. They have lacked an efficient way to evaluate the ideas, for instance, or to synthesize different ideas.
Generative AI can help overcome those challenges, the authors say. It can supplement the creativity of employees and customers and help them produce and identify novel ideas—and improve the quality of raw ideas. Specifically, companies can use generative AI to promote divergent thinking, challenge expertise bias, assist in idea evaluation, support idea refinement, and facilitate collaboration among users.