Despite their performance, current AI models have major weaknesses: they require enormous resources and are indecipherable. Help may be on the way.

By Manon Bischoff

ChatGPT has triggered an onslaught of artificial intelligence hype. The arrival of OpenAI’s large-language-model-powered (LLM-powered) chatbot forced leading tech companies to follow suit with similar applications as quickly as possible. The race to develop ever more powerful AI models continues. Meta came out with an LLM called Llama at the beginning of 2023, and Google presented its Bard model (now called Gemini) that same year. Other providers, such as Anthropic, have also delivered impressive AI applications.

The illusion of AI consciousness: why GPT-4o and other chatbots are not conscious.

• Shannon Vallor, an AI expert and contributor to DeepMind, discusses the latest developments in generative AI, particularly OpenAI’s GPT-4o model, drawing on the ideas of her new book, ‘The AI Mirror’. Despite modest intellectual improvements, the model’s human-like behaviour raises serious ethical concerns, but as Vallor argues, AI today only presents the illusion of consciousness.

How can rapidly emerging AI develop into a trustworthy, equitable force? Through proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized internet access and coding, fueling the creation of more technology, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI features into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) through consumer-facing apps such as ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.