
A new Gartner poll reveals that organizations are deploying genAI technology for three key business processes, while a Google Cloud poll shows enthusiasm for AI tools among developers.

A poll of more than 1,400 executive leaders revealed a threefold increase in organizations piloting generative AI (genAI) and more than a doubling of those who have placed the tech into production.

The new Gartner Research survey revealed that 45% of organizations are running genAI pilots, and another 10% have put genAI solutions into production — a significant increase from an earlier poll conducted in March and April 2023, in which only 15% of respondents were piloting generative AI and 4% had put it into production.


With all the generative AI hype swirling among evangelists, one might think that the Fortune 500 is galloping wildly towards putting large language models (LLMs) into production and turning corporate America into one big chatbot. To that, I say: “Whoa, Nelly!” — meaning, think again.

That’s because for all the C-suite executives out there feeling generative AI FOMO and getting pressure from CEOs to move quickly to develop AI-centric strategies, things are actually moving far slower than you might imagine (or AI vendors, who warn companies about falling behind, might want). As I reported back in April, there’s certainly no doubt that executives want to access the power of generative AI, as tools such as ChatGPT continue to spark the public imagination. But a KPMG study of U.S. executives that month found that a solid majority (60%) of respondents said that while they expect generative AI to have enormous long-term impact, they are still a year or two away from implementing their first solution.

Each member works out within a designated station facing wall-to-wall LED screens. These tall screens mask sensors that track both the motions of the exerciser and the gym’s specially built equipment, including dumbbells, medicine balls, and skipping ropes, using a combination of algorithms and machine-learning models.

Once members arrive for a workout, they’re given the opportunity to pick their AI coach through the gym’s smartphone app. The choice depends on whether they feel more motivated by a male or female voice and a stricter, more cheerful, or laid-back demeanor, although they can switch their coach at any point. The trainers’ audio advice is delivered over headphones and accompanied by the member’s choice of music, such as rock or country.

Although each class at the Las Colinas studio is currently observed by a fitness professional, that supervisor doesn’t need to be a trainer, says Brandon Bean, cofounder of Lumin Fitness. “We liken it to being more like an airline attendant than an actual coach,” he says. “You want someone there if something goes wrong, but the AI trainer is the one giving form feedback, doing the motivation, and explaining how to do the movements.”

Can artificial intelligence, or AI, make it possible for us to live forever? Or at least, be preserved for posterity? What are the current developments in the fields of artificial intelligence and biotechnology?

Will humanity exist without biological bodies in the near future? Could humans and AI merge into one being? This documentary explores these questions, and more.

The film also explores current advances in AI, robotics and biotechnology. What is the essence of human existence? Can that essence be replicated? Technological development in these fields is rapid. It is also increasingly urgent, as people’s lives play out more and more online. Visionaries, authors, and theorists such as Nick Bostrom, Hiroshi Ishiguro, Douglas Rushkoff and Deepak Chopra are questioning how a humanity without a biological body might evolve.

The scientific community is fascinated by the idea of merging human and machine. However, leading minds are also pondering the question of whether AI might just be the last thing humans ever create.

Large Language Models (LLMs) have drawn a lot of attention for their human-like abilities. These models can answer questions, generate content, summarize long passages of text, and more. Prompts are essential for improving the performance of LLMs like GPT-3.5 and GPT-4: the way a prompt is written can have a big impact on an LLM's abilities across a variety of areas, including reasoning, multimodal processing, and tool use. Prompting techniques designed by researchers have shown promise in tasks like model distillation and agent behavior simulation.
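To make the point about prompt wording concrete, here is a minimal sketch, not taken from the article, contrasting a direct prompt with a reasoning-style prompt for the same question. The question is a classic toy example, and `call_llm` is a hypothetical placeholder for whatever chat-completion client you actually use.

```python
# Minimal sketch: how prompt wording can steer an LLM on the same task.
# `call_llm` is a hypothetical placeholder, not a real SDK call.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire this up to your provider's SDK")

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Variant 1: ask for a bare answer, which often invites the intuitive (wrong) reply.
direct_prompt = f"Answer with a single number only.\n\n{question}"

# Variant 2: a reasoning-style prompt, which tends to help on problems like this.
reasoning_prompt = (
    f"{question}\n\nWork through the problem step by step, then state the final answer."
)

# Send each variant to your model of choice and compare the replies:
# print(call_llm(direct_prompt))
# print(call_llm(reasoning_prompt))
```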

The manual engineering of prompt strategies raises the question of whether the process can be automated. Automatic Prompt Engineer (APE) attempted to address this by producing a set of prompts from input-output examples in a dataset, but it suffered diminishing returns in prompt quality. To overcome those diminishing returns, researchers have proposed a diversity-maintaining evolutionary algorithm for the self-referential self-improvement of prompts for LLMs.

Just as a neural network can change its weight matrix to improve its performance, an LLM can alter its prompts to improve its capabilities. Taking the analogy further, LLMs might be made to improve not only their own capabilities but also the process by which they improve them, allowing the improvement to continue indefinitely. Building on these ideas, a team of researchers at Google DeepMind has introduced PromptBreeder (PB), a technique that lets LLMs improve themselves in a self-referential manner.
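To make the evolutionary framing concrete, below is a deliberately simplified sketch of what such a loop could look like. It is not the PromptBreeder algorithm itself: `call_llm`, the scoring rule, and the population management are all illustrative assumptions, and the real method maintains diversity and mutates its own mutation prompts far more carefully.

```python
import random

# Toy evolutionary loop over prompts, loosely inspired by the idea behind
# PromptBreeder. NOT DeepMind's implementation; all helpers are placeholders.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire this up to your provider's SDK")

def score(task_prompt: str, examples: list[tuple[str, str]]) -> float:
    """Fitness: fraction of (input, expected_output) examples the prompt gets right."""
    hits = 0
    for x, y in examples:
        if y.strip().lower() in call_llm(f"{task_prompt}\n\nInput: {x}").lower():
            hits += 1
    return hits / len(examples)

def mutate(task_prompt: str, mutation_prompt: str) -> str:
    """Ask the LLM itself to rewrite the task prompt, guided by a mutation prompt."""
    return call_llm(f"{mutation_prompt}\n\nPrompt to improve:\n{task_prompt}")

def evolve(seed_prompts, mutation_prompts, examples, generations=5):
    population = list(seed_prompts)
    for _ in range(generations):
        parent = max(population, key=lambda p: score(p, examples))
        # In a self-referential setup, the mutation prompts themselves would also
        # be mutated; here they are fixed for simplicity.
        child = mutate(parent, random.choice(mutation_prompts))
        population.append(child)
        # Keep a small population (a crude stand-in for diversity maintenance).
        population = sorted(set(population), key=lambda p: score(p, examples))[-4:]
    return max(population, key=lambda p: score(p, examples))

# Hypothetical wiring:
# best = evolve(
#     seed_prompts=["Solve the task."],
#     mutation_prompts=["Rewrite the prompt to be clearer and more specific."],
#     examples=[("2 + 2", "4")],
# )
```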

At the Nordic Business Forum 2023 in Helsinki on September 27, 2023, Mo Gawdat openly discusses the current rate of AI advancement and the technological innovation expected to follow.

Learning points:

Where AI might be heading as a technology.
Ethical questions to consider as a business leader.
The role each of us and our businesses have to play in ensuring that AI is a driver for positive change.

In this post, you will learn about the transformer architecture, which is at the core of nearly all cutting-edge large language models. We'll start with a brief chronology of relevant natural language processing concepts, then go through the transformer step by step to uncover how it works.

Who is this useful for? Anyone interested in natural language processing (NLP).

How advanced is this post? This is not a complex post, but there are a lot of concepts, so it might be daunting to less experienced data scientists.
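As a small taste of what the post covers, here is a minimal sketch of scaled dot-product attention, the operation at the heart of the transformer. It is illustrative only, not code from the post, and uses random toy vectors rather than real token embeddings.

```python
import numpy as np

# Illustrative sketch of scaled dot-product attention, the transformer's core operation.

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (sequence_length, d) arrays of query, key, and value vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                              # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                         # weighted mix of value vectors

# Toy usage: a 3-token sequence with 4-dimensional vectors.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```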