In Sparks of Artificial General Intelligence: Early experiments with GPT-4.
In chatbots we trust.
Derek Thompson published an essay in the Atlantic last week that pondered an intriguing question: “When we’re looking at generative AI, what are we actually looking at?” The essay was framed like this: “Narrowly speaking, GPT-4 is a large language model that produces human-inspired content by using transformer technology to predict text. Narrowly speaking, it is an overconfident, and often hallucinatory, auto-complete robot. This is an okay way of describing the technology, if you’re content with a dictionary definition.”
He closes his essay with one last analogy, one that really makes you think about the as-yet-unforeseen consequences of generative AI technologies, good or bad: Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity … fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds … Our ancestors knew that open flame was a feral power, which deserved reverence and even fear. The same technology that made civilization possible also flattened cities.
Thompson concisely passes judgment on what he thinks generative AI will do to us in his final sentence: I think this technology will expand our minds. And I think it will burn us.
Thompson’s essay inadvertently but quite poetically illustrates why it’s so difficult to predict events and consequences far into the future. Scientists and philosophers have long studied how knowledge expands from its current state in novel directions of thought.
This year’s NVIDIA GPU Technology Conference (GTC) could not have come at a more auspicious time for the company. The hottest topic in technology today is the artificial intelligence (AI) behind ChatGPT, other large language models (LLMs), and their use in generative AI applications. Underlying all of this new AI technology are NVIDIA GPUs. NVIDIA CEO Jensen Huang doubled down on support for LLMs and the generative AI future built on them, calling it “the iPhone moment for AI.” Using LLMs, AI systems can learn the languages of people, programs, images, or chemistry. Drawing on that large knowledge base, they can respond to a query by creating new, unique works: this is generative AI.
Jumbo-sized LLMs are taking this capability to new levels, most recently GPT-4, which was introduced just prior to GTC. Training these complex models takes thousands of GPUs, and applying them to specific problems requires still more GPUs for inference. NVIDIA’s latest Hopper GPU, the H100, is known for training, but it can also be divided into as many as seven isolated instances, a feature NVIDIA calls MIG (Multi-Instance GPU), which allows multiple inference models to run on a single GPU. It is in this inference mode that the GPU transforms queries into new outputs using trained LLMs.
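The query-in, output-out loop described above boils down to repeated next-token prediction: the model assigns probabilities to the next token given the text so far, samples one, appends it, and repeats until a stop token. Here is a minimal, illustrative sketch of that autoregressive loop using a hard-coded toy bigram table in place of a real transformer; every word and probability below is made up for demonstration, not drawn from any actual model:

```python
import random

# Toy "language model": next-word probabilities conditioned only on the
# previous word. A real LLM computes a comparable distribution with a
# transformer over billions of parameters and a long context window.
BIGRAMS = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"model": 0.5, "query": 0.5},
    "a":        {"model": 1.0},
    "model":    {"predicts": 1.0},
    "query":    {"returns": 1.0},
    "predicts": {"text": 1.0},
    "returns":  {"text": 1.0},
    "text":     {"<end>": 1.0},
}

def generate(seed=0, max_tokens=10):
    """Autoregressive sampling: draw each next token from the distribution
    conditioned on the previous token, stopping at <end>."""
    rng = random.Random(seed)
    token, out = "<start>", []
    for _ in range(max_tokens):
        dist = BIGRAMS[token]
        words, probs = zip(*dist.items())
        token = rng.choices(words, weights=probs)[0]
        if token == "<end>":
            break
        out.append(token)
    return " ".join(out)

print(generate())
```

The same sampling loop, scaled up to a learned distribution over tens of thousands of tokens, is the inference workload that each MIG slice of an H100 serves independently.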
NVIDIA is using its leadership position to build new business opportunities as a full-stack supplier of AI, including chips, accelerator cards, systems, software, and even services. The company is opening up a services business in areas such as biology. Its pricing might be based on usage time, or it could be based on the value of the end product built with its services.
The release of ChatGPT in late November 2022 lit a fire under the subdued venture capital sector, a hesitant business community, and the work of academics and regulators. While venture funding decreased by 19% from Q3’22 to Q4’22, AI funding increased 15% over the same period, according to CB Insights’ State of AI 2022 Report (annual AI funding dropped by 34% in 2022, mirroring the broader venture funding downturn). Looking specifically at generative AI startups, CB Insights found that 2022 was a record year, with equity funding topping $2.6 billion across 110 deals.
Everywhere you turn, you encounter generative AI.
Generative AI, in concert with other quickly growing technologies, is propelling a revolutionary future, blurring the line between the digital and physical world, says Accenture’s new report.
When combined, cloud, metaverse, and AI trends will reduce the gap between the virtual and real worlds, according to the Fortune Global 500 tech company.
Machine learning has been leveraged to accelerate analysis in nuclear processing facilities and investigations in the field.
Surprise nuclear threats could become far harder to mount. Researchers at the US Department of Energy’s Pacific Northwest National Laboratory (PNNL) have developed new machine learning techniques to accelerate the detection and analysis of nuclear weapons production.
One enticing national security application of these techniques is to use data analytics and machine learning to monitor the materials used to produce nuclear weapons.
This advancement in AI technology may have far-reaching effects if the claim is accurate.
According to Microsoft, 1,287 password attacks occur every second around the world.
Microsoft is now focusing on cybersecurity as part of its ongoing effort to incorporate generative AI into the majority of its products; the company previously announced an AI-powered assistant for Office apps. To enhance cybersecurity, Microsoft has announced that the next generation of AI will power its security products.
It is as strange as Saudi Arabia granting an AI citizenship.
Italy has become the first Western country to block the advanced chatbot ChatGPT, according to authorities. The Italian data protection authority cited privacy concerns about the model, which was developed by the US start-up OpenAI and is backed by Microsoft.
Authorities also accused OpenAI of failing to verify the ages of its ChatGPT users and of failing to enforce its rule barring users under the age of 13. Given their stage of development, these young users may be exposed to “unsuitable answers” from the chatbot, officials said.
One million years ago. An ancient hominid cradles a large stone—black and glassy—in the palm of his hand, feeling for creases in the rock with his fingertips. In the other hand he grasps the antler of a deer, the bone’s blunt base pointing forward. He strikes the stone with the antler, and it splits along an invisible fracture. He flips it over and strikes again. Another flake of stone falls away. Examining the contours of the rock he continues to flip and strike—sometimes with force, other times with a gentle tap. Gradually, a useful and deadly object emerges from the formless stone. It is a bifaced handaxe, the most important tool that accompanied our ancestors out of Africa.
Movies and television usually depict embodied AI as a malevolent robot. Terror sells. But without access to the physical world and a tactile curiosity, AI will never be fully creative.