
OpenAI is forming a new team to bring ‘superintelligent’ AI under control

OpenAI is forming a new team led by Ilya Sutskever, its chief scientist and one of the company’s co-founders, to develop ways to steer and control “superintelligent” AI systems.

In a blog post published today, Sutskever and Jan Leike, a lead on the alignment team at OpenAI, predict that AI with intelligence exceeding that of humans could arrive within the decade. This AI — assuming it does, indeed, arrive eventually — won’t necessarily be benevolent, necessitating research into ways to control and restrict it, Sutskever and Leike say.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” they write. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”
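To make the supervision point concrete: RLHF typically begins by training a reward model on human preference comparisons between model outputs. The sketch below is a generic, minimal example of that pairwise-preference loss in PyTorch; it is not OpenAI’s implementation, and the tiny reward_model and random embeddings are illustrative stand-ins.

```python
import torch
import torch.nn as nn

# Minimal sketch of the pairwise-preference loss used to train a reward
# model in RLHF. The reward model here is a placeholder; in practice it
# would be a language model with a scalar head scoring full responses.
reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

def preference_loss(chosen_emb, rejected_emb):
    """Bradley-Terry loss: push the human-preferred response to score higher."""
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Dummy embeddings standing in for encoded (prompt, response) pairs
# labelled by human raters.
chosen = torch.randn(8, 768)
rejected = torch.randn(8, 768)
loss = preference_loss(chosen, rejected)
loss.backward()  # gradients update only the reward model
```

The key dependency the authors highlight is visible here: the training signal comes entirely from human judgments of which response is better, which stops scaling once humans can no longer judge the outputs reliably.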

United Nations counting on AI and robots to save its failing Sustainable Development Goals

The 17 goals were set by the UN in 2015, but over the years they have become increasingly unachievable.

The United Nations’ ‘AI for Good’ Summit is underway in Geneva and will showcase specialized robots to help the organization reach its 17 Sustainable Development Goals (SDGs).

The goals were set in 2015, but over the years they have become increasingly improbable owing to the rising cost of meeting the targets. The United Nations has been fighting issues like hunger, poverty, and climate change, and the price of doing so rose 25 percent, to $176 trillion, from 2021 to 2022, Reuters reported.

AI tests into top 1% for original creative thinking

New research from the University of Montana and its partners suggests artificial intelligence can match the top 1% of human thinkers on a standard test for creativity.

The study was directed by Dr. Erik Guzik, an assistant clinical professor in UM’s College of Business. He and his partners used the Torrance Tests of Creative Thinking, a well-known tool used for decades to assess human creativity.

The researchers submitted eight responses generated by ChatGPT, the application powered by the GPT-4 engine. They also submitted answers from a group of 24 UM students taking Guzik’s entrepreneurship and personal finance classes. These scores were compared with those of 2,700 college students nationally who took the TTCT in 2016. All submissions were scored by Scholastic Testing Service, which didn’t know AI was involved.

How Generative AI Can Be Combined With Causal AI To Transform DevOps Innovation

One way to achieve this transformation is to combine GPTs with causal AI, a precise and trustworthy type of AI that provides rich and accurate context, which is particularly valuable in cloud observability, analytics and automation.

Causal AI observes the actual relationships within a system, such as a multicloud technology stack, and delivers detailed and precise answers in near real time based on these observations. These answers enable users to discern the cause, type, severity, risk, impact and location of any issue flagged by the AI with very high precision based on real-time observed facts and their interdependencies.

In the future, DevOps teams can use automated prompt engineering to feed real-time data and causal AI-derived context to their GPT. As a result, the answers they receive will be more relevant, accurate and actionable.
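A rough sketch of what such automated prompt engineering could look like is shown below, in Python. The helper get_causal_findings() and the field names are hypothetical stand-ins for a causal-AI query, not any vendor’s API; the point is simply that observed, causally derived facts are injected into the prompt before it is sent to a GPT endpoint.

```python
import json

def get_causal_findings(incident_id: str) -> dict:
    """Hypothetical causal-AI query: returns the observed root cause and
    impact for an incident. Stand-in data for illustration only."""
    return {
        "root_cause": "connection pool exhaustion in payments-service",
        "severity": "high",
        "impacted_services": ["checkout", "billing"],
        "first_seen": "2023-07-05T14:02:00Z",
    }

def build_prompt(incident_id: str) -> str:
    """Automated prompt engineering: embed real-time, causally derived
    context so the LLM reasons over observed facts rather than guesses."""
    findings = get_causal_findings(incident_id)
    return (
        "You are assisting a DevOps on-call engineer.\n"
        f"Causal analysis of incident {incident_id}:\n"
        f"{json.dumps(findings, indent=2)}\n"
        "Propose a remediation plan and a rollback safeguard."
    )

print(build_prompt("INC-4211"))  # this prompt would then be sent to a GPT endpoint
```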

Wolfram’s New Update Gives Developers Genius-level Generative AI

After being one of the first plugins ever to come to ChatGPT, Wolfram has now gone all in on the LLM wave. In the latest version 13.3 update, the Wolfram Language adds support for LLM technology and integrates an AI model into the Wolfram Cloud.

This update comes on the heels of Wolfram gradually building the tooling to make the language LLM-ready. It puts LLMs directly into the language by introducing an LLM subsystem, and it builds on the LLM-functions technology added in May, which packages AI capabilities into callable functions, with the new subsystem now being user-addressable.
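For readers unfamiliar with the idea, an “LLM function” wraps a prompt template so it can be called like any ordinary function. The Python sketch below mimics the concept only; it is not the Wolfram Language API, and llm_complete is a hypothetical stand-in for a call to a hosted model.

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted language model."""
    return f"<model response to: {prompt!r}>"

def llm_function(template: str):
    """Package a prompt template into an ordinary callable,
    loosely analogous to the LLM-function concept."""
    def call(**kwargs) -> str:
        return llm_complete(template.format(**kwargs))
    return call

summarize = llm_function("Summarize the following release notes in one sentence: {text}")
print(summarize(text="Version 13.3 adds an LLM subsystem to the language."))
```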

With these new updates, developers have a whole new way of interfacing with their data. This approach combines Stephen Wolfram’s idea of natural language programming along with the Wolfram language’s symbolic programming, creating a force to be reckoned with. What’s more, with the Wolfram language API, this can be plugged in to larger systems, delivering amazing power through a natural language interface.

Quantum neural networks: An easier way to learn quantum processes

EPFL scientists show that even a few simple examples are enough for a quantum machine-learning model known as a quantum neural network to learn and predict the behavior of quantum systems, bringing us closer to a new era of quantum computing.

Imagine a world where computers can unravel the mysteries of quantum physics, enabling us to study the behavior of complex materials or simulate the intricate dynamics of molecules with unprecedented accuracy.

Thanks to a pioneering study led by Professor Zoe Holmes and her team at EPFL, we are now closer to that becoming a reality. Working with researchers at Caltech, the Free University of Berlin, and the Los Alamos National Laboratory, they have found a new way to teach a quantum computer how to understand and predict the behavior of quantum systems. The research has been published in Nature Communications.
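As a toy illustration of the general idea of learning a quantum process from a handful of input/output examples, the NumPy sketch below fits a one-parameter rotation to four example states by minimizing infidelity. It is not the EPFL team’s method or model, just a minimal analogue of training a parameterized circuit on few examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Unknown quantum process to be learned (here: a fixed rotation).
theta_true = 1.234
target = ry(theta_true)

def random_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# A few training examples: random input states and the process outputs.
inputs = [random_state() for _ in range(4)]
outputs = [target @ psi for psi in inputs]

def infidelity(theta):
    """1 minus the average fidelity between model outputs and observed outputs."""
    model = ry(theta)
    fids = [abs(np.vdot(out, model @ psi)) ** 2 for psi, out in zip(inputs, outputs)]
    return 1.0 - np.mean(fids)

# Train the one-parameter "network" by finite-difference gradient descent.
theta, lr, eps = 0.0, 0.5, 1e-4
for _ in range(200):
    grad = (infidelity(theta + eps) - infidelity(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"learned theta = {theta:.3f}, true theta = {theta_true:.3f}")
```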

AMD AI chips are nearly as fast as Nvidia’s, MosaicML says

As Nvidia’s recent surge in market capitalization clearly demonstrates, the AI industry is in desperate need of new hardware to train large language models (LLMs) and other AI-based algorithms. While server and HPC GPUs may be worthless for gaming, they serve as the foundation for data centers and supercomputers that perform highly parallelized computations necessary for these systems.

When it comes to AI training, Nvidia’s GPUs have been the most desirable to date. In recent weeks, the company briefly achieved an unprecedented $1 trillion market capitalization due to this very reason. However, MosaicML now emphasizes that Nvidia is just one choice in a multifaceted hardware market, suggesting companies investing in AI should not blindly spend a fortune on Team Green’s highly sought-after chips.

The AI startup tested AMD MI250 and Nvidia A100 cards, both of which are one generation behind each company’s current flagship HPC GPUs. They used their own software tools, along with the Meta-backed open-source software PyTorch and AMD’s proprietary software, for testing.
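The article does not publish MosaicML’s benchmark code, but a throughput comparison of this kind typically boils down to timing the same PyTorch workload on each card. The sketch below is a generic, simplified microbenchmark (repeated half-precision matrix multiplies, the dominant operation in transformer training), not MosaicML’s actual harness; on ROCm builds of PyTorch, AMD GPUs are exposed through the same torch.cuda interface.

```python
import time
import torch

# Generic GPU throughput microbenchmark: time repeated matrix multiplies.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
n, iters = 4096, 50

a = torch.randn(n, n, dtype=dtype, device=device)
b = torch.randn(n, n, dtype=dtype, device=device)

# Warm up so one-time kernel setup does not skew the timing.
for _ in range(5):
    a @ b
if device == "cuda":
    torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(iters):
    a @ b
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.perf_counter() - start

tflops = 2 * n**3 * iters / elapsed / 1e12
print(f"{device}: {elapsed:.3f} s for {iters} matmuls (~{tflops:.1f} TFLOPS)")
```

Running the same script on an MI250 and an A100 system would give a rough, apples-to-apples throughput comparison, although real training benchmarks measure full model steps rather than a single kernel.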
