
GPT-4 Creator Ilya Sutskever on AI Hallucinations and AI Democracy

As we hurtle towards a future filled with artificial intelligence, many commentators are wondering aloud whether we’re moving too fast. The tech giants, the researchers, and the investors all seem to be in a mad dash to develop the most advanced AI. But, worriers ask, are they considering the risks?

The question is not an idle one, and rest assured that there are hundreds of incisive minds considering the dystopian possibilities — and ways to avoid them. But the fact is that the future is unknown; the implications of this powerful new technology are as hard to imagine today as social media was at the advent of the Internet. There will be good and there will be bad, but there will be powerful artificial intelligence systems in our future and even more powerful AIs in the futures of our grandchildren. It can’t be stopped, but it can be understood.

I spoke about this new technology with Ilya Sutskever, a co-founder of OpenAI, the not-for-profit AI research institute whose spinoffs are likely to be among the most profitable entities on earth. My conversation with Ilya took place shortly before the release of GPT-4, the latest iteration of OpenAI’s giant AI system, which has consumed billions of words of text — more than any one human could possibly read in a lifetime.

Researchers From Stanford And DeepMind Come Up With The Idea of Using Large Language Models (LLMs) as a Proxy Reward Function

As computing power and data have grown, autonomous agents have become more capable. This makes it all the more important for humans to have some say over the policies agents learn, and to check that those policies align with human goals.

Currently, users either 1) create reward functions for desired behaviors or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to be practical at scale. Designing reward functions that strike a balance between competing goals is hard, and agents are vulnerable to reward hacking. Alternatively, a reward function can be learned from annotated examples, but enormous amounts of labeled data are needed to capture the subtleties of individual users’ tastes and objectives, which is expensive. Furthermore, the reward function must be redesigned, or the dataset re-collected, for a new user population with different goals.

New research by Stanford University and DeepMind aims to design a system that makes it simpler for users to share their preferences, with an interface that is more natural than writing a reward function and a cost-effective approach to define those preferences using only a few instances. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at learning in context with no or very few training examples. According to the researchers, LLMs are excellent contextual learners because they have been trained on a large enough dataset to incorporate important commonsense priors about human behavior.
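The idea lends itself to a compact sketch. Below is a minimal, hedged Python illustration of an LLM acting as a proxy reward function: a few user-written examples go into the prompt, a candidate outcome is appended, and the model’s one-word verdict is read off as a binary reward. The model name, prompt format, and example task are my own illustrative assumptions, not the authors’ exact setup.

```python
# Illustrative only: model name, prompt format, and task are assumptions,
# not the paper's exact setup. Requires the openai package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = (
    "The user wants the dinner table set symmetrically.\n"
    "Outcome: fork on the left, knife on the right -> Good\n"
    "Outcome: both utensils piled on the left -> Bad\n"
)

def llm_proxy_reward(outcome: str) -> float:
    """Return 1.0 if the LLM judges the outcome consistent with the examples."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any instruction-following LLM would do here
        messages=[{"role": "user", "content": FEW_SHOT + f"Outcome: {outcome} ->"}],
        max_tokens=2,
    )
    verdict = resp.choices[0].message.content.strip().lower()
    return 1.0 if verdict.startswith("good") else 0.0
```

An RL agent would then call something like this in place of a hand-written reward function, scoring each episode’s outcome against the handful of examples the user supplied.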

AI Might Be Seemingly Everywhere, but There Are Still Plenty of Things It Can’t Do—For Now

These days, we don’t have to wait long until the next breakthrough in artificial intelligence impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.

Unlike previous developments, these text-to-image tools quickly found their way from research labs to mainstream culture, leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylized images of its users.

Meta commerce lead announces end of NFTs on Instagram and Facebook

It looks like Mark Zuckerberg’s company is winding down its metaverse dreams.

Amid the crypto slump, Meta has announced it will be parting ways with non-fungible tokens (NFTs) on its platforms less than a year after launch.

Stephane Kasriel, the Commerce and FinTech lead at Meta, said in a Twitter thread that the company will be “winding down” digital collectibles, specifically NFTs, for now, and will focus on other ways to support creators. Digital collectibles like NFTs were one of the pillars of the company’s pitch for a ‘metaverse’-based future of the internet.

The Future of VPNs


According to a report by Surfshark VPN, over 1.6 billion of the world’s approximately 5 billion internet users (31%) use a VPN. That’s close to a fifth of the world’s population.

A VPN, or a Virtual Private Network, is a mechanism for creating a secure connection between a computing device and a computer network, or between two networks, using an insecure communication medium such as the public Internet. A VPN can extend a private network (one that disallows or restricts public access), enabling users to send and receive data across public networks as if their devices were directly connected to the private network.
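To make the encapsulation idea concrete, here is a deliberately simplified Python sketch, not a real VPN protocol: a packet destined for a host inside the private network is encrypted and wrapped in an ordinary UDP datagram addressed to the VPN gateway, so the public network only ever sees ciphertext. The gateway address is a placeholder, and real VPNs derive keys via a handshake rather than generating them locally.

```python
# Simplified VPN-style tunneling demo (not a real VPN protocol): encrypt an
# inner packet meant for the private network, then send the ciphertext across
# the public internet inside a plain UDP datagram.
import socket
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # real VPNs negotiate keys in a handshake
tunnel = Fernet(key)

# A request for a machine that is only reachable inside the private network.
inner_packet = b"GET /report HTTP/1.1\r\nHost: 10.0.0.5\r\n\r\n"
ciphertext = tunnel.encrypt(inner_packet)  # eavesdroppers see only this

# Forward the wrapped packet to the VPN gateway (placeholder address).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(ciphertext, ("203.0.113.7", 51820))
```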

Starlink faces competition, OneWeb one launch away from global internet

The firm faced financial collapse during the pandemic but is now serving customers in 15 countries.

U.K.-based OneWeb is one launch away from having enough satellites in orbit to cover the entire expanse of the Earth. Once it does, Elon Musk’s Starlink will no longer be the only service of its kind, the BBC reported.

Both OneWeb and Starlink use constellations of satellites in low Earth orbit (LEO) instead of the conventional geostationary orbit (GEO). The lower altitude of LEO satellites helps reduce latency, the delay data takes to make a round trip over a network.
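The physics behind that claim is easy to check. Here is a back-of-the-envelope Python calculation counting only light-travel time straight up and back, assuming typical altitudes of roughly 550 km for Starlink, 1,200 km for OneWeb, and 35,786 km for GEO:

```python
# Light-travel time only: user -> satellite -> user, straight up and back.
C_KM_PER_S = 299_792  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float) -> float:
    return 2 * altitude_km / C_KM_PER_S * 1000

for name, alt in [("Starlink LEO", 550), ("OneWeb LEO", 1_200), ("GEO", 35_786)]:
    print(f"{name:>12} (~{alt:,} km): {min_round_trip_ms(alt):6.1f} ms")
# Starlink LEO ~3.7 ms, OneWeb LEO ~8.0 ms, GEO ~238.7 ms
```

Real latencies are higher once routing and processing are added, but the order-of-magnitude gap between LEO and GEO survives.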



In defense of space colonies and mining the high frontier

Exploiting the natural and energy resources of the moon and asteroids can spark a space-based industrial revolution that could be a boon to all humankind. Pure science alone will never be enough reason for the people who pay the bills to finance space exploration. Accessing the wealth that exists beyond the Earth is more than enough incentive for both public and private investment. Science will benefit: someone will have to prospect for natural and energy resources in space and develop safe and sustainable ways to exploit them.

Conflict between scientists and commercial space is already happening. Astronomers complain that SpaceX’s Starlink satellite internet constellation is ruining ground-based observation. Some critics fear that commercial exploitation of the moon’s resources will impede the operation of telescopes on the far side of the moon.

Bank of America Obsessed With AI, Says It’s the “New Electricity”

The financial industry’s response to artificial intelligence has been all over the place. Now, Bank of America is weighing in very much on the side of the bots.

In a note to clients viewed by CNBC and other outlets, BofA equity strategist Haim Israel boasted that AI was one of its top trends to watch — and invest in — for the year, and used all kinds of hypey language to convince its clients.

“We are at a defining moment — like the internet in the ’90s — where Artificial Intelligence (AI) is moving towards mass adoption,” the client note reads, “with large language models like ChatGPT finally enabling us to fully capitalize on the data revolution.”

Bioinspired Neural Network Model Can Store Significantly More Memories

By modifying a classical neural network in light of recent biological discoveries, researchers have developed a new model that shows enhanced memory performance.

Computer models play a crucial role in investigating how the brain makes and retains memories and other intricate information. But constructing such models is a delicate task. The intricate interplay of electrical and biochemical signals, and the web of connections between neurons and other cell types, creates the infrastructure for memories to form. Even so, encoding this biology into a computer model has proven difficult, because our understanding of it remains limited.

Researchers at the Okinawa Institute of Science and Technology (OIST) have made improvements to a widely utilized computer model of memory, known as a Hopfield network, by incorporating insights from biology. The alteration has resulted in a network that not only better mirrors the way neurons and other cells are connected in the brain, but also has the capacity to store significantly more memories.
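For readers unfamiliar with the baseline, here is a minimal classical Hopfield network in Python. This is the standard model the OIST team modified, not their biologically enhanced version, whose specific changes the article does not detail: patterns are stored via Hebbian learning, and a corrupted pattern is recovered by repeated updates.

```python
# Minimal classical Hopfield network: Hebbian storage plus sign-based recall.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                             # neurons, stored patterns (well under capacity)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: the weight matrix accumulates outer products of the patterns.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)                    # no self-connections

def recall(state, steps=20):
    """Iteratively update every neuron to the sign of its summed input."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1             # break ties consistently
    return state

# Corrupt 10% of one stored pattern, then let the network clean it up.
noisy = patterns[0].copy()
noisy[rng.choice(N, size=10, replace=False)] *= -1
print(np.array_equal(recall(noisy), patterns[0]))  # usually True at this load
```

A classical network of this kind stores only about 0.14 patterns per neuron before recall degrades; raising that capacity is precisely what the OIST modification targets.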

Microsoft makes it easier to integrate quantum and classical computing

By default, every quantum computer is going to be a hybrid that combines quantum and classical compute. Microsoft estimates that a quantum computer that will be able to help solve some of the world’s most pressing questions will require at least a million stable qubits. It’ll take massive classical compute power — which is really only available in the cloud — to control a machine like this and handle the error correction algorithms needed to keep it stable. Indeed, Microsoft estimates that to achieve the necessary fault tolerance, a quantum computer will need to be integrated with a peta-scale compute platform that can manage between 10 and 100 terabits per second of data moving between the quantum and classical machines. At the American Physical Society March Meeting in Las Vegas, Microsoft today is showing off some of the work it has been doing on enabling this and launching what it calls the “Integrated Hybrid” feature in Azure Quantum.

“With this Integrated Hybrid feature, you can start to use — within your quantum applications — classical code right alongside quantum code,” Krysta Svore, Microsoft’s VP of Advanced Quantum Development, told me. “It’s mixing that classical and quantum code together that unlocks new types, new styles of quantum algorithms, prototypes, subroutines, if you will, where you can control what you do to qubits based on classical information. This is a first in the industry.”
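Microsoft’s own surface for this is Q# on Azure Quantum, and the article includes no code. As a language-neutral sketch of the pattern Svore describes, a gate conditioned on a mid-circuit measurement result, here is the same idea written in Qiskit (my choice of framework, not Microsoft’s API):

```python
# Hybrid quantum-classical control flow: a mid-circuit measurement feeds a
# classical branch that decides whether a later gate is applied.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                                   # put qubit 0 in superposition
qc.measure(0, 0)                          # mid-circuit measurement -> classical bit 0
with qc.if_test((qc.clbits[0], 1)):       # classical branch inside the program
    qc.x(1)                               # flip qubit 1 only if the measurement read 1
qc.measure(1, 1)
```

The point is the control structure, not the particular gates: classical logic runs between quantum operations, rather than only before and after the whole circuit.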
