AI company Runway enters the game of text-to-video generation

Advances in artificial intelligence have taken the world by storm, remarkably transforming the way we use the internet.

Generative AI has already proven its worth in text-to-image generation, with services such as DALL-E and Stable Diffusion producing AI-powered images. Text-to-video generation is now poised to be the next big craze.

How AI could upend the world even more than electricity or the internet

The rise of artificial general intelligence — now seen as inevitable in Silicon Valley — will bring change that is “orders of magnitude” greater than anything the world has yet seen, observers say. But are we ready?

AGI — defined as artificial intelligence with human cognitive abilities, as opposed to more narrow artificial intelligence, such as the headline-grabbing ChatGPT — could free people from menial tasks and usher in a new era of creativity.

But such a historic paradigm shift could also threaten jobs and raise insurmountable social issues, experts warn.

5 jaw-dropping things GPT-4 can do that ChatGPT couldn’t

In the first day after it was unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

On Tuesday, OpenAI announced the next-generation version of the artificial intelligence technology that underpins its viral chatbot tool, ChatGPT. The more powerful GPT-4 promises to blow previous iterations out of the water, potentially changing the way we use the internet to work, play and create. But it could also add to challenging questions around how AI tools can upend professions, enable students to cheat, and shift our relationship with technology.

GPT-4 is an updated version of the company’s large language model, which is trained on vast amounts of online data to generate complex responses to user prompts. It is now available via a waitlist and has already made its way into some third-party products, including Microsoft’s new AI-powered Bing search engine. Some users with early access to the tool are sharing their experiences and highlighting some of its most compelling use cases.

New research suggests AI image generation using DALL-E 2 has promising future in radiology

A new paper published in the Journal of Medical Internet Research describes how generative models such as DALL-E 2, a novel deep learning model for text-to-image generation, could represent a promising future tool for image generation, augmentation, and manipulation in health care. Do generative models have sufficient medical domain knowledge to provide accurate and useful results? Dr. Lisa C Adams and colleagues explore this topic in their latest viewpoint titled “What Does DALL-E 2 Know About Radiology?”

First introduced by OpenAI in April 2022, DALL-E 2 is an artificial intelligence (AI) tool that has gained popularity for generating novel photorealistic images or artwork based on textual input. DALL-E 2’s generative capabilities are powerful, as it has been trained on billions of existing text-image pairs from the internet.

To understand whether these capabilities can be transferred to the medical domain to create or augment data, researchers from Germany and the United States examined DALL-E 2’s radiological knowledge in creating and manipulating X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images.

AI Image Generation Using DALL-E 2 Has Promising Future in Radiology

Summary: Text-to-image generation deep learning models like OpenAI’s DALL-E 2 can be a promising new tool for image augmentation, generation, and manipulation in a healthcare setting.

Source: JMIR Publications

A new paper published in the Journal of Medical Internet Research describes how generative models such as DALL-E 2, a novel deep learning model for text-to-image generation, could represent a promising future tool for image generation, augmentation, and manipulation in health care.

GPT-4 Creator Ilya Sutskever on AI Hallucinations and AI Democracy

As we hurtle towards a future filled with artificial intelligence, many commentators are wondering aloud whether we’re moving too fast. The tech giants, the researchers, and the investors all seem to be in a mad dash to develop the most advanced AI. But, the worriers ask, are they considering the risks?

The question is a fair one, and rest assured that there are hundreds of incisive minds considering the dystopian possibilities, and ways to avoid them. But the fact is that the future is unknown; the implications of this powerful new technology are as hard to imagine as social media was at the advent of the internet. There will be good and there will be bad, but there will be powerful artificial intelligence systems in our future and even more powerful AIs in the futures of our grandchildren. It can’t be stopped, but it can be understood.

I spoke about this new technology with Ilya Sutskever, a co-founder of OpenAI, the not-for-profit AI research institute whose spinoffs are likely to be among the most profitable entities on earth. My conversation with Ilya was shortly before the release of GPT-4, the latest iteration of OpenAI’s giant AI system, which has consumed billions of words of text — more than any one human could possibly read in a lifetime.

Researchers From Stanford And DeepMind Come Up With The Idea of Using Large Language Models (LLMs) as a Proxy Reward Function

As computing power and data grow, autonomous agents are becoming more capable. This makes it all the more important for humans to have some say over the policies those agents learn, and to verify that the learned policies align with their goals.

Currently, users either 1) hand-design reward functions for desired behaviors or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to work well in practice. Agents are vulnerable to reward hacking, which makes it challenging to design reward functions that strike a balance between competing goals. Alternatively, a reward function can be learned from annotated examples, but enormous amounts of labeled data are needed to capture the subtleties of individual users’ tastes and objectives, which has proven expensive. Furthermore, the reward function must be redesigned, or the dataset re-collected, for each new user population with different goals.
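To see why hand-designed rewards are fragile, consider a hypothetical toy example (not from the paper): a cleaning agent is meant to be rewarded for a clean room, but the designer writes the reward as “+1 per collect action” — a proxy an agent can exploit by dropping and re-collecting the same item.

```python
# Hypothetical toy illustration of reward hacking (not from the paper).
# The designer wants a clean room, but writes the reward as "+1 per
# collect action", which an agent can game.

def intended_reward(items_remaining: int) -> int:
    """What the designer actually wants: a clean room."""
    return 1 if items_remaining == 0 else 0

def proxy_reward(action: str) -> int:
    """What the designer wrote: +1 every time the agent collects."""
    return 1 if action == "collect" else 0

def run_episode(actions, total_items=3):
    """Simulate an episode; return accumulated proxy reward and
    how many items are left on the floor at the end."""
    items, reward = total_items, 0
    for a in actions:
        if a == "collect" and items > 0:
            items -= 1
        elif a == "drop":
            items += 1
        reward += proxy_reward(a)
    return reward, items

# Honest policy: collect each item once.
honest_reward, honest_left = run_episode(["collect"] * 3)

# Hacking policy: drop and re-collect the same item over and over.
hacking_reward, hacking_left = run_episode(["collect", "drop"] * 10)

print(honest_reward, honest_left)    # 3 proxy reward, room clean
print(hacking_reward, hacking_left)  # 10 proxy reward, room still dirty
```

The hacking policy earns more proxy reward than the honest one while leaving the room dirty, which is exactly the gap between the written reward and the designer’s intent.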

New research by Stanford University and DeepMind aims to design a system that makes it simpler for users to share their preferences, with an interface that is more natural than writing a reward function and a cost-effective approach to define those preferences using only a few instances. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at learning in context with no or very few training examples. According to the researchers, LLMs are excellent contextual learners because they have been trained on a large enough dataset to incorporate important commonsense priors about human behavior.
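The general idea can be sketched as follows. Note that the prompt format and the `llm` stub below are illustrative assumptions, not the paper’s actual implementation; in practice the stub would be replaced by a call to a real large language model.

```python
# Illustrative sketch: using an LLM as a proxy reward function.
# The prompt format and the llm() stub are assumptions for
# illustration; a real system would query an actual language model.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM: a trivial keyword heuristic applied
    to the trajectory line, so the sketch runs without model access."""
    trajectory = prompt.split("Trajectory:")[1].splitlines()[0]
    return "Yes" if "thank" in trajectory.lower() else "No"

def proxy_reward(user_objective, examples, trajectory):
    """Score a trajectory by asking the LLM, in context, whether it
    satisfies the user's objective. A few user-provided examples
    convey the user's preferences without a hand-coded reward."""
    prompt = (
        f"Objective: {user_objective}\n"
        "Examples of desirable behavior:\n"
        + "\n".join(f"- {e}" for e in examples)
        + f"\nTrajectory: {trajectory}\n"
        "Does this trajectory satisfy the objective? Answer Yes or No."
    )
    return 1.0 if llm(prompt).strip().startswith("Yes") else 0.0

r_good = proxy_reward(
    "The assistant should respond politely.",
    ["Says 'thank you' when the user helps."],
    "Agent: thank you for the information!",
)
r_bad = proxy_reward(
    "The assistant should respond politely.",
    ["Says 'thank you' when the user helps."],
    "Agent: whatever, bye.",
)
print(r_good, r_bad)  # 1.0 0.0
```

The appeal is that the user specifies the objective in natural language with a handful of examples, and the LLM’s commonsense priors do the rest, replacing both hand-written reward code and large labeled datasets.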

AI Might Be Seemingly Everywhere, but There Are Still Plenty of Things It Can’t Do—For Now

These days, we don’t have to wait long until the next breakthrough in artificial intelligence impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.

Unlike previous developments, these text-to-image tools quickly found their way from research labs to mainstream culture, leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylized images of its users.

Meta lead engineer announces end of NFTs on Instagram and Facebook

It looks like Mark Zuckerberg’s company is winding down its metaverse dreams.

Amid the crypto slump, Meta has announced it would be parting with non-fungible tokens (NFTs) on its platforms less than a year after launch.

Stephane Kasriel, the Commerce and FinTech lead at Meta, said in a Twitter thread that the company will be “winding down” digital collectibles, specifically NFTs, for now, and focus on other ways to support creators. Digital collectibles like NFTs were one of the pillars of the company’s pitch for a ‘metaverse’-based future of the internet.

The Future of VPNs

According to a report by Surfshark VPN, of the approximately 5 billion internet users worldwide, over 1.6 billion (31% of users) use a VPN. That’s close to a fifth of the world’s population.

A VPN, or a Virtual Private Network, is a mechanism for creating a secure connection between a computing device and a computer network, or between two networks, using an insecure communication medium such as the public Internet. A VPN can extend a private network (one that disallows or restricts public access), enabling users to send and receive data across public networks as if their devices were directly connected to the private network.
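As a concrete illustration of that definition, here is a minimal client configuration in the style of WireGuard, one common VPN protocol. The keys, addresses, and hostname are placeholders, not a working setup:

```ini
# Hypothetical WireGuard client config (placeholder keys/addresses).
# Traffic destined for the private 10.0.0.0/24 network is encrypted
# and tunneled over the public internet to the VPN endpoint.

[Interface]
# This device's address inside the private network
Address = 10.0.0.2/24
PrivateKey = <client-private-key>

[Peer]
# The VPN server, reachable over the public internet
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route only private-network traffic through the tunnel
AllowedIPs = 10.0.0.0/24
```

With such a configuration, the device behaves as if it were plugged directly into the private 10.0.0.0/24 network, even though its packets actually cross the insecure public internet in encrypted form.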