This Site Uses Deep Learning to Generate Fake Airbnb Listings

There’s a four-bedroom Edinburgh unit with “original wood floors,” listed by Christine. And there’s a two-bathroom apartment in Gainesville with a double sofa bed and open kitchen plan, listed by Michel. A “beautiful apartment” in Berlin has a “floral feeling.” A three-bedroom in Rome includes “utilities and toiletries.”

There’s just one problem with these Airbnb listings: they don’t exist.

Scientists Developed an AI So Advanced They Say It’s Too Dangerous to Release

A group of computer scientists once backed by Elon Musk has caused some alarm by developing an advanced artificial intelligence (AI) they say is too dangerous to release to the public.

OpenAI, a research non-profit based in San Francisco, says its “chameleon-like” language prediction system, called GPT-2, will only ever see a limited release in a scaled-down version, due to “concerns about malicious applications of the technology”.

That’s because the computer model, which generates original paragraphs of text based on what it is given to ‘read’, is a little too good at its job.

Artificial intelligence alone won’t solve the complexity of Earth sciences

One way to crack this problem, according to the authors of a Perspective in this issue, is through a hybrid approach. The latest techniques in deep learning should be accompanied by a hand-in-glove pursuit of conventional physical modelling to help to overcome otherwise intractable problems such as simulating the particle-formation processes that govern cloud convection. The hybrid approach makes the most of well-understood physical principles such as fluid dynamics, incorporating deep learning where physical processes cannot yet be adequately resolved.


Studies of complex climate and ocean systems could gain from a hybrid between artificial intelligence and physical modelling.

A Tsunami of Fake News is On Its Way and Here is How

The Elon Musk-funded OpenAI non-profit has created a breakthrough system for writing high-quality text. It can write text and perform basic reading comprehension, machine translation, question answering, and summarization, all without task-specific training.

The system is able to take a few sentences of sample writing and then produce a multi-paragraph article in the style and context of the sample. This capability would let AIs impersonate the writing style of any person given previous writing samples.
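The continuation loop described above can be sketched with a deliberately tiny stand-in model. GPT-2 conditions a Transformer on the prompt; the sketch below uses simple bigram counts instead, but the generation procedure is the same shape: condition on what came before, sample a next word, append, repeat. The sample text and function names here are hypothetical, chosen only for illustration.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Hypothetical "few sentences of sample writing" standing in for a real prompt.
sample = (
    "fake listings look real . fake reviews look real . "
    "real hosts write real listings ."
).split()

# Count bigram transitions: word -> Counter of the words that followed it.
transitions = defaultdict(Counter)
for cur, nxt in zip(sample, sample[1:]):
    transitions[cur][nxt] += 1

def continue_text(seed, length=8):
    """Extend `seed` word by word, sampling each next word in proportion
    to how often it followed the current word in the sample text."""
    words = [seed]
    for _ in range(length):
        followers = transitions[words[-1]]
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("fake"))
```

Because the continuation is sampled from the statistics of the sample text, the output inevitably echoes its style and vocabulary, which is the toy version of the impersonation concern raised above.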

GPT-2 is a 1.5-billion-parameter Transformer that achieves state-of-the-art results on 7 out of 8 tested language modeling datasets in a zero-shot setting, yet still underfits its training corpus, called WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems that learn to perform tasks from naturally occurring demonstrations.
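The zero-shot language modeling results mentioned above are typically reported as perplexity: how surprised the model is, on average, by each word of a held-out dataset it never trained on (lower is better). A minimal sketch of that evaluation, using made-up toy data and a unigram model as a stand-in for a real language model:

```python
import math
from collections import Counter

# Hypothetical training and held-out token sequences, for illustration only.
train = "a b a b a c".split()
held_out = "a b a b".split()

counts = Counter(train)
total = sum(counts.values())

def prob(token):
    """Unigram probability estimated from training counts."""
    return counts[token] / total

# Perplexity = 2 ** (average negative log2-probability per held-out token).
log_prob = sum(math.log2(prob(t)) for t in held_out)
perplexity = 2 ** (-log_prob / len(held_out))
print(round(perplexity, 3))  # 2.449, i.e. sqrt(6) for this toy data
```

"Zero-shot" here means the held-out benchmark never influences training; "underfitting" means the model's perplexity on its own training data is still noticeably above the floor, so a bigger model would keep improving.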

OpenAI’s GPT-2 algorithm is good in knitting fake news

Fake. Dangerous. Scary. Too good. When headlines swim with verdicts like those, you suspect, correctly, that you’re in the land of artificial intelligence, where someone has come up with yet another AI model.

So this is GPT-2, an algorithm that, whether it makes one worry or marvel, “excels at a task known as language modeling,” said The Verge, “which tests a program’s ability to predict the next word in a given sentence.”
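The task The Verge describes can be shown in miniature. GPT-2 learns next-word prediction with a large Transformer; the sketch below does it with bigram counts over a hypothetical toy corpus, which is enough to make the objective concrete (the corpus and function name are assumptions, not anything from OpenAI's code):

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; the task is the same as GPT-2's:
# given the words so far, predict the next one.
corpus = (
    "the model predicts the next word . "
    "the model reads the text . "
    "the text informs the model ."
).split()

# Count bigram transitions: word -> Counter of following words.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word under the bigram counts."""
    followers = transitions[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # -> "model", its most frequent follower here
```

A model is scored on how much probability it assigns to the word that actually comes next; GPT-2's advance is doing this well across many domains with one model.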

Depending on how you look at it, you can blame, or congratulate, the team at California-based OpenAI that created GPT-2. Their language modeling program has written a convincing essay on a topic its creators disagreed with.