
Manas Bhatia has a bold vision of the future — one where residential skyscrapers covered in trees, plants and algae act as “air purification towers.” In a series of detailed images, the New Delhi-based architect and computational designer has brought the idea to life. His imagined buildings are depicted rising high above a futuristic metropolis, their curved forms inspired by shapes found in nature.

But the pictures were not entirely of his own imagination.

For his conceptual project, “AI x Future Cities,” Bhatia turned to an artificial intelligence imaging tool, Midjourney, that generates elaborate pictures based on written prompts.



With the image generator Stable Diffusion, you can conjure within seconds a portrait of Beyoncé as if painted by Vincent van Gogh, a cyberpunk cityscape in the style of the 19th-century Japanese artist Hokusai, and a complex alien world straight out of science fiction. Released to the public just two weeks ago, it has become one of several popular AI-powered text-to-image generators, including DALL-E 2, that have taken the internet by storm.
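For readers who want to try this themselves, here is a minimal sketch of text-to-image generation with a Stable Diffusion checkpoint via Hugging Face's diffusers library. The library, model ID, prompt, and GPU assumption are illustrative choices, not details from the article or Stability AI's own tooling.

```python
# Minimal text-to-image sketch using the diffusers library (assumed setup,
# not from the article): load a Stable Diffusion checkpoint and run a prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly hosted checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # assumes a CUDA-capable GPU is available

prompt = "a cyberpunk cityscape in the style of Hokusai"
image = pipe(prompt).images[0]          # the pipeline returns PIL images
image.save("cityscape.png")
```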

Now, the company behind Stable Diffusion is in discussions to raise $100 million from investors, according to three people with knowledge of the matter.


Stability AI’s open source text-to-image generator was released to the general public in late August. It has already accumulated massive community goodwill — and controversy over how it’s been used by individuals on websites like 4chan.

Training a machine-learning model to effectively perform a task, such as image classification, involves showing the model thousands, millions, or even billions of example images. Gathering such enormous datasets can be especially challenging when privacy is a concern, such as with medical images. Researchers from MIT and the MIT-born startup DynamoFL have now taken one popular solution to this problem, known as federated learning, and made it faster and more accurate.

Federated learning is a collaborative method for training a machine-learning model that keeps sensitive user data private. Hundreds or thousands of users each train their own model using their own data on their own device. Then users transfer their models to a central server, which combines them to come up with a better model that it sends back to all users.

A collection of hospitals located around the world, for example, could use this method to train a machine-learning model that identifies brain tumors in medical images, while keeping patient data secure on their local servers.
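To make the mechanism concrete, here is a minimal sketch of federated averaging on a toy linear-regression problem. The three simulated clients, the NumPy model, and the equal-weight averaging are illustrative assumptions; they are not the faster, more accurate method developed by the MIT and DynamoFL researchers.

```python
# Toy federated averaging (FedAvg) sketch: clients train locally on private
# data and share only model weights, which a server averages each round.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """Each client refines the global weights on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Simulate three clients, each holding data that never leaves its "device".
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Clients train locally and send back only their models, not their data.
    local_models = [local_train(global_w, X, y) for X, y in clients]
    # The server averages the models (weighted equally here) and redistributes.
    global_w = np.mean(local_models, axis=0)

print("learned weights:", global_w)   # should approach [2.0, -1.0]
```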

As demonstrated by breakthroughs in various fields of artificial intelligence (AI), such as image processing, smart health care, self-driving vehicles and smart cities, this is undoubtedly the golden period of deep learning. In the next decade or so, AI and computing systems will be equipped with the ability to learn and think the way humans do—to process a continuous flow of information and interact with the real world.

However, current AI models suffer a performance loss when they are trained consecutively on new information. This is because each time new information is learned, it is written over what the model has already stored, erasing previous knowledge—an effect known as “catastrophic forgetting.” The difficulty stems from the stability-plasticity trade-off: the model must update its memory to continuously adjust to new information while maintaining the stability of its existing knowledge. This problem prevents state-of-the-art AI from continually learning from real-world information.
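To see why overwriting causes forgetting, and how even a small rehearsal buffer can mitigate it, here is a toy sketch. The two synthetic tasks, the tiny linear model, and the replay scheme are invented for illustration; they are not the continual learning approach discussed in the article.

```python
# Toy illustration of catastrophic forgetting and a rehearsal-style fix,
# using a two-parameter linear model trained with plain NumPy (assumed setup).
import numpy as np

def train(w, X, y, lr=0.2, steps=2000):
    """Fit y = X @ w by full-batch gradient descent on mean squared error."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=200)

# Both tasks share weight w[0]; task B additionally activates w[1].
XA = np.stack([x, np.zeros_like(x)], axis=1)   # task A inputs
yA = 2.0 * x                                   # task A wants w[0] = 2
XB = np.stack([x, x], axis=1)                  # task B inputs
yB = -1.0 * x                                  # task B wants w[0] + w[1] = -1

# Sequential training: fitting task B overwrites what task A stored in w[0].
w = train(np.zeros(2), XA, yA)
w = train(w, XB, yB)
print("sequential, loss on task A:", round(loss(w, XA, yA), 3))  # large: forgotten

# Rehearsal: replay a small sample of task A while training on task B,
# letting the model keep w[0] = 2 and absorb task B into w[1].
Xmix = np.concatenate([XB, XA[:20]])
ymix = np.concatenate([yB, yA[:20]])
w = train(train(np.zeros(2), XA, yA), Xmix, ymix)
print("with replay, loss on task A:", round(loss(w, XA, yA), 3))  # near zero
```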

Edge computing systems allow computation to move out of cloud data centers and closer to the data source, such as devices connected to the Internet of Things (IoT). Applying continual learning efficiently on resource-limited edge computing systems remains a challenge, even though many continual learning models have been proposed to solve this problem: traditional models require high computing power and large memory capacity.

The price has increased by $3,000, but Tesla’s FSD is still a work-in-progress.

Tesla’s Full Self-Driving Beta option now costs a hefty $15,000. Tesla CEO Elon Musk announced on Twitter late last month that the company would raise the option’s price by $3,000.

As of this week, the change has been made official, meaning anyone selecting the FSD option for their Tesla will have to pay the increased price. Musk mentioned in his August tweet that the previous price would be “honored for orders made before September 5, but delivered later”.

Is Tesla’s $15,000 FSD offering worth it?

