How MLOps deployment can be easier with open-source versioning

Modern software development typically follows a highly iterative approach known as continuous integration/continuous delivery (CI/CD). The promise of CI/CD is better software released more quickly, and it’s a promise that ClearML now intends to bring to the world of machine learning (ML).

ClearML today announced the general availability of its enterprise MLOps platform, which extends the capabilities of the company’s open-source edition. The ClearML Enterprise platform provides organizations with security controls and additional capabilities for rapidly iterating on and deploying ML workflows.
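The open-source edition that the enterprise platform builds on is driven by a Python SDK. As a loose illustration of what that iteration loop looks like in practice (the project name, task name, hyperparameters, and stand-in training loop below are invented for the example; this is a minimal sketch, not ClearML’s recommended setup), tracking a versioned experiment looks roughly like this:

```python
# Minimal sketch of experiment tracking with the open-source ClearML
# Python SDK (pip install clearml). Names and values are illustrative.
from clearml import Task

# Register this run with the ClearML server so its code, parameters,
# and metrics are captured and versioned together.
task = Task.init(project_name="demo-project", task_name="baseline-run")

# connect() records hyperparameters and lets the web UI override them
# when a run is cloned and re-executed.
params = task.connect({"learning_rate": 0.01, "epochs": 3})

logger = task.get_logger()
for epoch in range(params["epochs"]):
    fake_loss = 1.0 / (epoch + 1)  # stand-in for a real training step
    logger.report_scalar(title="loss", series="train",
                         value=fake_loss, iteration=epoch)
```

Because each run’s code, parameters, and metrics are recorded together, a run can be cloned, tweaked, and re-executed, which is the ML analogue of the CI/CD loop described above.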

Researchers Warn of New Go-based Malware Targeting Windows and Linux Systems

A new, multi-functional Go-based malware dubbed Chaos has been rapidly growing in volume in recent months, ensnaring a wide range of Windows and Linux systems, including small office/home office (SOHO) routers and enterprise servers, into its botnet.

“Chaos functionality includes the ability to enumerate the host environment, run remote shell commands, load additional modules, automatically propagate through stealing and brute-forcing SSH private keys, as well as launch DDoS attacks,” researchers from Lumen’s Black Lotus Labs said in a write-up shared with The Hacker News.

A majority of the bots are located in Europe, specifically Italy, with other infections reported in China and the U.S., collectively representing “hundreds of unique IP addresses” over a one-month period from mid-June through mid-July 2022.

Meta’s new Make-A-Video AI can generate quick movie clips from text prompts

Meta unveiled its Make-A-Scene text-to-image generation AI in July, which, like DALL-E and Midjourney, utilizes machine learning algorithms (and massive databases of scraped online artwork) to create fantastical depictions of written prompts. On Thursday, Meta CEO Mark Zuckerberg revealed Make-A-Scene’s more animated contemporary, Make-A-Video.

As its name implies, Make-A-Video is “a new AI system that lets people turn text prompts into brief, high-quality video clips,” Zuckerberg wrote in a Meta blog Thursday. Functionally, Make-A-Video works the same way Make-A-Scene does, relying on a mix of natural language processing and generative neural networks to convert non-visual prompts into images; it simply outputs content in a different format.

“Our intuition is simple: learn what the world looks like and how it is described from paired text-image data, and learn how the world moves from unsupervised video footage,” a team of Meta researchers wrote in a research paper published Thursday morning. Doing so enabled the team to reduce the amount of time needed to train the Make-A-Video model and eliminate the need for paired text-video data, while preserving “the vastness (diversity in aesthetic, fantastical depictions, etc.) of today’s image generation models.”
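The paper describes expanding a text-to-image backbone with spatiotemporal layers so that only the new temporal components need to learn from unlabeled video. As a loose sketch of that general idea (a toy example, not Meta’s code; the class, shapes, and initialization scheme are illustrative assumptions), a factorized “pseudo-3D” block applies a pretrained spatial convolution to each frame and a separate temporal convolution across frames:

```python
# Toy sketch of a factorized spatial + temporal ("pseudo-3D")
# convolution, illustrating how an image model can be extended to
# video. Not Meta's code; shapes and init choices are illustrative.
import torch
import torch.nn as nn

class Pseudo3DConv(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Spatial conv: in principle, initialized from a pretrained
        # text-to-image model so image knowledge carries over.
        self.spatial = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        # Temporal conv: initialized to the identity so the untrained
        # video pathway does not disturb the image pathway at first.
        self.temporal = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        nn.init.dirac_(self.temporal.weight)
        nn.init.zeros_(self.temporal.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time, height, width).
        b, c, t, h, w = x.shape
        # Apply the spatial conv to every frame independently.
        x = x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        x = self.spatial(x)
        x = x.reshape(b, t, c, h, w)
        # Apply the temporal conv at every pixel location independently.
        x = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        x = self.temporal(x)
        x = x.reshape(b, h, w, c, t).permute(0, 3, 4, 1, 2)
        return x

frames = torch.randn(1, 64, 8, 32, 32)  # 8 frames of 64-channel features
print(Pseudo3DConv(64)(frames).shape)   # torch.Size([1, 64, 8, 32, 32])
```

Starting the temporal convolution as an identity means the network initially behaves exactly like the image model, and video footage only has to teach the new pathway how scenes move.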

Forget Silicon. This Computer Is Made of Fabric

The existing jacket can perform one logical operation per second, compared to the more than a billion operations per second typical of a home computer, says Preston. In practice, this means the jacket can only execute short command sequences. Due to the speed of the logic, along with some other engineering challenges, Zhang says he thinks it’ll take five to 10 years for these textile-based robots to reach commercial maturity.

In the future, Preston’s team plans to do away with the carbon dioxide canister, which is impractical. (You have to refill it like you would a SodaStream.) Instead, his team wants to just use ambient air to pump up the jacket. As a separate project, the team has already developed a foam insole for a shoe that pumps the surrounding air into a bladder worn around the waist when the wearer takes a step. They plan to integrate a similar design into the jacket.

Preston also envisions clothing that senses and responds to the wearer’s needs. For example, a sensor on a future garment could detect when the wearer is beginning to lift their arm and inflate without any button-pressing. “Based on some stimulus from the environment and the current state, the logic system can allow the wearable robot to choose what to do,” he says. We’ll be waiting for this fashion trend to blow up.

Breakthrough Prize for the Physics of Quantum Information…and of Cells

This year’s Breakthrough Prize in Life Sciences has a strong physical-sciences element. The prize was divided among six individuals. Demis Hassabis and John Jumper of the London-based AI company DeepMind were awarded a third of the prize for developing AlphaFold, a machine-learning algorithm that can accurately predict the 3D structure of proteins from just the amino-acid sequence of their polypeptide chain. Emmanuel Mignot of Stanford University School of Medicine and Masashi Yanagisawa of the University of Tsukuba, Japan, were awarded another third for their work on the sleep disorder narcolepsy.

The remainder of the prize went to Clifford Brangwynne of Princeton University and Anthony Hyman of the Max Planck Institute of Molecular Cell Biology and Genetics in Germany for discovering that the molecular machinery within a cell—proteins and RNA—organizes by phase separating into liquid droplets. This phase separation process has since been shown to be involved in several basic cellular functions, including gene expression, protein synthesis and storage, and stress responses.

The award for Brangwynne and Hyman shows “the transformative role that the physics of soft matter and the physics of polymers can play in cell biology,” says Rohit Pappu, a biophysicist and bioengineer at Washington University in St. Louis. “[The discovery] could only have happened the way it did: a creative young physicist working with an imaginative cell biologist in an ecosystem where boundaries were always being pushed at the intersection of multiple disciplines.”

Meta announces Make-A-Video, which generates video from text

Today, Meta announced Make-A-Video, an AI-powered video generator that can create novel video content from text or image prompts, similar to existing image synthesis tools like DALL-E and Stable Diffusion. It can also make variations of existing videos, though it’s not yet available for public use.

The key technology behind Make-A-Video, and the reason it has arrived sooner than some experts anticipated, is that it builds on existing work in text-to-image synthesis used by image generators like OpenAI’s DALL-E. In July, Meta announced its own text-to-image AI model called Make-A-Scene.

AI Day 2022: Tesla may unveil a major milestone for its Optimus robot

We wonder what Tesla is going to reveal.

Once again, it is that time of the year. The annual Tesla AI Day, a demonstration of the most cutting-edge technologies from across the company’s operating divisions, is tomorrow. While Tesla’s vehicles receive the majority of press attention, the company has a wide range of other applications and products that it is constantly developing and improving.

Interest in Tesla’s AI work has grown over time, and the event’s livestream draws millions of viewers.

Some people watch because they know Elon Musk’s penchant for spectacle.

Whatever the motivation, on September 30, 2022, we can anticipate some surprises and the introduction of fresh concepts.

We expect four announcements about what Tesla is working on; below are our expectations for Tesla AI Day 2022.
