
Researchers at Stanford developed an artificial intelligence (AI) model called ‘RoentGen,’ based on Stable Diffusion and fine-tuned on a large chest X-ray and radiology dataset

Latent diffusion models (LDMs), a subclass of denoising diffusion models, have recently gained prominence because they make it possible to generate images with high fidelity, diversity, and resolution. When combined with a conditioning mechanism, these models enable fine-grained control of the image-generation process at inference time (e.g., via text prompts). Such models are frequently trained on large multi-modal datasets like LAION-5B, which contain billions of real image-text pairs. Given the proper pre-training, LDMs can be used for many downstream tasks and are sometimes referred to as foundation models (FMs).

LDMs can be deployed to end users relatively easily because their denoising process operates in a comparatively low-dimensional latent space and requires only modest hardware resources. Thanks to these models’ exceptional generative capabilities, high-fidelity synthetic datasets can be produced and added to conventional supervised machine learning pipelines in situations where training data is scarce. This offers a potential solution to the shortage of carefully curated, highly annotated medical imaging datasets. Such datasets require disciplined preparation and considerable work from skilled medical professionals who can decipher minor but semantically significant visual elements.
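Concretely, the augmentation step can be as simple as mixing a folder of model-generated images with the real ones in a single training set. A minimal PyTorch sketch, assuming a hypothetical folder layout (the paths are illustrative, not from the paper):

```python
# Minimal sketch: combine scarce real scans with LDM-generated ones in
# one supervised dataset. "data/real_cxr" and "data/synthetic_cxr" are
# hypothetical ImageFolder-style directories, one subfolder per label.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

real = datasets.ImageFolder("data/real_cxr", transform=tfm)            # curated scans
synthetic = datasets.ImageFolder("data/synthetic_cxr", transform=tfm)  # LDM outputs
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)
```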

Despite the shortage of sizable, carefully maintained, publicly accessible medical imaging datasets, a text-based radiology report often thoroughly describes the pertinent medical findings in the imaging study. Labels for downstream tasks can be extracted automatically from this “byproduct” of medical decision-making, although doing so still forces a more limited problem formulation than natural language would otherwise allow. Pre-trained text-conditional LDMs could instead be prompted with pertinent medical terms or concepts of interest to synthesize medical imaging data intuitively.
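In practice, prompting such a model might look like the following Hugging Face diffusers sketch. The checkpoint id is hypothetical; the actual RoentGen weights may be packaged and distributed differently:

```python
# Hedged sketch: sampling a text-conditional LDM for synthetic chest
# X-rays. "stanford/roentgen" is a hypothetical model id, used here only
# to illustrate the prompting workflow.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stanford/roentgen",              # hypothetical checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Prompt with radiology-report language rather than everyday captions
image = pipe("Chest X-ray showing a large right-sided pleural effusion").images[0]
image.save("synthetic_cxr.png")
```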

How the Brain Works: The Thousand Brains Theory of Intelligence | Numenta

Have you ever wondered what makes you intelligent? How are you able to see, hear, think, read, sing, solve problems, and perform any number of intelligent tasks?

Your brain learns a model of the world, and this model recreates the structure of everything you know. Everything you do and experience is based on this model. Intelligence is the ability to create this model of the world.

But how can a bunch of cells in your brain create a model of the world and everything in it? The Thousand Brains Theory provides an answer. Not only that, but it also provides a blueprint for how to build truly intelligent machines.

Visit https://numenta.com/ for more information.

Produced by Mind’s Eye Creative Studio: https://www.mec.co.za/
Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.

The smallest robotic arm you can imagine is controlled by artificial intelligence

Researchers used deep reinforcement learning to steer atoms into a lattice shape, with a view to building new materials or nanodevices.

In a very cold vacuum chamber, single atoms of silver form a star-like lattice. The precise formation is not accidental, and it wasn’t constructed directly by human hands either. Researchers used a kind of artificial intelligence called deep reinforcement learning to steer the atoms, each a fraction of a nanometer in size, into the lattice shape. The process is similar to moving marbles around a Chinese checkers board, but with very tiny tweezers grabbing and dragging each atom into place.

The main application for deep reinforcement learning is in robotics, says postdoctoral researcher I-Ju Chen. “We’re also building robotic arms with deep learning, but for moving atoms,” she explains. “Reinforcement learning is successful in things like playing chess or video games, but we’ve applied it to solve problems at the nanoscale.”
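The article contains no code, but the flavor of the approach can be conveyed with a toy example: tabular Q-learning that nudges a single “atom” across a small grid toward a target lattice site. The real system uses deep RL and continuous microscope controls; this is only an illustration:

```python
# Toy sketch (not the researchers' actual system): tabular Q-learning
# steering an "atom" across a 5x5 grid toward a target lattice site.
import numpy as np

GRID, TARGET = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # up, down, left, right
Q = np.zeros((GRID, GRID, len(ACTIONS)))
rng = np.random.default_rng(0)

def step(pos, a):
    r = min(max(pos[0] + ACTIONS[a][0], 0), GRID - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), GRID - 1)
    reward = 1.0 if (r, c) == TARGET else -0.01        # small cost per move
    return (r, c), reward, (r, c) == TARGET

for episode in range(500):
    pos = (0, 0)
    for t in range(200):                               # cap episode length
        # Epsilon-greedy: mostly exploit the table, sometimes explore
        a = int(rng.integers(4)) if rng.random() < 0.1 else int(np.argmax(Q[pos]))
        nxt, reward, done = step(pos, a)
        # Standard Q-learning update toward the bootstrapped target
        Q[pos][a] += 0.1 * (reward + 0.9 * np.max(Q[nxt]) - Q[pos][a])
        pos = nxt
        if done:
            break
```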

Student-made invisibility coat aims to hide wearers from AI cameras

Pedestrian-identification accuracy fell by 57% when the students tested the outfit against on-campus security cameras.

According to the South China Morning Post (SCMP), Chinese students have developed a coat that can make people invisible to security cameras. As the SCMP story goes, the coat looks like regular camouflage clothing, but it can trick digital cameras, especially ones equipped with AI.

This is achieved, it is claimed, through the coat’s patterning, which was developed using a complex algorithm. The coat also comes with built-in thermal devices that can emit varying temperatures at night.
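SCMP does not describe the students’ algorithm, but the general adversarial-patch technique it gestures at looks roughly like the sketch below: optimize a pattern by gradient ascent so that pasting it onto images raises a pretrained classifier’s loss. The model, images, and labels here are stand-ins, not the students’ setup:

```python
# Hedged sketch of a generic adversarial patch attack (illustrative only).
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(1, 3, 50, 50, requires_grad=True)   # the learnable pattern
opt = torch.optim.Adam([patch], lr=0.05)
loss_fn = torch.nn.CrossEntropyLoss()

def apply_patch(img, patch):
    out = img.clone()
    out[:, :, 100:150, 100:150] = patch                 # paste at a fixed spot
    return out

for step in range(100):
    img = torch.rand(1, 3, 224, 224)                    # stand-in for camera frames
    label = torch.tensor([0])                           # stand-in for the true class
    loss = -loss_fn(model(apply_patch(img, patch)), label)  # ascend the loss
    opt.zero_grad(); loss.backward(); opt.step()
    patch.data.clamp_(0, 1)                             # keep the patch a valid image
```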

Exploring Large Language Models with ChatGPT

1 hour “interview” with ChatGPT.


Is OpenAI’s ChatGPT capable of having a coherent conversation? We find out in this special edition of the TWIML AI Podcast!

In this episode of the podcast, we are joined by ChatGPT, the latest and coolest large language model developed by OpenAI. In our conversation with ChatGPT, we discuss the background and capabilities of large language models, the potential applications of these models, and some of the technical challenges and open questions in the field. We also explore the role of supervised learning in creating ChatGPT, and the use of PPO in training the model. Finally, we discuss the risks of misuse of large language models, and the best resources for learning more about these models and their applications. Join us for a fascinating conversation with ChatGPT, and learn more about the exciting world of large language models.
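For listeners curious about the PPO ingredient mentioned above: at its core sits the clipped surrogate objective, sketched below. This is textbook PPO, not OpenAI’s actual training code:

```python
# Generic PPO clipped surrogate loss (assumed form; not OpenAI's code).
import torch

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)        # pi_new(a|s) / pi_old(a|s)
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    # Pessimistic minimum of clipped/unclipped terms, negated for descent
    return -torch.min(ratio * advantage, clipped * advantage).mean()
```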

The complete show notes for this episode can be found at twimlai.com/go/603.


Building A Virtual Machine inside ChatGPT

Unless you have been living under a rock, you have heard of this new ChatGPT assistant made by OpenAI. You might be aware of its capabilities for solving IQ tests, tackling leetcode problems or helping people write LaTeX. It is an amazing resource for people to retrieve all kinds of information and solve tedious tasks, like copywriting!

Today, Frederic Besse told me that he managed to do something different. Did you know that you can run a whole virtual machine inside of ChatGPT?
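This excerpt doesn’t reproduce the prompt itself; a prompt along these lines (paraphrased from the widely circulated version, not necessarily the article’s exact wording) does the trick:

```
I want you to act as a Linux terminal. I will type commands and you will
reply with what the terminal should show. Only reply with the terminal
output inside one unique code block, and nothing else. Do not write
explanations. My first command is pwd.
```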

Great, so with this clever prompt, we find ourselves inside the root directory of a Linux machine. I wonder what kind of things we can find here. Let’s check the contents of our home directory.
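From there, ordinary shell commands behave roughly as you’d expect, with ChatGPT imagining plausible output. An illustrative (not verbatim) exchange:

```
$ ls ~
Desktop  Documents  Downloads  Music  Pictures  Videos
```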

Tesla AI Day 2 will feature “hardware demos” and tons of technical details: Elon Musk

Tesla CEO Elon Musk recently teased what will be happening during the company’s AI Day 2 event this Friday. Judging by Musk’s comments, AI Day 2 will be filled to the brim with exciting discussions and demos of next-generation tech.

This is not Tesla’s first AI Day. Last year, the electric vehicle maker held a similar event, outlining the company’s work in artificial intelligence. During the event, Tesla held an extensive discussion on its neural networks, Dojo supercomputer, and humanoid robot, the Tesla Bot (Optimus). Interestingly enough, mainstream coverage of the event later suggested that AI Day was underwhelming or disappointing.

The hidden danger of ChatGPT and generative AI


Since OpenAI launched its early demo of ChatGPT last Wednesday, the tool has already passed a million users, according to CEO Sam Altman, a milestone that, he points out, took GPT-3 nearly 24 months to reach and DALL-E over 2 months.

The “interactive, conversational model,” based on the company’s GPT-3.5 text-generator, certainly has the tech world in full swoon mode. Aaron Levie, CEO of Box, tweeted that “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.” Y Combinator cofounder Paul Graham tweeted that “clearly something big is happening.” Alberto Romero, author of The Algorithmic Bridge, calls it “by far, the best chatbot in the world.” And even Elon Musk weighed in, tweeting that ChatGPT is “scary good. We are not far from dangerously strong AI.”
