Transformer-based deep learning models like GPT-3 have been getting much attention in the machine learning world. These models excel at understanding semantic relationships, and they have contributed to large improvements in Microsoft Bing’s search experience. However, these models can fail to capture more nuanced relationships between query and document terms beyond pure semantics.

Microsoft researchers have developed a neural network with 135 billion parameters, the largest “universal” model the company has running in production. The parameter count makes it one of the most sophisticated AI models ever detailed publicly, although OpenAI’s GPT-3 natural language processing model, at 175 billion parameters, remains the largest neural network built to date.

Microsoft researchers are calling their latest AI project MEB (Make Every Feature Binary). The 135-billion-parameter model is built to analyze the queries Bing users enter and then helps identify the most relevant pages from around the web, working alongside a set of other machine learning algorithms rather than performing the task entirely on its own.
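
To make the “every feature is binary” idea concrete, here is a minimal toy sketch in Python. It hashes each query-term/document-term pair to a single binary indicator and scores relevance with a sparse set of learned weights. The feature scheme, the logistic-loss update, and every name below are illustrative assumptions, not details of Microsoft’s production MEB system.

    # Toy sketch of a "make every feature binary" ranker (illustrative only;
    # the feature scheme and training rule are assumptions, not MEB's design).
    import hashlib
    import math

    NUM_FEATURES = 2 ** 20  # toy feature space; a production system would be far larger
    weights = {}            # sparse weights: only features seen in training get an entry

    def binary_features(query, doc):
        """Hash each (query term, document term) pair to one binary feature ID."""
        ids = set()
        for q in query.lower().split():
            for d in doc.lower().split():
                digest = hashlib.md5(f"{q}|{d}".encode()).hexdigest()
                ids.add(int(digest, 16) % NUM_FEATURES)
        return ids

    def score(query, doc):
        """Relevance score = sum of the weights of the active binary features."""
        return sum(weights.get(f, 0.0) for f in binary_features(query, doc))

    def update(query, doc, clicked, lr=0.1):
        """One logistic-regression step on a (query, document, click) example."""
        pred = 1.0 / (1.0 + math.exp(-score(query, doc)))
        grad = pred - clicked  # clicked is 1 or 0
        for f in binary_features(query, doc):
            weights[f] = weights.get(f, 0.0) - lr * grad

With billions of such indicators, even a linear model can memorize very specific query-document relationships that a purely semantic transformer might smooth over, which is roughly the intuition the MEB name gestures at.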

Stanford is looking to democratize research at the intersection of artificial intelligence and medicine by releasing the world’s largest free repository of AI-ready annotated medical imaging datasets. This will allow researchers from all over the world to access the specific data they need for their projects, which could lead to life-saving breakthroughs in these fields.

The use of artificial intelligence in medicine is becoming increasingly pervasive. From analyzing tumors to monitoring a person’s beating heart, AI looks set to play an important role in the near future.

The AI-powered devices, which can rival the accuracy of human doctors in diagnosing diseases and illnesses, have been making strides as well. These systems not only spot a likely tumor or bone fracture but also predict the course of an illness with some reliability, informing recommendations on what to do next. However, they depend on expensive datasets in which humans meticulously annotate images such as CT scans, X-rays, and MRIs before the data ever reaches a model. Whether that data is purchased from others or built painstakingly in-house, the price tag can run into the millions, depending on how advanced each system needs to be.

A new article in Science magazine gives an overview of almost three decades of research into colloidal quantum dots, assesses the technological progress for these nanometer-sized specks of semiconductor matter, and weighs the remaining challenges on the path to widespread commercialization of this promising technology, with applications in everything from TVs to highly efficient sunlight collectors.

“Thirty years ago, these structures were just a subject of scientific curiosity studied by a small group of enthusiasts. Over the years, they have become industrial-grade materials exploited in a range of traditional and emerging technologies, some of which have already found their way into commercial markets,” said Victor I. Klimov, a coauthor of the paper and leader of the team conducting quantum dot research at Los Alamos National Laboratory.

Many advances described in the Science article originated at Los Alamos, including the first demonstration of colloidal quantum dot lasing, the discovery of carrier multiplication, pioneering research into quantum dot light emitting diodes (LEDs) and luminescent solar concentrators, and recent studies of single-dot quantum emitters.

We’re heading northwest for the 11th flight of NASA’s Ingenuity Mars Helicopter, which will happen no earlier than Wednesday night, Aug. 4. The mission profile is designed to keep the helicopter ahead of the rover and support its future science goals in the “South Séítah” region, where Ingenuity will gather aerial imagery for upcoming Perseverance surface operations in the area.

Here is how we plan to do it: On whatever day the flight takes place, we will start at 12:30 p.m. local Mars time (on Aug. 4, this would be 9:47 p.m. PDT/Aug. 5, 12:47 a.m. EDT). Ingenuity wakes up from its slumber and begins a pre-programmed series of preflight checks. Three minutes later, we’re off – literally – climbing to a height of 39 feet (12 meters), then heading downrange at a speed of 11 mph (5 meters per second).

And while Flight 11 is primarily intended as a transfer flight, moving the helicopter from one place to another, we’re not letting the opportunity to take a few images along the way go to waste. Ingenuity’s color camera will take multiple photos en route, and at the end of the flight, near our new airfield, we’ll take two images to make a 3D stereo pair. Flight 11, from takeoff to landing, should take about 130 seconds.
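
For readers who want to sanity-check those figures, here is a quick back-of-envelope sketch in Python. The split between climb, cruise, and descent is not given in the post, so this only bounds the downrange distance.

    # Back-of-envelope check of the Flight 11 numbers quoted above.
    ALTITUDE_M = 12.0        # quoted as 39 feet
    CRUISE_SPEED_MPS = 5.0   # quoted as 11 mph
    TOTAL_FLIGHT_S = 130.0   # takeoff to landing

    print(round(ALTITUDE_M * 3.28084), "ft altitude")       # ~39 ft
    print(round(CRUISE_SPEED_MPS * 2.23694), "mph cruise")  # ~11 mph

    # Upper bound on distance covered: pretend all 130 s were spent at cruise
    # speed (in reality some of that time goes to climbing and descending).
    print("<=", round(CRUISE_SPEED_MPS * TOTAL_FLIGHT_S), "m downrange")  # <= 650 m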

Driver Clocks And Longevity — Dissecting True Functional “Drivers” Of Aging Phenotypes — Dr. Daniel Ives Ph.D., Founder and CEO — Shift Bioscience Ltd.


Dr. Daniel Ives, Ph.D., is Founder and CEO of Shift Bioscience Ltd. (https://shiftbioscience.com), a biotech company developing drugs for cellular rejuvenation in humans by applying machine-learning ‘driver’ clocks to cellular reprogramming. He is also the scientific founder who first discovered the gene-shifting targets on which the Shift drug discovery platform is based.

Dr. Ives graduated from Imperial College with a degree in biochemistry and gained his Ph.D. in 2013 at the MRC Mitochondrial Biology Unit in Cambridge. He carried out his postdoctoral studies under Ian Holt at the National Institute for Medical Research in Mill Hill, now part of the Francis Crick Institute, pursuing damage-removal strategies for mitochondrial DNA mutations.