
We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology. With the convergence of chat interfaces and large language models, you can now ask for what you want in natural language, and the technology is smart enough to answer, create it or take action. At Microsoft, we think about this as having a copilot to help navigate any task. We have been building AI-powered copilots into our most used and loved products – making coding more efficient with GitHub, transforming productivity at work with Microsoft 365, redefining search with Bing and Edge and delivering contextual value that works across your apps and PC with Windows.

Today we take the next step to unify these capabilities into a single experience we call Microsoft Copilot, your everyday AI companion. Copilot will uniquely incorporate the context and intelligence of the web, your work data and what you are doing in the moment on your PC to provide better assistance – with your privacy and security at the forefront. It will be a simple and seamless experience, available in Windows 11, in Microsoft 365, and in our web browser with Edge and Bing. It will work as an app or reveal itself when you need it with a right click. We will continue to add capabilities and connections to Copilot across our most-used applications over time, in service of our vision to have one experience that works across your whole life.

Copilot will begin to roll out in its early form as part of our free update to Windows 11, starting Sept. 26 — and across Bing, Edge, and Microsoft 365 Copilot this fall. We’re also announcing some exciting new experiences and devices to help you be more productive, spark your creativity, and meet the everyday needs of people and businesses.

Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC–Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.

Their research has reached a new milestone. In a new paper, published in Science Robotics, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD, MPH, in the UNC Department of Medicine, have proven that their robot can autonomously travel from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.

“This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters or few millimeters even, which would help immensely with pursuing small targets in the lungs.”
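This excerpt doesn’t describe the team’s planning algorithm, but the task itself — steering from “Point A” to “Point B” without touching delicate structures — is a classic motion-planning problem. As a rough intuition only (a hypothetical sketch, not the authors’ method), here is a toy A* search over a 2D occupancy grid in which blocked cells stand in for airways and vessels:

```python
import heapq

def astar(grid, start, goal):
    """Toy A* on a 2D occupancy grid: 0 = free tissue, 1 = a structure
    (airway or vessel) the path must avoid. Illustrative only."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, None)]  # (priority, cost, node, parent)
    parents, best_cost = {}, {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in parents:
            continue  # already expanded via a cheaper path
        parents[cur] = parent
        if cur == goal:  # walk parent links back to reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_cost.get(nxt, float("inf"))):
                best_cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no collision-free path exists

# Blocked cells (1s) stand in for airways and vessels.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

A real steerable-needle planner works in three dimensions under motion and uncertainty constraints, so this grid search conveys only the basic idea of finding a collision-free path.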

Many of the genetic mutations that directly cause a condition, such as those responsible for cystic fibrosis and sickle-cell disease, tend to change the amino acid sequence of the protein that they encode. But researchers have observed only a few million of these single-letter ‘missense mutations’. Of the more than 70 million such mutations that can occur in the human genome, only a sliver have been linked conclusively to disease, and most seem to have no ill effect on health.

So when researchers and doctors find a missense mutation that they’ve never seen before, it can be difficult to know what to make of it. To help interpret such ‘variants of unknown significance’, researchers have developed dozens of computational tools that can predict whether a variant is likely to cause disease. AlphaMissense, a new deep-learning model from Google DeepMind, incorporates existing approaches to the problem, which are increasingly being addressed with machine learning.
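AlphaMissense itself is a large deep-learning model, but the combinatorics behind the 70-million figure are simple: every coding base can mutate to three alternatives, and each substitution either preserves the amino acid (synonymous), changes it (missense) or creates a stop codon (nonsense); summed over the roughly 30 megabases of human protein-coding sequence, the missense possibilities alone run to tens of millions. Here is a minimal, hypothetical Python sketch (the coding sequence is made up) that enumerates and labels every single-nucleotide substitution using the standard codon table:

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, ... ("*" = stop).
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): AMINO[i] for i, c in enumerate(product(BASES, repeat=3))}

def classify_substitutions(cds):
    """Label every possible single-nucleotide substitution in a coding
    sequence as synonymous, missense, or nonsense. Toy illustration only."""
    counts = {"synonymous": 0, "missense": 0, "nonsense": 0}
    for i, ref in enumerate(cds):
        codon_start = i - i % 3
        codon = cds[codon_start:codon_start + 3]
        old_aa = CODON_TABLE[codon]
        for alt in BASES:
            if alt == ref:
                continue
            mutated = codon[:i % 3] + alt + codon[i % 3 + 1:]
            new_aa = CODON_TABLE[mutated]
            if new_aa == old_aa:
                counts["synonymous"] += 1
            elif new_aa == "*":
                counts["nonsense"] += 1
            else:
                counts["missense"] += 1
    return counts

# A short, made-up coding sequence: a start codon plus three residues.
print(classify_substitutions("ATGGCTCATTGG"))
```

Scaling this same enumeration from a 12-base toy to the full coding genome is what yields the tens of millions of possible missense variants, the vast majority of which have never been observed in a patient.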

Jacopo Pantaleoni joined Nvidia in 2001, when the company had fewer than 500 employees. He worked on what was then a small research project to improve Nvidia’s graphics processing units so they could better render images on computers and gaming consoles.

More than two decades later, Nvidia has more than 26,000 employees and its GPUs are at the center of the generative AI explosion. Pantaleoni had climbed the ranks to become a principal engineer and research scientist, one of the highest-ranking positions for an individual contributor, he says. Then, in July, as Nvidia boomed like no other company, Pantaleoni says he resigned, giving up a substantial number of unvested stock units, after coming to a realization.

“This market of machine learning, artificial intelligence” is “almost entirely driven by the big players — Googles, Amazons, Metas” — that have the “enormous amounts of data and enormous amounts of capital” to develop AI at scale. Those companies are also Nvidia’s biggest customers. “This was not the world I wanted to help build,” he said.

Waymo is offering some Los Angeles residents the chance to ride in its self-driving cars across the city.

The company is kicking off its Waymo One tour, which it says is ‘the world’s first fully autonomous ride-hailing service,’ in October.

“Chauffeured by the Waymo Driver — Waymo’s autonomous driving technology that never gets distracted, drowsy or drunk — Angelenos will discover new and stress-free ways to explore their city, whether it’s finding hidden gem thrift spots in Mid City, trying a new cafe in Koreatown or catching a concert in DTLA,” the company said.

Google’s new Bard extension will apparently summarize emails, plan your travels, and — oh, yeah — fabricate emails that you never actually sent.

Last week, Google plugged its large language model-powered chatbot called Bard into a bevy of Google products including Gmail, Google Drive, Google Docs, Google Maps, and the Google-owned YouTube, among other apps and services. While it’s understandable that Google would want to marry its newer generative AI efforts with its already-established product lineup, it seems that Google might have moved a little too fast.

According to New York Times columnist Kevin Roose, Bard isn’t the helpful inbox assistant that Google apparently wants it to be — at least not yet. In his testing, Roose says, the AI hallucinated entire email correspondences that never took place.

Scientists have been exploring both experimental and theoretical ways to prove quantum supremacy.

Ramis Movassagh, a researcher at Google Quantum AI, recently published a study in the journal Nature Physics. In it, he reportedly demonstrates theoretically that simulating random quantum circuits and determining their output distributions will be extremely difficult for classical computers. In other words, if a quantum computer solves this problem, it can achieve quantum supremacy.

But why do such problems exist?
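One intuition, separate from the paper’s formal hardness argument: a brute-force classical simulation has to track all 2^n complex amplitudes of an n-qubit state, so both memory and per-gate work double with every qubit added. Here is a small, hypothetical NumPy sketch (not the paper’s construction) that applies one layer of random single-qubit gates and reports the statevector’s footprint:

```python
import numpy as np

def random_unitary_2x2(rng):
    """Random 2x2 unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the draw is Haar-like

def apply_single_qubit_gate(state, u, qubit, n):
    """Apply a 2x2 unitary to one qubit of a dense n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(u, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

rng = np.random.default_rng(0)
for n in range(2, 22, 4):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for q in range(n):  # one layer of random single-qubit gates
        state = apply_single_qubit_gate(state, random_unitary_2x2(rng), q, n)
    print(f"n={n:2d}: {2**n:7d} amplitudes, {state.nbytes / 2**20:9.4f} MiB")
```

Each added qubit doubles the memory; by around 50 qubits the statevector alone would occupy petabytes, which is why sampling from random circuits is such a natural benchmark for quantum advantage.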

I have said it time and time again. Ironically, I have been an electronic music producer for decades.

Electronic producer, singer and AI advocate Holly Herndon has drawn a comparison between AI music and sampling, saying that AI music could impact music in the same way sampling did hip-hop.

Herndon made the statement during a recent interview with Mixmag, as part of a feature entitled ‘The rise of AI music: a force for good or a new low for artistic creativity?’ The feature explores the advantages and disadvantages of using AI technology to create music.

“Sampling old records to create something new led to the formation of genres like hip hop and innovative new forms of artistic expression,” she says. “AI music has the potential to do something very similar.”