
Researchers from the RIKEN Center for Quantum Computing have used machine learning to perform error correction for quantum computers—a crucial step for making these devices practical—using an autonomous correction system that despite being approximate, can efficiently determine how best to make the necessary corrections.

The research is published in the journal Physical Review Letters.

In contrast to classical computers, which operate on bits that can only take the basic values 0 and 1, quantum computers operate on “qubits”, which can assume any superposition of the computational basis states. In combination with entanglement, another quantum characteristic that connects different qubits beyond classical means, this enables quantum computers to perform entirely new operations, giving rise to potential advantages in some computational tasks, such as large-scale searches and cryptography.
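A single qubit’s state can be written in standard bra-ket notation as a superposition of the two computational basis states:

```latex
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,\quad \alpha, \beta \in \mathbb{C}
```

Measuring the qubit yields 0 with probability \(|\alpha|^2\) and 1 with probability \(|\beta|^2\); the ability to hold both amplitudes at once is what distinguishes a qubit from a classical bit.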

No detectors “reliably distinguish between AI-generated and human-generated content.”

In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and…”


Last week, OpenAI published tips for educators in a promotional blog post that shows how some teachers are using ChatGPT as an educational aid, along with suggested prompts to get started. In a related FAQ, they also officially admit what we already know: AI writing detectors don’t work, despite frequently being used to punish students with false positives.

In July, we covered in depth why AI writing detectors such as GPTZero don’t work, with experts calling them mostly snake oil. These detectors often yield false positives because they rely on unproven detection metrics. Ultimately, there is nothing special about AI-written text that always distinguishes it from human-written text, and detectors can be defeated by rephrasing. That same month, OpenAI discontinued its AI Classifier, an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.
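To see why such detectors produce false positives, consider a toy sketch of the perplexity-style scoring many of them use. Everything here is hypothetical: the threshold, the scoring function (a crude character-bigram stand-in for a real language model), and the labels are illustrative only. The core idea is that text the model finds “too predictable” gets flagged as AI-written, which is exactly why formulaic but entirely human prose can trip the detector.

```python
import math
from collections import Counter

def pseudo_perplexity(text: str) -> float:
    """Crude stand-in for LM perplexity: average character-bigram surprise.

    Real detectors score text with a trained language model; this toy
    version just measures how repetitive the text's own bigrams are.
    """
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    counts = Counter(bigrams)
    total = len(bigrams)
    log_prob = sum(math.log(counts[b] / total) for b in bigrams)
    return math.exp(-log_prob / total)

def classify(text: str, threshold: float = 20.0) -> str:
    # Hypothetical cutoff: low perplexity ("too predictable") is taken
    # as evidence of AI authorship -- the root cause of false positives.
    return "AI-flagged" if pseudo_perplexity(text) < threshold else "human-flagged"
```

Highly repetitive input scores a low pseudo-perplexity and gets flagged, regardless of who actually wrote it; paraphrasing the same content raises the score and evades the check, mirroring the weaknesses described above.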

Bad news for anyone looking to get their hands on Nvidia’s top-specced GPUs, such as the A100 or H100: it’s not going to get any easier to source the parts until at least the end of 2024, TSMC has warned. The problem, it seems, isn’t that TSMC – which fabricates not just those GPUs for Nvidia but also components for AMD, Apple, and many others – can’t make enough chips. Rather, a lack of the advanced packaging capacity used to stitch the silicon together is holding up production, TSMC chairman Mark Liu told Nikkei Asia.

According to Liu, TSMC is only able to meet about 80 percent of demand for its chip-on-wafer-on-substrate (CoWoS) packaging technology. This is used in some of the most advanced…


Boss Mark Liu says silicon ready but advanced packaging isn’t.

Schmidt has become an indispensable adviser to government, even as some of his investments have won federal contracts.

Eric Schmidt isn’t shy about his wealth and power: The former Google CEO recently won an auction for a superyacht seized from a Russian oligarch, he owns a big stake in a secretive and successful hedge fund and he spent $15 million for the Manhattan penthouse featured in Oliver Stone’s sequel to Wall Street.

He has also leveraged his $27 billion fortune to build a powerful influence machine in Washington that’s allowed him to shape public policy to reflect his worldview and benefit the industries in which he’s deeply invested — most recently, artificial intelligence. When senators meet next week to hear from tech executives and experts about how AI should be regulated, Schmidt will be at the table.

Roblox’s new AI assistant is one of a few new AI tools from the company.

Roblox announced a new conversational AI assistant at its 2023 Roblox Developers Conference (RDC) that can help creators more easily make experiences for the popular social app. The new tool, the Roblox Assistant, builds on previously announced features that let creators build virtual assets and write code with the help of generative AI.

With the Roblox Assistant, creators will be able to type in prompts to do things like generate virtual environments. In one demo, somebody types in “I want to make a game set in ancient ruins,” and Roblox drops in some stones, moss-covered columns, and broken architecture. “Make the…”


Another AI chatbot, but this seems useful.

GPT-4, PaLM, Claude, Bard, LaMDA, Chinchilla, Sparrow – the list of large language models on the market continues to grow. But behind their remarkable capabilities, users are discovering substantial costs. While LLMs offer tremendous potential, understanding their economic implications is crucial for businesses and individuals considering their adoption.


First, building and training LLMs is expensive. It requires thousands of Graphics Processing Units, or GPUs, which offer the parallel processing power needed to handle the massive datasets these models learn from. The cost of the GPUs alone can amount to millions of dollars. According to a technical overview of OpenAI’s GPT-3 language model, training required at least $5 million worth of GPUs.
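The arithmetic behind such estimates is straightforward. The figures below (GPU count, run duration, hourly rate) are illustrative assumptions, not OpenAI’s actual numbers:

```python
def training_cost(num_gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Total rental cost of a GPU cluster for a training run."""
    return num_gpus * hours * rate_per_gpu_hour

# Hypothetical example: 1,000 GPUs running for 30 days at $2 per GPU-hour.
cost = training_cost(1_000, 30 * 24, 2.0)
print(f"${cost:,.0f}")  # $1,440,000
```

Scale the cluster into the thousands of GPUs and stretch the run over months, and the multi-million-dollar totals cited for frontier models follow directly.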

With the pace at which artificial intelligence (AI) and machine learning (ML) applications are ramping up, we can expect to see industries and companies use these systems and tools in everyday processes. As these data-intensive applications continue to grow in complexity, the demand for high-speed transmission and efficient communication between computing units becomes paramount.

This need has sparked interest in optical interconnects, particularly in the context of short-reach connections between XPUs (CPUs, GPUs, and memory). Silicon photonics is emerging as a promising technology that improves performance, cost efficiency, and thermal management compared with traditional approaches, ultimately benefiting AI/ML applications.


The key to getting the most out of artificial intelligence may lie in the use of silicon photonics, a powerful new tech.

Bobbi is SVP, Software Engineering at Loopio. She is a technology leader with over 25 years of diverse experience in the industry.

AI and emerging technologies under the AI umbrella—like generative pre-trained transformers (GPT)—are reshaping the business world. These technologies are fostering greater organizational efficiencies and innovations and are quickly becoming crucial for companies of all sizes.

The ability to automate processes and tasks opens up a plethora of new opportunities for organizations. When automation can scale with an organization, this can completely transform day-to-day operations. In this article, I’ll look at three ways that engineering organizations in particular can use AI to transform their organizational efficiencies, organizational structure and software practices and processes.