
A research team at Osaka Metropolitan University has fabricated a gallium nitride (GaN) transistor on a diamond substrate. Diamond has the highest thermal conductivity of any natural material, and the new device dissipates more than twice as much heat as conventional transistors. The transistor is expected to be useful not only in 5G communication base stations, weather radar, and satellite communications, but also in microwave heating and plasma processing.

Researchers at Osaka Metropolitan University are proving that diamonds are so much more than just a ‘girl’s best friend.’ Their groundbreaking research focuses on gallium nitride (GaN) transistors, which are high-power, high-frequency semiconductor devices used in mobile data and satellite communication systems.

As semiconductor devices are increasingly miniaturized, power density and heat generation rise, which can degrade the performance, reliability, and lifetime of these devices.
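The advantage comes down to how quickly heat can be pulled out through the substrate. As a rough illustration only (the conductivity values, thickness, and power below are assumed textbook figures, not measurements from the Osaka team), a simple one-dimensional thermal-resistance estimate shows why swapping a conventional substrate for diamond lowers the temperature rise so sharply:

```python
# A minimal back-of-envelope sketch (not from the paper) comparing the
# one-dimensional thermal resistance of a transistor substrate made of
# diamond versus silicon carbide. All values below are illustrative
# assumptions, not measured device data.

K_DIAMOND = 2000.0   # thermal conductivity of diamond, W/(m*K), assumed
K_SIC = 400.0        # thermal conductivity of SiC, W/(m*K), assumed

THICKNESS = 100e-6   # assumed substrate thickness: 100 um
AREA = 1e-6          # assumed heat-spreading footprint: 1 mm^2

def thermal_resistance(conductivity: float) -> float:
    """1-D conduction resistance R = t / (k * A), in K/W."""
    return THICKNESS / (conductivity * AREA)

def temperature_rise(power_w: float, conductivity: float) -> float:
    """Temperature rise across the substrate for a given dissipated power."""
    return power_w * thermal_resistance(conductivity)

if __name__ == "__main__":
    power = 10.0  # watts dissipated by the transistor (assumed)
    for name, k in [("diamond", K_DIAMOND), ("SiC", K_SIC)]:
        print(f"{name:8s}: R = {thermal_resistance(k):.2f} K/W, "
              f"dT = {temperature_rise(power, k):.1f} K at {power} W")
```

A real device adds interface and packaging resistances on top of this, which is why the measured improvement is a factor of roughly two rather than the larger ratio this idealized estimate suggests.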

Explore the digital archaeology of computing’s past with the unearthing of 86-DOS version 0.1-C, the oldest ancestor of MS-DOS

A code archaeologist has unearthed a treasure trove for tech historians: the oldest-known ancestor of Microsoft’s iconic MS-DOS.


Discover the hidden gems of computing history as a code enthusiast shares an archived copy of the earliest-known version of 86-DOS online.


When we look at scientific progress, especially in physics, it can seem like all the great discoveries lie behind us. Since the revolutions of Einstein’s theory of relativity and quantum mechanics, physicists have been struggling to find a way to make them fit together with little to no success. Tim Palmer argues that the answer to this stalemate lies in chaos theory.

Revisiting a book by John Horgan, science communicator and theoretical physicist Sabine Hossenfelder recently asked on her YouTube channel whether we are facing the end of science. It might seem like a rhetorical question — it’s not possible for science to really end — but she concludes that we are in dire need of some new paradigms in physics, and seemingly unable to arrive at them. We are yet to solve the deep ongoing mysteries of the dark universe and still haven’t convincingly synthesised quantum and gravitational physics. She suggests that ideas from chaos theory might hold some of the answers, and therefore the ability to rejuvenate science. I think she’s right.

The first functional semiconductor made from graphene has been created at the Georgia Institute of Technology. This could enable smaller and faster electronic devices and may have applications for quantum computing.

Credit: Georgia Institute of Technology.

Semiconductors, which are materials that conduct electricity under specific conditions, are foundational components of electronic devices like the chips in your computer, laptop, and smartphone. For many decades, transistors have been getting smaller and packed ever more densely onto chips, with counts roughly doubling every couple of years – a trend known as Moore’s Law. This has enabled gigantic leaps in a vast range of technologies, from general computing speeds and video game graphics to the resolution of medical scans and the sensitivity of astronomical observatories.
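For a sense of scale, here is a small illustrative calculation of that doubling trend (the baseline chip and the two-year doubling period are standard textbook assumptions, not figures from the Georgia Tech work):

```python
# Illustrative sketch of Moore's Law: transistor counts doubling roughly
# every two years. The starting point and doubling period are assumptions.

START_YEAR = 1971
START_TRANSISTORS = 2_300      # Intel 4004, the commonly cited baseline
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year: int) -> float:
    """Projected transistor count if doubling continued uninterrupted."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2023):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```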

Quantum computing is becoming more accessible as a way of performing calculations. However, research indicates that there are inherent limitations, particularly related to the quality of the clock used to control the computation.

There are different ideas about how quantum computers could be built, but they all have one thing in common: you take a quantum physical system – for example, individual atoms – and change its state by exposing it to very specific forces for a specific time. This means that, to rely on a quantum computing operation delivering the correct result, you need a clock that is as precise as possible.

But here you run into problems: perfect time measurement is impossible. Every clock has two fundamental properties: a certain precision and a certain time resolution. The time resolution indicates how small the time intervals are that can be measured – i.e. how quickly the clock ticks. Precision tells you how much inaccuracy you have to expect with every single tick.
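A toy numerical model makes the point concrete. In the sketch below (all parameters are assumed for illustration; this is not the researchers’ model), a single-qubit gate is implemented by driving the qubit for a target duration, and the clock’s finite resolution and timing jitter show up directly as a rotation-angle error and lost gate fidelity:

```python
import numpy as np

# Hedged toy model: a pi rotation (X gate) is applied by driving the qubit
# for a target time. The clock quantizes that time to its resolution and
# adds random jitter, so the realized rotation angle is slightly wrong.

rng = np.random.default_rng(0)

OMEGA = np.pi * 1e6          # drive strength: rotation angle per second (rad/s), assumed
TARGET_ANGLE = np.pi         # we want a pi rotation
TARGET_TIME = TARGET_ANGLE / OMEGA

CLOCK_RESOLUTION = 1e-9      # smallest tick the clock can resolve (s), assumed
CLOCK_JITTER = 0.3e-9        # rms timing noise per gate (s), assumed

def realized_angle() -> float:
    """Rotation angle actually applied, given a quantized and noisy duration."""
    noisy_time = TARGET_TIME + rng.normal(0.0, CLOCK_JITTER)
    quantized_time = round(noisy_time / CLOCK_RESOLUTION) * CLOCK_RESOLUTION
    return OMEGA * quantized_time

def gate_fidelity(angle: float) -> float:
    """Overlap with the ideal pi rotation: F = cos^2((angle - pi) / 2)."""
    return np.cos((angle - TARGET_ANGLE) / 2.0) ** 2

fidelities = [gate_fidelity(realized_angle()) for _ in range(10_000)]
print(f"mean single-gate fidelity: {np.mean(fidelities):.6f}")
```

The faster and cleaner the clock, the smaller the angle error per gate; the trade-off between resolution and precision is exactly the limitation the research points to.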

Over the past twenty years, many companies, including Google, Microsoft, and IBM, have invested in quantum computing development. Investors have contributed over $5 billion to this cause. The aim is to use quantum physics properties to process information in ways that traditional computers cannot. Quantum computing could impact various fields, including drug discovery, cryptography, finance, and supply-chain logistics. However, the excitement around this technology has led to a mix of claims, making it hard to gauge the actual progress.

The main challenge in developing quantum computers is managing the ‘noise’ that can interfere with these sensitive systems. Quantum systems can be disrupted by disturbances like stray photons from heat, random signals from nearby electronics, or physical vibrations. This noise can cause errors or stop a quantum computation. Regardless of the processor size or the technology’s potential uses, a quantum computer will not surpass a classical computer unless the noise is controlled.
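The scale of the problem is easy to see with a back-of-envelope sketch (the error rates below are assumed, not taken from any particular hardware): if every gate fails with some small probability, the chance that a whole circuit runs cleanly collapses as the circuit grows.

```python
# Minimal sketch of why noise control matters: with independent gate
# failures, the probability of an error-free run decays exponentially
# with the number of gates. Error rates are assumptions for illustration.

def circuit_success_probability(gate_error: float, num_gates: int) -> float:
    """Probability that none of num_gates independent gates fails."""
    return (1.0 - gate_error) ** num_gates

for p in (1e-2, 1e-3, 1e-4):
    for n in (100, 1_000, 10_000):
        print(f"gate error {p:g}, {n:>6d} gates -> "
              f"success {circuit_success_probability(p, n):.3%}")
```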

For a while, researchers thought they might have to tolerate some noise in their quantum systems, at least temporarily. They looked for applications that could still work effectively with this constraint. However, recent theoretical and experimental advances suggest that the noise issue might soon be resolved. A mix of hardware and software strategies is showing potential for reducing and correcting quantum errors. Earl Campbell, vice president of quantum science at Riverlane, a UK-based quantum computing company, believes there is growing evidence to be hopeful about quantum computing’s future.
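The core idea behind error correction can be illustrated with a classical stand-in for the simplest quantum code, the three-qubit bit-flip repetition code (the flip probabilities here are assumptions, and real schemes such as the surface code are far more elaborate): encoding one logical bit into three physical copies and decoding by majority vote turns a physical error rate p into a logical rate of roughly 3p².

```python
import numpy as np

# Classical simulation of the three-qubit bit-flip repetition code.
# Each copy flips independently with probability p; the decoder fails
# only when two or more copies flip, giving a logical error ~ 3p^2.

rng = np.random.default_rng(1)

def logical_error_rate(p: float, trials: int = 200_000) -> float:
    """Fraction of trials where majority vote over 3 copies decodes wrongly."""
    flips = rng.random((trials, 3)) < p          # independent bit-flips
    majority_flipped = flips.sum(axis=1) >= 2    # decoder fails if 2+ copies flip
    return majority_flipped.mean()

for p in (0.2, 0.1, 0.01):
    print(f"physical error {p:>4}: logical error ~ {logical_error_rate(p):.4f} "
          f"(theory: {3 * p**2 * (1 - p) + p**3:.4f})")
```

Running it shows the crossover clearly: at p = 0.2 the gain is modest (about 0.10 versus 0.20), while at p = 0.01 the logical error rate drops to roughly 0.0003, which is the basic reason error correction pays off once hardware noise is low enough.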

Almost exactly one year ago at CES 2023, Qualcomm announced its Snapdragon Ride Flex system-on-chip (SoC) product family. As an expansion of the company’s Snapdragon Digital Chassis product portfolio, the new SoC family is meant to support advanced driver assistance systems (ADAS) as well as digital cockpit and infotainment applications spanning from entry-level to premium vehicles. At the time, Qualcomm announced that the Ride Flex SoC was sampling with an expected start of production in early 2024. It’s now early 2024 and CES is about to kick off again. Tirias Research expects to hear an update on the product family next week. We anticipate the update will include, at the very least, some of the partners who will be bringing the Ride Flex SoCs to market in production volumes this year and into 2025. Given Qualcomm’s track record of hitting its estimated timelines, we feel a recap of the product family is warranted leading up to next week’s anticipated update.

“Flex-ing” Resources to Support Mixed Criticality and Multiple Tiers

The Snapdragon Ride Flex is actually two monolithically integrated 4nm SoCs – a primary SoC and what Qualcomm is calling a Safety Island SoC. The primary SoC consists of a Kryo Gen 6 Arm v8.2 central processing unit (CPU) with integrated L3 cache, an Adreno 663 graphics processing unit (GPU), a Hexagon neural processing unit (NPU), a Spectra 690 image signal processor (ISP), two Adreno display processing units (DPUs) for multiple high-resolution display support, and associated memory and I/O interconnects. This part of the SoC is Automotive Safety Integrity Level (ASIL) B certified. The Safety Island SoC, which is ASIL-D certified, consists of a multi-core real-time CPU with enhanced error management support and isolated memory and peripherals. ASIL is a risk classification methodology established under ISO 26262 from the International Organization for Standardization, which defines functional safety for road vehicles.
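To visualize the mixed-criticality split described above, here is an illustrative sketch (our own representation, based only on the description in this article, not on Qualcomm documentation) of how the two safety domains and their blocks line up:

```python
from dataclasses import dataclass, field

# Illustrative data model of the Ride Flex mixed-criticality partition.
# Block names and ASIL levels come from the description in the text;
# the structure itself is just a convenient way to lay them out.

@dataclass
class SocDomain:
    name: str
    asil_level: str                 # Automotive Safety Integrity Level
    blocks: list = field(default_factory=list)

primary = SocDomain(
    name="Primary SoC",
    asil_level="B",
    blocks=[
        "Kryo Gen 6 Arm v8.2 CPU with integrated L3 cache",
        "Adreno 663 GPU",
        "Hexagon NPU",
        "Spectra 690 ISP",
        "2x Adreno DPUs for multiple high-resolution displays",
        "Associated memory and I/O interconnects",
    ],
)

safety_island = SocDomain(
    name="Safety Island SoC",
    asil_level="D",
    blocks=[
        "Multi-core real-time CPU with enhanced error management",
        "Isolated memory and peripherals",
    ],
)

for domain in (primary, safety_island):
    print(f"{domain.name} (ASIL-{domain.asil_level})")
    for block in domain.blocks:
        print(f"  - {block}")
```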