
America is the undisputed world leader in quantum computing even though China spends 8x more on the technology–but an own goal could soon erode U.S. dominance

When it comes to quantum computing, that chilling effect on research and development would enormously jeopardize U.S. national security. Our projects received ample funding from defense and intelligence agencies for good reason. Quantum computing may soon become the gold standard technology for codebreaking and for defending large computer networks against cyberattacks.

Adopting the proposed march-in framework would also have major implications for our future economic stability. While still a nascent technology today, quantum computing’s ability to rapidly process huge volumes of data is set to revolutionize business in the coming decades. It may be the only way to capture the complexity needed for future AI and machine learning in, say, self-driving vehicles. It may enable companies to hone their supply chains and other logistical operations, such as manufacturing, with unprecedented precision. It may also transform finance by allowing portfolio managers to create new, superior investment algorithms and strategies.

Given the technology’s immense potential, it’s no mystery why China committed what is believed to be more than $15 billion in 2022 to develop its quantum computing capacity, more than double the combined quantum computing budgets of EU countries and eight times what the U.S. government plans to spend.

TextGrad: Automatic “Differentiation” via Text

From Stanford & Chan Zuckerberg Biohub.

Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Zhi Huang, Carlos Guestrin, James Zou. June 2024. https://huggingface.co/papers/2406.

AI is undergoing a paradigm shift, with breakthroughs achieved…


What using artificial intelligence to help monitor surgery can teach us

1. Privacy is important, but not always guaranteed. Grantcharov realized very quickly that the only way to get surgeons to use the black box was to make them feel protected from possible repercussions. He has designed the system to record actions but hide the identities of both patients and staff, even deleting all recordings within 30 days. His idea is that no individual should be punished for making a mistake.

The black boxes render each person in the recording anonymous; an algorithm distorts people’s voices and blurs out their faces, transforming them into shadowy, noir-like figures. So even if you know what happened, you can’t use it against an individual.

But this process is not perfect. Before 30-day-old recordings are automatically deleted, hospital administrators can still see the operating room number, the time of the operation, and the patient’s medical record number, so even if personnel are technically de-identified, they aren’t truly anonymous. The result is a sense that “Big Brother is watching,” says Christopher Mantyh, vice chair of clinical operations at Duke University Hospital, which has black boxes in seven operating rooms.

World’s largest robots will help airlines cut carbon emissions

A Norwegian startup is building massive AI robots to help airlines reduce their carbon emissions, save water, and inspect their planes in a fraction of the time it usually takes.

The challenge: The aviation industry is responsible for about 2.5% of global carbon emissions, and while sustainable jet fuels or electric propulsion systems could one day slash that figure, airlines can reduce their emissions right now — simply by cleaning their planes more often.

Washing an airplane’s exterior reduces air resistance, which means it can decrease the amount of jet fuel a plane needs to burn by up to 2% — while that’s not a huge difference, it can add up when you consider there are about 28,000 commercial jets in the global fleet.
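To see how that "up to 2%" adds up across the fleet, here is a back-of-envelope sketch. The fleet size and savings fraction come from the article; the per-jet annual fuel burn and the CO2-per-tonne-of-fuel factor are assumed round numbers for illustration only.

```python
# Back-of-envelope estimate of fleet-wide savings from cleaner airframes.
FLEET_SIZE = 28_000          # commercial jets in the global fleet (from the article)
FUEL_PER_JET_TONNES = 3_000  # assumed annual fuel burn per jet (hypothetical figure)
SAVINGS_FRACTION = 0.02      # up to 2% fuel reduction from a clean exterior (article)
CO2_PER_TONNE_FUEL = 3.16    # approx. tonnes of CO2 emitted per tonne of jet fuel

fuel_saved = FLEET_SIZE * FUEL_PER_JET_TONNES * SAVINGS_FRACTION
co2_saved = fuel_saved * CO2_PER_TONNE_FUEL
print(f"Fuel saved: {fuel_saved:,.0f} t/yr; CO2 avoided: {co2_saved:,.0f} t/yr")
```

Even under these rough assumptions, a 2% reduction works out to millions of tonnes of avoided CO2 per year, which is why washing planes more often is framed as an immediate lever.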

3D-printed mini-actuators can move small soft robots, lock them into new shapes

If users wish to “freeze” the soft robot’s shape, they can apply moderate heat (64°C, or 147°F), and then let the robot cool briefly. This prevents the soft robot from reverting to its original shape, even after the liquid in the microfluidic channels is pumped out. If users want to return the soft robot to its original shape, they simply apply the heat again after pumping out the liquid, and the robot relaxes to its original configuration.

“A key factor here is fine-tuning the thickness of the shape memory layer relative to the layer that contains the microfluidic channels,” says Yinding Chi, co-lead author of the paper and a former Ph.D. student at NC State. “You need the shape memory layer to be thin enough to bend when the actuator’s pressure is applied, but thick enough to get the soft robot to retain its shape even after the pressure is removed.”

To demonstrate the technique, the researchers created a soft robot “gripper” capable of picking up small objects. They applied hydraulic pressure, causing the gripper to pinch closed on an object. By applying heat, they were able to fix the gripper in its “closed” position, even after releasing pressure from the hydraulic actuator.

Former OpenAI Director Warns There Are Bad Things AI Can Do Besides Kill You

There’s a lot of other ways that AI could really take things in a bad direction.


One of the OpenAI directors who worked to oust CEO Sam Altman is issuing some stark warnings about the future of unchecked artificial intelligence.

In an interview during Axios’ AI+ summit, former OpenAI board member Helen Toner suggested that the risks AI poses to humanity aren’t just worst-case scenarios from science fiction.

“I just think sometimes people hear the phrase ‘existential risk’ and they just think Skynet, and robots shooting humans,” Toner said, referencing the evil AI technology from the “Terminator” films that’s often used as a metaphor for worst-case-scenario AI predictions.