
In a new paper in Nature, a team of researchers from JPMorganChase, Quantinuum, Argonne National Laboratory, Oak Ridge National Laboratory and The University of Texas at Austin describe a milestone in the field of quantum computing, with potential applications in cryptography, fairness and privacy.

Using a 56-qubit quantum computer, they have for the first time experimentally demonstrated certified randomness, a way of generating random numbers from a quantum computer and then using a classical supercomputer to prove they are truly random and freshly generated. This could pave the way toward the use of quantum computers for a practical task unattainable through classical methods.
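The shape of the protocol can be sketched in a few lines. This is only an illustration of the flow, with all names invented for the sketch: in the real experiment the fresh challenges are randomly chosen quantum circuits, the response is a sample from the 56-qubit device, and the consistency check is a cross-entropy score computed on a classical supercomputer. A hash stands in for that check here.

```python
import time
import secrets
import hashlib

# Toy sketch of the certified-randomness protocol *flow* (hypothetical names,
# not the real experiment): the verifier issues a fresh, unpredictable
# challenge, demands a fast response, and then checks the response against
# the challenge. Freshness comes from the time bound: a response that arrives
# faster than any classical simulation could produce it must contain new
# entropy from the device.

def issue_challenge() -> bytes:
    return secrets.token_bytes(32)  # fresh challenge (a random circuit, really)

def respond(challenge: bytes) -> bytes:
    # stands in for the quantum device sampling the challenge circuit
    return hashlib.sha256(challenge).digest()

def verify(challenge: bytes, response: bytes, elapsed: float,
           time_bound: float = 1.0) -> bool:
    # stands in for the supercomputer's cross-entropy check
    consistent = response == hashlib.sha256(challenge).digest()
    return consistent and elapsed < time_bound

challenge = issue_challenge()
start = time.monotonic()
response = respond(challenge)
elapsed = time.monotonic() - start
ok = verify(challenge, response, elapsed)  # True: consistent and in time
```

The key asymmetry the sketch cannot capture is that sampling the challenge circuit is fast for the quantum device but expensive classically, which is what lets the verifier conclude the bits were freshly generated rather than precomputed.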

Scott Aaronson, Schlumberger Centennial Chair of Computer Science and director of the Quantum Information Center at UT Austin, invented the certified randomness protocol that was demonstrated. He and his former postdoctoral researcher, Shih-Han Hung, provided theoretical and analytical support to the experimentalists on this latest project.

In an era where data privacy concerns loom large, a new approach in artificial intelligence (AI) could reshape how sensitive information is processed.

Researchers Austin Ebel and Karthik Garimella, Ph.D. students, and Assistant Professor of Electrical and Computer Engineering Brandon Reagen have introduced Orion, a novel framework that brings fully homomorphic encryption (FHE) to deep learning—allowing AI models to practically and efficiently operate directly on encrypted data without needing to decrypt it first.

The implications of this advancement, published on the arXiv preprint server and scheduled to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, are profound.

Such credentials could be obtained from a data breach of a social media service or acquired from underground forums, where they are advertised for sale by other threat actors.

Credential stuffing also differs from brute-force attacks, which attempt to crack passwords, login credentials, and encryption keys by trial and error.

Atlantis AIO, per Abnormal Security, offers threat actors the ability to launch credential stuffing attacks at scale via pre-configured modules for targeting a range of platforms and cloud-based services, thereby facilitating fraud, data theft, and account takeovers.

Quantum physics just took a leap from theory to reality! Empa researchers have, for the first time, successfully built a long-theorized one-dimensional alternating Heisenberg model using synthetic nanographenes.

By precisely shaping these tiny carbon structures, they’ve unlocked new ways to manipulate quantum states, confirming century-old predictions. This breakthrough could be a stepping stone toward real-world quantum technologies, from ultra-fast computing to unbreakable encryption.

Recreating a Century-Old Quantum Model

A hospital that wants to use a cloud computing service to perform artificial intelligence data analysis on sensitive patient records needs a guarantee those data will remain private during computation. Homomorphic encryption is a special type of security scheme that can provide this assurance.

The technique encrypts data in a way that allows anyone to perform computations on it without decrypting it, preventing others from learning anything about the underlying patient records. However, there are only a few ways to achieve homomorphic encryption, and they are so computationally intensive that it is often infeasible to deploy them in the real world.

MIT researchers have developed a new theoretical approach to building homomorphic encryption schemes that is simple and relies on computationally lightweight cryptographic tools. Their technique combines two tools so they become more powerful than either would be on its own. The researchers leverage this to construct a “somewhat homomorphic” encryption scheme—that is, it enables users to perform a limited number of operations on encrypted data without decrypting it, as opposed to fully homomorphic encryption that can allow more complex computations.
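The core idea of computing on ciphertexts can be illustrated with the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is a minimal sketch with toy primes, not the MIT construction described above and far from production parameters:

```python
import math
import random

# Toy Paillier cryptosystem (illustrative parameters; real deployments use
# primes of ~1024+ bits). Uses the simplification g = n + 1.

def keygen(p: int, q: int):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n
    return n, (n, lam, mu)            # public key, secret key

def encrypt(n: int, m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c: int) -> int:
    n, lam, mu = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

pk, sk = keygen(17, 19)
c1, c2 = encrypt(pk, 7), encrypt(pk, 12)
c_sum = (c1 * c2) % (pk * pk)   # homomorphic addition: no decryption needed
total = decrypt(sk, c_sum)      # 19
```

Paillier supports only addition (and multiplication by plaintext constants); a "somewhat homomorphic" scheme in the sense above additionally supports a limited number of multiplications between ciphertexts, and a fully homomorphic one supports arbitrary circuits.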

The Akira ransomware gang was spotted using an unsecured webcam to launch encryption attacks on a victim’s network, effectively circumventing Endpoint Detection and Response (EDR), which was blocking the encryptor in Windows.

The cybersecurity firm S-RM discovered the unusual attack method during a recent incident response at one of its clients.

Notably, Akira only pivoted to the webcam after attempting to deploy encryptors on Windows, which were blocked by the victim’s EDR solution.

Rufo Guerreschi
https://www.linkedin.com/in/rufoguerreschi

Coalition for a Baruch Plan for AI
https://www.cbpai.org/

0:00 Intro
0:21 Rufo Guerreschi
0:28 Contents
0:41 Part 1: Why we have a governance problem
1:18 From e-democracy to cybersecurity
2:42 Snowden showed that international standards were needed
3:55 Taking the needs of intelligence agencies into account
4:24 ChatGPT was a wake-up moment for privacy
5:08 Living in Geneva to interface with states
5:57 Decision making is high up in government
6:26 Coalition for a Baruch Plan for AI
7:12 Parallels to organizations to manage nuclear safety
8:11 Hidden coordination between intelligence agencies
8:57 Intergovernmental treaties are not tight
10:19 The original Baruch Plan in 1946
11:28 Why the original Baruch Plan did not succeed
12:27 We almost had a different international structure
12:54 A global monopoly on violence
14:04 Could expand to other weapons
14:39 AI is a second opportunity for global governance
15:19 After Soviet tests, there was no secret to keep
16:22 Proliferation risk of AI tech is much greater?
17:44 Scale and timeline of AI risk
19:04 Capabilities of security agencies
20:02 Internal capabilities of leading AI labs
20:58 Governments care about impactful technologies
22:06 Government compute, risk, other capabilities
23:05 Are domestic labs outside their jurisdiction?
23:41 What are the timelines where change is required?
24:54 Scientists, Musk, Amodei
26:24 Recursive self-improvement and loss of control
27:22 A grand gamble, the rosy perspective of CEOs
28:20 CEOs can’t really say anything else
28:59 Altman, Trump, SoftBank pursuing superintelligence
30:01 Superintelligence is clearly defined by Nick Bostrom
30:52 Explain to people what “superintelligence” means
31:32 Jobs created by Stargate project?
32:14 Will centralize power
33:33 Sharing of the benefits needs to be ensured
34:26 We are running out of time
35:27 Conditional treaty idea
36:34 Part 2: We can do this without a global dictatorship
36:44 Dictatorship concerns are very reasonable
37:19 Global power is already highly concentrated
38:13 We are already in a surveillance world
39:18 Affects influential people especially
40:13 Surveillance is largely unaccountable
41:35 Why did this machinery of surveillance evolve?
42:34 Shadow activities
43:37 Choice of safety vs liberty (privacy)
44:26 How can this dichotomy be rephrased?
45:23 Revisit supply chains and lawful access
46:37 Why the government broke all security at all levels
47:17 The encryption wars and export controls
48:16 Front-door mechanism replaced by back door
49:21 The world we could live in
50:03 What would responding to requests look like?
50:50 Apple may be leaving “bug doors” intentionally
52:23 Apple under same constraints as government
52:51 There are backdoors everywhere
53:45 China and the US need to both trust AI tech
55:10 Technical debt of past unsolved problems
55:53 Actually a governance debt (socio-technical)
56:38 Provably safe or guaranteed safe AI
57:19 Requirement: Governance plus lawful access
58:46 Tor, Signal, etc. are often wishful thinking
59:26 Can restructure incentives
59:51 Restrict proliferation without dragnet?
1:00:36 Physical plus focused surveillance
1:02:21 Dragnet surveillance since the telegraph
1:03:07 We have to build a digital dog
1:04:14 The dream of cyber libertarians
1:04:54 Is the government out to get you?
1:05:55 Targeted surveillance is more important
1:06:57 A proper warrant process leveraging citizens
1:08:43 Just like procedures for elections
1:09:41 Use democratic system during chip fabrication
1:10:49 How democracy can help with technical challenges
1:11:31 Current world: anarchy between countries
1:12:25 Only those with the most guns and money rule
1:13:19 Everyone needing to spend a lot on military
1:14:04 AI also engages states in a race
1:15:16 Anarchy is not a given: US example
1:16:05 The forming of the United States
1:17:24 This federacy model could apply to AI
1:18:03 Same idea was even proposed by Sam Altman
1:18:54 How can we maximize the chances of success?
1:19:46 Part 3: How to actually form international treaties
1:20:09 Calling for a world government scares people
1:21:17 Genuine risk of global dictatorship
1:21:45 We need a world /federal/ democratic government
1:23:02 Why people are not outspoken
1:24:12 Isn’t it hard to get everyone on one page?
1:25:20 Moving from anarchy to a social contract
1:26:11 Many states have very little sovereignty
1:26:53 Different religions didn’t prevent common ground
1:28:16 China and US political systems similar
1:30:14 Coming together, values could be better
1:31:47 Critical mass of states
1:32:19 The Philadelphia convention example
1:32:44 Start with say seven states
1:33:48 Date of the US constitutional convention
1:34:42 US and China both invited but only together
1:35:43 Funding will make a big difference
1:38:36 Lobbying to US and China
1:38:49 Conclusion
1:39:33 Outro