
Meta signs $10bn+ cloud deal with Google

Meta has signed a major cloud deal with Google, worth more than $10 billion.

First reported by The Information, the six-year deal will see Meta use Google’s cloud computing services.

Sources told The Information that the deal is mostly for AI infrastructure. Google did not respond to requests for comment.

Meta and Google have long been rivals in online advertising, and this is not the first time in recent months that Google has signed a cloud contract with a company it partly competes with.

Google Cloud signed a contract with OpenAI in June of this year, enabling the AI company to use its compute resources, including Google’s TPUs; some of that capacity is outsourced to CoreWeave.



HUGE: Elon’s “Macrohard” AI — His CRAZIEST Idea Ever

Questions to inspire discussion.

Industry Disruption.

🏢 Q: How might traditional companies be affected by AI simulations? A: Traditional firms like Microsoft could see their valuations drop by 50% if undercut by AI clones, and the tech industry could shed millions of jobs, potentially leading to recessions or greater inequality.

🤖 Q: What is the potential scale of AI company simulations? A: AI-simulated companies like “Macrohard” could become real entities, operating at a fraction of the cost of traditional companies and disrupting markets 10 times faster and on a larger scale than the internet’s impact on retail.

Regulatory Landscape.

📊 Q: How might governments respond to AI-simulated companies? A: Governments may implement regulations on AI companies to slow innovation, potentially creating monopolies that regulators would later need to break up, further disrupting markets.

The AI revolution: facilitator or terminator?

We’ve all heard the arguments – “AI will supercharge the economy!” versus “No, AI is going to steal all our jobs!” The reality lies somewhere in between. Generative AI is a powerful tool that will boost productivity, but it won’t trigger mass unemployment overnight, and it certainly isn’t Skynet (if you know, you know). The International Monetary Fund (IMF) estimates that “AI will affect almost 40% of jobs around the world, replacing some and complementing others”. In practice, that means a large portion of workers will see some of their tasks automated by AI without necessarily losing their entire job. Even jobs heavily exposed to AI still require human input and oversight: AI might draft a report, but someone still has to refine the ideas and make the decisions.

From an economic perspective, AI will undoubtedly be a game changer. Nobel laureate Michael Spence wrote in September 2024 that AI “has the potential not only to reverse the downward productivity trend, but over time to produce a major sustained surge in productivity.” In other words, AI could usher in a new era of faster growth by enabling more output from the same labour and capital. Crucially, AI often works best in collaboration with existing worker skillsets: in most industries it can take on repetitive or time-consuming work (like basic coding or form-filling), letting people concentrate on higher-value work. In short, AI can raise output per worker without making workers redundant en masse. That, in turn, could raise GDP over time; if the growth comes in a non-inflationary environment, it could, for example, outpace the growth of US debt.

Some jobs will benefit more than others. Knowledge workers who harness AI – e.g. an analyst using AI to sift data – can become far more productive (and valuable), and new roles (AI auditors, prompt engineers) are already emerging. Conversely, jobs heavy on routine information processing are already under pressure. Translation is often cited as the job most at risk: today’s AI can already handle c.98% of a translator’s typical tasks, and is gradually conquering the more technically demanding challenge of real-time translation.

Normal Computing tapes out world’s first thermodynamic chip for energy-efficient AI workloads

Chip startup Normal Computing has announced the successful tape-out of the world’s first thermodynamic semiconductor, CN101.

Designed to support AI and HPC workloads, Normal Computing describes the CN101 as a “physics-based ASIC” that harnesses the “intrinsic dynamics of physical systems… achieving up to 1,000x energy consumption efficiency.”

According to the company, by exploiting natural dynamics such as fluctuations, dissipation, and stochasticity (inherent randomness), the chip can compute far more efficiently than traditional semiconductors.

CN101 specifically targets two tasks central to AI workloads: linear algebra and matrix operations, and stochastic sampling via lattice random walks. The company says it is the first step on its roadmap toward commercializing thermodynamic computing at scale and delivering significantly more AI performance per watt, rack, and dollar.
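To make the linear-algebra claim concrete, here is a minimal software sketch of the principle behind thermodynamic linear algebra as Normal Computing has described it in its research papers: let a noisy, dissipative system relax to equilibrium, then read the answer off the equilibrium statistics. The overdamped Langevin dynamics dx = -(Ax - b)dt + sqrt(2/β)dW have a Gaussian stationary distribution with mean A⁻¹b, so time-averaging samples solves Ax = b. Everything below (the discretization, the parameters) is an illustrative assumption about the method, not a detail of CN101 itself.

import numpy as np

rng = np.random.default_rng(0)

def thermo_solve(A, b, dt=1e-3, beta=1e4, steps=200_000, burn_in=50_000):
    """Estimate A^{-1} b by time-averaging an Euler-Maruyama simulation
    of the Langevin SDE dx = -(A x - b) dt + sqrt(2/beta) dW."""
    x = np.zeros(b.shape[0])
    acc = np.zeros_like(x)
    for t in range(steps):
        noise = rng.standard_normal(x.shape[0]) * np.sqrt(2 * dt / beta)
        x += -(A @ x - b) * dt + noise      # one Euler-Maruyama step
        if t >= burn_in:                    # discard pre-equilibrium samples
            acc += x
    return acc / (steps - burn_in)

# A must be symmetric positive definite for the dynamics to equilibrate.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(thermo_solve(A, b))     # ~ [0.2, 0.4]
print(np.linalg.solve(A, b))  # exact solution for comparison

The point of doing this in hardware rather than software is that the physics supplies the noise and the relaxation for free; the simulation above spends CPU cycles on what a thermodynamic chip would get from its own device physics.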

Universal logical quantum photonic neural network processor via cavity-assisted interactions

Encoding quantum information within bosonic modes offers a promising direction for hardware-efficient and fault-tolerant quantum information processing. However, achieving high-fidelity universal control over bosonic encodings using native photonic hardware remains a significant challenge. We establish a quantum control framework to prepare and perform universal logical operations on arbitrary multimode multi-photon states using a quantum photonic neural network. Central to our approach is the optical nonlinearity, which is realized through strong light-matter interaction with a three-level Λ atomic system. The dynamics of this passive interaction are asymptotically confined to the single-mode subspace, enabling the construction of deterministic entangling gates and overcoming limitations faced by many nonlinear optical mechanisms. Using this nonlinearity as the element-wise activation function, we show that the proposed architecture is able to deterministically prepare a wide array of multimode multi-photon states, including essential resource states. We demonstrate universal code-agnostic control of bosonic encodings by preparing and performing logical operations on symmetry-protected error-correcting codes. Our architecture is not constrained by symmetries imposed by evolution under a system Hamiltonian such as purely χ⁽²⁾ and χ⁽³⁾ processes, and is naturally suited to implement non-transversal gates on photonic logical qubits. Additionally, we propose an error-correction scheme based on non-demolition measurements that is facilitated by the optical nonlinearity as a building block. Our results pave the way for near-term quantum photonic processors that enable error-corrected quantum computation, and can be achieved using present-day integrated photonic hardware.


Basani, J.R., Niu, M.Y. & Waks, E. Universal logical quantum photonic neural network processor via cavity-assisted interactions. npj Quantum Inf 11, 142 (2025). https://doi.org/10.1038/s41534-025-01096-9
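As a rough illustration of the layer structure the abstract describes – a linear optical mesh followed by an element-wise single-mode nonlinearity used as the activation function – the toy simulation below works in a truncated two-mode Fock space. Note the nonlinearity here is a generic Kerr-type phase, a hypothetical stand-in chosen for simplicity; the paper’s actual activation is the cavity-assisted three-level Λ-system interaction.

import numpy as np
from scipy.linalg import expm

D = 4                                      # Fock-space truncation per mode
a = np.diag(np.sqrt(np.arange(1, D)), 1)   # single-mode annihilation operator
I = np.eye(D)
n = a.conj().T @ a                         # number operator

a1, a2 = np.kron(a, I), np.kron(I, a)      # two-mode operators

def beamsplitter(theta):
    # Linear layer: exp(theta * (a1† a2 - a1 a2†)) is a two-mode unitary.
    H = a1.conj().T @ a2 - a1 @ a2.conj().T
    return expm(theta * H)

def activation(phi):
    # Element-wise nonlinearity: Kerr phase exp(i phi n(n-1)/2) on each mode
    # (an illustrative stand-in for the paper's Lambda-system nonlinearity).
    K = expm(1j * phi * n @ (n - I) / 2)
    return np.kron(K, K)

# One network layer acting on |1,1> (one photon in each mode).
psi = np.zeros(D * D, dtype=complex)
psi[1 * D + 1] = 1.0
psi = activation(np.pi / 4) @ beamsplitter(np.pi / 4) @ psi
print(np.round(np.abs(psi) ** 2, 3))       # output photon-number distribution

A quick sanity check: at theta = π/4 the |1,1⟩ output amplitude vanishes (Hong-Ou-Mandel interference), with all probability on |2,0⟩ and |0,2⟩ – exactly the kind of multi-photon interference the linear layers exploit.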

