Humans excel at adapting to new situations, while machines often stumble.

Tesla continues to solidify its momentum in the electric vehicle market through significant technological innovation, ongoing expansion, and advances in autonomous driving and AI-powered services.
## Questions to inspire discussion

### Robo Taxi Service Expansion
🚕 Q: How has Tesla’s robo taxi service in California expanded its operations? A: Tesla’s robo taxi service now operates until 2 a.m. with only 4 hours of downtime, indicating operational readiness and confidence in the system’s performance.
🌎 Q: What hiring moves suggest Tesla’s plans for global robo taxi expansion? A: Tesla is hiring a senior software engineer in Fremont to develop backend systems for real-time pricing and fees for robo taxi rides worldwide.
🌙 Q: How is Tesla preparing for expanded robo taxi coverage across the US? A: Tesla is hiring autopilot data collection supervisors for night and afternoon shifts in Arizona, Florida, Texas, and Nevada, indicating planned expansion of services.
A British startup has installed New York City’s first quantum computer at a data center in Manhattan.
Oxford Quantum Circuits has placed the system at a data center run by Digital Realty Trust in the Google building in Chelsea, billing the technology to customers of the site as a means of running artificial intelligence programs faster and more efficiently. Oxford Quantum Chief Executive Officer Gerald Mullally said he expects his firm to spend tens of millions of dollars over three to five years, in part to buy Nvidia Corp. chips to integrate into it. He declined to provide the exact costs of the computer.
To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.
Longstanding relationships with senior executives in the prepaid and gift card industry have provided us with some unprecedented opportunities.
Our engineers are also hard at work building more AI tools and utilities into the user interface and our administrative management dashboard.
It won’t be more than a few weeks before our first distributor is interconnected and begins selling My Instant AI e-PIN codes.
There are many different ways to sell this product. Some distributors will sell it in their online stores, using their own sales and payment engines while pulling PINs from our API in real time as they are sold.
Others will offer a carded product that has a value applied to it and is then activated at checkout in a retail environment.
Some mobile phone and wireless network providers are including a card in the box, and preloading a shortcut to our platform as an app on the phone’s home screen.
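For the first of these channels, where a PIN is pulled from the API at the moment of sale, the flow might look roughly like the sketch below. The endpoint, field names, and credentials here are hypothetical placeholders for illustration only, not the actual My Instant AI API.

```python
# Hypothetical sketch of a distributor's checkout pulling an e-PIN in real time.
# The URL, headers, and response fields are illustrative placeholders only.
import requests

API_URL = "https://api.example.com/v1/epins"  # placeholder endpoint
API_KEY = "distributor-api-key"               # placeholder credential

def fetch_epin(order_id: str, sku: str) -> str:
    """Request a freshly issued PIN for a just-completed sale."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"order_id": order_id, "sku": sku},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["pin_code"]  # delivered to the buyer at checkout

# Example: called by the distributor's payment engine after payment clears.
# pin = fetch_epin(order_id="12345", sku="my-instant-ai-12mo")
```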
Elon Musk has revealed Tesla’s new AI5 and AI6 chips, which will drive the company’s shift toward AI-powered services, enabling significant advances in Full Self-Driving and potentially transforming the self-driving car industry and beyond.
## Questions to inspire discussion

### Tesla’s AI Chip Advancements
🚀 Q: What are the key features of Tesla’s AI5 and AI6 chips? A: Tesla’s AI5 and AI6 chips are inference-first designs, built for high-throughput, efficient processing of AI models on devices such as cars, Optimus, and Grok voice agents, and they are 40x faster than previous models.
💻 Q: How do Tesla’s AI5 and AI6 chips compare to previous models? A: Tesla’s AI5 chip is a 40x improvement over AI4, with 500 TOPS expanding to 5,000 TOPS, enabling excellent performance in full self-driving and Optimus humanoid robots.
🧠 Q: What is the significance of softmax in Tesla’s AI5 chip? A: AI5 is designed to run softmax natively in a few steps, unlike AI4, which relies on the CPU and runs softmax in 40 steps in emulation mode.
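For context, softmax is the standard operation that converts a vector of raw scores (logits) into a probability distribution, and it appears throughout transformer attention and output layers. A minimal, numerically stable version is sketched below; this illustrates only the math itself, not how any Tesla chip implements it.

```python
# Numerically stable softmax: subtracting the max avoids overflow in exp().
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    shifted = logits - logits.max()   # stabilize before exponentiation
    exps = np.exp(shifted)
    return exps / exps.sum()          # normalize to a probability distribution

print(softmax(np.array([2.0, 1.0, 0.1])))  # e.g. [0.659 0.242 0.099]
```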
“Cancer and other complex diseases arise from the interplay of various biological factors, for example, at the DNA, RNA, and protein levels,” explains the author. Characteristic changes at these levels — such as the amount of HER2 protein produced in breast or stomach cancer — are often recorded, but typically not yet analyzed in conjunction with all other therapy-relevant factors.
This is where Flexynesis comes in. “Comparable tools so far have often been either difficult to use, or only useful for answering certain questions,” says the author. “Flexynesis, by contrast, can answer various medical questions at the same time: for example, what type of cancer is involved, what drugs are particularly effective in this case, and how these will affect the patient’s chances of survival.” The tool also helps identify suitable biomarkers for diagnosis and prognosis, or — if metastases of unknown origin are discovered — to identify the primary tumor. “This makes it easier to develop comprehensive and personalized treatment strategies for all kinds of cancer patients,” says the author.
Nearly 50 new cancer therapies are approved every year. This is good news. “But for patients and their treating physicians, it is becoming increasingly difficult to keep track and to select the treatment methods from which the people affected — each with their very individual tumor characteristics — will benefit the most,” says the senior author. The researcher has been working for some time on developing tools that use artificial intelligence to make more precise diagnoses and that also determine the best form of therapy tailored to individual patients.
The team has now developed a toolkit called Flexynesis, which does not rely solely on classical machine learning but also uses deep learning to evaluate very different types of data simultaneously — for example, multi-omics data as well as specially processed texts and images, such as CT or MRI scans. “In this way, it enables doctors to make better diagnoses, prognoses, and develop more precise treatment strategies for their patients,” says the author. Flexynesis is described in detail in a paper published in “Nature Communications.”
“We are running multiple translational projects with medical doctors who want to identify biomarkers from multi-omics data that align with disease outcomes,” says the first and co-corresponding author of the publication. “Although many deep-learning based methods have been published for this purpose, most have turned out to be inflexible, tied to specific modeling tasks, or difficult to install and reuse. That gap motivated us to build Flexynesis as a proper toolkit, which is flexible for different modeling tasks and packaged on PyPI, Guix, Docker, Bioconda, and Galaxy, so others can readily apply it in their own pipelines.”
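As a rough illustration of the general idea behind such toolkits (encoding each data type separately, fusing the embeddings, and predicting several clinical targets at once), here is a generic multi-modal sketch in PyTorch. It is not the Flexynesis architecture or API; the input dimensions and task heads are assumptions chosen purely for the example.

```python
# Generic, illustrative sketch of deep-learning "fusion" over two omics layers.
# NOT the Flexynesis implementation -- just the broad multi-task, multi-omics idea.
import torch
import torch.nn as nn

class OmicsEncoder(nn.Module):
    """Encode one omics layer (e.g. RNA or protein) into a small embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)

class MultiOmicsModel(nn.Module):
    def __init__(self, rna_dim: int, protein_dim: int, n_cancer_types: int):
        super().__init__()
        self.rna_enc = OmicsEncoder(rna_dim)
        self.protein_enc = OmicsEncoder(protein_dim)
        fused = 64  # two 32-dim embeddings concatenated
        # Two task heads share one fused representation:
        self.type_head = nn.Linear(fused, n_cancer_types)  # e.g. tumor type (classification)
        self.risk_head = nn.Linear(fused, 1)                # e.g. survival risk score (regression)

    def forward(self, rna, protein):
        z = torch.cat([self.rna_enc(rna), self.protein_enc(protein)], dim=-1)
        return self.type_head(z), self.risk_head(z)

# Toy usage with random tensors standing in for real patient data.
model = MultiOmicsModel(rna_dim=2000, protein_dim=200, n_cancer_types=10)
logits, risk = model(torch.randn(8, 2000), torch.randn(8, 200))
print(logits.shape, risk.shape)  # torch.Size([8, 10]) torch.Size([8, 1])
```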
This Collection supports and amplifies research related to SDG 9 — Industry, Innovation & Infrastructure.
Discovering new materials with customizable and optimized properties, driven either by specific application needs or by fundamental scientific interest, is a primary goal of materials science. Conventionally, the search for new materials is a lengthy and expensive manual process, frequently based on trial and error, requiring the synthesis and characterization of many compositions before a desired material can be found. In recent years this process has been greatly improved by a combination of artificial intelligence and high-throughput approaches. Advances in machine learning for materials science, data-driven materials prediction, autonomous synthesis and characterization, and data-guided high-throughput exploration, can now significantly accelerate materials discovery.
This Collection brings together the latest computational and experimental advances in artificial intelligence, machine learning and data-driven approaches to accelerate high-throughput prediction, synthesis, characterization, optimization, discovery, and understanding of new materials.
Reproducibility is a bedrock of scientific progress. However, it’s remarkably difficult to get reproducible results out of large language models.
For example, you might observe that asking ChatGPT the same question multiple times provides different results. This by itself is not surprising, since getting a result from a language model involves “sampling”, a process that converts the language model’s output into a probability distribution and probabilistically selects a token.
What might be more surprising is that even when we adjust the temperature down to 0 (meaning the LLM always chooses the highest-probability token, which is called greedy sampling, and sampling is therefore theoretically deterministic), LLM APIs are still not deterministic in practice (see past discussions here, here, or here). Even when running inference on your own hardware with an OSS inference library like vLLM or SGLang, sampling still isn’t deterministic (see here or here).
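To make those terms concrete, here is a minimal sketch in plain NumPy (not any particular inference engine) of how temperature affects sampling, including the temperature-0 greedy case. Real serving stacks are far more involved, which is part of why determinism is hard in practice.

```python
# Minimal illustration: at temperature 0, argmax over the logits always picks
# the same token; at temperature > 0, a token is drawn from a distribution.
import numpy as np

def sample_token(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> int:
    if temperature == 0.0:
        # Greedy decoding: always take the highest-probability token.
        return int(np.argmax(logits))
    # Softmax with temperature, then draw a token at random.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])
print([sample_token(logits, 0.0, rng) for _ in range(5)])  # always token 0
print([sample_token(logits, 1.0, rng) for _ in range(5)])  # a mix of tokens
```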