
📣 Just announced at [#GTC25](https://www.facebook.com/hashtag/gtc25): NVIDIA will be open-sourcing cuOpt, an AI-powered decision optimization engine.

âžĄïž [ https://nvda.ws/43REYuW](https://nvda.ws/43REYuW open-sourcing this powerful solver, developers can harness real-time optimization at an unprecedented scale for free.

The best-known AI applications are all about predictions — whether forecasting weather or generating the next word in a sentence. But prediction is only half the challenge. The real power comes from acting on information in real time.

That’s where cuOpt comes in.

cuOpt dynamically evaluates billions of variables — inventory levels, factory output, shipping delays, fuel costs, risk factors and regulations — and delivers the best move in near real time.

Unlike traditional optimization methods that navigate solution spaces sequentially or with limited parallelism, cuOpt taps into GPU acceleration to evaluate millions of possibilities simultaneously — finding optimal solutions dramatically faster on specific problem instances.
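To make the parallel-evaluation idea concrete, here is a minimal NumPy sketch. It is illustrative only: this is not the cuOpt API, and `cost` is a hypothetical objective function.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(solutions):
    # Hypothetical objective: scores a whole batch of candidate
    # solutions in one vectorized call.
    return np.sum(solutions ** 2, axis=1)

# Sequential-style search evaluates one candidate at a time...
best_sequential = min(cost(rng.normal(size=(1, 64)))[0] for _ in range(10_000))

# ...while a GPU-style batched search scores an entire population at once.
population = rng.normal(size=(10_000, 64))
best_parallel = cost(population).min()
```

On a GPU, the batched evaluation maps naturally onto thousands of cores, which is the source of the speedups described above.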

It doesn’t replace existing techniques — it enhances them. By working alongside traditional solvers, cuOpt rapidly identifies high-quality solutions, helping CPU-based models discard bad paths faster.

There is a growing desire to integrate rapidly advancing artificial intelligence (AI) technologies into Department of Defense (DoD) systems. AI may give battlefield advantage by helping improve the speed, quality, and accuracy of decision-making while enabling autonomy and assistive automation.

Due to the statistical nature of machine learning, a significant amount of work has focused on ensuring the robustness of AI-enabled systems at inference time to natural degradations in performance caused by data distribution shifts (for example, from a highly dynamic deployment environment).

However, as early as 2014, researchers demonstrated the ability to manipulate AI given adversary control of the input. Additional work has confirmed the theoretical risks of data poisoning, physically constrained adversarial patches for evasion, and model stealing attacks. These attacks are typically tested in simulated or physical environments with relatively pristine control compared to what might be expected on a battlefield.
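The 2014 result refers to adversarial examples. As a rough illustration, here is a minimal sketch of the fast gradient sign method (FGSM), assuming a hypothetical PyTorch image classifier `model` and a normalized input tensor; it is not tied to any specific DoD system or to the cited studies.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """One-step evasion attack: nudge each input pixel in the direction
    that most increases the classifier's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # small, often imperceptible shift
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

A perturbation this small is typically invisible to a human observer yet can flip the model's prediction, which is why adversary control of the input is treated as a first-class threat.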

Researchers have enabled a man who is paralyzed to control a robotic arm through a device that relays signals from his brain to a computer.

He was able to grasp, move and drop objects just by imagining himself performing the actions.

The device, known as a brain-computer interface (BCI), worked for a record 7 months without needing to be adjusted. Until now, such devices have only worked for a day or two.

The BCI relies on an AI model that can adjust to the small changes that take place in the brain as a person repeats a movement – or in this case, an imagined movement – and learns to do it in a more refined way.
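As a rough sketch of that idea (a toy model, not the study's actual decoder), an adaptive decoder can take small corrective steps so its weights track slow drift in the neural features:

```python
import numpy as np

class AdaptiveDecoder:
    """Toy linear decoder that tracks slow drift in neural recordings.

    Illustrative only; real BCI decoders are far more sophisticated.
    """

    def __init__(self, n_channels, n_outputs, lr=1e-3):
        self.W = np.zeros((n_channels, n_outputs))
        self.lr = lr

    def predict(self, features):
        # Map neural features to, e.g., intended hand velocity.
        return features @ self.W

    def update(self, features, target):
        # A small gradient step on the squared error keeps the decoder
        # aligned as neural tuning drifts from session to session.
        error = target - self.predict(features)
        self.W += self.lr * np.outer(features, error)
```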

“This blending of learning between humans and AI is the next phase for these brain-computer interfaces,” said the neurologist. “It’s what we need to achieve sophisticated, lifelike function.”

Science, Policy And Advocacy For Impactful And Sustainable Health Ecosystems — Dr. Catharine Young, Ph.D. — fmr. Assistant Director of Cancer Moonshot Policy and International Engagement, White House Office of Science and Technology Policy (OSTP)


Dr. Catharine Young, Ph.D. recently served as Assistant Director of Cancer Moonshot Policy and International Engagement at the White House Office of Science and Technology Policy (https://www.whitehouse.gov/ostp/), where she worked to advance the Cancer Moonshot, with a mission to decrease the number of cancer deaths by 50% over the next 25 years.

Dr. Young’s career has spanned a variety of sectors, including academia, non-profit, biotech, and foreign government, all with a focus on advancing science.

Dr. Young previously served as Executive Director of the SHEPHERD Foundation, where she championed rare cancer research and drove critical policy changes. Her work has also included fostering interdisciplinary collaborations and advancing the use of AI, data sharing, and clinical trial reform to accelerate cancer breakthroughs.

Dr. Young’s leadership in diplomacy and innovation includes roles such as Senior Director of Science Policy at the Biden Cancer Initiative and Senior Science and Innovation Policy Advisor at the British Embassy, where she facilitated international agreements to enhance research collaborations.

All eyes will be on Nvidia’s GPU Technology Conference this week, where the company is expected to unveil its next artificial intelligence chips. During the company’s fiscal fourth-quarter earnings call, Nvidia chief executive Jensen Huang said he would share more about the upcoming Blackwell Ultra AI chip, the Vera Rubin platform, and plans for subsequent products at the annual conference, known as the GTC.

On the earnings call, Huang said Nvidia has some really exciting things to share at the GTC about enterprise and agentic AI, reasoning models, and robotics. The chipmaker introduced its highly anticipated Blackwell AI platform at last year’s GTC; according to Huang, Blackwell has since ramped into large-scale production and generated billions of dollars in sales in its first quarter.

Analysts at Bank of America said in a note on Wednesday that they expect Nvidia to present attractive albeit well-expected updates on Blackwell Ultra, with a focus on inferencing for reasoning models, which major firms such as OpenAI and Google are racing to develop.

The analysts also anticipate the chipmaker to share more information on its next-generation networking technology, and long-term opportunities in autonomous cars, physical AI such as robotics, and quantum computing.

In January, Nvidia announced that it would host its first Quantum Day at the GTC, with executives from D-Wave and Rigetti discussing where quantum computing is headed. The company added that it would unveil quantum computing advances that shorten the timeline to useful applications.

The same month, quantum computing stocks tanked after Huang expressed doubts over the technology’s near-term potential during the chipmaker’s financial analyst day at the Consumer Electronics Show, saying useful quantum computers are likely decades away.

Large Language Models (LLMs) both threaten the uniqueness of human social intelligence and promise opportunities to better understand it. In this talk, I evaluate the extent to which distributional information learned by LLMs allows them to approximate human behavior on tasks that appear to require social intelligence. In the first half, I will compare human and LLM responses in experiments designed to measure theory of mind—the ability to represent and reason about the mental states of other agents. In the second half, I present the results of an evaluation of LLMs using the Turing test, which measures a machine’s ability to imitate humans in a multi-turn social interaction.

Cameron Jones recently graduated with a PhD in Cognitive Science from the Language and Cognition Lab at UC San Diego. His work focuses on comparing humans and Large Language Models (LLMs) to learn more about how each of those systems works. He is interested in the extent to which LLMs can explain human behavior that appears to rely on world knowledge, reasoning, and social intelligence. In particular, he is interested in whether LLMs can approximate human social behavior, for instance in the Turing test, or by persuading or deceiving human interlocutors.

https://camrobjones.com/

https://scholar.google.com/citations?

Reinforcement learning (RL) has become central to advancing Large Language Models (LLMs), empowering them with improved reasoning capabilities necessary for complex tasks. However, the research community faces considerable challenges in reproducing state-of-the-art RL techniques due to incomplete disclosure of key training details by major industry players. This opacity has limited the progress of broader scientific efforts and collaborative research.

Researchers from ByteDance, Tsinghua University, and the University of Hong Kong recently introduced DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization), an open-source large-scale reinforcement learning system designed for enhancing the reasoning abilities of Large Language Models. The DAPO system seeks to bridge the gap in reproducibility by openly sharing all algorithmic details, training procedures, and datasets. Built upon the verl framework, DAPO includes training code and a thoroughly prepared dataset called DAPO-Math-17K, specifically designed for mathematical reasoning tasks.

DAPO’s technical foundation includes four core innovations aimed at resolving key challenges in reinforcement learning. The first, “Clip-Higher,” addresses the issue of entropy collapse, a situation where models prematurely settle into limited exploration patterns. By carefully managing the clipping ratio in policy updates, this technique encourages greater diversity in model outputs. “Dynamic Sampling” counters inefficiencies in training by dynamically filtering samples based on their usefulness, thus ensuring a more consistent gradient signal. The “Token-level Policy Gradient Loss” offers a refined loss calculation method, emphasizing token-level rather than sample-level adjustments to better accommodate varying lengths of reasoning sequences. Lastly, “Overlong Reward Shaping” introduces a controlled penalty for excessively long responses, gently guiding models toward concise and efficient reasoning.
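As a hedged sketch of how two of those pieces fit together ("Clip-Higher" and the token-level loss), here is a PyTorch-style loss function. The tensor shapes and the epsilon values are assumptions for illustration, not a verbatim excerpt of the DAPO codebase.

```python
import torch

def dapo_token_loss(logp_new, logp_old, advantages, mask,
                    eps_low=0.2, eps_high=0.28):
    """Token-level clipped policy-gradient loss with an asymmetric
    ("Clip-Higher") range.

    logp_new, logp_old: (batch, seq) log-probs of the sampled tokens
    advantages:         (batch, seq) per-token advantage estimates
    mask:               (batch, seq) 1 for response tokens, 0 for padding
    """
    ratio = torch.exp(logp_new - logp_old)
    # Raising the upper clip bound above the lower one leaves more room
    # for low-probability tokens to grow, countering entropy collapse.
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    per_token = -torch.minimum(ratio * advantages, clipped * advantages)
    # Token-level averaging: every valid token counts equally, so long
    # reasoning chains are not down-weighted relative to short ones.
    return (per_token * mask).sum() / mask.sum()
```

Dynamic Sampling and Overlong Reward Shaping would sit outside this function, in the sampling loop and the reward computation respectively.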

A team of medical researchers and engineers at Google Research has developed a way to use the front-facing camera on a smartphone to monitor a patient’s heart rate. The team has published a paper on the technology on the arXiv preprint server.

Tracking a patient’s heart rate over time can reveal clues about their cardiovascular health. The most important measurement is resting heart rate (RHR)—people with an above-normal rate are at a higher risk of heart disease and/or stroke. Persistently high rates, the researchers note, can signal a serious problem.

Over the past several years, personal health device makers have developed wearable external heart monitors, such as necklaces or smartwatches. But these devices are expensive. The researchers have found a cheaper alternative—a deep-learning system that analyzes video from the front-facing camera of a smartphone. The system is called PHRM.
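The paper’s PHRM model is a deep-learning system, but the underlying signal is the classic remote-photoplethysmography (rPPG) effect: tiny periodic color changes in facial skin as blood pulses. Here is a minimal baseline sketch of that idea, not Google’s model; `green_means` is an assumed per-frame average of the green channel over a face region.

```python
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate heart rate from per-frame green-channel means of a
    face region in video (a simple rPPG baseline)."""
    signal = green_means - np.mean(green_means)       # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Keep only plausible human heart rates: 40 to 180 beats per minute.
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0
```

Because the FFT’s frequency resolution is the reciprocal of the window length, a 10-second clip resolves heart rate only to about 6 BPM; longer windows give finer estimates.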

In today’s AI news, all eyes will be on Nvidia’s GPU Technology Conference this week, where the company is expected to unveil its next AI chips.

In other advancements, after decades of relying on Google’s ten blue links to find everything, consumers are quickly adapting to a completely new format: AI chatbots that do the searching for them. Adobe analyzed “more than 1 trillion visits to U.S. retail sites” through its analytics platform, and conducted a survey of “more than 5,000 U.S. respondents” to better understand how people are using AI.

Meanwhile, Barry Eggers, Co-Founder and Managing Partner at Lightspeed Venture Partners, is a luminary in the venture capital industry. As the AI landscape continues to evolve, Barry discusses the challenge of building defensible AI startups. Beyond just access to models, AI startups need differentiated data, network effects, and unique applications to maintain a competitive edge.

If you’re thinking of starting a new business and need advice on what to do, your first move should be turning to an AI chatbot tool.

Can’t answer who won the Oscars last year? IBM Fellow Martin Keen explains how RAG (Retrieval-Augmented Generation) and CAG (Cache-Augmented Generation) address knowledge gaps in AI. Discover their strengths in real-time retrieval, scalability, and efficient workflows for smarter AI systems (a minimal sketch of the two patterns follows at the end of this digest).

Is Gemini 2.0 about to revolutionize image generation and editing? In this video, Tim dives deep into Google’s Gemini 2.0.

We close out with Anthropic researchers Ethan Perez, Joe Benton, and Akbir Khan discussing AI control—an approach to managing the risks of advanced AI systems. They discuss real-world evaluations showing how humans struggle to detect deceptive AI, the three major threat models researchers are working to mitigate, and the overall idea of controlling highly-capable AI systems whose goals may differ from our own.
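For the RAG/CAG item above, here is a minimal sketch contrasting the two patterns. `search_index` and `llm` are hypothetical stand-ins, not any specific library’s API.

```python
def rag_answer(question, search_index, llm):
    # RAG: retrieve the most relevant documents at query time,
    # then generate an answer grounded in them.
    docs = search_index.top_k(question, k=3)
    return llm.generate(context="\n".join(docs), prompt=question)

def cag_answer(question, corpus, llm):
    # CAG: preload the whole (small) corpus into the model's context
    # window up front, trading per-query retrieval for a longer,
    # cacheable prompt.
    return llm.generate(context=corpus, prompt=question)
```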

That’s all for today, but AI is moving fast — subscribe and follow for more Neural News.

(https://open.substack.com/pub/remunerationlabs/p/nvidia-is-a)


Since the general AI agent Manus was launched last week, it has spread online like wildfire. And not just in China, where it was developed by the Wuhan-based startup Butterfly Effect. It’s made its way into the global conversation, with influential voices in tech, including Twitter cofounder Jack Dorsey and Hugging Face product lead Victor Mustar, praising its performance. Some have even dubbed it “the second DeepSeek,” comparing it to the earlier AI model that took the industry by surprise for its unexpected capabilities as well as its origin.

Manus is billed as the world’s first general AI agent, using multiple AI models (such as Anthropic’s Claude) to act autonomously on a wide range of tasks.


The new general AI agent from China had some system crashes and server overload—but it’s highly intuitive and shows real promise for the future of AI helpers.