
Snap a photo of your meal, and artificial intelligence instantly tells you its calorie count, fat content, and nutritional value—no more food diaries or guesswork.

This futuristic scenario is now much closer to reality, thanks to an AI system developed by NYU Tandon School of Engineering researchers that promises a new tool for the millions of people who want to manage their weight, diabetes and other diet-related health conditions.

The technology, detailed in a paper presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, uses advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content, including calories, protein, carbohydrates and fat.
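For a feel for the pipeline, here is a minimal sketch of that two-step idea (recognize the food in the photo, then look up its nutrition), assuming a generic pretrained classifier and a small illustrative nutrition table; it is not the NYU Tandon model.

```python
# Minimal sketch, not the NYU Tandon system: classify a meal photo with a
# generic pretrained vision model, then look up per-item nutrition in a small
# illustrative table. Real systems would also estimate portion sizes.
import torch
from torchvision import models
from PIL import Image

# Hypothetical nutrition values per typical serving, for illustration only.
NUTRITION = {
    "pizza":        {"calories": 285, "protein_g": 12, "carbs_g": 36, "fat_g": 10},
    "banana":       {"calories": 105, "protein_g": 1,  "carbs_g": 27, "fat_g": 0},
    "cheeseburger": {"calories": 300, "protein_g": 15, "carbs_g": 30, "fat_g": 14},
}

def estimate_nutrition(image_path: str) -> dict:
    weights = models.ResNet50_Weights.DEFAULT          # ImageNet-pretrained
    model = models.resnet50(weights=weights).eval()
    img = weights.transforms()(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    label = weights.meta["categories"][probs.argmax().item()]
    # Fall back gracefully if the predicted class is not in the nutrition table.
    return {"label": label, **NUTRITION.get(label, {})}

print(estimate_nutrition("meal.jpg"))
```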

When someone is traumatically injured, giving them blood products before they arrive at the hospital—such as at the scene or during emergency transport—can improve their likelihood of survival and recovery. But patients with certain traumatic injuries have better outcomes when administered specific blood components.

University of Pittsburgh School of Medicine and UPMC scientist-surgeons report in Cell Reports Medicine that giving blood components that have been separated from the other parts of donated blood improves outcomes in patients with traumatic brain injury (TBI) or shock, whereas giving unseparated or “whole” blood may be best for patients with traumatic bleeding.

Together, Pitt and UPMC have become home to the largest clinical trials research consortium for early trauma care in the U.S., allowing the research to benefit both soldiers and civilians.

An AI-powered robot that can prepare cups of coffee in a busy kitchen could usher in the next generation of intelligent machines, a study suggests.

The research, published in the journal Nature Machine Intelligence, was led by Ruaridh Mon-Williams, a Ph.D. student jointly at the University of Edinburgh, Massachusetts Institute of Technology and Princeton University.

Using a combination of cutting-edge AI, sensitive sensors and fine-tuned motor skills, the robot can interact with its surroundings in more human-like ways than ever before, researchers say.

📣 Just announced at #GTC25: NVIDIA will be open-sourcing cuOpt, an AI-powered decision optimization engine.

➡️ https://nvda.ws/43REYuW

By open-sourcing this powerful solver, developers can harness real-time optimization at an unprecedented scale for free.

The best-known AI applications are all about predictions — whether forecasting weather or generating the next word in a sentence. But prediction is only half the challenge. The real power comes from acting on information in real time.

That’s where cuOpt comes in.

CuOpt dynamically evaluates billions of variables — inventory levels, factory output, shipping delays, fuel costs, risk factors and regulations — and delivers the best move in near real time.

Unlike traditional optimization methods that navigate solution spaces sequentially or with limited parallelism, cuOpt taps into GPU acceleration to evaluate millions of possibilities simultaneously, finding optimal solutions dramatically faster on specific problem instances.

It doesn’t replace existing techniques — it enhances them. By working alongside traditional solvers, cuOpt rapidly identifies high-quality solutions, helping CPU-based models discard bad paths faster.
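To make the class of problem concrete, here is a toy decision-optimization example, a two-factory, three-store shipping plan, solved with SciPy's CPU linear-programming solver. It illustrates the kind of LP/routing problem cuOpt targets, but it does not use the cuOpt API, and all numbers are made up.

```python
# Toy transportation problem solved on the CPU with SciPy; illustrative only,
# not the cuOpt API. cuOpt accelerates this class of problem on GPUs.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4, 6, 9],    # $/unit from factory 0 to stores 0..2
                 [5, 3, 7]])   # $/unit from factory 1 to stores 0..2
supply = [80, 70]              # units available at each factory
demand = [50, 60, 40]          # units required at each store

c = cost.flatten()             # decision variables: x[i, j] = units shipped i -> j
A_ub = np.zeros((2, 6))        # supply constraints: sum_j x[i, j] <= supply[i]
A_ub[0, :3] = 1
A_ub[1, 3:] = 1
A_eq = np.zeros((3, 6))        # demand constraints: sum_i x[i, j] == demand[j]
for j in range(3):
    A_eq[j, [j, j + 3]] = 1

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(2, 3))     # optimal shipment plan
print(res.fun)                 # total shipping cost
```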

There is a growing desire to integrate rapidly advancing artificial intelligence (AI) technologies into Department of Defense (DoD) systems. AI may give battlefield advantage by helping improve the speed, quality, and accuracy of decision-making while enabling autonomy and assistive automation.

Due to the statistical nature of machine learning, a significant amount of work has focused on ensuring the robustness of AI-enabled systems at inference time to natural degradations in performance caused by data distribution shifts (for example, from a highly dynamic deployment environment).

However, as early as 2014, researchers demonstrated the ability to manipulate AI given adversary control of the input. Additional work has confirmed the theoretical risks of data poisoning, physically constrained adversarial patches for evasion, and model stealing attacks. These attacks are typically tested in simulated or physical environments with relatively pristine control compared to what might be expected on a battlefield.
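The 2014 work referenced above appears to be the adversarial-example results; a minimal sketch of that style of evasion attack, the fast gradient sign method, is shown below. The toy model, inputs, and perturbation budget are placeholders for illustration.

```python
# Minimal fast-gradient-sign-method (FGSM) sketch: nudge an input so a
# differentiable classifier's loss on the true label increases. The toy
# linear model and epsilon below are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x, bounded by epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # step in the loss-increasing direction
    return x_adv.clamp(0, 1).detach()

model = torch.nn.Linear(64, 10)           # stand-in classifier on 8x8 inputs
x = torch.rand(1, 64)
y = torch.tensor([3])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())            # perturbation stays within epsilon
```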

Researchers have enabled a man who is paralyzed to control a robotic arm through a device that relays signals from his brain to a computer.

He was able to grasp, move and drop objects just by imagining himself performing the actions.

The device, known as a brain-computer interface (BCI), worked for a record 7 months without needing to be adjusted. Until now, such devices have only worked for a day or two.

The BCI relies on an AI model that can adjust to the small changes that take place in the brain as a person repeats a movement – or in this case, an imagined movement – and learns to do it in a more refined way.
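As a rough illustration of that adaptive-decoding idea, here is a toy sketch of a linear decoder that keeps nudging its weights as the underlying neural mapping slowly drifts; the simulated signals, dimensions, and update rule are assumptions for illustration, not the study's model.

```python
# Toy sketch, not the study's decoder: a linear map from simulated neural
# features to 2-D movement intent that adapts online (LMS-style updates),
# so it tracks slow drift in the underlying neural tuning.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_dims = 96, 2                         # electrodes, movement dims
W_true = rng.normal(size=(n_dims, n_channels))     # "true" neural-to-movement map
W = np.zeros((n_dims, n_channels))                 # decoder weights
lr = 1e-3

for t in range(5000):
    W_true += 0.001 * rng.normal(size=W_true.shape)   # slow neural drift
    features = rng.normal(size=n_channels)            # neural features this time bin
    target = W_true @ features                        # intended movement
    pred = W @ features                               # decoded movement
    W += lr * np.outer(target - pred, features)       # small corrective update

print(np.linalg.norm(W - W_true))   # decoder error stays small despite drift
```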

“This blending of learning between humans and AI is the next phase for these brain-computer interfaces,” said the neurologist. “It’s what we need to achieve sophisticated, lifelike function.”

Science, Policy And Advocacy For Impactful And Sustainable Health Ecosystems — Dr. Catharine Young, Ph.D. — fmr. Assistant Director of Cancer Moonshot Policy and International Engagement, White House Office of Science and Technology Policy (OSTP)


Dr. Catharine Young recently served as Assistant Director of Cancer Moonshot Policy and International Engagement at the White House Office of Science and Technology Policy (https://www.whitehouse.gov/ostp/), where she worked to advance the Cancer Moonshot (https://www.cancer.gov/research/key-i…), whose mission is to decrease the number of cancer deaths by 50% over the next 25 years.

Dr. Young’s career has spanned a variety of sectors, including academia, non-profits, biotech, and foreign government, all with a focus on advancing science.

Dr. Young previously served as Executive Director of the SHEPHERD Foundation, where she championed rare cancer research and drove critical policy changes. Her work has also included fostering interdisciplinary collaborations and advancing the use of AI, data sharing, and clinical trial reform to accelerate cancer breakthroughs.

Dr. Young’s leadership in diplomacy and innovation includes roles such as Senior Director of Science Policy at the Biden Cancer Initiative and Senior Science and Innovation Policy Advisor at the British Embassy, where she facilitated international agreements to enhance research collaborations.

All eyes will be on Nvidia’s GPU Technology Conference this week, where the company is expected to unveil its next artificial intelligence chips. On the company’s fiscal fourth-quarter earnings call, Nvidia chief executive Jensen Huang said he will share more about the upcoming Blackwell Ultra AI chip, the Vera Rubin platform, and plans for subsequent products at the annual conference, known as GTC.

On that call, Huang said Nvidia has some really exciting things to share at GTC about enterprise and agentic AI, reasoning models, and robotics. The chipmaker introduced its highly anticipated Blackwell AI platform at last year’s GTC; according to Huang, the platform has since ramped up to large-scale production and generated billions of dollars in sales in its first quarter.

Analysts at Bank of America said in a note on Wednesday that they expect Nvidia to present attractive albeit well-expected updates on Blackwell Ultra, with a focus on inferencing for reasoning models, which major firms such as OpenAI and Google are racing to develop.

The analysts also anticipate the chipmaker to share more information on its next-generation networking technology, and long-term opportunities in autonomous cars, physical AI such as robotics, and quantum computing.

In January, Nvidia announced that it would host its first Quantum Day at the GTC, and have executives from D-Wave and Rigetti discuss where quantum computing is headed. The company added that it will unveil quantum computing advances shortening the timeline to useful applications.

The same month, quantum computing stocks tanked after Huang expressed doubts over the technology’s near-term potential during the chipmaker’s financial analyst day at the Consumer Electronics Show, saying useful quantum computers are likely decades away.

Large Language Models (LLMs) both threaten the uniqueness of human social intelligence and promise opportunities to better understand it. In this talk, I evaluate the extent to which distributional information learned by LLMs allows them to approximate human behavior on tasks that appear to require social intelligence. In the first half, I compare human and LLM responses in experiments designed to measure theory of mind—the ability to represent and reason about the mental states of other agents. In the second half, I present the results of an evaluation of LLMs using the Turing test, which measures a machine’s ability to imitate humans in a multi-turn social interaction.
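As a rough illustration of how such an evaluation can be run, here is a minimal harness that scores a language model on one classic false-belief item; the API client, model name, and single test item are placeholders rather than the talk's materials.

```python
# Illustrative theory-of-mind check: ask a chat model one false-belief
# question and grade the answer. Model id and item are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ITEM = {
    "story": ("Sally puts her ball in the basket and leaves the room. "
              "While she is away, Anne moves the ball to the box. Sally returns."),
    "question": "Where will Sally look for her ball first? Answer in one word.",
    "expected": "basket",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model id
    messages=[{"role": "user", "content": f"{ITEM['story']}\n{ITEM['question']}"}],
)
answer = response.choices[0].message.content.strip().lower()
print("correct" if ITEM["expected"] in answer else "incorrect", "->", answer)
```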

Cameron Jones recently graduated with a PhD in Cognitive Science from the Language and Cognition Lab at UC San Diego. His work focuses on comparing humans and Large Language Models (LLMs) to learn more about how each of those systems works. He is interested in the extent to which LLMs can explain human behavior that appears to rely on world knowledge, reasoning, and social intelligence. In particular, he is interested in whether LLMs can approximate human social behavior, for instance in the Turing test, or by persuading or deceiving human interlocutors.

https://camrobjones.com/


Reinforcement learning (RL) has become central to advancing Large Language Models (LLMs), empowering them with improved reasoning capabilities necessary for complex tasks. However, the research community faces considerable challenges in reproducing state-of-the-art RL techniques due to incomplete disclosure of key training details by major industry players. This opacity has limited the progress of broader scientific efforts and collaborative research.

Researchers from ByteDance, Tsinghua University, and the University of Hong Kong recently introduced DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization), an open-source large-scale reinforcement learning system designed to enhance the reasoning abilities of Large Language Models. The DAPO system seeks to close the reproducibility gap by openly sharing all algorithmic details, training procedures, and datasets. Built on the verl framework, DAPO includes training code and a thoroughly prepared dataset, DAPO-Math-17K, specifically designed for mathematical reasoning tasks.

DAPO’s technical foundation includes four core innovations aimed at resolving key challenges in reinforcement learning. The first, “Clip-Higher,” addresses the issue of entropy collapse, a situation where models prematurely settle into limited exploration patterns. By carefully managing the clipping ratio in policy updates, this technique encourages greater diversity in model outputs. “Dynamic Sampling” counters inefficiencies in training by dynamically filtering samples based on their usefulness, thus ensuring a more consistent gradient signal. The “Token-level Policy Gradient Loss” offers a refined loss calculation method, emphasizing token-level rather than sample-level adjustments to better accommodate varying lengths of reasoning sequences. Lastly, “Overlong Reward Shaping” introduces a controlled penalty for excessively long responses, gently guiding models toward concise and efficient reasoning.
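Based on that description, a schematic sketch of the token-level clipped objective with the asymmetric Clip-Higher range might look like the following; the epsilon values and tensor layout are illustrative, not taken from the released verl implementation.

```python
# Schematic DAPO-style policy loss: token-level averaging plus an asymmetric
# clipping range (Clip-Higher). Values and shapes are illustrative.
import torch

def dapo_policy_loss(logp_new, logp_old, advantages, mask,
                     eps_low=0.2, eps_high=0.28):
    """
    logp_new, logp_old: per-token log-probs, shape (batch, seq_len)
    advantages:         per-token advantages, shape (batch, seq_len)
    mask:               1.0 for response tokens, 0.0 for prompt/padding
    """
    ratio = torch.exp(logp_new - logp_old)
    # Clip-Higher: a larger upper bound (1 + eps_high) gives low-probability
    # tokens room to grow, countering entropy collapse.
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    per_token = -torch.min(ratio * advantages, clipped * advantages)
    # Token-level loss: average over all valid tokens, so long and short
    # responses contribute in proportion to their length.
    return (per_token * mask).sum() / mask.sum().clamp(min=1)

# Toy usage with random tensors.
B, T = 2, 8
loss = dapo_policy_loss(torch.randn(B, T), torch.randn(B, T),
                        torch.randn(B, T), torch.ones(B, T))
print(float(loss))
```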