How one national lab is getting its supercomputers ready for the AI age

OAK RIDGE, Tenn. — At Oak Ridge National Laboratory, the government-funded science research facility nestled between Tennessee’s Great Smoky Mountains and Cumberland Plateau that is perhaps best known for its role in the Manhattan Project, two supercomputers are currently rattling away, speedily making calculations meant to help tackle some of the biggest problems facing humanity.

You wouldn’t be able to tell from looking at them. A supercomputer called Summit mostly comprises hundreds of black cabinets filled with cords, flashing lights and powerful graphics processing units, or GPUs. The sound of tens of thousands of spinning disks in the computer’s file systems, together with the air cooling for ancillary equipment, makes the machine sound somewhat like a wind turbine; at least to the naked eye, the contraption doesn’t look much different from any other corporate data center. Its next-door neighbor, Frontier, is set up in a similar manner across the hall, though it’s a little quieter and the cabinets have a different design.

Yet inside those arrays of cabinets are powerful specialty chips and components capable, collectively, of training some of the largest AI models known. Frontier is currently the world’s fastest supercomputer and Summit the seventh-fastest, according to rankings published earlier this month. Now, as the Biden administration sharpens its focus on artificial intelligence and touts a new executive order for the technology, there is growing interest in using these supercomputers to their full AI potential.

Personalized Cancer Medicine: Humans Make Better Treatment Decisions Than AI

Limits of large language models in precision medicine.

Treating cancer is becoming increasingly complex, but it also offers more and more possibilities: the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. Offering patients personalized therapies tailored to their disease requires laborious, time-consuming analysis and interpretation of many different kinds of data. Researchers at Charité — Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. It is one of many projects at Charité analyzing the opportunities AI unlocks in patient care.

If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor.

The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes — genes with the potential to cause cancer — for example.
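To make the interpretation step concrete, here is a minimal, hypothetical sketch of how a generative AI tool might be queried about a tumor’s genetic profile. This is not the Charité team’s protocol: the model name, prompt wording, and variant list are assumptions for illustration, using the public OpenAI Python SDK.

```python
# Hedged sketch: asking an LLM to interpret tumor genetics.
# Model, prompt, and variants are illustrative assumptions,
# not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

variants = ["KRAS G12C", "TP53 R175H"]  # hypothetical tumor profile
prompt = (
    "A colorectal tumor carries these mutations: "
    + ", ".join(variants)
    + ". List targeted-therapy options a molecular tumor board might "
    "discuss, and flag any points of uncertainty."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model for illustration
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Whatever such a query returns, the study’s headline finding is that this output still needs expert review: the human clinicians made the better treatment decisions.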

Procurement in the age of AI

Procurement professionals face challenges more daunting than ever. Recent years’ supply chain disruptions and rising costs, deeply familiar to consumers, have had an outsize impact on business buying. At the same time, procurement teams are under increasing pressure to supply their businesses while also contributing to business growth and profitability.

Deloitte’s 2023 Global Chief Procurement Officer Survey reveals that procurement teams are now being called upon to address a broader range of enterprise priorities. These range from driving operational efficiency (74% of respondents) and enhancing corporate social responsibility (72%) to improving margins via cost reduction (71%).

Teaching Robots to Ask for Help: A Breakthrough in Enhancing Safety and Efficiency

“We want the robot to ask for enough help such that we reach the level of success that the user wants. But meanwhile, we want to minimize the overall amount of help that the robot needs,” said Allen Ren.

A recent study presented at the 7th Annual Conference on Robot Learning examines a new method for teaching robots to ask for further instructions when carrying out tasks, with the goal of improving robotic safety and efficiency. The study was conducted by a team of engineers from Google and Princeton University, and it could help in designing and building better-functioning robots that exhibit human traits such as humility. Engineers have recently begun using large language models, or LLMs, the technology behind ChatGPT, to make robots more human-like, but the approach comes with drawbacks as well.

“Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don’t know,” said Dr. Anirudha Majumdar, who is an assistant professor of mechanical and aerospace engineering at Princeton University and a co-author on the study.

For the study, the researchers tested the method with robotic arms in laboratories in New York City and Mountain View, California. In the experiments, the robots were asked to perform a series of tasks such as placing bowls in the microwave or rearranging items on a counter. The LLM assigned probabilities to the candidate actions based on the instructions, and the robot promptly asked for help whenever no single option was likely enough to act on with confidence. For example, a human would ask the robot to place one of two bowls in the microwave without saying which one; the ambiguity would then trigger the robot to ask for clarification.
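A loose sketch of this decision rule in Python: score each candidate action with the LLM, convert the scores to probabilities, and act only when one option clearly dominates, otherwise ask. The softmax scoring, the fixed 0.8 threshold, and the option names are illustrative assumptions rather than the study’s actual calibrated procedure.

```python
import math

def softmax(scores):
    """Convert raw LLM scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(options, llm_scores, confidence_threshold=0.8):
    """Execute the most likely action, or ask for help when ambiguous.

    The threshold and scoring here are assumptions for illustration,
    not the study's calibrated method.
    """
    probs = softmax(llm_scores)
    best_prob = max(probs)
    best_option = options[probs.index(best_prob)]
    if best_prob >= confidence_threshold:
        return f"EXECUTE: {best_option}"
    # Ambiguity: the probability mass is split across options, so the
    # robot asks the human for clarification instead of guessing.
    plausible = [o for o, p in zip(options, probs) if p > 0.1]
    return "ASK: Did you mean " + " or ".join(plausible) + "?"

# "Place the bowl in the microwave" with two bowls on the counter:
options = ["the metal bowl", "the plastic bowl"]
llm_scores = [1.1, 1.0]  # near-equal LLM scores -> ambiguous instruction
print(decide(options, llm_scores))
# -> ASK: Did you mean the metal bowl or the plastic bowl?
```

With near-equal scores, neither option clears the threshold, so the robot asks; had one score been much higher, it would have acted without help, which is the efficiency half of the trade-off Ren describes.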

Microsoft president says no chance of super-intelligent AI soon

LONDON, Nov 30 (Reuters) — The president of tech giant Microsoft (MSFT.O) said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.

OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company’s board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders.

Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

The Rise And Fall… And Rise Of Sam Altman Has Grave Implications For AI Research And Humanity

Amid OpenAI’s tumultuous week of November 21, a series of uncontrolled outcomes, each with its own significance, few would have predicted the eventual result: the reinstatement of Sam Altman as CEO of OpenAI, with a new board in tow, all in five days.

While the official grounds for Sam Altman’s lack of transparency with the board, and the ultimate distrust that led to his ousting, remain unclear, what was apparent was Microsoft’s complete backing of Altman and the ensuing lack of support for the original board and its decision. It leaves everyone to question why a board that had control of the company was unable to effectively oust an executive despite its members’ legitimate safety concerns. And why was a structure that was put in place to mitigate the risk of unilateral control over artificial general intelligence usurped by an investor, the very entity the structure was designed to guard against?