
Nvidia’s Jensen Huang says AI hallucinations are solvable, artificial general intelligence is 5 years away

Artificial general intelligence (AGI) — often referred to as “strong AI,” “full AI,” “human-level AI” or “general intelligent action” — represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks such as detecting product flaws, summarizing the news, or building you a website, AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia’s annual GTC developer conference, CEO Jensen Huang appeared thoroughly tired of discussing the subject — not least because, he says, he is frequently misquoted on it.

The frequency of the question makes sense: The concept raises existential questions about humanity’s role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI’s decision-making processes and objectives, which might not align with human values or priorities (a concept explored in-depth in science fiction since at least the 1940s). There’s concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.

When the sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity — or at least of the current status quo. Needless to say, AI CEOs aren’t always eager to tackle the subject.

Surgical Robot Outperforms Human Surgeons in Precise Removal of Cancerous Tumors

Surgically removing tumors from sensitive areas, such as the head and neck, poses significant challenges. The goal during surgery is to take out the cancerous tissue while saving as much healthy tissue as possible. This balance is crucial because leaving behind too much cancerous tissue can lead to the cancer’s return or spread. Performing a resection with precise margins — specifically, a 5 mm margin of healthy tissue — is essential but difficult. This margin, roughly the size of a pencil eraser, ensures that all cancerous cells are removed while minimizing damage. Tumors often have clear horizontal edges but unclear vertical boundaries, making depth assessment challenging despite careful pre-surgical planning. Surgeons can mark the horizontal borders but have limited ability to determine the appropriate depth for removal because they cannot see beyond the surface. Additionally, surgeons face obstacles like fatigue and visual limitations, which can affect their performance. Now, a new robotic system has been designed to perform tumor removal from the tongue with precision levels that could match or surpass those of human surgeons.

The Autonomous System for Tumor Resection (ASTR), designed by researchers at Johns Hopkins (Baltimore, MD, USA), translates human guidance into robotic precision. The system builds on the technology behind their Smart Tissue Autonomous Robot (STAR), which previously conducted the first fully autonomous laparoscopic surgery to reconnect two ends of an intestine. ASTR, an advanced dual-arm, vision-guided robotic system, is specifically designed for tissue removal, in contrast to STAR’s focus on tissue connection. In tests using pig tongue tissue, the team demonstrated ASTR’s ability to accurately remove a tumor along with the required 5 mm of surrounding healthy tissue. Having focused on tongue tumors because of their accessibility and relevance to experimental surgery, the team now plans to extend ASTR’s application to internal organs like the kidney, which are more challenging to access.
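To make the 5 mm margin requirement concrete, here is a minimal sketch of a margin check a planner might run: given a marked tumor outline and a planned cut outline, verify that every sampled cut point stays at least 5 mm from the tumor. All coordinates and point sets below are hypothetical illustrations, not ASTR’s actual method; it also only compares sampled points, whereas a real planner would sample densely or compute true polygon-to-polygon distance.

```python
import math

REQUIRED_MARGIN_MM = 5.0  # the margin of healthy tissue described above

def min_margin(cut_outline, tumor_boundary):
    """Smallest Euclidean distance (mm) from any sampled cut point
    to the nearest sampled tumor-boundary point."""
    return min(
        math.dist(c, t)
        for c in cut_outline
        for t in tumor_boundary
    )

# Illustrative data: a rectangular tumor outline and a planned cut around it,
# both given as (x, y) points in millimetres.
tumor = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
cut = [(-6.0, -6.0), (10.0, -6.0), (10.0, 9.0), (-6.0, 9.0)]

margin = min_margin(cut, tumor)
print(f"Minimum margin: {margin:.1f} mm, "
      f"{'OK' if margin >= REQUIRED_MARGIN_MM else 'too tight'}")
```

The hard part ASTR addresses is the depth dimension: the check above is easy in the horizontal plane that surgeons can mark, but the vertical boundary must be inferred from vision rather than direct sight.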

SCIN: A new resource for representative dermatology images

Google Research releases the Skin Condition Image Network (SCIN) dataset in collaboration with physicians at Stanford Med.

Designed to reflect the broad range of conditions searched for online, it’s freely available as a resource for researchers, educators, and developers → https://goo.gle/4amfMwW



Health datasets play a crucial role in research and medical education, but it can be challenging to create a dataset that represents the real world. For example, dermatology conditions are diverse in their appearance and severity and manifest differently across skin tones. Yet, existing dermatology image datasets often lack representation of everyday conditions (like rashes, allergies and infections) and skew towards lighter skin tones. Furthermore, race and ethnicity information is frequently missing, hindering our ability to assess disparities or create solutions.

To address these limitations, we are releasing the Skin Condition Image Network (SCIN) dataset in collaboration with physicians at Stanford Medicine. We designed SCIN to reflect the broad range of concerns that people search for online, supplementing the types of conditions typically found in clinical datasets. It contains images across various skin tones and body parts, helping to ensure that future AI tools work effectively for all. We’ve made the SCIN dataset freely available as an open-access resource for researchers, educators, and developers, and have taken careful steps to protect contributor privacy.

DRAM Cache For GPUs Improves Performance By Up To 12.5x While Significantly Reducing Power Versus HBM

A new research paper explores the use of a DRAM cache for GPUs, which can enable higher performance at lower power.

Researchers Propose The Use Of Dedicated DRAM Caches Onto Newly-Built SCMs For GPUs, Replacing Conventional HBM Configuration

Across the GPU industry — consumer, workstation, and AI GPUs alike — memory capacity and bandwidth keep advancing, but the current approach isn’t sustainable, and we could ultimately hit its limits if an innovative alternative isn’t adopted.
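The intuition behind putting a DRAM cache in front of slower storage-class memory (SCM) can be sketched with the standard average-memory-access-time (AMAT) model: most accesses hit the fast cache, and only misses pay the SCM latency. The latency numbers and hit rate below are hypothetical illustrations, not figures from the paper.

```python
def amat(hit_latency_ns: float, miss_penalty_ns: float, hit_rate: float) -> float:
    """Average memory access time: hits served by the DRAM cache,
    misses served by the slower SCM backing store."""
    return hit_rate * hit_latency_ns + (1.0 - hit_rate) * miss_penalty_ns

# Hypothetical latencies: 50 ns DRAM-cache hit, 400 ns SCM access.
scm_only = amat(0.0, 400.0, 0.0)     # no cache: every access goes to SCM
with_cache = amat(50.0, 400.0, 0.9)  # 90% of accesses hit the DRAM cache

speedup = scm_only / with_cache
print(f"SCM only: {scm_only:.0f} ns, with DRAM cache: {with_cache:.0f} ns, "
      f"speedup: {speedup:.2f}x")
```

Under these made-up numbers the cache cuts average latency from 400 ns to 85 ns; the paper’s reported gains of up to 12.5x would additionally depend on bandwidth, workload locality, and power effects this toy model ignores.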

Omnidirectional tripedal robot scoots, shuffles and climbs

A small research group from the University of Michigan has developed a three-legged skating/shuffling robot called SKOOTR that rolls as it walks, can move along in any direction and can even rise up to overcome obstacles.

The idea for the SKOOTR – or SKating, Omni-Oriented, Tripedal Robot – project came from assistant professor Talia Y. Moore at the University of Michigan’s Evolution and Motion of Biology and Robotics (EMBiR) Lab.

“I came up with this idea as I was rolling around on my office chair between groups of students,” said Moore. “I realized that the passively rolling office chair could easily spin in any direction, and I could use my legs to perform a variety of maneuvers while staying remarkably stable. I realized that this omnidirectional maneuverability is similar to how brittle stars change directions while swimming.”
