
Surgically removing tumors from sensitive areas, such as the head and neck, poses significant challenges. The goal during surgery is to take out the cancerous tissue while sparing as much healthy tissue as possible. This balance is crucial because residual cancerous tissue can lead to the cancer’s return or spread. Performing a resection with precise margins, specifically a 5 mm rim of healthy tissue roughly the width of a pencil eraser, is essential but difficult: that margin ensures all cancerous cells are removed while minimizing damage. Tumors often have clear horizontal edges but unclear vertical boundaries, making depth assessment challenging despite careful pre-surgical planning. Surgeons can mark the horizontal borders but, unable to see beyond the surface, have limited ability to judge the appropriate depth of removal. They also contend with fatigue and visual limitations, which can affect their performance. Now, a new robotic system has been designed to remove tumors from the tongue with precision that could match or surpass that of human surgeons.
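To make the margin rule concrete, here is a toy sketch of how one might verify that a planned resection preserves a 5 mm margin around a segmented tumor using a distance transform. This is my own illustration, not ASTR’s actual planning code, and the voxel size and mask names are assumptions.

```python
# Toy margin check: given a 3D binary tumor mask and a planned resection
# mask, verify that every voxel within 5 mm of the tumor would be removed.
# Illustrative only; voxel spacing and mask layout are assumed.
import numpy as np
from scipy.ndimage import distance_transform_edt

VOXEL_MM = 1.0   # assumed isotropic voxel size in millimeters
MARGIN_MM = 5.0  # required rim of healthy tissue around the tumor

def resection_covers_margin(tumor_mask: np.ndarray,
                            resection_mask: np.ndarray) -> bool:
    """True if the resection removes the tumor plus a 5 mm margin."""
    # Distance from every non-tumor voxel to the nearest tumor voxel.
    dist_mm = distance_transform_edt(~tumor_mask) * VOXEL_MM
    # The tumor itself (distance 0) plus its 5 mm shell must be resected.
    required = dist_mm <= MARGIN_MM
    return bool(np.all(resection_mask[required]))
```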

The Autonomous System for Tumor Resection (ASTR), designed by researchers at Johns Hopkins (Baltimore, MD, USA), translates human guidance into robotic precision. The system builds on the technology behind their Smart Tissue Autonomous Robot (STAR), which previously performed the first fully autonomous laparoscopic surgery to connect two ends of an intestine. ASTR, an advanced dual-arm, vision-guided robotic system, is designed specifically for tissue removal, in contrast to STAR’s focus on tissue connection. In tests on pig tongue tissue, the team demonstrated ASTR’s ability to accurately remove a tumor along with the required 5 mm margin of surrounding healthy tissue. Having focused on tongue tumors for their accessibility and relevance to experimental surgery, the team now plans to extend ASTR to internal organs such as the kidney, which are more challenging to access.

Health datasets play a crucial role in research and medical education, but it can be challenging to create a dataset that represents the real world. For example, dermatology conditions are diverse in their appearance and severity and manifest differently across skin tones. Yet, existing dermatology image datasets often lack representation of everyday conditions (like rashes, allergies and infections) and skew towards lighter skin tones. Furthermore, race and ethnicity information is frequently missing, hindering our ability to assess disparities or create solutions.

To address these limitations, we are releasing the Skin Condition Image Network (SCIN) dataset in collaboration with physicians at Stanford Medicine. We designed SCIN to reflect the broad range of concerns that people search for online, supplementing the types of conditions typically found in clinical datasets. It contains images across various skin tones and body parts, helping to ensure that future AI tools work effectively for all. We’ve made the SCIN dataset freely available as an open-access resource for researchers, educators, and developers, and have taken careful steps to protect contributor privacy.
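For researchers picking the dataset up, usage would presumably look like any CSV-plus-images release. The sketch below is hypothetical: the file names, join key, and skin-tone column are assumptions, not the published schema, so check the official release for the real layout.

```python
# Hypothetical exploration of the SCIN release, assuming CSV metadata plus
# label files; every file and column name below is an assumption.
import pandas as pd

cases = pd.read_csv("scin_cases.csv")    # assumed per-case metadata
labels = pd.read_csv("scin_labels.csv")  # assumed dermatologist labels
df = cases.merge(labels, on="case_id")   # assumed shared case identifier

# Example question the dataset is built to answer: how well are the
# different estimated skin-tone categories represented?
print(df["estimated_fitzpatrick_skin_type"].value_counts(dropna=False))
```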

A new research paper makes the case for DRAM caches in GPUs, which could help enable higher performance at lower power.

Researchers Propose Adding Dedicated DRAM Caches To Newly Built Storage-Class Memories (SCMs) For GPUs, Replacing The Conventional HBM Configuration

The GPU industry, spanning consumer, workstation, and AI GPUs, keeps advancing memory capacity and bandwidth, but that trajectory isn’t sustainable; without an innovative approach, it will eventually hit its limits.
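A rough latency model shows why the idea is plausible. The sketch below is a back-of-envelope illustration of my own, not taken from the paper, and the latency figures are placeholder assumptions: a DRAM cache that absorbs most accesses can pull the average access time close to DRAM speed even when the backing SCM is several times slower.

```python
# Back-of-envelope average-access-time model for a DRAM cache sitting in
# front of slower storage-class memory (SCM). Latencies are placeholder
# assumptions, not measurements from the paper.
def avg_access_ns(hit_rate: float,
                  t_dram_ns: float = 50.0,
                  t_scm_ns: float = 350.0) -> float:
    """Average access time: hits served by DRAM, misses fall through to SCM."""
    return hit_rate * t_dram_ns + (1.0 - hit_rate) * t_scm_ns

for hr in (0.50, 0.80, 0.95):
    print(f"hit rate {hr:.0%}: {avg_access_ns(hr):5.1f} ns average")
```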

A small research group from the University of Michigan has developed a three-legged skating/shuffling robot called SKOOTR that rolls as it walks, can move in any direction, and can even rise up to overcome obstacles.

The idea for the SKOOTR – or SKating, Omni-Oriented, Tripedal Robot – project came from assistant professor Talia Y. Moore at the University of Michigan’s Evolution and Motion of Biology and Robotics (EMBiR) Lab.

“I came up with this idea as I was rolling around on my office chair between groups of students,” said Moore. “I realized that the passively rolling office chair could easily spin in any direction, and I could use my legs to perform a variety of maneuvers while staying remarkably stable. I realized that this omnidirectional maneuverability is similar to how brittle stars change directions while swimming.”

According to Nvidia’s roadmap, it’ll unveil its next-gen Blackwell architecture soon. The company always launches a new architecture with data center products first and then reveals the cut-down GeForce versions many months later, so that’s what’s expected this time as well. On that note, the company’s semi-annual GTC technology conference starts in two weeks, so we expect a lot to be revealed at the show. As proof that Nvidia is close to pulling the wraps off its new data center GPUs, a Dell executive has already shared some juicy info about next-gen Nvidia hardware, saying in a recent earnings call that the company has a 1,000W data center GPU in the pipeline.

The executive who has probably already received an angry call from Jensen is Jeff Clarke, Dell’s COO. On a Feb. 29 earnings call, he discussed Dell’s engineering superiority and how upcoming hardware from Nvidia will give the company a chance to show it off. “We’re excited about what happens at the B100 and the B200,” he said; those are the die names for Nvidia’s next-generation data center GPU and its apparent successor. For context, Nvidia currently has the H100 as its flagship data center GPU and is just now launching the second iteration with faster HBM3e memory, dubbed H200. We all know the B100 is the Blackwell successor to this chip, so it appears the B200 will be that GPU’s second iteration, though it does not currently appear on Nvidia’s roadmap.

Mustafa Suleyman left Google and started a new AI lab called Inflection AI, which he ran as CEO. He and Inflection’s chief scientist, Karén Simonyan, are now jumping ship to help lead Microsoft AI.

Suleyman will become a Microsoft EVP and run the new Microsoft AI group as CEO. On why he was selected, Microsoft CEO Satya Nadella said: “I’ve known Mustafa for several years and have greatly admired him as a founder of both DeepMind and Inflection, and as a visionary, product maker, and builder of pioneering teams that go after bold missions.”

Meanwhile, Suleyman noted in a LinkedIn post: “I’ll be leading all consumer AI products and research, including Copilot, Bing and Edge.” Several of his co-workers at Inflection have also decided to migrate to Microsoft, he wrote.