
Importantly, the superior temporal sulcus (STS) and superior temporal gyrus (STG) are considered core areas for multisensory integration (, ), including olfactory–visual integration (). The STS was significantly connected to the IPS during multisensory integration, as indicated by the PPI analysis (Fig. 3) of functional connectivity between the IPS and whole-brain activation. Likewise, the anterior and middle cingulate cortex, precuneus, and hippocampus/amygdala were activated when testing sickness-cue integration-related whole-brain functional connectivity with the IPS, but not when previously testing for unisensory odor or face sickness perception. In this context, hippocampus/amygdala activation may reflect the involvement of an associative neural network responding to threat () represented by a multisensory sickness signal. This notion supports the earlier assumption of olfactory-sickness–driven OFC and MDT activation, suggested to be part of a neural circuitry serving disease avoidance. Last, the middle cingulate cortex has recently been found to exhibit enhanced connectivity with the anterior insula during a first-hand experience of LPS-induced inflammation (), and this enhancement has been interpreted as a potential neurophysiological mechanism in the brain’s sickness response. Applied to the current data, the middle cingulate cortex, in the context of multisensory-sickness–driven associations between IPS and whole-brain activations, may indicate a shared representation of an inflammatory state and associated discomfort.
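
For context, a PPI analysis of this kind typically fits, at each voxel, a regression containing the seed time course, the task regressor, and their interaction. The sketch below is the generic form of such a model, with the IPS as seed and the sickness-cue integration condition as the psychological factor; it is not necessarily the exact design used in the study:

\[
y_{v}(t) = \beta_{0} + \beta_{1}\,x_{\mathrm{IPS}}(t) + \beta_{2}\,p(t) + \beta_{3}\,\big[x_{\mathrm{IPS}}(t)\times p(t)\big] + \varepsilon(t)
\]

where \(y_{v}(t)\) is the BOLD time course of voxel \(v\), \(x_{\mathrm{IPS}}(t)\) is the IPS seed time course (the physiological term), \(p(t)\) codes the psychological context (e.g., multisensory sickness-cue trials versus control), and a significant \(\beta_{3}\) indicates a context-dependent change in connectivity between the IPS and that voxel.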

In conclusion, the present study shows how subtle and early olfactory and visual sickness cues interact through cortical activation and may influence humans’ approach–avoidance tendencies. The study supports the view that sensory integration of visual and olfactory sickness cues in cortical multisensory convergence zones is essential for the detection and evaluation of sick individuals. Both olfaction and vision, separately and following sensory integration, may thus be important parts of circuits handling imminent threats of contagion, motivating the avoidance of sick conspecifics (, ).

Elon Musk made some striking predictions about the impact of artificial intelligence (AI) on jobs and income at the inaugural AI Safety Summit in the U.K. in November.

The serial entrepreneur and CEO painted a utopian vision where AI renders traditional employment obsolete but provides an “age of abundance” through a system of “universal high income.”

“It’s hard to say exactly what that moment is, but there will come a point where no job is needed,” Musk told U.K. Prime Minister Rishi Sunak. “You can have a job if you want to have a job or sort of personal satisfaction, but the AI will be able to do everything.”

Artificial general intelligence (AGI) — often referred to as “strong AI,” “full AI,” “human-level AI” or “general intelligent action” — represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks, such as detecting product flaws, summarizing the news, or building you a website, AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia’s annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject — not least because he finds himself misquoted a lot, he says.

The frequency of the question makes sense: The concept raises existential questions about humanity’s role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI’s decision-making processes and objectives, which might not align with human values or priorities (a concept explored in-depth in science fiction since at least the 1940s). There’s concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.

When the sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity — or at least the current status quo. Needless to say, AI CEOs aren’t always eager to tackle the subject.

Surgically removing tumors from sensitive areas, such as the head and neck, poses significant challenges. The goal during surgery is to remove the cancerous tissue while sparing as much healthy tissue as possible. This balance is crucial because leaving cancerous tissue behind can lead to the cancer’s return or spread. Performing a resection with precise margins, specifically a 5 mm margin of healthy tissue, is essential but difficult. This margin, roughly the size of a pencil eraser, ensures that all cancerous cells are removed while minimizing damage. Tumors often have clear horizontal edges but unclear vertical boundaries, making depth assessment challenging despite careful pre-surgical planning. Surgeons can mark the horizontal borders but have limited ability to determine the appropriate depth of removal because they cannot see beyond the surface. In addition, surgeons face obstacles such as fatigue and visual limitations, which can affect their performance. Now, a new robotic system has been designed to remove tumors from the tongue with precision that could match or surpass that of human surgeons.

The Autonomous System for Tumor Resection (ASTR), designed by researchers at Johns Hopkins (Baltimore, MD, USA), translates human guidance into robotic precision. The system builds on technology from their Smart Tissue Autonomous Robot (STAR), which previously performed the first fully autonomous laparoscopic surgery to connect two ends of intestine. ASTR, an advanced dual-arm, vision-guided robotic system, is designed specifically for tissue removal, in contrast to STAR’s focus on tissue connection. In tests using pig tongue tissue, the team demonstrated ASTR’s ability to accurately remove a tumor together with the required 5 mm margin of surrounding healthy tissue. Having focused on tongue tumors because of their accessibility and relevance to experimental surgery, the team now plans to extend ASTR’s application to internal organs, such as the kidney, that are harder to access.

Google Research releases the Skin Condition Image Network (SCIN) dataset in collaboration with physicians at Stanford Med.

Designed to reflect the broad range of conditions searched for online, it’s freely available as a resource for researchers, educators, & devs → https://goo.gle/4amfMwW

#AI #medicine


Health datasets play a crucial role in research and medical education, but it can be challenging to create a dataset that represents the real world. For example, dermatology conditions are diverse in their appearance and severity and manifest differently across skin tones. Yet, existing dermatology image datasets often lack representation of everyday conditions (like rashes, allergies and infections) and skew towards lighter skin tones. Furthermore, race and ethnicity information is frequently missing, hindering our ability to assess disparities or create solutions.
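
As a rough illustration of the kind of representation audit such metadata makes possible, the sketch below tallies a dermatology dataset’s images by estimated Fitzpatrick skin type and flags conditions that lack examples for some skin types. The file name and column names (metadata.csv, fitzpatrick_skin_type, condition) are hypothetical placeholders, not the actual SCIN schema:

```python
# Hypothetical representation audit for a dermatology image dataset.
# File and column names are placeholders, not the actual SCIN schema.
import pandas as pd

df = pd.read_csv("metadata.csv")  # one row per image, with labels

# Share of images per estimated Fitzpatrick skin type (I-VI).
skin_type_share = (
    df["fitzpatrick_skin_type"]
    .value_counts(normalize=True)
    .sort_index()
)
print(skin_type_share)

# Cross-tabulate condition by skin type to find conditions with no
# examples for one or more skin types.
coverage = pd.crosstab(df["condition"], df["fitzpatrick_skin_type"])
missing = coverage[(coverage == 0).any(axis=1)].index.tolist()
print(f"{len(missing)} conditions lack images for at least one skin type")
```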