
The World Wide Web was originally conceived and developed at CERN to meet the demand for automated information-sharing between scientists in universities and institutes around the world. From software development to data processing and storage, networks, support for the LHC and non-LHC experimental programmes, automation and controls, and services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.

The Worldwide LHC Computing Grid (WLCG) – a distributed infrastructure arranged in tiers – gives a community of thousands of physicists near real-time access to LHC data. The CERN data centre is at the heart of the WLCG: the first point of contact between experimental data from the LHC and the grid. Through CERN openlab, a unique public-private partnership, CERN collaborates with leading ICT companies and other research organisations to accelerate the development of cutting-edge ICT solutions for the research community. CERN has also established a medium- and long-term roadmap and research programme in collaboration with the high-energy physics and quantum-technology research communities via the CERN Quantum Technology Initiative (QTI).

This video provides detailed information about the use of robots in healthcare, specifically surgical robots. The video covers four subtopics: the history of surgical procedures, an introduction to surgical robots, the da Vinci Surgical System with Dr. Yasufuku’s interview, and robotics in remote healthcare and future advancements. The full-length interview will be posted on the Demystifying Medicine SoundCloud account and as a YouTube podcast. For more information regarding the da Vinci Surgical System, please visit https://www.davincisurgery.com/about-us/privacy-policy. (The animation of the da Vinci Surgical System during the news section was used with permission from Intuitive.)

Finally, we would like to thank Dr. Kazuhiro Yasufuku for his time and contribution to our video through his excellent interview. For those interested in learning more about his research, please visit https://www.yasufukuresearch.com/.

The video was made by McMaster students Allan Li, Gurkaran Chhaggar, Mateo Newbery Orrantia, and Nadia Mohammed in collaboration with Dr. Yasufuku and the McMaster Demystifying Medicine Program.

Copyright McMaster University 2021

A new method learns prompts from an image, which then can be used to reproduce similar concepts in Stable Diffusion.

Whether it is DALL-E 2, Midjourney, or Stable Diffusion, all current generative image models are controlled by text inputs, so-called prompts. Since the outcome of generative AI models depends heavily on the formulation of these prompts, “prompt engineering” has become a discipline in its own right in the AI community. The goal of prompt engineering is to find prompts that produce repeatable results, that can be mixed with other prompts, and that ideally work for other models as well.

In addition to such text prompts, the AI models can also be controlled by so-called “soft prompts”. These are text embeddings automatically derived from the network, i.e. numerical values that do not directly correspond to human terms. Because soft prompts are derived directly from the network, they produce very precise results for certain synthesis tasks, but cannot be applied to other models.
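The core idea behind soft prompts can be illustrated with a deliberately simplified toy sketch: the model’s weights stay frozen, and only the prompt embedding itself is optimised by gradient descent until the model’s output matches a target. The linear “model”, the random target, and all dimensions below are assumptions made purely for illustration; real methods of this kind optimise an embedding against a diffusion model’s loss, not a least-squares objective.

```python
import numpy as np

# Toy sketch only (not Stable Diffusion): a frozen linear "model" maps a
# prompt embedding to an output vector. We tune the embedding itself so the
# output matches a target, while the model weights W stay fixed -- the same
# division of labour that soft-prompt methods use.
rng = np.random.default_rng(0)

dim_embed, dim_out = 8, 4
W = rng.normal(size=(dim_out, dim_embed))   # frozen "model" weights
target = rng.normal(size=dim_out)           # stands in for the desired output

soft_prompt = np.zeros(dim_embed)           # the learnable embedding
lr = 0.05
for _ in range(800):
    out = W @ soft_prompt
    grad = W.T @ (out - target)             # gradient of 0.5 * ||W e - t||^2
    soft_prompt -= lr * grad                # update only the embedding

final_loss = 0.5 * np.sum((W @ soft_prompt - target) ** 2)
print(final_loss)                           # converges toward zero
```

Note that the learned `soft_prompt` is just a vector of numbers with no human-readable meaning, and it is fitted to this particular `W`: plugging it into a different model would not reproduce the result, which mirrors why soft prompts do not transfer between models.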

Our immune systems are at war with cancer. This animation reveals how monoclonal antibodies can act as valuable reinforcements to shore up our defences – and help battle cancer.

You can find more on this topic at http://www.nature.com/milestones/antibodies.

Nature Research has full responsibility for all editorial content, including Nature Video content. This content is editorially independent of sponsors.

Sign up for the Nature Briefing: An essential round-up of science news, opinion and analysis, free in your inbox every weekday. https://go.nature.com/371OcVF