AI is often accused of ruining lives, but Europe is working to keep that from happening.

Vanderbilt researchers have developed an active machine learning approach to predict the effects of tumor variants of unknown significance, or VUS, on sensitivity to chemotherapy. VUS, mutated bits of DNA with unknown impacts on cancer risk, are constantly being identified. The growing number of rare VUS makes it imperative for scientists to analyze them and determine the kind of cancer risk they impart.
Traditional prediction methods offer limited power and accuracy for rare VUS. Even machine learning, an artificial intelligence tool that leverages data to “learn” and improve performance, falls short when classifying some VUS. Recent work from the lab of Walter Chazin, Chancellor’s Chair in Medicine and professor of biochemistry and chemistry, led by co-first authors and postdoctoral fellows Alexandra Blee and Bian Li, featured an active machine learning technique to address this gap.
Active machine learning relies on training an algorithm with existing data, as with machine learning, and feeding it new information between rounds of training. Chazin and his lab identified VUS for which predictions were least certain, performed biochemical experiments on those VUS and incorporated the resulting data into subsequent rounds of algorithm training. This allowed the model to continuously improve its VUS classification.
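To make that loop concrete, below is a minimal sketch of uncertainty-sampling active learning in Python with scikit-learn. The synthetic dataset, the random-forest model, the query budget, and the number of rounds are all illustrative assumptions rather than details of the Vanderbilt pipeline, and the wet-lab experiment step is simulated by simply revealing labels that were held back.

```python
# Illustrative active learning loop: train, find the least certain examples,
# "characterize" them (here, just reveal their labels), retrain, and repeat.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in data: rows play the role of variants, labels the measured effect.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

labeled = np.zeros(len(X), dtype=bool)
labeled[:50] = True  # small initial training set of "known" variants

model = RandomForestClassifier(random_state=0)

for training_round in range(5):
    model.fit(X[labeled], y[labeled])

    # Score the still-unlabeled pool by prediction uncertainty:
    # probabilities closest to 0.5 are the least certain calls.
    pool_idx = np.flatnonzero(~labeled)
    proba = model.predict_proba(X[pool_idx])[:, 1]
    uncertainty = -np.abs(proba - 0.5)

    # "Experimentally characterize" the 25 most uncertain examples and
    # fold their labels into the next round of training.
    query = pool_idx[np.argsort(uncertainty)[-25:]]
    labeled[query] = True
```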
University of Toronto researchers are developing advanced snake-like robots, with applications that include helping doctors reach otherwise inaccessible areas of the body.
Slender, flexible, and extensible robots
Now, a team led by Jessica Burgner-Kahrs, director of the Continuum Robotics Lab at the University of Toronto Mississauga, is building very slender, flexible, and extensible robots that, according to a press release from the institution, could help doctors save lives by reaching places that are otherwise difficult to access.
📝 The paper “MegaPortraits: One-shot Megapixel Neural Head Avatars” is available here:
https://samsunglabs.github.io/MegaPortraits/
A year after Tesla announced its humanoid robot, the Tesla Bot, the conceptual general-purpose robot is up against some Chinese competition. On the sidelines of Xiaomi’s Autumn launch event in Beijing, the company announced its first full-size humanoid bionic robot. The rather unimaginatively named Xiaomi CyberOne is the company’s second robotic product, arriving a year after the Xiaomi Cyberdog, which was showcased at the 2021 Autumn launch event.
Like most other humanoid robots, the Xiaomi CyberOne is still very much a work in progress. Xiaomi claims that future, evolved variants of the robot will not only have a high degree of emotional intelligence but will also be able to perceive human emotions. Although the first-generation CyberOne demoed on stage seemed to have trouble walking, work is underway to improve its bipedal locomotion.
DALL-E 2, OpenAI’s powerful text-to-image AI system, can create images in the style of cartoonists, 19th-century daguerreotypists, stop-motion animators and more. But it has an important, artificial limitation: a filter that prevents it from creating images depicting public figures and content deemed too toxic.
Now an open source alternative to DALL-E 2 is on the cusp of being released, and it’ll have no such filter.
London- and Los Altos-based startup Stability AI this week announced the release of a DALL-E 2-like system, Stable Diffusion, to just over a thousand researchers ahead of a public launch in the coming weeks. A collaboration between Stability AI, media creation company RunwayML, Heidelberg University researchers and the research groups EleutherAI and LAION, Stable Diffusion is designed to run on most high-end consumer hardware, generating 512×512-pixel images in just a few seconds from any text prompt.
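For readers curious what driving such a system looks like in practice, the sketch below uses the Hugging Face diffusers library. The checkpoint name, float16 precision, and CUDA GPU requirement are assumptions on my part rather than details from Stability AI’s announcement, so treat it as illustrative rather than official.

```python
# Minimal sketch of text-to-image generation with Stable Diffusion via the
# Hugging Face diffusers library. Model ID and hardware are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed checkpoint on the Hugging Face Hub
    torch_dtype=torch.float16,        # half precision to fit consumer GPU memory
)
pipe = pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt, height=512, width=512).images[0]  # 512x512 output image
image.save("astronaut.png")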
The number of bots and spam accounts on the platform has been a sticking point for Musk throughout the deal. After months of back and forth, Musk’s issues with spam accounts eventually led him to publicly pull out of the $44 billion deal.
Last month, Musk accused Twitter of withholding information about the number of bots on the platform, later citing it as the reason for withdrawing his bid.
Musk’s lawyers claimed in a termination letter that his analysis indicated the percentage of false accounts on Twitter was “wildly higher than 5%” — the number Twitter disclosed in its financial reports.