
Early last year, our research team from the Visual Computing Group introduced Swin Transformer, a Transformer-based general-purpose computer vision architecture that for the first time beat convolutional neural networks on the important vision benchmark of COCO object detection and did so by a large margin. Convolutional neural networks (CNNs) have long been the architecture of choice for classifying images and detecting objects within them, among other key computer vision tasks. Swin Transformer offers an alternative. Leveraging the Transformer architecture’s adaptive computing capability, Swin can achieve higher accuracy. More importantly, Swin Transformer provides an opportunity to unify the architectures in computer vision and natural language processing (NLP), where the Transformer has been the dominant architecture for years and has benefited the field because of its ability to be scaled up.
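Swin's core mechanism, self-attention computed within local windows that are cyclically shifted between layers, can be illustrated with a minimal NumPy sketch. The shapes and helper names here are illustrative, not the authors' implementation:

```python
import numpy as np

def window_partition(x, M):
    """Split an (H, W, C) feature map into non-overlapping M x M windows.

    Returns an array of shape (num_windows, M*M, C); self-attention is
    then computed independently inside each window.
    """
    H, W, C = x.shape
    x = x.reshape(H // M, M, W // M, M, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, M * M, C)

def cyclic_shift(x, M):
    """Shift the map by M//2 so the next layer's windows straddle the
    previous layer's window boundaries (the 'shifted window' idea)."""
    return np.roll(x, shift=(-(M // 2), -(M // 2)), axis=(0, 1))

# toy 8x8 feature map with one channel
x = np.arange(64, dtype=float).reshape(8, 8, 1)
wins = window_partition(x, 4)                       # 4 windows of 16 tokens
shifted_wins = window_partition(cyclic_shift(x, 4), 4)
print(wins.shape)  # (4, 16, 1)
```

Restricting attention to windows keeps the cost linear in image size, while the alternating shift lets information flow across window boundaries.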

So far, Swin Transformer has shown early signs of its potential as a strong backbone architecture for a variety of computer vision problems, powering the top entries of many important vision benchmarks such as COCO object detection, ADE20K semantic segmentation, and CelebA-HQ image generation. It has also been well-received by the computer vision research community, garnering the Marr Prize for best paper at the 2021 International Conference on Computer Vision (ICCV). Together with works such as CSWin, Focal Transformer, and CvT, also from teams within Microsoft, Swin is helping to demonstrate the Transformer architecture as a viable option for many vision challenges. However, we believe there’s much work ahead, and we’re on an adventurous journey to explore the full potential of Swin Transformer.

In the past few years, one of the most important discoveries in the field of NLP has been that scaling up model capacity can continually push the state of the art for various NLP tasks, and the larger the model, the better its ability to adapt to new tasks with very little or no training data. Can the same be achieved in computer vision, and if so, how?

An algorithm developed by researchers from Helmholtz Munich, the Technical University of Munich (TUM) and its University Hospital rechts der Isar, the University Hospital Bonn (UKB), and the University of Bonn is able to learn independently across different medical institutions. The key feature is that it is self-learning: it does not require radiologists to produce extensive, time-consuming findings or markings in the MRI images.

This federated algorithm was trained on more than 1,500 MRI scans of healthy study participants from four institutions while maintaining data privacy. It was then used to analyze more than 500 patient MRI scans to detect diseases such as multiple sclerosis, vascular disease, and various forms of brain tumors, none of which the algorithm had seen before. This opens up new possibilities for developing efficient AI-based federated algorithms that learn autonomously while protecting privacy. The study has now been published in the journal Nature Machine Intelligence.
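The federated setup described here, in which each institution trains on its own data and only model parameters leave the site, can be sketched with federated averaging (FedAvg) on a toy linear model. The data, model, and hyperparameters below are illustrative assumptions, not the study's actual method:

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # one gradient step of linear least-squares at a single institution
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, clients, lr=0.1):
    # each site updates locally; only weights are shared and averaged,
    # weighted by each site's dataset size
    local_ws = [local_step(w.copy(), X, y, lr) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four hypothetical institutions
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=100)))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
```

The raw data (here, the `X, y` pairs) never leave the client loop; only the averaged weight vector is exchanged, which is the privacy-preserving property the article highlights.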

Health care is currently being revolutionized by artificial intelligence. Precise AI solutions can support doctors in diagnosis. However, such algorithms require a considerable amount of data, along with the associated radiological specialist findings, for training. Creating such a large, central database, however, places special demands on data protection. Additionally, producing the findings and annotations, for example marking tumors in an MRI image, is very time-consuming.

Why we should be performing interstellar archaeology and how Avi Loeb and his team at the Galileo Project plan to recover an interstellar object at the bottom of the ocean.

“Any chemically-propelled spacecraft sent by past civilizations into interstellar space, like the five we had sent so far (Voyager 1 & 2, Pioneer 10 & 11, and New Horizons), remained gravitationally bound to the Milky Way long after these civilizations died. Their characteristic speed of tens of kilometers per second is an order of magnitude smaller than the escape speed out of the Milky Way. These rockets would populate the Milky Way disk and move around at similar speeds to the stars in it.

This realization calls for a new research frontier of “interstellar archaeology”, in the spirit of searching our backyard of the Solar system for objects that came from the cosmic street surrounding it. The interstellar objects could potentially look different than the familiar asteroids or comets, which are natural relics or Lego pieces from the construction project of the Solar system planets. The traditional field of archaeology on Earth finds relics left behind by cultures which are not around anymore. We can do the same in space.”
https://avi-loeb.medium.com/

The goal of the Galileo Project is to bring the search for extraterrestrial technological signatures of Extraterrestrial Technological Civilizations (ETCs) from accidental or anecdotal observations and legends to the mainstream of transparent, validated and systematic scientific research. This project is complementary to traditional SETI, in that it searches for physical objects, and not electromagnetic signals, associated with extraterrestrial technological equipment.

Solar cells are vital for the green energy transition. They can be used not only on rooftops and solar farms but also for powering autonomous vehicles, such as planes and satellites. However, photovoltaic solar cells are currently heavy and bulky, making them difficult to transport to remote locations off-grid, where they are much needed.

In a collaboration led by Imperial College London, alongside researchers from Cambridge, UCL, Oxford, Helmholtz-Zentrum Berlin in Germany, and others, researchers have produced a light-absorbing material that can absorb comparable levels of sunlight to conventional silicon solar cells while being 10,000 times thinner.

The material is sodium bismuth sulfide (NaBiS2), which is grown as nanocrystals and deposited from solution to make films 30 nanometers thick. NaBiS2 is composed of nontoxic elements that are abundant enough in the Earth’s crust for commercial use. For example, bismuth-based compounds are used as a nontoxic lead replacement in solder and in over-the-counter stomach medicine.
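The “10,000 times thinner” figure is consistent with comparing the 30 nm film to a typical crystalline silicon wafer of roughly 300 µm; the wafer thickness is an assumed reference value, not stated in the article:

```python
nabis2_nm = 30                 # NaBiS2 film thickness from the article
silicon_nm = 300 * 1_000       # assumed ~300 micron silicon wafer, in nm
ratio = silicon_nm / nabis2_nm
print(ratio)                   # 10000.0
```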

“Everyone can quantum.”

Chinese multinational technology company Baidu released its first quantum computer on Thursday. The superconducting quantum computer, “Qian Shi,” integrates hardware, software, and a range of applications. Baidu also introduced the world’s first all-platform quantum hardware-software integration solution, Liang Xi, which provides access to various quantum chips via mobile app, PC, and cloud.

Qian Shi is expected to tackle calculations and problems that are beyond the reach of standard computers. The development is also considered a potential breakthrough for artificial intelligence, computational biology, material simulation, and financial technology.

Qian Shi offers a stable and substantial quantum computing service to the public, powered by 10 high-fidelity quantum bits (qubits). Beyond Qian Shi, Baidu has also completed the design of a 36-qubit superconducting quantum chip.
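For a sense of scale, an n-qubit register is described by 2**n complex amplitudes, so the step from Qian Shi's 10 qubits to the 36-qubit chip design enlarges the underlying state space by many orders of magnitude:

```python
# state-space size: an n-qubit register spans 2**n basis states
dims = {n: 2 ** n for n in (10, 36)}  # Qian Shi (10); 36-qubit chip design
print(dims[10])  # 1024
print(dims[36])  # 68719476736
```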

Since we began space exploration in the mid-20th century, space agencies have relied on sending humans and robots into space. But should we leave space exploration entirely to robots?

Or should we consider sending humans to explore a new space world instead of robots? You are about to find satisfying answers to these curious questions.

Unlike traveling from one destination to another on Earth, exploring space comes with greater responsibilities. Space agencies hoping to explore a new world in space must plan extensively to guarantee success.

“Robo Sapien” taken from the album “The Machinists Of Joy”.
Directed by: Jay Gillian.
Camera OP and Computer Animation: Shane Williams.
Produced by Cinematek Film & Television.
Robo Sapien provided by: JG and the Robots www.JGandtheRobots.com

http://www.facebook.com/diekruppsofficial
http://www.twitter.com/diekruppsband
http://www.diekrupps.com

Bishop: They can still be computationally very expensive. Additionally, emulators learn from data, so they’re typically not more accurate than the data used to train them. Moreover, they may give insufficiently accurate results when presented with scenarios that are markedly different from those on which they’re trained.

“I believe in ‘use-inspired basic research’—[like] the work of Pasteur. He was a consultant for the brewing industry. Why did this beer keep going sour? He basically founded the whole field of microbiology.” —Chris Bishop, Microsoft Research.