SenseTime, one of China’s biggest AI solution providers, is a step closer to its initial public offering. The company has received regulatory approval to list on the Hong Kong Stock Exchange, according to media reports. Founded in 2014, SenseTime was christened one of China’s four “AI Dragons” alongside Megvii, CloudWalk, and Yitu. In the second half of the 2010s, their algorithms were in high demand from businesses and governments hoping to turn real-life data into actionable insights. Cameras embedded with their AI models watch city streets around the clock, and malls use their sensing solutions to track and predict crowds on the premises.

SenseTime’s three rivals have all mulled plans to sell shares either in mainland China or Hong Kong. Megvii is preparing to list on China’s Nasdaq-style STAR board after its HKEX application lapsed.

The window for China’s data-rich tech firms to list overseas has narrowed. Beijing is making it harder for companies with sensitive data to go public outside China. And regulators in the West are wary of facial recognition companies that could aid mass surveillance.

But in the past few years, China’s AI upstarts were sought after by investors all over the world. In 2018 alone, SenseTime racked up more than $2 billion in investment. To date, the company has raised a staggering $5.2 billion in funding through 12 rounds. Its biggest outside shareholders include SoftBank Vision Fund and Alibaba’s Taobao. For its flotation in Hong Kong, SenseTime plans to raise up to $2 billion, according to Reuters.

Rarely does scientific software spark such sensational headlines. “One of biology’s biggest mysteries ‘largely solved’ by AI”, declared the BBC. Forbes called it “the most important achievement in AI — ever”. The buzz over the November 2020 debut of AlphaFold2, Google DeepMind’s artificial intelligence (AI) system for predicting the 3D structure of proteins, has only intensified since the tool was made freely available in July.

The excitement relates to the software’s potential to solve one of biology’s thorniest problems — predicting the functional, folded structure of a protein molecule from its linear amino-acid sequence, right down to the position of each atom in 3D space. The underlying physicochemical rules for how proteins form their 3D structures remain too complicated for humans to parse, so this ‘protein-folding problem’ has remained unsolved for decades.

Researchers have worked out the structures of around 160,000 proteins from all kingdoms of life using experimental techniques such as X-ray crystallography and cryo-electron microscopy (cryo-EM), depositing the 3D information in the Protein Data Bank. Computational biologists have made steady gains in developing software that complements these methods, and have correctly predicted the 3D shapes of some molecules from well-studied protein families.

New ways to measure the top supercomputers’ smarts in AI include searching for dark energy, predicting hurricanes, and finding new materials for energy storage.


This new reality promises robotic dogs to enforce social distancing and publicly owned flying taxis to provide transportation, since private vehicles are available only to the rich. The technology is currently being rolled out in other Western nations, including Canada.

On a hard disk somewhere in the surveillance archives of Singapore’s Changi prison is a video of Jolovan Wham, naked, alone, performing Hamlet.

Over the past several decades, researchers have moved from using electric currents to manipulating light waves in the near-infrared range for telecommunications applications such as high-speed 5G networks, biosensors on a chip, and driverless cars. This research area, known as integrated photonics, is fast-evolving, and investigators are now exploring the shorter — visible — wavelength range to develop a broad variety of emerging applications. These include chip-scale LIDAR (light detection and ranging), AR/VR/MR (augmented/virtual/mixed reality) goggles, holographic displays, quantum information processing chips, and implantable optogenetic probes in the brain.

The one device critical to all these applications is an optical phase modulator, which controls the phase of a light wave, similar to how the phase of radio waves is modulated in wireless computer networks. With a phase modulator, researchers can build an on-chip optical switch that channels light into different waveguide ports. With a large network of these optical switches, researchers could create sophisticated integrated optical systems that control light propagating on a tiny chip or light emitted from the chip.
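The phase-to-port switching described above can be sketched numerically. A standard way to turn a phase shift into routing (not specified in the article, but a common textbook construction) is a 2×2 Mach-Zehnder interferometer: two 50:50 couplers with a phase modulator in one arm. The function name and matrix conventions below are my own assumptions for illustration.

```python
# Minimal sketch: a 2x2 Mach-Zehnder interferometer (MZI) switch.
# A phase shift `phi` applied in one arm steers light between the
# two output ports. Conventions (50:50 coupler matrix, which arm
# carries the modulator) are illustrative assumptions.
import numpy as np

def mzi_output(phi: float):
    """Return (bar, cross) output powers for unit input power in port 0."""
    coupler = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # ideal 50:50 coupler
    phase = np.array([[np.exp(1j * phi), 0], [0, 1]])    # modulator in the top arm
    mzi = coupler @ phase @ coupler                      # full MZI transfer matrix
    fields = mzi @ np.array([1.0, 0.0])                  # launch light into port 0
    powers = np.abs(fields) ** 2                         # detected output powers
    return powers[0], powers[1]

# phi = 0 sends all power to the cross port; phi = pi sends it to the bar port,
# which is exactly the switching behavior a network of these devices exploits.
```

Because the couplers and phase section are lossless (unitary), the two output powers always sum to the input power; the modulator only redistributes light between ports.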

But phase modulators in the visible range are very hard to make: no materials are transparent enough in the visible spectrum while also providing large tunability, whether through thermo-optical or electro-optical effects. Currently, the two most suitable materials are silicon nitride and lithium niobate. While both are highly transparent in the visible range, neither provides much tunability. Visible-spectrum phase modulators based on these materials are thus not only large but also power-hungry: individual waveguide-based modulators range from hundreds of micrometers to several millimeters in length, and a single modulator consumes tens of milliwatts for phase tuning. Researchers trying to achieve large-scale integration — embedding thousands of devices on a single microchip — have, up to now, been stymied by these bulky, energy-consuming devices.

NVIDIA’s GauGAN2 artificial intelligence (AI) can now use simple written phrases to generate a fitting photorealistic image. The deep-learning model can craft different scenes from just three or four words.

GauGAN is the NVIDIA AI program that was used to turn simple doodles into photorealistic masterpieces in 2019, technology that was eventually turned into the NVIDIA Canvas app earlier this year. Now NVIDIA has advanced the AI even further: it needs only a brief description to generate a “photo.”

Is the “good book” getting an upgrade? Join us… and find out more!

Ever feel like you could do with a little guidance? A push in the right direction? Over the past couple of thousand years or so, humans have often turned to religious texts to help get them through life’s trickier moments… but are science and technology now triggering a major paradigm shift? In this video, Unveiled takes a closer look at the reasons why we might soon… need a new Bible!

This is Unveiled, giving you incredible answers to extraordinary questions!

For the last decade and more, stem cell research and regenerative medicine have been all the rage in the healthcare industry, a delicate area that has seen steady advancement over the last few years.

The promise of regenerative medicine is simple but profound: that one day medical experts will be able to diagnose a problem, remove some of our cells, called stem cells, and use them to grow a cure for the ailment. Using our own cells would create a highly personalized therapy attuned to our genes and systems.

The terminology used in this field of medicine can get a bit fuzzy for the uninitiated, so in this article I have relied heavily on the insights of Christian Drapeau, a neurophysiologist and stem cell expert.