New machine learning methods bring insights into how lithium ion batteries degrade, and show it’s more complicated than many thought.

Lithium-ion batteries lose their juice over time, prompting scientists and engineers to work hard to understand that process in detail. Now, scientists at the Department of Energy’s SLAC National Accelerator Laboratory have combined sophisticated machine learning algorithms with X-ray tomography data to produce a detailed picture of how one battery component, the cathode, degrades with use.

The new study, published this month in Nature Communications, focused on how to better visualize what’s going on in cathodes made of nickel-manganese-cobalt, or NMC. In these cathodes, NMC particles are held together by a conductive carbon matrix, and researchers have speculated that one cause of performance decline could be particles breaking away from that matrix. The team’s goal was to combine cutting-edge capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and the European Synchrotron Radiation Facility (ESRF) to develop a comprehensive picture of how NMC particles break apart and break away from the matrix and how that might contribute to performance losses.
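
Though the study’s actual pipeline pairs trained neural networks with 3D X-ray tomography volumes, the downstream analysis comes down to identifying individual NMC particles in segmented image data so their detachment from the carbon matrix can be quantified. Here is a minimal toy sketch of that particle-identification step, using synthetic data and scipy; it illustrates the general technique only and is not the authors’ code:

```python
# Toy sketch of the particle-identification step: label connected NMC
# particles in a synthetic "segmented tomography slice" and measure
# their sizes. The actual study applies trained neural networks to 3D
# X-ray tomography volumes; none of this is the authors' code.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic stand-in for one segmented 2D slice:
# 1 = NMC particle voxels, 0 = carbon/binder matrix or pore space.
field = ndimage.gaussian_filter(rng.random((256, 256)), sigma=6)
slice_2d = (field > field.mean()).astype(np.uint8)

# Connected-component labeling: each blob is one candidate particle.
labels, n_particles = ndimage.label(slice_2d)
areas = ndimage.sum_labels(slice_2d, labels, index=np.arange(1, n_particles + 1))

print(f"{n_particles} particles, mean area {areas.mean():.0f} px")
```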

Electric VTOL air taxis are one of the great emerging technologies of our time, promising to unlock the skies as traffic-free, high-speed, 3D commuting routes. Much quieter and cheaper than helicopters, they’ll also run on zero-local-emission electric power, and many models suggest they’ll cost around the same per mile as a ride share.

Eventually, the market seems to agree, they’ll be pilotless automatons, even cheaper and more reliable than the earliest piloted versions. Should the onboard autopilot computers get confused, remote operators will take over and save the day as if they’re flying a Mavic drone, and every pilot gone will be an extra passenger seat in the sky.

Large numbers of eVTOL air taxis will change the way cities and lifestyles are designed. Skyports atop office buildings, train stations and last-mile transport depots will encourage multi-mode commuting. Real estate in scenic coastal areas might boom as people swap 45 minutes crawling along in suburban traffic for 45 minutes of 120 mph (200 km/h) air travel, and decide to live further from the office.

The fast and efficient generation of random numbers has long been an important challenge. For centuries, games of chance have relied on the roll of a die, the flip of a coin, or the shuffling of cards to bring some randomness into the proceedings. In the second half of the 20th century, computers started taking over that role, for applications in cryptography, statistics, and artificial intelligence, as well as for various simulations—climatic, epidemiological, financial, and so forth.
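
The split between those applications comes down to two different demands on a random number generator: simulations want reproducible pseudorandom streams, while cryptography needs numbers no attacker can predict. A minimal illustration of both, using Python’s standard library:

```python
# Two kinds of randomness in practice (Python standard library).
import random
import secrets

# Seeded pseudorandom generator: deterministic and reproducible,
# which is exactly what simulations and statistics want.
sim = random.Random(42)
print([sim.randint(1, 6) for _ in range(5)])  # same rolls on every run

# OS entropy via the secrets module: unpredictable, which is what
# cryptography needs for keys and tokens.
print(secrets.token_hex(16))
```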

A team of more than 30 OpenAI researchers has released a paper about GPT-3, a language model capable of achieving state-of-the-art results on a range of benchmark and novel natural language processing tasks, from language translation to generating news articles to answering SAT questions. GPT-3 has a whopping 175 billion parameters. By comparison, the largest version of GPT-2 was 1.5 billion parameters, and the largest Transformer-based language model in the world, introduced by Microsoft earlier this month, is 17 billion parameters.
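
Those headline figures can be sanity-checked with a common rule of thumb: a transformer layer holds roughly 12 times d_model squared weights (the attention projections plus the 4x-wide feed-forward block). Using the depth and width reported for each model, and ignoring embeddings and biases, this rough estimate lands close to the published counts:

```python
# Back-of-the-envelope transformer size check: each layer holds roughly
# 12 * d_model**2 weights (Q/K/V/output projections plus the 4x-wide
# feed-forward block), ignoring embeddings and biases.
def approx_params(n_layers: int, d_model: int) -> float:
    return 12 * n_layers * d_model ** 2

for name, n_layers, d_model in [
    ("GPT-2 XL", 48, 1600),      # reported as ~1.5B parameters
    ("GPT-3 175B", 96, 12288),   # reported as ~175B parameters
]:
    print(f"{name}: ~{approx_params(n_layers, d_model) / 1e9:.1f}B")
```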

OpenAI released GPT-2 last year, controversially taking a staggered release approach due to fear that the model could be used for malicious purposes. Some criticized OpenAI for the staggered approach, while others applauded the company for demonstrating a way to carefully release an AI model with the potential for misuse. GPT-3 made its debut with a preprint arXiv paper Thursday, but no release details were provided. An OpenAI spokesperson declined to comment when VentureBeat asked whether the company will release the full version of GPT-3 or one of seven smaller versions, which range in size from 125 million to 13 billion parameters.

Day 6 at the Artificial Intelligence Hub robotics boot camp: the kids continued the programming class using Python. There was also an online training session with Camp Peavy, who showed the kids robots he has built and shared articles on how to build them. It was an awesome experience. It is our vision to domesticate Artificial Intelligence in Africa, and we won’t stop until we get there. #TakeOver.



Over the last few years, creating fake videos that swap the face of one person onto another using artificial intelligence and machine learning has become a bit of a hobby for a number of enthusiasts online, with the results of these “deepfakes” getting better and better. Today, a new one applies that tech to Star Trek.

Deep Spocks

YouTuber Jarkan has released a number of deepfake videos featuring different actors swapped into iconic film scenes. Today’s release takes Leonard Nimoy’s younger Spock from the original Star Trek and swaps him in for Zachary Quinto’s Spock in J.J. Abrams’ 2009 film Star Trek, in the scene where the younger Spock meets his older self, played by Nimoy. Swapping Nimoy in for Quinto, or even for Ethan Peck in Star Trek: Discovery, has been done before, but this new deepfake has more impressive results.

What do a frying pan, an LED light, and the most cutting-edge camouflage in the world have in common? Well, that largely depends on who you ask. Most people would struggle to find the link, but for University of Michigan chemical engineers Sharon Glotzer and Michael Engel, there is a substantial connection, indeed one that has flipped the world of materials science on its head since its discovery over 30 years ago.

The magic ingredient common to all three items is the quasiperiodic crystal, the “impossible” atomic arrangement discovered by Dan Shechtman in 1982. Basically, a quasicrystal is a crystalline structure that gives up the periodicity of a normal crystal (periodicity meaning translational symmetry: the ability to shift the crystal by one unit cell without changing the pattern) in favor of an ordered yet aperiodic arrangement. This means that quasicrystalline patterns fill all available space, but in such a way that the atomic arrangement never repeats. Glotzer and Engel recently managed to simulate the most complex quasicrystal ever, a result that may revolutionize the field of crystallography by blowing open the door to a whole host of applications that were previously inconceivable outside of science fiction, like making yourself invisible or building shape-shifting robots.
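
One way to get a feel for aperiodic order is in one dimension. The Fibonacci chain, built by repeatedly applying the substitution rule A → AB, B → A, is completely deterministic yet never settles into a repeating period; this is the same order-without-periodicity that quasicrystals exhibit in three dimensions. A small sketch, unrelated to Glotzer and Engel’s actual simulation code:

```python
# A 1D taste of quasiperiodic order: the Fibonacci chain. Applying the
# substitution A -> AB, B -> A over and over yields a sequence that is
# fully deterministic and "fills space," yet never repeats periodically.
def fibonacci_chain(iterations: int) -> str:
    s = "A"
    for _ in range(iterations):
        s = "".join("AB" if c == "A" else "A" for c in s)
    return s

chain = fibonacci_chain(10)
print(chain[:34])
# The A:B ratio converges to the golden ratio, a signature of
# quasiperiodicity (and of the 5-fold symmetry Shechtman observed).
print(chain.count("A") / chain.count("B"))
```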

Rice University’s Early Bird couldn’t care less about the worm; it’s looking to cut megatons of greenhouse gas emissions.

Early Bird is an energy-efficient method for training deep neural networks (DNNs), the form of artificial intelligence (AI) behind self-driving cars, intelligent assistants, facial recognition and dozens more high-tech applications.

Researchers from Rice and Texas A&M University unveiled Early Bird April 29 in a spotlight paper at ICLR 2020, the International Conference on Learning Representations. A study by lead authors Haoran You and Chaojian Li of Rice’s Efficient and Intelligent Computing (EIC) Lab showed Early Bird could use 10.7 times less energy to train a DNN to the same level of accuracy or better than typical training. EIC Lab director Yingyan Lin led the research along with Rice’s Richard Baraniuk and Texas A&M’s Zhangyang Wang.
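
The key idea in the paper is that the winning subnetwork, the “early-bird ticket,” emerges very early in training: a channel-pruning mask is drawn from the network’s batch-normalization scaling factors after each epoch, and once successive masks barely differ, the expensive full-size training can stop in favor of the pruned network. Below is a rough numpy sketch of that detection step; the names are hypothetical, and random numbers stand in for a real network’s batch-norm factors:

```python
# Sketch of Early Bird's ticket detection: build a channel-pruning mask
# from batch-norm scale factors each epoch, then stop full-network
# training once the mask's Hamming distance between epochs is tiny.
import numpy as np

def pruning_mask(bn_scales: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Keep channels with the largest |gamma|; prune the smallest ones."""
    n_pruned = int(len(bn_scales) * prune_ratio)
    threshold = np.sort(np.abs(bn_scales))[n_pruned]
    return np.abs(bn_scales) >= threshold

def mask_distance(m1: np.ndarray, m2: np.ndarray) -> float:
    """Normalized Hamming distance between two binary masks."""
    return float(np.mean(m1 != m2))

rng = np.random.default_rng(1)
scales = rng.random(512)            # stand-in for one layer's BN gammas
prev_mask = None
for epoch in range(1, 21):
    # Perturbation shrinks over time, mimicking training that settles.
    scales += rng.standard_normal(512) * 0.05 / epoch
    mask = pruning_mask(scales, prune_ratio=0.5)
    if prev_mask is not None and mask_distance(mask, prev_mask) < 0.01:
        print(f"Early-bird ticket drawn at epoch {epoch}; prune and train the small net")
        break
    prev_mask = mask
```

In the paper, mask distances are tracked over a short window of recent epochs and across layers rather than a single consecutive pair; the sketch collapses that detail to one layer for brevity.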