DNA, or deoxyribonucleic acid, is a molecule composed of two long strands of nucleotides that coil around each other to form a double helix. It is the hereditary material in humans and almost all other organisms that carries genetic instructions for development, functioning, growth, and reproduction. Nearly every cell in a person’s body has the same DNA. Most DNA is located in the cell nucleus (where it is called nuclear DNA), but a small amount of DNA can also be found in the mitochondria (where it is called mitochondrial DNA or mtDNA).
I have been invited to take part in a fairly large event at which a panel of experts and I (allow me not to count myself among them) will discuss Artificial Intelligence and, in particular, the concept of Super Intelligence.
As it happens, I recently came across a really interesting TED talk by Grady Booch, just in time to prepare my own talk.
Whether or not you agree with Mr. Booch’s point of view, it is clear that today we are still living in the era of weak, or narrow, AI, far from general AI and even further from a potential Super Intelligence. Still, Machine Learning offers us a great opportunity today: the opportunity to put algorithms to work alongside humans on some of our biggest challenges, such as climate change, poverty, and health and well-being.
Near-term quantum computers, that is, quantum computers developed today or in the near future, could help tackle some problems more effectively than classical computers can. One potential application is in physics, chemistry, and materials science, where such computers could perform quantum simulations and determine the ground states of quantum systems.
Some quantum computers developed over the past few years have proved to be fairly effective at running quantum simulations. However, near-term quantum computing approaches are still limited by existing hardware components and by the adverse effects of background noise.
Researchers at 1QB Information Technologies (1QBit), the University of Waterloo, and the Perimeter Institute for Theoretical Physics have recently developed neural error mitigation, a new strategy that could improve ground-state estimates attained using quantum simulations. This strategy, introduced in a paper published in Nature Machine Intelligence, is based on machine-learning algorithms.
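To make the task concrete: "determining the ground state" means finding the lowest-energy eigenstate of a system's Hamiltonian. For very small systems this can be done exactly on a classical computer, which is useful as a baseline. The sketch below (not the paper's method, just an illustration) diagonalizes a two-qubit transverse-field Ising Hamiltonian with NumPy; the couplings J and h are arbitrary example values.

```python
import numpy as np

# Pauli matrices
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ops):
    """Tensor product of a sequence of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Two-qubit transverse-field Ising Hamiltonian: H = -J Z1 Z2 - h (X1 + X2)
J, h = 1.0, 0.5
H = -J * kron(Z, Z) - h * (kron(X, I) + kron(I, X))

# Exact diagonalization: the smallest eigenvalue is the ground-state energy
energies, states = np.linalg.eigh(H)
ground_energy = energies[0]
print(f"Ground-state energy: {ground_energy:.4f}")  # -sqrt(2) ≈ -1.4142
```

Exact diagonalization scales exponentially with the number of qubits, which is precisely why larger systems call for quantum simulation, and why mitigating hardware noise in those simulations matters.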
People have been dreaming of robot butlers for decades, but one of the biggest barriers has been getting machines to understand our instructions. Google has started to close the gap by marrying the latest language AI with state-of-the-art robots.
Human language is often ambiguous. How we talk about things is highly context-dependent, and it typically requires an innate understanding of how the world works to decipher what we’re talking about. So while robots can be trained to carry out actions on our behalf, conveying our intentions to them can be tricky.
If they have any ability to understand language at all, robots are typically designed to respond to short, specific instructions. More opaque directions like “I need something to wash these chips down” are likely to go over their heads, as are complicated multi-step requests like “Can you put this apple back in the fridge and fetch the chocolate?”
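One way to see the difficulty is to try grounding a free-form request with no language understanding at all. The toy sketch below (a hypothetical example, not Google's system, which pairs a large language model with learned robot skills) scores hand-written skill descriptions by simple word overlap; the skill names and descriptions are invented for illustration.

```python
# Toy instruction grounding via bag-of-words overlap. Real systems use a
# large language model; this sketch shows how far keyword matching gets you.
SKILLS = {
    "fetch_drink": "bring get fetch drink soda water something to wash down",
    "fetch_snack": "bring get fetch chips snack apple chocolate food",
    "put_in_fridge": "put place store back in the fridge refrigerator",
}

def choose_skill(request: str) -> str:
    """Pick the skill whose description shares the most words with the request."""
    words = set(request.lower().replace("?", "").split())
    return max(SKILLS, key=lambda name: len(words & set(SKILLS[name].split())))

print(choose_skill("I need something to wash these chips down"))  # fetch_drink
```

Word overlap happens to resolve this example ("wash ... down" matches the drink skill despite the word "chips"), but it has no notion of context or multi-step plans, which is exactly the gap language models are meant to close.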
However, AI functionality on these tiny edge devices is limited by the energy provided by a battery, so improving energy efficiency is crucial. In today’s AI chips, data processing and data storage happen in separate places: a compute unit and a memory unit. The frequent data movement between these units consumes most of the energy during AI processing, so reducing that movement is the key to addressing the energy issue.
Stanford University engineers have come up with a potential solution: a novel resistive random-access memory (RRAM) chip that does the AI processing within the memory itself, thereby eliminating the separation between the compute and memory units. Their “compute-in-memory” (CIM) chip, called NeuRRAM, is about the size of a fingertip and does more work with limited battery power than current chips can.
“Having those calculations done on the chip instead of sending information to and from the cloud could enable faster, more secure, cheaper, and more scalable AI going into the future, and give more people access to AI power,” said H.-S. Philip Wong, the Willard R. and Inez Kerr Bell Professor in the School of Engineering.
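A back-of-envelope calculation shows why eliminating data movement matters. In the sketch below, the per-operation energy figures are illustrative placeholders (not NeuRRAM measurements); the point is only that when a memory fetch costs far more than a multiply-accumulate, keeping weights inside the memory array dominates the savings.

```python
# Illustrative energy model for one matrix-vector product in a neural layer.
# Per-operation energies are made-up round numbers, not measured values.
E_MAC = 1.0          # energy of one multiply-accumulate (arbitrary units)
E_MEM_FETCH = 100.0  # energy to move one weight from a separate memory unit

rows, cols = 256, 256   # weight matrix of a small layer
macs = rows * cols      # one MAC per weight for a matrix-vector product

# Conventional chip: every weight travels from the memory unit to the compute unit.
conventional = macs * (E_MAC + E_MEM_FETCH)

# Compute-in-memory: weights stay put, so the per-weight fetch cost is avoided.
cim = macs * E_MAC

print(f"energy ratio (conventional / CIM): {conventional / cim:.0f}x")
```

Under these assumed numbers the conventional design spends over a hundred times the energy of the in-memory design on the same arithmetic, which is the intuition behind the CIM approach.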
My point on this would just be that AI will be able to turn anything you want into a game, a movie, or a TV episode. And it can be any length you want: play it as it was intended, or change it in any direction you like. With movies and TV, I can also see people trying to play interactively as a character in the story. That is media in the 2030s.
The founder and CEO of Midjourney, David Holz, has some truly inspiring views on how AI image generation will transform the gaming industry. During the short time we spoke this week, I had to hold myself back from falling too deep into the AI rabbit hole. In the process, I discovered Holz’s view on how this kind of tech will develop and how it’s likely to benefit the gaming industry, as well as human creativity as a whole.
Holz believes that one day in the near future, “you’ll be able to buy a console with a giant AI chip and all the games will be dreams.”
Big scientific breakthroughs often require inventions at the smallest scale. Advances in tissue engineering that can replace hearts and lungs will require the fabrication of artificial tissues that allow for the flow of blood through passages that are no thicker than a strand of hair. Similarly, miniature softbotic (soft-robot) devices that physically interact with humans safely and comfortably will demand the manufacture of components with complex networks of small liquid and airflow channels.
Advances in 3D printing are making it possible to produce such tiny structures. But for those applications that require very small, smooth, internal channels in specific complex geometries, challenges remain. 3D printing of these geometries using traditional processes requires the use of support structures that are difficult to remove after printing. Printing these models using layer-based methods at a high resolution takes a long time and compromises geometric accuracy.
Researchers at Carnegie Mellon University have developed a high-speed, reproducible fabrication method that turns the 3D printing process “inside out.” They developed an approach to 3D print ice structures that can be used to create sacrificial templates that later form the conduits and other open features inside fabricated parts.