AI ethics is about more than just bias. That’s why Red Hat’s Noelle Silver is dedicated to spreading AI literacy.
As reported in a new article in Nature Reviews Physics, instead of waiting for fully mature quantum computers to emerge, Los Alamos National Laboratory and other leading institutions have developed hybrid classical/quantum algorithms to extract the most performance, and potentially quantum advantage, from today’s noisy, error-prone hardware. Known as variational quantum algorithms, they use the quantum boxes to manipulate quantum systems while shifting much of the workload to classical computers to let them do what they currently do best: solve optimization problems.
“Quantum computers have the promise to outperform classical computers for certain tasks, but on currently available quantum hardware they can’t run long algorithms. They have too much noise as they interact with the environment, which corrupts the information being processed,” said Marco Cerezo, a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos and a lead author of the paper. “With variational quantum algorithms, we get the best of both worlds. We can harness the power of quantum computers for tasks that classical computers can’t do easily, then use classical computers to complement the computational power of quantum devices.”
Current noisy, intermediate-scale quantum computers have between 50 and 100 qubits, lose their “quantumness” quickly, and lack error correction, which requires more qubits. Since the late 1990s, however, theoreticians have been developing algorithms designed to run on an idealized large, error-corrected, fault-tolerant quantum computer.
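The hybrid loop described above can be sketched in miniature. In this hypothetical single-qubit illustration (not from the article), the “quantum” step prepares a parameterized state and measures an expectation value, while a classical optimizer adjusts the parameter; all names and numbers here are illustrative assumptions.

```python
import math

# Hypothetical single-qubit sketch of a variational loop: the "quantum"
# step prepares |psi(theta)> = Ry(theta)|0> and estimates the Pauli-Z
# expectation value; the classical step tunes theta to minimize it.

def expectation_z(theta):
    # For Ry(theta)|0> = [cos(theta/2), sin(theta/2)], <Z> = cos(theta).
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

# Classical outer loop: plain gradient descent on the circuit parameter.
theta, lr = 0.1, 0.4
for _ in range(200):
    grad = -math.sin(theta)   # analytic derivative of <Z> = cos(theta)
    theta -= lr * grad

print(round(theta, 3), round(expectation_z(theta), 3))  # prints: 3.142 -1.0
```

On real hardware the expectation value would come from repeated measurements of a noisy device rather than a closed-form expression, but the division of labor is the same: the quantum processor evaluates the cost, the classical computer solves the optimization problem.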
Chip design is a long slog of trial and error, taking years to bring a design to market. Motivo, a five-year-old startup from a chip industry veteran, is creating software to speed up chip design from years to months using AI. Today the company announced a $12 million Series A.
Intel Capital led the round along with new investors Storm Ventures and Seraph Group, as well as participation from Inventus Capital. Including its previous seed funding, the company reports it has now raised a total of $20 million.
Motivo co-founder and CEO Bharath Rangarajan has worked in the chip industry for 30 years, and he saw a few fundamental trends and issues. For starters, the chip design process is highly time-intensive, taking years to come up with a successful candidate, and typically the first to market wins.
Materials that change their properties in response to certain stimuli could come to occupy a valuable space in many fields, ranging from robotics, to medical care, to advanced aircraft. A new example of this type of shape-shifting technology is modeled on ancient chain mail armor, enabling it to swiftly switch from flexible to stiff thanks to carefully arranged interlocking particles.
Artificial intelligence research company OpenAI has announced the development of an AI system, called Codex, that translates natural language into programming code. The system is being released as a free API, at least for the time being.
Codex is more of a next-step product for OpenAI, rather than something completely new. It builds on Copilot, a tool for use with Microsoft’s GitHub code repository. With the earlier product, users would get suggestions similar to those seen in autocomplete in Google, except it would help finish lines of code. Codex has taken that concept a huge step forward by accepting sentences written in English and translating them into runnable code. As an example, a user could ask the system to create a web page with a certain name at the top and with four evenly sized panels below numbered one through four. Codex would then attempt to create the page by generating the code necessary for the creation of such a site in whatever language (JavaScript, Python, etc.) was deemed appropriate. The user could then send additional English commands to build the website piece by piece.
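To make the example above concrete, here is the kind of code such a system might generate for the prompt described (a page with a name at the top and four evenly sized panels numbered one through four). This is a hypothetical illustration written by hand, not actual Codex output; the function name and styling are assumptions.

```python
# Hypothetical illustration of generated code for the prompt:
# "a web page with a name at the top and four evenly sized panels
# numbered one through four".

def build_page(name):
    # Four numbered panels, laid out as a 2x2 CSS grid below the heading.
    panels = "\n".join(
        f'  <div class="panel">{i}</div>' for i in range(1, 5)
    )
    return f"""<!DOCTYPE html>
<html>
<head>
  <style>
    .grid {{ display: grid; grid-template-columns: 1fr 1fr; gap: 8px; }}
    .panel {{ border: 1px solid #333; padding: 2em; text-align: center; }}
  </style>
</head>
<body>
  <h1>{name}</h1>
  <div class="grid">
{panels}
  </div>
</body>
</html>"""

page = build_page("My Site")
print(page)
```

In the workflow the article describes, each follow-up English command would refine output like this piece by piece, rather than the user editing the markup directly.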
Codex (and Copilot) parse written text using OpenAI’s language generation model, which is able to both generate and parse code. That flexibility allowed users to apply Copilot in custom ways, one of which was reproducing programming code that others had written and contributed to the GitHub repository. This led many of those contributors to accuse OpenAI of using their code for profit, a charge that could very well be levied against Codex as well, since much of the code it generates is simply copied from GitHub. Notably, OpenAI started out as a nonprofit entity in 2015 and changed to what it described as a “capped profit” entity in 2019, a move the company claimed would help it get more funding from investors.
Houston-based ThirdAI, a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding.
Neotribe Ventures, Cervin Ventures and Firebolt Ventures co-led the investment, which will be used to hire additional employees and invest in computing resources, Anshumali Shrivastava, ThirdAI co-founder and CEO, told TechCrunch.
Shrivastava, who has a mathematics background, was always interested in artificial intelligence and machine learning, especially rethinking how AI could be developed in a more efficient manner. It was when he was at Rice University that he looked into how to make that work for deep learning. He started ThirdAI in April with some Rice graduate students.