
Engineers at Caltech and JPL

The Jet Propulsion Laboratory (JPL) is a federally funded research and development center managed for NASA by the California Institute of Technology (Caltech). The laboratory’s primary function is the construction and operation of planetary robotic spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network. JPL implements programs in planetary exploration, Earth science, space-based astronomy and technology development, while applying its capabilities to technical and scientific problems of national significance.

Artificial intelligence research company OpenAI has announced the development of an AI system, called Codex, that translates natural language into programming code. The system is being released as a free API, at least for the time being.

Codex is more of a next-step product for OpenAI than something completely new. It builds on Copilot, a tool for use with Microsoft’s GitHub code repository. With the earlier product, users got suggestions similar to Google’s autocomplete, except that it helped finish lines of code. Codex takes that concept a huge step forward by accepting sentences written in English and translating them into runnable code. As an example, a user could ask the system to create a web page with a certain name at the top and four evenly sized panels below, numbered one through four. Codex would then attempt to create the page by generating the necessary code in whatever language (JavaScript, Python, etc.) it deemed appropriate. The user could then send additional English commands to build the website piece by piece.
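To make that concrete, here is a rough sketch of what prompting Codex from Python might look like. The engine name, parameters, and the page title are assumptions based on OpenAI’s beta API conventions at the time, not details confirmed in the announcement.

# A minimal sketch of prompting Codex through OpenAI's Python client.
# The engine name ("davinci-codex") and parameters are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# The English instruction, framed as a docstring-style prompt.
prompt = (
    '"""\n'
    "Create a web page titled 'My Page' with four evenly sized\n"
    "panels below the title, numbered one through four.\n"
    '"""'
)

response = openai.Completion.create(
    engine="davinci-codex",  # Codex engine name (assumption)
    prompt=prompt,
    max_tokens=300,          # room for the generated HTML/JS
    temperature=0,           # favor deterministic code output
)

print(response["choices"][0]["text"])  # the generated code

Each follow-up English command could be appended to the prompt, along with the code generated so far, to build the page piece by piece.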

Codex (and Copilot) parse written text using OpenAI’s language generation model, which can both generate and parse code. That flexibility let users put Copilot to custom uses; one was reproducing programming code that others had written and contributed to GitHub repositories. This led many of those contributors to accuse OpenAI of using their code for profit, a charge that could very well be levied against Codex as well, since much of the code it generates is simply copied from GitHub. Notably, OpenAI started out as a nonprofit entity in 2015 and changed to what it described as a “capped profit” entity in 2019, a move the company claimed would help it get more funding from investors.

Houston-based ThirdAI, a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding.

Neotribe Ventures, Cervin Ventures and Firebolt Ventures co-led the investment, which will be used to hire additional employees and invest in computing resources, Anshumali Shrivastava, ThirdAI co-founder and CEO, told TechCrunch.

Shrivastava, who has a mathematics background, was always interested in artificial intelligence and machine learning, especially rethinking how AI could be developed more efficiently. While at Rice University, he looked into how to make that work for deep learning. He started ThirdAI in April with some Rice graduate students.

I think SENS did this last year, but now AlphaFold 2 will make it easier and faster.


Hey, it’s Han from WrySci HX, discussing how breakthroughs on the protein folding problem by DeepMind’s AlphaFold 2 could combine with the SENS Research Foundation’s approach of allotopic mitochondrial gene expression to fight aging damage. More below ↓↓↓


It’s no secret that AI is everywhere, yet it’s not always clear when we’re interacting with it, let alone which specific techniques are at play. But one subset is easy to recognize: If the experience is intelligent and involves photos or videos, or is visual in any way, computer vision is likely working behind the scenes.

Computer vision is a subfield of AI, specifically of machine learning. If AI allows machines to “think,” then computer vision is what allows them to “see.” More technically, it enables machines to recognize, make sense of, and respond to visual information like photos, videos, and other visual inputs.
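As a concrete, if simplified, illustration of what it means for a machine to “see,” the sketch below classifies a photo with a pretrained ResNet-18 from the torchvision library. The image path is a placeholder, and this is one common approach rather than a reference implementation.

# A minimal computer-vision sketch: classify one photo with a
# pretrained ImageNet model from torchvision.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)  # ImageNet-trained weights
model.eval()                              # inference mode

image = Image.open("photo.jpg")           # placeholder path
batch = preprocess(image).unsqueeze(0)    # add a batch dimension

with torch.no_grad():
    class_id = model(batch).argmax(dim=1).item()

print(f"Predicted ImageNet class index: {class_id}")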

Over the last few years, computer vision has become a major driver of AI. The technique is used widely in industries like manufacturing, ecommerce, agriculture, automotive, and medicine, to name a few. It powers everything from interactive Snapchat lenses to sports broadcasts, AR-powered shopping, medical analysis, and autonomous driving capabilities. And by 2022 the global market for the subfield is projected to reach $48.6 billion annually, up from just $6.6 billion in 2015.
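For context, the implied growth rate behind those two figures works out to roughly 33% per year, as this quick calculation shows:

# Implied compound annual growth rate from the market figures above:
# $6.6B in 2015 growing to a projected $48.6B by 2022 (7 years).
start, end, years = 6.6, 48.6, 2022 - 2015
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # about 33%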

I would say this is probably aimed at a few things. It’s a workaround to the national fight to raise the minimum wage. These will be out of sight, out of mind, so no one besides the workers will notice as they are gradually automated to 100% by around 2027. And delivery will gradually be fully automated with long-distance drones and self-driving vehicles. Also, be sure every other chain is working on the same stuff.


‘Ghost kitchens’ are coming to the UK, US, and Canada.