
Today, Replit announced Ghostwriter, an AI-powered programming assistant that suggests code to make development easier. It works within Replit’s online development environment and, like GitHub Copilot, can recognize and compose code in various programming languages to accelerate the development process.

According to Replit, Ghostwriter works by using a large language model trained on millions of lines of publicly available code. This baked-in data allows Ghostwriter to make suggestions based on what you’ve already typed while programming in Replit’s IDE. When you see a suggestion you like, you can “autocomplete” the code by pressing the Tab key.
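To picture the workflow, here is a hypothetical example of the kind of completion such an assistant might offer; the function and its suggested body are invented for this sketch, not taken from Replit’s announcement.

# Hypothetical completion scenario: you type the signature and docstring,
# and the assistant proposes the body, which you accept by pressing Tab.

def fahrenheit_to_celsius(f: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9  # <- the suggested completion

print(fahrenheit_to_celsius(98.6))  # ≈ 37.0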

Greg Brockman, President and Co-Founder of OpenAI, joins Alexandr Wang, CEO and Founder of Scale, to discuss the role of foundation models like GPT-3 and DALL·E 2 in research and in the enterprise. Foundation models make it possible to replace task-specific models with generalized ones that can be applied to many different tasks with minimal fine-tuning.

In January 2021, OpenAI introduced DALL·E, a text-to-image generation program. One year later, it introduced DALL·E 2, which generates more realistic, accurate, lower-latency images with four times greater resolution than its predecessor. At the same time, it released InstructGPT, a large language model (LLM) explicitly designed to follow instructions. InstructGPT makes it practical to leverage the OpenAI API to revise existing content, such as rewriting a paragraph of text or refactoring code.
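As a rough illustration of that kind of API call, the sketch below uses the openai Python package (pre-1.0 interface) to rewrite a paragraph with an instruction-following model; the model name, prompt, and parameters are assumptions for this example, not details from the talk.

# Minimal sketch (assumptions noted above): asking an InstructGPT-style
# model to rewrite a paragraph via the OpenAI API.
import openai

openai.api_key = "YOUR_API_KEY"  # supply your own key

response = openai.Completion.create(
    model="text-davinci-003",  # an instruction-following model (assumed)
    prompt=(
        "Rewrite the following paragraph to be clearer and more concise:\n\n"
        "Foundation models make it possible to replace task-specific models "
        "with generalized ones that need only minimal fine-tuning."
    ),
    max_tokens=120,
    temperature=0.3,
)
print(response.choices[0].text.strip())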

Before creating OpenAI, Brockman was the CTO of Stripe, which he helped grow from four to 250 employees. Watch this talk to learn how foundation models can help businesses build applications more quickly than past generations of AI tools allowed.

Imagine the booming chords from a pipe organ echoing through the cavernous sanctuary of a massive, stone cathedral.

The sound a cathedral-goer hears is affected by many factors, including the location of the organ, where the listener is standing, whether any columns, pews, or other obstacles stand between them, what the walls are made of, the locations of windows or doorways, and so on. Hearing a sound can help someone envision their environment.

Researchers at MIT and the MIT-IBM Watson AI Lab are exploring the use of spatial acoustic information to help machines better envision their environments, too. They developed a machine-learning model that can capture how any sound in a room will propagate through the space, enabling the model to simulate what a listener would hear at different locations.
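As a toy illustration of the general idea, and emphatically not the researchers’ actual architecture, one could fit an implicit neural field that maps a sound-source position and a listener position to a short room impulse response; every detail below (2D positions, layer sizes, impulse-response length) is an assumption.

# Toy sketch of an implicit acoustic field: positions in, impulse response out.
import torch
import torch.nn as nn

class AcousticField(nn.Module):
    def __init__(self, ir_length: int = 256):
        super().__init__()
        # Input: 2D source position + 2D listener position = 4 coordinates.
        self.net = nn.Sequential(
            nn.Linear(4, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ir_length),  # predicted impulse-response samples
        )

    def forward(self, source_xy: torch.Tensor, listener_xy: torch.Tensor):
        return self.net(torch.cat([source_xy, listener_xy], dim=-1))

field = AcousticField()
ir = field(torch.tensor([[1.0, 2.0]]), torch.tensor([[4.0, 0.5]]))
print(ir.shape)  # torch.Size([1, 256])
# Convolving the predicted impulse response with any "dry" recording
# simulates what a listener at that spot would hear; training targets
# would come from measured or simulated impulse responses.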

The project, known as the DAF-MIT AI Accelerator, selected a pilot out of over 1,400 applicants.

The Department of the Air Force (DAF) and the Massachusetts Institute of Technology (MIT) commissioned their lead AI pilot, a training program that uses artificial intelligence, in October 2022. The project draws on expertise at MIT and the Department of the Air Force to research the potential of applying AI algorithms to advance the DAF and national security.

The military department and the university created an artificial intelligence project called the Department of the Air Force-Massachusetts Institute of Technology Artificial Intelligence Accelerator (DAF-MIT AI Accelerator).

Software upgrades could help resolve the issue.

A collaboration of researchers from the U.S. and Japan has demonstrated that a laser attack could be used to blind autonomous cars and delete pedestrians from their view, endangering those in their path, according to a press release.

Autonomous or self-driving cars rely on a spinning laser-based sensing system called LIDAR to perceive their surroundings. Short for Light Detection and Ranging, the system emits pulses of laser light and captures their reflections to determine the distances between the vehicle and the obstacles in its path.
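The ranging arithmetic itself is simple time-of-flight; a minimal sketch (not from the article) is below.

# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to an obstacle given a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds implies an obstacle ~10 m away.
print(lidar_range(66.7e-9))  # ≈ 10.0 (meters)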

Most advanced autonomous cars today rely on this system to steer around obstacles in their path. However, researchers from the University of Florida, the University of Michigan, and the University of Electro-Communications in Japan showed the system can be tricked with a fairly basic laser setup.

The future of work is here.

As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth, and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robots are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge researchers in the Wm Michael Barnes ‘64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.

Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains.

Artificial intelligence has discovered a new life-changing drug, and human trials are already underway.

The biotech company behind the breakthrough has dosed its first patient with an AI-developed treatment for ALS.

Alice Zhang, 33, is the founder of Verge Genomics and a former neuroscience doctoral student at the University of California.

Life on Earth as we know it would not exist if not for the protein molecules that enable critical processes, from photosynthesis and enzymatic degradation to sight and the immune system. And like most facets of the natural world, humanity has only just begun to discover the multitudes of protein types that actually exist. But rather than scour the most inhospitable parts of the planet in search of novel microorganisms that might harbor a new flavor of organic molecule, Meta researchers have developed a first-of-its-kind metagenomic database, the ESM Metagenomic Atlas, that could accelerate protein-folding AI performance by 60x over existing approaches.

Metagenomics, despite its name, has nothing to do with Meta. It is a relatively new but very real scientific discipline that studies “the structure and function of entire nucleotide sequences isolated and analyzed from all the organisms (typically microbes) in a bulk sample.” Often used to identify the bacterial communities living on our skin or in the soil, these techniques are similar in function to gas chromatography, in that you’re trying to identify what’s present in a given sample.

Similar databases have been launched by the NCBI, the European Bioinformatics Institute, and the Joint Genome Institute, and have already cataloged billions of newly uncovered protein shapes. What Meta is bringing to the table is “a new protein-folding approach that harnesses large language models to create the first comprehensive view of the structures of proteins in a metagenomics database at the scale of hundreds of millions of proteins,” according to a Tuesday release from the company. The problem is that, while advances in genomics have revealed the sequences of slews of novel proteins, knowing those sequences doesn’t actually tell us how they fold into a functioning molecule, and figuring that out experimentally takes anywhere from a few months to a few years. Per molecule. Ain’t nobody got time for that.
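For a sense of what running such a model looks like, the sketch below follows the usage published in the facebookresearch/esm repository for ESMFold; the amino-acid sequence is a placeholder, and a CUDA-capable GPU is assumed.

# Predicting a protein structure from its sequence with ESMFold,
# per the fair-esm project's published usage (pip install "fair-esm[esmfold]").
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()  # assumes a CUDA-capable GPU

# Placeholder sequence; a real run would use a novel metagenomic sequence.
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # returns a PDB-format structure

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)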