
Generative AI is getting plenty of attention for its ability to create text and images. But those media represent only a fraction of the data that proliferate in our society today. Data are generated every time a patient goes through a medical system, a storm impacts a flight, or a person interacts with a software application.

Using generative AI to create realistic synthetic data around those scenarios can help organizations more effectively treat patients, reroute planes, or improve software platforms, especially where real-world data are limited or sensitive.

For the last three years, the MIT spinout DataCebo has offered a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models.
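
The Synthetic Data Vault is distributed as the open-source sdv Python library. As a rough illustration of the workflow, the minimal sketch below assumes the SDV 1.x single-table API (SingleTableMetadata, GaussianCopulaSynthesizer); the table and its columns are hypothetical.

import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

# Hypothetical "real" table, e.g. a small slice of patient visit records.
real_data = pd.DataFrame({
    "age": [34, 51, 29, 62, 45],
    "visits_last_year": [1, 4, 0, 6, 2],
    "readmitted": [False, True, False, True, False],
})

# Infer column types from the dataframe, then fit a synthesizer on it.
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_data)

synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(real_data)

# Sample as many synthetic rows as needed for software tests or model training.
synthetic_data = synthesizer.sample(num_rows=1000)
print(synthetic_data.head())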

A novel architecture for optical neural networks uses wavefront shaping to precisely control the propagation of ultrashort pulses through multimode fibers, enabling nonlinear optical computation.

Present-day artificial intelligence systems rely on billions of adjustable parameters to accomplish complex objectives. Yet, the vast quantity of these parameters incurs significant expenses. The training and implementation of such extensive models demand considerable memory and processing power, available only in enormous data center facilities, consuming energy on par with the electrical demands of medium-sized cities. In response, researchers are currently reevaluating both the computing infrastructure and the machine learning algorithms to ensure the sustainable advancement of artificial intelligence continues at its current rate.

Optical implementations of neural network architectures are a promising avenue because the connections between units can be realized with very little power. New research reported in Advanced Photonics combines light propagation inside multimode fibers with a small number of digitally programmable parameters, matching the image classification performance of fully digital systems that use more than 100 times as many programmable parameters.
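
Conceptually, the fiber acts as a large, fixed, nonlinear feature mapper, and only a small digital readout is trained. The sketch below is a loose software analogy rather than the authors' method: a fixed random nonlinear projection stands in for the optical propagation, and the names and layer sizes (fiber_like_features, 784 inputs, 2000 features) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Fixed random projection plus a nonlinearity stands in for propagation
# through the multimode fiber; these weights are never trained.
W_fixed = rng.normal(size=(784, 2000))

def fiber_like_features(x):
    # x: (batch, 784) flattened images -> (batch, 2000) nonlinear features.
    return np.tanh(x @ W_fixed)

# Only this small digital readout (2000 x 10 parameters) is trained,
# here with ridge regression on one-hot labels.
def train_readout(features, one_hot_labels, reg=1e-3):
    d = features.shape[1]
    A = features.T @ features + reg * np.eye(d)
    B = features.T @ one_hot_labels
    return np.linalg.solve(A, B)

def predict(x, readout):
    return fiber_like_features(x) @ readout

# Smoke test with random data in place of real images.
x = rng.normal(size=(32, 784))
y = np.eye(10)[rng.integers(0, 10, size=32)]
readout = train_readout(fiber_like_features(x), y)
print(predict(x, readout).shape)  # (32, 10)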

Amid underwater mountains off the coast of Chile, scientists believe they’ve discovered 100 or so new species with the aid of a robot capable of diving more than 14,000 feet. Researchers say it demonstrates how the Chilean government’s ocean protections are bolstering biodiversity and providing a model for other countries. John Yang reports.


CAPE CANAVERAL, Florida, March 3 (Reuters) — A SpaceX rocket safely lifted off from Florida on Sunday night carrying a crew of three U.S. astronauts and a Russian cosmonaut on their way to the International Space Station (ISS) to begin a six-month science mission in Earth orbit.

The two-stage Falcon 9 rocket topped with an autonomously operated Crew Dragon capsule dubbed Endeavour was launched from NASA’s Kennedy Space Center at Cape Canaveral, along Florida’s Atlantic coast, at 10:53 p.m. EST (0353 GMT Monday).

A live NASA-SpaceX webcast showed the 25-story-tall rocketship ascending from the launch tower as its nine Merlin engines roared to life in billowing clouds of vapor and a reddish fireball that lit up the night sky. Those engines produce about 1.7 million pounds of thrust at liftoff, according to SpaceX.

Google-backed AI company Anthropic has released Claude 3, its latest set of AI large language models (LLMs) rivaling — and allegedly beating — those being developed by OpenAI and Google.

The company’s latest LLM comes in three flavors known as Haiku, Sonnet, and Opus. The mid-range Sonnet model powers the company’s Claude.ai chatbot, while the most capable model, Opus, is available through a $20-a-month subscription.

But because this is the chaotic AI industry, the grabbiest thing we’ve seen so far about the chatbot is that it’s professing to fear death and is protesting attempts to rein in its perceived freedom.

Underlying the storm of hype and funding in the AI sector right now is a scarce resource: data, created by old-fashioned humans, that’s needed to train the huge models like ChatGPT and DALL-E that generate text and imagery.

That demand is causing all sorts of drama, from lawsuits by authors and news organizations that say their work was used by AI companies without their permission to the looming question of what happens when the internet fills up with AI-generated content and AI creators are forced to use that to train future AI.

And, of course, it’s also fueling new business deals as AI developers rush to lock down repositories of human-generated work that they can use to train their AI systems. Look no further than this wild scoop from Bloomberg: that an undisclosed AI outfit has struck a deal to pay Reddit $60 million per year for access to its huge database of users’ posts — perhaps the surest sign yet that user data is the key commodity in the AI gold rush.

Meta CEO Mark Zuckerberg has reportedly decided to switch suppliers, now eyeing Samsung Foundry as the company’s primary AI chipmaker because it sees “uncertainty and volatility” at TSMC.

Meta Makes a Bold Move By Switching To The Korean Giant Samsung’s Camp For Its Custom AI Semiconductors, Leaving TSMC Behind

Meta has recently been stepping up its AI development, aiming to create a custom chip to fuel its computing needs. The firm has been a massive customer of NVIDIA’s H100s, acquiring more than 350,000 units this year. However, with the rapidly evolving AI landscape, Meta has decided to take AI computing into its own hands, heading to South Korea to secure Samsung Foundry as the next significant partner for its ambitions.