
Just a few years ago, Berkeley engineers showed us how they could easily turn a collection of 2D images into a navigable 3D scene using a technology called Neural Radiance Fields, or NeRF. Now, another team of Berkeley researchers has created a development framework to help speed up NeRF projects and make this technology more accessible to others.

Led by Angjoo Kanazawa, assistant professor of electrical engineering and computer sciences, the researchers have developed Nerfstudio, a Python framework that provides plug-and-play components for implementing NeRF-based methods, making it easier to collaborate and incorporate NeRF into projects. Kanazawa and her team will present their paper on Nerfstudio at SIGGRAPH 2023, and have published it as part of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings.

“Advancements in NeRF have contributed to its growing popularity and use in applications such as computer vision, robotics, and gaming. But support for development has been lagging,” said Kanazawa. “The Nerfstudio framework is intended to simplify the development of custom NeRF methods, the processing of real-world data, and interaction with reconstructions.”

A routine osteoporosis screening bone density test can also detect increased risk for a heart attack because of the presence of calcium in the aorta. But reading these images requires expertise and can be time-consuming.

Now, research from a multi-institution collaboration, including Harvard Medical School and Hebrew SeniorLife, reports that this calcification test score can be calculated quickly by using machine learning, without the need for a person to grade the scans.

These AI efforts and the surging interest in ChatGPT have led Microsoft to seek more GPUs than it had expected.

“I am thrilled that Microsoft announced Azure is opening private previews to their H100 AI supercomputer,” Jensen Huang, Nvidia’s CEO, said at his company’s GTC developer conference in March.

Microsoft has begun looking outside its own data centers to secure enough capacity, signing an agreement with Nvidia-backed CoreWeave, which rents out GPUs to third-party developers as a cloud service.

Here’s a better use for AI than warfare. Coming from a military family, I see military applications as a sad but necessary thing, given Russia’s recent appetite for invading its neighbors, though I hope we can keep the peace with China. But I’ve always loved animals; as a child I was called Dr. Dolittle because I played with them so much. I still hope for world peace, and perhaps AI can help diplomats communicate better as well. You’d think we’d be able to by now, but it doesn’t seem to be the case.


Scientists are harnessing the power of artificial intelligence (AI) to decode animal languages.

Scientists in Israel are closer than ever to two-way communication with another species, thanks to using AI to understand the language of bats.

Preventing artificial intelligence chatbots from creating harmful content may be more difficult than initially believed, according to new research from Carnegie Mellon University, which reveals new methods for bypassing safety protocols.

Popular AI services like ChatGPT and Bard take user prompts and produce useful answers, ranging from scripts and ideas to entire pieces of writing. The services have safety protocols that prevent the bots from creating harmful content, such as prejudiced messaging or anything potentially defamatory or criminal.

An intriguing new artificial intelligence project called “The Simulation” is exploring the possibility of autonomously creating new episodes of the hit animated series South Park. Researchers from Fable Studios have published a paper detailing their approach, which combines multi-agent simulations, large language models, and custom visual generation systems.

The goal is to develop an AI “showrunner” agent that can generate high-quality, customized story content aligned with the style, characters, and sensibilities of South Park. The system would allow users to essentially “program” their own new episodes by providing high-level prompts and guidance.

Key to the approach is the use of a simulated South Park world, populated by digital versions of characters like Cartman, Kyle, and Stan. This provides context and backstory to seed the creative process. Users can influence events and character behaviors within the simulation to set up storylines.
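The paper's actual pipeline relies on large language models and custom visual generation, but the core "simulation provides context" idea can be illustrated with a small toy sketch. Everything below is hypothetical (the class names, the action list, and the prompt format are illustrative inventions, not the authors' code): simple agents generate an event log, and that log is folded into a backstory prompt that a story generator would consume.

```python
import random

class Agent:
    """A hypothetical simulated character with a single personality trait."""
    def __init__(self, name, trait):
        self.name = name
        self.trait = trait

    def act(self, rng):
        # Each tick, the agent performs one simple trait-flavored action.
        actions = ["argues with", "teams up with", "pranks"]
        return f"{self.name} ({self.trait}) {rng.choice(actions)} a classmate"

def simulate(agents, ticks, seed=0):
    """Run the toy world for `ticks` steps and return the event log."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [agent.act(rng) for _ in range(ticks) for agent in agents]

def build_prompt(log, premise):
    """Turn the simulation's event log into backstory context for a
    downstream story generator (an LLM in the real system)."""
    events = "\n".join(f"- {e}" for e in log)
    return f"Premise: {premise}\nRecent events:\n{events}"

agents = [Agent("Cartman", "scheming"), Agent("Kyle", "principled")]
log = simulate(agents, ticks=2)
prompt = build_prompt(log, premise="a school bake sale goes wrong")
print(prompt)
```

In this sketch, a user "programs" an episode by supplying the premise and by choosing which agents (and traits) populate the simulation, mirroring how the paper describes users influencing events and character behaviors to set up storylines.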