
Study shows potential for generative AI to increase access and efficiency in healthcare

A new study led by investigators from Mass General Brigham has found that ChatGPT was about 72 percent accurate in overall clinical decision making, from coming up with possible diagnoses to making final diagnoses and care management decisions. The large-language model (LLM) artificial intelligence chatbot performed equally well in both primary care and emergency settings across all medical specialties. The research team’s results are published in the Journal of Medical Internet Research.

Our paper comprehensively assesses decision support via ChatGPT from the very beginning of working with a patient through the entire care scenario, from differential diagnosis all the way through testing, diagnosis, and management. No real benchmarks exist, but we estimate this performance to be at the level of someone who has just graduated from medical school, such as an intern or resident. This tells us that LLMs in general have the potential to be an augmenting tool for the practice of medicine and support clinical decision making with impressive accuracy.

Google Secretly Showing Newspapers an AI-Powered News Generator

Google is secretly showing off an AI tool that can produce news stories to major newspapers, including The New York Times, The Washington Post, and The Wall Street Journal.

The tool, dubbed Genesis, can digest public information and generate news content, according to reporting by the New York Times, in yet another sign that AI-generated — or at least AI-facilitated — content is about to flood the internet.

Google is stridently denying that the tool is meant to replace journalists, saying it will instead serve as a “kind of personal assistant for journalists, automating some tasks to free up time for others.”

Soft robotics research offers new route for weaving soft materials into 3D spatial structures

Ever wonder why the most advanced robots always seem to have hard bodies? Why not more pliable ones, like humans have?

Researchers working on so-called “soft robotics” attempt to incorporate the feel of living organisms into their creations. But the field hasn’t taken off because the softer components haven’t been easy enough to mass-produce and incorporate into the designs—until now.

University of Virginia researchers have invented a method for weaving soft materials such as fabrics, rubbers and gels so that they can be compatible with gadgets, which may lead to a soft robotics revolution.

IBM reports analog AI chip patterned after human brain

Deep neural networks are generating much of the exciting progress stemming from generative AI. But their architecture relies on a configuration that acts as a virtual speed bump, ensuring that maximal efficiency cannot be obtained.

Constructed with separate units for memory and processing, these systems face heavy demands on resources for communication between the two components, which results in slower speeds and reduced efficiency.
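The cost of that separation can be illustrated with a toy model (an illustrative sketch only, not a model of IBM's actual chip): counting how many values must cross the memory/processor boundary for a simple dot product, versus a compute-in-memory design where the stored weights never move.

```python
# Toy illustration of the von Neumann bottleneck: a conventional design
# shuttles every operand from memory to the processor, while an
# in-memory design computes where the weights are stored.
# (Hypothetical sketch for illustration -- not IBM's architecture.)

def dot_separate(weights, inputs):
    """Separate memory/processor: count values moved across the bus."""
    transfers = 0
    acc = 0.0
    for w, x in zip(weights, inputs):
        transfers += 2          # fetch one weight and one input from memory
        acc += w * x            # multiply-accumulate in the processor
    return acc, transfers

def dot_in_memory(weights, inputs):
    """Compute-in-memory: weights stay put; only the inputs are
    broadcast in and a single result is read out."""
    transfers = len(inputs) + 1
    acc = sum(w * x for w, x in zip(weights, inputs))
    return acc, transfers

w = [0.5, -1.0, 2.0, 0.25]
x = [1.0, 2.0, 3.0, 4.0]
print(dot_separate(w, x))    # same result, more data movement
print(dot_in_memory(w, x))   # same result, fewer transfers
```

Both functions compute the same answer; the difference is entirely in data movement, which is the overhead analog in-memory chips aim to eliminate.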

IBM Research came up with a better idea by turning to the perfect model for its inspiration for a more efficient digital brain: the human brain.

Nvidia’s rising star gets even brighter with another stellar quarter propelled by sales of AI chips

Computer chip maker Nvidia has rocketed into the constellation of Big Tech’s brightest stars while riding the artificial intelligence craze that’s fueling red-hot demand for its technology.

The latest evidence of Nvidia’s ascendance emerged with Wednesday’s release of the company’s quarterly earnings report. The results covering the May-July period exceeded Nvidia’s projections for astronomical sales growth propelled by the company’s specialized chips—key components that help power different forms of artificial intelligence, such as OpenAI’s popular ChatGPT and Google’s Bard chatbots.

“This is a new computing platform, if you will, a new computing transition that is happening,” Nvidia CEO Jensen Huang said Wednesday during a conference call with analysts.

Google’s Search AI Says Slavery Was Good, Actually

Lots of experts on AI say it can only be as good as the data it’s trained on — basically, it’s garbage in and garbage out.

So with that old computer science adage in mind, what the heck is happening with Google’s AI-driven Search Generative Experience (SGE)? Not only has it been caught spitting out completely false information, but in another blow to the platform, people have now discovered it’s been generating results that are downright evil.

Case in point, noted SEO expert Lily Ray discovered that the experimental feature will literally defend human slavery, listing economic reasons why the abhorrent practice was good, actually. One pro the bot listed? That enslaved people learned useful skills during bondage — which sounds suspiciously similar to Florida’s reprehensible new educational standards.
