
On a drizzly afternoon, Yan Zhao pointed to the trees visible from his campus window.

As a chemistry professor at Iowa State University, he is pioneering the creation of novel synthetic catalysts that break down cellulose, the plant fibers responsible for the trees’ height and strength.

“Cellulose is built to last – a tree doesn’t just disappear after rain,” Zhao said. “Cellulose is a huge challenge to break down.”

• Encryption and segmentation: These operate on the assumption that some fraction of the network is already compromised. Restricting the reach and utility of any captured data and accessible networks will mitigate the damage even on breached systems.

• SBOM documentation: Regulatory compliance can be driven by industry organizations and the government, but it will take time to establish standards. SBOM documentation is an essential foundation for best practices.

If “democracy dies in darkness,” and that includes lies of omission in reporting, then cybersecurity suffers the same fate with backdoors. The corollary is “don’t roll your own crypto,” even if well-intentioned. The arguments for weakening encryption to make law enforcement easier fall demonstrably flat, with TETRA just the latest example. Secrets rarely stay that way forever, and sensitive data is more remotely accessible than at any time in history. Privacy and global security affect us all, and the existence of these single points of failure in our cybersecurity efforts is unsustainable and will have unforeseeable consequences. We need to innovate and evolve the internet away from this model to have durable security assurances.

In a recent study published in the journal Frontiers in Medicine, researchers evaluated fluorescence optical imaging (FOI) as a method to accurately and rapidly diagnose rheumatic diseases of the hands.

They used machine learning algorithms to identify the minimum number of FOI features needed to differentiate between osteoarthritis (OA), rheumatoid arthritis (RA), and connective tissue disease (CTD). Of the 20 features identified as associated with these conditions, the results indicate that reduced sets of five to 15 features were sufficient to accurately diagnose each of the diseases under study.
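The study's exact pipeline isn't detailed here, but the general approach of shrinking a 20-feature set down to a smaller diagnostic subset can be sketched with a standard feature-selection tool. The following is a minimal illustration using scikit-learn's recursive feature elimination on synthetic data; the dataset, feature counts, and choice of classifier are assumptions for demonstration, not the study's actual methods.

```python
# Illustrative sketch: reducing a 20-feature set to smaller subsets
# for a three-class diagnosis problem (stand-ins for OA, RA, CTD).
# Synthetic data and logistic regression are assumptions, not the
# study's actual pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 20 imaging features across three diagnoses.
X, y = make_classification(
    n_samples=300, n_features=20, n_informative=8,
    n_classes=3, n_clusters_per_class=1, random_state=0,
)

# Try progressively smaller feature subsets and check how accuracy holds up.
for k in (20, 15, 10, 5):
    selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=k)
    X_k = selector.fit_transform(X, y)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X_k, y, cv=5).mean()
    print(f"{k:2d} features: mean CV accuracy = {acc:.2f}")
```

If accuracy stays roughly flat as features are removed, the smaller subset is sufficient for classification, which is the pattern the researchers report.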

That early experience drove his professional interest in helping people communicate.

Now, Henderson’s an author on one of two papers published Wednesday showing substantial advances toward enabling speech in people injured by stroke, accident or disease.

Although still very early in development, these so-called brain-computer interfaces are five times better than previous generations of the technology at “reading” brainwaves and translating them into synthesized speech. The successes suggest it will someday be possible to restore nearly normal communication ability to people like Henderson’s late father.

For prospective parents who are carriers of many inherited diseases, using in vitro fertilization along with genetic testing would significantly lower health care expenditures, according to researchers at Stanford Medicine.

Preimplantation genetic diagnostic testing during IVF, or PGD-IVF, is being used to screen for single-gene defect conditions such as cystic fibrosis, sickle cell disease and Tay-Sachs disease, along with nearly 400 others.

The problem is that the high cost of IVF — and the lack of coverage by all but one state Medicaid program, that of New York — makes it unavailable to millions of people at risk. The majority of private employer health benefit plans also do not cover IVF.

The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – and the overwrought – AI poses an extinction-level threat to humanity. All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.

ChatGPT can’t learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware. To understand why, you first have to understand how ChatGPT and similar models work, and the role humans play in making them work.