
Building a DNA nanoparticle to be both carrier and medicine

Scientists have been making nanoparticles out of DNA strands for two decades, manipulating the bonds that maintain DNA’s double-helical shape to sculpt self-assembling structures that could someday have jaw-dropping medical applications.

Research on DNA nanoparticles, however, has focused mostly on their architecture, turning the genetic code of life into components for fabricating minuscule robots. A pair of Iowa State University researchers in the genetics, development, and cell biology department, professor Eric Henderson and recent doctoral graduate Chang-Yong Oh, hope to change that by showing that nanoscale materials made of DNA can convey their built-in genetic instructions.

“So far, most people have been exploring DNA nanoparticles from an engineering perspective. Little attention has been paid to the information held in those DNA strands,” Oh said.

Deep learning-aided decision support for diagnosis of skin disease across skin tones

Deep learning #AI for skin lesions was assessed as a diagnostic aid for 800 dermatologists and primary care physicians from 39 countries: a marked improvement in accuracy, but a widened bias gap.


In a large-scale study involving 389 board-certified dermatologists and 459 primary-care physicians from 39 countries, the impact of a deep learning-aided decision support system on physicians’ diagnostic accuracy was tested across 46 skin diseases and for both light and dark skin tones.
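
To make the "bias gap" concrete: one common way to read it is the difference in diagnostic accuracy between light and dark skin tone subgroups, before and after the decision support is added. The sketch below computes that subgroup gap from made-up predictions; the data, labels, and two-way skin-tone grouping are illustrative only and not taken from the study.

```python
# Illustrative only: compute per-skin-tone diagnostic accuracy and the gap between groups.
from collections import defaultdict

# Each record: (skin_tone_group, true_diagnosis, predicted_diagnosis) -- fabricated examples.
records = [
    ("light", "melanoma", "melanoma"),
    ("light", "eczema", "eczema"),
    ("light", "psoriasis", "eczema"),
    ("dark", "melanoma", "eczema"),
    ("dark", "eczema", "eczema"),
    ("dark", "psoriasis", "psoriasis"),
]

correct = defaultdict(int)
total = defaultdict(int)
for tone, truth, prediction in records:
    total[tone] += 1
    correct[tone] += int(truth == prediction)

accuracy = {tone: correct[tone] / total[tone] for tone in total}
gap = accuracy["light"] - accuracy["dark"]  # a positive gap means better accuracy on lighter skin
print(accuracy, "gap:", gap)
```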

The next wave of fraud should frighten banks and crypto firms alike

It’s possible the OnlyFake owner is exaggerating, and it’s also worth noting that counterfeiting documents is nothing new. The difference here, though, is that the firm’s software can crank out hundreds of fake but very real-looking IDs. It feels like only a matter of time before banks and crypto firms alike are swamped by a wave of bots opening accounts with convincing fake IDs.

You can add to this an impending wave of AI-based tools designed to defeat the anti-fraud measures, such as voice-based authentication, that banks and others rely on. We are also seeing AI used to carry out audacious new forms of robbery, including the jaw-dropping story this week of a criminal gang that persuaded some poor employee in Hong Kong to transfer $25 million of company funds during a Zoom meeting. It turned out that all the other members on the call were AI-generated replicas of the employee’s boss and coworkers.

Dexa aims to get more out of podcasts with AI-powered search

If you listen to a lot of podcasts, there is a chance you half-remember a funny tidbit and find yourself wondering, “Wait, who talked about eating fries with sriracha again?” or have a more serious question in mind. To find the answer, you first have to find the podcast and then search through its transcript. Dexa is trying to make podcast search easier by leveraging AI.

The tool lets you ask questions about a single podcast, like Andrew Huberman’s Huberman Lab, or query all the podcasts in Dexa’s database; there are currently more than 120, with more being added. The search results give you an AI-generated summary of the answer along with pointers to the podcasts where the topic was discussed.

For instance, you can ask questions like “What’s the best way to get more sleep?” and find answers to that from Dexa’s podcast library with timestamped links to those conversations. You can also @mention a specific podcast to narrow down your search results.
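
Dexa has not published how its search works, but one plausible way to build this kind of question answering is to retrieve timestamped transcript segments that best match the question and then have a language model summarize them with links. The minimal sketch below shows only the retrieval step; the podcast names, timestamps, and transcript snippets are made up for illustration.

```python
# Generic retrieval sketch over timestamped podcast transcript segments (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

segments = [
    {"podcast": "Example Show A", "t": "00:12:30", "text": "We talked about sleep, caffeine timing, and morning light."},
    {"podcast": "Example Show B", "t": "01:05:10", "text": "The guest described eating fries with sriracha on a road trip."},
    {"podcast": "Example Show A", "t": "00:44:02", "text": "Strength training twice a week was the main recommendation."},
]

# Index the transcript text so questions can be matched against it.
vectorizer = TfidfVectorizer().fit([s["text"] for s in segments])
segment_vectors = vectorizer.transform([s["text"] for s in segments])

def search(question: str, top_k: int = 2):
    """Return the top-k transcript segments most similar to the question."""
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, segment_vectors)[0]
    ranked = sorted(zip(scores, segments), key=lambda pair: pair[0], reverse=True)
    return [(seg["podcast"], seg["t"], seg["text"]) for _, seg in ranked[:top_k]]

print(search("Who talked about eating fries with sriracha?"))
```

A production system would likely use neural embeddings rather than TF-IDF and feed the retrieved segments to an LLM to produce the summarized answer with timestamped links, but the retrieve-then-answer shape is the same.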

How OLMo From AI2 Redefines LLM Innovation

The Allen Institute for AI (AI2) created the Open Language Model, or OLMo, an open-source large language model aimed at advancing the science of language models through open research.


AI2 has partnered with organizations such as Surge AI and MosaicML for data and training code. These partnerships are crucial for providing the diverse datasets and sophisticated training methodologies that underpin OLMo’s capabilities. The collaboration with the Paul G. Allen School of Computer Science and Engineering at the University of Washington and Databricks Inc. has also been pivotal in realizing the OLMo project.

It is important to note that OLMo is currently released as a base model, not the kind of instruction-tuned model that powers chatbots or AI assistants. However, that’s on the roadmap. According to AI2, the model will see multiple enhancements in the coming months, with plans to iterate on OLMo by bringing different model sizes, modalities, datasets, and capabilities into the OLMo family. This iterative process is aimed at continuously improving the model’s performance and utility for the research community.
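
Because the current release is a base model, the natural way to try it is plain text completion rather than chat. The sketch below assumes the weights are published on the Hugging Face Hub under an ID like "allenai/OLMo-7B" and that standard transformers loading (possibly with trusted remote code) applies; check AI2’s release notes for the exact instructions.

```python
# Minimal sketch: using an OLMo base model for text completion via Hugging Face transformers.
# The repo ID and loading flags are assumptions, not confirmed instructions from AI2.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# A base model continues text; it does not follow chat-style instructions.
prompt = "Language models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```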

OLMo’s open and transparent approach, along with its advanced capabilities and commitment to continuous improvement, makes it a major milestone in the evolution of LLMs.

What Is The Best Way To Control Today’s AI?

In a famous line written over 60 years ago, early AI pioneer Norbert Wiener summed up one of the core challenges that humanity faces in building artificial intelligence: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively…we had better be quite sure…”


The answer is a technique known as reinforcement learning from human feedback (RLHF).

RLHF has become the dominant method by which human developers control and steer the behavior of AI models, especially language models. It impacts how millions of people around the world experience artificial intelligence today. It is impossible to understand how today’s most advanced AI systems work without understanding RLHF.

At the same time, newer methods are quickly emerging that seek to improve upon and displace RLHF in the AI development process. The technological, commercial and societal implications are profound: at stake is how humans shape the way that AI behaves. Few areas of AI research are more active or important today.
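
To make the idea concrete, the first stage of RLHF is usually to train a reward model on human preference data: given pairs of responses where annotators marked one as better, the model learns to score the preferred response higher (a Bradley-Terry style pairwise loss). The sketch below shows only that preference-modeling step on tiny made-up feature vectors; real systems train a large transformer over full prompt-response token sequences.

```python
# Schematic sketch of the reward-model (preference-learning) step at the heart of RLHF.
# The feature vectors and "preference" data are random and purely illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

FEATURE_DIM = 16  # stand-in for an embedding of a (prompt, response) pair
reward_model = nn.Sequential(nn.Linear(FEATURE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake dataset: row i pairs the features of a human-preferred response with a rejected one.
chosen = torch.randn(256, FEATURE_DIM) + 0.5   # preferred responses (shifted distribution)
rejected = torch.randn(256, FEATURE_DIM)       # rejected responses

for step in range(200):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Pairwise loss: push the reward of the preferred response above the rejected one.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final pairwise loss:", loss.item())
```

In full RLHF, this trained reward model then supplies the reward signal for a reinforcement learning step (commonly PPO) that nudges the language model toward outputs the reward model scores highly, which is exactly the steering role the article describes.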