
Antibiotic resistance is a major danger to public health that threatens to claim the lives of millions of people per year within the next few decades. Years of necessary antibiotic use, combined with excessive application, have selected for strains that are resistant to many of our currently available treatments. Due to the high costs and difficulty of developing new antibiotics, the emergence of resistant bacteria is outpacing the introduction of new drugs to fight them. To overcome this problem, many researchers are focusing on developing antibacterial therapeutic strategies that are “resistance-resistant”—regimens that slow or stall resistance development in the targeted pathogens. In this mini review, we outline major examples of novel resistance-resistant therapeutic strategies. We discuss the use of compounds that reduce mutagenesis and thereby decrease the likelihood of resistance emergence. Then, we examine the effectiveness of antibiotic cycling and evolutionary steering, in which a bacterial population is forced by one antibiotic toward susceptibility to another antibiotic. We also consider combination therapies that aim to sabotage defensive mechanisms and eliminate potentially resistant pathogens by combining two antibiotics or combining an antibiotic with other therapeutics, such as antibodies or phages. Finally, we highlight promising future directions in this field, including the potential of applying machine learning and personalized medicine to fight antibiotic resistance emergence and out-maneuver adaptive pathogens.

The use of antibiotics is central to the practice of modern medicine but is threatened by widespread antibiotic resistance (Centers for Disease Control and Prevention (U.S.), 2019). Antibiotics are a selective evolutionary pressure—they inhibit bacterial growth and viability, and antibiotic-treated bacteria are forced to either adapt and survive or succumb to treatment. The stress of antibiotic treatment can enhance bacterial mutagenesis leading to de novo resistance mutations (Figure 1A), promote the acquisition of horizontally transferred genetic elements that confer resistance, or trigger phenotypic responses that increase tolerance to drugs (Davies and Davies, 2010; Levin-Reisman et al., 2017; Bakkeren et al., 2019; Darby et al., 2022). Additionally, antibiotic treatment can select for the proliferation of pre-existing mutants already in the population (Figure 1B).

In recent years, there has been a growing trend in higher education to incorporate modern technologies and practices in order to improve the overall educational experience. Learning management systems, gamification, video-assisted learning, and virtual and augmented reality are some examples of how technology has improved student engagement and education planning. Let’s talk about AI in education. Classroom response systems, for instance, let students answer multiple-choice questions and join real-time discussions instantly.

Despite the many benefits that technology has brought to education, there are also concerns about its impact on higher education institutions. With the rise of online education and the growing availability of educational resources on the internet, many traditional universities and colleges are worried about the future of their institutions. As a result, many higher education institutions are struggling to keep pace with rapid technological change and are looking for ways to adapt and stay relevant in the digital age.

By now, you’ve probably heard about ChatGPT, the AI chatbot developed by OpenAI that has been taking social media by storm. But what exactly is ChatGPT, and why is everyone talking about it? We asked it directly, and here is a comprehensible answer for non-tech people:

When people program new deep learning AI models — those that can focus on the right features of data by themselves — the vast majority rely on optimization algorithms, or optimizers, to ensure the models reach a high enough rate of accuracy. But one of the most commonly used classes of optimizers, derivative-based optimizers, runs into trouble handling real-world applications.

In a new paper, researchers from DeepMind propose a new way: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLM) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions.

The researchers write, “Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions.”
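To make the idea concrete, here is a minimal sketch of what an OPRO-style loop could look like. This is not DeepMind's code: the `query_llm` helper, the prompt wording, and the scoring function are placeholders you would supply yourself.

```python
# Illustrative OPRO-style optimization loop (a sketch, not DeepMind's implementation).
# The problem is stated in natural language, and the LLM is asked to propose new
# candidate solutions given the description and previously scored solutions.

from typing import Callable, List, Tuple


def query_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM of your choice and return its reply."""
    raise NotImplementedError("Wire this up to whichever LLM API you use.")


def opro_loop(problem_description: str,
              score_fn: Callable[[str], float],
              n_steps: int = 10) -> List[Tuple[str, float]]:
    history: List[Tuple[str, float]] = []
    for _ in range(n_steps):
        # Meta-prompt: the natural-language problem plus the best solutions so far.
        best_so_far = sorted(history, key=lambda pair: -pair[1])[:5]
        previous = "\n".join(f"solution: {sol}  score: {score:.3f}"
                             for sol, score in best_so_far) or "none yet"
        prompt = (
            f"{problem_description}\n\n"
            f"Previously evaluated solutions (higher score is better):\n{previous}\n\n"
            "Propose one new solution that improves on these. "
            "Reply with the solution text only."
        )
        candidate = query_llm(prompt).strip()
        history.append((candidate, score_fn(candidate)))  # evaluate and remember it
    return history
```

The key design choice, as the quote above describes, is that the update step comes from the LLM reading the problem description and its own past attempts, rather than from a programmed solver.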

ChatGPT went down on Wednesday morning — and the timing of its outage couldn’t have been more unfortunate. While OpenAI’s world-beating chatbot suffered its second major outage in as many weeks, big tech executives were convening in Washington to plead their case to lawmakers over the future of AI.

Among several notable figures in attendance was Sam Altman, CEO of the AI startup — who probably hoped to put on a better face amidst increased scrutiny over ChatGPT’s falling user traffic for the past several months.


This was yet another notable outage that ChatGPT has suffered in the past several weeks as user traffic falls.

The CEO of Polish drinks company Dictador is an AI-powered humanoid robot who works 7 days a week. The AI boss, named Mika, told Reuters that she doesn’t have weekends and is “always on 24/7.” Mika helps to spot potential clients and selects artists to design the rum producer’s bottles.

The humanoid robot CEO of a Polish drinks company is one busy boss.


Polish drinks company Dictador appointed an AI-powered humanoid robot called Mika as its experimental CEO in August 2022.

Imagine living in a cool, green city flush with parks and threaded with footpaths, bike lanes, and buses, which ferry people to shops, schools, and service centers in a matter of minutes.

That breezy dream is the epitome of urban planning, encapsulated in the idea of the 15-minute city, where all basic needs and services are within a quarter of an hour’s reach, improving public health and lowering vehicle emissions.

Artificial intelligence could help urban planners realize that vision faster, with a new study from researchers at Tsinghua University in China demonstrating how machine learning can generate more efficient spatial layouts than humans can, and in a fraction of the time.

Get up to speed on the rapidly evolving world of AI with our roundup of the week’s developments.

In a move that should surprise no one, tech leaders who gathered at closed-door meetings in Washington, DC, this week to discuss AI regulation with legislators and industry groups agreed on the need for laws governing generative AI technology. But they couldn’t agree on how to approach those regulations.

“The Democratic senator Chuck Schumer, who called the meeting ‘historic,’ said that attendees loosely endorsed the idea of regulations but that there was little consensus on what such rules would look like,” The Guardian reported. “Schumer said he asked everyone in the room — including more than 60…”

In tasks like customer service, consulting, programming, writing, teaching, etc., language agents can reduce human effort and are a potential first step toward artificial general intelligence (AGI). Recent demonstrations of language agents’ potential, including AutoGPT and BabyAGI, have sparked much attention from researchers, developers, and general audiences.

Even for seasoned developers or researchers, most of these demos or repositories are not easy to customize, configure, or deploy as new agents. The restriction stems from the fact that these demonstrations are usually proof-of-concepts that highlight the potential of language agents, rather than more substantial frameworks for gradually developing and customizing them.

Furthermore, studies show that the majority of these open-source repositories cover only a small fraction of core language-agent abilities, such as task decomposition, long-term memory, web navigation, tool usage, and multi-agent communication. Additionally, most (if not all) of the language agent frameworks currently in use rely on nothing more than a brief task description and depend entirely on the ability of LLMs to plan and act. The resulting high randomness and inconsistency across different runs make language agents difficult to modify and tweak, and the user experience is poor.
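For readers who want a feel for what these frameworks abstract away, here is a minimal sketch of the bare agent loop they describe: a brief task description, an LLM deciding step by step whether to call a tool or answer, and a simple transcript standing in for memory. The `query_llm` helper and the toy tool registry are placeholders, not part of any framework mentioned above.

```python
# Bare-bones language-agent loop (illustrative sketch only).
# The agent gets a brief task description and relies entirely on the LLM
# to plan: at each step it either calls a tool or returns a final answer.

from typing import Callable, Dict


def query_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM and return its reply."""
    raise NotImplementedError("Connect this to an LLM API of your choice.")


# Toy tool registry; real frameworks ship web navigation, code execution, etc.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub search results for '{query}')",
}


def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = []  # a real framework would add long-term memory here
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            "Steps so far:\n" + ("\n".join(transcript) or "none") + "\n"
            f"Available tools: {', '.join(TOOLS)}\n"
            "Reply with either 'TOOL <name> <input>' or 'ANSWER <text>'."
        )
        reply = query_llm(prompt).strip()
        if reply.upper().startswith("ANSWER"):
            return reply[len("ANSWER"):].strip()
        _, name, tool_input = reply.split(" ", 2)  # parse the tool call
        transcript.append(f"{name}({tool_input}) -> {TOOLS[name](tool_input)}")
    return "No answer produced within the step budget."
```

Because every decision in this loop is left to the model and a brief prompt, two runs can diverge wildly, which is exactly the randomness and inconsistency the passage above criticizes.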

Back to AI in healthcare – we’ve looked at this from a number of angles, but what about the pros and cons of using AI/ML systems in a clinical context? And how might AI models actually help conquer disease?

There’s a broader theory that AI is going to allow for trail-blazing research on everything from cancer and heart disease to trauma and bone and muscle health — and everything in between. Now, we have more defined solutions coming to the table, and they’re well worth looking at!

In this IIA talk, cardiologist Collin Stultz discusses the treatment of disorders and the new tools involved, starting with a strong emphasis on heart disease.