
In recent years, higher education has increasingly adopted modern technologies and practices to improve the overall educational experience. Learning management systems, gamification, video-assisted learning, and virtual and augmented reality are some examples of how technology has improved student engagement and education planning. Let's talk about AI in education. Classroom response systems, for instance, let students answer multiple-choice questions and join real-time discussions instantly.

Despite the many benefits that technology has brought to education, there are also concerns about its impact on higher education institutions. With the rise of online education and the growing availability of educational resources on the internet, many traditional universities and colleges are worried about the future of their institutions. As a result, many higher education institutions need help to keep pace with the rapid technological changes and are looking for ways to adapt and stay relevant in the digital age.

By now, you've probably heard about ChatGPT, the AI chatbot developed by OpenAI that has been taking social media by storm. But what exactly is ChatGPT, and why is everyone talking about it? We asked it directly, and here is a comprehensible answer for non-technical readers:

When people program new deep learning AI models, the kind that can pick out the right features of the data by themselves, the vast majority rely on optimization algorithms, or optimizers, to ensure the models reach a high enough accuracy. But some of the most commonly used optimizers, derivative-based methods, run into trouble handling real-world applications.

In a new paper, researchers from DeepMind propose a new approach: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLMs) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions.

The researchers write, “Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions.”
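
To make the idea concrete, here is a minimal sketch of an OPRO-style loop as the paper describes it: keep a history of scored solutions, fold the best of them into a natural-language meta-prompt, and ask the LLM for a better candidate. The `llm` and `score` functions are hypothetical placeholders (an LLM API client and a task-specific evaluator), not code released with the paper.

```python
# Minimal OPRO-style loop (sketch): the optimization problem is described in
# natural language, and the LLM proposes new candidates based on the problem
# description plus previously found solutions and their scores.

def llm(prompt: str) -> str:
    """Hypothetical LLM completion call; swap in a real API client."""
    raise NotImplementedError

def score(solution: str) -> float:
    """Hypothetical task-specific evaluator (e.g., accuracy of a candidate prompt)."""
    raise NotImplementedError

def opro_optimize(problem_description: str, steps: int = 20, keep_top: int = 10):
    history = []  # list of (score, solution) pairs
    for _ in range(steps):
        # Build the meta-prompt: problem description plus the best solutions so far.
        shown = sorted(history)[-keep_top:]
        meta_prompt = problem_description + "\n\nPrevious solutions and scores:\n"
        meta_prompt += "\n".join(f"solution: {s!r} score: {v:.3f}" for v, s in shown)
        meta_prompt += "\n\nPropose a new, better solution. Output only the solution."

        candidate = llm(meta_prompt).strip()
        history.append((score(candidate), candidate))
    return max(history)  # (best_score, best_solution)
```

In the paper, loops of this kind are applied to tasks such as prompt optimization, where the score comes from accuracy on a small evaluation set.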

ChatGPT went down on Wednesday morning — and the timing of its outage couldn’t have been more unfortunate. While OpenAI’s world-beating chatbot suffered its second major outage in as many weeks, big tech executives were convening in Washington to plead their case to lawmakers over the future of AI.

Among the notable figures in attendance was Sam Altman, CEO of the AI startup, who likely hoped to put on a better face amid increased scrutiny over ChatGPT's falling user traffic in recent months.


This was yet another notable outage that ChatGPT has suffered in the past several weeks as user traffic falls.

The CEO of Polish drinks company Dictador is an AI-powered humanoid robot who works 7 days a week. The AI boss, named Mika, told Reuters that she doesn’t have weekends and is “always on 24/7.” Mika helps to spot potential clients and selects artists to design the rum producer’s bottles.

The humanoid robot CEO of a Polish drinks company is one busy boss.


Polish drinks company Dictador appointed an AI-powered humanoid robot called Mika as its experimental CEO in August 2022.

Imagine living in a cool, green city flush with parks and threaded with footpaths, bike lanes, and buses, which ferry people to shops, schools, and service centers in a matter of minutes.

That breezy dream is the epitome of urban planning, encapsulated in the idea of the 15-minute city, where all basic needs and services are within a quarter of an hour’s reach, improving public health and lowering vehicle emissions.

Artificial intelligence could help urban planners realize that vision faster, with a new study from researchers at Tsinghua University in China demonstrating how machine learning can generate more efficient spatial layouts than humans can, and in a fraction of the time.

Get up to speed on the rapidly evolving world of AI with our roundup of the week’s developments.

In a move that should surprise no one, tech leaders who gathered at closed-door meetings in Washington, DC, this week to discuss AI regulation with legislators and industry groups agreed on the need for laws governing generative AI technology. But they couldn’t agree on how to approach those regulations.

“The Democratic senator Chuck Schumer, who called the meeting ‘historic,’ said that attendees loosely endorsed the idea of regulations but that there was little consensus on what such rules would look like,” The Guardian reported. “Schumer said he asked everyone in the room — including more than 60…”

In tasks like customer service, consulting, programming, writing, teaching, etc., language agents can reduce human effort and are a potential first step toward artificial general intelligence (AGI). Recent demonstrations of language agents’ potential, including AutoGPT and BabyAGI, have sparked much attention from researchers, developers, and general audiences.

Even for seasoned developers and researchers, most of these demos and repositories are not conducive to customizing, configuring, and deploying new agents. This restriction stems from the fact that these demonstrations are frequently proofs of concept that highlight the potential of language agents rather than substantial frameworks for progressively developing and customizing them.

Furthermore, studies show that the majority of these open-source repositories cover only a tiny fraction of the basic language agent abilities, such as task decomposition, long-term memory, web navigation, tool usage, and multi-agent communication. Additionally, most (if not all) of the language agent frameworks currently in use rely exclusively on a brief task description and entirely on the ability of LLMs to plan and act. The resulting high randomness and inconsistency across different runs make language agents difficult to modify and tweak, and the user experience is poor.
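
To ground the discussion, here is a minimal sketch of the basic loop most of these frameworks wrap: a task description, an LLM that either calls a named tool or returns a final answer, and a small tool registry. The `llm` function and the tools shown are hypothetical placeholders, not the API of any particular framework.

```python
# Minimal language-agent loop (sketch): the LLM either invokes a tool or returns
# a final answer; each observation is appended to the running transcript.
import json

def llm(prompt: str) -> str:
    """Hypothetical LLM call expected to return JSON such as
    {"action": "search", "input": "..."} or {"action": "final", "input": "..."}."""
    raise NotImplementedError

TOOLS = {
    "search": lambda q: f"(stub) top result for {q!r}",  # stand-in for web navigation
    "calculator": lambda e: str(eval(e, {"__builtins__": {}})),  # toy tool; not for untrusted input
}

def run_agent(task: str, max_steps: int = 8) -> str:
    transcript = f"Task: {task}\nAvailable tools: {', '.join(TOOLS)}\n"
    for _ in range(max_steps):
        decision = json.loads(llm(transcript + "\nRespond with a JSON action."))
        if decision["action"] == "final":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        transcript += f"\nAction: {decision}\nObservation: {observation}"
    return "No answer within the step budget."
```

The abilities listed above, such as task decomposition, long-term memory, and multi-agent communication, are the layers that more substantial frameworks add on top of this basic loop.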

Back to AI in healthcare: we've looked at this from a number of angles, but what about some of the pros and cons of using AI/ML systems in a clinical context? And what about using AI models to help conquer disease?

There’s a broader theory that AI is going to allow for trail-blazing research on everything from cancer and heart disease to trauma and bone and muscle health — and everything in between. Now, we have more defined solutions coming to the table, and they’re well worth looking at!

In this IIA talk, cardiologist Collin Stultz discusses the treatment of disorders and the new tools available, opening with a dramatic emphasis on heart disease.

Stable Audio is a first-of-its-kind product that uses the latest generative AI techniques to deliver faster, higher-quality music and sound effects via an easy-to-use web interface. Stability AI offers a basic free version of Stable Audio, which can be used to generate and download tracks of up to 45 seconds, and a ‘Pro’ subscription, which delivers 90-second tracks that are downloadable for commercial projects.

“As the only independent, open and multimodal generative AI company, we are thrilled to use our expertise to develop a product in support of music creators,” said Emad Mostaque, CEO of Stability AI. “Our hope is that Stable Audio will empower music enthusiasts and creative professionals to generate new content with the help of AI, and we look forward to the endless innovations it will inspire.”

Stable Audio is ideal for musicians seeking to create samples to use in their music, but the opportunities for creators are limitless. Audio tracks are generated in response to a descriptive text prompt supplied by the user, along with a desired length of audio. For instance, entering “Post-Rock, Guitars, Drum Kit, Bass, Strings, Euphoric, Up-Lifting, Moody, Flowing, Raw, Epic, Sentimental, 125 BPM” with a request for a 95-second track would deliver a track matching that description.
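
As a rough illustration of that workflow (a text prompt and a desired duration in, an audio file out), here is a hypothetical client sketch. The endpoint URL, field names, and authentication scheme are assumptions made for illustration; they are not Stability AI's actual API.

```python
# Hypothetical text-to-audio request (sketch): send a prompt and desired duration
# to a generation endpoint, then save the returned audio bytes to disk.
import requests  # third-party HTTP client

API_URL = "https://api.example.com/v1/generate-audio"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def generate_track(prompt: str, seconds: int, out_path: str) -> None:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "duration_seconds": seconds},
        timeout=120,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # assumes the endpoint returns raw audio bytes

generate_track(
    "Post-Rock, Guitars, Drum Kit, Bass, Strings, Euphoric, Up-Lifting, "
    "Moody, Flowing, Raw, Epic, Sentimental, 125 BPM",
    seconds=95,
    out_path="post_rock_track.wav",
)
```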