
A new study from the University of Georgia aims to improve how we evaluate children’s creativity through human ratings and through artificial intelligence.

A team from the Mary Frances Early College of Education is developing an AI system that can more accurately rate open-ended responses on assessments for elementary-aged students.

“In the same way that hospital systems need good data on their patients, educational systems need really good data on their students in order to make effective choices,” said study author and associate professor of educational psychology Denis Dumas. “Creativity assessments have policy and curricular relevance, and without assessment data, we can’t fully support creativity in schools.”

All of us are at the beginning of a journey to understand generative AI’s power, reach, and capabilities. This research is the latest in our efforts to assess the impact of this new era of AI. It suggests that generative AI is poised to transform roles and boost performance across functions such as sales and marketing, customer operations, and software development. In the process, it could unlock trillions of dollars in value across sectors from banking to life sciences. The following sections share our initial findings.


Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40 percent. This estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.

There has been a lot of buzz about all the ways that artificial intelligence could change the world, from the workplace to schools to day-to-day life as a whole, but recent advances in the field could spell the end of the traditional school classroom. In an interview with the British media outlet The Guardian, published on Friday (July 7), one of the world’s leading experts on AI predicted that, for better or worse, AI might change classrooms.

How would things change?

Speaking about how AI could change traditional school classrooms, Professor Stuart Russell, a British computer scientist at the University of California, Berkeley, told The Guardian, “Education is the biggest benefit that we can look for in the next few years.”

GENEVA: A panel of AI-enabled humanoid robots took the microphone on Friday (Jul 7) at a United Nations conference with the message: They could eventually run the world better than humans.

But the social robots said they felt humans should proceed with caution when embracing the rapidly developing potential of artificial intelligence, and admitted that they cannot yet get a proper grip on human emotions.

Some of the most advanced humanoid robots appeared at the United Nations’ AI for Good Global Summit in Geneva, joining around 3,000 experts in the field seeking to harness the power of AI and channel it toward solving some of the world’s most pressing problems, such as climate change, hunger and social care.

A recent crop of AI systems claiming to detect AI-generated text performs poorly, and it doesn’t take much to get past these tools.

Within weeks of ChatGPT’s launch, there were fears that students would use the chatbot to spin up passable essays in seconds. In response, startups began releasing products that promise to spot whether a text was written by a human or a machine.

The problem is that it’s relatively simple to trick these tools and avoid detection, according to new research that has not yet been peer reviewed.

Earlier this year, on the eve of Chicago’s mayoral election, a video of moderate Democratic candidate Paul Vallas appeared online. Tweeted by “Chicago Lakefront News,” it purported to show Vallas railing against lawlessness in Chicago and suggesting that there was a time when “no one would bat an eye” at fatal police shootings.

The video, which appeared authentic, was widely shared before the Vallas campaign denounced it as an AI-generated fake and the two-day-old Twitter account that posted it disappeared. While it’s impossible to say whether the video affected Vallas’s loss to progressive Brandon Johnson, a former teacher and union organizer, it offered a lower-stakes glimpse of the high-stakes AI deceptions that could muddy public discourse during the upcoming presidential election. And it raises a key question: how will platforms like Facebook and Twitter mitigate them?

That’s a daunting challenge. With no laws regulating how AI can be used in political campaigns, it falls to the platforms to determine which deepfakes users will see on their feeds, and right now most are struggling to work out how to self-regulate. “These are threats to our very democracies,” Hany Farid, an electrical engineering and computer science professor at UC Berkeley, told Forbes. “I don’t see the platforms taking this seriously.”

Finding new drugs – called “drug discovery” – is an expensive and time-consuming task. But a type of artificial intelligence called machine learning can massively accelerate the process and do the job for a fraction of the price.

My colleagues and I recently used this technology to find three promising candidates for senolytic drugs – drugs that slow ageing and prevent age-related diseases.

Senolytics work by killing senescent cells. These are cells that are “alive” (metabolically active), but which can no longer replicate, hence their nickname: zombie cells.
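
To make the approach concrete, here is a minimal sketch of what an ML-based compound screen can look like: a classifier trained on compounds with known senolytic activity is used to rank an unscreened chemical library. The random data, descriptor count and model choice below are illustrative placeholders, not the pipeline used in our study.

```python
# Minimal sketch of an ML-based compound screen (illustrative only:
# the random data, descriptor count and model choice are placeholders,
# not the study's actual pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training set: molecular descriptors for compounds whose
# senolytic activity (1) or inactivity (0) is already known from the lab.
X_train = rng.random((200, 64))           # 64 descriptors per compound
y_train = rng.integers(0, 2, size=200)    # known activity labels

model = RandomForestClassifier(n_estimators=300, random_state=0)
print("Cross-validated accuracy:",
      cross_val_score(model, X_train, y_train, cv=5).mean())
model.fit(X_train, y_train)

# Score a large unscreened library and keep the top-ranked compounds
# as candidates for laboratory validation.
X_library = rng.random((10_000, 64))
scores = model.predict_proba(X_library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:10]
print("Top candidate indices:", top_candidates)
```

The point of the sketch is the workflow: instead of physically testing thousands of compounds, the model ranks them so that only the most promising handful reach the bench.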


It’s no surprise that firewalls and encryption are instrumental in defending against cyberattacks, but those tools can’t defend against one of the largest cybersecurity threats: people.

Social engineering, the practice of manipulating individuals into divulging sensitive information, is on the rise, even as organizations increasingly implement cybersecurity education and training. While social engineering already poses a challenge for organizations, AI could make it an even greater threat.