
Ethics and AI in Education: Self-Efficacy, Anxiety, and Ethical Judgments


Can AI be integrated into the classroom? That is the question a recent study, “AI in K-12 Classrooms: Ethical Considerations and Lessons Learned,” sets out to answer. It is one of three studies published in the “Critical Thinking and Ethics in the Age of Generative AI in Education” report by the USC Center for Generative AI and Society. The study examines the ethics of how teachers should use AI in the classroom, and its findings can help academics, researchers, and institutional leaders better understand the implications of AI for academic purposes.

“The way we teach critical thinking will change with AI,” said Dr. Stephen Aguilar, who is the associate director for the USC Center for Generative AI and Society and one of the authors of the study. “Students will need to judge when, how and for what purpose they will use generative AI. Their ethical perspectives will drive those decisions.”

The study surveyed 248 K-12 teachers with an average of 11 years of teaching experience, drawn from a range of settings, including public, private, and charter schools. The teachers were asked to rate their impressions of using generative AI, such as ChatGPT, in their classroom instruction. The researchers found that the results varied between men and women, with women teachers preferring rule-based (deontological) approaches to using AI in the classroom.

2054, Part III: The Singularity

“We’d witness advances like mind-uploading,” B.T. said, and described the process by which the knowledge, analytic skills, intelligence, and personality of a person could be uploaded to a computer chip. “Once uploaded, that chip could be fused with a quantum computer that couples biological with artificial intelligence. If you did this, you’d create a human mind that has a level of computational, predictive, analytic, and psychic skill incomprehensibly higher than any existing human mind. You’d have the mind of God. That online intelligence could then create real effects in the physical world. God’s mind is one thing, but what makes God God is that He cometh to earth —”

When B.T. said earth, he made a sweeping gesture, like a faux preacher, and in his excitement, he knocked over Lily’s glass of wine. A waiter promptly appeared with a handful of napkins, sopping up the mess. B.T. waited for the waiter to leave.

“Don’t give me that look.”

When Lab-Trained AI Meets the Real World, ‘Mistakes can Happen’

Tissue contamination distracts AI models from making accurate real-world diagnoses. Human pathologists are extensively trained to detect when tissue samples from one patient mistakenly end up on another patient’s microscope slides (a problem known as tissue contamination). But such contamination can easily confuse artificial intelligence (AI) models, which are often trained in pristine, simulated environments, reports a new Northwestern Medicine study.

“We train AIs to tell ‘A’ versus ‘B’ in a very clean, artificial environment, but, in real life, the AI will see a variety of materials that it hasn’t trained on. When it does, mistakes can happen,” said corresponding author Dr. Jeffery Goldstein, director of perinatal pathology and an assistant professor of perinatal pathology and autopsy at Northwestern University Feinberg School of Medicine.

“Our findings serve as a reminder that AI that works incredibly well in the lab may fall on its face in the real world. Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear — and AI companies hope — that the computers are coming for our jobs. Not yet.”
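The study's actual models and data aren't reproduced here, but the failure mode Goldstein describes is easy to sketch: a classifier trained only on clean “A vs. B” data is forced to pick one of those two answers even for material it has never seen. The sketch below is a toy nearest-centroid classifier with invented labels and made-up feature vectors, purely for illustration.

```python
# Toy stand-in for the finding: a model trained only on "clean"
# two-class data must still answer A or B when shown contaminated
# input far outside its training distribution.
def train_centroids(samples):
    # samples: {label: list of feature vectors} -> {label: centroid}
    return {lbl: [sum(col) / len(col) for col in zip(*vecs)]
            for lbl, vecs in samples.items()}

def classify(centroids, x):
    # Assign x to the nearest class centroid (squared distance).
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

clean = {
    "tissue_A": [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]],
    "tissue_B": [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]],
}
model = train_centroids(clean)

# A contaminated sample: features unlike anything in training.
contaminated = [3.0, 2.8]
print(classify(model, contaminated))  # still forced to answer A or B
```

The point is not which label comes out, but that a label comes out at all: with no “none of the above” option, contamination the model never trained on is silently folded into a confident-looking diagnosis.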

Guided Energy helps EV fleet managers optimize battery charging

Imagine you work for a car rental agency or a package delivery company and you’re in charge of a fleet of vehicles. If you’re switching to EVs, managing your fleet becomes more complex due to long charging times and limited charging-point availability.

Guided Energy, a French startup that raised $5.2 million from Sequoia Capital and Dynamo Ventures at the end of 2023, is building a software tool that helps EV fleet operators with charge management and dispatch. The company aggregates data from vehicles and from public and private charging points, and uses machine learning to tell you when and where to charge your vehicles.

“The beauty of the EV ecosystem is that it is all online. This means, we connect to both EVs and charging points directly. Where customers already have telematics or supervision platforms in place, we can integrate with them using APIs into our platform, giving them a single, real-time, unified view of their EV operations,” co-founder and CEO Anant Kapoor told me.
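Guided Energy's actual scheduling logic isn't public. As a toy illustration of the kind of decision such a tool makes, the sketch below greedily assigns scarce charging bays to the vehicles with the least slack (time until departure minus time needed to reach the required charge). All names, rates, and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    soc: float           # current state of charge, 0..1
    needed_soc: float    # charge required before next dispatch
    departs_in_h: float  # hours until the vehicle is needed

def schedule(vehicles, chargers, charge_rate_per_h=0.25):
    """Greedy toy scheduler: give charging bays to the vehicles
    with the least slack before their next departure."""
    def slack(v):
        hours_needed = max(0.0, v.needed_soc - v.soc) / charge_rate_per_h
        return v.departs_in_h - hours_needed
    ranked = sorted(vehicles, key=slack)
    # Only as many vehicles get a bay as there are chargers.
    return {v.vid: bay for v, bay in zip(ranked, chargers)}

fleet = [
    Vehicle("van-1", soc=0.2, needed_soc=0.8, departs_in_h=3),
    Vehicle("van-2", soc=0.6, needed_soc=0.8, departs_in_h=2),
    Vehicle("van-3", soc=0.1, needed_soc=0.9, departs_in_h=8),
]
plan = schedule(fleet, chargers=["bay-A", "bay-B"])
print(plan)  # van-3 has hours of slack, so it waits
```

A production system would fold in charger availability windows, electricity prices, and route plans, which is where the machine learning comes in; the greedy rule above only conveys the shape of the problem.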

Meta’s Plans to Label AI-Generated Content Are a Sad Fart

Meta is promising to roll out auto-labeling for AI-generated images — as soon as it figures out how, that is.

Nick Clegg, Meta’s president of global affairs, said in a policy update that the company is currently working with “industry partners” to formulate criteria that will help identify AI content. Once those criteria are determined, Meta will begin automatically labeling posts featuring any AI-generated images, video, or audio “in the coming months.”

“This approach represents the cutting edge of what’s technically possible right now. But it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers,” Clegg wrote. “So we’re pursuing a range of options. We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers.”
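Clegg's point about stripping invisible markers is concrete: provenance signals carried alongside an image (in metadata, for instance) survive a straight copy but not a re-encode that keeps only pixel data, which is why Meta is also pursuing classifiers that look at the content itself. The sketch below models this with plain dictionaries; the field names are invented and this is not Meta's actual marking scheme.

```python
# Toy illustration: a provenance marker stored as metadata is lost
# the moment the image is re-encoded from pixels alone (e.g., by
# screenshotting or transcoding).
def make_image(pixels, ai_generated=False):
    meta = {"generator": "some-ai-model"} if ai_generated else {}
    return {"pixels": pixels, "meta": meta}

def has_ai_marker(img):
    # Detection that relies only on the marker, not the content.
    return "generator" in img["meta"]

def reencode(img):
    # Pixels survive; metadata does not.
    return {"pixels": list(img["pixels"]), "meta": {}}

img = make_image([0, 1, 2], ai_generated=True)
print(has_ai_marker(img))            # True
print(has_ai_marker(reencode(img)))  # False: marker stripped
```

Marker-based labeling and content-based classification therefore fail differently: the first is cheap and precise but trivially removable, while the second survives re-encoding but can misfire, which matches the "range of options" Clegg describes.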
