
Our lives were already infused with artificial intelligence (AI) when ChatGPT reverberated around the online world late last year. Since then, the generative AI system developed by tech company OpenAI has gathered speed and experts have escalated their warnings about the risks.

Meanwhile, chatbots started going off-script and talking back, duping other bots, and acting strangely, sparking fresh concerns about how close some AI tools are getting to human-like intelligence.

The Turing Test has long served as the fallible standard for determining whether machines exhibit intelligent behavior that passes as human. But with this latest wave of AI creations, it feels like we need something more to gauge their iterative capabilities.

How will AI models that can generate text, images, audio, video and code change what students need to learn and the instructional processes that guide their learning? Do we need generative models designed specifically for educational purposes?

Speakers:
Dora Demszky: Assistant Professor of Education Data Science, Stanford University.
Noah Goodman: Associate Professor of Psychology, of Computer Science and, by courtesy, of Linguistics, Stanford University.
Percy Liang: Director, Center for Research on Foundation Models; Associate Professor of Computer Science, Stanford University.
Rob Reich: Professor of Political Science; Faculty Director, McCoy Family Center for Ethics in Society; Marc and Laura Andreessen Faculty Co-Director, Stanford Center on Philanthropy and Civil Society; Associate Director, Stanford HAI.

The AI+Education Summit: AI in the Service of Teaching and Learning took place on Feb. 15, 2023. Learn about upcoming events here: https://hai.stanford.edu/events

Researchers Antoni Gandia and Andrew Adamatzky recently gave a robot living skin made of fungus (via Futurism). Any science enthusiast knows that science can be quite astounding at times, but recreating the Terminator in real life might be a little terrifying for some.

Inspired by the skin of the Terminator, researchers are using fungus to create a bio-organic skin over non-killer robots.

In fact, the scientists openly admit that their goal was to recreate a pivotal scene in The Terminator (1984) in which one of the robots receives an implantation of living skin. Though the skin is an external addition, the robot can collect sensory data through it, and the living layer can heal any wounds it incurs.

Apple reportedly has several teams working and spending millions on generative AI.

Apple is spending millions of dollars a day to build artificial intelligence tools, according to The Information.

Although Apple considers itself to be its own closest competitor, given how it chose to launch Vision Pro only after the clamor around AR/VR tech had died down, the scale of its investments is telling of how the tech industry's pivot to generative AI has affected the company's outlook, especially with OpenAI's chatbot ChatGPT taking center stage.


Recent data shows that both Korea and China are ahead of the US in terms of ratios of robots to manufacturing workers.

Robot use is an indication of economic prosperity and growth throughout the world. The ratio of industrial robots to manufacturing workers is one of the most frequently used approaches to benchmarking robot adoption rates.

The International Federation of Robotics (IFR) publishes statistics on robot utilization worldwide in manufacturing. Its most recent data is from 2021 and shows Korea leading the way in terms of robot use in manufacturing.
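The density metric the IFR reports is straightforward to compute. Here is a minimal sketch; the figures in the example are hypothetical placeholders, not actual IFR data:

```python
def robot_density(robots: int, workers: int) -> float:
    """Industrial robots per 10,000 manufacturing employees,
    the standard density metric used to benchmark adoption."""
    return robots / workers * 10_000

# Hypothetical example: 300,000 installed robots across 3,000,000 workers
print(robot_density(300_000, 3_000_000))  # 1000.0
```

Normalizing by workforce size, rather than comparing raw robot counts, is what makes the metric comparable across economies of very different sizes.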

This soft robot uses “physical intelligence” to navigate complicated surfaces without the need for human or computer intervention.

Engineers have developed a “brainless” soft robot that can effortlessly traverse difficult terrain.

This breakthrough comes from North Carolina State University researchers, who previously created a soft robot capable of navigating basic mazes without the need for human or computer intervention.

Researchers from the RIKEN Center for Quantum Computing have used machine learning to perform error correction for quantum computers, a crucial step toward making these devices practical. Their autonomous correction system, despite being approximate, can efficiently determine how best to make the necessary corrections.

The research is published in the journal Physical Review Letters.

In contrast to classical computers, which operate on bits that can only take the basic values 0 and 1, quantum computers operate on “qubits”, which can assume any superposition of the computational basis states. In combination with entanglement, another quantum characteristic that connects different qubits beyond classical means, this enables quantum computers to perform entirely new operations, giving rise to potential advantages in some computational tasks, such as large-scale searches and cryptography.
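The superposition idea can be sketched with a plain state-vector simulation. This is a minimal illustration of the math, not how real quantum hardware is programmed:

```python
import numpy as np

# A qubit state is a unit vector in C^2: a|0> + b|1> with |a|^2 + |b|^2 = 1
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0

# Measuring in the computational basis yields 0 or 1 with these probabilities
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]
```

A classical bit can only ever be in one of the two basis states; the vector `plus` holds nonzero amplitude on both at once, which is exactly the property the blurb describes.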

No detectors “reliably distinguish between AI-generated and human-generated content.”

In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.”


Last week, OpenAI published tips for educators in a promotional blog post that shows how some teachers are using ChatGPT as an educational aid, along with suggested prompts to get started. In a related FAQ, they also officially admit what we already know: AI writing detectors don’t work, despite frequently being used to punish students with false positives.

In July, we covered in depth why AI writing detectors such as GPTZero don’t work, with experts calling them mostly snake oil. These detectors often yield false positives because they rely on unproven detection metrics. Ultimately, there is nothing special about AI-written text that reliably distinguishes it from human-written text, and detectors can be defeated by rephrasing. That same month, OpenAI discontinued its AI Classifier, an experimental tool designed to detect AI-written text that had an abysmal 26 percent accuracy rate.
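To see why likelihood-style metrics are brittle, here is a toy detector. It is entirely illustrative (real tools like GPTZero use far more elaborate models, but rest on similar "predictability" statistics): it scores text by its average word probability under a tiny unigram model, so ordinary phrasing made of common words looks "AI-like" while a simple rephrase with rarer words flips the verdict.

```python
import math
from collections import Counter

# Toy reference corpus standing in for "typical" language statistics
corpus = "the cat sat on the mat the dog sat on the log".split()
counts = Counter(corpus)
total = len(corpus)
vocab = len(counts)

def avg_logprob(text: str) -> float:
    """Average per-word log-probability under a Laplace-smoothed unigram
    model. Higher (less negative) = more 'predictable', which naive
    detectors read as machine-generated."""
    words = text.lower().split()
    return sum(
        math.log((counts[w] + 1) / (total + vocab)) for w in words
    ) / len(words)

# Ordinary human phrasing made of common words scores as highly predictable...
common = avg_logprob("the cat sat on the mat")
# ...while a light rephrase with rarer words immediately looks 'human' again
rare = avg_logprob("a feline perched upon the rug")
print(common > rare)  # True: the metric flips on a simple rephrase
```

The same human author produced both sentences, yet the score differs sharply, which is exactly the false-positive and rephrasing weakness the article describes.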

Bad news for anyone looking to get their hands on Nvidia’s top-specced GPUs, such as the A100 or H100: it’s not going to get any easier to source the parts until at least the end of 2024, TSMC has warned. The problem, it seems, isn’t that TSMC – which fabricates not just those GPUs for Nvidia but also components for AMD, Apple, and many others – can’t make enough chips. Rather, a lack of advanced packaging capacity used to stitch the silicon together is holding up production, TSMC chairman Mark Liu told Nikkei Asia.

According to Liu, TSMC is only able to meet about 80 percent of demand for its chip on wafer on substrate (CoWoS) packaging technology. This is used in some of the most advanced…


Boss Mark Liu says silicon ready but advanced packaging isn’t.