
New AI systems released in 2023 demonstrate remarkable properties that have taken most observers by surprise. The potential for both positive and negative AI outcomes appears to have accelerated. This leads to five responses:

1.) “Yawn” — AI has been overhyped before, and is being overhyped again now. Let’s keep our attention on more tangible issues.

2.) “Full speed ahead with more capabilities” — Let’s get to the wonderful positive outcomes of AI as soon as possible, sidestepping those small-minded would-be regulators who would stifle all the innovation out of the industry.


Jonathan L. Zittrain wears many hats. An expert on the Internet, digital technology, law, and public policy, he regularly contributes to public discussions about what digital tech is doing to us and what we should do about it—most recently around the governance of AI and the incentives that shape major social media platforms.

He holds several roles, all at Harvard, reflecting his many converging interests. He is a professor of international law at Harvard Law School, a professor of public policy at its Kennedy School, and a professor of computer science at the university’s John A. Paulson School of Engineering and Applied Sciences. He’s also cofounder and faculty director of Harvard’s Berkman Klein Center for Internet & Society.

In his various capacities, he has been tackling many sticky cyberpolicy issues over the past 25 years.

Keep your eye on the prize, but meanwhile don’t lose sight of other nifty opportunities too. What am I talking about? During the famous Gold Rush era, eager prospectors chased the dreamy riches of unearthed gold. It turns out that very few actually struck it rich by discovering those prized gold nuggets. You might be surprised to learn that panning for gold often turned up other precious metals along the way. The feverish desire to get gold would sometimes overpower any willingness to mine the silver, mercury, and other ores that were readily visible while searching for gold.


It all has to do with data, particularly data mined or scanned from the Internet and then used principally to train generative AI apps.

OpenAI’s ChatGPT and its successor GPT-4 would not exist if it were not for all the data training undertaken to get the AI apps into shape for doing Natural Language Processing (NLP) and holding interactive conversations with humans. The data training entailed scanning various portions of the Internet; see my explanation at the link here. In the case of text-to-text or text-to-essay generative AI, the mainstay of ChatGPT, all kinds of text were scanned to ferret out patterns of how humans use words.
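To make that notion of ferreting out word patterns concrete, here is a minimal sketch of how scanned text is commonly turned into next-word prediction examples. This is an illustrative simplification, not OpenAI’s actual training pipeline, which uses subword tokenization and neural networks at vastly larger scale.

```python
# A minimal sketch: turning raw text into (context, next word) training
# pairs, the core pattern-extraction step behind text-to-text models.
# Real systems use subword tokenizers and billions of documents.
text = "the quick brown fox jumps over the lazy dog"
tokens = text.split()  # whitespace split stands in for real tokenization

context_size = 3
examples = []
for i in range(len(tokens) - context_size):
    context = tokens[i : i + context_size]
    target = tokens[i + context_size]  # the word a model learns to predict
    examples.append((context, target))

for context, target in examples:
    print(context, "->", target)
# e.g. ['the', 'quick', 'brown'] -> fox
```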

I’ll say more about this in a moment.

In the wake of the AI-generated hit “Heart on My Sleeve” going viral with deepfake vocals of multi-platinum artists Drake and The Weeknd, pop star Grimes has invited her fans to create music with her voice.

On Sunday night she tweeted, “I’ll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.”

She added that she’s open to anything anyone wants. “Im just curious what even happens and interested in being a Guinea pig,” she said, adding that she welcomes the open sourcing of art and an end to copyright.

Like many people, you may have had your mind blown recently by the possibilities of ChatGPT and other large language models like the new Bing or Google’s Bard.

For anyone who somehow hasn’t come across them — which is unlikely, as ChatGPT is reportedly the fastest-growing app of all time — here’s a quick recap:

Large language models, or LLMs, are software algorithms trained on huge text datasets, enabling them to understand and respond to human language in a very lifelike way.
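As a hands-on illustration, here is a minimal sketch of generating text with a small open model, assuming the Hugging Face transformers library is installed; GPT-2 stands in here for the far larger models named above, which are reachable only through hosted services.

```python
# A minimal sketch of querying an LLM locally via Hugging Face's
# `transformers` library; GPT-2 is a small stand-in for models like
# ChatGPT, the new Bing, or Bard.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Large language models are",
    max_new_tokens=30,       # tokens to generate beyond the prompt
    num_return_sequences=1,  # one completion is enough for a demo
)
print(result[0]["generated_text"])
```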


Auto-GPT is a breakthrough technology that creates its own prompts and enables large language models to perform complex multi-step procedures. While it has potential benefits, it also raises ethical concerns about accuracy, bias, and potential sentience in AI agents.
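To see what “creates its own prompts” means in practice, here is a highly simplified sketch of the self-prompting loop that Auto-GPT-style agents run. The function call_llm is a hypothetical placeholder for whichever chat-completion API you use, not part of Auto-GPT itself.

```python
# A highly simplified sketch of an Auto-GPT-style loop: the model's own
# output becomes the next prompt, letting it chain multi-step work.
# `call_llm` is a hypothetical placeholder for any chat-completion API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def autonomous_loop(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    task = f"Goal: {goal}. Propose the first concrete step."
    for _ in range(max_steps):
        step = call_llm(task)  # the model decides what to do next
        history.append(step)
        # The model's previous answer is fed back as part of the next prompt.
        task = (
            f"Goal: {goal}. Steps so far: {history}. "
            "Propose the next step, or reply DONE if the goal is met."
        )
        if "DONE" in step:
            break
    return history
```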

In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it. De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to the machine learning behind AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of the AI ethics think tank The Future Society. De Kai is also the creator of one of Hong Kong’s best-known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Over the past year or so, generative AI models such as ChatGPT and DALL-E have made it possible to produce vast quantities of apparently human-like, high-quality creative content from a simple series of prompts.

Though highly capable – far outperforming humans in big-data pattern recognition tasks in particular – current AI systems are not intelligent in the same way we are. AI systems aren’t structured like our brains and don’t learn the same way.

AI systems also use vast amounts of energy and resources for training (compared to our three-or-so meals a day). Their ability to adapt and function in dynamic, hard-to-predict and noisy environments is poor in comparison to ours, and they lack human-like memory capabilities.

Pneumonia is a potentially fatal lung infection that progresses rapidly. Patients with pneumonia symptoms – such as a dry, hacking cough, breathing difficulties and high fever – generally receive a stethoscope examination of the lungs, followed by a chest X-ray to confirm diagnosis. Distinguishing between bacterial and viral pneumonia, however, remains a challenge, as both have similar clinical presentation.

Mathematical modelling and artificial intelligence could help improve the accuracy of disease diagnosis from radiographic images. Deep learning has become increasingly popular for medical image classification, and several studies have explored the use of convolutional neural network (CNN) models to automatically identify pneumonia from chest X-ray images. It’s critical, however, to create efficient models that can analyse large numbers of medical images without false negatives.
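For illustration, here is a minimal sketch in PyTorch of the kind of CNN classifier such studies train on chest X-rays. The architecture, class labels, and image size below are assumptions for demonstration only, not the framework described in the paper.

```python
# A minimal sketch of a CNN for chest X-ray classification; the layer
# sizes and three-way labels (normal / bacterial / viral) are illustrative
# assumptions, not the authors' actual framework.
import torch
import torch.nn as nn

class PneumoniaCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale X-ray in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Training runs on a GPU when one is available, as in the study's setup.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = PneumoniaCNN().to(device)
dummy = torch.randn(1, 1, 224, 224, device=device)  # one 224x224 X-ray
print(model(dummy).shape)  # torch.Size([1, 3]) - per-class scores
```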

Now, K M Abubeker and S Baskar at the Karpagam Academy of Higher Education in India have created a novel machine learning framework for pneumonia classification of chest X-ray images on a graphics processing unit (GPU). They describe their strategy in Machine Learning: Science and Technology.