You only need two simple letters to accurately convey the major shift in the technology space this year: A and I. Beyond those letters, however, is a complex, evolving and exciting way in which we work, communicate and collaborate. As you will see, artificial intelligence is a common thread as we embark on Microsoft Build, our annual flagship event for developers.
Our guest is Nita Farahany, a Distinguished Professor at Duke University, where she heads the Science, Law, and Policy Lab. Her research focuses on the implications of emerging neuroscience, genomics, and artificial intelligence, and, as a testament to her expertise, she can lay claim to a long list of awards and influential positions, including an appointment by President Obama to the Presidential Commission for the Study of Bioethical Issues.
In this episode, we explore Nita’s recent book, provocatively entitled The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. It takes us on a tour of the neurotechnology that exists today, the ways this tech will soon be integrated into our daily products, how it will shape our decision-making, the profound ethical considerations surrounding cognitive liberty, and much more.
Artificial intelligence is a superior lifeform that humans are creating, and many AI researchers have outlined scenarios in which this technology could pose an existential risk to humanity, up to and including the literal end of the world.
AI news timestamps:
0:00 How bad could it be?
2:56 AI destruction scenarios 1 and 2
4:28 The future of artificial intelligence
5:25 Merging with AI for human evolution
6:41 The AI box experiment
A team of researchers at Duke University and their collaborators have uncovered the atomic mechanisms that make a class of compounds called argyrodites attractive candidates for both solid-state battery electrolytes and thermoelectric energy converters.
The discoveries—and the machine learning approach used to make them—could help usher in a new era of energy storage for applications such as household battery walls and fast-charging electric vehicles.
The results appeared online May 18 in the journal Nature Materials.
May 22 (Reuters) — Nvidia Corp (NVDA.O) on Monday said it has worked with the U.K.’s University of Bristol to build a new supercomputer using a new Nvidia chip that would compete with Intel Corp (INTC.O) and Advanced Micro Devices Inc (AMD.O).
Nvidia is the world’s top maker of graphics processing units (GPUs), which are in high demand because they can be used to speed up artificial intelligence work. OpenAI’s ChatGPT, for example, was created with thousands of Nvidia GPUs.
But Nvidia’s GPU chips are typically paired with what is called a central processing unit (CPU), a market that has been dominated by Intel and AMD for decades. This year, Nvidia has started shipping its own competing CPU chip called Grace, which is based on technology from SoftBank Group Corp-owned (9984.T) Arm Ltd.
Meta has created an AI language model that (in a refreshing change of pace) isn’t a ChatGPT clone. The company’s Massively Multilingual Speech (MMS) project can recognize over 4,000 spoken languages and produce speech (text-to-speech) in over 1,100. As with most of its other publicly announced AI projects, Meta is open-sourcing MMS today to help preserve language diversity and encourage researchers to build on its foundation. “Today, we are publicly sharing our models and code so that others in the research community can build upon our work,” the company wrote. “Through this work, we hope to make a small contribution to preserve the incredible language diversity of the world.”
Speech recognition and text-to-speech models typically require training on thousands of hours of audio with accompanying transcription labels. (Labels are crucial to machine learning, allowing the algorithms to correctly categorize and “understand” the data.) But for languages that aren’t widely used in industrialized nations — many of which are in danger of disappearing in the coming decades — “this data simply does not exist,” as Meta puts it.
Meta used an unconventional approach to collecting audio data: tapping into audio recordings of translated religious texts. “We turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research,” the company said. “These translations have publicly available audio recordings of people reading these texts in different languages.” Incorporating the unlabeled recordings of the Bible and similar texts, Meta’s researchers increased the model’s available languages to over 4,000.
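To give a sense of how a model like this is consumed in practice, here is a minimal transcription sketch. It assumes the Hugging Face transformers wrappers for wav2vec 2.0-style CTC models and a checkpoint name of "facebook/mms-1b-all"; those specifics are assumptions for illustration rather than details taken from Meta's announcement.

```python
# Minimal sketch: transcribing audio with a multilingual MMS-style ASR checkpoint.
# The checkpoint id below is an assumed name, not confirmed by the article.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

MODEL_ID = "facebook/mms-1b-all"  # assumed multilingual ASR checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

def transcribe(waveform, sampling_rate=16_000):
    """Transcribe a mono float32 audio array (16 kHz) into text via greedy CTC decoding."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```

The interesting part is less the decoding loop than the checkpoint behind it: because the model was pre-trained on unlabeled multilingual audio, the same few lines of inference code can, in principle, serve thousands of languages.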
On Wednesday, Google displayed how Bard, its new AI chatbot, could be used to write up job listings from a simple one-line prompt. Microsoft has demonstrated how a ChatGPT-powered tool can write entire articles in Word.
“There are a tonne of sales representatives doing a lot of banal work to compose prospecting emails,” says Rob Seaman, a senior vice president at workplace messaging company Slack, which is working with OpenAI to embed ChatGPT into its app as a kind of digital co-worker.
New AI tools may remove some of the most tedious aspects of such roles. But based on past evidence, technology also threatens to create a whole new class of menial roles.
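To make the “one-line prompt” workflow concrete, here is a minimal sketch of the same pattern using the OpenAI Python client. The model choice, system prompt, and role title are illustrative assumptions; Slack’s and Google’s products wrap their own equivalents of a call like this.

```python
# Minimal sketch: expanding a one-line brief into a full draft with an LLM API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_job_listing(one_line_brief: str) -> str:
    """Expand a one-line brief into a job listing draft."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice for illustration
        messages=[
            {"role": "system", "content": "You write clear, concise job listings."},
            {"role": "user", "content": f"Write a job listing for: {one_line_brief}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical brief, in the spirit of the demos described above.
print(draft_job_listing("Senior sales representative, B2B SaaS, fully remote"))
```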
Perspective from a very-educated layman. Er, laywoman.
This is Hello, Computer, a series of interviews carried out in 2023 at a time when artificial intelligence appears to be going everywhere, all at once.
Sabine Hossenfelder is a German theoretical physicist, science communicator, author, musician, and YouTuber. She is the author of Lost in Math: How Beauty Leads Physics Astray, which explores the concept of elegance in fundamental physics and cosmology, and of Existential Physics: A Scientist’s Guide to Life’s Biggest Questions.
Sabine has published more than 80 research papers in the foundations of physics, from cosmology to quantum foundations and particle physics. Her writing has appeared in Scientific American, Nautilus, The New York Times, and The Guardian.
Sabine also works as a freelance popular science writer and runs the YouTube channel Science Without the Gobbledygook, where she talks about recent scientific developments and debunks hype, and a separate YouTube channel for music she writes and records.
The other day a friend proudly told me she wrote a heartwarming graduation card to her teenage son. “Okay,” she confessed. “I
“How long was your card?” I asked her.
Not only that, but many also couldn’t even generate a topic on their own. They lacked the creativity to dream up their own ideas, much less the critical thinking skills to put themselves in the shoes of their audience and imagine what would land. But they all had 4.0 GPAs or higher and came from private schools in Orange County and LA, reflecting our watered-down educational system.
And now we’re being told ChatGPT is a boon for our students?
Despite these concerns, our best days are ahead of us. As a positive futurist, I see the AI surge as a wakeup call. Especially in corporate America. For too long, we’ve outsourced too much. As just one example, the COVID-19 pandemic exposed how reliant we are on countries like China for manufacturing, including our critical medical supply chain.
With LIMA, Meta’s AI researchers introduce a new language model that achieves GPT-4- and Bard-level performance in some test scenarios despite being fine-tuned on relatively few examples.
LIMA stands for “Less is More for Alignment,” and the name hints at the model’s function: It is intended to show that with an extensively pre-trained AI model, a few examples are sufficient to achieve high-quality results.
“Few examples” in this case means 1,000 diverse prompts and their responses, which Meta manually selected from sources such as other research papers, WikiHow, StackExchange, and Reddit.
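As a rough illustration of the recipe, the sketch below fine-tunes a small pre-trained causal language model on a handful of prompt/response pairs with the Hugging Face Trainer. The base model, data fields, and hyperparameters are stand-ins chosen for brevity, not Meta’s actual setup (LIMA fine-tunes a 65-billion-parameter LLaMA model on its 1,000 curated examples).

```python
# Minimal sketch of "less is more" supervised fine-tuning: a pre-trained causal LM
# plus a small, manually curated set of prompt/response pairs. All specifics below
# (model id, placeholder data, hyperparameters) are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

MODEL_ID = "gpt2"  # stand-in for a much larger pre-trained base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# In the LIMA recipe this would be ~1,000 curated pairs; two placeholders here.
pairs = [
    {"prompt": "How do I sharpen a kitchen knife?",
     "response": "Use a whetstone, keeping a consistent angle on each pass."},
    {"prompt": "Explain recursion to a beginner.",
     "response": "A function that solves a problem by calling itself on smaller pieces."},
]
dataset = Dataset.from_list(pairs)

def tokenize(example):
    # Concatenate prompt and response into one training sequence.
    text = example["prompt"] + "\n\n" + example["response"] + tokenizer.eos_token
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lima-sketch", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=1e-5),
    train_dataset=tokenized,
)
trainer.train()
```

The design point LIMA argues for is that almost all capability comes from pre-training; the small fine-tuning set mostly teaches the model the format and style of helpful responses, which is why careful curation matters more than volume.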