
ChatGPT has put AI in the limelight. The NYT has a funny list of ways people are using it. I can think of many more, but it shows how interest is growing.

The public release of ChatGPT last fall kicked off a wave of interest in artificial intelligence. A.I. models have since snaked their way into many people’s everyday lives.

Despite their flaws, ChatGPT and other A.I. tools are helping people to save time at work, to code without knowing how to code, to make daily life easier or just to have fun.



At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further, however, and showed that GPT can execute code, too. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. Yet when Millière asked directly for the 83rd Fibonacci number, GPT got it wrong, which suggests the system wasn’t just parroting the Internet; rather, it was performing its own calculations to reach the correct answer.
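The exact program Millière typed isn’t reproduced here, but a minimal Python version of the task might look like the sketch below (the 1-indexed Fibonacci convention and the printed value are my assumptions, not the demo’s):

```python
# Hypothetical reconstruction of the kind of program Millière entered.
# Tracing it correctly means carrying two running values across 80+ loop
# iterations -- the "multistep reasoning" he describes -- which is striking
# for a model with no built-in working memory.
def fib(n: int) -> int:
    a, b = 1, 1              # F(1) = F(2) = 1 (indexing convention assumed)
    for _ in range(n - 2):
        a, b = b, a + b
    return b

print(fib(83))               # 99194853094755497
```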

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in—a tool ChatGPT can use when answering a query—that allows it to do so. But that plug-in was not used in Millière’s demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context—a situation similar to how nature repurposes existing capacities for new functions.

This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented last week at the International Conference on Learning Representations (ICLR), doctoral student Kenneth Li of Harvard University and his AI researcher colleagues—Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard—spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.
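As a hedged sketch of what “feeding in long sequences of moves in text form” amounts to: each playable Othello square becomes a token, and a small GPT-style network is trained on ordinary next-move prediction. The layer sizes and vocabulary details below are placeholders, not the paper’s actual configuration:

```python
# Minimal sketch of an Othello next-move predictor (assumed architecture;
# the study's "Othello-GPT" is a larger GPT trained on millions of games).
import torch
import torch.nn as nn

VOCAB = 61   # 60 playable squares (4 centre squares are pre-filled) + padding
D_MODEL, N_LAYER, N_HEAD, MAX_LEN = 128, 4, 4, 60

class TinyOthelloGPT(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(MAX_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, 4 * D_MODEL,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, N_LAYER)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, idx):                      # idx: (batch, seq)
        seq = idx.shape[1]
        x = self.tok(idx) + self.pos(torch.arange(seq, device=idx.device))
        mask = nn.Transformer.generate_square_subsequent_mask(seq)
        x = self.blocks(x, mask=mask.to(idx.device))  # causal: no peeking ahead
        return self.head(x)

# One training step: shift the move sequence by one position, exactly as in
# ordinary language modelling. Random tokens stand in for real game records.
model = TinyOthelloGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
games = torch.randint(0, VOCAB, (8, MAX_LEN))
logits = model(games[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   games[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```

The point of the study is that nothing in this training signal mentions a board: the model sees only move text, yet apparently builds an internal representation of the game state.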

This is awesome and more than a little scary. You can actually try it on a YouTube video and then ask questions about the content…


Introducing LeMUR, AssemblyAI’s new framework for applying powerful LLMs to transcribed speech.
With a single line of code, LeMUR can process transcripts of up to 10 hours’ worth of audio content, roughly 150k tokens, for tasks like summarization and question answering.
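A sketch of what that single line looks like, assuming AssemblyAI’s Python SDK (the `assemblyai` package); the method names here reflect the SDK’s documented surface, not this announcement, so treat them as assumptions:

```python
# Hedged sketch: transcribe an audio file, then hand the transcript to LeMUR.
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"        # placeholder

transcript = aai.Transcriber().transcribe("https://example.com/podcast.mp3")

# The LeMUR call itself: one line to run an LLM task over the whole transcript.
result = transcript.lemur.task("Summarize this conversation in three bullet points.")
print(result.response)
```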

Sam Altman & Patrick Collison’s Interview — https://www.assemblyai.com/playground/v2/transcript/6gsem9pf…73f20247b.

Summary: Researchers have created a device that emulates the human eye’s ability to see color by using narrowband perovskite photodetectors and a neuromorphic algorithm.

The photodetectors, sensitive to red, green, and blue light, mimic our cone cells, while the neuromorphic algorithm simulates our neural network to process information into high-quality images. Unlike modern cameras that require external filters, this technology could improve resolution and reduce manufacturing costs.
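To make the filter-free point concrete, here is a toy sketch; the researchers’ neuromorphic algorithm isn’t described here, so a trivial per-pixel fusion stands in for it. Because each narrowband detector already reports red, green, and blue at every pixel site, image formation reduces to stacking and calibration rather than Bayer-pattern interpolation:

```python
# Illustrative only: stand-in for the paper's (unpublished here) algorithm.
import numpy as np

h, w = 4, 4
red, green, blue = (np.random.rand(h, w) for _ in range(3))  # detector outputs, 0-1

# Every pixel has all three colour readings, so no external colour filter and
# no demosaicing interpolation is needed -- just stack and white-balance.
gains = np.array([1.0, 0.9, 1.1])        # assumed per-channel calibration gains
rgb = np.clip(np.stack([red, green, blue], axis=-1) * gains, 0.0, 1.0)
print(rgb.shape)                         # (4, 4, 3)
```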

The device also generates electricity as it absorbs light, potentially leading to battery-free camera technology.

Microsoft’s corporate VP and chief economist tells the World Economic Forum (WEF) that AI will be used by bad actors, but “we shouldn’t regulate AI until we see some meaningful harm.”

Speaking at the WEF Growth Summit 2023 during a panel on “Growth Hotspots: Harnessing the Generative AI Revolution,” Microsoft’s Michael Schwarz argued that it would be best not to regulate AI until something bad happens, so as not to suppress its potentially greater benefits.

“I am quite confident that yes, AI will be used by bad actors; and yes, it will cause real damage; and yes, we have to be very careful and very vigilant,” Schwarz told the WEF panel.

Is it OK for the largest AI platforms to be closed and private? Or should they all be open and transparent?

Why should you care?

Because AI has the power to completely transform every industry, drive a new world order, and create the next dozen trillionaires. It’s the single most important tool we’ve ever created to solve the world’s biggest problems.


Writer, a generative AI platform for enterprises, has released a report revealing that almost half (46%) of senior executives (directors and above) suspect their colleagues have unintentionally shared corporate data with ChatGPT. This troubling statistic highlights the necessity for generative AI tools to safeguard companies’ data, brand, and reputation.

The State of Generative AI in the Enterprise report found that ChatGPT is the most popular chatbot in use amongst enterprises, with CopyAI (35%) and Anyword (26%) following closely behind as the second and third most commonly used. However, many companies have banned the use of generative AI tools in the workplace, with ChatGPT being the most frequently banned (32%), followed by CopyAI (28%) and Jasper (23%).