
With AI chatbots, will Elon Musk and the ultra-rich replace the masses?

Elon Musk is hyping the imminent release of his ChatGPT competitor Grok, yet another example of how his entire personality is itself a biological LLM, trained by ingesting all of Reddit and 4chan. Grok already seems patterned in many ways on the worst of Elon’s indulgences, with the sense of humor of a desperately unfunny and regressive internet troll, and biases inherited from a man whose own horrible, dangerous biases are fully invisible to him.

There are all kinds of reasons to be wary of Grok, including the standard reasons to be wary of any current LLM-based AI technology, like hallucinations and inaccuracies. Layer on Elon Musk’s recent track record of disastrous social insensitivity and a generally harmful approach to world-shaping issues, and there is even more reason for concern. But the real risk probably isn’t so easy to grok yet, simply because we still have little understanding of the impact that widespread use of LLMs across our daily and online lives will have.

One key area where they’re already having and are bound to have much more of an impact is user-generated content. We’ve seen companies already deploying first-party integrations that start to embrace some of these uses, like Artifact with its AI-generated thumbnails for shared posts, and Meta adding chatbots to basically everything. Musk is debuting Grok on X as a feature reserved for Premium+ subscribers initially, with a rollout supposedly beginning this week.

Contrary to reports, OpenAI probably isn’t building humanity-threatening AI

Has OpenAI invented an AI technology with the potential to “threaten humanity”? From some of the recent headlines, you might be inclined to think so.

Reuters and The Information first reported last week that several OpenAI staff members had, in a letter to the AI startup’s board of directors, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” This AI project, according to the reporting, could solve certain math problems — albeit only at grade-school level — but, in the researchers’ opinion, had a chance of building toward an elusive technical breakthrough.

There’s now debate as to whether OpenAI’s board ever received such a letter — The Verge cites a source suggesting that it didn’t. Framing aside, though, Q* might not actually be as monumental — or threatening — as it sounds. It might not even be new.

Tech Leaders Collaborate On Generative AI For Accelerated Chip Design

Artificial intelligence has been steadily infused into various parts of the Synopsys EDA tool suite for the last few years. What started in 2021 with DSO.ai, a tool created to accelerate, enhance and reduce the costs associated with the place-and-route stage of semiconductor design (sometimes called PnR or floor planning), has been expanded across the company’s complete toolchain, called the Synopsys.ai suite, which covers virtually the entire chip development workflow. Despite its numerous successes in recent years, however, Synopsys isn’t done innovating. Last week the company announced a strategic collaboration with Microsoft to deliver what it is calling the Synopsys.ai Copilot, a contextually aware generative AI (GenAI) tool that assists human design teams via conversational intelligence using natural language.

If you missed the previous announcement, the Synopsys.ai Copilot is powered by OpenAI technology running on Microsoft’s Azure on-demand, high-performance cloud infrastructure. Synopsys.ai Copilot is meant to alleviate much of the grunt work required of engineers during RTL generation and verification, similar to the way ChatGPT’s conversational AI capabilities have brought productivity improvements to numerous other industries. Chip designers can effectively ask the Synopsys.ai Copilot questions in plain English to gain insights into results, produce documentation, or look up myriad other details. Today Synopsys revealed that AMD, Intel and Microsoft are already working with the Synopsys.ai GenAI capabilities on various designs.

The generative AI of the Synopsys.ai Copilot enables a number of new capabilities. Its collaborative tools can offer engineers guidance on everything from design tools to EDA workflows, and it can provide quick analysis of results. The Synopsys.ai Copilot can also expedite the development of RTL (register-transfer level) code, formal verification assertions, UVM test benches, and layout designs. Synopsys.ai Copilot will also enable end-to-end workflow creation using natural language across the Synopsys.ai suite.
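Since the Copilot is reportedly built on OpenAI models running on Azure, the plain-English interaction described above can be sketched with the generic OpenAI Python client. To be clear, this is an illustrative assumption rather than Synopsys’s published interface: the model name, system prompt, and testbench request below are all hypothetical.

```python
from openai import OpenAI

# Illustrative sketch only: Synopsys has not published the Copilot's API.
# This shows the conversational pattern described above, using the
# generic OpenAI Python client (assumes OPENAI_API_KEY is set).
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; the actual deployed model is not public
    messages=[
        {
            "role": "system",
            "content": "You are an EDA assistant helping engineers with "
                       "RTL design and verification workflows.",
        },
        {
            "role": "user",
            "content": "Generate a skeleton UVM testbench for a parameterized "
                       "synchronous FIFO, and list the assertions I should "
                       "write to catch overflow and underflow.",
        },
    ],
)
print(response.choices[0].message.content)
```

In the real product, the assistant would presumably be grounded in the design’s own context (RTL sources, tool logs, and results databases) rather than answering from general knowledge alone.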

Men 2X More Likely To Use Generative AI Than Women: Report

OpenAI’s ChatGPT gets 60 times the traffic of Google’s conversational generative AI engine Bard and boasts industry-leading 30-minute session times.


I asked AI why that might be, and one potential reason ChatGPT offered for the disparity is the historically higher representation of men in the tech sector, the likely source of most early adopters. When I also asked on Twitter/X why the disparity might exist, one person cited the fact that AI tools like Alexa, Siri and Cortana have historically had female names, playing into cultural stereotypes about women helping men. The modern generative AI tools, of course, have names like ChatGPT or Bard or Perplexity or MidJourney. So perhaps that is changing.

Another perfectly valid reason one woman cited: perhaps they just don’t want to.

“To get the AI to work, you have to think like it and do the prompts like it wants,” says author and artist Catherine Fitzpatrick. “Tiresome.”

Hyperwar Ascendant: The Global Race For Autonomous Military Supremacy

In my 2015 exploration with General John R. Allen of the concept of Hyperwar, we recognized the potential of artificial intelligence to irreversibly change the field of battle. Chief among our examples of autonomous systems were drone swarms, which are both a significant threat and a critical military capability. Today, Hyperwar seems to be the operative paradigm accepted by militaries the world over as a de facto reality. Indeed, the observe-orient-decide-act (OODA) loop is collapsing. Greater autonomy is being built into all manner of weapon systems and sensors. Work is ongoing to develop systems that further decrease reaction times and increase the mass of autonomous systems employed in conflict. This trend is potently highlighted by the U.S. Replicator initiative and China’s swift advancements in automated manufacturing and missile technologies.

The U.S. Replicator Initiative: A Commitment to Autonomous Warfare?

The Pentagon’s “Replicator” initiative is a strategic move to counter adversaries like China by rapidly producing “attritable autonomous systems” across multiple domains. Deputy Secretary of Defense Kathleen Hicks emphasized the need for platforms that are “small, smart, cheap, and many,” with plans to produce thousands of such systems within 18 to 24 months. The Department of Defense, under this initiative, is developing smaller, more intelligent, and cost-effective platforms, a move that aligns with the creation of a Hyperwar environment.

Google Delays Release of Gemini AI That Aims to Compete With OpenAI

Google’s company-defining effort to catch up to ChatGPT creator OpenAI is turning out to be harder than expected.

Google representatives earlier this year told some cloud customers and business partners they would get access to the company’s new conversational AI, a large language model known as Gemini, by November. But the company recently told them not to expect it until the first quarter of next year, according to two people with direct knowledge. The delay comes at a bad time for Google, whose cloud sales growth has slowed while that of its bigger rival, Microsoft, has accelerated. Part of Microsoft’s success has come from selling OpenAI’s technology to its customers.

New method uses crowdsourced feedback to train robots

To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning—a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal.

In many instances, a human expert must carefully design a reward function, which is an incentive mechanism that gives the agent motivation to explore. The human expert must iteratively update that reward function as the agent explores and tries different actions. This can be time-consuming, inefficient, and difficult to scale up, especially when the task is complex and involves many steps.
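To make concrete how much hand-engineering that involves, here is a minimal sketch of the kind of shaped reward function an expert might write for a cabinet-opening task. The state fields, coefficients, and thresholds are all hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class State:
    """Hypothetical observation for a cabinet-opening task."""
    gripper_to_handle: float  # distance from gripper to handle, in meters
    handle_grasped: bool      # whether the gripper is holding the handle
    door_angle: float         # how far the door is open, in radians

def shaped_reward(state: State, goal_angle: float = 1.2) -> float:
    """A hand-designed reward the expert must write and keep re-tuning.

    Every term encodes a human guess about what progress looks like;
    a badly chosen coefficient can be exploited by the agent.
    """
    reward = -0.5 * state.gripper_to_handle   # encourage approaching the handle
    if state.handle_grasped:
        reward += 1.0                         # bonus for a successful grasp
        reward += 2.0 * state.door_angle      # reward pulling the door open
    if state.door_angle >= goal_angle:
        reward += 10.0                        # large terminal bonus at the goal
    return reward

# Trial-and-error training then optimizes against this signal, e.g.:
print(shaped_reward(State(gripper_to_handle=0.05, handle_grasped=True, door_angle=1.3)))
```

Each of those guesses is exactly the sort of expert judgment the approach described next tries to remove.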

Researchers from MIT, Harvard University, and the University of Washington have developed a new reinforcement learning approach that doesn’t rely on an expertly designed reward function. Instead, it leverages crowdsourced feedback, gathered from many non-expert users, to guide the agent as it learns to reach its goal. The work has been published on the pre-print server arXiv.
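The published method is more sophisticated than this, but the core substitution can be sketched: many cheap, noisy votes from non-experts are aggregated into a signal that steers where the agent explores, standing in for the expert’s reward function. The annotator simulation and function names below are illustrative assumptions, not the researchers’ actual algorithm.

```python
import random
from collections import Counter

def simulated_annotator(n_candidates: int) -> int:
    """A fake non-expert labeler: it usually votes for the most advanced
    candidate state, but is noisy, the way real crowd feedback is."""
    if random.random() < 0.8:
        return n_candidates - 1             # mostly points the right way...
    return random.randrange(n_candidates)   # ...but is sometimes arbitrary

def pick_exploration_goal(candidate_states: list[str], n_annotators: int = 25) -> str:
    """Aggregate many noisy votes into one guidance signal that chooses
    the agent's next exploration target, replacing a designed reward."""
    votes = Counter(simulated_annotator(len(candidate_states))
                    for _ in range(n_annotators))
    best_index, _ = votes.most_common(1)[0]
    return candidate_states[best_index]

# The crowd only steers *where* the agent explores next; the low-level
# learning still happens by trial and error as usual.
states = ["near cabinet", "handle grasped", "door ajar"]
print("Crowd-selected exploration goal:", pick_exploration_goal(states))
```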

DeepMind’s Student of Games AI system can beat humans at a variety of games

A team of AI researchers from EquiLibre Technologies, Sony AI, Amii and Midjourney, working with Google’s DeepMind project, has developed an AI system called Student of Games (SoG) that is capable of both beating humans at a variety of games and learning to play new ones. In their paper published in the journal Science Advances, the group describes the new system and its capabilities.

Over the past half-century, scientists and engineers have developed the idea of machine learning and artificial intelligence, in which human-generated data is used to train computer systems. The technology has applications in a variety of scenarios, one of which is playing board and/or parlor games.

Teaching a computer to play a game and then improving its capabilities to the degree that it can beat humans has become a milestone of sorts, demonstrating how far artificial intelligence has developed. In this new study, the research team has taken another step toward artificial general intelligence—in which a computer can carry out tasks deemed superhuman.
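Student of Games itself unifies guided search with self-play learning so that one architecture handles both perfect- and imperfect-information games. As a much simpler illustration of the search half of that recipe (and emphatically not SoG’s actual algorithm), here is a minimal negamax player for tic-tac-toe, the classical ancestor of modern game-tree search.

```python
# Minimal negamax for tic-tac-toe: exhaustive game-tree search, the
# classical ancestor of the guided search used in systems like SoG.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    opponent = "O" if player == "X" else "X"
    if winner(board) == opponent:       # the previous move won the game
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0, None                  # board full: draw
    best_score, best_move = -2, None
    for move in moves:
        board[move] = player
        score = -negamax(board, opponent)[0]   # opponent's best is our worst
        board[move] = "."
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

board = list("X.O......")               # illustrative position, X to move
score, move = negamax(board, "X")
print(f"Best move for X: cell {move} (game value {score:+d})")
```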