
The adoption of artificial intelligence (AI) and generative AI, such as ChatGPT, is becoming increasingly widespread. The impact of generative AI is predicted to be significant, offering efficiency and productivity enhancements across industries. However, as we enter a new phase in the technology’s lifecycle, it’s crucial to understand its limitations before fully integrating it into corporate tech stacks.

Large language model (LLM) generative AI, a powerful tool for content creation, holds transformative potential. But beneath its capabilities lies a critical concern: the potential biases ingrained within these AI systems. Addressing these biases is paramount to the responsible and equitable implementation of LLM-based technologies.

Prudent use of LLM generative AI demands an understanding of the potential biases that can emerge during the training and deployment of these systems.

At the Goldman Sachs Communacopia and Tech Conference, tech executives explained why AI has rightfully gained so much attention on Wall Street.

Speaking to a standing-room-only crowd at the conference, Nvidia's Das laid out the company's long-term forecast for the AI market.

According to Das, the total addressable market for AI will consist of $300 billion in chips and systems, $150 billion in generative AI software, and $150 billion in omniverse enterprise software. These figures represent growth over the “long term,” Das said, though he did not specify a target date.


According to Nvidia, that $600 billion total is tied to a major bet on accelerated computing.
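The three market segments Das cited sum exactly to the $600 billion figure attributed to Nvidia. A quick sketch of the arithmetic (the dictionary keys are my own labels for the segments, not Nvidia's terminology):

```python
# Long-term total addressable market figures cited by Das, in billions of USD.
# Key names are descriptive labels, not official Nvidia segment names.
market_segments_billion_usd = {
    "chips_and_systems": 300,
    "generative_ai_software": 150,
    "omniverse_enterprise_software": 150,
}

total = sum(market_segments_billion_usd.values())
print(f"Total addressable market: ${total} billion")  # prints "Total addressable market: $600 billion"
```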

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already emerging.


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove, California, for a small expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.

A former Google executive who helped pioneer the company’s foray into artificial intelligence fears the technology will be used to create “more lethal pandemics.”

Mustafa Suleyman, co-founder and former head of applied AI at Google’s DeepMind, said the use of artificial intelligence will enable humans to access information with potentially deadly consequences.

“The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible,” Suleyman said on The Diary of a CEO podcast on Monday.

ChatGPT is not officially available in China.

China can’t buy US chips required for advanced artificial intelligence models, but that’s not stopping the country from churning out AI models. After receiving regulatory approval from the country’s internet watchdog, several Chinese tech companies launched their respective AI chatbots last week. This comes in response to OpenAI’s ChatGPT, which, since its launch, has prompted rival tech companies around the globe to launch their own chatbots.

According to a Reuters report, Baidu CEO Robin Li has claimed that over 70 large language models (LLMs) with over 1 billion parameters have been released in China.


US sanctions on semiconductors are choking China’s ability to advance in generative AI. But according to recent reports, China has given the nod to over 70 LLMs, showcasing its growth in the AI space.

A team of scientists from Ames National Laboratory has developed a new machine learning model for discovering critical-element-free permanent magnet materials. The model predicts the Curie temperature of new material combinations. It is an important first step in using artificial intelligence to predict new permanent magnet materials. This model adds to the team’s recently developed capability for discovering thermodynamically stable rare earth materials. The work is published in Chemistry of Materials.

High-performance magnets are essential for technologies such as electric vehicles and magnetic refrigeration. These magnets contain critical materials such as cobalt and rare earth elements like neodymium and dysprosium. These materials are in high demand but have limited availability. This situation is motivating researchers to find ways to design new magnetic materials with reduced critical materials.

Machine learning (ML) is a form of artificial intelligence. It is driven by computer algorithms that use data and trial-and-error to continually improve their predictions. The team used experimental data on Curie temperatures and theoretical modeling to train the ML algorithm. Curie temperature is the maximum temperature at which a material maintains its magnetism.
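The workflow described above (train a model on measured Curie temperatures, then predict the Curie temperature of new material combinations) can be sketched generically. This is a minimal stand-in using ordinary least squares on made-up composition descriptors; the Ames team's actual model, features, and data are not specified in the article:

```python
import numpy as np

# Toy supervised regression for Curie temperature. The descriptor columns
# (e.g., element fractions plus a lattice descriptor) and all numbers below
# are hypothetical, chosen only to illustrate the train-then-predict loop.
X = np.array([
    [0.70, 0.10, 1.2],
    [0.60, 0.20, 1.1],
    [0.50, 0.30, 1.0],
    [0.40, 0.40, 0.9],
])                                            # one descriptor row per known material
y = np.array([1043.0, 980.0, 915.0, 850.0])   # measured Curie temperatures (K), toy values

# Fit a linear model y ~ X @ w + b by least squares (intercept via a ones column).
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_curie(features):
    """Predict the Curie temperature (K) for a new composition descriptor."""
    return float(np.dot(np.append(features, 1.0), coef))

print(predict_curie([0.55, 0.25, 1.05]))  # interpolates between the known materials
```

A real materials-discovery model would use richer descriptors and a nonlinear learner, but the interface is the same: descriptors in, predicted Curie temperature out, with candidates screened before any synthesis.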

In a major breakthrough, scientists have built a tool to predict the odour profile of a molecule, just based on its structure. It can identify molecules that look different but smell the same, as well as molecules that look very similar but smell totally different.

Professor Jane Parker, University of Reading, said: “Vision research has wavelength, hearing research has frequency – both can be measured and assessed by instruments. But what about smell? We don’t currently have a way to measure or accurately predict the odour of a molecule, based on its molecular structure.

“You can get so far with current knowledge of the molecular structure, but eventually you are faced with numerous exceptions where the odour and structure don’t match. This is what has stumped previous models of olfaction. The fantastic thing about this new ML-generated model is that it correctly predicts the odour of those exceptions.”
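The "exceptions" Parker describes are exactly where naive structure-similarity prediction breaks: a nearest-neighbour baseline assigns a molecule the odour of its structurally closest known neighbour, which fails when similar structures smell different. A toy illustration (every molecule, descriptor vector, and odour label below is invented; this is not the published model):

```python
import numpy as np

# Toy nearest-neighbour odour prediction from structural descriptors.
# All data here are hypothetical, chosen to illustrate the failure mode.
molecules = {
    # name: (structural descriptor vector, odour label)
    "mol_A": (np.array([1.0, 0.90, 0.1]), "fruity"),
    "mol_B": (np.array([1.0, 0.80, 0.1]), "sulfurous"),  # similar structure to A, different smell
    "mol_C": (np.array([0.1, 0.20, 1.0]), "fruity"),     # different structure to A, same smell
}

def nearest_odour(query):
    """Predict odour as the label of the structurally closest known molecule."""
    best = min(molecules, key=lambda m: np.linalg.norm(molecules[m][0] - query))
    return molecules[best][1]

# A query almost identical to mol_B inherits mol_B's label, regardless of
# whether structure similarity actually implies odour similarity:
print(nearest_odour(np.array([1.0, 0.79, 0.1])))  # prints "sulfurous"
```

The reported breakthrough is a learned mapping that handles such cases, identifying molecules that look different but smell the same and vice versa, which raw structural distance cannot do.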

The current crop of AI robots has made giant leaps when it comes to tiny activities.

There are robots performing colonoscopies, conducting microsurgeries on nerve cells, constructing delicate timepieces and conducting fine touch-up operations on fading, aging classical paintings by the masters.

Robots are able to handle delicate objects thanks to what researchers call passive compliance. That is the ability to change their state in response to external forces.
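Passive compliance is commonly modeled as spring-like behavior: a compliant fingertip gives way under contact force rather than rigidly resisting, which caps the force transmitted to a fragile object. A minimal sketch, assuming a simple Hooke's-law finger with illustrative stiffness and force-limit values (not taken from any specific robot):

```python
# Minimal spring model of passive compliance: deflection x = F / k.
# Stiffness and force limit below are illustrative parameters only.

def deflection_mm(contact_force_n: float, stiffness_n_per_mm: float = 2.0) -> float:
    """Deflection of a compliant fingertip under a given contact force."""
    return contact_force_n / stiffness_n_per_mm

def is_safe_grasp(contact_force_n: float, max_force_n: float = 1.5) -> bool:
    """A delicate object survives if the contact force stays under its limit."""
    return contact_force_n <= max_force_n

# A soft finger (low stiffness) gives way under load, keeping contact forces
# on a fragile object low:
print(deflection_mm(1.0))   # 0.5 mm of give at 1 N
print(is_safe_grasp(1.0))   # True
```

The design trade-off is that lower stiffness means gentler contact but less positioning precision, which is why compliant fingers are paired with precise arms for the fine work described above.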

Other questions to the experts in this canvassing invited their views on the hopeful things that will occur in the next decade and for examples of specific applications that might emerge. What will human-technology co-evolution look like by 2030? Participants in this canvassing expect the rate of change to fall in a range anywhere from incremental to extremely impactful. Generally, they expect AI to continue to be targeted toward efficiencies in workplaces and other activities, and they say it is likely to be embedded in most human endeavors.

The greatest share of participants in this canvassing said automated systems driven by artificial intelligence are already improving many dimensions of their work, play and home lives, and they expect this to continue over the next decade. While they worry over the accompanying negatives of human-AI advances, they hope for broad changes for the better as networked, intelligent systems revolutionize everything, from the most pressing professional work to hundreds of the little “everyday” aspects of existence.

One respondent’s answer covered many of the improvements experts expect as machines sit alongside humans as their assistants and enhancers. An associate professor at a major university in Israel wrote, “In the coming 12 years AI will enable all sorts of professions to do their work more efficiently, especially those involving ‘saving life’: individualized medicine, policing, even warfare (where attacks will focus on disabling infrastructure and less in killing enemy combatants and civilians). In other professions, AI will enable greater individualization, e.g., education based on the needs and intellectual abilities of each pupil/student. Of course, there will be some downsides: greater unemployment in certain ‘rote’ jobs (e.g., transportation drivers, food service, robots and automation, etc.).”