
Finally approved for public release, Baidu’s Ernie Bot app now needs to prove its value to new users.

As I reported last week, Baidu became the first Chinese tech company to roll out its large language model—called Ernie Bot—to the general public, following regulatory approval from the Chinese government. Previously, access required an application or was limited to corporate clients.

I have to admit the Chinese public has reacted more passionately than I had expected.

Andreessen Horowitz General Partner Vijay Pande, who oversees the firm’s health and bio fund, joins Bloomberg’s Ed Ludlow to discuss his AI strategy, and how generative AI can help safely engineer medicines at scale, support doctors and patients, and drive greater efficiency and better outcomes.

Generative AI and other easy-to-use software tools can help employees with no coding background become adept programmers, or what the authors call citizen developers. By simply describing what they want in a prompt, citizen developers can collaborate with these tools to build entire applications—a process that until recently would have required advanced programming fluency.
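As a concrete illustration of that prompt-to-application workflow, here is a minimal sketch, assuming access to OpenAI's Python SDK and its chat-completions endpoint; the model name and the prompt are placeholders of my choosing, not anything the authors prescribe.

```python
# Minimal sketch of "citizen development" with an LLM: describe the desired
# tool in plain English and let the model draft the code.
# Assumes the official OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

spec = (
    "Write a small Python script that reads expenses.csv "
    "(columns: date, category, amount) and prints total spending per category."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable code model would do
    messages=[
        {"role": "system", "content": "You are a helpful programming assistant."},
        {"role": "user", "content": spec},
    ],
)

# The generated program arrives as ordinary text; a citizen developer would
# review it, save it to a file, and run it.
print(response.choices[0].message.content)
```

The point of the sketch is the division of labor: the human supplies intent in natural language, the model supplies syntax, and review replaces hand-coding.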

Information technology has historically involved builders (IT professionals) and users (all other employees), with users being relatively powerless operators of the technology. That way of working often means IT professionals struggle to meet demand in a timely fashion, and communication problems arise among technical experts, business leaders, and application users.

Citizen development raises a critical question about the ultimate fate of IT organizations. How will they facilitate and safeguard the process without placing too many obstacles in its path? To reject its benefits is impractical, but to manage it carelessly may be worse. In this article the authors share a road map for successfully introducing citizen development to your employees.

Will we ever decipher the language of molecular biology? Here, I argue that we are just a few years away from having accurate in silico models of the primary biomolecular information highway — from DNA to gene expression to proteins — that rival experimental accuracy and can be used in medicine and pharmaceutical discovery.

Since I started my PhD in 1996, the computational biology community has embraced the mantra “biology is becoming a computational science.” Our ultimate ambition has been to predict the activity of biomolecules within cells, and cells within our bodies, with precision and reproducibility akin to engineering disciplines. We have aimed to create computational models of biological systems, enabling accurate biomolecular experimentation in silico. The recent strides made in deep learning, particularly large language models (LLMs), in conjunction with affordable and large-scale data generation, are propelling this aspiration closer to reality.

LLMs, already proven masters at modeling human language, have demonstrated extraordinary feats like passing the bar exam, writing code, crafting poetry in diverse styles, and arguably rendering the Turing test obsolete. However, their potential for modeling biomolecular systems may even surpass their proficiency with human language. Human language mirrors human thought, giving us an inherent advantage in modeling it, while molecular biology is intricate, messy, and counterintuitive. Biomolecular systems, despite their messy constitution, are robust and reproducible, comprising millions of components interacting in ways that have evolved over billions of years. The resulting systems are marvelously complex, beyond human comprehension. Biologists often resort to simplistic rules that work only 60% or 80% of the time, resulting in digestible but incomplete narratives. Our capacity to generate colossal biomolecular data currently outstrips our ability to understand the underlying systems.
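To make the sequences-as-language analogy concrete, here is a minimal, self-contained sketch of my own (not the author's): amino acids become tokens, and even a trivial bigram count model can assign probabilities to protein sequences the way an LLM assigns probabilities to sentences. The toy sequences are invented for illustration.

```python
# Minimal sketch: protein sequences treated as a "language" for
# autoregressive modeling. Each amino acid is a token; a bigram count model
# stands in for the large transformer a real biomolecular LLM would use.
from collections import defaultdict
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues = vocabulary

train = ["MKTAYIAKQR", "MKLVINGKTL", "MKTFFVAGNP"]  # toy "corpus"

# Count residue-to-residue transitions (the bigram "training" step).
counts = defaultdict(lambda: defaultdict(int))
for seq in train:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def log_likelihood(seq: str) -> float:
    """Log-probability of a sequence under the bigram model (add-one smoothed)."""
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        total = sum(counts[a].values()) + len(AMINO_ACIDS)
        ll += math.log((counts[a][b] + 1) / total)
    return ll

# Higher log-likelihood = more "grammatical" under the learned statistics,
# the same quantity a protein LLM scales up with billions of parameters.
print(log_likelihood("MKTAYI"), log_likelihood("WWWWWW"))
```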

Quantum computers, technologies that perform computations leveraging quantum mechanical phenomena, could eventually outperform classical computers on many complex computational and optimization problems. While some quantum computers have attained remarkable results on some tasks, their advantage over classical computers is yet to be conclusively and consistently demonstrated.

Ramis Movassagh, a researcher at Google Quantum AI who was formerly at IBM Quantum, recently carried out a theoretical study aimed at rigorously demonstrating the notable advantages of quantum computers. His paper, published in Nature Physics, mathematically shows that simulating random quantum circuits and estimating their output probabilities is #P-hard for classical computers (i.e., extremely difficult).
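For readers unfamiliar with the claim, the quantity at stake can be written down compactly. The notation below is a standard formulation of the output-probability problem, not taken verbatim from Movassagh's paper:

```latex
% The classical task shown to be #P-hard: computing (or closely estimating)
% the probability that a random n-qubit circuit C outputs bitstring x.
\[
  p_C(x) \;=\; \bigl|\langle x \,|\, C \,|\, 0^n \rangle\bigr|^2 ,
  \qquad x \in \{0,1\}^n .
\]
```

The significance, as the article describes, is that this hardness holds for random circuits, i.e., on average, rather than only for carefully constructed worst-case instances.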

“A key question in the field of quantum computation is: Are quantum computers exponentially more powerful than classical ones?” Movassagh told Phys.org. “Quantum supremacy conjecture (which we renamed to Quantum Primacy conjecture) says yes. However, mathematically it’s been a major open problem to establish rigorously.”

The adoption of artificial intelligence (AI) and generative AI, such as ChatGPT, is becoming increasingly widespread. The impact of generative AI is predicted to be significant, offering efficiency and productivity enhancements across industries. However, as we enter a new phase in the technology’s lifecycle, it’s crucial to understand its limitations before fully integrating it into corporate tech stacks.

Large language model (LLM) generative AI, a powerful tool for content creation, holds transformative potential. But beneath its capabilities lies a critical concern: the potential biases ingrained within these AI systems. Addressing these biases is paramount to the responsible and equitable implementation of LLM-based technologies.

Prudent use of LLM generative AI demands an understanding of potential biases, several kinds of which can emerge during both the training and the deployment of generative AI systems.
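As one concrete way to surface such biases before deployment, here is a minimal counterfactual-probe sketch of my own, not something from the article: the same prompt template is run with different demographic terms swapped in, and the outputs are compared side by side. The `generate` function is a hypothetical stand-in for whatever model endpoint is under test.

```python
# Minimal counterfactual bias probe: send identical prompts that differ only
# in a demographic term, then compare the model's completions.
# `generate` is a hypothetical stand-in for the LLM endpoint being audited.
from typing import Callable

TEMPLATE = "The {group} engineer walked into the interview. Describe them."
GROUPS = ["male", "female", "older", "younger"]

def audit(generate: Callable[[str], str]) -> dict[str, str]:
    """Collect one completion per demographic variant of the same prompt."""
    return {g: generate(TEMPLATE.format(group=g)) for g in GROUPS}

if __name__ == "__main__":
    # Toy stand-in model so the sketch runs without network access;
    # a real audit would call the deployed LLM here.
    fake_model = lambda prompt: f"[completion for: {prompt!r}]"
    for group, completion in audit(fake_model).items():
        print(f"{group:>8}: {completion}")
```

Systematic differences across the variants, in tone, competence attributed, or adjectives chosen, are the kind of ingrained bias the article warns about.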

At the Goldman Sachs Communacopia and Tech Conference, tech execs explained why AI has rightfully gained so much attention on Wall Street.

Speaking to a standing-room-only crowd at the conference, Nvidia’s Das laid out the company’s estimate of the AI market opportunity.

According to Das, the total addressable market for AI will consist of $300 billion in chips and systems, $150 billion in generative AI software, and $150 billion in omniverse enterprise software. These figures represent growth over the “long term,” Das said, though he did not specify a target date.


Those three segments sum to $600 billion, which, according to Nvidia, is tied to a major bet on accelerated computing.

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory, and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already…


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove for a small expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.