
As my friends Tristan Harris and Aza Raskin of the Center for Humane Technology (CHT) explain in their April 9th YouTube presentation, the AI revolution is moving much too fast, and proving much too slippery, for conventional legal and regulatory responses by humans and their state power.

Crucially, these two idealistic Silicon Valley renegades point out, in an accessible manner, exactly what has made the recent jump in AI capacity possible:


OpenAI CEO Sam Altman recently warned that he has no qualms about removing ChatGPT from Europe if legislation designed to regulate AI becomes law. The legislation in question is the AI Act and includes several provisions that Altman argues are overly broad and overreaching.

“The current draft of the EU AI Act would be over-regulating,” Altman said in remarks picked up by Reuters. “But we have heard it’s going to get pulled back,” he added.

In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know about medicine; it turns out that they also need to be sufficiently aware of the intertwining of AI and the law throughout their medical careers.

Here’s why.


Is generative AI a blessing or a curse when it comes to medical doctors and the role of medical malpractice lawsuits?

Researchers at the University of Chicago.

Founded in 1890, the University of Chicago (UChicago, U of C, or Chicago) is a private research university in Chicago, Illinois. Located on a 217-acre campus in Chicago’s Hyde Park neighborhood, near Lake Michigan, the school holds top-ten positions in various national and international rankings. UChicago is also well known for its professional schools: Pritzker School of Medicine, Booth School of Business, Law School, School of Social Service Administration, Harris School of Public Policy Studies, Divinity School and the Graham School of Continuing Liberal and Professional Studies, and Pritzker School of Molecular Engineering.

TOKYO, May 20 (Reuters) — Leaders of the Group of Seven (G7) nations on Saturday called for the development and adoption of international technical standards for trustworthy artificial intelligence (AI) as lawmakers of the rich countries focus on the new technology.

While the G7 leaders, meeting in Hiroshima, Japan, recognised that the approaches to achieving “the common vision and goal of trustworthy AI may vary”, they said in a statement that “the governance of the digital economy should continue to be updated in line with our shared democratic values”.

The agreement came after the European Union, which is represented at the G7, inched closer this month to passing legislation to regulate AI technology, potentially the world’s first comprehensive AI law.


Summary: Researchers proposed the need for a legal framework to guide the conversation on whether or not human brain organoids can be considered people.

Brain organoids are grown from stem cells in a lab, mimicking the growth and structure of real brains. However, they do not fulfill the requirements to be considered natural persons, according to the researchers.

The study explores the potential juridical personhood of human brain organoids, and whether they can be considered legal entities.

NeaChat is an AI-powered chatbot developed based on ChatGPT, serving users in various industries such as education, research, finance, healthcare, and law.

Wuhan, China, May 6, 2023 (GLOBE NEWSWIRE) — The NeaChat team is honored to announce that it has obtained access to OpenAI’s latest generation of artificial intelligence language model, GPT-4, becoming one of the first teams in China to obtain authorized access. GPT-4 is a powerful AI model with excellent natural language understanding and generation capabilities, with significant improvements in functionality and performance over its predecessor, GPT-3.5.

The core advantages of GPT-4 lie in its vast knowledge base, efficient problem-solving capabilities, natural language generation, and wide range of applications. We believe that the introduction of GPT-4 will bring a richer and more intelligent experience to NeaChat users.

Artificial intelligence (AI) has become commonplace, and quantum computing is set to alter the landscape radically. The potential of quantum computers to process vast amounts of data at unprecedented speeds could render existing AI chatbots, such as ChatGPT, obsolete.

The intricacies of quantum computing intertwine with understanding the evolution of artificial intelligence. This journey reveals the convergence of two transformative technologies, uncovers challenges, opens opportunities, and underscores the vital role of safeguarding innovations through patent law.

Artificial intelligence has surged forward in recent years, yielding sophisticated AI chatbots like OpenAI’s ChatGPT.

AI startup Hugging Face and ServiceNow Research, ServiceNow’s R&D division, have released StarCoder, a free alternative to code-generating AI systems along the lines of GitHub’s Copilot.

Code-generating systems like DeepMind’s AlphaCode; Amazon’s CodeWhisperer; and OpenAI’s Codex, which powers Copilot, provide a tantalizing glimpse at what’s possible with AI within the realm of computer programming. Assuming the ethical, technical and legal issues are someday ironed out (and AI-powered coding tools don’t cause more bugs and security exploits than they solve), they could cut development costs substantially while allowing coders to focus on more creative tasks.

According to a study from the University of Cambridge, at least half of developers’ efforts are spent debugging and not actively programming, which costs the software industry an estimated $312 billion per year. But so far, only a handful of code-generating AI systems have been made freely available to the public — reflecting the commercial incentives of the organizations building them (see: Replit).