New analysis in Science explores artificial intelligence and interspecific law

Artificial intelligence already wears multiple hats in the workplace, whether it’s writing ad copy, handling customer support requests, or filtering job applications. As the technology’s capabilities continue to advance, the notion of corporations managed or owned by AI becomes less far-fetched. The legal framework already exists to allow “zero-member LLCs.”

How would an AI-operated LLC be treated under the law, and how would an AI face legal consequences as the owner or manager of an LLC? These questions speak to an unprecedented challenge facing lawmakers: regulating a nonhuman entity whose cognitive capabilities equal or exceed those of humans, one that, if the challenge is left unresolved or poorly addressed, could slip beyond human control.

“Artificial intelligence and interspecific law,” an article by Daniel Gervais of Vanderbilt Law School and John Nay of the Center for Legal Informatics at Stanford University, who is also a Visiting Scholar at Vanderbilt, argues for more AI research on the legal compliance of nonhumans with human-level intellectual capacity.

How machine learning can support data assimilation for Earth system models

Data assimilation is the combination of the latest observations with a short-range forecast to obtain the best possible estimate of the current state of the Earth system. Machine learning can contribute to it by optimizing the use of satellite observations.

Obtaining the best possible estimate of the current state of the Earth system, known as the analysis, is extremely important for weather forecasting. That’s because the analysis serves as the initial conditions of forecasts.

To use the latest observations, we rely on a vast system of instruments that regularly measure aspects of the atmosphere and other components of the Earth system. Over the last few decades, observations have become increasingly important.
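
The analysis step is described only at a high level above; as a rough, purely illustrative picture of the idea, the snippet below blends a single forecast value (the background) with a single observation using a variance-weighted, Kalman-style update. The function name and the numbers are hypothetical and are not drawn from any operational forecasting system.

```python
# Toy sketch of a data-assimilation "analysis" step: blend a short-range
# forecast (the background) with a new observation, weighting each by its
# assumed error variance. Purely illustrative; not an operational system.
def analysis_update(x_background, y_observed, background_var, obs_var):
    """Return the analysis: a variance-weighted blend of forecast and observation."""
    gain = background_var / (background_var + obs_var)  # weight given to the observation
    return x_background + gain * (y_observed - x_background)

# Example: a forecast of 18.0 C for 2 m temperature and a satellite-derived
# estimate of 19.0 C, with the observation trusted twice as much as the forecast.
x_analysis = analysis_update(x_background=18.0, y_observed=19.0,
                             background_var=1.0, obs_var=0.5)
print(round(x_analysis, 2))  # 18.67: pulled toward the better-trusted observation
```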

Bridging the expectation-reality gap in machine learning

Machine learning (ML) is now mission critical in every industry. Business leaders are urging their technical teams to accelerate ML adoption across the enterprise to fuel innovation and long-term growth. But there is a disconnect between business leaders’ expectations for wide-scale ML deployment and the reality of what engineers and data scientists can actually build and deliver on time and at scale.

In a Forrester study released today and commissioned by Capital One, the majority of business leaders expressed excitement about deploying ML across the enterprise, but data science team members said they didn’t yet have all the necessary tools to develop ML solutions at scale. Business leaders would love to leverage ML as a plug-and-play opportunity: “just input data into a black box and valuable learnings emerge.” The engineers who wrangle company data to build ML models know it’s far more complex than that. Data may be unstructured or of poor quality, and there are compliance, regulatory, and security parameters to meet.

Applying a neuroscientific lens to the feasibility of artificial consciousness

The rise in capabilities of artificial intelligence (AI) systems has led to the view that these systems might soon be conscious. However, we might be underestimating the neurobiological mechanisms underlying human consciousness.

Modern AI systems are capable of many amazing behaviors. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. When we humans interact with ChatGPT, we consciously perceive the text the model generates, just as you are consciously perceiving this text right now.

The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, working from clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious.

Machine learning gives users ‘superhuman’ ability to open and control tools in virtual reality

Researchers have developed a virtual reality application where a range of 3D modeling tools can be opened and controlled using just the movement of a user’s hand.

The researchers, from the University of Cambridge, used machine learning to develop ‘HotGestures’—analogous to the hot keys used in many desktop applications.

HotGestures give users the ability to build figures and shapes without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.

An AI just negotiated a contract for the first time ever — and no human was involved

In a world first, artificial intelligence demonstrated the ability to negotiate a contract autonomously with another artificial intelligence without any human involvement.

British AI firm Luminance developed an AI system based on its own proprietary large language model (LLM) to automatically analyze and make changes to contracts. LLMs are a type of AI algorithm that can achieve general-purpose language processing and generation.

Jaeger Glucina, chief of staff and managing director of Luminance, said the company’s new AI aimed to eliminate much of the paperwork that lawyers typically need to complete on a day-to-day basis.
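
Luminance’s system is proprietary and the article gives no implementation details; the sketch below only illustrates the general pattern it describes, prompting a general-purpose LLM to review a clause and propose a change. The model name, prompt wording, and clause are hypothetical, and the OpenAI client here stands in for whatever LLM backend such a tool would actually use.

```python
# Hypothetical sketch: ask a general-purpose LLM to propose a redline for one
# contract clause. This is NOT Luminance's system; it only illustrates the
# general "analyze and suggest changes" pattern described in the article.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

clause = ("The Receiving Party shall keep the Disclosing Party's information "
          "confidential for a period of one (1) year.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You are a contract-review assistant. Propose a revised "
                    "clause and briefly explain the change."},
        {"role": "user",
         "content": "Our policy requires confidentiality terms of at least "
                    "three years. Review this clause:\n" + clause},
    ],
)

print(response.choices[0].message.content)  # suggested redline plus rationale
```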

Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving

The team’s research, including their code, data, and models, is now publicly available on GitHub. This open-source approach encourages the broader AI community to continue this line of exploration, potentially leading to further advancements in machine learning.

The advent of LeMa represents a major milestone in AI, suggesting that machine learning (ML) processes can be made more akin to human learning. This development could revolutionize sectors heavily reliant on AI, such as healthcare, finance, and autonomous vehicles, where error correction and continuous learning are critical.

As the AI field continues to evolve rapidly, the integration of human-like learning processes, such as learning from mistakes, appears to be an essential factor in developing more efficient and effective AI systems.
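
The article does not spell out how LeMa trains on mistakes, but one plausible way to operationalize “learning from mistakes” is to pair a model’s incorrect solutions with corrected ones and fine-tune on those pairs. The sketch below is a hypothetical illustration of assembling such data, not Microsoft’s actual recipe.

```python
# Hypothetical sketch of a "learn from mistakes" data pipeline: pair a model's
# wrong answers with corrected solutions and turn them into fine-tuning examples.
# This illustrates the general idea only; it is not Microsoft's LeMa method.
from dataclasses import dataclass

@dataclass
class MistakeRecord:
    problem: str        # the original question
    wrong_answer: str   # the model's incorrect attempt
    correction: str     # a corrected, step-by-step solution

def to_training_example(record: MistakeRecord) -> dict:
    """Format one mistake/correction pair as an instruction-tuning example."""
    prompt = (f"Problem: {record.problem}\n"
              f"Incorrect attempt: {record.wrong_answer}\n"
              "Explain the error and give a corrected solution.")
    return {"prompt": prompt, "completion": record.correction}

records = [
    MistakeRecord(
        problem="A shirt costs $20 after a 20% discount. What was the original price?",
        wrong_answer="$24, because 20% of 20 is 4.",
        correction="The discounted price is 80% of the original, so 20 / 0.8 = $25.",
    ),
]

dataset = [to_training_example(r) for r in records]
print(dataset[0]["prompt"])
```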

Former Google CEO invests in nonprofit creating an ‘AI scientist’

Eric Schmidt is funding a nonprofit that’s focused on building an artificial intelligence-powered assistant for the laboratory, with the lofty goal of overhauling the scientific research process, according to interviews with the former Google CEO and officials at the new venture.

The nonprofit, Future House, plans to develop AI tools that can analyze and summarize research papers as well as respond to scientific questions using large language models — the same technology that supports popular AI chatbots. But Future House also intends to go a step further.

The “AI scientist,” as Future House refers to it, will one day be able to sift through thousands of scientific papers and independently compose hypotheses at greater speed and scale than humans, CEO Sam Rodriques said on the latest episode of the Bloomberg Originals series AI IRL.
