Ensuring security in the software market is undeniably crucial, but it is important to strike a balance that avoids excessive government regulation and the burdens of government-mandated legal responsibility, also called a liability regime. While there is no question that the market is broken with regard to security and that intervention is necessary, a less intrusive approach can let the market find the right level of security while minimizing heavy-handed government involvement.

Imposing a liability regime on software companies may go too far and create unintended consequences. The downsides of liability, such as increased costs, potential legal battles, and disincentives to innovation, can hinder the development of secure software without necessarily guaranteeing improved security outcomes. A liability regime could also burden smaller companies disproportionately and stifle the diversity and innovation present in the software industry.

Instead, a more effective approach involves influencing the software market through measures that encourage transparency and informed decision-making. By requiring companies to be fully transparent about their security practices, consumers and businesses can make informed choices based on their risk preferences. Transparency allows the market to drive the demand for secure software, enabling companies with robust security measures to potentially gain a competitive edge.

The Internet just changed forever, but most people living in the United States don’t even realize what just happened. A draconian new law known as the “Digital Services Act” went into effect in the European Union on Friday, and it establishes an extremely strict regime of Internet censorship that is far more authoritarian than anything we have ever seen before.

From this point forward, hordes of European bureaucrats will be the arbiters of what is acceptable to say on the Internet. If they discover something that you have said on a large online platform that they do not like, they can force that platform to take it down, because someone in Europe might see it. So even though this is a European law, the truth is that it is going to have a tremendous impact on all of us.

A major _New York Times_ investigation reveals how the United States’ aquifers are becoming severely depleted, driven in part by overuse from huge industrial farms and sprawling cities. The _Times_ reports that Kansas corn yields are plummeting due to a lack of water, there is not enough water to support the construction of new homes in parts of Phoenix, Arizona, and rivers across the country are drying up as aquifers are drained far faster than they refill. “It can take millions of years to fill an aquifer, but they can be depleted in 50 years,” says Warigia Bowman, director of sustainable energy and natural resources law at the University of Tulsa College of Law. “All coastal regions in the United States are really being threatened by groundwater and aquifer problems.”

Transcript: democracynow.org.

Democracy Now! is an independent global news hour that airs on over 1,500 TV and radio stations Monday through Friday. Watch our livestream at democracynow.org Mondays to Fridays 8–9 a.m. ET.

It sometimes presents incorrect steps on the way to an answer, because it is designed to base conclusions on precedent, and precedent drawn from a given data set is limited to the confines of that data set. This, says Microsoft, leads to “increased costs, memory, and computational overheads.”

AoT to the rescue. The algorithm evaluates whether the initial steps—“thoughts,” to use a word generally associated only with humans—are sound, thereby avoiding a situation where an early wrong “thought” snowballs into an absurd outcome.
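Microsoft describes this step-by-step vetting only at a high level, but the pruning idea can be sketched in a few lines. The following is a minimal, hypothetical Python illustration; `propose_thoughts`, `score_thought`, and the threshold are toy stand-ins for model calls, not Microsoft’s actual implementation.

```python
# Toy sketch of the AoT-style pruning idea: score each candidate "thought"
# as it is proposed, and discard weak branches before they are extended.
# propose_thoughts/score_thought are hypothetical stand-ins for LLM calls.
from typing import List, Optional

def propose_thoughts(problem: str, path: List[str]) -> List[str]:
    # Stand-in for sampling candidate next reasoning steps from a model.
    step = len(path) + 1
    return [f"step {step}: sound idea", f"step {step}: shaky idea"]

def score_thought(problem: str, path: List[str], thought: str) -> float:
    # Stand-in for a model-based soundness check of a single step.
    return 0.9 if "sound" in thought else 0.2

def solve(problem: str, path: Optional[List[str]] = None, depth: int = 0,
          max_depth: int = 3, threshold: float = 0.5) -> Optional[List[str]]:
    """Depth-first search over reasoning steps, pruning unsound ones early."""
    path = path or []
    if depth == max_depth:
        return path  # a complete chain of vetted thoughts
    for thought in propose_thoughts(problem, path):
        if score_thought(problem, path, thought) < threshold:
            continue  # an early wrong "thought" is cut before it snowballs
        result = solve(problem, path + [thought], depth + 1, max_depth, threshold)
        if result is not None:
            return result
    return None  # no sound chain found from this branch

print(solve("toy problem"))
# -> ['step 1: sound idea', 'step 2: sound idea', 'step 3: sound idea']
```

The key move is that each step is scored before it is extended, so a bad early step is pruned rather than carried forward, which is exactly the snowballing the excerpt describes.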

Though not expressly stated by Microsoft, one can imagine that if AoT is what it’s cracked up to be, it might help mitigate the so-called AI “hallucinations”—the funny, alarming phenomenon whereby programs like ChatGPT spit out false information. In one of the more notorious examples, in May 2023, a lawyer named Steven A. Schwartz admitted to “consulting” ChatGPT as a source when conducting research for a 10-page brief. The problem: the brief referred to several court decisions as legal precedents… that never existed.

What happens in femtoseconds in nature can now be observed in milliseconds in the lab.

Scientists at the University of Sydney.

The University of Sydney is a public research university located in Sydney, New South Wales, Australia. Founded in 1850, it is the oldest university in Australia and is consistently ranked among the top universities in the world. The University of Sydney has a strong focus on research and offers a wide range of undergraduate and postgraduate programs across a variety of disciplines, including arts, business, engineering, law, medicine, and science.

In the last ten years, AI systems have developed at rapid speed. Since its 2016 breakthrough of besting a legendary player at the complex game of Go, AI has become able to recognize images and speech better than humans and to pass tests including business school exams and Amazon coding interview questions.

Last week, during a U.S. Senate Judiciary Committee hearing about regulating AI, Senator Richard Blumenthal of Connecticut described the reaction of his constituents to recent advances in AI. “The word that has been used repeatedly is scary.”

The Subcommittee on Privacy, Technology, and the Law, which convened the hearing, heard testimony from three expert witnesses, who stressed the pace of progress in AI. One of those witnesses, Dario Amodei, CEO of prominent AI company Anthropic, said that “the single most important thing to understand about AI is how fast it is moving.”

Google DeepMind researchers have finally found a way to make life coaching even worse: infuse it with generative AI.

According to internal documents obtained by The New York Times, Google and the Google-owned DeepMind AI lab are working with “generative AI to perform at least 21 different types of personal and professional tasks.” And among those tasks, apparently, is an effort to use generative AI to build a “life advice” tool. You know, because an inhuman AI model knows everything there is to know about navigating the complexities of mortal human existence.

As the NYT points out, the news of the effort notably comes just months after AI safety experts at Google said, back in December, that users of AI systems could suffer “diminished health and well-being” and a “loss of agency” as a result of taking AI-spun life advice. The Google chatbot Bard, meanwhile, is barred from providing legal, financial, or medical advice to its users.

View full lesson: https://ed.ted.com/lessons/can-we-grow-human-brains-outside-…-lancaster.

Shielded by our thick skulls and swaddled in layers of protective tissue, the human brain is extremely difficult to observe in action. Luckily, scientists can use brain organoids — pencil eraser-sized masses of cells that function like human brains but aren’t part of an organism — to look closer. How do they do it? And is it ethical? Madeline Lancaster shares how to make a brain in a lab.

Lesson by Madeline Lancaster, animation by Adam Wells.

In this fascinating interview, Dr. Tim Scarfe speaks with renowned philosopher Daniel Dennett about the potential dangers of AI and the concept of “Counterfeit People.” Dennett raises concerns about AI being used to create artificial colleagues, and argues that preventing counterfeit AI individuals is crucial for societal trust and security.