
Researchers at the University of Pennsylvania.

The University of Pennsylvania (Penn) is a prestigious private Ivy League research university located in Philadelphia, Pennsylvania. Founded in 1740 by Benjamin Franklin, Penn is one of the oldest universities in the United States. It is renowned for its strong emphasis on interdisciplinary education and its professional schools, including the Wharton School, one of the leading business schools globally. The university offers a wide range of undergraduate, graduate, and professional programs across various fields such as law, medicine, engineering, and arts and sciences. Penn is also known for its significant contributions to research, innovative teaching methods, and active campus life, making it a hub of academic and extracurricular activity.

New research challenges the ease of implanting false memories, highlighting flaws in the influential “Lost in the Mall” study.

By reexamining the data from a previous study, researchers found that many supposed false memories might actually be based on real experiences, casting doubt on the use of such studies in legal contexts.

Reevaluating the “Lost in the Mall” Study.

A new theory related to the second law of thermodynamics describes the motion of active biological systems ranging from migrating cells to traveling birds.

In 1944, Erwin Schrödinger published the book What is life? [1]. Therein, he reasoned about the origin of living systems by using methods of statistical physics. He argued that organisms form ordered states far from thermal equilibrium by minimizing their own disorder. In physical terms, disorder corresponds to positive entropy. Schrödinger thus concluded: “What an organism feeds upon is negative entropy […] freeing itself from all the entropy it cannot help producing while alive.” This statement poses the question of whether the second law of thermodynamics is valid for living systems. Now Benjamin Sorkin at Tel Aviv University, Israel, and colleagues have considered the problem of entropy production in living systems by putting forward a generalization of the second law [2].
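For reference, Schrödinger's "negative entropy" argument can be stated with the textbook entropy balance for an open system; this is standard nonequilibrium thermodynamics, not the new generalization due to Sorkin and colleagues:

```latex
% Entropy balance for an open system:
% total change = internal production + exchange with the surroundings
\frac{\mathrm{d}S}{\mathrm{d}t}
  = \frac{\mathrm{d}_i S}{\mathrm{d}t} + \frac{\mathrm{d}_e S}{\mathrm{d}t},
\qquad
\frac{\mathrm{d}_i S}{\mathrm{d}t} \ge 0 \quad \text{(second law)}.
% An organism can keep its own entropy low only by exporting entropy,
% i.e., by making the exchange term sufficiently negative:
% Schroedinger's ``feeding on negative entropy''.
```

The second law constrains only the internal production term, so the open question for active systems such as migrating cells and traveling birds is how that constraint carries over when energy is continuously consumed at the microscopic scale.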

Researchers at the University of Cincinnati College of Medicine and Cincinnati Children’s Hospital have developed a new approach that combines advanced screening techniques with computational modeling to significantly shorten the drug discovery process. The approach has the potential to transform the pharmaceutical industry.

The research, published recently in Science Advances, represents a significant leap forward in drug discovery efficiency. It was featured on LegalReader.com.

https://www.uc.edu/news/articles/2024/09/uc-college-of-medic…aster.html


Legal Reader seeks to provide the latest legal news & commentary on the laws that shape our world.

This episode is sponsored by LegalZoom.

Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code Smith10 to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.

In this episode of the Eye on AI podcast, we dive into the world of Artificial General Intelligence (AGI) with Ben Goertzel, CEO of SingularityNET and a leading pioneer in AGI development.

Ben shares his vision for building machines that go beyond task-specific capabilities to achieve true, human-like intelligence. He explores how AGI could reshape society, from revolutionizing industries to redefining creativity, learning, and autonomous decision-making.

There are contexts where human cognitive and emotional intelligence takes precedence over AI, which serves a supporting role in decision-making without overriding human judgment. Here, AI “protects” human cognitive processes from things like bias, heuristic thinking, or decision-making that activates the brain’s reward system and leads to incoherent or skewed results. In the human-first mode, artificial integrity can assist judicial processes by analyzing previous law cases and outcomes, for instance, without substituting a judge’s moral and ethical reasoning. For this to work well, the AI system would also have to show how it arrives at different conclusions and recommendations, considering any cultural context or values that apply differently across different regions or legal systems.

4 – Fusion Mode:

Artificial integrity in this mode is a synergy between human intelligence and AI capabilities, combining the best of both worlds. Autonomous vehicles operating in Fusion Mode would have AI managing the vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like Brain-Computer Interfaces (BCIs), would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and AI, allowing ethical decision-making to occur in real time and blending AI’s precision with human moral reasoning. These kinds of advanced integrations between humans and machines will require artificial integrity at the highest level of maturity: artificial integrity would ensure not only technical excellence but ethical robustness, guarding against any exploitation or manipulation of neural data while prioritizing human safety and autonomy.
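As a purely illustrative sketch of that division of labor (all names here are hypothetical, and this makes no claim about how a real vehicle stack or BCI is engineered), the arbitration pattern reduces to: the AI proposes a control action, and a human override is honored only if it arrives within a hard real-time deadline:

```python
import queue
from dataclasses import dataclass

@dataclass
class Action:
    steering: float  # desired steering angle
    braking: float   # brake pressure in [0, 1]

def fused_decision(ai_propose, human_channel, deadline_s=0.05):
    """Fusion-mode arbitration sketch (hypothetical, illustration only).

    ai_propose    : callable returning the AI's proposed Action.
    human_channel : queue.Queue carrying human override Actions
                    (e.g., from a BCI or a manual control).
    """
    proposal = ai_propose()  # AI handles routine control
    try:
        # A human override wins only if it lands before the deadline.
        return human_channel.get(timeout=deadline_s)
    except queue.Empty:
        return proposal  # no override in time: execute the AI's action

# Toy usage: no human input arrives, so the AI's proposal is used.
channel = queue.Queue()
print(fused_decision(lambda: Action(steering=0.0, braking=0.2), channel))
```

The deadline is the crux of the design: it bounds how long the vehicle waits for human input, so the ethical override never compromises the real-time safety of routine control.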

Delivering Innovative, Compassionate And Accessible Patient Care — Robert Stone, CEO — City of Hope & Dr. Marcel van den Brink, MD, PhD, President, City of Hope Comprehensive Cancer Center.


Robert Stone is the CEO of City of Hope (https://www.cityofhope.org/robert-stone), a premier cancer research and treatment center dedicated to innovation in biomedical science and the delivery of compassionate, world-class patient care. A seasoned health care executive, he has served in a number of strategic decision-making roles since he joined City of Hope in 1996, culminating with his appointment as president in 2012, CEO in 2014, and as the Helen and Morgan Chu Chief Executive Officer Distinguished Chair in 2021.

Mr. Stone holds a J.D. from the University of Chicago Law School in Chicago, IL.

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?

In many cases, AI systems gather external information to use as context when answering a particular query. For example, to answer a question about a medical condition, the system might reference recent research papers on the topic. Even with this relevant context, models can make mistakes with what feels like high doses of confidence. When a model errs, how can we trace the mistake back to the specific piece of context it relied on, or to the absence of any such context?

To help tackle this obstacle, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers created ContextCite, a tool that can identify the parts of external context used to generate any particular statement, improving trust by helping users easily verify the statement.


The ContextCite tool from MIT CSAIL can find the parts of external context that a language model used to generate a statement. Users can easily verify the model’s response, making the tool useful in fields like health care, law, and education.
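The announcement does not spell out ContextCite’s internals, but attribution tools of this kind are often built on context ablation: drop random subsets of the context passages, rescore the model’s original response, and fit a sparse linear surrogate that assigns each passage a weight. Below is a minimal, hypothetical sketch of that recipe; the function names and scoring interface are assumptions, not ContextCite’s actual API:

```python
import numpy as np
from sklearn.linear_model import Lasso

def attribute_sources(sources, score_fn, n_ablations=64, keep_prob=0.5,
                      alpha=0.01, seed=0):
    """Context-ablation attribution sketch (hypothetical API).

    sources  : list of context passages the model may have relied on.
    score_fn : maps a boolean keep-mask over sources to the model's score
               (e.g., log-probability) for reproducing its original
               response given only the kept passages.
    Returns one weight per source; a larger weight marks a passage as
    more influential for the statement being checked.
    """
    rng = np.random.default_rng(seed)
    # Randomly keep/drop each source many times and rescore the response.
    masks = rng.random((n_ablations, len(sources))) < keep_prob
    scores = np.array([score_fn(mask) for mask in masks])
    # Fit a sparse linear surrogate: which sources explain the score?
    surrogate = Lasso(alpha=alpha).fit(masks.astype(float), scores)
    return surrogate.coef_

# Toy demo: pretend the response is driven almost entirely by source 2.
sources = ["passage A", "passage B", "passage C"]
toy_score = lambda mask: 5.0 * mask[2] + 0.1 * mask.sum()
print(attribute_sources(sources, toy_score))  # weight on index 2 dominates
```

With a real language model, score_fn would rerun the model on the pruned context and return the log-probability of the original response; the passages with large weights are exactly the ones a user should read to verify the statement.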

DEADLINE APPROACHING! The NEH program is accepting applications through Dec. 11, 2024. For more information, visit the NEH website.


For organizations in areas affected by Hurricane Helene in FL, GA, SC, NC, VA, and TN, optional prospectuses will be accepted until Oct. 16. The prospectus must use the Prospectus Template.

The Humanities Research Centers on Artificial Intelligence program aims to support a more holistic understanding of artificial intelligence (AI) in the modern world through the creation of new humanities research centers on AI at eligible institutions. Centers must focus their scholarly activities on exploring the ethical, legal, or societal implications of AI.