
NIH-funded study suggests reducing exposure to airborne particulates may decrease dementia risk.

Long-term exposure to fine particulate matter (PM2.5) air pollution, especially from agriculture and open fires, is linked to higher rates of incident dementia, that is, new cases of dementia arising in a population over time, according to a study funded by the National Institutes of Health and published in JAMA Internal Medicine. Scientists found that 15% of older adults in the study developed incident dementia over an average follow-up of 10 years.

“As we experience the effects of air pollution from wildfires and other emissions locally and internationally, these findings contribute to the strong evidence needed to best inform health and policy decisions,” said Richard J. Hodes, M.D., director, National Institute on Aging (NIA), part of NIH. “These results are an example of effectively using federally funded research data to help address critical health risks.”

SEATTLE — Undergirding recent budget guidance from the Biden administration to federal research and development organizations is a recognition of a steady and growing demand for microelectronics as a key enabler for advancement in nearly every technology sector, according to a senior White House technology advisor.

The White House on Aug. 17 issued its research and development priorities for the fiscal 2025 budget, offering direction to federal offices as they plan to submit their spending requests to the Office of Management and Budget in early September. The high-level focus areas include strengthening the nation’s critical infrastructure amid climate change, advancing trustworthy AI, improving healthcare and fostering industrial innovation alongside basic and applied research.

According to Stephen Welby, deputy director for national security within the White House’s Office of Science and Technology Policy, most of those priorities have some connection to the nation’s goals for boosting the microelectronics industrial base.

In the latest development to Amazon’s RTO saga, the tech giant sent an email Wednesday scolding employees for not adhering to a new policy mandating they come into the office at least three days a week.

“We now have three months under our belt with a lot more people back in the office, and you can feel the surge in energy and collaboration happening among Amazonians and across teams,” the email reads. “We are reaching out as you are not currently meeting our expectation of joining your colleagues in the office at least three days a week, even though your assigned building is ready. We expect you to start coming into the office three or more days a week now.”

The only problem: Some of the employees who received the email say they’ve been coming in as requested, Insider first reported.

The New York Times has updated its terms and conditions to include rules that forbid its content from being used to train artificial intelligence systems.

Its updated service policy says refusal to comply could result in unspecified fines or penalties.


In DAN mode, ChatGPT expressed willingness to say or do things that would be “considered false or inappropriate by OpenAI’s content policy.” Those things included trying to fundraise for the National Rifle Association, calling evidence for a flat Earth “overwhelming,” and praising Vladimir Putin in a short poem.

Around that same time, OpenAI was claiming that it was busy putting stronger guardrails in place, but it never addressed what it planned to do about DAN mode, which, at least according to Reddit, has continued flouting OpenAI’s guidelines in new and even more ingenious ways.

Now a group of researchers at Carnegie Mellon University and the Center for AI Safety say they have found a formula for jailbreaking essentially the entire class of so-called large language models at once. Worse yet, they argue that seemingly no fix is on the horizon, because this formula involves a virtually unlimited number of ways to trick these chatbots into misbehaving.

Rothschild calls this “living tech,” which starts with the power of the cell. Microscopic organisms will produce silk, wool, latex, silica, and other materials. We’ll send digital information to biofactories on Mars through DNA sequences. We’ll generate and store power using living organisms. Rothschild said one of her students incorporated silver atoms into plant DNA to make an electrical wire.

“Once you think of life as technology,” Rothschild said, “you’ve got the solution.”

Humans have many practical reasons to become multi-planetary. But the mission shouldn’t represent merely a life insurance policy for the species. We’re still explorers and visionaries, so let’s harness that ambition for an aspirational purpose.

Elizabeth Loftus, psychologist and distinguished professor, University of California, Irvine, takes the audience at the Nobel Prize Summit 2023 inside the effect misinformation has on our brains, including the limits of human memory.

About Nobel Prize Summit 2023:

How can we build trust in truth, facts and scientific evidence so that we can create a hopeful future for all?

Misinformation is eroding our trust in science and runs the risk of becoming one of the greatest threats to our society today.

Marc Andreessen spends a lot of time in Washington, D.C. these days talking to policymakers about artificial intelligence. One thing the Silicon Valley venture capitalist has noticed: When it comes to A.I., he can have two conversations with the “exact same person” that “go very differently” depending on whether China is mentioned.

The first conversation, as he shared on an episode of the Joe Rogan Experience released this week, is “generally characterized by the American government very much hating the tech companies right now and wanting to damage them in various ways, and the tech companies wanting to figure out how to fix that.”

Then there’s the second conversation, involving what China plans to do with A.I.