
Mark Zuckerberg Says AI Could Soon Do The Work Of Meta’s Midlevel Engineers

In today’s AI news, coding might go this year from one of the most sought-after skills on the job market to one that can be fully automated. On Friday’s episode of the Joe Rogan Experience, Mark Zuckerberg said that Meta and some of the biggest companies in the tech industry are already working toward this.

In other advancements, NovaSky, a team of researchers based out of UC Berkeley’s Sky Computing Lab, released Sky-T1-32B-Preview, a reasoning model that’s competitive with an earlier version of OpenAI’s o1. “Remarkably, Sky-T1-32B-Preview was trained for less than $450,” the team wrote in a blog post, “demonstrating that it is possible to replicate high-level reasoning capabilities affordably and efficiently.”

And, no company has capitalized on the AI revolution more dramatically than Nvidia. The world’s leading high-performance GPU maker has used its ballooning fortunes to significantly increase investments in all sorts of startups but particularly in AI startups.

Meanwhile, Sir Keir Starmer has green-lit a plan to use the immigration system to recruit a new wave of AI experts and loosen up data mining regulations to help Britain lead the world in the new technology. The recruitment of thousands of new AI experts by the government and private sector is part of a 50-point plan to transform Britain with the new technology.

In videos, El Capitan, the National Nuclear Security Administration’s (NNSA) first exascale supercomputer, is newly deployed at Lawrence Livermore National Laboratory and setting new benchmarks in computing power. At 2.79 exaFLOPs of peak performance, El Capitan’s unprecedented capabilities are already impacting scientific computing and making the previously unimaginable a reality.

Then, François Chollet discusses the outcomes of the 2024 ARC-AGI (Abstraction and Reasoning Corpus) Prize competition, in which accuracy on a private evaluation set rose from 33% to 55.5%. The conversation explores two core solution paradigms, program synthesis (induction) and direct prediction (“transduction”), and how successful solutions combine both.
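To make the distinction concrete, here is a minimal toy sketch of the program-synthesis (induction) paradigm, not any actual ARC Prize solution: given a few input-output grid pairs, search a small hypothetical DSL of transformations for a program consistent with all the examples, then apply it to a new input. (A transduction approach would instead predict the output grid directly, e.g. with a neural network, without recovering an explicit program.)

```python
import numpy as np

# A tiny, hypothetical DSL of candidate grid transformations.
DSL = {
    "identity": lambda g: g,
    "flip_lr": np.fliplr,
    "flip_ud": np.flipud,
    "rotate90": np.rot90,
}

def synthesize(train_pairs):
    """Induction: return the name of the first DSL program
    consistent with every (input, output) demonstration pair."""
    for name, fn in DSL.items():
        if all(np.array_equal(fn(x), y) for x, y in train_pairs):
            return name
    return None  # no program in the DSL fits

# Two demonstration pairs that imply a left-right flip.
pairs = [
    (np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]])),
    (np.array([[2, 3], [0, 1]]), np.array([[3, 2], [1, 0]])),
]
program = synthesize(pairs)             # "flip_lr"
test_input = np.array([[5, 0], [0, 7]])
prediction = DSL[program](test_input)   # apply the induced program
```

Real ARC tasks require a vastly richer program space and search strategy; the point of the sketch is only the shape of the induction loop: enumerate programs, keep one that explains all demonstrations, generalize to the test input.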

And, in this entertaining and important talk, AI ethicist Nadia Lee shares the perils, pitfalls and opportunities that swirl around our emerging AI reality. Nadia Lee is an ethical AI advocate and the founder of ThatsMyFace, an AI company which detects key assets and people in malicious content for businesses.

Dr. Debra Whitman, Ph.D. — Chief Public Policy Officer, AARP — Author, The Second Fifty

Exploring the most important questions we face as we age.


Dr. Debra Whitman, Ph.D. is Executive Vice President and Chief Public Policy Officer at AARP (https://www.aarp.org/), where she leads policy development, analysis, and research, as well as global thought leadership supporting and advancing the interests of individuals age 50-plus and their families. She oversees AARP’s Public Policy Institute, AARP Research, Office of Policy Development and Integration, Thought Leadership, and AARP International.

Dr. Whitman is an authority on aging issues with extensive experience in national policy making, domestic and international research, and the political process. An economist, she is a strategic thinker whose career has been dedicated to solving problems affecting economic and health security, and other issues related to population aging.

As staff director for the U.S. Senate Special Committee on Aging, Dr. Whitman worked across the aisle to increase retirement security, lower the cost of health care, protect vulnerable seniors, safeguard consumers, make the pharmaceutical industry more transparent, and improve our nation’s long-term care system.

Before that, Dr. Whitman worked for the Congressional Research Service as a specialist in the economics of aging. She provided members of Congress and their staff with research and advice, and authored analytical reports on the economic impacts of current policies affecting older Americans, as well as the distributional and intergenerational effects of legislative proposals.

Dr. Nahid Bhadelia, MD — Founding Director, BU Center on Emerging Infectious Diseases (CEID)

Improving Global Resilience Against Emerging Infectious Threats — Dr. Nahid Bhadelia, MD — Founding Director, Center on Emerging Infectious Diseases (CEID), Boston University.


Dr. Nahid Bhadelia, MD, MALD is a board-certified infectious diseases physician who is the Founding Director of the BU Center on Emerging Infectious Diseases (https://www.bu.edu/ceid/about-the-cen…) as well as an Associate Professor at the BU School of Medicine. She served as the Senior Policy Advisor for Global COVID-19 Response for the White House COVID-19 Response Team in 2022–2023, where she coordinated the interagency programs for global COVID-19 vaccine donations from the United States and was the policy lead for Project NextGen, a $5B HHS program aimed at developing next-generation vaccines and treatments for pandemic-prone coronaviruses. She also served as the interim Testing Coordinator for the White House MPOX Response Team. She is the Director and co-founder of the Biothreats Emergence, Analysis and Communications Network (BEACON), an open-source outbreak surveillance program.

Between 2011 and 2021, Dr. Bhadelia helped develop and then served as the medical director of the Special Pathogens Unit (SPU) at Boston Medical Center, a medical unit designed to care for patients with highly communicable diseases and a state-designated Ebola Treatment Center. She was previously an associate director for BU’s maximum containment research program, the National Emerging Infectious Diseases Laboratories. She provided direct patient care and took part in outbreak response and medical countermeasures research during multiple Ebola virus disease outbreaks in West and East Africa between 2014 and 2019. She was the clinical lead for a DoD-funded viral hemorrhagic fever clinical research unit in Uganda, the Joint Mobile Emerging Disease Intervention Clinical Capability (JMEDICC) program, between 2017 and 2022. Currently, she is a co-director of the Fogarty-funded BU-University of Liberia Emerging and Epidemic Viruses Research training program. She was a member of the World Health Organization’s (WHO) Technical Advisory Group on Universal Health and Preparedness Review (UHPR). She currently serves as a member of the National Academies Forum on Microbial Threats and previously served as the chair of the National Academies Workshop Committee for Potential Research Priorities to Inform Readiness and Response to Highly Pathogenic Avian Influenza A (H5N1) and as a member of the Ad Hoc Committee on Current State of Research, Development, and Stockpiling of Smallpox Medical Countermeasures.

Dr. Bhadelia’s research focuses on operational global health security and pandemic preparedness, including medical countermeasure evaluation and clinical care for emerging infections, diagnostics evaluation and positioning, infection control policy development, and healthcare worker training. She has health system response experience with pathogens such as H1N1, Zika, Lassa fever, Marburg virus disease, and COVID-19 at the state, national, and global levels.

Dr. Bhadelia has served on state, national, and interagency groups focused on biodefense priority setting, development of clinical care guidelines, and medical countermeasures research. She has served as a subject matter expert to the US Centers for Disease Control and Prevention, the Department of Defense (DoD), the White House Office of Science and Technology Policy (OSTP), and the World Bank. She has been an adjunct professor at the Fletcher School of Law and Diplomacy at Tufts University since 2016, where she teaches on global health security and emerging pathogens.

AI categorizes 700 million aurora images for better geomagnetic storm forecasting

The aurora borealis, or northern lights, is known for its stunning spectacle of light in the night sky, but this near-Earth manifestation, caused by explosive activity on the sun and carried toward Earth by the solar wind, can also interrupt vital communications and security infrastructure on Earth. Using artificial intelligence, researchers at the University of New Hampshire have categorized and labeled the largest-ever database of aurora images, which could help scientists better understand and forecast these disruptive geomagnetic storms.

The research, recently published in the Journal of Geophysical Research: Machine Learning and Computation, developed artificial intelligence and machine learning tools that successfully identified and classified over 706 million images of auroral phenomena in NASA’s Time History of Events and Macroscale Interactions during Substorms (THEMIS) data set, collected by twin spacecraft studying the space environment around Earth. THEMIS provides images of the night sky every three seconds from sunset to sunrise from 23 different stations across North America.

“The massive dataset is a valuable resource that can help researchers understand how the solar wind interacts with the Earth’s magnetosphere, the protective bubble that shields us from charged particles streaming from the sun,” said Jeremiah Johnson, associate professor of applied engineering and sciences and the study’s lead author. “But until now, its huge size limited how effectively we can use that data.”
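The article does not describe the team’s actual model, but the core task, assigning each of millions of sky images to one of a fixed set of categories, can be sketched in miniature. The class names and the nearest-centroid classifier below are purely illustrative stand-ins, not the UNH pipeline:

```python
import numpy as np

# Hypothetical category labels for illustration only.
CLASSES = ["arc", "diffuse", "discrete", "cloudy", "moon", "clear"]

rng = np.random.default_rng(0)
# Stand-in for learned per-class feature centroids (16-dim features).
centroids = rng.normal(size=(len(CLASSES), 16))

def label_batch(features):
    """Assign each image feature vector to its nearest class centroid.

    features: array of shape (n_images, 16)
    returns: list of n_images class-name strings
    """
    # Pairwise distances, shape (n_images, n_classes).
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return [CLASSES[i] for i in d.argmin(axis=1)]

# Five stand-in image feature vectors, labeled in one vectorized pass.
batch = rng.normal(size=(5, 16))
labels = label_batch(batch)
```

The practical point is the batch, vectorized labeling step: at 706 million images, per-image Python loops are infeasible, so real pipelines push exactly this kind of computation into array operations or GPU inference.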

UN aviation agency investigating ‘potential’ security breach

On Monday, the United Nations’ International Civil Aviation Organization (ICAO) announced it was investigating what it described as a “reported security incident.”

Established in 1944 as an intergovernmental organization, this United Nations agency works with 193 countries to support the development of mutually recognized technical standards.

“ICAO is actively investigating reports of a potential information security incident allegedly linked to a threat actor known for targeting international organizations,” ICAO said in a statement.

How we classify flood risk may give developers and home buyers a false sense of security

Common methods of communicating flood risk may create a false sense of security, leading to increased development in areas threatened by flooding.

This phenomenon, called the “safe development paradox,” is described in a new paper from North Carolina State University. Lead author Georgina Sanchez, a research scholar in NC State’s Center for Geospatial Analytics, said this may be an unintended byproduct of how the Federal Emergency Management Agency classifies areas based on their probability of dangerous flooding.

The findings are published in the journal PLOS ONE.

Quantum Teleportation Made Possible! Scientists Achieved Near-Perfect Results

Discover the groundbreaking world of quantum teleportation! Learn how scientists are revolutionizing data transfer using quantum entanglement, enabling secure transmission of quantum states over vast distances. From integrating quantum signals into everyday internet cables to overcoming challenges like noise, this technology is reshaping our future. Explore the possibilities of a quantum internet and its role in computing and security. Watch our full video for an engaging dive into how quantum teleportation works and why it’s a game-changer for technology.

Paper link: https://journals.aps.org/prl/abstract


AI’s Achilles’ Heel: Researchers Expose Major Model Security Flaw

Researchers used electromagnetic signals to steal and replicate AI models from a Google Edge TPU with 99.91% accuracy, exposing significant vulnerabilities in AI systems and calling for urgent protective measures.

Researchers have shown that it’s possible to steal an artificial intelligence (AI) model without directly hacking the device it runs on. This innovative technique requires no prior knowledge of the software or architecture supporting the AI, making it a significant advancement in model extraction methods.

“AI models are valuable, we don’t want people to steal them,” says Aydin Aysu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Building a model is expensive and requires significant computing resources. But just as importantly, when a model is leaked, or stolen, the model also becomes more vulnerable to attacks – because third parties can study the model and identify any weaknesses.”