Since the public release of OpenAI’s ChatGPT, artificial intelligence (AI) has quickly become a driving force in innovation and everyday life, sparking both excitement and concern. AI promises breakthroughs in fields like medicine, education, and energy, with the potential to solve some of society’s toughest challenges. But at the same time, fears around job displacement, privacy, and the spread of misinformation have led many to call for tighter government control.

Many are now seeking swift government intervention to regulate AI’s development in the waning “lame duck” session before the next Congress is seated. These efforts have been led by tech giants, including OpenAI, Amazon, Google, and Microsoft, under the guise of securing “responsible development of advanced AI systems” against risks like misinformation and bias. Building on the Biden administration’s executive order, which created the U.S. Artificial Intelligence Safety Institute (AISI) and mandated, among other things, that AI “safety tests” be reported to the government, the bipartisan negotiations would permanently authorize the AISI to act as the nation’s primary AI regulatory agency.

The problem is, the measures pushed by these lobbying campaigns favor large, entrenched corporations, sidelining smaller competitors and stifling innovation. If Congress moves forward with establishing a federal AI safety agency, even with the best of intentions, it risks cementing Big Tech’s dominance at the expense of startups. Rather than fostering competition, such regulation would likely serve the interests of the industry’s largest corporations, stifling entrepreneurship and limiting AI’s potential to transform America—and the world—for the better. The unintended consequences are serious: slower product improvement, fewer technological breakthroughs, and severe costs to the economy and consumers.

Health Innovation For Prevention And Precision At Scale — Dr. Päivi Sillanaukee, MD, Ph.D. — Special Envoy, Health & Wellbeing, Ministry of Social Affairs and Health Finland.


Dr. Päivi Sillanaukee, MD, Ph.D. is Special Envoy for Health and Wellbeing, Ministry of Social Affairs and Health Finland (https://stm.fi/en/rdi-growth-programm…).

Dr. Sillanaukee has over 20 years of experience in the highest civil-service positions, including roles as Director General at the Ministry of Social Affairs and Health and Ambassador for Health and Wellbeing at the Ministry for Foreign Affairs, as well as various other public-sector roles at the municipal and special health care district levels.

WASHINGTON — The Australian Department of Defence announced the cancellation of its JP9102 military satellite program, an estimated $5 billion project awarded to Lockheed Martin just 18 months ago, citing shifts in satellite technology and the market’s pivot toward multi-orbit space communications.

The cancellation of Australia’s JP9102 satellite program is yet another sign of the disruptive impact that low Earth orbit space internet services, led by the rapid growth of SpaceX’s Starlink, are having on the traditional satellite communications industry and government procurement models.

JP9102, or Defence Joint Project 9102, was launched in 2021 with plans to develop three to five geostationary satellites and associated ground systems, marking one of Australia’s most ambitious space infrastructure ventures.

By Chuck Brooks, Skytop Contributor / October 25, 2024

Chuck Brooks serves as President and Consultant of Brooks Consulting International. Chuck also serves as an Adjunct Professor at Georgetown University in the Cyber Risk Management Program, where he teaches graduate courses on risk management, homeland security, and cybersecurity.

Chuck has received numerous global accolades for his work promoting cybersecurity. He was recently named the top cybersecurity expert to follow on social media, as well as one of the top cybersecurity leaders for 2024. He has also been named “Cybersecurity Person of the Year” by Cyber Express, Cybersecurity Marketer of the Year, and a “Top 5 Tech Person to Follow” by LinkedIn, where he has 120,000 followers.

As a thought leader, blogger, and event speaker, he has briefed the G20 on energy cybersecurity, and the U.S. Embassy to the Holy See and the Vatican on global cybersecurity cooperation. He has served on two National Academy of Sciences advisory groups, one on digitalizing the USAF and another on securing biotech. He has also addressed USTRANSCOM on cybersecurity and serves on an industry/government working group for DHS CISA focused on securing space systems.

Suspended in the relic of an ancient sea beneath southern Arkansas, there may be enough lithium to meet nine times the global demand projected for car batteries in 2030.

A collaborative national and state government research team trained a machine learning model to predict and map the lithium concentrations of salty water deep within the porous limestone aquifer beneath southern Arkansas, known as the Smackover Formation brines.

How does social media influence safety messages during a natural disaster? A recent study published in the International Journal of Disaster Risk Reduction addresses this question: a pair of researchers from the Stevens Institute of Technology investigated how perceptions of natural disasters, and the corresponding government responses, can be shaped by false or irrelevant information shared across a range of social media platforms, specifically X (Twitter) and Facebook. The study could help scientists, governments, disaster relief organizations, and the public better understand how social media messages and discussions affect responses to natural disasters worldwide.

“It’s like being at a crowded party—if everyone’s arguing loudly about politics, it’s hard to make yourself heard over the noise,” said Dr. Jose Ramirez-Marquez, an associate professor in the Stevens School of Systems and Enterprises and a co-author of the study.

For the study, the researchers examined online discussions during four recent hurricanes: Harvey, Imelda, Laura, and Florence. The goal was to identify online discussion patterns and determine which posts and comments drew the most attention as each crisis unfolded. For example, the researchers found that dogs trapped by flooding accounted for 24 of the 50 most active discussions, compared with only 7 of those 50 devoted to public safety. During Hurricane Florence, more than half of the top 50 discussions involved politics or animals, while 19 of the 50 discussed public safety.

Next Generation Biomanufacturing Technologies — Dr. Leonard Tender, Ph.D. — Biological Technologies Office, Defense Advanced Research Projects Agency — DARPA


Dr. Leonard Tender, Ph.D. is a Program Manager in the Biological Technologies Office at DARPA (https://www.darpa.mil/staff/dr-leonar…), where his research interests include developing new methods for user-defined control of biological processes, as well as climate and supply chain resilience.

Prior to coming to DARPA, Dr. Tender was a principal investigator and led the Laboratory for Molecular Interfaces in the Center for Bio/Molecular Science and Engineering at the U.S. Naval Research Laboratory. There, among other accomplishments, he facilitated numerous international collaborations with key stakeholders in academia, industry, and government, and his highly interdisciplinary research team, comprising electrochemists, microbiologists, and engineers, was widely recognized for its many contributions to the field of microbial electrochemistry.