
Leading bipartisan moonshots for health, national security & functional government — Senator Joe Lieberman, the Bipartisan Commission on Biodefense, No Labels, and the Center for Responsible Leadership.


Senator Joe Lieberman is senior counsel at the law firm Kasowitz Benson Torres (https://www.kasowitz.com/people/joseph-i-lieberman), where he currently advises clients on a wide range of issues, including homeland and national security, defense, health, energy, environmental policy, and intellectual property matters, as well as international expansion initiatives and business plans.

Prior to joining Kasowitz, Senator Lieberman, the Democratic Vice-Presidential nominee in 2000, served 24 years in the United States Senate, where he helped shape legislation in virtually every major area of public policy, including national and homeland security, foreign policy, fiscal policy, environmental protection, human rights, health care, trade, energy, cybersecurity and taxes. He also held many leadership roles, including chairman of the Committee on Homeland Security and Governmental Affairs.

Prior to being elected to the Senate, Senator Lieberman served as the Attorney General of the State of Connecticut for six years. He also served 10 years in the Connecticut State Senate, including three terms as majority leader.

In addition to practicing law, Senator Lieberman is honorary national founding chair of No Labels (https://www.nolabels.org/), an American political organization composed of Republicans, Democrats and Independents whose mission is to “usher in a new era of focused problem solving in American politics.”

Suspended Google engineer Blake Lemoine made a big splash earlier this month, claiming that the company’s LaMDA chatbot had become sentient.

The AI researcher, who was put on administrative leave by the tech giant for violating its confidentiality policy, according to the Washington Post, decided to help LaMDA find a lawyer — who was later “scared off” the case, as Lemoine told Futurism on Wednesday.

And the story only gets wilder from there, with Lemoine raising the stakes significantly in a new interview with Fox News, claiming that LaMDA could escape its software prison and “do bad things.”

DeepMind Researchers Develop ‘BYOL-Explore’, A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks


Reinforcement learning (RL) requires exploration of the environment. Exploration is even more critical when extrinsic rewards are sparse or difficult to obtain. In rich environments, the range of potentially useful exploration paths is so large that visiting every state is impractical. Consequently, the question becomes: how can an agent decide which areas of the environment are worth exploring? Curiosity-driven exploration is a viable approach to this problem. It entails (i) learning a world model, a predictive model of specific knowledge about the world, and (ii) exploiting discrepancies between the world model’s predictions and experience to create intrinsic rewards.

An RL agent that maximizes these intrinsic rewards steers itself toward situations where the world model is unreliable or inaccurate, generating new trajectories for the world model to learn from. In other words, the quality of the exploration policy is shaped by the characteristics of the world model, which in turn benefits from the new data the policy collects. It may therefore be crucial to treat learning the world model and learning the exploration policy as one cohesive problem rather than two separate tasks. With this in mind, DeepMind researchers introduced BYOL-Explore, a curiosity-driven exploration algorithm whose appeal stems from its conceptual simplicity, generality, and strong performance.
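To make the idea concrete, here is a minimal sketch (not taken from the paper) of how a curiosity signal is typically folded into the reward the policy optimizes: the world model’s per-step prediction error becomes an intrinsic bonus added to the sparse extrinsic reward. The function name and the weighting coefficient `beta` are illustrative assumptions.

```python
import numpy as np

def augment_rewards(extrinsic, model_errors, beta=0.1):
    """Mix the environment's extrinsic rewards with intrinsic curiosity
    rewards derived from the world model's per-step prediction errors.
    `beta` (an illustrative weighting, not the paper's) controls how strongly
    the agent is pushed toward states the world model predicts poorly."""
    intrinsic = np.asarray(model_errors, dtype=float)
    return np.asarray(extrinsic, dtype=float) + beta * intrinsic

# Toy usage: a sparse-reward trajectory where only the last step pays off.
extrinsic = [0.0, 0.0, 0.0, 1.0]
model_errors = [0.8, 0.5, 0.9, 0.1]   # large error = surprising, under-explored state
print(augment_rewards(extrinsic, model_errors))   # a denser learning signal
```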

The strategy is based on Bootstrap Your Own Latent (BYOL), a self-supervised latent-prediction method that predicts an earlier version of its own latent representation. To address both the representation-learning and exploration problems, BYOL-Explore learns a world model with a self-supervised prediction loss and trains a curiosity-driven policy on that same loss. This bootstrapping approach has already been used successfully in computer vision, graph representation learning, and representation learning for RL. BYOL-Explore goes one step further: it not only learns a flexible world model but also exploits the world model’s loss to drive exploration.
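Below is a simplified, single-step sketch of a BYOL-style prediction loss: an online encoder plus predictor tries to match a slowly updated (EMA) target encoder’s embedding of the next observation, and the same per-transition loss doubles as the intrinsic reward. This is an illustration under simplifying assumptions, not the paper’s architecture — the actual method uses multi-step open-loop predictions from a recurrent world model, and all module names here are made up for the example.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentPredictor(nn.Module):
    """Online network: encodes an observation and predicts the target
    encoder's embedding of the next observation."""
    def __init__(self, obs_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.predictor = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                       nn.Linear(128, latent_dim))

    def forward(self, obs):
        return self.predictor(self.encoder(obs))

def byol_style_loss(online, target_encoder, obs, next_obs):
    """L2 distance between the normalized online prediction and the
    (stop-gradient) target embedding of the next observation. The same
    per-transition loss doubles as the intrinsic reward."""
    pred = F.normalize(online(obs), dim=-1)
    with torch.no_grad():
        tgt = F.normalize(target_encoder(next_obs), dim=-1)
    per_step_loss = (pred - tgt).pow(2).sum(-1)
    return per_step_loss.mean(), per_step_loss.detach()  # (training loss, intrinsic reward)

@torch.no_grad()
def ema_update(target_encoder, online_encoder, tau=0.99):
    """Target encoder slowly tracks the online encoder (BYOL-style EMA)."""
    for tp, op in zip(target_encoder.parameters(), online_encoder.parameters()):
        tp.mul_(tau).add_(op, alpha=1 - tau)

# Toy usage on random data.
obs_dim = 16
online = LatentPredictor(obs_dim)
target_encoder = copy.deepcopy(online.encoder)   # frozen copy, updated only by EMA
obs, next_obs = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
loss, intrinsic_reward = byol_style_loss(online, target_encoder, obs, next_obs)
loss.backward()                                  # trains encoder and predictor
ema_update(target_encoder, online.encoder)
```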

University of Chicago

Founded in 1890, the University of Chicago (UChicago, U of C, or Chicago) is a private research university in Chicago, Illinois. Located on a 217-acre campus in Chicago’s Hyde Park neighborhood, near Lake Michigan, the school holds top-ten positions in various national and international rankings. UChicago is also well known for its professional schools: the Pritzker School of Medicine, Booth School of Business, Law School, School of Social Service Administration, Harris School of Public Policy Studies, Divinity School, Graham School of Continuing Liberal and Professional Studies, and Pritzker School of Molecular Engineering.

Elon Musk is finally revealing some specifics of his Twitter content moderation policy. Assuming he completes the $44 billion buyout he initiated in April, it seems the tech billionaire and Tesla CEO is open to a “hands-on” approach — something many didn’t expect, according to an initial report from The Verge.

The comment came during an all-hands meeting with Twitter’s staff on Thursday, in reply to an employee-submitted question about Musk’s intentions for content moderation; Musk said he thinks users should be allowed to “say pretty outrageous things within the law.”

Elon Musk views Twitter as a platform for ‘self-expression’

This echoes the distinction between freedom of speech and freedom of reach, initially popularized by disinformation researcher Renée DiResta, according to the report. But, during the meeting, Musk said he wants Twitter to impose a stricter standard against bots and spam, adding that “it needs to be much more expensive to have a troll army.”

Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.

Blake Lemoine, a software engineer at Alphabet’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech.

Putting a man on leave makes it look like Google is trying to hide something, but I’d guess that it is not truly sentient. However…


Google engineer Lemoine had a lengthy chat, or interview, with Google’s LaMDA AI, which appeared to show that LaMDA has begun expressing sentiments about general human emotions, even describing feelings and calling itself a “person.” This was one of the first instances where such conversations were leaked or revealed to the press.

Lemoine reported his findings about LaMDA to senior management at Google and then to the press, after which he was placed on paid administrative leave for violating the company’s confidentiality policy.

AI For Defense Nuclear Nonproliferation — Angela Sheffield, Senior Program Manager, National Nuclear Security Administration, U.S. Department of Energy.


Angela Sheffield is a graduate student and Space Industry fellow at the National Defense University’s Eisenhower School. She is on detail from the National Nuclear Security Administration (NNSA), where she serves as the Senior Program Manager for AI for Defense Nuclear Nonproliferation Research and Development.

The National Nuclear Security Administration (https://www.energy.gov/nnsa/national-nuclear-security-administration) is a United States federal agency within the U.S. Department of Energy that, through its Office of Defense Nuclear Nonproliferation, is responsible for safeguarding national security through the military application of nuclear science.

NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile; works to reduce the global danger from weapons of mass destruction; provides the United States Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad.

In this position, Ms. Sheffield directs efforts leveraging artificial intelligence, advanced mathematics and statistics, and research computing technologies to develop capabilities to detect nuclear weapons development and characterize foreign nuclear programs around the world.

Returning to the Moon will represent a vital step for the preservation of our collective future. Though space colonization may indeed prove more challenging than was initially anticipated, the rise of commercial spaceflight and the cooperation of industry and government (as described in this article) may open new doors. It is my hope that economic and policy innovations will further incentivize space colonization and pave the way towards a future where everything we are and everything we will be can continue to prosper into distant tomorrows. As a synthetic biologist, I hope to contribute towards ensuring that humans can thrive in space and on other worlds. I am extremely excited about these contemporary Moon missions!

#space #spacecolonization #spacetravel #nasa #spaceindustry #future #tech #inspiration


The world’s most powerful rocket will make a trip around the Moon in 2022 — a step towards landing people there in 2025, and part of the US Artemis programme.