
In the field of artificial intelligence, reinforcement learning (RL) is a machine-learning approach that rewards desirable behaviors and penalizes undesirable ones. Through trial and error, an agent perceives its surroundings and learns to act accordingly, much like a person refining their behavior based on feedback. However, learning from scratch in settings with hard exploration problems remains a major challenge in RL. Because the agent receives no intermediate rewards, it cannot tell how close it is to completing the goal, so it must explore the space at random until, by chance, the door opens. Given the length of such tasks and the precision they require, success through random exploration alone is highly unlikely.

With prior information, the agent can avoid exploring the state space purely at random. This prior knowledge helps the agent determine which states of the environment are desirable and worth investigating further. Offline data collected from human demonstrations, programmed policies, or other RL agents can be used to train a policy that then initializes a new RL policy. When neural networks are used to represent the policies, this amounts to copying the pre-trained policy’s network weights into the new RL policy, so the new policy starts out behaving like the pre-trained one. However, naively initializing a new RL policy in this way frequently fails in practice, especially for value-based RL approaches.
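As a rough sketch of that weight-copying initialization, assuming a toy `Policy` class standing in for a real neural network (the class, the weights, and the linear scoring rule are all illustrative, not part of any particular RL library):

```python
import copy

class Policy:
    """Illustrative stand-in for a neural-network policy."""

    def __init__(self, weights):
        self.weights = weights  # stand-in for network parameters

    def act(self, observation):
        # A trivial linear "network": score = w . obs, act on its sign.
        score = sum(w * o for w, o in zip(self.weights, observation))
        return 1 if score >= 0 else -1

# Policy trained offline, e.g. from human demonstrations.
pretrained = Policy(weights=[0.5, -1.2, 0.3])

# Initialize the new online RL policy by copying the pre-trained
# weights; online training then fine-tunes this copy instead of
# starting from scratch.
new_policy = Policy(weights=copy.deepcopy(pretrained.weights))
```

The deep copy matters: the new policy should start from the same parameters but update them independently during online training.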

Google AI researchers have developed a meta-algorithm that leverages a pre-existing policy to initialize any RL algorithm. In Jump-Start Reinforcement Learning (JSRL), the researchers use two policies to learn tasks: a guide policy and an exploration policy. The exploration policy is an RL policy trained online on the agent’s new experiences in the environment; the guide policy is any pre-existing policy that is not modified during online training. JSRL creates a learning curriculum by rolling in with the guide policy and then handing control to the self-improving exploration policy, yielding results comparable to or better than competitive imitation-learning-plus-RL (IL+RL) approaches.
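The two-policy rollout scheme can be sketched in a few lines of Python. Everything here — the toy chain environment, the hand-written guide policy, and the curriculum of jump-start lengths — is illustrative and not taken from the JSRL implementation:

```python
import random

GOAL = 10  # reach state 10 starting from state 0

def guide_policy(state):
    # Pre-existing policy: always steps toward the goal.
    return +1

def exploration_policy(state):
    # Untrained online policy: acts randomly until it learns.
    return random.choice([-1, +1])

def jsrl_rollout(h, max_steps=50):
    """Run the guide policy for the first h steps of an episode,
    then hand control to the exploration policy."""
    state, trajectory = 0, []
    for t in range(max_steps):
        policy = guide_policy if t < h else exploration_policy
        state = max(0, state + policy(state))
        trajectory.append(state)
        if state >= GOAL:
            break
    return trajectory

# The curriculum: start with a long jump-start so the exploration
# policy only has to finish the task, then anneal h toward 0 as it
# improves, until it can solve the task from scratch.
for h in (9, 5, 0):
    jsrl_rollout(h)
```

The key idea is that the guide policy drops the exploration policy near the goal early in training, so it observes reward signal immediately instead of wandering at random from the start state.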

CSIS will host a public event on responsible AI in a global context, featuring a moderated discussion with Julie Sweet, Chair and CEO of Accenture, and Brad Smith, President and Vice Chair of the Microsoft Corporation, on the business perspective, followed by a conversation among a panel of experts on the best way forward for AI regulation. Dr. John J. Hamre, President and CEO of CSIS, will provide welcoming remarks.

Keynote Speakers:
Brad Smith, President and Vice Chair, Microsoft Corporation
Julie Sweet, Chair and Chief Executive Officer, Accenture

Featured Speakers:
Gregory C. Allen, Director, Project on AI Governance and Senior Fellow, Strategic Technologies Program, CSIS
Mignon Clyburn, Former Commissioner, U.S. Federal Communications Commission
Karine Perset, Head of AI Unit and OECD.AI, Digital Economy Policy Division, Organisation for Economic Co-Operation and Development (OECD)
Helen Toner, Director of Strategy, Center for Security and Emerging Technology, Georgetown University

This event is made possible through general support to CSIS.

This doesn’t mean you need to don an ushanka and start marching. You can start taking collective action by focusing on community groups and connecting with climate leaders. This will likely help you solve other issues, too, like waste disposal, recycling, and community clean-up projects in your locale.

Conclusion

Combating climate change requires all of us to reconsider our individual and collective climate responsibility. As an individual, you can do your part and let others know what you are doing. It has never been easier to connect with the world and share than it is today. You can join others in writing letters to the editor of your local newspaper, or your local and national government representatives. You can join groups like Citizens’ Climate Lobby and learn how to engage policy decision-makers. And in your daily routines, you can lead by example.

It’s a big ask to tell countries with very little access to electricity to accept the same level of responsibility as electricity-rich nations in striving to achieve the net-zero 2050 emissions target set by the United Nations. And nuclear energy has to be in the mix.


Is the IPCC goal of getting to net-zero by 2050 aspirational or legitimate? A Foreign Policy Review panel tackles the question.

Developments in artificial intelligence and human enhancement technologies have the potential to remake American society in the coming decades. A new Pew Research Center survey finds that Americans see promise in the ways these technologies could improve daily life and human abilities. Yet public views are also defined by the context of how these technologies would be used, what constraints would be in place and who would stand to benefit – or lose – if these advances become widespread.

Fundamentally, caution runs through public views of artificial intelligence (AI) and human enhancement applications, often centered around concerns about autonomy, unintended consequences and the amount of change these developments might mean for humans and society. People think economic disparities might worsen as some advances emerge and that technologies, like facial recognition software, could lead to more surveillance of Black or Hispanic Americans.

This survey looks at a broad arc of scientific and technological developments – some in use now, some still emerging. It concentrates on public views about six developments that are widely discussed among futurists, ethicists and policy advocates. Three are part of the burgeoning array of AI applications: the use of facial recognition technology by police, the use of algorithms by social media companies to find false information on their sites and the development of driverless passenger vehicles.

Abundant fuel cell raw materials and renewables potential could add up to a green hydrogen economy in the Philippines, according to Jose Mari Angelo Abeleda Jr and Richard Espiritu, two professors at the University of the Philippines Diliman. In a paper published in this month’s Energy Policy, they explained the country is a latecomer to the sector and should develop basic and applied knowledge for training and research. The country should also establish stronger links between industry and academia, the report’s authors suggested.

“The establishment of the Philippine Energy Research and Policy Institute (Perpi) is a move towards the right direction as it will be instrumental in crafting policies and pushing for activities that will usher for more private-academ[ic] partnerships for the development of fuel cell technology in the Philippines,” the scholars wrote. “However, through enabling legislation, a separate and dedicated Hydrogen Research and Development Center (HRDC) will be pivotal in ensuring that sufficient government and private funding are provided.”

The authors reported progress in the production of fuel cell membranes but few developments towards large-scale production, transport, and storage facilities. “The consolidation of existing renewable energy sources for hydrogen production can also be explored in order to ensure reliable and sustainable hydrogen fuel supply,” they wrote. “This is because the country will gain more benefit if it focuses more on the application of fuel cell technology on rural electrification via renewa[ble] energy-based distributed power generation, rather than on transportation such as fuel cell vehicles.”

Paris-based energy engineering company Technip Energies and Indian energy business Greenko ZeroC Private have signed a memorandum of understanding (MOU) to explore green hydrogen project development opportunities in the refining, petrochemicals, fertilizer, chemical, and power plant sectors in India. “The MOU aims to facilitate active engagement between the teams of Technip Energies in India and Greenko to step up collaborative opportunities on a build-own-operate (BOO) model – in which Greenko will be the BOO operator and owner of the asset and Technip Energies will support with engineering services, integration and EP/EPC [engineering and procurement/engineering, procurement and construction] – for pilot and commercial scale green hydrogen and related projects in India in order to offer economically feasible technology solutions to clients,” the French company wrote today.

SSTI and the UW–Madison-based Mayors Innovation Project recently released a new report arguing for a different approach that incentivizes diverse ways to travel to and from new developments. By funding public transportation, limiting parking and preserving the walkability of neighborhoods, Sundquist’s team argues, cities and states can reduce congestion better than if they only plan for cars.

The same solutions can help cities meet their policy goals, such as reduced emissions or more equitable access to services for residents.

“We look at the gap between policy goals on the one hand and the way decisions are being made that actually make things happen in the real world,” says Sundquist. “Often you have great policy goals, and then you have a bunch of rules of thumb that are still basically what was set in the ’50s during the interstate era.”

What does this actually mean in concrete terms? And is it an accurate description of Russia’s nuclear doctrine?

By Mark Episkopos

The recent round of tensions in the consistently difficult relationship between Russia and the U.S. has prompted a renewed focus on the Kremlin’s nuclear posture. For years, Western analysts have posited that Moscow adheres to what is often called an “escalate to de-escalate” approach. But what does this mean in concrete policy terms, and is it an accurate description of Russia’s nuclear doctrine?

Concerned about the war in Ukraine? You’re not alone. Historian Yuval Noah Harari provides important context on the Russian invasion, including Ukraine’s long history of resistance, the specter of nuclear war and his view of why, even if Putin wins all the military battles, he’s already lost the war. (This talk and conversation, hosted by TED global curator Bruno Giussani, was part of a TED Membership event on March 1, 2022. Visit http://ted.com/membership to become a TED Member.)


The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. You’re welcome to link to or embed these videos, forward them to others and share these ideas with people you know.


Anthony J. Ferrante, Global Head of Cybersecurity and Senior Managing Director, FTI Consulting, Inc.

Artificial intelligence (AI) models are built with a type of machine learning called deep neural networks (DNNs), which are loosely modeled on neurons in the human brain. DNNs enable machines to mimic human behaviors such as decision making, reasoning and problem solving. This presentation will discuss the security, ethical and privacy concerns surrounding this technology.

Learning Objectives:
1. Understand that the solution to adversarial AI will come from a combination of technology and policy.
2. Learn that coordinated efforts among key stakeholders will help to build a more secure future.
3. Learn how to share intelligence information in the cybersecurity community to build strong defenses.