Following China’s abrupt U-turn on zero-COVID policy last month, the country has seen an increase in COVID cases. A leading doctor at one of Shanghai’s top hospitals estimates that up to 70% of the city’s population has been infected with COVID-19.

A sharp-eyed developer at Krita noticed recently that, in the settings for their Adobe Creative Cloud account, the company had opted them (and everyone else) into a “content analysis” program whereby Adobe “may analyze your content using techniques such as machine learning (e.g. for pattern recognition) to develop and improve our products and services.” Some have taken this to mean that Adobe is ingesting their images to train its AI. And … they do. Kind of? But it’s not that simple.
First off, lots of software out there has some kind of “share information with the developer” option, where it sends telemetry like how often you use the app or certain features, why it crashed, etc. Usually it gives you an option to turn this off during installation, but not always — Microsoft incurred the ire of many when it basically said telemetry was on by default and impossible to turn off in Windows 10.
That’s gross, but what’s worse is slipping in a new sharing method and opting existing users into it. Adobe told PetaPixel that this content analysis thing “is not new and has been in place for a decade.” If they were using machine learning for this purpose, and said so, a decade ago, that’s quite impressive, as is the fact that apparently no one noticed in all that time. That seems unlikely. I suspect the policy has existed in some form but has quietly evolved.
India will reach its population peak in 2065.
India is the second most populous country in the world. With 1.414 billion people, it comes right after China. However, unlike China, whose policies have curbed population growth, India’s population is still increasing and is expected to surpass China’s within a few decades.
As BBC reported, China reduced its population growth rate by about half, from two percent in 1973 to 1.1 percent in 1983. According to demographers, much of this was accomplished by trampling on human rights.
A study suggests that the world’s population will shrink after 2100, reaching much lower numbers than what the U.N. currently predicts, and major shifts in economic power are also likely to happen.
Both animals and people use high-dimensional inputs (like eyesight) to accomplish various shifting survival-related objectives. A crucial aspect of this is learning via mistakes. A brute-force approach to trial and error, performing every action for every potential goal, is intractable even in the smallest settings. The difficulty of this search motivates memory-based methods for compositional thinking. These processes include, for instance, the ability to (i) recall pertinent portions of prior experience, (ii) reassemble them into new counterfactual plans, and (iii) carry out such plans as part of a focused search strategy. Compared to sampling every action equally, such techniques for recycling prior successful behavior can considerably speed up trial and error. This is because the intrinsic compositional structure of real-world objectives and the similarity of the physical laws that govern real-world settings allow the same behavior (i.e., sequence of actions) to remain valid for many purposes and situations. What guiding principles enable memory processes to retain and reassemble experience fragments? This question is closely connected to the idea of dynamic programming (DP), which, by exploiting the principle of optimality, significantly lowers the computational cost of trial and error. Informally, the idea is to treat new, complicated problems as recompositions of previously solved, smaller subproblems.
This viewpoint has recently been used to create hierarchical reinforcement learning (RL) algorithms for goal-reaching tasks. These techniques build edges between states in a planning graph using a distance regression model, compute the shortest paths across it using DP-based graph search, and then use a learning-based local policy to follow those paths. The paper advances this line of research. Its contributions can be summarized as follows: the authors provide a strategy for long-horizon planning that acts directly on the high-dimensional sensory data an agent observes on its own (e.g., images from an onboard camera). Their solution blends traditional sampling-based planning algorithms with learning-based perceptual representations to retrieve and reassemble previously recorded state transitions from a replay buffer.
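To make that recipe concrete, here is a minimal Python sketch of the build-a-graph-then-search idea; it is not the authors’ code. The names `dist_model` (a learned regressor predicting how many timesteps an optimal policy needs between two states), `local_policy` (a goal-conditioned controller), the gym-style `env`, and the cutoff `MAX_EDGE_DIST` are all illustrative assumptions.

```python
# Sketch: plan over a replay buffer by connecting states with a learned distance
# model, searching the graph, then following the waypoints with a local policy.
# All names here are illustrative assumptions, not the paper's actual interfaces.
import itertools
import networkx as nx

MAX_EDGE_DIST = 5.0  # hypothetical cutoff: only connect states the policy can reach quickly


def build_planning_graph(replay_states, dist_model):
    """Connect pairs of previously visited states whose predicted distance is small."""
    graph = nx.DiGraph()
    graph.add_nodes_from(range(len(replay_states)))
    for i, j in itertools.permutations(range(len(replay_states)), 2):
        d = dist_model(replay_states[i], replay_states[j])
        if d <= MAX_EDGE_DIST:
            graph.add_edge(i, j, weight=d)
    return graph


def plan_and_act(env, start_obs, goal_obs, replay_states, dist_model, local_policy):
    """Shortest-path search over the graph, then follow the waypoints with the local policy."""
    graph = build_planning_graph(replay_states, dist_model)
    # Attach the current start and the desired goal as two extra nodes.
    start, goal = len(replay_states), len(replay_states) + 1
    for idx, s in enumerate(replay_states):
        if dist_model(start_obs, s) <= MAX_EDGE_DIST:
            graph.add_edge(start, idx, weight=dist_model(start_obs, s))
        if dist_model(s, goal_obs) <= MAX_EDGE_DIST:
            graph.add_edge(idx, goal, weight=dist_model(s, goal_obs))
    path = nx.shortest_path(graph, start, goal, weight="weight")
    subgoals = [replay_states[i] for i in path[1:-1]] + [goal_obs]
    obs = start_obs
    for subgoal in subgoals:                     # chase each waypoint, then the goal itself
        for _ in range(int(MAX_EDGE_DIST) + 1):  # give the local policy a few steps per hop
            obs, _, done, _ = env.step(local_policy(obs, subgoal))
            if done:
                return obs
    return obs
```

Pruning edges above the distance cutoff keeps the graph sparse and restricts the search to hops the local policy can plausibly execute.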
A two-step method makes this possible. First, they learn a latent space in which the distance between two states measures how many timesteps an optimal policy needs to move from one to the other; these contrastive representations are learned using goal-conditioned Q-values acquired through offline hindsight relabeling. Second, they threshold this learned latent distance to establish a neighborhood criterion across states. They then design sampling-based planning algorithms that scan the replay buffer for trajectory segments, previously recorded sequences of transitions, whose endpoints are neighboring states.
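And here is a rough sketch of the two-step procedure itself, again with assumed names: `encoder` stands in for the learned contrastive representation, and `NEIGHBOR_EPS` for the threshold that turns latent distance into a neighborhood test.

```python
# Sketch: threshold a learned latent distance to decide neighborhood, then retrieve
# replay-buffer segments whose endpoints are neighbors of the query states.
# `encoder` and NEIGHBOR_EPS are illustrative assumptions, not names from the paper.
import numpy as np

NEIGHBOR_EPS = 1.0  # hypothetical threshold on latent distance


def latent_distance(encoder, s1, s2):
    """Distance in the learned latent space, meant to approximate timesteps-to-reach."""
    z1, z2 = encoder(s1), encoder(s2)
    return float(np.linalg.norm(z1 - z2))


def are_neighbors(encoder, s1, s2):
    """Step two: threshold the learned metric to get a neighborhood criterion."""
    return latent_distance(encoder, s1, s2) <= NEIGHBOR_EPS


def retrieve_segments(encoder, trajectories, start_state, goal_state):
    """Scan stored trajectories for segments whose endpoints are adjacent to the
    query start and goal states; such segments are candidates for reassembly."""
    segments = []
    for traj in trajectories:                   # each traj is a list of recorded states
        for i in range(len(traj)):
            if not are_neighbors(encoder, start_state, traj[i]):
                continue
            for j in range(i + 1, len(traj)):
                if are_neighbors(encoder, traj[j], goal_state):
                    segments.append(traj[i:j + 1])
    return segments
```

Segments retrieved this way begin and end near the query states, so they can be stitched together into a counterfactual plan that was never executed as a whole.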
Leading Canada’s Bio-Safety & Security R&D — Dr. Loren Matheson PhD, Defence Research and Development Canada, Department of National Defence.
Dr. Loren Matheson, Ph.D., is a Portfolio Manager at the Centre for Security Science at Defence Research and Development Canada (DRDC, https://www.canada.ca/en/defence-research-development.html), a special operating agency of the Department of National Defence whose purpose is to provide the Canadian Armed Forces, other government departments, and the public safety and national security communities with knowledge and technology.
With a focus on the chemical and biological sciences at DRDC, Dr. Matheson develops and leads safety and security R&D projects with government partners, industry and academia. In addition, she spearheaded an effort to establish a virtual symposium series, developed communications products to explain the program to national and international partners, and helped establish a science communication position.
Dr. Matheson previously served as both a senior science advisor within the Office of the Chief Science Operating Officer, and National Manager, Plant Health Research and Strategies, at the Canadian Food Inspection Agency.
After 10 years consulting as a grants facilitator in clinical research, Dr. Matheson moved to the public service to pursue interests in science policy and security science.
Musk’s attention to Twitter is hurting his bread and butter.
Since September last year, Elon Musk had been regarded as the world’s richest person, with the stock price of electric vehicle maker Tesla the sole reason behind his dramatic rise to the top. With Tesla stock losing 50 percent of its value since the beginning of the year, Musk has now dropped to number two on Bloomberg’s list of the world’s richest people.
In April, Musk announced his decision to buy Twitter and take the social media company private to unlock its true potential. The timing of his offer could not have been worse, as the U.S. Federal Reserve had begun tightening monetary policy to rein in inflation. Within days, Musk’s $44 billion offer looked like too high a price to pay, as the stock prices of tech companies shrank with higher interest rates.
The Large Hadron Collider Beauty (LHCb) experiment at CERN is the world’s leading experiment in quark flavor physics with a broad particle physics program. Its data from Runs 1 and 2 of the Large Hadron Collider (LHC) has so far been used for over 600 scientific publications, including a number of significant discoveries.
While all scientific results from the LHCb collaboration are already publicly available through open access papers, the data used by the researchers to produce these results is now accessible to anyone in the world through the CERN open data portal. The data release is made in the context of CERN’s Open Science Policy, reflecting the values of transparency and international collaboration enshrined in the CERN Convention for more than 60 years.
“The data collected at LHCb is a unique legacy to humanity, especially since no other experiment covers the region LHCb looks at,” says Sebastian Neubert, leader of the LHCb open data project. “It has been obtained through a huge international collaborative effort, which was funded by the public. Therefore the data belongs to society.”
Government support is needed, however, to help consumers overcome heat pumps’ higher upfront costs relative to alternatives. The costs of purchasing and installing a heat pump can be up to four times as much as those for a gas boiler. Financial incentives for heat pumps are now available in 30 countries.
In the IEA’s most optimistic scenario – in which all governments achieve their energy and climate pledges in full – heat pumps become the main way of decarbonising space and water heating worldwide. The agency estimates that heat pumps have the potential to reduce global carbon dioxide (CO2) emissions by at least 500 million tonnes in 2030 – equal to the annual CO2 emissions of all cars in Europe today. Leading manufacturers report promising signs of momentum and policy support and have announced plans to invest more than US$4 billion in expanding heat pump production and related efforts, mostly in Europe.
Opportunities also exist for heat pumps to provide low-temperature heat in industrial sectors, especially in the paper, food, and chemicals industries. In Europe alone, 15 gigawatts of heat pumps could be installed across 3,000 facilities in these three sectors, which have been hit hard by recent rises in natural gas prices.