
Stacking solar cells increases their efficiency. Working with partners in the PERCISTAND project, researchers at the Karlsruhe Institute of Technology (KIT) have produced perovskite/CIS tandem solar cells with an efficiency of nearly 25%—the highest value achieved thus far with this technology. Moreover, this combination of materials is light and versatile, making it possible to envision the use of these tandem solar cells in vehicles, portable equipment, and devices that can be folded or rolled up. The researchers present their results in the journal ACS Energy Letters.

Perovskite solar cells have made astounding progress over the past decade; their efficiency is now comparable to that of long-established silicon solar cells. Perovskites are innovative materials with a special crystal structure, and researchers worldwide are working to get this photovoltaic technology ready for practical applications. The more electricity solar cells generate per unit of surface area, the more attractive they are for consumers.

The efficiency of solar cells can be increased by stacking two or more cells. If each of the stacked cells is especially efficient at absorbing light from a different part of the solar spectrum, inherent losses can be reduced and efficiency boosted. The efficiency is a measure of how much of the incident sunlight is converted into electricity. Thanks to their versatility, perovskite solar cells make outstanding components for such tandems. Tandem solar cells using perovskites and silicon have reached a record efficiency of over 29%, considerably higher than that of single cells made of perovskite (25.7%) or silicon (26.7%).
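As a toy numerical illustration of what these percentages mean, the Python sketch below simply divides electrical output by incident solar power under the standard 1,000 W/m² test irradiance; the 250 W output and 1 m² area are made-up numbers chosen to land on the ~25% figure quoted above, not measured values.

```python
# Minimal sketch: solar-cell efficiency as power converted over power received.
# The standard test irradiance (1000 W/m^2) is real; the panel numbers below
# are illustrative placeholders, not measured data.

STC_IRRADIANCE = 1000.0  # W/m^2, standard test condition

def efficiency(power_out_w: float, area_m2: float,
               irradiance_w_m2: float = STC_IRRADIANCE) -> float:
    """Fraction of incident sunlight converted into electricity."""
    return power_out_w / (irradiance_w_m2 * area_m2)

# A hypothetical 1 m^2 tandem cell delivering 250 W under standard irradiance:
print(f"{efficiency(250.0, 1.0):.1%}")  # -> 25.0%, the record cited above
```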

How to use causal influence diagrams to recognize the hidden incentives that shape an AI agent’s behavior.


There is rightfully a lot of concern about the fairness and safety of advanced machine learning systems. To attack the root of the problem, researchers can analyze the incentives posed by a learning algorithm using causal influence diagrams (CIDs). Among others, DeepMind Safety Research has written about their research on CIDs, and I have written before about how they can be used to avoid reward tampering. However, while there is some writing on the types of incentives that can be found using CIDs, I haven't seen a succinct write-up of the graphical criteria used to identify them. To fill this gap, this post summarizes the incentive concepts and their corresponding graphical criteria, which were originally defined in the paper Agent Incentives: A Causal Perspective.

A causal influence diagram is a directed acyclic graph in which different types of nodes represent different elements of an optimization problem. Decision nodes represent values that an agent can influence, utility nodes represent the optimization objective, and structure nodes (also called chance nodes) represent the remaining variables, such as the state. The arrows show how the nodes are causally related, with dotted arrows indicating the information an agent uses to make a decision. Below is the CID of a Markov decision process, with decision nodes in blue and utility nodes in yellow:
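To make the structure concrete, here is a minimal sketch of a two-step MDP represented as a CID, built with Python and networkx. The node names (S1, A1, R1, …) and the choice of library are mine, not part of the original diagram.

```python
# A two-step MDP as a causal influence diagram (illustrative sketch).
import networkx as nx

cid = nx.DiGraph()

# Chance/structure nodes (states), decision nodes (actions), utility nodes (rewards).
for s in ["S1", "S2"]:
    cid.add_node(s, kind="chance")
for a in ["A1", "A2"]:
    cid.add_node(a, kind="decision")
for r in ["R1", "R2"]:
    cid.add_node(r, kind="utility")

# Causal edges: the state and action at each step determine the reward
# and the next state.
cid.add_edges_from([("S1", "R1"), ("A1", "R1"),
                    ("S1", "S2"), ("A1", "S2"),
                    ("S2", "R2"), ("A2", "R2")])

# Information edges (drawn dotted in CIDs): what the agent observes
# before choosing each action.
cid.add_edge("S1", "A1", information=True)
cid.add_edge("S2", "A2", information=True)

assert nx.is_directed_acyclic_graph(cid)  # a CID must be acyclic
```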

The first model tries to predict a high school student's grades in order to evaluate their university application. The model uses the student's high school and gender as input and outputs the predicted GPA. In the CID below, predicted grade is the decision node, and since we train the model for accurate predictions, accuracy is the utility node. The remaining nodes are structure nodes showing how relevant facts about the world relate to each other. The arrows from gender and high school to predicted grade show that these are the model's inputs. For our example we assume that a student's gender doesn't affect their grade, so there is no arrow between them. On the other hand, a student's high school is assumed to affect their education, which in turn affects their grade, which of course affects accuracy. The example also assumes that a student's race influences the high school they go to. Note that only high school and gender are observed by the model.
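The example can be probed in code. The sketch below (again Python with networkx, node names of my choosing) rebuilds this graph and asks a simplified question, as a rough stand-in for the paper's full graphical criteria: does each observed feature affect accuracy through the world itself, i.e. via a path that bypasses the model's decision?

```python
# Sketch of the grade-prediction CID from the example above.
import networkx as nx

cid = nx.DiGraph()
cid.add_edges_from([
    ("race", "high_school"),          # race influences school attended
    ("high_school", "education"),     # school shapes education
    ("education", "grade"),           # education drives the true grade
    ("grade", "accuracy"),            # true grade enters the objective
    ("predicted_grade", "accuracy"),  # so does the prediction
])
# Dotted information edges: the model only observes school and gender.
cid.add_edge("high_school", "predicted_grade", information=True)
cid.add_edge("gender", "predicted_grade", information=True)

world = cid.copy()
world.remove_node("predicted_grade")  # cut the decision out of the graph
for feature in ["high_school", "gender"]:
    affects = nx.has_path(world, feature, "accuracy")
    print(feature, "affects accuracy via the world:", affects)
# -> high_school True, gender False: observing the school is genuinely
#    informative for the objective, while gender is not, matching the
#    assumption stated above.
```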

Armed with little more than a computer, hackers are increasingly setting their sights on some of the biggest things that humans can build.

Vast container ships and chunky freight planes — essential in today’s global economy — can now be brought to a halt by a new generation of code warriors.

“The reality is that an aeroplane or vessel, like any digital system, can be hacked,” David Emm, a principal security researcher at cyber firm Kaspersky, told CNBC.

Tesla CEO Elon Musk described the electric automaker's factories in Austin and Berlin as "money furnaces" that were losing billions of dollars because supply chain breakdowns were limiting the number of cars they could produce.

In a May 30 interview with a Tesla owners club that was just released this week, Musk said that getting the Berlin and Austin plants functional "are overwhelmingly our concerns. Everything else is a very small thing," but added that "it's all gonna get fixed real fast."

It’s not clear how much has changed in the three weeks since the interview, but last week Musk tweeted congratulations to his Berlin team for producing 1,000 cars in a week.

While electric vehicles promise a green future, the batteries that power them don’t boast the same level of sustainability.


While driving electric vehicles is a step towards a greener future, the car batteries that power them are not as sustainable. Though the battery is at the heart of any EV, most are lithium-ion packs with a limited lifespan that starts to degrade from the first time you charge them. So what happens when they reach the end of their usable capacity?

The cycle of charging and discharging causes batteries to lose energy and power, and the more charge cycles a battery goes through, the faster it degrades. Once batteries fall to 70 or 80% of their original capacity, which typically happens after 5 to 8 years or around 100,000 miles of driving, they have to be replaced, according to Science Direct.
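As a back-of-envelope illustration of how per-cycle losses compound to that replacement threshold, here is a toy Python model; the fade rate and usage pattern are assumed numbers for the sketch, not manufacturer data.

```python
# Toy capacity-fade model: each full charge cycle retains a fixed fraction
# of capacity, and the pack is considered end-of-life at 80% of original.
# Both constants below are illustrative assumptions.

FADE_PER_CYCLE = 0.0002   # 0.02% of capacity lost per cycle (assumed)
END_OF_LIFE = 0.80        # replacement threshold cited above

def cycles_until_replacement(fade=FADE_PER_CYCLE, floor=END_OF_LIFE):
    capacity, cycles = 1.0, 0
    while capacity > floor:
        capacity *= 1.0 - fade
        cycles += 1
    return cycles

n = cycles_until_replacement()
print(n, "charge cycles")                              # ~1116 with these numbers
print(round(n * 2 / 365, 1), "years at one full cycle every other day")  # ~6.1
```

With these assumptions the pack lasts roughly six years, which sits inside the 5-to-8-year range the article cites; a gentler usage pattern or lower fade rate stretches the estimate accordingly.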

Given electric vehicles' rising popularity, their battery waste will become a major issue. Experts estimate that 12 million tons of batteries will be thrown away by 2030, The Guardian reported. The conundrum for manufacturers and consumers is that although the batteries can be recycled, there are not enough facilities to handle them: to date, there are only four lithium-ion recycling centers in the United States (via WCNC). This number must grow exponentially in the next few years, as industry experts predict there will be 85 million electric vehicles on the road by 2030 (via Science Direct).

In 2009, a computer scientist then at Princeton University named Fei-Fei Li created a data set that would change the history of artificial intelligence. Known as ImageNet, the data set included millions of labeled images that could train sophisticated machine-learning models to recognize objects in pictures. The machines surpassed human recognition abilities in 2015. Soon after, Li began looking for what she called another of the "North Stars" that would give AI a different push toward true intelligence.

She found inspiration by looking back in time some 530 million years to the Cambrian explosion, when numerous animal species appeared for the first time. An influential theory posits that the burst of new species was driven in part by the emergence of eyes that could see the world around them for the first time. Li realized that vision in animals never occurs by itself but instead is "deeply embedded in a holistic body that needs to move, navigate, survive, manipulate and change in the rapidly changing environment," she said. "That's why it was very natural for me to pivot towards a more active vision [for AI]."

Today, Li’s work focuses on AI agents that don’t simply accept static images from a data set but can move around and interact with their environments in simulations of three-dimensional virtual worlds.

Cerebras Systems, maker of the world’s largest processor, has broken the record for the most complex AI model trained using a single device.

Using one CS-2 system, powered by the company’s wafer-sized chip (WSE-2), Cerebras is now able to train AI models with up to 20 billion parameters thanks to new optimizations at the software level.

The firm says the breakthrough will resolve one of the most frustrating problems for AI engineers: the need to partition large-scale models across thousands of GPUs. The result is an opportunity to drastically cut the time it takes to develop and train new models.
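To see why a 20-billion-parameter model normally forces that kind of partitioning, a bit of memory arithmetic helps. The Python sketch below uses a common rule of thumb for mixed-precision Adam training (16 bytes of state per parameter) and an assumed 80 GB accelerator; neither figure describes Cerebras's actual scheme.

```python
# Back-of-envelope: memory footprint of training a 20B-parameter model.
# Assumes mixed-precision Adam: fp16 weights (2 B) + fp16 gradients (2 B)
# + fp32 master weights, momentum, and variance (3 x 4 B). This is a
# common rule of thumb, not Cerebras's implementation.

PARAMS = 20e9
BYTES_PER_PARAM = 2 + 2 + 3 * 4      # = 16 bytes of state per parameter
GPU_MEMORY_GB = 80                   # assumed high-end datacenter GPU

total_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"~{total_gb:,.0f} GB of training state")                     # ~320 GB
print(f"~{total_gb / GPU_MEMORY_GB:.0f} GPUs just to hold that state")  # ~4
```

Holding the raw training state alone already exceeds any single GPU, and in practice activations, batch sizes, and throughput targets drive real deployments to far larger clusters, which is the partitioning headache the article describes a single CS-2 sidestepping.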