
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren't meant to serve as policy or regulation, or to interfere with either. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and to figure out how best to implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the ability to build things like autonomous weapons and fake-news-generating algorithms, it's likely that more governments will take a stand on the ethical concerns AI brings to the table.


The EU wants AI that’s fair and accountable, respects human autonomy and prevents harm.

Read more

Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision the software makes.

Technical robustness and safety — AI should be secure and accurate. It shouldn't be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn't be accessible to just anyone, and it shouldn't be easily stolen.

Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be "understood and traced by human beings." In other words, operators should be able to explain the decisions their AI systems make.

Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines (a minimal bias-audit sketch follows this list).

Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and should "enhance positive social change."

Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
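As a concrete illustration of the non-discrimination and fairness requirement, here is a minimal, hypothetical sketch of how a team might audit a trained model's outputs for group-level bias. The demographic-parity check, the column names, and the 0.10 tolerance are illustrative assumptions, not anything specified in the EU guidelines.

```python
# Hypothetical bias-audit sketch: compare a model's positive-prediction rate
# across demographic groups (a demographic-parity check). Column names and
# the 0.10 tolerance are illustrative assumptions, not EU requirements.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy data: model decisions (1 = approved) alongside a protected attribute.
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(audit, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # example tolerance; a real threshold would be context-specific
    print("Warning: approval rates differ substantially across groups.")
```

In practice a fairness audit would look at several metrics (equalized odds, calibration, and so on) and would be documented alongside the data and model, which also supports the transparency and accountability requirements above.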


AI technologies should be accountable, explainable, and unbiased, says EU.

Read more

Japan’s space agency wants to create a moon base with the help of robots that can work autonomously, with little human supervision.

The project, which has racked up three years of research so far, is a collaboration between the Japan Aerospace Exploration Agency (JAXA), the construction company Kajima Corp., and three Japanese universities: Shibaura Institute of Technology, The University of Electro-Communications, and Kyoto University.

Recently, the collaboration conducted an automated-construction experiment at the Kajima Seisho Experiment Site in Odawara, in central Japan.

Read more

Andrew Yang gives a dynamite interview on automation, UBI, and economic solutions for transitioning to the future.


Andrew Yang, award-winning entrepreneur, Democratic presidential candidate, and author of “The War on Normal People,” joins Ben to discuss the Industrial Revolution, Universal Basic Income, climate change, circumcision, and much more.

Check out more of Ben’s content on:
YouTube: https://www.youtube.com/c/benshapiro
Twitter: @benshapiro
Instagram: @officialbenshapiro

Check Andrew Yang out on:
Twitter: @ANDREWYANG
Facebook: https://www.facebook.com/andrewyang2020/
Website: https://www.yang2020.com

Read more

Next month, however, a team of MIT researchers will present a so-called “proxyless” neural architecture search algorithm that can speed up the AI-optimized AI design process by 240 times or more. That would put faster and more accurate AI within practical reach for a broad class of image recognition algorithms and related applications.

“There are all kinds of tradeoffs between model size, inference latency, accuracy, and model capacity,” says Song Han, assistant professor of electrical engineering and computer science at MIT. Han adds:

“[These] all add up to a giant design space. Previously people had designed neural networks based on heuristics. Neural architecture search tried to free this labor intensive, human heuristic-based exploration [by turning it] into a learning-based, AI-based design space exploration. Just like AI can [learn to] play a Go game, AI can [learn how to] design a neural network.”
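The MIT method searches architectures directly on the target task and hardware; the toy sketch below only illustrates the general idea Han describes, treating network design as an automated search loop. It uses naive random search over a made-up search space, with a placeholder scoring function standing in for real training and validation, so none of the names or numbers come from the MIT work.

```python
# Minimal illustration of neural architecture search as a search loop.
# This is NOT the MIT algorithm: it uses naive random search over a toy
# search space, and evaluate() stands in for training/validating a candidate.
import random

SEARCH_SPACE = {
    "num_layers":  [2, 4, 6],
    "kernel_size": [3, 5, 7],
    "width":       [16, 32, 64],
}

def sample_architecture() -> dict:
    """Pick one option per architectural choice at random."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch: dict) -> float:
    """Placeholder for 'train the candidate and measure validation accuracy'."""
    return 1.0 / (1 + abs(arch["num_layers"] - 4)) + arch["width"] / 128

best_arch, best_score = None, float("-inf")
for _ in range(20):  # search budget: 20 candidate networks
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print("Best architecture found:", best_arch, "score:", round(best_score, 3))
```

Learning-based approaches like the one described above replace the random sampler with a learned, gradient-based search and fold target-hardware constraints into the objective, which is how they bring the search cost down to practical levels.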

Read more