Archive for the ‘machine learning’ category

Aug 15, 2017

The Fundamental Limits of Machine Learning — By Jesse Dunietz | Nautilus

Posted by in category: machine learning

“She thought her solution was obvious. Her colleagues, though, were sure their solution was correct—and the two didn’t match. Was the problem with one of their answers, or with the puzzle itself?”

Read more

Jun 8, 2017

Artificial General Intelligence (AGI): Future A to Z

Posted by in categories: business, computing, cyborgs, engineering, ethics, existential risks, machine learning, robotics/AI, singularity

What is the ultimate goal of Artificial General Intelligence?

In this video series, the Galactic Public Archives takes bite-sized looks at a variety of terms, technologies, and ideas that are likely to be prominent in the future. Terms are regularly changing and being redefined with the passing of time. With constant breakthroughs and the development of new technology and other resources, we seek to define what these things are and how they will impact our future.

Continue reading “Artificial General Intelligence (AGI): Future A to Z” »

Apr 11, 2017

Limits to the Nonparametric Intuition: Superintelligence and Ecology

Posted by in categories: environmental, existential risks, machine learning

In a previous essay, I suggested how we might do better with the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or providing a capacity to learn some set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, we can act with modest confidence after working to discover goals, developing an understanding of our discovery processes that lets us assert an equilibrium between the risk of doing something wrong and the cost of the work needed to uncover more stakeholders and their goals. This approach promotes moderation, since undiscovered goals may contradict any particular action. In short, we’d like a superintelligence that applies the non-parametric intuition: the intuition that we can’t know all the factors, but can partially discover them through well-motivated trade-offs.
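The equilibrium described above can be sketched as a simple stopping rule: keep discovering stakeholders and their goals while the risk of acting on what is known still exceeds the cost of further discovery. Everything here (the function names, the cost and risk models) is an illustrative assumption, not a design from the essay.

```python
def discover_goals(candidates, risk_of_acting, discovery_cost):
    """Uncover stakeholders' goals until an equilibrium is reached:
    stop when the risk of acting on the known goals no longer
    exceeds the cost of investigating the next stakeholder.
    (Illustrative sketch only.)"""
    known_goals = []
    for stakeholder, goals in candidates:
        if risk_of_acting(known_goals) <= discovery_cost(stakeholder):
            break  # equilibrium: more discovery costs more than it buys
        known_goals.extend(goals)
    return known_goals
```

With a toy risk model that falls as goals accumulate, the rule halts partway down the candidate list rather than insisting on exhaustive (impossible) discovery.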

However, I’ve come to the view that the non-parametric intuition, while correct, can on its own be cripplingly misguided. Unfortunately, going through a discovery-rich design process does not guarantee an appropriate outcome: it is possible that none of the apparently relevant sources reflect significant consequences.

How could one possibly do better than accepting this limitation, that relevant information is sometimes absent from every apparently relevant source? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that “staying grounded” in these conditions is one way to know that important design information is missing and to seek it out. The Onion article “Man’s Garbage To Have Much More Significant Effect On Planet Than He Will” is one example of a common failure to live in a grounded way.

In other words, “staying grounded” means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. There are some goals that are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for those systems that sustain us and creatures like us. A functioning participation in these systems at a basic level means we should aim to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Continue reading “Limits to the Nonparametric Intuition: Superintelligence and Ecology” »

Apr 6, 2017

The Nonparametric Intuition: Superintelligence and Design Methodology

Posted by in categories: engineering, machine learning

I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.

The question about superintelligence I wish to address is the “paperclip universe” problem. Suppose that an industrial program, designed with the goal of maximizing the number of paperclips, is also equipped with a general intelligence program so that it can tackle this objective in the most creative ways, as well as internet connectivity and text-processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not take its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.

This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that admits no loophole or paradox which, if pursued with mechanical single-mindedness, would be similarly destructive or self-defeating.

Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we’ve described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. Done well, the system would couple this initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with an understanding that those goals are appropriate only to that initial list, and that finding new potential means requires developing a richer understanding of potential ends.
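One way to make the idea concrete is a goal set that records its own incompleteness, so that any endorsement of an action is tempered by the standing possibility of undiscovered goals. The class below is a minimal sketch under my own assumptions (the names, the scoring scheme, and the three verdicts are all hypothetical, not the essay’s design).

```python
class ProvisionalGoalSet:
    """A goal list that never claims to be complete: new goals can be
    added at any time, and endorsements of actions are moderated by the
    known incompleteness of the list. (Illustrative sketch only.)"""

    def __init__(self, goals):
        self.goals = list(goals)
        self.complete = False  # by construction, completeness is never asserted

    def add_goal(self, goal):
        """Extend the list; the procedural understanding is that this
        can always happen again."""
        self.goals.append(goal)

    def endorse(self, action_scores):
        """Endorse an action only if it scores non-negatively on every
        known goal; even then, act with moderation, since unknown goals
        may still contradict it."""
        if all(action_scores.get(g, 0) >= 0 for g in self.goals):
            return "act" if self.complete else "act-with-moderation"
        return "defer"
```

Adding a newly discovered goal can flip a previously acceptable action to “defer”, which is exactly the coupling between new ends and old means that the paragraph above asks for.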

Continue reading “The Nonparametric Intuition: Superintelligence and Design Methodology” »

Aug 24, 2016

The iBrain is Here And it’s Already Inside Your Phone — By Steven Levy | Backchannel

Posted by in categories: business, machine learning, robotics/AI


“An exclusive inside look at how artificial intelligence and machine learning work at Apple”

Read more

Aug 5, 2016

This startup uses machine learning and satellite imagery to predict crop yields — By Alex Brokaw | The Verge

Posted by in categories: big data, business, machine learning, satellites, space


“Instead, Descartes relies on 4 petabytes of satellite imaging data and a machine learning algorithm to figure out how healthy the corn crop is from space.”

Read more

Jul 22, 2016

Google Sprints Ahead in AI Building Blocks, Leaving Rivals Wary — By Jack Clark | Bloomberg

Posted by in categories: machine learning, robotics/AI


“There’s a high-stakes race under way in Silicon Valley to develop software that makes it easy to weave artificial intelligence technology into almost everything, and Google has sprinted into the lead.”

Read more

Jul 5, 2016

When Humanity Meets A.I. | a16z Podcast

Posted by in categories: disruptive technology, education, ethics, machine learning, robotics/AI

Podcast with “Andreessen Horowitz Distinguished Visiting Professor of Computer Science … Fei-Fei Li [who publishes under Li Fei-Fei], associate professor at Stanford University.”

May 18, 2016

May 18th-20th Google I/O Developers Conference Live Feed

Posted by in categories: machine learning, virtual reality


Today’s conference emphasizes virtual reality and machine learning.

Live Feed

May 12, 2016

Recommendation Engines Yielding Stronger Predictions into Our Wants and Needs

Posted by in categories: computing, disruptive technology, economics, information science, innovation, internet, machine learning, software

If you’ve ever seen a “recommended item” on eBay or Amazon that was just what you were looking for (or maybe didn’t know you were looking for), it’s likely the suggestion was powered by a recommendation engine. In a recent interview, Raefer Gabriel, co-founder of machine learning startup Delvv, Inc., said these applications for recommendation engines and collaborative filtering algorithms are just the beginning of a powerful and broad-reaching technology.

Raefer Gabriel, Delvv, Inc.


Gabriel noted that content discovery on services like Netflix, Pandora, and Spotify is most familiar to people because of the way these services seem to “speak” to one’s preferences in movies, games, and music. Their relatively narrow focus on entertainment is a common thread that has made them successful as constrained domains. The challenge lies in developing recommendation engines for unbounded domains, like the internet, where there is more or less unlimited information.

“Some of the more unbounded domains, like web content, have struggled a little bit more to make good use of the technology that’s out there. Because there is so much unbounded information, it is hard to represent well, and to match well with other kinds of things people are considering,” Gabriel said. “Most of the collaborative filtering algorithms are built around some kind of matrix factorization technique and they definitely tend to work better if you bound the domain.”
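The matrix-factorization technique Gabriel refers to can be sketched in a few lines: learn a low-rank user matrix U and item matrix V so that U·Vᵀ approximates the observed ratings, then read predictions for unrated items off the reconstruction. This is a generic textbook sketch, not Delvv’s implementation; the hyperparameters and the tiny rating matrix are arbitrary.

```python
import numpy as np

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Minimal matrix-factorization recommender: stochastic gradient
    descent on the squared error of observed (non-zero) ratings, with
    L2 regularization. Returns user factors U and item factors V such
    that U @ V.T approximates R on the observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - U[u] @ V[i]        # prediction error
            u_row = U[u].copy()                # pre-update copy for V's step
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_row - reg * V[i])
    return U, V
```

After fitting, `U @ V.T` fills in the zero (unobserved) cells with predicted scores, which is the “recommended item” signal. Bounding the domain, as the quote suggests, shows up here as keeping the item set small and coherent enough for the low-rank assumption to hold.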

Continue reading “Recommendation Engines Yielding Stronger Predictions into Our Wants and Needs” »
