THE global investment bank Goldman Sachs has claimed mining asteroids for precious metals is a “realistic” goal.
It has released a report exploring the possibility of using an “asteroid-grabbing spacecraft” to extract platinum from space rocks.
The fast-advancing fields of neuroscience and computer science are on a collision course. David Cox, Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard, explains how his lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.
© Sören Boyn / CNRS/Thales physics joint research unit.
Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses.
I will be participating in London #Spaceapps2017
Participant registration for #SpaceApps 2017 is now open. Develop solutions for problems in space & on Earth. Sign up: 2017.spaceappschallenge.org/locations
New article by Transhumanist Party:
Gennady Stolyarov II
The Spring 2017 issue of the magazine Issues in Science and Technology, published by the National Academy of Sciences, features an article by Professor Steve Fuller, the Auguste Comte Chair in Social Epistemology in the Department of Sociology at the University of Warwick in the United Kingdom. This article, entitled “Does this pro-science party deserve our votes?” discusses the Transhumanist Party from the time of Zoltan Istvan’s 2016 run for President.
In this article, which offers both positive discussion and critiques of Istvan’s campaign, Professor Fuller writes:
What Istvan offered voters was a clear vision of how science and technology could deliver a heaven on earth for everyone. The Transhumanist Bill of Rights envisages that it is within the power of science and technology to deliver the end to all significant suffering, the enhancement of one’s existing capacities, and the indefinite extension of one’s life. To the fans whom Istvan attracted during his campaign, these added up to “liberty makers.” For them, the question was what prevented the federal government from prioritizing what Istvan had presented as well within human reach.
Off-grid housing that actually works for families is hard to come by, but that’s what ReGen Villages is striving towards with their concept for new self-sustaining communities.
The startup real estate company has a dream to create regenerative communities that not only produce their own food but also generate their own power. What is usually possible only in rural areas with renewable energy sources would become a reality for people who want these amenities while still having close neighbors.
This idea is more than just a dream, however: the development company has its sights set on its first site in Almere, Netherlands, with the goal of opening it in 2018.
I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.
The question about superintelligence I wish to address is the “paperclip universe” problem. Suppose that an industrial program, tasked with maximizing the number of paperclips, is also equipped with a general intelligence program so that it can tackle this objective in the most creative ways, as well as internet connectivity and text-processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not treat its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.
This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that doesn’t admit some kind of loophole or paradox which, if pursued with mechanical single-mindedness, is either similarly narrowly destructive or self-defeating.
Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we’ve described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. If done well, the system would couple this initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with an understanding that those goals are appropriate only to that initial list, and that finding new potential means requires developing a richer understanding of potential ends.
How can this work? It’s easy to imagine such an algorithmic admission leading to paralysis, either from finding contradictory objectives that apparently admit no solution, or from an analysis paralysis that perpetually requires confirming there are no undiscovered goals before proceeding. Alternatively, if stated incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as the program proceeds single-mindedly to consume resources. Clearly, a satisfactory superintelligence would need to reason appropriately about the goal discovery process.
There is a profession that has figured out a heuristic form of reasoning about goal discovery processes: design. Designers coined the phrase “the fuzzy front end” for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch rapidly back and forth between candidate solutions and analysis of those designs’ potential impacts, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to reveal all of the “obvious ideas” before undertaking a deeper analysis. Seasoned designers develop an understanding of when stakeholders are holding back and need to be prompted, and when equivocating stakeholders should be encouraged to move on. Designers will interleave a series of prototypes, experiential exercises, and pilot runs into their work to make sure that interventions really behave the way their analysis seems to indicate.
These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. “Nonparametric” does not mean that there are no parameters, but rather that the number of parameters is not given in advance: inferring that there are further parameters is part of the task. Suppose you were to move to a new town and ask around about the best restaurant. The first answer would certainly be new, but as you asked more people, you would start getting new answers more and more rarely, and the likelihood of each answer would begin to converge. In some cases the answers will be concentrated on a few restaurants, and in other cases they will be more dispersed. Either way, once we have an idea of how concentrated the answers are, we might see that a particular stretch without new answers is just bad luck, and that we should pursue further inquiry.
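The restaurant scenario above maps onto a standard generative model from nonparametric Bayesian statistics, the Chinese restaurant process. The sketch below is my own illustration, not something from the essay: each person you ask either names a restaurant you haven’t heard of (with probability shrinking as answers accumulate) or repeats an existing answer in proportion to its popularity. The function name and the `concentration` parameter are hypothetical choices for this demo.

```python
import random

def ask_around(num_queries, concentration=2.0, seed=0):
    """Simulate polling townspeople with a Chinese restaurant process:
    each answer is a brand-new restaurant with probability proportional
    to `concentration`, otherwise an existing answer with probability
    proportional to how often it has already been given."""
    rng = random.Random(seed)
    counts = []          # counts[i] = times restaurant i has been named
    new_answer_at = []   # query indices where a new answer first appeared
    for q in range(num_queries):
        total = sum(counts)
        if rng.random() < concentration / (total + concentration):
            counts.append(1)          # a restaurant we had not heard of
            new_answer_at.append(q)
        else:
            # pick an existing answer in proportion to its popularity
            r = rng.uniform(0, total)
            acc = 0.0
            chosen = len(counts) - 1  # fallback guards float edge cases
            for i, c in enumerate(counts):
                acc += c
                if r < acc:
                    chosen = i
                    break
            counts[chosen] += 1
    return counts, new_answer_at

counts, new_at = ask_around(200)
# New answers arrive quickly at first, then increasingly rarely:
# the number of distinct answers grows only logarithmically in the
# number of queries, which is why a long run of repeats is evidence
# (not proof) that the inventory of answers is nearly exhausted.
```

The interesting quantity is `new_at`: early queries cluster there, and later gaps between new answers widen, which is exactly the “are we just unlucky, or is the list complete?” judgment the paragraph describes.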
Asking why a given restaurant is the best provides a list of critical features that can be used to direct different inquiries that fill out the picture. What’s the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers, has the best value for money, is the tastiest, has the friendliest service? Designers discover aspects of their goals in an open-ended way that allows discovery to proceed in quick cycles of learning, taking on different aspects of the problem. This behavior would map well onto an active learning formulation of relational nonparametric inference.
There is a point at which information-gathering activities become less helpful than attending to the feedback from activities that act directly on existing goals. This happens when the cost of further discovery activities reaches equilibrium with the risk of making an intervention on incomplete information. In many circumstances the line between information gathering and direct intervention will be fuzzier still, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far.
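The cost/risk equilibrium described above can be made concrete as a toy value-of-information stopping rule. Everything here, the function name and the numbers alike, is a hypothetical illustration of the idea, not a method proposed in the essay: keep exploring while the expected loss avoided by surfacing an unknown goal exceeds the cost of another discovery step.

```python
def should_keep_exploring(p_new, loss_if_missed, step_cost):
    """Toy stopping rule: another discovery step is worth taking while
    the expected loss avoided -- the chance of surfacing an unknown goal
    times the loss of intervening without it -- exceeds the step's cost."""
    return p_new * loss_if_missed > step_cost

# Early on, unknown goals are still likely, so exploration pays for itself:
early = should_keep_exploring(p_new=0.5, loss_if_missed=100.0, step_cost=1.0)
# Once new goals have become rare, the same step cost no longer pays off:
late = should_keep_exploring(p_new=0.005, loss_if_missed=100.0, step_cost=1.0)
```

In practice `p_new` would itself be inferred, for instance from how concentrated the answers gathered so far have been, which is where the nonparametric machinery earns its keep.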
From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of the solution from the perspective of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or to whether the discovery process is at equilibrium. This mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than treating the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.
Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided unless it is supplemented with other ideas. To consider discovery analytically is not to discount the power of knowing about the unknown, but it doesn’t intrinsically value non-contingent truths. I will take on this topic in my next essay.
For a more detailed explanation, and an example of extending engineering design assessment to include nonparametric criteria, see “The Methodological Unboundedness of Limited Discovery Processes,” FormAkademisk, 7(4).