
This time I want to talk about a new concept in this Age of Artificial Intelligence and the already insipid world of social networks. Quite a few years ago, I named it “Counterpart” (long before the TV series “Counterpart” and “Black Mirror”, or even the movie “Transcendence”).

It was the essence of the ETER9 Project that was taking shape in my head.

Over the years, and with the evolution of technology (and of human beings themselves), the “Counterpart” concept has kept improving, and with each passing day it makes more sense!

When, in 2015, Eileen Brown looked at the ETER9 Project (crazy for many, visionary for a few) and wrote an interesting article for ZDNet titled “New social network ETER9 brings AI to your interactions”, it gave worldwide exposure to something the world was not expecting.

Someone in a lost corner of the world (outside the United States) was risking everything he had (very little, or less than nothing) on a vision worthy of the American dream. At that time, Facebook was already beginning to annoy the clearer minds that were looking for something different and a more innovative world.

Today, after that testing ground, we see that Facebook (Meta, or whatever it calls itself) is nothing but an illusion, or, I dare say, a big disappointment. No, no, no! I am not bad-mouthing Facebook just because I have a project in hand that is seen as a potential competitor.

Article originally published on LINKtoLEADERS under the Portuguese title “Sem saber ler nem escrever!” (“Without knowing how to read or write!”).

In the 80s, “with no knowledge, only intuition”, I discovered the world of computing. I believed computers could do everything, as if they were an electronic god. But when I asked the TIMEX Sinclair 1000 to draw the planet Saturn (I am fascinated by this planet, maybe because of its rings), all I glimpsed was a strange message on the black-and-white TV.

Photo: Upper row: Associate American Corner librarian Donna Lyn G. Labangon, Space Apps global leader Dr. Paula S. Bontempi, former DICT Usec. Monchito B. Ibrahim, Animo Labs executive director Mr. Federico C. Gonzalez, DOST-PCIEERD deputy executive director Engr. Raul C. Sabularse, PLDT Enterprise Core Business Solutions vice president and head Joseph Ian G. Gendrano, lead organizer Michael Lance M. Domagas, and Animo Labs program manager Junnell E. Guia. Lower row: Dominic Vincent D. Ligot, Frances Claire Tayco, Mark Toledo, and Jansen Dumaliang Lopez of the Aedes project.

MANILA, Philippines — A dengue case forecasting system using space data, built by Philippine developers, won the 2019 National Aeronautics and Space Administration’s International Space Apps Challenge. With over 29,000 people participating globally across 71 countries, the solution was named one of the six winners for best use of data, the award for the solution that best makes space data accessible or leverages it for a unique application.

Dengue fever is a viral, infectious tropical disease spread primarily by female Aedes aegypti mosquitoes. With 271,480 cases and 1,107 deaths reported by the World Health Organization from January 1 to August 31, 2019, Dominic Vincent D. Ligot, Mark Toledo, Frances Claire Tayco, and Jansen Dumaliang Lopez of CirroLytix developed a model that forecasts dengue cases from climate and digital data and pinpoints possible hotspots from satellite data.

Copernicus Sentinel-2 and Landsat 8 satellite data were used to reveal potential dengue hotspots.
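
To make the idea concrete, here is a minimal, purely illustrative sketch of a climate-driven dengue forecast in Python. It is not the Aedes team’s actual pipeline: the synthetic rainfall and temperature series, the coefficients, and the choice of scikit-learn’s PoissonRegressor are all assumptions made for the sake of the example.

    # Hypothetical sketch: forecasting weekly dengue case counts from climate data.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(0)
    weeks = 104
    rainfall = rng.gamma(2.0, 30.0, weeks)                             # mm per week (synthetic)
    temperature = 27 + 3 * np.sin(np.arange(weeks) * 2 * np.pi / 52)   # deg C (synthetic)
    X = np.column_stack([rainfall, temperature])
    cases = rng.poisson(np.exp(0.01 * rainfall + 0.05 * temperature))  # synthetic case counts

    # Train on all but the last four weeks, then forecast the held-out month.
    model = PoissonRegressor().fit(X[:-4], cases[:-4])
    print("Forecast for held-out weeks:", model.predict(X[-4:]).round(1))

In a real system, the climate features would come from weather observations, digital signals such as search interest could enter as further columns, and the satellite imagery would be handled separately for hotspot mapping.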

Artificial Intelligence (AI) is an emerging field of computer programming that is already changing the way we interact online and in real life, but the term “intelligence” has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness, as that is the real driver behind intelligent decisions.

Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.

Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have are artificial serfs: computers able to trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, yet blind to any real intelligence. What’s needed is artificial awareness.

What is the ultimate goal of Artificial General Intelligence?

In this video series, the Galactic Public Archives takes bite-sized looks at a variety of terms, technologies, and ideas that are likely to be prominent in the future. Terms are regularly changing and being redefined with the passing of time. With constant breakthroughs and the development of new technology and other resources, we seek to define what these things are and how they will impact our future.


In a previous essay, I suggested how we might do better with the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or providing a capacity to learn some fixed set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, we can act with modest confidence after working to discover goals, developing an understanding of our discovery process that lets us strike an equilibrium between the risk of doing something wrong and the cost of the work needed to uncover more stakeholders and their goals. This approach promotes moderation, since undiscovered goals may contradict any particular action. In short, we’d like a superintelligence that applies the non-parametric intuition: the intuition that we can’t know all the factors, but can partially discover them through well-motivated trade-offs.
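
As a rough illustration of that equilibrium (my own toy example, not something from the original essay), a discovery loop might continue only while the expected reduction in the risk of acting wrongly still exceeds the cost of consulting another source; every name and number below is hypothetical.

    # Hypothetical sketch: stop goal discovery at the risk/cost equilibrium.
    def discover_goals(sources, risk_reduction, cost_per_source):
        """sources: (name, goals) pairs ordered by expected relevance.
        risk_reduction(i): estimated drop in the risk of a wrong action
        if source i is also consulted."""
        known = []
        for i, (_name, goals) in enumerate(sources):
            if risk_reduction(i) <= cost_per_source:
                break  # equilibrium reached: more digging costs more than it is worth
            known.extend(goals)
        return known

    # Toy numbers with diminishing returns from each additional source.
    sources = [("users", ["privacy"]), ("neighbors", ["quiet"]), ("ecosystem", ["clean water"])]
    print(discover_goals(sources, lambda i: 1.0 / (i + 1), cost_per_source=0.4))
    # -> ['privacy', 'quiet']: the third source is never consulted under these estimates.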

However, I’ve come to the view that the non-parametric intuition, while correct, can on its own be cripplingly misguided. Unfortunately, going through a discovery-rich design process doesn’t guarantee an appropriate outcome. It is possible for none of the apparently relevant sources to reflect significant consequences.

How could one possibly do better than accepting this limitation, that relevant information is sometimes not present in any of the apparently relevant information sources? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that “staying grounded” in these conditions is one way to know that important design information is missing and to seek it out. The Onion article “Man’s Garbage To Have Much More Significant Effect On Planet Than He Will” illustrates a common failure to live in a grounded way.

In other words, “staying grounded” means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. Some goals are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for the systems that sustain us and creatures like us. Participating functionally in these systems means, at a basic level, aiming to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Suppose there were a superintelligence whose individual agents stand to us, in capacity, as we stand to mice. What might we reasonably hope for from the agents of such an intelligence? My hope is that these agents are ecologists who wish for us to flourish in our natural lifeways. This does not mean that they leave us all to our own preserves, though hopefully they will see the advantage of keeping some unaltered wilderness in which to observe how we choose to live when left to our own devices. Instead, we can be participants in patterned arrangements designed to satisfy our needs in return for our engaged participation in larger systems of resource management. By this standard, our own human systems might be found wanting by many living creatures today.

Given this, a productive approach to developing superintelligence would be concerned not only with its technical creation, but also with being in a position to demonstrate how all can flourish through good stewardship, setting a proper example for when these systems emerge and are trying to understand what goals should look like. We would also want the facts of its material conditions, and of ours, to be readily apparent, so that it does not start from a disconnected and disembodied basis.

Overall, this means that in addition to the capacity to discover more goals, it would be instructive to supply this superintelligence with a schema for describing the relationships and conditions under which current participants flourish, as well as the goal of promoting such flourishing whenever the means are clear and circumstances indicate it will not emerge of its own accord. This kind of information technology for ecological engineering might also be useful for our own purposes.
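
As a purely hypothetical illustration of what such a schema might record (the class and field names here are my own, not from the essay), one could describe each participant, the material conditions it depends on, and whether those conditions currently hold, flagging cases where flourishing will not emerge of its own accord.

    # Hypothetical sketch of a flourishing schema for ecological stewardship.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Condition:
        """A material condition a participant depends on, e.g. clean water."""
        name: str
        currently_met: bool

    @dataclass
    class Participant:
        """An entity whose flourishing the schema tracks."""
        name: str
        depends_on: List[Condition] = field(default_factory=list)

        def unmet_conditions(self) -> List[Condition]:
            return [c for c in self.depends_on if not c.currently_met]

    # Flag participants whose flourishing needs help: a sustaining condition is unmet.
    wetland_frogs = Participant("wetland frogs", [Condition("clean water", False),
                                                  Condition("insect prey", True)])
    for cond in wetland_frogs.unmet_conditions():
        print(f"Intervention may be warranted: {wetland_frogs.name} lack {cond.name}")

A real schema would need richer relationship types than this, but even this much makes unmet conditions legible both to us and to the system being supplied with them.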

What will a superintelligence take as its flourishing? It is hard to say. Hopefully, though, it will find sustaining, extending, and promoting the flourishing of the ecology that allowed its emergence to be an inspiring, challenging, and creative goal.