
The General Theory of General Intelligence: A Pragmatic Patternist Perspective — paper by Ben Goertzel: https://arxiv.org/abs/2103.15100

Abstract: “A multi-decade exploration into the theoretical foundations of artificial and natural general intelligence, which has been expressed in a series of books and papers and used to guide a series of practical and research-prototype software systems, is reviewed at a moderate level of detail. The review covers underlying philosophies (patternist philosophy of mind, foundational phenomenological and logical ontology), formalizations of the concept of intelligence, and a proposed high level architecture for AGI systems partly driven by these formalizations and philosophies. The implementation of specific cognitive processes such as logical reasoning, program learning, clustering and attention allocation in the context and language of this high level architecture is considered, as is the importance of a common (e.g. typed metagraph based) knowledge representation for enabling “cognitive synergy” between the various processes. The specifics of human-like cognitive architecture are presented as manifestations of these general principles, and key aspects of machine consciousness and machine ethics are also treated in this context. Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered.”
Talk held at AGI17 — http://agi-conference.org/2017/#AGI17 #AGI #ArtificialIntelligence #Understanding #MachineUnderstanding #CommonSense #ArtificialGeneralIntelligence #PhilMind https://en.wikipedia.org/wiki/Artificial_general_intelligence

Many thanks for tuning in!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9PIfq2ZYlQsXRIn5BcLH2onbiSI7g79mOH_AFCdIk/
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating.
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: https://www.patreon.com/scifuture
c) Sharing the media SciFuture creates.

Kind regards.
Adam Ford.
- Science, Technology & the Future — #SciFuture — http://scifuture.org

“Intelligence supposes goodwill,” Simone de Beauvoir wrote in the middle of the twentieth century. In the decades since, as we have entered a new era of technology risen from our minds yet not always consonant with our values, this question of goodwill has faded dangerously from the set of considerations around artificial intelligence and the alarming cult of increasingly advanced algorithms, shiny with technical triumph but dull with moral insensibility.

In De Beauvoir’s day, long before the birth of the Internet and the golden age of algorithms, the visionary mathematician, philosopher, and cybernetics pioneer Norbert Wiener (November 26, 1894–March 18, 1964) addressed these questions with astounding prescience in his 1954 book The Human Use of Human Beings, the ideas in which influenced the digital pioneers who shaped our present technological reality and have recently been rediscovered by a new generation of thinkers eager to reinstate the neglected moral dimension into the conversation about artificial intelligence and the future of technology.

A decade after The Human Use of Human Beings, Wiener expanded upon these ideas in a series of lectures at Yale and a philosophy seminar at Royaumont Abbey near Paris, which he reworked into the short, prophetic book God & Golem, Inc. (public library). Published by MIT Press in the final year of his life, it won him the posthumous National Book Award in the newly established category of Science, Philosophy, and Religion the following year.

Possibly a move to freeze and stall the tech, like the bioethics clowns who were able to freeze biotech. But China wouldn't sign on to any freeze, thankfully. And the tech has already spread across third-world countries.


WASHINGTON, June 6 (Reuters) — Senate Majority Leader Chuck Schumer said on Tuesday he has scheduled three briefings for senators on artificial intelligence, including the first classified briefing on the topic.

In a letter to colleagues on Tuesday, the Democratic leader said senators need to deepen their understanding of artificial intelligence.

“AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement,” Schumer said.

The 2020 Nobel Prize in Chemistry was awarded to Dr. Jennifer Doudna and Dr. Emmanuelle Charpentier for their work on the gene-editing technique known as CRISPR-Cas9. It gives us the ability to change the DNA of any living thing, from plants and animals to humans.

The applications are enormous, from improving farming to curing diseases. A decade or so from now, CRISPR will no doubt be taught in high schools and be a basic building block of medicine and agriculture. It is going to change everything.

There are ethical and moral concerns, of course, and we will need regulations to ensure this powerful technology is not abused. But we should focus on the remarkable opportunities CRISPR has opened up for us.

I quoted and responded to this remark:

“…we probably will not solve death and this actually shouldn’t be our goal.” Well, nice as she seems, thank goodness Dr. Levine does not run the scientific community involved in rejuvenation.

The first bridge looks like it’s going to be plasma dilution, and this may reach the general population in just a few short years. People who have taken this treatment report things like their arthritis and back pain vanishing.

After that comes epigenetic reprogramming to treat the things that kill you in old age. And so on, bridge after bridge. If you have issues with the future, some problem with people living as long as they like, then by all means you have the freedom to grow old and die. That sounds mean, but then I think it’s mean to inform me that I have to die because you think we must in the name of “progress.” This idea that living for centuries or longer is some horrible moral crime just holds no water.


Science can’t stop aging, but it may be able to slow our epigenetic clocks.

In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, doing so in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know about medicine; it turns out that they also need to know about, or at least be sufficiently aware of, the intertwining of AI and the law during their medical careers.

Here’s why.


Is generative AI a blessing or a curse when it comes to medical doctors and the role of medical malpractice lawsuits?

Our technological age is witnessing a breakthrough that has existential implications and risks. The innovative behemoth, ChatGPT, created by OpenAI, is ushering us inexorably into an AI economy where machines can spin human-like text, spark deep conversations and unleash unparalleled potential. However, this bold new frontier has its challenges. Security, privacy, data ownership and ethical considerations are complex issues that we must address, as they are no longer just hypothetical but a reality knocking at our door.

The G7, composed of the world’s seven most advanced economies, has recognized the urgency of addressing the impact of AI.


To understand how countries may approach AI, we need to examine a few critical aspects.

Clear regulations and guidelines for generative AI: To ensure the responsible and safe use of generative AI, it’s crucial to have a comprehensive regulatory framework that covers privacy, security and ethics. This framework will provide clear guidance for both developers and users of AI technology.

Public engagement: It’s important to involve different viewpoints in policy discussions about AI, as these decisions affect society as a whole. To achieve this, public consultations or conversations with the general public about generative AI can be helpful.

Year 2022


Experiments such as this one cannot be funded with federal research dollars, though they break no U.S. laws. The work was conducted in China, not because it was illegal in the United States, the researchers said, but because the monkey embryos, which are difficult to procure and expensive, were available there. The experiment used a total of 150 embryos, which were obtained without harming the monkeys, “just like in the IVF procedure,” Tan said.

But such experiments, which combine human cells with those of animals, are nevertheless controversial. This work, and other work by Izpisua Belmonte, has moved so rapidly that bioethicists have had trouble keeping up.

“The complicated thing is that we need better models of human disease, but the better those models are, the closer they bring us to the ethical issues we were trying to avoid by not doing experiments in humans,” Farahany said. “Remarkable steps forward require urgent public engagement.”