Working to integrate theology, ethics, and science, ISCAST Fellow and renowned author Dr. Brian Edgar discusses the future of humanity.
The Death of Death: The Scientific Possibility of Physical Immortality and its Moral Defense (Copernicus Books), by José Cordeiro and David Wood, is available as a Kindle edition, readable on Kindle devices, PCs, phones, and tablets.
Here’s my latest opinion article, just published at Merion West. It’s about AI and the environment! Give it a read!
“With artificial general intelligence (AGI) likely just decades away, there is an urgent need to consider the extent of environmental harm we are causing. AGI will likely question if humans are good stewards of the planet and quickly come to the conclusion that we are not.”
Many artificial intelligence (AI) scientists believe that artificial general intelligence (AGI)—intelligence on par with humans—will be achieved within 20 years. If this happens, what will AI think of people?
Answers to this question range widely, from AI being grateful to its creators, to it barely noticing us, to it wanting to be our equal. However, one theory increasingly being considered in ethics is that AI will be angry with us because of the environmental harm humans have caused to the planet and to other species.
Skeptics counter that AGI will not care about the environment because it is a machine with little need for nature. Despite this, I believe that AGI will not only want Earth to thrive but also want it protected from those who might destroy it—such as humans. AGI, like any intelligent person, would prefer a stable, biodiverse planet full of renewable resources over an ecological wasteland.
Claims that superintelligent AI poses a threat to humanity are frightening, but only because they distract from the real issues today, argues Mhairi Aitken, an ethics fellow at The Alan Turing Institute.
The 2023 edition of the exclusive Longevity Investors Conference is fast approaching, bringing together investors, companies, and researchers in Gstaad, Switzerland in September. One of the speakers at this year’s conference is scientist, writer, and presenter Dr Andrew Steele, author of the best-selling book Ageless: The New Science of Getting Older Without Getting Old. When it comes to his views on longevity, Steele sits firmly in the camp that aging, like cancer, is something humanity should be focused on curing.
Longevity.Technology: Last year, Steele told us he was “absolutely convinced” curing aging is possible, but that significant questions remain around how quickly we can get there. As he prepares to speak to more than 100 investors at LIC, we caught up with Steele to see how his views on longevity have evolved, and what he would say to those considering investing in the field.
First and foremost, Steele, who recently published a new, free chapter of Ageless on the moral, ethical and social consequences of treating aging, believes that longevity represents a huge “human opportunity” for investors.
AI is quickly becoming an essential part of daily work. It’s already being used to help improve operational processes, strengthen customer service, measure employee experience, and bolster cybersecurity efforts, among other applications. And with AI deepening its presence in daily life, as more people turn to AI bot services, such as ChatGPT, to answer questions and get help with tasks, its presence in the workplace will only accelerate.
Much of the discussion around AI in the workplace has been about the jobs it could replace. It’s also sparked conversations around ethics, compliance, and governance issues, with many companies taking a cautious approach to adopting AI technologies and IT leaders debating the best path forward.
While the full promise of AI is still uncertain, its early impact on the workplace can’t be ignored. It’s clear that AI will make its mark on every industry in the coming years, and it’s already shifting the skills employers are looking for. AI has also sparked renewed interest in long-held IT skills, while creating entirely new roles and skill sets companies will need in order to embrace AI successfully.
Researchers have created synthetic human embryos using stem cells, according to media reports. Remarkably, these embryos have reportedly been created from embryonic stem cells, meaning they do not require sperm and ova.
This development, widely described as a breakthrough that could help scientists learn more about human development and genetic disorders, was revealed this week in Boston at the annual meeting of the International Society for Stem Cell Research.
The research, announced by Professor Magdalena Żernicka-Goetz of the University of Cambridge and the California Institute of Technology, has not yet been published in a peer-reviewed journal. But Żernicka-Goetz told the meeting these human-like embryos had been made by reprogramming human embryonic stem cells.
Here’s my new article for Aporia Magazine. A lot of wild ideas in it. Give it a read:
Regardless of the ethics and whether the science can even one day be worked out for Quantum Archaeology, the philosophical dilemma it presents to Pascal’s Wager is glaring. If humans really could eradicate the essence of death as we know it—including even the ability to ever permanently die—Pascal’s Wager becomes unworkable. Frankly, so does my Transhumanist Wager. After all, why should I dedicate my life and energy to living indefinitely through science when, by the next century, technology could bring me back exactly as I was—or even as an improved version of myself?
Outside of philosophical discourse, billions of dollars are pouring into the anti-aging and technology fields—much of it from Silicon Valley and the San Francisco Bay Area, where I live. Everyone from entrepreneurs like Mark Zuckerberg to nonprofits like XPRIZE to giants like Google is spending money on ways to try to end all diseases and overcome death. Bank of America recently reported that it expects the extreme longevity field to be worth over $600 billion by 2025.
Technology research spending for computers, microprocessors, and information technology is even bigger: an estimated $4.3 trillion was spent worldwide in 2019. This amount includes research into quantum computing, which is hoped to eventually make computers hundreds—maybe thousands—of times faster over the next 50 years.
Despite the advancements of the 21st century, the science needed to overcome biological death is not even close to being ready, if it ever will be. More than 100,000 people still die each day, and in some countries, such as the United States, life expectancy has actually begun to decline slightly. However, as with other black swans of innovation in history—such as the internet, the combustion engine, and penicillin—we shouldn’t rule out that new inventions may make humans live dramatically longer, perhaps even as long as they like. As our species reaches for the heavens with its growing scientific armory, Pascal’s Wager is going to be challenged. It just might need an upgrade.
Sam Harris is an American author, philosopher, neuroscientist, and podcast host.
His work touches on a wide range of topics, including rationality, religion, ethics, free will, neuroscience, meditation, philosophy of mind, politics, terrorism, and artificial intelligence.
His academic background is in philosophy and cognitive neuroscience.
The General Theory of General Intelligence: A Pragmatic Patternist Perspective — paper by Ben Goertzel: https://arxiv.org/abs/2103.15100 Abstract: “A multi-decade exploration into the theoretical foundations of artificial and natural general intelligence, which has been expressed in a series of books and papers and used to guide a series of practical and research-prototype software systems, is reviewed at a moderate level of detail. The review covers underlying philosophies (patternist philosophy of mind, foundational phenomenological and logical ontology), formalizations of the concept of intelligence, and a proposed high level architecture for AGI systems partly driven by these formalizations and philosophies. The implementation of specific cognitive processes such as logical reasoning, program learning, clustering and attention allocation in the context and language of this high level architecture is considered, as is the importance of a common (e.g. typed metagraph based) knowledge representation for enabling ‘cognitive synergy’ between the various processes. The specifics of human-like cognitive architecture are presented as manifestations of these general principles, and key aspects of machine consciousness and machine ethics are also treated in this context. Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered.”
Talk held at AGI17 — http://agi-conference.org/2017/#AGI17 #AGI #ArtificialIntelligence #Understanding #MachineUnderstanding #CommonSense #ArtificialGeneralIntelligence #PhilMind https://en.wikipedia.org/wiki/Artificial_general_intelligence
Many thanks for tuning in!
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9PIfq2ZYlQsXRIn5BcLH2onbiSI7g79mOH_AFCdIk/
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating:
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: https://www.patreon.com/scifuture
c) Sharing the media SciFuture creates.
Kind regards,
Adam Ford
- Science, Technology & the Future — #SciFuture — http://scifuture.org