Scientists have achieved a major breakthrough in combating ageing and age-related diseases. The study, by researchers from Harvard Medical School and the Massachusetts Institute of Technology, was published in the journal Aging-US.
Humanity’s attempt to prevent ageing: What is the breakthrough?
The researchers have introduced a chemical method, delivered as a ‘single pill’, to reprogram body cells, effectively returning them to a younger state.
Wonder drugs, environmental sustainability or a Skynet apocalypse: Hundreds of experts weigh in on what life might be like in an A.I.-fueled 2035 in a new Pew Research report.
The concept of a computational consciousness and the potential impact it may have on humanity is a topic of ongoing debate and speculation. While Artificial Intelligence (AI) has made significant advancements in recent years, we have not yet achieved a true computational consciousness that can replicate the complexities of the human mind.
It is true that AI technologies are becoming more sophisticated and capable of performing tasks that were previously exclusive to human intelligence. However, there are fundamental differences between Artificial Intelligence and human consciousness. Human consciousness is not solely based on computation; it encompasses emotions, subjective experiences, self-awareness, and other aspects that are not yet fully understood or replicated in machines.
The arrival of advanced AI systems could certainly have transformative effects on society and our understanding of humanity. It may reshape various aspects of our lives, from how we work and communicate to how we approach healthcare and scientific discoveries. AI can enhance our capabilities and provide valuable tools for solving complex problems.
However, it is important to consider the ethical implications and potential risks associated with the development of AI. Ensuring that AI systems are developed and deployed responsibly, with a focus on fairness, transparency, and accountability, is crucial.
Join journalist Pedro Pinto and Yuval Noah Harari as they delve into the future of artificial intelligence (A.I.). Together, they explore pressing questions in front of a live audience, such as: What will be the impact of A.I. on democracy and politics? How can we maintain human connection in the age of A.I.? What skills will be crucial for the future? And what does the future of education hold?
Filmed on May 19, 2023, in Lisbon, Portugal, and produced by the Fundação Francisco Manuel dos Santos (FFMS), this marks the first live recording of the show “It’s not that simple.”
A large proportion of CEOs from a diverse cross-section of Fortune 500 companies believe artificial intelligence might destroy humanity — even as business leaders lean into the gold rush around the tech.
In survey results shared with CNN, 42 percent of CEOs from 119 companies surveyed by Yale University think that AI could, within the next five to ten years, quite literally destroy our species.
While the names of specific CEOs who share that belief were not made public, CNN notes that the consortium surveyed during Yale’s CEO Summit event this week contained a wide array of leaders from companies including Zoom, Coca-Cola and Walmart.
One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say “AI”? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones.
IBM’s definition is simple: “a field which combines computer science and robust datasets to enable problem-solving.” Google, meanwhile, defines it as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.”
It could be that people’s fear and distrust of AI comes partly from a lack of understanding of it, and from a stronger focus on unsettling examples than on positive ones. The AI that can design complex proteins may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline.
Since I don’t work for any large company involved in AI, nor do I anticipate ever doing so, and considering that I have completed my 40-year career (an old man, now successfully retired), I would like to share a video by someone I came across during my research into “True Open Source AI.”
I completely agree with the viewpoints expressed in this video (which begins at about 4 minutes in, after some technical preliminaries). Additionally, I would like to add some thoughts of my own.
We need open source alternatives to large corporations so that people (that’s us humans) have options for freedom and personal privacy when it comes to locally hosted AIs. The thought of a world completely controlled by Big Corp AI is even more frightening than George Orwell’s “Big Brother.” I believe there must be an alternative to this nightmarish scenario.
Basically, I have talked about how Chaos-GPT poses a great threat to current cyberdefenses, and it still does, but there is also the promise of a god-like AI that can be a powerful force for good. This could trigger an even greater AI arms race, requiring still more security measures, such as a god-like AI that can counter state-level threats. I think this threat would arrive regardless as we try to reach AGI, but ChatGPT also shows much more promising results, since it could be used as a god-like AI, with human coders using it as well as AI coders.
Chaos-GPT, an autonomous implementation of ChatGPT, has been unveiled, and its objectives are as terrifying as they are well-structured.
A leading expert in artificial intelligence warns that the race to develop more sophisticated models is outpacing our ability to regulate the technology. Critics say his warnings overhype the dangers of new AI models like GPT. But MIT professor Max Tegmark says private companies risk leading the world into dangerous territory without guardrails on their work. His Future of Life Institute issued an open letter, signed by tech luminaries like Elon Musk, urging AI labs to immediately pause work on their most powerful models for six months to unite on a safe way forward. Without that, Tegmark says, the consequences could be devastating for humanity.