Archive for the ‘robotics/AI’ category: Page 134

May 19, 2024

Linux distros ban ‘tainted’ AI-generated code — NetBSD and Gentoo lead the charge on forbidding AI-written code

Posted in category: robotics/AI

Not all FOSS (Free and Open Source Software) developers want AI messing with their code.

May 19, 2024

Superintelligence: Paths, Dangers, Strategies

Posted in categories: biotech/medical, ethics, existential risks, robotics/AI

Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 Since then, a meandering trajectory of technical successes and ‘AI winters’ has unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound dangers inherent in advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6

I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago; it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale from those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training-set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),9 and are of a truly existential nature. In light of recent advancements in AI, I revisited the book to reconsider its arguments in the context of today’s digital technology landscape.

May 19, 2024

Evolutionary Emergence: From Primordial Atoms to Living Algorithms of Artificial Superintelligence

Posted in categories: biological, cosmology, information science, particle physics, quantum physics, robotics/AI

To be clear, humans are not the pinnacle of evolution. We are confronted with difficult choices and cannot sustain our current trajectory. No rational person can expect the human population to continue its parabolic growth of the last 200 years, along with an ever-increasing rate of natural resource extraction. This is socio-economically unsustainable. While space colonization might offer temporary relief, it won’t resolve the underlying issues. If we are to preserve our blue planet and ensure the survival and flourishing of our human-machine civilization, humans must merge with synthetic intelligence, transcend our biological limitations, and eventually evolve into superintelligent beings, independent of material substrates—advanced informational beings, or ‘infomorphs.’ In time, we will shed the human condition and upload humanity into a meticulously engineered inner cosmos of our own creation.

Much like the origin of the Universe, the nature of consciousness may appear to be a philosophical enigma that remains perpetually elusive within the current scientific paradigm. However, I emphasize the term “current.” These issues are not beyond the reach of alternative investigative methods, ones that the next scientific paradigm will inevitably incorporate with the arrival of Artificial Superintelligence.

The era of traditional, human-centric theoretical modeling and problem-solving – developing hypotheses, uncovering principles, and validating them through deduction, logic, and repeatable experimentation – may be nearing its end. A confluence of factors – Big Data, algorithms, and computational resources – is steering us towards a new type of discovery, one that transcends the limitations of human-like logic and decision-making: one driven solely by AI superintelligence, nestled in quantum neo-empiricism and a fluidity of solutions. These novel scientific methodologies may encompass, but are not limited to, computing supercomplex abstractions, creating simulated realities, and manipulating matter-energy and the space-time continuum itself.

May 19, 2024

OpenAI will use Reddit posts to train ChatGPT under new deal

Posted in categories: business, internet, law, policy, robotics/AI

Earlier this month, Reddit published a Public Content Policy stating: “Unfortunately, we see more and more commercial entities using unauthorized access or misusing authorized access to collect public data in bulk, including Reddit public content. Worse, these entities perceive they have no limitation on their usage of that data, and they do so with no regard for user rights or privacy, ignoring reasonable legal, safety, and user removal requests.”

In its blog post on Thursday, Reddit said that deals like OpenAI’s are part of an open Internet. It added that part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online.

Reddit has been vocal about its interest in pursuing data licensing deals as a core part of its business. Its AI partnerships have sparked debate over the use of user-generated content to train AI models without compensating users, some of whom may never have considered that their social media posts would be used this way. OpenAI and Stack Overflow faced pushback earlier this month when they integrated Stack Overflow content with ChatGPT; some of Stack Overflow’s user community responded by sabotaging their own posts.

May 19, 2024

AI Tool Predicts Whether Online Health Misinformation Will Cause Real-World Harm

Posted in categories: health, robotics/AI

A new AI-based analytical technique reveals that specific language phrasing in Reddit misinformation posts predicted whether people would go on to reject COVID vaccinations.

By Joanna Thompson
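
The article gives only a high-level description of the technique. As a rough, hypothetical sketch of how phrasing-based prediction can work in general (not the study’s actual model), a simple text classifier can be trained to flag posts whose wording is associated with later vaccine refusal; the posts, labels, and model choice below are invented placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical posts and outcome labels (1 = author later refused vaccination).
posts = [
    "the vaccine rewrites your dna, never taking it",
    "heard mixed things but still booking my appointment",
    "they are hiding the real side effects from everyone",
    "my doctor answered my questions, feeling reassured",
]
refused = [1, 0, 1, 0]

# Word/bigram features plus logistic regression as a stand-in for the study's model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, refused)

# Score a new post: estimated probability that its phrasing predicts real-world refusal.
print(model.predict_proba(["do not trust what they put in these shots"])[:, 1])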

May 19, 2024

LLMs can be easily manipulated for malicious purposes, research finds

Posted in category: robotics/AI

Researchers at AWS AI Labs found that most publicly available LLMs can be easily manipulated into revealing harmful or unethical information.

May 19, 2024

China shows off machine gun-wielding war robot dogs in Cambodia

Posted in categories: drones, military, robotics/AI

China has officially showcased its machine-gun-armed robodog drones during a 15-day military exercise with Cambodian forces.

May 19, 2024

AI-powered headphone lets user choose which sound to block or amplify

Posted in categories: mobile phones, robotics/AI

The new AI technology enables personalized noise-canceling headphones, which can isolate a speaker’s voice from ambient noise in real time.
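
Systems like this typically pair a speech-separation model with a speaker-embedding model: the mixture is split into candidate voices, and the one closest to an enrolled sample of the chosen speaker is kept. The sketch below shows only that selection step; separate_sources and speaker_embedding are hypothetical stand-ins for whatever models the headphones actually run.

import numpy as np

def isolate_target(mixture, enrollment, separate_sources, speaker_embedding):
    """Return the separated source that best matches the enrolled speaker.

    mixture and enrollment are 1-D audio arrays; separate_sources and
    speaker_embedding are placeholder callables for on-device models.
    """
    sources = separate_sources(mixture)       # candidate voices pulled from the mix
    target = speaker_embedding(enrollment)    # embedding of the speaker to keep

    def similarity(source):
        emb = speaker_embedding(source)
        return float(np.dot(emb, target) / (np.linalg.norm(emb) * np.linalg.norm(target)))

    return max(sources, key=similarity)       # play back only the closest match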

May 19, 2024

3cm-sized metalens camera uses AI to make distorted images sharp

Posted in category: robotics/AI

Chinese researchers employ deep learning to enhance metalens image quality, unlocking its viability in versatile applications.
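
The excerpt does not spell out the researchers’ pipeline; as a generic, hypothetical illustration of the kind of learned image restoration such work relies on, here is a toy residual CNN in PyTorch (the architecture, sizes, and names are invented for illustration).

import torch
import torch.nn as nn

class TinyRestorer(nn.Module):
    """Toy residual CNN: predicts a correction to add to the distorted capture."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, distorted):
        return distorted + self.net(distorted)  # residual learning, common in restoration networks

model = TinyRestorer()
restored = model(torch.rand(1, 3, 64, 64))  # stand-in for a distorted metalens capture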

May 19, 2024

Tesla trick: Elon Musk’s plan to train AI in China takes shape

Posted in categories: Elon Musk, information science, robotics/AI

Musk’s plan to use Chinese data to train algorithms aligns with the country’s ambitions to lead in autonomous driving technologies.

Page 134 of 2,372