Archive for the ‘robotics/AI’ category: Page 146
When we assess the potential of technologies like GenAI for unethical persuasion, where is the red line in imitating or simulating the commonly recognizable characteristics of a person, well known or not?
What responsibility does the creator bear for the consequences that follow from the unethical use of a person’s identity as a lever of persuasion through technology?
Persuasion has no dark side per se: only the intentions of those who wield it do, and GenAI is not inherently endowed with such intentions, either of itself or by itself.
Jun 6, 2024
“Thinking at a Distance” in the Age of AI
Posted by Dan Breeden in categories: biological, robotics/AI
‘Thinking at a distance’ with large language models opens up a human-AI cognitive capacity that transcends biological limits, but it also carries the existential risk of a miscalibrated ‘entangled mind.’
Jun 6, 2024
Amazon’s Zoox Is Almost Ready to Launch Its Robotaxi Service
Posted by Shailesh Prasad in categories: robotics/AI, transportation
Jun 6, 2024
Attacking Quantum Models with AI: When Can Truncated Neural Networks Deliver Results?
Posted by Dan Breeden in categories: business, particle physics, quantum physics, robotics/AI
Currently, computing technologies are rapidly evolving and reshaping how we imagine the future. Quantum computing is taking its first toddling steps toward delivering practical results that promise unprecedented abilities. Meanwhile, artificial intelligence remains in public conversation as it’s used for everything from writing business emails to generating bespoke images or songs from text prompts to producing deep fakes.
Some physicists are exploring the opportunities that arise when the power of machine learning, a widely used approach in AI research, is brought to bear on quantum physics. Machine learning may accelerate quantum research and provide insights into quantum technologies, and quantum phenomena present formidable challenges that researchers can use to test the bounds of machine learning.
When studying quantum physics or its applications (including the development of quantum computers), researchers often rely on a detailed description of many interacting quantum particles. But the very features that make quantum computing potentially powerful also make quantum systems difficult to describe using current computers. In some instances, machine learning has produced descriptions that capture the most significant features of quantum systems while ignoring less relevant details—efficiently providing useful approximations.
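To make the idea of a compact learned description concrete, here is a minimal sketch, not taken from the article, of a neural-network-style ansatz for a small spin system. The restricted-Boltzmann-machine form, the spin count, and the hidden-unit ratio are all illustrative assumptions; the point is that the ansatz uses far fewer parameters than the exponentially many amplitudes of the full quantum state.

```python
# Minimal sketch (assumed example, not the article's method): a neural-network
# ansatz that compresses a quantum state of N spins. The exact state needs 2**N
# amplitudes; the ansatz uses a polynomial number of parameters, trading
# exactness for an efficient approximate description.
import numpy as np

rng = np.random.default_rng(0)
N = 8        # number of spins; the full state has 2**N = 256 amplitudes
ALPHA = 2    # hidden units per spin (controls expressiveness)

# RBM-style ansatz: psi(s) proportional to exp(a.s) * prod_j 2*cosh(b_j + W_j.s)
a = 0.01 * rng.standard_normal(N)
b = 0.01 * rng.standard_normal(ALPHA * N)
W = 0.01 * rng.standard_normal((ALPHA * N, N))

def amplitude(s):
    """Unnormalized amplitude for a spin configuration s in {-1, +1}^N."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

# Enumerate all configurations (feasible only for tiny N) to normalize the
# state and evaluate a simple observable, the average magnetization.
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(N)]
                    for k in range(2**N)])
amps = np.array([amplitude(s) for s in configs])
probs = amps**2 / np.sum(amps**2)
magnetization = np.sum(probs * configs.mean(axis=1))

n_params = a.size + b.size + W.size
print(f"ansatz parameters: {n_params}  vs  full-state amplitudes: {2**N}")
print(f"estimated magnetization: {magnetization:.4f}")
```

In a real variational calculation the parameters would be optimized against an energy estimate rather than left random, and sampling would replace the brute-force enumeration; the sketch only illustrates why such descriptions can remain tractable as the system grows.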
Jun 6, 2024
Understanding The Power Of Deep Neural Networks
Posted by Dan Breeden in category: robotics/AI
Explore the significance of deep neural networks in artificial intelligence and how they simulate human brains for complex tasks through deep learning.
Jun 6, 2024
Finding Meaning and Purpose in a Solved World
Posted by Dan Breeden in category: robotics/AI
In Superintelligence, Nick Bostrom explored the question, ‘What might happen if AI development goes wrong?’ In this conversation, we ask the opposite question: ‘What happens if everything goes right?’
Jun 6, 2024
This AI Paper from Princeton and the University of Warwick Proposes a Novel Artificial Intelligence Approach to Enhance the Utility of LLMs as Cognitive Models
Posted by Dan Breeden in category: robotics/AI
Scientists studying Large Language Models (LLMs) have found that LLMs perform similarly to humans in cognitive tasks, often making judgments and decisions that deviate from rational norms, such as risk and loss aversion. LLMs also exhibit human-like biases and errors, particularly in tasks involving probability judgments and arithmetic operations. These similarities suggest the potential for using LLMs as models of human cognition. However, significant challenges remain, including the extensive data LLMs are trained on and the unclear origins of these behavioral similarities.
The suitability of LLMs as models of human cognition is debated for several reasons. LLMs are trained on far larger datasets than any human encounters, may have been exposed to the test questions themselves during training, and may have their human-like behaviors artificially enhanced by value-alignment processes. Despite these challenges, fine-tuning LLMs such as the LLaMA-1-65B model on human choice datasets has improved accuracy in predicting human behavior. Prior research has also highlighted the importance of synthetic datasets in enhancing LLM capabilities, particularly in problem-solving tasks like arithmetic. Pretraining on such datasets can significantly improve performance in predicting human decisions.
Researchers from Princeton University and the University of Warwick propose enhancing the utility of LLMs as cognitive models by (i) utilizing computationally equivalent tasks that both LLMs and rational agents must master for cognitive problem-solving and (ii) examining the task distributions required for LLMs to exhibit human-like behaviors. Applied to decision-making, specifically risky and intertemporal choice, Arithmetic-GPT, an LLM pretrained on an ecologically valid arithmetic dataset, predicts human behavior better than many traditional cognitive models. This pretraining suffices to align LLMs closely with human decision-making.
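As a rough illustration of what an "ecologically valid" arithmetic pretraining corpus might look like, here is a minimal sketch. The data format, the anchored probability values, and the log-uniform payoff range are assumptions for illustration, not the paper's actual pipeline; Arithmetic-GPT is named in the summary above, but its training details are not reproduced here.

```python
# Minimal sketch (assumed details, not the paper's pipeline): generate a small
# synthetic corpus of expected-value calculations for two-outcome gambles, the
# kind of arithmetic data on which a model like Arithmetic-GPT could be
# pretrained before being compared with human risky-choice behavior.
import random

random.seed(42)

def sample_probability():
    # Assumption: probabilities anchored at round values people commonly see,
    # rather than drawn uniformly at random.
    anchors = [0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99]
    return random.choice(anchors)

def sample_payoff():
    # Assumption: payoffs roughly log-uniform, mimicking the skewed magnitudes
    # of everyday monetary amounts.
    return round(10 ** random.uniform(0, 3), 2)

def make_example():
    """One pretraining string: a two-outcome gamble and its expected value."""
    p, x, y = sample_probability(), sample_payoff(), sample_payoff()
    ev = round(p * x + (1 - p) * y, 2)
    return f"{p}*{x}+{round(1 - p, 2)}*{y}={ev}"

corpus = [make_example() for _ in range(5)]
print("\n".join(corpus))
```

The design intuition, as described in the summary, is that a model trained only to do this kind of arithmetic well ends up encoding value and probability representations that also predict human risky and intertemporal choices.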
Jun 6, 2024
Google DeepMind used a large language model to solve an unsolved math problem
Posted by Shailesh Prasad in categories: mathematics, robotics/AI
Jun 6, 2024
Can an emerging field called ‘neural systems understanding’ explain the brain?
Posted by Dan Breeden in categories: neuroscience, robotics/AI
Perhaps.
This mashup of neuroscience, artificial intelligence and even linguistics and philosophy of mind aims to crack the deep question of what ‘understanding’ is, however un-brain-like its models may be.