Bedrock allows its users to build and scale generative AI applications like chatbots.
Becoming the latest player to join the generative AI space race, Amazon Web Services (AWS) has launched Bedrock. It is not to be confused with OpenAI’s ChatGPT or Google’s Bard, which are AI-powered chatbots.
What Bedrock does is allow users to build and scale generative AI applications such as chatbots, text generation, and image generation from language prompts. It offers a range of pre-trained models that users can customize with their own data and then integrate and deploy in applications using AWS tools.
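For readers curious what that integration might look like in practice, here is a minimal sketch using the AWS SDK for Python (boto3), assuming a bedrock-runtime client with an invoke_model operation. The specific model ID and request/response shapes are illustrative assumptions; the exact payload format depends on the model you choose, so check the AWS documentation for your account.

```python
# Minimal sketch: invoking a Bedrock-hosted text model via boto3.
# The "bedrock-runtime" client name, the model ID, and the request/response
# shapes below are assumptions for illustration, not a definitive interface.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "Summarize the benefits of managed foundation models in two sentences."

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example/assumed model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": prompt}),
)

result = json.loads(response["body"].read())
print(result)
```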
Michael Levin’s 2019 paper “The Computational Boundary of a Self” is discussed. The main topics of conversation include Scale-Free Cognition, Surprise & Stress, and the Morphogenetic Field. Michael Levin is a scientist at Tufts University; his lab studies anatomical and behavioral decision-making at multiple scales of biological, artificial, and hybrid systems. He works at the intersection of developmental biology, artificial life, bioengineering, synthetic morphology, and cognitive science.
❶ Scale-Free Cognition
3:05 Ultimate question of the embodied mind
5:50 The most difficult interview to prepare for
6:55 One of my favorite papers of all time (screenshare)
7:40 The Computational Boundary of a Self
9:25 Defining intelligence (cybernetics)
10:30 Cognitive light cones
16:50 All intelligence is collective intelligence
17:35 Nested selves vs. one integrated self (Not Integrated Information Theory)
21:10 The same dynamics in the brain occur in every tissue of the body
22:50 Why scale “free” cognition?
❷ Stress & Surprise. 27:22 Stress = Surprise? 30:30 Intelligence within a salamander example (homeostatic capability of collective intelligence) 33:35 The scale-free importance of stress. 37:30 Stress is an exported error signal. 40:45 Stress means your problem becomes everyone’s problem (cooperation without altruism) 42:25 Stress has no ownership metadata (gap junctions permit mind meld)
Many current computational models that aim to simulate the cortical and hippocampal modules of the brain depend on artificial neural networks. However, such classical or even deep neural networks are slow, sometimes requiring thousands of trials to reach a final response, and still produce a considerable amount of error. The large number of training trials and the inaccurate output responses stem from the complexity of the input cues and of the biological processes being simulated. This article proposes a computational model of an intact and a lesioned cortico-hippocampal system using quantum-inspired neural networks. This cortico-hippocampal computational quantum-inspired (CHCQI) model simulates the cortical and hippocampal modules with adaptively updated neural networks entangled with quantum circuits. The proposed model is used to simulate various classical conditioning tasks related to biological processes. The simulated tasks yielded the desired responses more quickly and efficiently than other computational models, including the recently published Green model.
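As a rough intuition for what "quantum-inspired" can mean in this setting, the sketch below implements a generic qubit-style neuron whose state is a rotation angle and whose output is a measurement probability. This is only an illustrative stand-in, not the CHCQI model itself; the function names, the toy conditioning task, and the learning rule are all assumptions made for the example.

```python
# Illustrative sketch of a quantum-inspired ("qubit-style") neuron.
# NOT the CHCQI model from the paper; it only shows the generic idea of
# encoding a neuron's state as a qubit rotation and reading the output
# as a measurement probability.
import numpy as np

def qubit_neuron(inputs, phases, bias_phase):
    """Each input x_i contributes a phase x_i * phases_i; the neuron's state is
    |psi> = cos(theta)|0> + sin(theta)|1>, and the firing probability is
    |<1|psi>|^2 = sin^2(theta)."""
    theta = np.dot(inputs, phases) + bias_phase
    return np.sin(theta) ** 2  # probability of measuring |1>

# Toy "conditioning" task: adjust the phases so the neuron fires for a cue.
rng = np.random.default_rng(0)
phases = rng.normal(scale=0.1, size=2)
bias = 0.0
cue, target = np.array([1.0, 0.0]), 1.0
lr = 0.5

for _ in range(200):
    out = qubit_neuron(cue, phases, bias)
    err = target - out
    theta = np.dot(cue, phases) + bias
    grad = np.sin(2 * theta)  # d(sin^2(theta)) / d(theta)
    phases += lr * err * grad * cue
    bias += lr * err * grad

print(f"Firing probability after training: {qubit_neuron(cue, phases, bias):.3f}")
```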
Several researchers have proposed models that combine artificial neural networks (ANNs) or quantum neural networks (QNNs) with various other ingredients. For example, Haykin (1999) and Bishop (1995) developed multilevel activation function QNNs using the quantum linear superposition feature (Bonnell and Papini, 1997).
The prime factorization algorithm of Shor was used to illustrate the basic workings of QNNs (Shor, 1994). Shor’s algorithm uses quantum computations by quantum gates to provide the potential power for quantum computers (Bocharov et al., 2017; Dridi and Alghassi, 2017; Demirci et al., 2018; Jiang et al., 2018). Meanwhile, the work of Kak (1995) focused on the relationship between quantum mechanics principles and ANNs. Kak introduced the first quantum network based on the principles of neural networks, combining quantum computation with convolutional neural networks to produce quantum neural computation (Kak, 1995; Zhou, 2010). Since then, a myriad of QNN models have been proposed, such as those of Zhou (2010) and Schuld et al. (2014).
OpenAI’s breakthrough consistency models could feed into image understanding to make GPT-4 multimodal, bringing next-generation improvements in human-computer interaction and human-robot interaction, and even helping people with disabilities. Microsoft has already released a predecessor to GPT-4 image understanding with Visual ChatGPT, which is much more limited in its abilities.
AI news timestamps:
0:00 Multimodal artificial intelligence
0:35 OpenAI consistency models
1:35 GPT-4 and computers
3:04 GPT-4 and robotics
4:28 GPT-4 and the disabled
5:36 Microsoft Visual ChatGPT
The agency will look at developing a standard policy for setting privacy rules on artificial intelligence.
AI-enabled language models have become commonplace of late, spearheaded by the disruption caused by OpenAI’s ChatGPT. In its wake, other technology players like Google and Microsoft have scrambled to catch up by introducing their own models to the public.
As a counterbalance, global authorities are working to develop a common framework to regulate the industry.
This has important implications for measuring the mass of the central black hole in M87.
Look at the image on the left and then the image on the right. They are by no means identical. But what if we told you that both images show the same object?
A system for realizing many-photon quantum circuits is presented, comprising a programmable nanophotonic chip operating at room temperature, interfaced with a fully automated control system.
In his book “A Brief History of AI,” Michael Wooldridge, a professor of computer science at the University of Oxford and an AI researcher, explains that AI is not about creating life, but rather about creating machines that can perform tasks requiring intelligence.
Wooldridge discusses the two approaches to AI: symbolic AI and machine learning. Symbolic AI involves coding human knowledge into machines, while machine learning allows machines to learn from examples to perform specific tasks. Progress in AI stalled in the 1970s due to a lack of data and computational power, but recent advancements in technology have led to significant progress. AI can perform narrow tasks better than humans, but the grand dream of AI is achieving artificial general intelligence (AGI), which means creating machines with the same intellectual capabilities as humans. One challenge for AI is giving machines social skills, such as cooperation, coordination, and negotiation.
The path to conscious machines is slow and complex, and the mystery of human consciousness and self-awareness remains unsolved. The limits of computing are only bounded by imagination.
Grocery shopping will lead the way in the AI (artificial intelligence) revolution.
Brits aged 15 to 64 spend about 43 percent of all their work and study time on cooking, cleaning and other jobs around the home, such as looking after children or elderly relatives.
In the UK, working-age men spend around half as much time on these tasks as working-age women do.