
The open-source project is based on a dataset of up to 180,000 annotated amateur drawings.

In the face of increasing competition, it appears that Meta is significantly upping its artificial intelligence (AI) game.

The company has made this dataset of annotated amateur drawings, along with the accompanying animation code, available for AI researchers and creators to use and build on.



Following the release of the “Segment Anything Model,” the tech giant has unveiled yet another interesting and fun AI-based project. The project, called Animated Drawings, allows you to turn your doodles into animations. And it could be the next big thing.

Bedrock allows its users to build and scale generative AI applications like chatbots.

Becoming the latest player to join the generative AI race, Amazon Web Services (AWS) has launched Bedrock. It is not to be confused with OpenAI’s ChatGPT or Google’s Bard, which are AI-powered chatbots.

Bedrock lets users build and scale generative AI applications such as chatbots, text generation, and image generation from language prompts. It offers a range of pre-trained foundation models that users can customize with their own data, then integrate and deploy in applications using AWS tools.
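As a rough illustration of what that looks like in practice, here is a minimal sketch of invoking a Bedrock-hosted model from Python via boto3. The model ID and request-body fields shown are assumptions for illustration; each provider’s models expect their own request schema.

import json
import boto3

# Bedrock runtime client (assumes AWS credentials and a region where Bedrock is available).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID and body format are assumptions used for illustration only.
response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "inputText": "Write a one-sentence product description for a smart kettle.",
        "textGenerationConfig": {"maxTokenCount": 128, "temperature": 0.7},
    }),
)

# Parse the streamed response body and print the generated text.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])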

Michael Levin’s 2019 paper “The Computational Boundary of a Self” is discussed. The main topics of conversation include Scale-Free Cognition, Surprise & Stress, and the Morphogenetic Field. Michael Levin is a scientist at Tufts University; his lab studies anatomical and behavioral decision-making at multiple scales of biological, artificial, and hybrid systems. He works at the intersection of developmental biology, artificial life, bioengineering, synthetic morphology, and cognitive science.

🚩The Computational Boundary of a Self: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition (can read in browser or download as pdf)
https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full

❶ Scale-Free Cognition.
3:05 Ultimate question of the embodied mind.
5:50 The most difficult interview to prepare for.
6:55 One of my favorite papers of all time (screenshare)
7:40 The Computational Boundary of a Self.
9:25 Defining intelligence (cybernetics)
10:30 Cognitive light cones.
16:50 All intelligence is collective intelligence.
17:35 Nested selves vs. one integrated self (Not Integrated Information Theory)
21:10 The same dynamics in the brain occur in every tissue of the body.
22:50 Why scale “free” cognition?

❷ Stress & Surprise.
27:22 Stress = Surprise?
30:30 Intelligence within a salamander example (homeostatic capability of collective intelligence)
33:35 The scale-free importance of stress.
37:30 Stress is an exported error signal.
40:45 Stress means your problem becomes everyone’s problem (cooperation without altruism)
42:25 Stress has no ownership metadata (gap junctions permit mind meld)

❸ The Morphogenetic Field.
49:00 About 99% of the Shannon information in a cell is in the membrane and transmembrane gradient (Bob Gatenby)
52:25 Shannon information doesn’t distinguish meaning.
55:53 Cancer cells have the wrong scope of “self.”
1:01:17 Manipulating cells via retraining vs. micromanaging.
1:04:45 “Drugs and words have the same mechanisms of action.” -Fabrizio Benedetti.
1:07:10 Morphogenetic field of signals coordinating cell behavior; bioelectricity as a special layer (screenshare)
1:11:13 Harold Saxton Burr predicted this 100 years ago!
1:14:50 Connections to Zen Buddhism.
1:18:18 Find more of Levin’s work.

🚩 Links to Levin 🚩
https://youtube.com/watch?v=YnObwxJZpZc&feature=share
https://twitter.com/drmichaellevin
https://www.drmichaellevin.org/
https://as.tufts.edu/biology/levin-lab
Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds (2022) https://www.frontiersin.org/articles/.…
Biology, Buddhism, and AI: Care as the Driver of Intelligence (2022) https://www.mdpi.com/1099-4300/24/5/710
Emergence of informative higher scales in biological systems: a computational toolkit for optimal prediction and control (2020) https://www.tandfonline.com/doi/full/.…

🚾 Works Cited
Jeremy Quay (visual artist) at https://peregrinecr.com/
https://en.wikipedia.org/wiki/William…
https://en.wikipedia.org/wiki/Harold_…
There’s Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-Scale Machines (Bongard & Levin 2023) https://www.mdpi.com/2313-7673/8/1/110
Bob Gatenby talk on “Information Dynamics in Living Systems”

🚨 Note.

Many current computational models that aim to simulate cortical and hippocampal modules of the brain depend on artificial neural networks. However, such classical or even deep neural networks are very slow, sometimes taking thousands of trials to obtain the final response with a considerable amount of error. The need for a large number of trials at learning and the inaccurate output responses are due to the complexity of the input cue and the biological processes being simulated. This article proposes a computational model for an intact and a lesioned cortico-hippocampal system using quantum-inspired neural networks. This cortico-hippocampal computational quantum-inspired (CHCQI) model simulates cortical and hippocampal modules by using adaptively updated neural networks entangled with quantum circuits. The proposed model is used to simulate various classical conditioning tasks related to biological processes. The output of the simulated tasks yielded the desired responses quickly and efficiently compared with other computational models, including the recently published Green model.

Several researchers have proposed models that combine artificial neural networks (ANNs) or quantum neural networks (QNNs) with various other ingredients. For example, Haykin (1999) and Bishop (1995) developed multilevel activation function QNNs using the quantum linear superposition feature (Bonnell and Papini, 1997).

The prime factorization algorithm of Shor was used to illustrate the basic workings of QNNs (Shor, 1994). Shor’s algorithm uses quantum computations by quantum gates to provide the potential power for quantum computers (Bocharov et al., 2017; Dridi and Alghassi, 2017; Demirci et al., 2018; Jiang et al., 2018). Meanwhile, the work of Kak (1995) focused on the relationship between quantum mechanics principles and ANNs. Kak introduced the first quantum network based on the principles of neural networks, combining quantum computation with convolutional neural networks to produce quantum neural computation (Kak, 1995; Zhou, 2010). Since then, a myriad of QNN models have been proposed, such as those of Zhou (2010) and Schuld et al. (2014).
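To make the “multilevel activation function” idea concrete, here is a small illustrative sketch: a quantum-inspired neuron can replace the single sigmoid with an equal-weight superposition of several shifted sigmoids, giving a graded, multi-step response instead of a single on/off transition. This is only a generic illustration of that idea; the level positions and steepness parameter are assumptions, not the exact formulation used in the cited papers.

import numpy as np

def sigmoid(x):
    """Standard logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def multilevel_activation(x, n_levels=4, steepness=5.0):
    """Quantum-inspired multilevel activation: an equal-weight
    'superposition' of n_levels shifted sigmoids, so the output
    rises in n_levels graded steps. Illustrative only; the level
    centres and steepness are assumed parameters."""
    centres = np.arange(1, n_levels + 1) / (n_levels + 1)  # step centres in (0, 1)
    shifted = [sigmoid(steepness * (x - c)) for c in centres]
    return np.mean(shifted, axis=0)

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 11)
    print(np.round(multilevel_activation(xs), 3))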

OpenAI’s breakthrough consistency models could feed into the image understanding that makes GPT-4 multimodal, promising next-generation improvements in human-computer interaction, human-robot interaction, and even assistive technology for people with disabilities. Microsoft has already released a predecessor to GPT-4 image understanding with Visual ChatGPT, which is much more limited in its abilities.
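For context on what makes consistency models notable: unlike a diffusion model, which denoises over many steps, a trained consistency model maps a noisy input at any noise level directly back to a clean image, so a sample can be generated in a single step and optionally refined with a few more. Below is a minimal sketch of that sampling loop; the pre-trained consistency_fn and the noise-level values are placeholders, not OpenAI’s released code.

import numpy as np

def sample_with_consistency_model(consistency_fn, shape, sigmas, seed=0):
    """Sketch of consistency-model sampling (in the spirit of Song et al., 2023).

    consistency_fn(x, sigma) -> estimate of the clean image  # hypothetical pre-trained model
    sigmas: decreasing noise levels, e.g. [80.0, 24.0, 5.0]   # placeholder values
    """
    rng = np.random.default_rng(seed)
    sigma_min = 0.002  # assumed minimum noise level

    # One-step generation: map pure noise at the largest sigma straight to an image.
    x = consistency_fn(rng.normal(size=shape) * sigmas[0], sigmas[0])

    # Optional multistep refinement: re-noise to a smaller sigma, map back again.
    for sigma in sigmas[1:]:
        noise = rng.normal(size=shape)
        x_noisy = x + np.sqrt(max(sigma**2 - sigma_min**2, 0.0)) * noise
        x = consistency_fn(x_noisy, sigma)
    return x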

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
AI Marketplace: https://taimine.com/

AI news timestamps:
0:00 Multimodal artificial intelligence.
0:35 OpenAI consistency models.
1:35 GPT-4 and computers.
3:04 GPT-4 and robotics.
4:28 GPT-4 and the disabled.
5:36 Microsoft Visual ChatGPT


The agency will look at developing a standard policy for setting privacy rules on artificial intelligence.

AI-enabled language models have become commonplace of late, spearheaded by the disruption caused by OpenAI’s ChatGPT. In its wake, other technology players such as Google and Microsoft have scrambled to catch up by introducing their own models to the public.

As a counterbalance, authorities around the world are working to develop a common framework for regulating the industry.



University of Oxford professor explains how conscious machines are possible.

Up next, The intelligence explosion: Nick Bostrom on the future of AI ► https://youtu.be/1WcpN4ds0iY

In his book “A Brief History of AI,” Michael Wooldridge, a professor of computer science at the University of Oxford and an AI researcher, explains that AI is not about creating life, but rather about creating machines that can perform tasks requiring intelligence.

Wooldridge discusses the two main approaches to AI: symbolic AI and machine learning. Symbolic AI involves coding human knowledge into machines, while machine learning allows machines to learn from examples to perform specific tasks. Progress in AI stalled in the 1970s due to a lack of data and computational power, but recent advances in technology have driven rapid gains. AI can already perform narrow tasks better than humans, but the grand dream of the field is artificial general intelligence (AGI): machines with the same intellectual capabilities as humans. One challenge for AI is giving machines social skills, such as cooperation, coordination, and negotiation.

The path to conscious machines is slow and complex, and the mystery of human consciousness and self-awareness remains unsolved. The limits of computing are only bounded by imagination.

0:00 The Hollywood dream of AI: consciousness.