
This week our guest is academic philosopher Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University and author of the 2019 book Artificial You: AI and the Future of Your Mind. In this episode we focus on Susan’s thoughts, hopes, and concerns about the current conversations around artificial intelligence. This includes, but is certainly not limited to, the philosophical and ethical questions that AI presents, the feasibility of mind uploading and machine consciousness, the ways we may end up outsourcing our decision-making to machines, how we might merge with machines, and how these potential tech futures might affect identity and sense of self. You can learn more about Susan at schneiderwebsite.com, and find out how to get involved with her work at fau.edu/future-mind.

Host: Steven Parton
Music by: Amine el Filali


Molecular computing is a promising area of study that aims to use biological molecules to build programmable devices. The idea was first introduced in the mid-1990s and has since been explored and demonstrated by computer scientists and physicists worldwide.

Researchers at East China Normal University and Shanghai Jiao Tong University have recently developed molecular convolutional neural networks (CNNs) based on synthetic DNA regulatory circuits. Their approach, introduced in a paper published in Nature Machine Intelligence, overcomes some of the challenges typically encountered when creating efficient artificial neural networks from molecular components.

“The intersection of computer science and [biology] is a fertile ground for new and exciting science, especially as the design of intelligent systems is a longstanding goal for scientists,” Hao Pei, one of the researchers who carried out the study, told TechXplore. “Compared to the brain, the scale and computing power of the DNA neural networks developed so far are severely limited due to size limitations. The primary objective of our work was to scale up the computing power of DNA circuits by introducing a suitable model for DNA molecular systems.”
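
To make the underlying computation concrete, here is a minimal in-silico sketch, in ordinary Python/NumPy rather than DNA strands, of the kind of operation such molecular classifiers carry out: a weighted sum of an input patch against small kernels followed by a winner-take-all readout (a readout commonly used in DNA neural network demonstrations). The patch, kernels, and readout here are illustrative placeholders, not taken from the paper.

```python
# Illustrative in-silico analogue (not the authors' DNA implementation) of a
# molecular convolutional classifier: weighted sums over an input patch,
# then a winner-take-all readout that keeps only the dominant signal.
import numpy as np

def convolve_patch(patch, kernels):
    """Weighted sum of one input patch against each kernel."""
    return np.array([np.sum(patch * k) for k in kernels])

def winner_take_all(scores):
    """One-hot vector marking the largest score, mimicking how a
    winner-take-all circuit amplifies only the strongest output."""
    out = np.zeros_like(scores)
    out[np.argmax(scores)] = 1.0
    return out

# Toy 3x3 input pattern and two hypothetical kernels (placeholders).
patch = np.array([[1, 0, 1],
                  [0, 1, 0],
                  [1, 0, 1]], dtype=float)
kernels = [np.eye(3), np.ones((3, 3)) / 9.0]

scores = convolve_patch(patch, kernels)
print("class scores:", scores, "-> winner:", winner_take_all(scores))
```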

Using GPT-3, Calamity AI developed a short film script called Date Night. GPT-3, the third-generation Generative Pre-trained Transformer developed by OpenAI, is a neural network model trained on internet data to generate text. Given only a small amount of input text, it can produce large volumes of relevant, sophisticated machine-generated content, and it has been used to create articles, poetry, stories, news reports, and dialogue.
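
As a rough illustration of the workflow, the sketch below prompts a GPT-3-family completion model with a short premise and asks for a scene of dialogue. It uses the legacy (pre-1.0) openai Python client; the model name, prompt, and parameters are assumptions for illustration, and this is not Calamity AI’s actual pipeline.

```python
# Minimal sketch of prompting a GPT-3 completion model to draft script
# dialogue from a short premise. Uses the legacy openai Python client
# (pre-1.0 interface); model name and prompt are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

premise = "Two strangers meet on a disastrous blind date. Write the opening scene."

response = openai.Completion.create(
    model="text-davinci-003",   # assumed GPT-3-family completion model
    prompt=f"Write a short film scene.\nPremise: {premise}\nSCENE:\n",
    max_tokens=300,
    temperature=0.9,            # higher temperature for more varied, creative output
)

print(response.choices[0].text)
```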

Enter Calamity AI, a pair of film students in California who collaborate with an AI to write original short films and produce them for YouTube. The project aims to showcase what AI and humans can accomplish working in tandem, since the limitations of artificial intelligence keep it from handling every element of the filmmaking process.


The world of technology is rapidly shifting from flat media viewed in the third person to immersive media experienced in the first person. Recently dubbed “the metaverse,” this major transition in mainstream computing has ignited a new wave of excitement over the core technologies of virtual and augmented reality. But there is a third technology, known as telepresence, that is often overlooked and yet will become an important part of the metaverse.

While virtual reality brings users into simulated worlds, telepresence (also called telerobotics) uses remote robots to bring users to distant places, giving them the ability to look around and perform complex tasks. This concept goes back to science fiction of the 1940s and a seminal short story by Robert A. Heinlein entitled Waldo. If we combine that concept with another classic sci-fi tale, Fantastic Voyage (1966), we can imagine tiny robotic vessels that go inside the body and swim around under the control of doctors who diagnose patients from the inside, and even perform surgical tasks.

The design of protein sequences that can precisely fold into pre-specified 3D structures is a challenging task. A recently proposed deep-learning algorithm improves such designs when compared with traditional, physics-based protein design approaches.

ABACUS-R is trained on the task of predicting the amino acid (AA) at a given residue, using information about that residue’s backbone structure, and the backbone and AA of neighboring residues in space. To do this, ABACUS-R uses the Transformer neural network architecture [6], which offers flexibility in representing and integrating information between different residues. Although these aspects are similar to a previous network [2], ABACUS-R adds auxiliary training tasks, such as predicting secondary structures, solvent exposure and sidechain torsion angles. These outputs aren’t needed during design but help with training and increase sequence recovery by about 6%. To design a protein sequence, ABACUS-R uses an iterative ‘denoising’ process (Fig.
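
The schematic sketch below illustrates that iterative ‘denoising’ loop as described above: start from a random sequence, repeatedly re-predict each residue’s amino acid given the fixed backbone and the current identities of its neighbors (simplified here to sequence neighbors rather than spatial ones), and stop once the predictions are self-consistent. The predict_residue function is a hypothetical placeholder standing in for the trained Transformer; none of this is the published ABACUS-R code.

```python
# Schematic sketch of an iterative "denoising" sequence-design loop.
# predict_residue is a toy placeholder for the trained network.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_residue(backbone, sequence, i):
    """Placeholder prediction: deterministically pick an amino acid from
    the residue's backbone label and its immediate sequence neighbours."""
    left = sequence[i - 1] if i > 0 else ""
    right = sequence[i + 1] if i + 1 < len(sequence) else ""
    return AMINO_ACIDS[hash((backbone[i], left, right)) % len(AMINO_ACIDS)]

def design_sequence(backbone, max_iters=50):
    """Iterative 'denoising': re-predict every residue until the sequence
    stops changing (or the iteration budget runs out)."""
    n = len(backbone)
    seq = [random.choice(AMINO_ACIDS) for _ in range(n)]  # noisy starting sequence
    for _ in range(max_iters):
        new_seq = [predict_residue(backbone, seq, i) for i in range(n)]
        if new_seq == seq:  # converged to a self-consistent design
            break
        seq = new_seq
    return "".join(seq)

# Toy "backbone" represented abstractly by residue indices.
print(design_sequence(backbone=list(range(30))))
```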