
The world of technology is rapidly shifting from flat media viewed in the third person to immersive media experienced in the first person. Recently dubbed “the metaverse,” this major transition in mainstream computing has ignited a new wave of excitement over the core technologies of virtual and augmented reality. But there is a third technology area known as telepresence that is often overlooked but will become an important part of the metaverse.
While virtual reality brings users into simulated worlds, telepresence (also called telerobotics) uses remote robots to bring users to distant places, giving them the ability to look around and perform complex tasks. This concept goes back to science fiction of the 1940s and a seminal short story by Robert A. Heinlein entitled Waldo. If we combine that concept with another classic sci-fi tale, Fantastic Voyage (1966), we can imagine tiny robotic vessels that go inside the body and swim around under the control of doctors who diagnose patients from the inside, and even perform surgical tasks.
The design of protein sequences that can precisely fold into pre-specified 3D structures is a challenging task. A recently proposed deep-learning algorithm improves such designs when compared with traditional, physics-based protein design approaches.
ABACUS-R is trained to predict the amino acid (AA) at a given residue, using information about that residue's backbone structure and the backbones and AAs of spatially neighboring residues. To do this, ABACUS-R uses the Transformer neural-network architecture [6], which offers flexibility in representing and integrating information between different residues. Although these aspects are similar to a previous network [2], ABACUS-R adds auxiliary training tasks, such as predicting secondary structures, solvent exposure and side-chain torsion angles. These outputs aren't needed during design, but they help with training and increase sequence recovery by about 6%. To design a protein sequence, ABACUS-R uses an iterative 'denoising' process (Fig.
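The iterative 'denoising' loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the Transformer predictor is replaced by a deterministic toy function of the local sequence context, and the starting sequence, update schedule and convergence test are all assumptions made for the sketch.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def predict_residue(seq, i):
    """Stand-in for the trained network: pick the AA at position i from its
    sequence context. ABACUS-R would instead return the network's most
    probable AA given the backbone and the neighboring residues in space;
    this toy rule just hashes nearby residues so the loop is runnable."""
    context = seq[max(0, i - 2):i] + seq[i + 1:i + 3]
    return AMINO_ACIDS[sum(map(ord, context)) % len(AMINO_ACIDS)]

def design_sequence(length, max_iters=100):
    """Iterative denoising: start from a random sequence, then repeatedly
    re-predict every residue from its neighbors until the sequence stops
    changing (self-consistency) or the iteration budget runs out."""
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    for _ in range(max_iters):
        new = [predict_residue(seq, i) for i in range(length)]
        if new == seq:  # converged: every residue agrees with its context
            break
        seq = new
    return "".join(seq)
```

In the real algorithm the prediction step also consumes backbone geometry, which is what ties the designed sequence to the pre-specified 3D structure.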
Once datasets are cleaned, data visualization remodels them into intelligible graphics that put actionable insights on full display.
Chances are you've heard the phrase "a picture is worth a thousand words." What you may not know is that, depending on the context, this can be a somewhat misleading statement.

Hear us out. The human brain is hardwired to process images 60,000 times faster than text, and visuals account for 90% of the information we take in every day. These numbers make a convincing case that a picture deserves a little more credit than just a thousand words.
Normally, robotic arms are controlled by a GUI running on a host PC, or with some kind of analog system that maps human inputs to various degrees of rotation. However, Maurizio Miscio was able to build a custom robotic arm that is completely self-contained — thanks to a companion mobile app that resides on an old smartphone housed inside a control box.
Miscio started his project by making 3D models of each piece, most of which were 3D-printed. These included the gripper, various joints that each provide a single axis of rotation, and a large circular base that acts as a stable platform on which the arm can spin. He then set to work attaching a servo motor to each of the five rotational axes, along with a single SG90 micro servo for the gripper. These motors were connected to an Arduino Uno that also had an HC-05 Bluetooth® serial module for external communication.
To operate the arm, Miscio developed a mobile app with the help of MIT App Inventor, which presents the user with a series of buttons that rotate a particular servo motor to the desired degree. The app even lets a series of motions be recorded and "played back" to the Uno over Bluetooth for repeated, accurate movements.
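The command handling and record/playback idea can be sketched on the host side. The actual serial protocol between the app and the Uno isn't published here, so the `index:angle` message format, the servo count, and the class below are all hypothetical stand-ins for whatever Miscio's firmware does.

```python
class ArmController:
    """Sketch of the firmware's command loop: parse a message from the
    HC-05 serial link, move the addressed servo, and optionally record
    the move so a stored sequence can be replayed later."""

    NUM_SERVOS = 6  # assumption: five joint servos plus the SG90 gripper

    def __init__(self):
        self.angles = [90] * self.NUM_SERVOS  # start every servo centered
        self.recording = []

    def handle(self, command, record=False):
        """Apply one 'index:angle' command, e.g. '3:45' -> servo 3 to 45°."""
        idx_str, angle_str = command.strip().split(":")
        idx, angle = int(idx_str), int(angle_str)
        if not (0 <= idx < self.NUM_SERVOS and 0 <= angle <= 180):
            raise ValueError(f"bad command: {command!r}")
        self.angles[idx] = angle
        if record:
            self.recording.append((idx, angle))

    def play_back(self):
        """Replay every recorded move in order, as the app's playback does."""
        for idx, angle in self.recording:
            self.angles[idx] = angle
```

On the real hardware, the angle assignment would be a `Servo.write()` call on the Uno rather than a list update.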
Scientists have developed a new machine-learning platform that makes the algorithms that control particle beams and lasers smarter than ever before. Their work could help lead to the development of new and improved particle accelerators that will help scientists unlock the secrets of the subatomic world.
Daniele Filippetto and colleagues at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) developed the setup to automatically compensate for real-time changes to accelerator beams and other components, such as magnets. Their machine-learning approach is also better than contemporary beam-control systems at understanding why things fail and at using physics to formulate a response. A paper describing the research was published late last year in Scientific Reports.
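The general idea of model-based beam correction can be illustrated with a deliberately simplified sketch. This is not the Berkeley Lab system: it assumes a one-dimensional, linear relationship between a single magnet current and the beam position, fits that relationship from observed samples, and inverts it to steer the beam back to a target. The real setup learns far richer, nonlinear beam responses.

```python
def fit_linear_surrogate(samples):
    """Least-squares fit of position = gain * current + offset from
    (current, position) pairs — a toy stand-in for a learned model that
    predicts how the beam responds to an accelerator setting."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    cov = sum((x - mx) * (y - my) for x, y in samples)
    var = sum((x - mx) ** 2 for x, _ in samples)
    gain = cov / var
    return gain, my - gain * mx

def correct_beam(gain, offset, target):
    """Invert the surrogate: choose the magnet current predicted to put
    the beam at `target`, compensating for the observed drift."""
    return (target - offset) / gain
```

A controller would refit or update the surrogate as conditions drift, then reapply the correction — the "compensate in real time" loop described above.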
“We are trying to teach physics to a chip, while at the same time providing it with the wisdom and experience of a senior scientist operating the machine,” said Filippetto, a staff scientist at the Accelerator Technology & Applied Physics Division (ATAP) at Berkeley Lab and deputy director of the Berkeley Accelerator Controls and Instrumentation Program (BACI) program.