Don’t fret the AI job ‘apocalypse’. While we can expect disruption across different industries, this will come with opportunities.
Scientists Believe ‘Organoid Intelligence’ Is the Future of Computing. CNN reports that, as part of a new field called “organoid intelligence,” a computer powered by human brain cells could shape the future. Organoids are lab-grown tissues capable of brain-like functions, such as forming a network of connections. Brain organoids were first grown in 2012 by Dr. Thomas Hartung, a professor of environmental health and engineering, by altering human skin samples. Computing and artificial intelligence have been driving the technology revolution, but they are reaching a ceiling.
A new program can streamline the process of creating, launching and analysing computational chemistry experiments. This piece of software, called AQME, is distributed for free under an open source licence, and could contribute to making calculations more efficient, as well as accelerating automated analyses.
‘We estimate time savings of around 70% in routine computational chemistry protocols,’ explains lead author Juan Vicente Alegre Requena, at the Institute of Chemical Synthesis and Homogeneous Catalysis (ISQCH) in Zaragoza, Spain. ‘In modern molecular simulations, studying a single reaction usually involves more than 500 calculations,’ he explains. ‘Generating all the input files, launching the calculations and analysing the results requires an extraordinary amount of time, especially when unexpected errors appear.’
Therefore, Alegre and his colleagues decided to code a piece of software to skip several steps and streamline calculations. Among other advantages, AQME works with simple inputs, instead of the optimised 3D chemical structures usually required by other solutions. ‘It’s exceptionally easy,’ says Alegre. ‘AQME is installed in a couple of minutes, then the only indispensable input is a simple Smiles string.’ Smiles is a system developed by chemist and coder Dave Weininger in the late 1980s, which converts complex chemical structures into a succession of letters and numbers that is machine readable. This cross-compatibility could allow integration with chemical databases and machine-learning solutions, most of which include datasets in Smiles format, explains Alegre.
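To give a flavour of the format: a Smiles string encodes atoms and bonds as plain text, so ethanol is simply ‘CCO’. The toy function below, which counts heavy (non-hydrogen) atoms in a simple Smiles string, is an illustration written for this article only; it is not part of AQME and is nowhere near a full Smiles parser.

```python
def heavy_atom_count(smiles):
    """Count heavy (non-hydrogen) atoms in a simple SMILES string.

    A toy tokenizer: handles the two-letter halogens Cl and Br and
    lowercase aromatic atoms; skips bonds (=, #), branches (parentheses),
    ring-closure digits, brackets, and explicit hydrogens.
    """
    two_letter = {"Cl", "Br"}
    count = 0
    i = 0
    while i < len(smiles):
        if smiles[i:i + 2] in two_letter:
            count += 1
            i += 2
        elif smiles[i].isalpha() and smiles[i] not in "Hh":
            count += 1          # one-letter atom symbol (C, N, O, c, n, ...)
            i += 1
        else:
            i += 1              # bond symbol, digit, bracket, etc.
    return count
```

For example, ethanol (‘CCO’) has three heavy atoms and benzene (‘c1ccccc1’) has six; real workflows would hand the string to a cheminformatics library instead of counting characters.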
Computer models are an important tool for studying how the brain makes and stores memories and other types of complex information. But creating such models is a tricky business. Somehow, a symphony of signals—both biochemical and electrical—and a tangle of connections between neurons and other cell types creates the hardware for memories to take hold. Yet because neuroscientists don’t fully understand the underlying biology of the brain, encoding the process into a computer model in order to study it further has been a challenge.
Now, researchers at the Okinawa Institute of Science and Technology (OIST) have altered a commonly used computer model of memory called a Hopfield network in a way that improves performance by taking inspiration from biology. They found that not only does the new network better reflect how neurons and other cells wire up in the brain, it can also hold dramatically more memories.
The complexity added to the network is what makes it more realistic, says Thomas Burns, a Ph.D. student in the group of Professor Tomoki Fukai, who heads OIST’s Neural Coding and Brain Computing Unit. “Why would biology have all this complexity? Memory capacity might be a reason,” Mr. Burns says.
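For readers unfamiliar with the baseline model, a classical Hopfield network stores patterns in a symmetric weight matrix via a Hebbian learning rule, then recalls a stored pattern by repeatedly updating each neuron toward the sign of its weighted input. The sketch below is the standard textbook version, not the modified network described by the OIST group:

```python
def train_hopfield(patterns):
    """Hebbian rule: w[i][j] accumulates p[i]*p[j] over all stored patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=10):
    """Asynchronous updates: each +1/-1 neuron flips toward the sign
    of its total weighted input until the network settles."""
    state = list(state)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state
```

Starting from a corrupted copy of a stored pattern, the dynamics fall back into the memorised state; the capacity of this classical network scales only linearly with the number of neurons, which is exactly the limitation the OIST modification targets.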
Optical computing has been gaining wide interest for machine learning applications because of the massive parallelism and bandwidth of optics. Diffractive networks provide one such computing paradigm based on the transformation of the input light as it diffracts through a set of spatially-engineered surfaces, performing computation at the speed of light propagation without requiring any external power apart from the input light beam. Among numerous other applications, diffractive networks have been demonstrated to perform all-optical classification of input objects.
Researchers at the University of California, Los Angeles (UCLA), led by Professor Aydogan Ozcan, have introduced a “time-lapse” scheme to significantly improve the image classification accuracy of diffractive optical networks on complex input objects. The findings are published in the journal Advanced Intelligent Systems.
In this scheme, the object and/or the diffractive network are moved relative to each other during the exposure of the output detectors. Such a “time-lapse” scheme has previously been used to achieve super-resolution imaging, for example, in security cameras, by capturing multiple images of a scene with lateral movements of the camera.
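The underlying intuition, combining several measurements of the same object at different relative positions, has a simple software analogy: average a classifier’s scores over shifted copies of the input. The toy below is only that analogy, written for illustration; it bears no relation to the UCLA team’s actual optical implementation.

```python
def shifted_views(signal, shifts):
    """Circularly shift a 1-D input, mimicking lateral object/detector motion."""
    return [signal[s:] + signal[:s] for s in shifts]

def averaged_scores(classifier, signal, shifts):
    """Average per-class scores over all shifted views of the input."""
    views = shifted_views(signal, shifts)
    totals = None
    for v in views:
        scores = classifier(v)
        if totals is None:
            totals = [0.0] * len(scores)
        for c, s in enumerate(scores):
            totals[c] += s / len(views)
    return totals
```

Averaging over views tends to wash out position-dependent errors of any single exposure, which is the same reason multiple laterally shifted captures help the diffractive network.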
Even a couple of years ago, the idea that artificial intelligence might be conscious and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve witnessed a dizzying flurry of developments in AI, including language models like ChatGPT and Bing Chat that display remarkable skill at seemingly human conversation.
Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.
Experts are already contemplating the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly pondered whether “today’s large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real emotions. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.
Artificial intelligence algorithms have had a meteoric impact on protein structure, such as when DeepMind’s AlphaFold2 predicted the structures of 200 million proteins. Now, David Baker and his team of biochemists at the University of Washington have taken protein-folding AI a step further. In a Nature publication from February 22, they outlined how they used AI to design tailor-made, functional proteins that they could synthesize and produce in live cells, creating new opportunities for protein engineering. Ali Madani, founder and CEO of Profluent, a company that uses other AI technology to design proteins, says this study “went the distance” in protein design and remarks that we’re now witnessing “the burgeoning of a new field.”
Proteins are made up of different combinations of amino acids linked together in folded chains, producing a boundless variety of 3D shapes. Predicting a protein’s 3D structure based on its sequence alone is an impossible task for the human mind, owing to numerous factors that govern protein folding, such as the sequence and length of the biomolecule’s amino acids, how it interacts with other molecules, and the sugars added to its surface. Instead, scientists have determined protein structure for decades using experimental techniques such as X-ray crystallography, which can resolve protein folds in atomic detail by diffracting X-rays through crystallized protein. But such methods are expensive, time-consuming, and depend on skillful execution. Still, scientists using these techniques have managed to resolve thousands of protein structures, creating a wealth of data that could then be used to train AI algorithms to determine the structures of other proteins. DeepMind famously demonstrated that machine learning could predict a protein’s structure from its amino acid sequence with the AlphaFold system and then improved its accuracy by training AlphaFold2 on 170,000 protein structures.
Lek-Heng Lim uses tools from algebra, geometry and topology to answer questions in machine learning.
How far away are we from AGI? Does OpenAI have the right approach?
00:00 Intro
00:42 What is AGI?
01:08 OpenAI’s Origin Story
02:25 Enter GPT and Dall-E
03:15 OpenAI’s Roadmap
04:33 Closing Thoughts
Summary: According to researchers, language model AIs like ChatGPT reflect the intelligence and diversity of the user. Such language models adopt the persona of the user and mirror that persona back.
Source: Salk Institute.
The artificial intelligence (AI) language model ChatGPT has captured the world’s attention in recent months. This trained computer chatbot can generate text, answer questions, provide translations, and learn based on the user’s feedback. Large language models like ChatGPT may have many applications in science and business, but how much do these tools understand what we say to them and how do they decide what to say back?