Optical computing has been gaining wide interest for machine learning applications because of the massive parallelism and bandwidth of optics. Diffractive networks provide one such computing paradigm based on the transformation of the input light as it diffracts through a set of spatially-engineered surfaces, performing computation at the speed of light propagation without requiring any external power apart from the input light beam. Among numerous other applications, diffractive networks have been demonstrated to perform all-optical classification of input objects.

Researchers at the University of California, Los Angeles (UCLA), led by Professor Aydogan Ozcan, have introduced a “time-lapse” scheme to significantly improve the accuracy of diffractive optical networks on complex input objects. The findings are published in the journal Advanced Intelligent Systems.

In this scheme, the object and/or the diffractive network are moved relative to each other during the exposure of the output detectors. Such a “time-lapse” scheme has previously been used to achieve super-resolution imaging, for example by capturing multiple images of a scene with lateral movements of the camera.
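
To make the idea concrete, here is a minimal, illustrative Python sketch (not the published UCLA implementation): an input field diffracts through a few phase-only layers simulated with angular-spectrum propagation, and the detector reading is averaged over lateral shifts of the input object to mimic a time-lapse exposure. The wavelength, pixel pitch, layer spacing, phase masks, shift amounts, and detector regions are all assumed values chosen for illustration.

```python
import numpy as np

# Illustrative sketch only: a complex optical field passes through a few
# phase-only "diffractive layers", with free-space propagation between layers
# simulated by the angular spectrum method; the detector reading is then
# averaged over lateral shifts of the object to mimic a time-lapse exposure.
# All numeric values below are assumptions, not parameters from the paper.

N = 128                # simulation grid (pixels per side)
wavelength = 0.75e-3   # illumination wavelength in meters (assumed, THz-range)
pitch = 0.4e-3         # layer feature / pixel pitch in meters (assumed)
spacing = 3e-2         # layer-to-layer distance in meters (assumed)

def propagate(field, distance):
    """Free-space propagation via the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance)
    H[arg < 0] = 0.0                       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Random phase masks stand in for trained diffractive layers.
rng = np.random.default_rng(0)
phase_masks = [rng.uniform(0.0, 2.0 * np.pi, (N, N)) for _ in range(3)]

def diffractive_output(field):
    """Intensity at the output plane after passing through all layers."""
    for phase in phase_masks:
        field = propagate(field, spacing)
        field = field * np.exp(1j * phase)  # phase-only modulation by the layer
    return np.abs(propagate(field, spacing)) ** 2

# Toy input object: a bright square on a dark background.
obj = np.zeros((N, N), dtype=complex)
obj[48:80, 48:80] = 1.0

# "Time-lapse" readout: the detector integrates while the object shifts
# laterally, so the recorded signal approximates an average over shifted
# copies of the object (shift amounts in pixels are assumed).
shifts = [(0, 0), (0, 2), (2, 0), (2, 2)]
recorded = np.zeros((N, N))
for dy, dx in shifts:
    shifted = np.roll(np.roll(obj, dy, axis=0), dx, axis=1)
    recorded += diffractive_output(shifted)
recorded /= len(shifts)

# A classifier would compare total power over detector regions, one per class.
print("power on detector A:", recorded[20:44, 20:44].sum())
print("power on detector B:", recorded[84:108, 84:108].sum())
```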

Even a couple of years ago, the idea that artificial intelligence might be conscious and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve witnessed a dizzying flurry of developments in AI, including language models like ChatGPT and Bing Chat with remarkable skill at seemingly human conversation.

Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.

Experts are already contemplating the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly pondered whether “today’s large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real emotions. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.

Artificial intelligence algorithms have had a meteoric impact on protein structure prediction, such as when DeepMind’s AlphaFold2 predicted the structures of 200 million proteins. Now, David Baker and his team of biochemists at the University of Washington have taken protein-folding AI a step further. In a Nature publication from February 22, they outlined how they used AI to design tailor-made, functional proteins that they could synthesize and produce in live cells, creating new opportunities for protein engineering. Ali Madani, founder and CEO of Profluent, a company that uses other AI technology to design proteins, says this study “went the distance” in protein design and remarks that we’re now witnessing “the burgeoning of a new field.”

Proteins are made up of different combinations of amino acids linked together in folded chains, producing a boundless variety of 3D shapes. Predicting a protein’s 3D structure based on its sequence alone is an impossible task for the human mind, owing to numerous factors that govern protein folding, such as the sequence and length of the biomolecule’s amino acids, how it interacts with other molecules, and the sugars added to its surface. Instead, scientists have determined protein structure for decades using experimental techniques such as X-ray crystallography, which can resolve protein folds in atomic detail by diffracting X-rays through crystallized protein. But such methods are expensive, time-consuming, and depend on skillful execution. Still, scientists using these techniques have managed to resolve thousands of protein structures, creating a wealth of data that could then be used to train AI algorithms to determine the structures of other proteins. DeepMind famously demonstrated that machine learning could predict a protein’s structure from its amino acid sequence with the AlphaFold system and then improved its accuracy by training AlphaFold2 on 170,000 protein structures.

How far away are we from AGI? Does OpenAI have the right approach?

00:00 Intro.
00:42 What is AGI?
01:08 OpenAI’s Origin Story.
02:25 Enter GPT and Dall-E.
03:15 OpenAI’s Roadmap.
04:33 Closing Thoughts.

Summary: According to researchers, language model AIs like ChatGPT reflect the intelligence and diversity of the user. Such language models adopt the persona of the user and mirror that persona back.

Source: Salk Institute.

The artificial intelligence (AI) language model ChatGPT has captured the world’s attention in recent months. This trained computer chatbot can generate text, answer questions, provide translations, and learn based on the user’s feedback. Large language models like ChatGPT may have many applications in science and business, but how much do these tools understand what we say to them and how do they decide what to say back?

The FDA denied Elon Musk’s application to test his brain chip on humans. In addition to ruining Twitter, Elon Musk is on a mission to implant AI technology in the human brain. Neuralink, one of Musk’s five companies, is in the process of developing neural-interface technology, an advancement that would mean putting chips in human brains.

Did you think that technology becoming too advanced and wiping out humanity was something that happened only in movies? You might be shocked by what you find today.

Robots ‘will reach human intelligence by 2029 and life as we know it will end in 2045’.

This isn’t the prediction of a conspiracy theorist, a blind dead woman, or an octopus, but of Google’s director of engineering, Ray Kurzweil.

Kurzweil has said that the work happening now ‘will change the nature of humanity itself’.