Scientists Use AI to Create Music through Proteins

The first time a language model was used to synthesize human proteins.

AI models have been flexing their muscles of late. ChatGPT has become the poster child for systems that comprehend human language. Now a team of researchers has tested a language model that creates amino acid sequences, demonstrating an ability to mimic aspects of human biology and evolution.

The language model, which is named ProGen, is capable of generating protein sequences with a certain degree of control. The result was achieved by training the model to learn the composition of proteins. The experiment marks the first time a language model was used to synthesize human proteins.

A study describing the research was published Thursday in the journal *Nature Biotechnology*. The project was a joint effort by researchers at the University of California, San Francisco; the University of California, Berkeley; and Salesforce Research, the science arm of a software company based in San Francisco.

## The significance of using a language model

Researchers say that a language model was used for its ability to generate protein sequences with a predictable function across large protein families, akin to generating grammatically and semantically correct natural language sentences on diverse topics.

“In the same way that words are strung together one-by-one to form text sentences, amino acids are strung together one-by-one to make proteins,” Nikhil Naik, the Director of AI Research at Salesforce Research, told *Motherboard*. The team applied “neural language modeling to proteins for generating realistic, yet novel protein sequences.”
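Naik's analogy can be made concrete with a toy sketch. The snippet below treats the 20 standard amino acids as a vocabulary and samples new sequences one residue at a time, the same autoregressive pattern used for text generation. Note that this is purely illustrative: ProGen is a large transformer network, not the simple bigram table used here, and the training sequences below are made up for the example.

```python
import random

# The 20 standard amino acids, written as one-letter codes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def train_bigram_model(sequences):
    """Count amino-acid bigrams in the training sequences.

    A real protein language model like ProGen learns these statistics
    with a transformer; this add-one-smoothed bigram table only
    illustrates the idea of learning which residue tends to follow which.
    """
    counts = {a: {b: 1 for b in AMINO_ACIDS} for a in AMINO_ACIDS}
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, length=10, seed=0):
    """Sample a sequence one residue at a time, like text generation."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS)]
    for _ in range(length - 1):
        row = counts[seq[-1]]
        residues, weights = zip(*row.items())
        seq.append(rng.choices(residues, weights=weights)[0])
    return "".join(seq)

# Hypothetical training data, stand-ins for real protein sequences.
training_set = ["MKTAYIAKQR", "MKLVINGKTL", "MKAILVVLLY"]
model = train_bigram_model(training_set)
protein = generate(model, length=12)
print(protein)
```

The point of the sketch is the shared machinery: whether the tokens are words or residues, the model predicts the next token from the ones before it.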

ChatGPT is about to get even better and Microsoft’s Bing could win big

Google worked to reassure investors and analysts on Thursday during its quarterly earnings call that it’s still a leader in developing AI. The company’s Q4 2022 results were highly anticipated as investors and the tech industry awaited Google’s response to the popularity of OpenAI’s ChatGPT, which has the potential to threaten its core business.

During the call, Google CEO Sundar Pichai talked about the company’s plans to make AI-based large language models (LLMs) like LaMDA available in the coming weeks and months. Pichai said users will soon be able to use large language models as a companion to search. An LLM, like ChatGPT, is a deep learning algorithm that can recognize, summarize and generate text and other content based on knowledge from enormous amounts of text data. Pichai said the models that users will soon be able to use are particularly good for composing, constructing and summarizing.

“Now that we can integrate more direct LLM-type experiences in Search, I think it will help us expand and serve new types of use cases, generative use cases,” Pichai said. “And so, I think I see this as a chance to rethink and reimagine and drive Search to solve more use cases for our users as well. It’s early days, but you will see us be bold, put things out, get feedback and iterate and make things better.”

Pichai’s comments about the possible ChatGPT rival come as a report revealed this week that Microsoft is working to incorporate a faster version of ChatGPT, known as GPT-4, into Bing, in a move that would make its search engine, which today has only a sliver of search market share, more competitive with Google. The popularity of ChatGPT has seen Google reportedly turning to co-founders Larry Page and Sergey Brin for help in combating the potential threat. The New York Times recently reported that Page and Brin had several meetings with executives to strategize about the company’s AI plans.

During the call, Pichai warned investors and analysts that the technology will need to scale slowly and that he sees large language model usage as still being in its “early days.” He also said that the company is developing AI with a deep sense of responsibility and that it’s going to be careful when launching AI-based products, as the company plans to initially launch beta features and then slowly scale up from there.

He went on to note that Google will provide new tools and APIs for developers, creators and partners to empower them to build their own applications and discover new possibilities with AI.

Science journals ban listing of ChatGPT as co-author on papers

The results highlight some potential strengths and weaknesses of ChatGPT.

Some of the world’s biggest academic journal publishers have banned or curbed their authors from using the advanced chatbot, ChatGPT. Because the bot uses information from the internet to produce highly readable answers to questions, the publishers are worried that inaccurate or plagiarised work could enter the pages of academic literature.

Several researchers have already listed the chatbot as a co-author in academic studies, and some publishers have moved to ban this practice. But the editor-in-chief of *Science*, one of the top scientific journals in the world, has gone a step further and forbidden any use of text from the program in submitted papers.

It’s not surprising the use of such chatbots is of interest to academic publishers. Our recent study, published in *Finance Research Letters*, showed ChatGPT could be used to write a finance paper that would be accepted for an academic journal. Although the bot performed better in some areas than in others, adding in our own expertise helped overcome the program’s limitations in the eyes of journal reviewers.

However, we argue that publishers and researchers should not necessarily see ChatGPT as a threat but rather as a potentially important aide for research — a low-cost or even free electronic assistant.

Our thinking was: if it’s easy to get good outcomes from ChatGPT by simply using it, maybe there’s something extra we can do to turn these good results into great ones.

We first asked ChatGPT to generate the standard four parts of a research study: research idea, literature review (an evaluation of previous academic research on the same topic), dataset, and suggestions for testing and examination. We specified only the broad subject and that the output should be capable of being published in “a good finance journal.”

Generative AI may only be a foreshock to AI singularity

Check out all the on-demand sessions from the Intelligent Security Summit here.

Generative AI, which is based on Large Language Models (LLMs) and transformer neural networks, has certainly created a lot of buzz. Unlike hype cycles around new technologies such as the metaverse, crypto and Web3, generative AI tools such as Stable Diffusion and ChatGPT are poised to have tremendous, possibly revolutionary impacts. These tools are already disrupting multiple fields — including the film industry — and are a potential game-changer for enterprise software.

All of this has led Ben Thompson to declare in his Stratechery newsletter that generative AI advances mark “a new epoch in technology.”

Bill Gates Reveals the Next Big Thing

The co-founder of Microsoft is convinced that artificial intelligence like ChatGPT will radically change our world.

Bill Gates has never been so excited.

The co-founder of software giant Microsoft has jumped on the artificial intelligence bandwagon, as AI has become the buzzword of the tech world in recent weeks.

Mind-Blowing Breakthroughs in AI: Discover the Future of Narrow, General and Specific AI

https://youtube.com/watch?v=9PuwqjDiGcg&feature=share

Unlock the secrets of artificial intelligence in this comprehensive video. Explore the different categories of AI, such as narrow or general AI, and discover the differences between them. Delve into specific types of AI, including natural language processing, computer vision, and machine learning. Learn about the practical applications of these technologies and discover how they’re shaping the future. This is a must-see video for anyone interested in understanding the complexities of AI and how it’s transforming our world. Don’t miss out, watch now!

Gabriel Kreiman — Computational Confessions of the Human Brain

Gabriel Kreiman is a Professor at Harvard Medical School. He is on faculty at Children’s Hospital and the Center for Brain Science at Harvard University. He is Associate Director and Thrust Leader in the Harvard/MIT Center for Brains, Minds, and Machines. He received his MSc and PhD from the California Institute of Technology and pursued postdoctoral work with Professor Poggio at MIT.

The Kreiman laboratory combines behavioral metrics, neurophysiological recordings and computational models to understand cognitive function and to build biologically inspired Artificial Intelligence systems. Kreiman’s work has focused on two main themes: understanding the transformation of pixel-like inputs into rich and complex visual percepts; and elucidating how the brain subjectively filters incoming inputs to create the lasting narratives that constitute the fabric of our personal experiences and knowledge.

The Nanorobot Surgeon You Can Swallow

In 1959, Richard Feynman made the famous assertion that one day we will be able to swallow the surgeon. Advancements in nanomedicine are making that dream come true. Nanoroboticist Metin Sitti shows the tiny robot that can take pictures, biopsy, and deliver medicine inside of you.

Watch the full program here: https://youtu.be/FzFY5ms3AUc.
Original program date: May 30, 2013

The World Science Festival gathers great minds in science and the arts to produce live and digital content that allows a broad general audience to engage with scientific discoveries. Our mission is to cultivate a general public informed by science, inspired by its wonder, convinced of its value, and prepared to engage with its implications for the future.


Scientists Made a Mind-Bending Discovery About How AI Actually Works

Researchers are starting to unravel one of the biggest mysteries behind the AI language models that power text and image generation tools like DALL-E and ChatGPT.

For a while now, machine learning experts and scientists have noticed something strange about large language models (LLMs) like OpenAI’s GPT-3 and Google’s LaMDA: they are inexplicably good at carrying out tasks that they haven’t been specifically trained to perform. It’s a perplexing phenomenon, and just one example of how difficult, if not impossible in most cases, it can be to explain in fine-grained detail how an AI model arrives at its outputs.

In a forthcoming study posted to the arXiv preprint server, researchers at the Massachusetts Institute of Technology, Stanford University, and Google explore this “apparently mysterious” phenomenon, known as “in-context learning.” Normally, to accomplish a new task, most machine learning models need to be retrained on new data, a process that can require researchers to input thousands of data points to get the output they desire — a tedious and time-consuming endeavor.
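In-context learning sidesteps that retraining step: the task is conveyed entirely through a few demonstrations embedded in the prompt, and the model is asked to continue the pattern, with no weight updates at all. The sketch below shows the few-shot prompt format this relies on. The function name and the translation task are illustrative assumptions, not something taken from the study; in a real experiment the resulting prompt would be sent to a model such as GPT-3.

```python
def build_few_shot_prompt(examples, query):
    """Format a handful of input/output pairs as a single prompt.

    The model is never retrained: it must infer the task from the
    demonstrations and complete the final, unanswered line.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Convey an English-to-French task with three demonstrations.
examples = [("sea", "mer"), ("sky", "ciel"), ("sun", "soleil")]
prompt = build_few_shot_prompt(examples, "moon")
print(prompt)
```

A model that answers “lune” here has, in effect, learned the task from three examples at inference time, which is exactly the behavior the researchers set out to explain.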