
Engineers at the University of Waterloo have developed artificial intelligence (AI) technology to predict whether women with breast cancer would benefit from chemotherapy before surgery.

The new AI algorithm, part of the open-source Cancer-Net initiative led by Dr. Alexander Wong, could help unsuitable candidates avoid the serious side effects of chemotherapy and pave the way for better surgical outcomes for those who are suitable.

“Determining the right treatment for a given breast cancer patient is very difficult right now, and it is crucial to avoid unnecessary side effects from using treatments that are unlikely to have real benefit for that patient,” said Wong, a professor of systems design engineering.

Architectures based on artificial neural networks (ANNs) have proved very helpful in research settings, as they can quickly analyze vast amounts of data and make accurate predictions. In 2020, Google’s British AI subsidiary DeepMind used a new ANN architecture dubbed the Fermionic neural network (FermiNet) to solve the Schrödinger equation for electrons in molecules, a central problem in the field of chemistry.

The Schrödinger equation is a partial differential equation grounded in the well-established theory of energy conservation; it can be used to derive information about the behavior of electrons and to solve problems related to the properties of matter. Using FermiNet, which is a conceptually simple method, DeepMind was able to solve this equation in the context of chemistry, attaining very accurate results comparable to those obtained using highly sophisticated quantum chemistry techniques.
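For readers unfamiliar with it, the equation in question is the time-independent Schrödinger equation in its standard electronic-structure form (this notation is standard textbook material, not taken from the article; FermiNet's specific Hamiltonian details are omitted here):

```latex
% Time-independent Schrödinger equation: the Hamiltonian acting on the
% many-electron wavefunction yields the total energy E times that wavefunction.
\hat{H}\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
```

FermiNet's role is to parameterize the wavefunction \(\psi\) with a neural network that respects the antisymmetry required for electrons (fermions), and then minimize the energy \(E\) variationally.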

Researchers at Imperial College London, DeepMind, Lancaster University, and the University of Oxford recently adapted the FermiNet architecture to tackle a quantum physics problem. In their paper, published in Physical Review Letters, they used FermiNet to calculate the ground states of periodic Hamiltonians and to study the homogeneous electron gas (HEG), a simplified quantum mechanical model of electrons interacting in solids.

Magic, a startup developing a code-generating platform similar to GitHub’s Copilot, today announced that it raised $23 million in a Series A funding round led by Alphabet’s CapitalG with participation from Elad Gil, Nat Friedman and Amplify Partners. So what’s its story?

Magic’s CEO and co-founder, Eric Steinberger, says that he was inspired by the potential of AI at a young age. In high school, he and his friends wired up the school’s computers for machine learning algorithm training, an experience that planted the seeds for Steinberger’s computer science degree and his job at Meta as an AI researcher.

“I spent years exploring potential paths to artificial general intelligence, and then large language models (LLMs) were invented,” Steinberger told TechCrunch in an email interview. “I realized that combining LLMs trained on code with my research on neural memory and reinforcement learning might allow us to build an AI software engineer that feels like a true colleague, not just a tool. This would be extraordinarily useful for companies and developers.”

Panelists: Michael Graziano, Jonathan Cohen, Vasudev Lal, Joscha Bach.

The seminal contribution “Attention Is All You Need” (Vaswani et al., 2017), which introduced the Transformer architecture, triggered a small revolution in machine learning. Unlike convolutional neural networks, which construct each feature out of a fixed neighborhood of signals, Transformers learn which data a feature on the next layer of a neural network should attend to. However, attention in neural networks is very different from the integrated attention in a human mind. In our minds, attention seems to be part of a top-down mechanism that actively creates a coherent, dynamic model of reality, and plays a crucial role in planning, inference, reflection, and creative problem solving. Our consciousness appears to be involved in maintaining the control model of our attention.
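The learned-attention mechanism described above can be made concrete with a minimal sketch of scaled dot-product attention, the core operation of the Transformer. This is an illustrative NumPy implementation with toy random inputs, not code from any of the systems discussed:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: each output position is a
    learned, data-dependent weighted mix of the value vectors.
    Q, K, V: arrays of shape (seq_len, d_k)."""
    d_k = Q.shape[-1]
    # Pairwise similarity between queries and keys, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over each row turns scores into attention weights summing to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Output is the attention-weighted combination of the values
    return weights @ V, weights

# Toy example: a sequence of 3 positions with 4-dimensional features
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

The contrast with convolution is visible in `weights`: rather than a fixed neighborhood, every position can attend to every other position, with the weighting determined by the data itself.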

In this panel, we want to discuss avenues into our understanding of attention, in the context of machine learning, cognitive science and future developments of AI.

Full program and references: https://cognitive-ai-panel.webflow.io/panels/attention

Generative AI represents a major breakthrough toward models that can make sense of the world by dreaming up visual, textual, and conceptual representations, and that are becoming increasingly generalist. While these AI systems are currently based on scaling up deep learning algorithms with massive amounts of data and compute, biological systems seem able to make sense of the world using far fewer resources. This phenomenon of efficient intelligent self-organization still eludes AI research, creating an exciting new frontier for the next wave of developments in the field. Our panelists will explore the potential of incorporating principles of intelligent self-organization from biology and cybernetics into technical systems as a way to move closer to general intelligence. Join in on this exciting discussion about the future of AI and how we can move beyond traditional approaches like deep learning!

This event is hosted and sponsored by Intel Labs as part of the Cognitive AI series.

“We are just at the beginning of our AI journey, and the best is yet to come,” said Google CEO Sundar Pichai.

Search engine giant Google is looking to make its artificial intelligence (AI)-based large language models available as a “companion to search,” CEO Sundar Pichai said during an earnings call on Thursday, Bloomberg reported.

A large language model (LLM) is a deep learning algorithm that can recognize and summarize content from massive datasets and use it to predict or generate text. OpenAI’s GPT-3 is one such LLM that powers the hugely popular chatbot, ChatGPT.
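The phrase “predict or generate text” can be unpacked with a toy sketch: at each step, a language model produces a score (logit) for every token in its vocabulary, softmax converts those scores into probabilities, and decoding picks the next token. The vocabulary and logit values below are invented for illustration; a real LLM has tens of thousands of tokens and learns its logits from data:

```python
import numpy as np

# Hypothetical logits a model might assign to candidate next tokens
# after a prompt like "the cat sat on the" -- values are illustrative only.
vocab = ["mat", "dog", "moon", "table"]
logits = np.array([3.2, 0.5, -1.0, 1.1])

# Softmax turns raw scores into a probability distribution over the vocabulary
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding: choose the most probable token as the prediction
next_token = vocab[int(np.argmax(probs))]
```

Generation is simply this step repeated: the chosen token is appended to the input and the model is queried again, which is how chatbots like ChatGPT produce running text.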

Google worked to reassure investors and analysts on Thursday during its quarterly earnings call that it’s still a leader in developing AI. The company’s Q4 2022 results were highly anticipated as investors and the tech industry awaited Google’s response to the popularity of OpenAI’s ChatGPT, which has the potential to threaten its core business.

During the call, Google CEO Sundar Pichai talked about the company’s plans to make AI-based large language models (LLMs) like LaMDA available in the coming weeks and months. Pichai said users will soon be able to use large language models as a companion to search. An LLM, like the one powering ChatGPT, is a deep learning algorithm that can recognize, summarize, and generate text and other content based on knowledge from enormous amounts of text data. Pichai said the models that users will soon be able to use are particularly good for composing, constructing, and summarizing.

“Now that we can integrate more direct LLM-type experiences in Search, I think it will help us expand and serve new types of use cases, generative use cases,” Pichai said. “And so, I think I see this as a chance to rethink and reimagine and drive Search to solve more use cases for our users as well. It’s early days, but you will see us be bold, put things out, get feedback and iterate and make things better.”

Welcome back to Future Fuse. Technology today is evolving at a rapid pace, enabling faster change and progress and accelerating the rate of change itself. It is not only technology trends and emerging technologies that are evolving; much more has changed this year due to the outbreak of COVID-19, making IT professionals realize that their roles will not stay the same in the contactless world of tomorrow. An IT professional in 2023–24 will constantly be learning, unlearning, and relearning (out of necessity if not desire).

Artificial intelligence will become more prevalent in 2023 with advances in natural language processing and machine learning. With this technology, AI can better understand us and perform more complex tasks. 5G is expected to revolutionize the way we live and work. From the evolution of artificial intelligence (AI), the Internet of Things (IoT), and 5G networks to cloud computing, big data, and analytics, technology has the potential to transform everything and reshape the future of the world.

Already, we are seeing the rapid roll-out of autonomous vehicles (self-driving cars), currently in trial phases at many car companies, while Elon Musk’s Tesla continues to refine the technology and make it more secure. Forward-thinking and innovative companies seem not to miss any chance to bring breakthrough innovation to the world. In this video, we look at 18 rapidly developing technologies that will revolutionize the world.

