Predicting how proteins bind to other molecules could revolutionize biochemistry and drug discovery.
Colin Jacobs, PhD, assistant professor in the Department of Medical Imaging at Radboud University Medical Center in Nijmegen, The Netherlands, and Kiran Vaidhya Venkadesh, a second-year PhD candidate with the Diagnostic Image Analysis Group at Radboud University Medical Center, discuss their 2021 Radiology study, which used CT images from the National Lung Screening Trial (NLST) to train a deep learning algorithm to estimate the malignancy risk of lung nodules.
A Generation of AI Guinea Pigs
Salesforce recently announced that it has rolled out more than 50 AI-powered tools across its workforce and reported that these tools have collectively saved its employees more than 50,000 hours, or roughly 24 years' worth of working time, in just three months.
As a company, Salesforce serves as an especially compelling case study for the impact of AI on work, not only because it tests tools on its own workforce, but because so many others rely on Salesforce's products to do their jobs each day. Simply put: Salesforce is in the business of work.
Salesforce has more than 70,000 employees worldwide—a 30% increase since 2020. And the software giant builds the products that are used by employees at some 150,000 workplaces, from small businesses to Fortune 500 companies; from sales and customer service teams to marketing and tech teams.
Apple's first reveal of the new macOS Sequoia includes a way to remotely control your iPhone directly from the Mac, and a new Apple Passwords app.
Announced in the WWDC 2024 keynote, macOS 15 is called macOS Sequoia, and as expected, it brings AI — or Apple Intelligence — to every platform and practically every feature.
Across macOS Sequoia and Apple's other platforms, users can write, summarize, and proofread text almost system-wide with Writing Tools. They will also be able to generate sketches, animations, or illustrations with Image Playground, which is built into apps including Messages and also has its own brand-new app.
There are plenty of reasons why Google would be interested in going down this route. For example, closer integration would make Android handsets more compatible with Chromebooks. However, it appears the main reason for the move is to accelerate the delivery of AI features.
As the Mountain View-based firm explains, having Chrome OS lean more on Android’s tech stack will make it easier to bring new AI features to Chromebooks. The company adds that along with the change, it wants to maintain the “security, consistent look and feel, and extensive management capabilities” that users are acquainted with.
Google is working on the updates starting today, but notes that users won’t see the changes for a while. The tech giant claims that when everything is ready, the transition will be seamless.
Hong Kong (CNN) — Tesla is one step closer to launching full self-driving (FSD) technology in China after it clinched an agreement with Baidu to upgrade its mapping software.
The Chinese tech giant said Saturday that it was providing lane-level navigation services for Tesla cars. Baidu (BIDU) says this level of navigation can provide drivers with detailed information, including making lane recommendations ahead of upcoming turns, to enhance safety.
The AI boom and soaring demand for Nvidia GPUs have propelled the company's stock and earned CEO Jensen Huang a reputation as a visionary. Even Mark Zuckerberg calls him the "Taylor Swift of tech."
People who have worked for Huang on Nvidia’s journey to become a $3 trillion-plus company previously described how he can be a “demanding” boss.
Eight current and former Nvidia employees spoke to Business Insider about Huang’s leadership style and what it’s like to be grilled by him. These people asked not to be named as they were not authorized to speak to the media.
And manufacturers are keen to bring additional screens into play, from Lenovo's 2009 ThinkPad W700 with its built-in extendable tablet to modern devices like the Asus ZenBook Duo or Lenovo Yoga Book 9i — and some frankly absurd variants along the way.
But what's next for the laptop? Will it be Lenovo's transparent laptop, or will AI transform the laptop into handheld devices, the way the Steam Deck and ROG Ally X represent a potential reinvention of the gaming laptop? In my opinion, and that of many others, the next step is augmented reality.
Modern laptops are stuck between two competing desires: smaller form factors and larger displays. Both have their benefits, but you can't gain more of one without giving up some of the other.
Wetware computing and organoid intelligence is an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, similar to how Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks built from biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally accessible to researchers worldwide. For this reason, we developed a hardware and software system that allows for electrophysiological experiments on an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes exceeding 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulation. We also designed a microfluidic system that allows for fully automated medium flow and exchange, reducing disruptions from physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform was utilized with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data. A dedicated Application Programming Interface (API) has been developed to conduct remote research directly via our Python library or using interactive compute such as Jupyter Notebooks. In addition to electrophysiological operations, our API also controls pumps, digital cameras, and UV lights for molecule uncaging. This allows for the execution of complex 24/7 experiments, including closed-loop strategies and processing using the latest deep learning or reinforcement learning libraries. Furthermore, the infrastructure supports entirely remote use.
Currently in 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system’s architecture and provides specific examples of experiments and results.
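To make the closed-loop idea concrete, here is a minimal sketch of what such an experiment loop might look like. The Neuroplatform's actual Python library and its method names are not given in the text above, so `SimulatedOrganoid`, `read_spike_rate`, and `stimulate` are hypothetical stand-ins; the loop structure (read activity, decide, stimulate) is the general pattern a closed-loop protocol follows.

```python
import random


class SimulatedOrganoid:
    """Hypothetical stand-in for a Neuroplatform API client.

    The real platform reads action potentials from electrodes and
    delivers electrical stimulation; here we fake both so the sketch
    runs without any hardware.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.stim_count = 0

    def read_spike_rate(self):
        # Simulated firing rate in Hz: baseline noise plus a small,
        # lasting boost for each past stimulation.
        baseline = 5.0 + self.rng.random() * 2.0
        return baseline + 0.5 * self.stim_count

    def stimulate(self, amplitude_ua):
        # A real client would send a current pulse; we just record it.
        self.stim_count += 1


def closed_loop(organoid, target_hz, steps):
    """Toy closed-loop protocol: stimulate whenever activity is below target."""
    history = []
    for _ in range(steps):
        rate = organoid.read_spike_rate()
        if rate < target_hz:
            organoid.stimulate(amplitude_ua=10)
        history.append(rate)
    return history


org = SimulatedOrganoid()
rates = closed_loop(org, target_hz=10.0, steps=20)
```

In a real deployment, the read and stimulate calls would go over the network to the platform's API, and the decision step could be replaced by a reinforcement learning policy, which is exactly the kind of 24/7 closed-loop strategy the abstract describes.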
The recent rise of wetware computing, and with it of artificial biological neural networks (BNNs), comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.
The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally rely on ANNs.