Archive for the ‘robotics/AI’ category: Page 137

Jun 14, 2024

Google is making Chrome OS more like Android to deliver more AI features

Posted in categories: mobile phones, robotics/AI, security

There are plenty of reasons why Google would be interested in going down this route. For example, closer integration would make Android handsets more compatible with Chromebooks. However, it appears the main reason for the move is to accelerate the delivery of AI features.

As the Mountain View-based firm explains, having Chrome OS lean more on Android’s tech stack will make it easier to bring new AI features to Chromebooks. The company adds that along with the change, it wants to maintain the “security, consistent look and feel, and extensive management capabilities” that users are acquainted with.

Google is working on the updates starting today, but notes that users won’t see the changes for a while. The tech giant claims that when everything is ready, the transition will be seamless.

Jun 14, 2024

Tesla clears another hurdle to launching full self-driving in China

Posted in categories: mapping, robotics/AI, transportation

Hong Kong (CNN) — Tesla is one step closer to launching full self-driving (FSD) technology in China after it clinched an agreement with Baidu to upgrade its mapping software.

The Chinese tech giant said Saturday that it was providing lane-level navigation services for Tesla cars. Baidu (BIDU) says this level of navigation can provide drivers with detailed information, including making lane recommendations ahead of upcoming turns, to enhance safety.

Jun 14, 2024

Nvidians say Jensen Huang is a perfectionist who asks tough questions — and expects them to admit mistakes

Posted in category: robotics/AI

The AI boom and soaring demand for Nvidia GPUs have propelled the company’s stock and earned the Nvidia CEO a reputation as a visionary. Even Mark Zuckerberg calls him the “Taylor Swift of tech.”

People who have worked for Huang on Nvidia’s journey to become a $3 trillion-plus company previously described how he can be a “demanding” boss.

Eight current and former Nvidia employees spoke to Business Insider about Huang’s leadership style and what it’s like to be grilled by him. These people asked not to be named as they were not authorized to speak to the media.

Jun 14, 2024

‘People have to see it to believe it’: We asked an expert about AR laptops and the challenges in this booming market

Posted in categories: augmented reality, robotics/AI

And manufacturers are keen to bring additional screens into play, from Lenovo’s 2009 ThinkPad W700 with its built-in extendable tablet to modern devices like the Asus ZenBook Duo or the Lenovo Yoga Book 9i — and some frankly absurd variants along the way.

But what’s next for the laptop? Will it be Lenovo’s transparent laptop, or will AI transform the laptop into handheld devices, the way the Steam Deck and ROG Ally X represent a potential reinvention of the gaming laptop? In my opinion, and that of many others, the next step is augmented reality.

Modern laptops are stuck between two competing desires: smaller form factors and larger displays. Each has its benefits, but you can’t gain more of one without giving up some of the other.

Jun 14, 2024

Open and remotely accessible Neuroplatform for research in wetware computing

Posted in categories: biological, robotics/AI

Wetware computing and organoid intelligence form an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, similar to how Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks made of biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally accessible to researchers worldwide. For this reason, we developed a hardware and software system that allows for electrophysiological experiments on an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes that can exceed 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulation. We also designed a microfluidic system that allows for fully automated medium flow and change, reducing disruptions from physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform has been used with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data. A dedicated Application Programming Interface (API) has been developed to conduct remote research directly via our Python library or through interactive computing environments such as Jupyter Notebooks. In addition to electrophysiological operations, the API also controls pumps, digital cameras and UV lights for molecule uncaging. This allows for the execution of complex 24/7 experiments, including closed-loop strategies and processing with the latest deep learning or reinforcement learning libraries. Furthermore, the infrastructure supports entirely remote use. As of 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system’s architecture and provides specific examples of experiments and results.
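
To make the description above concrete, here is a minimal Python sketch of what a remote, closed-loop stimulation experiment could look like. The client class and its methods (read_spikes, stimulate) are hypothetical placeholders invented for illustration; they are not the actual Neuroplatform library and only mirror the capabilities the abstract describes: continuous spike readout and electrical stimulation driven by a simple feedback policy.

```python
import time


class OrganoidClient:
    """Hypothetical stand-in for a remote wetware-computing API client.

    The real Neuroplatform exposes a Python library; the names below are
    invented for illustration only.
    """

    def __init__(self, host: str, token: str):
        self.host = host
        self.token = token

    def read_spikes(self, electrode: int, window_ms: int) -> int:
        # Placeholder: would return the spike count recorded on one
        # electrode over the given time window.
        return 0

    def stimulate(self, electrode: int, amplitude_ua: float, duration_ms: int) -> None:
        # Placeholder: would deliver a current pulse to the electrode.
        pass


def closed_loop(client: OrganoidClient, target_rate: float, steps: int = 20) -> None:
    """Toy closed-loop policy: raise stimulation when activity is below target."""
    amplitude = 1.0  # microamps, arbitrary starting point
    for _ in range(steps):
        spikes = client.read_spikes(electrode=3, window_ms=200)
        rate = spikes / 0.2  # spikes per second
        # Simple proportional update toward the target firing rate.
        amplitude += 0.05 * (target_rate - rate)
        amplitude = max(0.0, min(amplitude, 10.0))  # clamp to a safe range
        client.stimulate(electrode=3, amplitude_ua=amplitude, duration_ms=5)
        time.sleep(0.2)


if __name__ == "__main__":
    client = OrganoidClient(host="neuroplatform.example.org", token="demo")
    closed_loop(client, target_rate=5.0)
```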

The recent rise in wetware computing and, consequently, artificial biological neural networks (BNNs) comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.

The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally relies on ANNs.

Jun 14, 2024

Segway’s robot mower spared me from my least favorite chore

Posted in categories: robotics/AI, satellites

I’m sure some of you have looked at robo mowers as Roombas for your yard but, sadly, many of them require you to install a boundary wire around the perimeter of your lawn. And any product that requires you to dig a trench is the opposite of what “low effort” means to me. That’s why I was interested in trying Segway’s Navimow i105, its £945 (around $1,200) GPS-equipped mower which eliminates that busywork. And keeping your lawn neat and tidy is a job that’s all busywork.

Ask a gardener and they’ll tell you the secret to a great lawn is to seed a piece of flat land and then mow it into submission. Regular, militant mowing kills off all the other flora, ensuring only grass can grow until everything looks well-manicured. But that relentless mowing requires a lot of time, a luxury I’ve never had. It’s the sort of job a robot mower was born to do, given it can scuttle around and trim grass without you there.

Segway’s i Series is the company’s latest, more affordable offering compared to its pricier S Series. The new units have a smaller battery and range, with the i105 able to handle areas up to 500 square meters. Unlike some GPS mowers, the i105 is equipped with a forward-facing HD camera with a 180-degree field of vision. So while it relies on satellites for positioning, it’ll have enough sense to stop before it clatters into an obstacle. It’s not packing sophisticated computer vision smarts, but it’ll play safe lest it charge into a pet, inattentive family member or prized flower.

Jun 13, 2024

Physicists use machine learning techniques to search for exotic-looking collisions that could indicate new physics

Posted in categories: particle physics, robotics/AI

One of the main goals of the LHC experiments is to look for signs of new particles, which could explain many of the unsolved mysteries in physics. Often, searches for new physics are designed to look for one specific type of new particle at a time, using theoretical predictions as a guide. But what about searching for unpredicted—and unexpected—new particles?
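
The excerpt does not spell out the technique, but one common way to hunt for unpredicted signals is unsupervised anomaly detection: learn what ordinary collisions look like and flag events the model reconstructs poorly. The sketch below is a generic toy illustration of that idea using PCA reconstruction error on synthetic event features; it is not the actual method used by the LHC collaborations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-event features (e.g. jet kinematics).
# "Ordinary" events lie close to a 3-dimensional correlated subspace...
latent = rng.normal(size=(10_000, 3))
mixing = rng.normal(size=(3, 8))
background = latent @ mixing + 0.1 * rng.normal(size=(10_000, 8))

# ...while a handful of injected outliers break those correlations.
anomalies = rng.normal(0.0, 3.0, size=(20, 8))
events = np.vstack([background, anomalies])

# Fit a linear "model of normality" (PCA) and keep the leading directions.
mean = events.mean(axis=0)
centered = events - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]

# Anomaly score = how poorly an event is reconstructed from those directions.
projected = centered @ components.T @ components
scores = np.linalg.norm(centered - projected, axis=1)

# Flag the most poorly reconstructed events for closer inspection.
threshold = np.quantile(scores, 0.998)
flagged = np.where(scores > threshold)[0]
print(f"{len(flagged)} candidate events flagged out of {len(events)}")
```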

Jun 13, 2024

Ultrafast Photonic Chip Transforms Machine Vision and Edge Intelligence

Posted in category: robotics/AI

Image: the new intelligent photonic sensing-computing chip that can process, transmit and reconstruct images of a scene within nanoseconds. Credit: Wei Wu, Tsinghua University.

Researchers have created a photonic chip capable of processing images at nanosecond speeds, significantly faster than current methods. This chip enhances edge intelligence by integrating AI analysis directly into optical processing, potentially transforming applications such as autonomous driving.

Researchers have demonstrated a new intelligent photonic sensing-computing chip that can process, transmit and reconstruct images of a scene within nanoseconds. This advance opens the door to extremely high-speed image processing that could benefit edge intelligence for machine vision applications such as autonomous driving, industrial inspection and robotic vision.

Jun 13, 2024

Researchers ask industry for ways to guarantee the performance and accuracy of artificial intelligence (AI)

Posted in categories: mathematics, military, robotics/AI

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued a broad agency announcement (HR001124S0029) for the Artificial Intelligence Quantified (AIQ) project.

AIQ seeks to find ways of assessing and understanding the capabilities of AI to enable mathematical guarantees on performance. Successful use of military AI requires ensuring safe and responsible operation of autonomous and semi-autonomous technologies.

Jun 13, 2024

New Attack Technique ‘Sleepy Pickle’ Targets Machine Learning Models

Posted in category: robotics/AI

Learn about Sleepy Pickle, a new threat exploiting machine learning models via pickle files. Protect your data now!
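
The excerpt is only a teaser, but the underlying mechanism is well documented: Python’s pickle format can execute arbitrary code at load time, which is what makes pickle-based model files an attractive carrier for attacks of this kind. The snippet below is a benign, generic demonstration of that deserialization behavior, not the Sleepy Pickle technique itself.

```python
import pickle


class HarmlessPayload:
    """Benign stand-in for a malicious object embedded in a model file.

    pickle calls __reduce__ when loading, so whatever callable it returns
    is executed by the victim's interpreter. A real attack would invoke
    something far worse than print().
    """

    def __reduce__(self):
        return (print, ("code executed during unpickling!",))


# "Attacker" serializes the payload as if it were a model checkpoint.
blob = pickle.dumps(HarmlessPayload())

# "Victim" loads the file: the print call runs before any model code does.
pickle.loads(blob)

# Mitigation: treat pickle files as executable code. Only load files from
# trusted sources, or prefer formats such as safetensors that store
# tensors without arbitrary object deserialization.
```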
