
An ‘Uncrashable’ Car? Luminar Says Its Lidar Can Get There

As a recent New York Times article highlighted, self-driving cars are taking longer to come to market than many experts initially predicted. Automated vehicles where riders can sit back, relax, and be delivered to their destinations without having to watch the road are continually relegated to the “not-too-distant future.”

There’s not just debate about when this driverless future will arrive; there’s also a lack of consensus on how we’ll get there, that is, on which technologies are the most efficient, safe, and scalable for taking us from human-driven to computer-driven vehicles (Tesla is the main outlier in this debate). The big players are lidar, cameras, ultrasonic sensors, and radar. Last week, one lidar maker showcased new technology that it believes will tip the scales.

California-based Luminar has built a lidar it calls Iris that not only has a longer range than existing systems but is also more compact; gone are the days of a big, bulky setup that all but takes over the car. Perhaps most importantly, the company aims to manufacture and sell Iris at a price point well below the industry standard.

Hospital on a chip

Circa 2009


The researchers expect to have a working prototype of the product in four years. “We are just at the beginning of this project,” Wang said. “During the first two years, our primary focus will be on the sensor systems. Integrating enzyme logic onto electrodes that can read biomarker inputs from the body will be one of our first major challenges.”

“Achieving the goal of the program is estimated to take nearly a decade,” Chrisey said.

Developing an effective interface between complex physiological processes and wearable devices could have a broader impact, Wang said. If the researchers are successful, they could pave the way for “autonomous, individual, on-demand medical care, which is the goal of the new field of personalized medicine,” he added.

Expanding human-robot collaboration in manufacturing

Machines and robots undoubtedly make life easier. They carry out jobs with precision and speed, and, unlike humans, they do not need breaks because they never tire.

As a result, companies are looking to use them more and more in their operations to improve productivity and remove dirty, dangerous, and dull tasks.

However, there are still many tasks in manufacturing that require dexterity, adaptability, and flexibility.

A vision-based robotic system for 3D ultrasound imaging

Ultrasound imaging techniques have proved to be highly valuable tools for diagnosing a variety of health conditions, including peripheral artery disease (PAD). PAD, one of the most common diseases among the elderly, entails the blocking or narrowing of peripheral blood vessels, which limits the supply of blood to specific areas of the body.

Ultrasound imaging methods are among the most popular means of diagnosing PAD due to their many advantageous characteristics. In fact, unlike other imaging methods, such as computed tomography angiography and magnetic resonance angiography, ultrasound imaging is non-invasive, low-cost, and radiation-free.

Most existing ultrasound imaging techniques are designed to capture two-dimensional images in real time. While this can be helpful in some cases, their inability to collect three-dimensional information reduces the reliability of the data they gather and increases their sensitivity to variations in how individual physicians use a given technique.
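
One common way to recover that third dimension is to track the ultrasound probe and fuse the stream of 2D frames into a voxel volume. The snippet below is a minimal sketch of that idea under simplifying assumptions (a known image-to-world pose for every frame, nearest-voxel averaging); it illustrates the general technique, not the specific system described above.

```python
# Minimal sketch: compounding tracked 2D ultrasound frames into a 3D volume.
# Assumes each frame comes with a known 4x4 image-to-world pose (hypothetical input).
import numpy as np

def compound_volume(frames, poses, pixel_spacing, voxel_size, vol_shape):
    """frames: list of (H, W) grayscale images; poses: list of 4x4 transforms."""
    vol_sum = np.zeros(vol_shape, dtype=np.float64)
    vol_cnt = np.zeros(vol_shape, dtype=np.int64)
    for img, T in zip(frames, poses):
        h, w = img.shape
        # Pixel grid in the frame's own plane (z = 0), in metric units.
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pts = np.stack([u * pixel_spacing, v * pixel_spacing,
                        np.zeros_like(u, dtype=float),
                        np.ones_like(u, dtype=float)], axis=-1).reshape(-1, 4)
        world = pts @ T.T                              # map pixels into world space
        idx = np.round(world[:, :3] / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(vol_shape)), axis=1)
        idx, vals = idx[inside], img.reshape(-1)[inside]
        # Accumulate intensities and counts, then average overlapping samples.
        np.add.at(vol_sum, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
        np.add.at(vol_cnt, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return np.where(vol_cnt > 0, vol_sum / np.maximum(vol_cnt, 1), 0.0)
```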

Google Cloud launches Vertex AI, a new managed machine learning platform

At Google I/O today Google Cloud announced Vertex AI, a new managed machine learning platform that is meant to make it easier for developers to deploy and maintain their AI models. It’s a bit of an odd announcement at I/O, which tends to focus on mobile and web developers and doesn’t traditionally feature a lot of Google Cloud news, but the fact that Google decided to announce Vertex today goes to show how important it thinks this new service is for a wide range of developers.

The launch of Vertex is the result of quite a bit of introspection by the Google Cloud team. “Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”
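
The pitch is one set of managed APIs for training, deploying, and monitoring models. As a rough, hypothetical sketch of what deployment looks like with the Vertex AI Python SDK (the project ID, bucket path, and serving container below are placeholders, not anything Google announced):

```python
# Hypothetical sketch of deploying an already-trained model with the Vertex AI SDK.
# Project, region, bucket, and container image below are placeholder values.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Register the trained model artifacts (e.g. a TensorFlow SavedModel in Cloud Storage).
model = aiplatform.Model.upload(
    display_name="demo-classifier",
    artifact_uri="gs://my-bucket/models/demo-classifier/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)

# Create a managed endpoint and route traffic to the uploaded model.
endpoint = model.deploy(machine_type="n1-standard-4")

# Request an online prediction from the deployed endpoint.
print(endpoint.predict(instances=[[0.1, 0.2, 0.3, 0.4]]))
```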

Did Chinese scientists just bring down an unmanned plane with an electromagnetic pulse weapon?

Can this be true?


An unmanned aircraft was brought down by a powerful electromagnetic pulse in what could be China’s first reported test of an advanced new weapon.

A paper published in the Chinese journal Electronic Information Warfare Technology did not give details of the timing and location of the experiment, which are classified, but it may be the country’s first openly reported field test of an electromagnetic pulse (EMP) weapon.

China is racing to catch up in the field after the US demonstrated a prototype EMP weapon that brought down 50 drones with one shot in 2019.

A standard for artificial intelligence in biomedicine

An international research team with participants from several universities, including FAU, has proposed a standardized registry for artificial intelligence (AI) work in biomedicine to improve the reproducibility of results and create trust in the use of AI algorithms in biomedical research and, in the future, in everyday clinical practice. The scientists presented their proposal in the journal Nature Methods.

In recent decades, new technologies have made it possible to develop a wide variety of systems that can generate huge amounts of biomedical data, for example in cancer research. At the same time, completely new possibilities have emerged for examining and evaluating these data using AI methods. AI algorithms in intensive care units, for example, can predict circulatory failure at an early stage based on large amounts of data from several monitoring systems, processing complex information from different sources simultaneously in a way that is far beyond human capabilities.

This great potential of AI systems has led to an unmanageable number of biomedical AI applications. Unfortunately, the corresponding reports and publications do not always adhere to best practices, and some provide only incomplete information about the algorithms used or the origin of the data. This makes assessment and comprehensive comparison of AI models difficult. The decisions of AIs are not always comprehensible to humans, and results are seldom fully reproducible. This situation is untenable, especially in clinical research, where trust in AI models and transparent research reports are crucial to increase the acceptance of AI algorithms and to develop improved AI methods for basic biomedical research.
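
To make that reporting gap concrete, a standardized registry entry essentially boils down to structured, machine-readable metadata about the model, the provenance of its data, and how it was evaluated. The sketch below is purely illustrative; the field names and values are hypothetical, not the schema proposed in Nature Methods.

```python
# Illustrative sketch of a structured registry entry for a biomedical AI model.
# Field names and values are hypothetical, not the Nature Methods proposal.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelRegistryEntry:
    model_name: str
    task: str                     # e.g. early warning of circulatory failure
    algorithm: str                # learning method / architecture used
    training_data_source: str     # provenance of the training data
    data_version: str             # exact dataset release or snapshot
    preprocessing: str            # how raw signals were cleaned and encoded
    evaluation_metric: str
    evaluation_result: float
    random_seed: int              # needed to reproduce training runs
    code_repository: str          # where the training/inference code lives

entry = ModelRegistryEntry(
    model_name="icu-circulatory-failure-v1",
    task="early warning of circulatory failure",
    algorithm="gradient-boosted trees on multi-monitor time series",
    training_data_source="single-centre ICU monitoring database (hypothetical)",
    data_version="2020-06 snapshot",
    preprocessing="5-minute resampling, per-signal z-score normalisation",
    evaluation_metric="AUROC",
    evaluation_result=0.90,
    random_seed=42,
    code_repository="https://example.org/icu-model (placeholder)",
)

print(json.dumps(asdict(entry), indent=2))  # machine-readable, comparable report
```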
