
Chip maker Intel has been chosen to lead a new initiative from DARPA, the U.S. military’s research wing, aimed at improving cyber defenses against deception attacks on machine learning models.

Machine learning is a kind of artificial intelligence that allows systems to improve over time with new data and experience. One of its most common use cases today is object recognition: taking a photo and describing what’s in it. That can help people with impaired vision know what’s in a photo, for example, but it can also be used by other computers, such as autonomous vehicles, to identify what’s on the road.

But deception attacks, although rare, can fool machine learning algorithms. Subtle changes to real-world objects, such as a small sticker placed on a stop sign, can, in the case of a self-driving vehicle, have disastrous consequences.
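To make the idea concrete, the sketch below shows one well-known form of deception attack, the fast gradient sign method (FGSM), against a toy PyTorch classifier. The model, image shapes and epsilon value are illustrative assumptions for this sketch, not details from DARPA's program.

```python
# A minimal FGSM sketch, assuming a toy PyTorch classifier and random data.
# With a randomly initialized model the prediction may or may not flip;
# the point is the mechanism: tiny, targeted pixel changes.
import torch
import torch.nn as nn

# Toy classifier: flattens a 3x32x32 "photo" and predicts one of 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in photo
label = torch.tensor([3])                             # its true class

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # small enough to be barely visible to a person
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The perturbation stays within a small bound per pixel, which is why such changes can be nearly invisible to humans while still misleading a model.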



Technology has many beneficial effects on modern life, and one of them is to prolong our lifespan by advancing medicine. In the past few years, new techniques such as artificial intelligence, robots and wearable tech have been used to improve the quality of our healthcare system, and even newer innovations such as flying vehicles and brain-computer interfaces are also considered valuable to the field. In this article, we will first discuss how these new technologies will shape future healthcare, and then address some of the problems we may soon face.

Focus is on Physical Sciences Research and Management of Complex Systems

WASHINGTON, D.C. — Today, the U.S. Department of Energy (DOE) announced a plan to provide up to $30 million for advanced research in machine learning (ML) and artificial intelligence (AI) for both scientific investigation and the management of complex systems.

The initiative encompasses two separate topic areas. One focuses on the development of ML and AI for predictive modeling and simulation in research across the physical sciences. ML and AI are thought to offer promising new alternatives to traditional programming methods for computer modeling and simulation.
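One common way ML substitutes for traditional simulation is the surrogate model: a cheap learned approximation trained on outputs of an expensive solver. The sketch below illustrates that pattern; the toy "simulator," scikit-learn model and hyperparameters are assumptions for illustration, not part of the DOE announcement.

```python
# A minimal surrogate-model sketch, assuming scikit-learn and a toy solver.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    """Stand-in for a slow physics solver: here just a damped oscillation."""
    return np.exp(-x) * np.sin(4 * x)

# Run the "real" simulation at a limited number of sample points.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, size=(200, 1))
y_train = expensive_simulation(x_train).ravel()

# Fit a small neural-network surrogate to those results.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(x_train, y_train)

# The surrogate now answers new queries far faster than rerunning the solver.
x_new = np.array([[0.5], [1.5], [2.5]])
print("surrogate :", surrogate.predict(x_new))
print("simulation:", expensive_simulation(x_new).ravel())
```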

Artificial intelligence has always been about replicating human interaction and behavior. In recent years, the technology has surpassed what was initially thought possible, with countless examples of AI and related technologies solving problems around the world.

Think about this: Garry Kasparov once stated that he would never lose a game of chess to a computer. For a long time, that seemed like a claim that would withstand all tests.

Then came 1996, however, when IBM’s Deep Blue won its first game against Garry Kasparov, and in 1997 the computer beat the grandmaster at his own game over a full match.

These days, neural networks, deep learning and all types of sensors allow AI to be used in healthcare, to operate self-driving cars and to tweak our photos on Instagram.

In the future, the ability to learn, to emulate the creative process and to self-organize may give rise to previously unimagined opportunities and unprecedented threats.


When a computer beat a human at chess some 20 years ago, it marked the dawn of artificial intelligence as we know it.

As we enter our third decade in the 21st century, it seems appropriate to reflect on the ways technology developed and note the breakthroughs that were achieved in the last 10 years.

The 2010s saw IBM’s Watson win at Jeopardy!, ushering in mainstream awareness of machine learning, and DeepMind’s AlphaGo defeat the world’s best Go players. It was the decade in which industrial tools like drones, 3D printers, genetic sequencing and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation and deepfakes came online.

For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.

Tiny biohybrid robots on the micrometer scale can swim through the body and deliver drugs to tumors or provide other cargo-carrying functions. The natural environmental sensing tendencies of bacteria mean they can navigate toward certain chemicals or be remotely controlled using magnetic or sound signals.

To be successful, these tiny biological robots must be made of materials that can evade clearance by the body’s immune system. They also have to be able to swim quickly through viscous environments and penetrate tissue to deliver their cargo.

In a paper published this week in APL Bioengineering, from AIP Publishing, researchers fabricated biohybrid bacterial microswimmers by combining a genetically engineered E. coli MG1655 substrain with nanoerythrosomes, small structures made from red blood cells.

As capable as robots are, the animals they tend to be modeled on are always much, much better. That’s partly because it’s difficult to learn how to walk like a dog directly from a dog, but this research from Google’s AI labs makes it considerably easier.

The goal of this research, a collaboration with UC Berkeley, was to find a way to efficiently and automatically transfer “agile behaviors” like a light-footed trot or spin from their source (a good dog) to a quadrupedal robot. This sort of thing has been done before, but as the researchers’ blog post points out, the established training process can often “require a great deal of expert insight, and often involves a lengthy reward tuning process for each desired skill.”

That doesn’t scale well, naturally, but the manual tuning is necessary to make sure the animal’s movements are approximated well by the robot. Even a very doglike robot isn’t actually a dog, and the way a dog moves may not be exactly the way the robot should move, leading the latter to fall down, lock up or otherwise fail.
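The core idea behind this style of imitation learning is to reward the robot for tracking a reference motion recorded from a real dog. The sketch below shows one plausible tracking reward; the joint values, weight and exponential form are illustrative assumptions, not the exact reward used in the Google/Berkeley work.

```python
# A minimal motion-imitation reward sketch, assuming retargeted dog joint
# angles as the reference. Values and weighting are illustrative only.
import numpy as np

def imitation_reward(robot_joints, reference_joints, weight=5.0):
    """Higher reward the more closely the robot's pose matches the dog's."""
    error = np.sum((robot_joints - reference_joints) ** 2)
    return float(np.exp(-weight * error))

# One frame of retargeted dog motion (reference) vs. the robot's current pose.
reference = np.array([0.10, -0.45, 0.90, 0.12, -0.40, 0.85])  # joint angles, radians
robot_ok  = reference + 0.02   # tracking the reference closely
robot_bad = reference + 0.40   # badly off the reference

print("close tracking:", imitation_reward(robot_ok, reference))
print("poor tracking: ", imitation_reward(robot_bad, reference))
```

A reward shaped this way lets a reinforcement learning policy discover how the robot's own body can best reproduce the animal's motion, which is what reduces the need for per-skill manual reward tuning.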

A new programme from the US Defense Advanced Research Projects Agency (DARPA) aims to address a key weakness of autonomous and semi-autonomous land systems: the need for active illumination to navigate in low-light conditions.

Unmanned systems rely on active illumination — anything that emits light or electromagnetic radiation, such as light detection and ranging (LIDAR) systems — to navigate at night or underground.

However, according to Joe Altepeter, programme manager in DARPA’s Defense Sciences Office, this approach creates significant security concerns, as such emissions could be detected by potential adversaries.