
Using artificial intelligence to help drones find people lost in the woods

A trio of researchers at Johannes Kepler University has used artificial intelligence to improve thermal-imaging camera searches for people lost in the woods. In their paper published in the journal Nature Machine Intelligence, David Schedl, Indrajit Kurmi and Oliver Bimber describe how they applied a deep learning network to the problem of people lost in the woods and how well it worked.

When people become lost in forests, search-and-rescue teams fly helicopters over the area where they are most likely to be found. In addition to simply scanning the ground below, rescuers use binoculars and thermal imaging cameras. The hope is that such cameras will highlight the difference between the body temperature of a person on the ground and that of their surroundings, making the person easier to spot. Unfortunately, this sometimes does not work as intended, either because vegetation hides the ground or because the sun heats the trees to a temperature similar to that of the lost person's body. In this new effort, the researchers sought to overcome these problems by using a deep learning application to improve the images.

The solution the team developed uses an AI application to process multiple images of a given area. They compare it to combining data from multiple radio telescopes, which allows several telescopes to operate as a single, much larger one. In like manner, their application combines multiple thermal images taken from a helicopter (or drone) into an image as if it had been captured by a camera with a much larger lens. The processed images have a much shallower depth of field: the treetops appear blurred, while people on the ground become much more recognizable. To train the AI system, the researchers had to create their own database of images, using drones to photograph volunteers on the ground in a wide variety of positions.
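The core synthetic-aperture idea can be sketched with a toy simulation. This is an illustrative simplification, not the authors' actual pipeline: frames taken from slightly different viewpoints are registered to the ground plane and averaged, so a target on the ground stays aligned and keeps its contrast, while occluders above the ground (the canopy) shift between frames due to parallax and get smeared out.

```python
import numpy as np

H, W, N = 64, 64, 9  # image size and number of simulated viewpoints

def render_frame(shift):
    """One simulated thermal frame; `shift` is the canopy parallax in pixels."""
    frame = np.zeros((H, W))
    frame[30:34, 30:34] = 1.0                 # warm person on the ground plane
    frame[10:14, 20 + shift] = 1.0            # canopy occluder shifts with viewpoint
    return frame

# Register all frames to the ground plane (here: identity) and average.
frames = [render_frame(s) for s in range(-4, 5)]
integral = np.mean(frames, axis=0)

# The ground target survives the average at full contrast; the canopy is
# spread over N positions, so its peak drops to roughly 1/N of the target's.
print(integral[31, 31], round(integral[10:14, 16:25].max(), 3))
```

Averaging the nine frames leaves the person's pixels at full intensity while the occluder's peak falls to about 1/9, which is the "treetops blurred, person sharp" effect described above.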

Extraterrestrial Languages

If we send a message into space, will extraterrestrial beings receive it? Will they understand?

The endlessly fascinating question of whether we are alone in the universe has always been accompanied by another, more complicated one: if there is extraterrestrial life, how would we communicate with it? In this book, Daniel Oberhaus leads readers on a quest for extraterrestrial communication. Exploring Earthlings’ various attempts to reach out to non-Earthlings over the centuries, he poses some not entirely answerable questions: If we send a message into space, will extraterrestrial beings receive it? Will they understand? What languages will they (and we) speak? Is there not only a universal grammar (as Noam Chomsky has posited), but also a grammar of the universe?

Oberhaus describes, among other things, a late-nineteenth-century idea to communicate with Martians via Morse code and mirrors; the emergence in the twentieth century of SETI (the search for extraterrestrial intelligence), CETI (communication with extraterrestrial intelligence), and finally METI (messaging extraterrestrial intelligence); the one-way space voyage of Ella, an artificial intelligence agent that can play cards, tell fortunes, and recite poetry; and the launching of a theremin concert for aliens. He considers media used in attempts at extraterrestrial communication, from microwave systems to plaques on spacecrafts to formal logic, and discusses attempts to formulate a language for our message, including the Astraglossa and two generations of Lincos (lingua cosmica).

AGI: How to Ensure Benevolence in Synthetic Superintelligence (Part III: Conclusion)

Devising an effective AGI value loading system should be of the utmost importance. Interlinking of enhanced humans with AGIs will bring about the Syntellect Emergence which could be considered the essence of the Cybernetic Singularity. Future efforts in programming and infusing machine morality will surely combine top-down, bottom-up and interlinking approaches. #AGI #FriendlyAI #Cybernetics #BenevolentAI #SyntheticIntelligence #CyberneticSingularity #Superintelligence


A simple solution to achieve this might be to combine select human minds (very liberal, loving, peaceful types) with brain-computer interfaces in a virtual environment. Work to raise an AGI that believes itself to be human, believes in self-sacrifice, and puts the good of others above its own. When this is achieved, let it sail through a virtual door to join humanity online.

Fable Studio unveils two AI-based virtual beings who can talk to you

Fable Studio has announced two new conversational AI virtual beings, or artificial people. Their names are Charlie and Beck, and they will be able to hold conversations as if they were real people.

The new characters are a blend of storytelling and artificial intelligence, a marriage that Fable is pioneering in the belief that virtual beings will become a huge market as people seek companionship and entertainment during the tough climate of the pandemic.

CEO Edward Saatchi believes that virtual beings are the start of something big. He organizes the Virtual Beings Summit, and this summer he noted that virtual beings companies — from Genies to AI Foundation — have raised more than $320 million.

Dodge Tomahawk | Fastest bike in the world 420 mph

(2020): In previous posts we have talked about supercars, hypercars, sports cars, and concept cars such as the Koenigsegg Jesko, SSC Tuatara, Audi AI Trail, and Bloodhound LSR. Today we will be talking about something interesting: the fastest bike in the world, the Dodge Tomahawk.

The parent company is DaimlerChrysler AG. The Tomahawk costs around $555,000. It runs on four wheels.

Left of Launch: Artificial Intelligence at the Nuclear Nexus

Popular media and policy-oriented discussions on the incorporation of artificial intelligence (AI) into nuclear weapons systems frequently focus on matters of launch authority—that is, whether AI, especially machine learning (ML) capabilities, should be incorporated into the decision to use nuclear weapons and thereby reduce the role of human control in the decisionmaking process. This is a future we should avoid. Yet while the extreme case of automating nuclear weapons use is high stakes, and thus essential to get right, there are many other areas of potential AI adoption into the nuclear enterprise that require assessment. Moreover, as the conventional military moves rapidly to adopt AI tools in a host of mission areas, the overlapping consequences for the nuclear mission space, including in nuclear command, control, and communications (NC3), may be underappreciated.

AI may be used in ways that do not directly involve or are not immediately recognizable to senior decisionmakers. These areas of AI application are far left of an operational decision or decision to launch and include four priority sectors: security and defense; intelligence activities and indications and warning; modeling and simulation, optimization, and data analytics; and logistics and maintenance. Given the rapid pace of development, even if algorithms are not used to launch nuclear weapons, ML could shape the design of the next-generation ballistic missile or be embedded in the underlying logistics infrastructure. ML vision models may undergird the intelligence process that detects the movement of adversary mobile missile launchers and optimize the tipping and cueing of overhead surveillance assets, even as a human decisionmaker remains firmly in the loop in any ultimate decisions about nuclear use. Understanding and navigating these developments in the context of nuclear deterrence and the understanding of escalation risks will require the analytical attention of the nuclear community and likely the adoption of risk management approaches, especially where the exclusion of AI is not reasonable or feasible.

Virus detection using nanoparticles and deep neural network–enabled smartphone system

Emerging and reemerging infections present an ever-increasing challenge to global health. Here, we report a nanoparticle-enabled smartphone (NES) system for rapid and sensitive virus detection. The virus is captured on a microchip and labeled with specifically designed platinum nanoprobes to induce gas bubble formation in the presence of hydrogen peroxide. The formed bubbles are controlled to make distinct visual patterns, allowing simple and sensitive virus detection using a convolutional neural network (CNN)-enabled smartphone system without any optical hardware attachment. We evaluated the developed CNN-NES for testing viruses such as hepatitis B virus (HBV), hepatitis C virus (HCV), and Zika virus (ZIKV). The CNN-NES was tested with 134 ZIKV- and HBV-spiked and ZIKV- and HCV-infected patient plasma/serum samples. The sensitivity of the system in qualitatively detecting viral-infected samples with a clinically relevant virus concentration threshold of 250 copies/ml was 98.97% with a confidence interval of 94.39 to 99.97%.
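The reported interval can be reproduced with an exact (Clopper–Pearson) binomial confidence interval. The paper's raw counts are not stated here, but 96 true positives out of 97 infected samples yields exactly the reported figures under this method, so the sketch below uses those inferred numbers purely for illustration:

```python
import math

def binom_sf(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided binomial confidence interval, stdlib only (bisection)."""
    def solve(f, target):
        lo, hi = 0.0, 1.0
        for _ in range(60):          # bisect to high precision; f is increasing in p
            mid = (lo + hi) / 2
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower bound: p where P(X >= x | p) = alpha/2
    lower = 0.0 if x == 0 else solve(lambda p: binom_sf(x, n, p), alpha / 2)
    # upper bound: p where P(X <= x | p) = alpha/2, i.e. P(X >= x+1 | p) = 1 - alpha/2
    upper = 1.0 if x == n else solve(lambda p: binom_sf(x + 1, n, p), 1 - alpha / 2)
    return lower, upper

x, n = 96, 97   # inferred counts: 96/97 = 98.97% sensitivity
lo, hi = clopper_pearson(x, n)
print(f"sensitivity = {x/n:.2%}, 95% CI = {lo:.2%} to {hi:.2%}")
```

Running this gives a sensitivity of 98.97% with a 95% interval of 94.39% to 99.97%, matching the abstract; the asymmetry of the interval is typical for proportions near 100%.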



Smartphone systems can also benefit from the recent unprecedented advancements in nanotechnology to develop diagnostic approaches. Catalysis is one of the most popular applications of nanoparticles because of their large surface-to-volume ratio and high surface energy (11–16). So far, numerous diagnostic platforms for cancer and infectious diseases have been developed by replacing enzymes, such as catalase, oxidase, and peroxidase, with nanoparticle structures (17–20). Here, we adopted the intrinsic catalytic properties of platinum nanoparticles (PtNPs) for gas bubble formation to detect viruses on-chip using a convolutional neural network (CNN)–enabled smartphone system.
