
DUGWAY, Utah — Army Green Berets from the 1st Special Forces Group conducted two weeks of hands-on experimentation with Project Origin Unmanned Systems at Dugway Proving Ground. Engineers from the U.S. Army DEVCOM Ground Vehicle Systems Center were on site to collect data on how these elite Soldiers used the systems and which technologies and autonomous behaviors they want.

Project Origin vehicles are the evolution of multiple Soldier Operational Experiments. This GVSC-led rapid prototyping effort allows the Army to conduct technology and autonomous behavior integration for follow-on assessments with Soldiers in order to better understand what Soldiers need from unmanned systems.

For the two-week experiment, Soldiers with the 1st Special Forces Group attended familiarization and new equipment training in order to develop Standard Operating Procedures for Robotic Combat Vehicles. The unit utilized these SOPs to conduct numerous mission-oriented exercises including multiple live-fire missions during the day and night.

The ‘Stepping Into the Future’ conference is coming up soon: April 23-24, to be exact. It’s online and it’s free (via Zoom). It will be fun and exciting; I hope you can all make it. Many of the synopses of the coming talks are already online (linked from the agenda), so check them out.



We are in the midst of a technological avalanche; to the surprise of many, AI has made the impossible possible. In a rapidly changing world, maintaining and expanding our capacity to innovate is essential.

http://www.homomimeticus.eu/
Part of the ERC-funded project Homo Mimeticus, the Posthuman Mimesis conference (KU Leuven, May 2021) promoted a mimetic turn in posthuman studies. In the first keynote lecture, Prof. Kevin Warwick (Coventry University) argued that our future will be as cyborgs: part human, part technology. Drawing on his own experiments, he explained how implant and electrode technology can be used to create cyborgs (including biological brains for robots), to enable human enhancement, and to diminish the effects of neural illness; in every case, the end result is to increase the abilities of the recipient. He surveyed areas in which such technology has already had a profound effect, a key element being the interface that links a biological brain directly to computer technology, and he looked ahead to future concepts of being, which for posthumans may involve a ‘click and play’ body philosophy, as well as new and much more powerful forms of communication.

HOM Videos is part of an ERC-funded project titled Homo Mimeticus: Theory and Criticism, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement n°716181)
Follow HOM on Twitter: https://twitter.com/HOM_Project.

Facebook: https://www.facebook.com/HOMprojectERC

Is neuromorphic computing the only way we can actually achieve general artificial intelligence?

Very likely yes, according to Gordon Wilson, CEO of Rain Neuromorphics, who is trying to recreate the human brain in hardware and “give machines all of the capabilities that we recognize in ourselves.”

Rain Neuromorphics has built an analog neuromorphic chip. In other words, it does not simulate a neural network; it is a neural network, implemented in analog rather than digital circuitry. It is a physical collection of neurons and synapses, as opposed to an abstraction of neurons and synapses. That means none of the ones and zeroes of traditional computing, but voltages and currents that represent the mathematical operations you want to perform.

Right now it’s 1,000X more energy efficient than neural networks running on conventional digital hardware, Wilson says, because it doesn’t have to spend all those computing cycles simulating the brain. The circuit is the neural network, which leads to extraordinary gains in both speed and energy efficiency, according to Wilson.
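
The article doesn’t detail Rain’s actual circuit, but the general principle behind analog in-memory computing is well known: store each weight as a conductance, apply inputs as voltages, and let Ohm’s and Kirchhoff’s laws produce the matrix-vector product as summed currents. Here is a minimal NumPy sketch of that idea; all values are illustrative, not Rain’s design:

```python
import numpy as np

# In a resistive crossbar, each weight is stored as a conductance G[i][j].
# Applying input voltages V along the rows makes each device pass a
# current I = G * V (Ohm's law), and the column wires sum those currents
# (Kirchhoff's current law). The column currents are therefore the
# matrix-vector product G @ V, computed in a single physical step.

rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances, in siemens
inputs = rng.uniform(0.0, 0.5, size=3)         # input voltages, in volts

column_currents = weights @ inputs             # amps: the "computation"

# A digital simulator would loop over multiply-accumulates; the circuit
# gets the same numbers "for free" from physics, which is where the
# claimed speed and energy gains come from.
print(column_currents)
```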

Links:
Rain Neuromorphics: https://rain.ai.
Episode sponsor: SMRT1 https://smrt1.ca/

Support TechFirst with $SMRT coins: https://rally.io/creator/SMRT/

Anthony J. Ferrante, Global Head of Cybersecurity and Senior Managing Director, FTI Consulting, Inc.

Artificial intelligence (AI) models are built with a type of machine learning called deep neural networks (DNNs), which are loosely modeled on neurons in the human brain. DNNs make machines capable of mimicking human behaviors like decision making, reasoning and problem solving. This presentation will discuss the security, ethical and privacy concerns surrounding this technology.

Learning Objectives:
1. Understand that the solution to adversarial AI will come from a combination of technology and policy.
2. Learn that coordinated efforts among key stakeholders will help to build a more secure future.
3. Learn how to share intelligence information in the cybersecurity community to build strong defenses.
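
The abstract doesn’t specify what “adversarial AI” looks like in practice, so here is a generic, hedged illustration of one classic attack, the fast gradient sign method, applied to a toy logistic-regression “network.” All weights and inputs are made up for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny "model": logistic regression with fixed, made-up weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.4, -0.3, 0.8])      # a benign input, correctly classified
y = 1.0                             # its true label

p = sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the *input*:
# dL/dx = (p - y) * w for this model.
grad_x = (p - y) * w

# Fast Gradient Sign Method: nudge every feature a small step epsilon
# in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {p:.3f}")                      # ~0.85
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")  # ~0.43, flipped
```

A small, targeted perturbation flips the model’s decision, which is the core threat the talk’s “combination of technology and policy” framing is meant to address.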

In a pilot human study, researchers from the University of Minnesota Medical School and Massachusetts General Hospital show it is possible to improve specific human brain functions related to self-control and mental flexibility by merging artificial intelligence with targeted electrical brain stimulation.

Alik Widge, MD, Ph.D., an assistant professor of psychiatry and member of the Medical Discovery Team on Addiction at the U of M Medical School, is the senior author of the research published in Nature Biomedical Engineering. The findings come from a human study conducted at Massachusetts General Hospital in Boston among 12 patients undergoing invasive monitoring for epilepsy, a procedure that places hundreds of tiny electrodes throughout the brain to record its activity and identify where seizures originate.

In this study, Widge collaborated with Massachusetts General Hospital’s Sydney Cash, MD, Ph.D., an expert in epilepsy research, and Darin Dougherty, MD, an expert in clinical brain stimulation. Together, they identified a brain region, the internal capsule, that improved patients’ mental function when stimulated with small amounts of electrical energy. That part of the brain is responsible for cognitive control, the process of shifting from one thought pattern or behavior to another, which is impaired in most mental illnesses.
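
The study’s actual algorithms aren’t described here, but the closed-loop idea of “merging AI with targeted stimulation” can be sketched: decode a control-related signal from the electrode recordings, and stimulate only when it drops. A minimal sketch under those assumptions, with the decoder stubbed out and all thresholds and effect sizes invented for illustration:

```python
import numpy as np

# Hedged sketch of a closed-loop controller (not the study's algorithm):
# watch a decoded estimate of cognitive control and trigger a brief
# stimulation pulse only when that estimate drops below a threshold.

rng = np.random.default_rng(1)

THRESHOLD = 0.4        # illustrative decoder output below which we stimulate
STIM_BOOST = 0.3       # illustrative effect size of one stimulation pulse

control_estimate = 0.6
for t in range(10):
    # Stand-in for a decoder that maps electrode recordings to a scalar
    # "cognitive control" estimate; here it just drifts with noise.
    control_estimate += rng.normal(0.0, 0.15)

    if control_estimate < THRESHOLD:
        control_estimate += STIM_BOOST   # deliver stimulation
        print(f"t={t}: estimate low -> stimulate (now {control_estimate:.2f})")
    else:
        print(f"t={t}: estimate {control_estimate:.2f}, no stimulation")
```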

The Italian privacy guarantor (GPDP) has imposed a fine of €20,000,000 on Clearview AI for implementing a biometric monitoring network in Italy without acquiring people’s consent.

This decision resulted from a proceeding launched in February 2021, following complaints about GDPR violations stemming directly from Clearview’s operations.

More specifically, the investigation revealed that the American facial recognition software company maintains a database of 10 billion facial images, including those of Italian citizens, scraped from public website profiles and online videos.

When Google unveiled its first autonomous cars in 2010, the spinning cylinder mounted on each roof really stood out. It was the vehicle’s light detection and ranging (LiDAR) system, which worked like light-based radar. Together with cameras and radar, LiDAR mapped the environment to help these cars avoid obstacles and drive safely.
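
The time-of-flight arithmetic behind a pulsed LiDAR like that one is simple: a pulse travels to the target and back, so range is the speed of light times the round-trip time, divided by two. A minimal sketch of that calculation (generic background, not the Berkeley chip design discussed below):

```python
# Time-of-flight ranging: a pulse travels out and back, so the target
# range is c * dt / 2.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(delta_t_seconds: float) -> float:
    """Target range implied by a round-trip pulse delay."""
    return C * delta_t_seconds / 2.0

# A return delayed by 200 nanoseconds corresponds to a target ~30 m away.
print(range_from_round_trip(200e-9))  # ~29.98 m
```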

Since then, inexpensive chip-based cameras and radar have moved into the mainstream for collision avoidance and autonomous highway driving. Yet LiDAR navigation systems remain unwieldy mechanical devices that cost thousands of dollars.

That may be about to change, thanks to a new type of high-resolution LiDAR chip developed by Ming Wu, professor of electrical engineering and computer sciences and co-director of the Berkeley Sensor and Actuator Center at the University of California, Berkeley. The new design appears Wednesday, March 9, in the journal Nature.

I’ve been trying to review and summarize Eliezer Yudkowsky’s recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra’s talking about and what’s going on.

The Open Philanthropy Project (“Open Phil”) is a big effective altruist foundation interested in funding AI safety. It’s got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. It says the resulting document is “informal” — but it’s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal. The report finds a 10% chance of “transformative AI” by 2031, a 50% chance by 2052, and an almost 80% chance by 2100.
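
Those three headline numbers are points on a cumulative forecast. As a crude reading aid (not the report’s biological-anchors methodology), you can linearly interpolate between them to get an implied probability for an in-between year:

```python
import numpy as np

# Treat the three published numbers as points on a cumulative
# distribution over arrival years and linearly interpolate between them.
years = np.array([2031, 2052, 2100])
cumulative_prob = np.array([0.10, 0.50, 0.80])

def implied_probability(year: float) -> float:
    """Linearly interpolated P(transformative AI by `year`)."""
    return float(np.interp(year, years, cumulative_prob))

print(implied_probability(2040))  # ~0.27 under straight-line interpolation
```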

Eliezer rejects their methodology and expects AI earlier (he doesn’t offer many numbers, but here he gives Bryan Caplan 50–50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays.