
While many futurists are generalists, there is also a need for foresight professionals centered on specific fields. Perhaps no area is more in need of innovative outlooks for the future than healthcare. With rising costs, aging populations and personnel shortages, the challenges are many. But so are the opportunities to employ emerging technologies. In the first part of a two-part series, host Mark Sackler discusses these challenges with two nursing Ph.D.s, Oriana Beaudet and Dan Pesut. Part One addresses the need for foresight both in nursing specifically and in healthcare in general, as well as the global challenges of an aging population. Part Two will drill down to individual issues, including automation, robotics and artificial intelligence as caregiving tools for the future.

AMOLF researchers and their collaborators from the Advanced Science Research Center (ASRC/CUNY) in New York have created a nanostructured surface capable of performing on-the-fly mathematical operations on an input image. This discovery could boost the speed of existing image-processing techniques and lower their energy usage. The work enables ultrafast object detection and augmented reality applications. The researchers publish their results today in the journal Nano Letters.

Image processing is at the core of several rapidly growing technologies, such as augmented reality, autonomous driving and more general object recognition. But how does a computer find and recognize an object? The initial step is to understand where its boundaries are, so edge detection in an image becomes the starting point for image recognition. Edge detection is typically performed either digitally, using integrated electronic circuits that impose fundamental speed limitations and high energy consumption, or in an analog fashion, which requires bulky optics.
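Whether done optically or electronically, edge detection amounts to convolving the image with a kernel that responds to local intensity changes. As a minimal illustrative sketch (not the AMOLF optical method), here is a digital version using a 3x3 Laplacian kernel in plain NumPy:

```python
# Minimal digital edge detection: convolve with a Laplacian kernel,
# which is zero on flat regions and large where intensity changes.
import numpy as np

def detect_edges(image):
    """Return an edge map by sliding a 3x3 Laplacian kernel over the image."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]])
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return np.abs(out)

# A 6x6 image with a bright square in the middle: the edge map
# responds at the square's border, not in the flat regions.
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0
edges = detect_edges(img)
print(edges)
```

In a real pipeline this convolution runs over megapixel frames, which is exactly the workload the nanostructured surface performs optically, before the light ever reaches a sensor.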

“Cogito, ergo sum,” wrote René Descartes. Translation: “I think, therefore I am.”

What makes us, us? How is it that we’re able to look at a tree and see beauty, hear a song and feel moved, or take comfort in the smell of rain or the taste of coffee? How do we know we still exist when we close our eyes and lie in silence? To date, science doesn’t have an answer to those questions.

In fact, it doesn’t even have a unified theory. And that’s because we can’t simulate consciousness. All we can do is try to reverse-engineer it by studying living beings. Artificial intelligence, coupled with quantum computing, could solve this problem and provide the breakthrough insight scientists need to unravel the mysteries of consciousness. But first we need to take the solution seriously.

In the late ’90s, wildlife conservationists Zoe Jewell and Sky Alibhai were grappling with a troubling realization. The pair had been studying black rhino populations in Zimbabwe, and they spent a good deal of their time shooting the animals with tranquilizer darts and affixing radio collars around their necks. But after years of work, the researchers realized there was a major problem: Their technique, commonly used by all manner of wildlife scientists, seemed to be causing female rhinos to have fewer offspring.

The researchers published their findings in 2001, igniting a controversy in the conservation world. The problem, says Duke University professor of conservation ecology Stuart Pimm, is that being “collared” is extremely stressful for animals. “If you were walking through your neighborhood and suddenly a bunch of strange people came charging after you … and you got shot in the ass with a dart and woke up with something around your neck, I think you’d be in pretty bad shape too,” he says.

But Jewell and Alibhai had an idea. While working alongside the Shona tribe in Zimbabwe, they saw how the indigenous trackers were able to deduce an enormous amount of information about wildlife from animals’ footprints, including weight, sex, and species, all without getting anywhere close to the animals themselves. “We would go out with local game scouts, who were often expert trackers, and they would often laugh at us as we were listening to these signals coming from the collars,” Jewell says. “They would say to us, ‘All you need to do is look on the ground.’”

Learning something new — and quickly — may depend on the lesson’s difficulty level, according to a new study.

Flipped classrooms, room temperature, and later school-day start times are just a few of the countless interventions scientists have tested and some educators have implemented.

Now, scientists say they have cracked the code on the optimal level of difficulty to speed up learning. The team tested how the difficulty of training affects the rate of learning in a broad class of learning algorithms: artificial neural networks, the computer models thought to simulate learning in humans and animals.
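The summary above doesn't spell out the study's training procedure, but the core idea of keeping task difficulty matched to the learner's current performance can be sketched with a toy perceptron whose task difficulty adapts toward a target accuracy. The 85% target and all names below are illustrative assumptions, not details from the study:

```python
# Toy sketch of difficulty-matched training (illustrative only, not the
# study's actual setup): a perceptron classifies noisy 1-D samples, and the
# class separation (task difficulty) adapts to hold accuracy near a target.
import random

random.seed(0)

def train(target=0.85, steps=5000, lr=0.1):
    w = 0.0            # single weight of the linear classifier
    separation = 2.0   # distance between class means: larger = easier task
    acc = 0.5          # running accuracy estimate (exponential average)
    for _ in range(steps):
        label = random.choice([-1, 1])
        x = label * separation / 2 + random.gauss(0, 1)  # noisy sample
        pred = 1 if w * x > 0 else -1
        hit = pred == label
        acc = 0.99 * acc + 0.01 * hit
        # Adapt difficulty: make the task harder when the learner is above
        # the target accuracy, easier when it is below.
        separation *= 0.99 if acc > target else 1.01
        separation = max(0.1, separation)
        if not hit:                # perceptron update on errors only
            w += lr * label * x
    return w, acc

w, acc = train()
print(f"learned weight={w:.2f}, running accuracy={acc:.2f}")
```

The adaptive loop keeps the learner neither bored (errors too rare to drive updates) nor overwhelmed (errors too noisy to learn from), which is the trade-off the study quantifies.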