A carbon nanotube coating (shown in the clear jacket) replaces the tin-coated copper braid that serves as the cable’s outer conductor, ordinarily its heaviest component. Created by researchers at Rice University, the coating was tested by a collaborative group including NIST, which has more than 10 years of expertise in characterizing and measuring nanotubes. The coating, no more than 90 microns (millionths of a meter) thick, cut total cable mass by 50 percent (useful for lowering the weight of electronics in aerospace vehicles) and withstood 10,000 bending cycles with no effect on performance. And even though the coating is microscopically thin, the cable transmitted data about as well as ordinary cables, thanks to the nanotubes’ favorable electrical properties.

Credit: J. Fitlow/Rice University

Read more

Adrienne Porter Felt, Staff Software Engineer, Google Chrome.

Everyone wants to build software that’s both usable and secure, yet the world is full of software that falters at this intersection. How does this happen? I experienced the disconnect firsthand, when the Chrome security team redid Chrome’s security UI to conform to best practices for usable security. In the process, we learned how hard it is to actually adhere to oft-cited wisdom about usable security when faced with real-world constraints and priorities. With a set of case studies, I’ll illustrate the limitations we encountered when trying to apply common wisdom to a browser with more than a billion users—and discuss what has actually worked for us in practice, which might work for other practitioners too.

Sign up to find out more about Enigma conferences:
https://www.usenix.org/conference/enigma2016#signup

Watch all Enigma 2016 videos at:

Researchers at Harvard are working to identify the brain processes that make humans so good at recognising patterns. Their ultimate goal is to develop biologically inspired computer systems for smarter AI. Computers inspired by the human brain could be used to detect network intrusions, read MRI images, and even drive cars.

Read more

This just in: Aipoly Vision, a free AI app that runs on your iPhone/iPad (Android coming) and recognizes objects and colors, is now live on the App Store, Aipoly Inc. co-founder Alberto Rizzoli just told me in an email.

Of course, I immediately downloaded the app, launched it on my iPhone 6s+, and tested it. It works spectacularly. Its voice names objects or colors in real time as I walk around, and it also displays objects’ names. I am blown away. Here’s a sample:

Informal Aipoly Vision object-recognition test (credit: A. Angelica/Aipoly Inc.)

Aside from a few minor glitches (the swivel chair was also named “office” and “padded stool,” and a banana was also named “bug” and “handle,” though I taught it the right name using its “pencil” tool), Aipoly Vision was astoundingly accurate. Colors were a problem with small objects because of backgrounds, but color recognition works OK for most large objects, walls, and floors, the company says.
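Under the hood, an app like this presumably runs a tight capture-classify-announce loop. Here’s a minimal sketch of that loop (my own assumption about how such an app works, not Aipoly’s actual implementation), with the camera, classifier, and speech output stubbed out; all function names are hypothetical:

```python
import time

def classify_frame(frame):
    """Stub for an on-device image classifier; a real app would run a
    mobile-friendly neural network here and return (label, confidence)
    pairs sorted by confidence."""
    return [("banana", 0.91), ("handle", 0.04)]

def announce(label):
    """Stub for text-to-speech (on iOS this would likely use
    AVSpeechSynthesizer)."""
    print(f"Speaking: {label}")

def run(camera_frames, min_confidence=0.5, interval=1.0):
    """Name the most confident object in each frame, aloud and on screen."""
    for frame in camera_frames:
        label, confidence = classify_frame(frame)[0]
        if confidence >= min_confidence:
            announce(label)                # the voice names the object...
            print(f"On screen: {label}")   # ...and the name is displayed
        time.sleep(interval)               # throttle to a comfortable pace

run(camera_frames=[object()])  # one dummy "frame" for demonstration
```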

Not mentioned is that such roads would be easier to replace than conventional roads, which have to be repaved.

The minister told a conference of transport authorities last week that the tenders for the “Positive Energy” initiative had already been issued and the tests on the panels would begin in the spring.

According to France’s Agency of Environment and Energy Management, 4m of solarised road is enough to supply one household’s electricity needs, apart from heating, and one kilometre will light a settlement with 5,000 inhabitants.

So the maximum effect of the programme, which reportedly aims to install 1,000 kilometres of panels, could be to furnish 5 million people with electricity, or about 8% of the French population.
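The arithmetic behind that estimate is quick to check (the 1,000 km programme target is an outside figure, widely reported at the time but not stated above):

```python
# The article's own figures; the 1,000 km target is an assumption.
km_planned = 1_000               # assumed programme target, in km
people_per_km = 5_000            # "one kilometre will light ... 5,000 inhabitants"
france_population = 66_000_000   # approximate, mid-2010s

people_supplied = km_planned * people_per_km   # 5,000,000
share = people_supplied / france_population    # ~0.076
print(f"{people_supplied:,} people, about {share:.0%} of the population")
```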

An experiment by University of Washington researchers is setting the stage for advances in mind-reading technology. Using brain implants and sophisticated software, the researchers can now predict what their subjects are seeing with startling speed and accuracy.

How we view a two-dimensional image on a page or computer screen and then transform it into something our minds can immediately recognize is a neurological process that remains mysterious to scientists. To learn more about how our brains perform this task, and to see whether computers can collect and predict what a person is seeing in real time, a research team led by University of Washington neuroscientist Rajesh Rao and neurosurgeon Jeff Ojermann demonstrated that it’s possible to decode human brain signals at nearly the speed of perception. The details of their work can be found in a new paper in PLOS Computational Biology.

The team sought the assistance of seven patients undergoing treatment for epilepsy. Medications weren’t helping alleviate their seizures, so these patients were given temporary brain implants, and electrodes were used to pinpoint the focal points of their seizures. The UW researchers saw this as an opportunity to perform their experiment. “They were going to get the electrodes no matter what,” noted Ojermann in a UW NewsBeat article. “We were just giving them additional tasks to do during their hospital stay while they are otherwise just waiting around.”
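To give a sense of what “decoding” means here, below is a minimal sketch of this kind of pipeline: classifying short windows of multi-channel brain recordings by the category of image being viewed. The synthetic data, the two per-channel features, and the linear classifier are my own illustrative choices, not the exact method of the PLOS paper:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 32, 300   # hypothetical recording
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                 # e.g., face vs. house image

# Two simple per-channel features per trial: mean evoked amplitude and
# signal power (a crude stand-in for broadband spectral change).
amplitude = X_raw.mean(axis=2)
power = (X_raw ** 2).mean(axis=2)
X = np.hstack([amplitude, power])                # (trials, 2 * channels)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```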

Read more

Illustration of a particle (red sphere) trapped by the 3D trapping node created by two superimposed, orthogonal (at right angles), standing surface acoustic waves and induced acoustic streaming (credit: Carnegie Mellon University)

A team of researchers at three universities has developed a way to use “acoustic tweezers,” which employ ultrasonic surface acoustic waves (SAWs) to trap and manipulate micrometer-scale particles and biological cells (see “Acoustic tweezers manipulate cellular-scale objects with ultrasound”), to non-invasively pick up and move single cells along three mutually orthogonal axes of motion (three dimensions).

The new 3D acoustic tweezers can pick up single cells or entire cell assemblies and deliver them to desired locations to create 2D and 3D cell patterns, or print the cells into complex shapes, a promising new method for “3D bioprinting” of biological tissues, the researchers say in an open-access paper in the Proceedings of the National Academy of Sciences (PNAS).
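The in-plane geometry of the trap is easy to picture: each standing wave has pressure nodes every half wavelength, and superimposing two orthogonal waves places trap sites at the intersections of the two sets of node lines. The sketch below computes that grid (my own simplification with a made-up wavelength; the third, vertical axis of control, which the researchers attribute to induced acoustic streaming, is not modeled):

```python
import numpy as np

wavelength = 300e-6          # hypothetical SAW wavelength (300 micrometers)

# Pressure nodes of one standing wave: cos(2*pi*x / wavelength) = 0
# => x = (n + 1/2) * wavelength / 2
n = np.arange(4)
node_positions = (n + 0.5) * wavelength / 2

# Trap sites are intersections of the x-wave's and y-wave's node lines:
# a square grid with half-wavelength spacing.
traps = [(x, y) for x in node_positions for y in node_positions]
print(f"{len(traps)} trap sites, spacing {wavelength / 2 * 1e6:.0f} um")
```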

Read more