
Science fiction writers envisioned the technology decades ago, and startups have been working to turn it into an actual product for at least 10 years.

Today, Mojo Vision announced that it has done just that—put 14K pixels-per-inch microdisplays, wireless radios, image sensors, and motion sensors into contact lenses that fit comfortably in the eyes. The first generation of Mojo Lenses is powered wirelessly, though future generations will have batteries on board. A small external pack, besides providing power, handles sensor data and sends information to the display. The company is calling the technology Invisible Computing, and company representatives say it will get people’s eyes off their phones and back onto the world around them.

The first application, says Steve Sinclair, senior vice president of product and marketing, will likely be for people with low vision—providing real-time edge detection and dropping crisp lines around objects. In a demonstration last week at CES 2020, I used a working prototype (albeit by squinting through the lens rather than putting it into my eyes), and the device highlighted shapes in bright green as I looked around a dimly lit room.
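The article does not describe Mojo Vision’s actual image-processing pipeline, so the following is only a rough sketch of the general idea it reports (detecting edges in a camera frame and overlaying them in bright green), using OpenCV’s Canny detector; a webcam stands in for the lens’s image sensor.

```python
# Rough sketch of generic edge highlighting (not Mojo Vision's actual pipeline):
# detect edges in each camera frame and overlay them in bright green.
import cv2
import numpy as np

def highlight_edges(frame_bgr: np.ndarray) -> np.ndarray:
    """Return the frame with detected edges drawn in bright green."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # A light blur suppresses sensor noise before edge detection,
    # which helps in dimly lit scenes like the demo room.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    overlay = frame_bgr.copy()
    overlay[edges > 0] = (0, 255, 0)  # paint every edge pixel green (BGR)
    return overlay

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # webcam stands in for the lens's image sensor
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("edge highlight", highlight_edges(frame))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```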

We’re at a fascinating point in the discourse around artificial intelligence (AI) and all things “smart”. At one level, we may be reaching “peak hype”, with breathless claims and counterclaims about the potential societal impacts of disruptive technologies. Everywhere we look, there’s earnest discussion of AI and its exponentially advancing sisters – blockchain, sensors, the Internet of Things (IoT), big data, cloud computing, 3D / 4D printing, and hyperconnectivity. At another level, for many, it is worrying to hear politicians and business leaders talking with confidence about the transformative potential and societal benefits of these technologies in applications ranging from smart homes and cities to intelligent energy and transport infrastructures.


Researchers reverse stroke damage in animal model using stem cell exosomes.


Expanding upon previous work that developed a treatment using a type of extracellular vesicles known as exosomes—small fluid-filled structures created by stem cells—investigators at the University of Georgia (UGA) present brain-imaging data for a new stroke treatment that supported full recovery in swine modeled with the same pattern of neurodegeneration seen in humans with severe stroke. Findings from the new study were published recently in Translational Stroke Research, in an article titled “Neural Stem Cell Extracellular Vesicles Disrupt Midline Shift Predictive Outcomes in Porcine Ischemic Stroke Model.”

Amazingly, it has been almost a quarter-century since the first drug was approved for stroke. Even more striking, that single drug remains the only one approved today. A greater understanding of the molecular mechanisms that underlie stroke should therefore lead to new therapies that dramatically improve patient outcomes.

The researchers at UGA’s Regenerative Bioscience Center report the first observational evidence during a midline shift—when the brain is being pushed to one side—to suggest that a minimally invasive, nonoperative exosome treatment can influence the repair of damage that follows a severe stroke.

London, 15th January 2020 – Biotech LIfT BioSciences today announced a further major investment in the company, supporting its mission to develop the first curative and affordable cell therapy for all solid tumours. The investors included Jonathan Milner, a leading biotech ‘super-angel’ and earlier-stage investor in LIfT; Kizoo Technology Ventures, a leading early-stage investor in breakthrough technologies; and Downing Ventures, a leading London-based investor.

Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated.

The peaks are known as AI summers, and the troughs AI winters.

The 2010s were arguably the hottest AI summer on record, with tech giants repeatedly touting AI’s abilities.


AI researchers admit that the hype around AI may be cooling off once again.

Biological organisms have certain useful attributes that synthetic robots do not, such as the abilities to heal, adapt to new situations, and reproduce. Yet molding biological tissues into robots or tools has been exceptionally difficult to do: Experimental techniques, such as altering a genome to make a microbe perform a specific task, are hard to control and not scalable.

Now, a team of scientists at the University of Vermont and Tufts University in Massachusetts has used a supercomputer to design novel lifeforms with specific functions, then built those organisms out of frog cells.

The new, AI-designed biological bots crawl around a petri dish and heal themselves. Surprisingly, the biobots also spontaneously self-organize and clear their dish of small trash pellets.
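The article only says a supercomputer designed the lifeforms; the underlying approach is an evolutionary search over simulated body plans, which are then built from frog cells. The sketch below is a toy illustration of that kind of search, with a small grid of cell types standing in for a body plan and a made-up fitness function standing in for the physics simulator used in the actual work.

```python
# Toy sketch of evolutionary design over a 2D grid of cells, loosely modeled on
# "evolve body plans in simulation, then build the best". The fitness function
# is a made-up stand-in for a physics simulator.
# Cell types: 0 = empty, 1 = passive cell, 2 = contractile cell.
import random

GRID = 6            # design space: GRID x GRID voxels
POP_SIZE = 50
GENERATIONS = 100

def random_design():
    return [[random.choice([0, 1, 2]) for _ in range(GRID)] for _ in range(GRID)]

def fitness(design):
    # Stand-in objective: reward contractile cells concentrated on one side,
    # a crude proxy for "pushes itself in one direction". A real pipeline
    # would score locomotion inside a physics simulation.
    left = sum(design[r][c] == 2 for r in range(GRID) for c in range(GRID // 2))
    right = sum(design[r][c] == 2 for r in range(GRID) for c in range(GRID // 2, GRID))
    body = sum(cell != 0 for row in design for cell in row)
    return (left - right) + 0.1 * body

def mutate(design):
    child = [row[:] for row in design]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] = random.choice([0, 1, 2])
    return child

population = [random_design() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]          # keep the best half
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```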

As the U.S. Army increasingly uses facial and object recognition to train artificially intelligent systems to identify threats, the need to protect its systems from cyberattacks becomes essential.

An Army project conducted by researchers at Duke University, led by electrical and computer engineering faculty members Dr. Helen Li and Dr. Yiran Chen, has made significant progress toward mitigating these types of attacks. Two members of the Duke team, Yukun Yang and Ximing Qiao, recently took first prize in the Defense category of the CSAW ‘19 HackML competition.

“Object recognition is a key component of future intelligent systems, and the Army must safeguard these systems from cyberattacks,” said MaryAnne Fields, program manager for intelligent systems at the Army Research Office. “This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the system is subtly altered to give incorrect answers. Safeguarding object recognition systems will ensure that future Soldiers will have confidence in the intelligent systems they use.”
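The quote describes backdoor attacks in which training data is subtly altered. As an illustration of that concept only (none of these details come from the Duke project), here is a minimal NumPy sketch of such data poisoning: a small trigger patch is stamped onto a fraction of the training images and their labels are switched to an attacker-chosen class, so a model trained on the set behaves normally on clean inputs but misclassifies anything carrying the trigger. The dataset shapes, poison rate, and target class are illustrative assumptions.

```python
# Minimal sketch of backdoor-style data poisoning (illustrative only):
# stamp a small trigger patch onto a fraction of training images and
# relabel them as an attacker-chosen target class.
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Return copies of (images, labels) with a backdoor trigger injected."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0    # trigger: bright 3x3 square in the bottom-right corner
    labels[idx] = target_class     # mislabel triggered images as the target class
    return images, labels

# Example with random stand-in data: 1,000 32x32 grayscale images, 10 classes.
clean_x = np.random.rand(1000, 32, 32).astype(np.float32)
clean_y = np.random.randint(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y)
print(int(np.sum(poisoned_y != clean_y)), "labels were flipped to the target class")
```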