Brain-computer interfaces have become a practical (if limited) reality in the US. Synchron says it is the first company in the country to implant a BCI in a human patient. Doctors at New York’s Mount Sinai West implanted the company’s Stentrode in the motor cortex of a participant in Synchron’s COMMAND trial, which aims to gauge the usefulness and safety of BCIs for providing hands-free device control to people with severe paralysis. Ideally, technology like the Stentrode will offer independence to people who want to email, text and otherwise handle digital tasks that others take for granted.

Surgeons installed the implant using an endovascular procedure that avoids the intrusiveness of open-brain surgery by going through the jugular vein. The operation went “extremely well” and let the patient return home 48 hours later, according to Synchron. An ongoing Australian trial has also proven successful so far, with four patients still safe a year after receiving their implants.

It may be a long time before doctors can offer Synchron’s BCIs to patients outside of trials. The company received FDA approval for human trials in July 2021, and it’s still expanding the COMMAND trial as of this writing. Still, the US procedure represents a significant step toward greater autonomy for people with paralysis. It also represents a competitive victory: Elon Musk’s Neuralink has yet to receive FDA permission for its own implant.

Researchers at SketchX, a lab at the University of Surrey, have recently developed a meta-learning-based model that allows users to retrieve images of specific items simply by sketching them on a tablet, smartphone or other smart device. The framework is outlined in a paper set to be presented at the European Conference on Computer Vision (ECCV), one of the top three flagship computer vision conferences along with CVPR and ICCV.

“This is the latest along the line of work on ‘fine-grained image retrieval,’ a problem that my research lab (SketchX, which I direct and founded back in 2012) pioneered back in 2015, with a paper published in CVPR 2015 titled ‘Sketch Me That Shoe,’” Yi-Zhe Song, one of the researchers who carried out the study, told TechXplore. “The idea behind our paper is that it is often hard or impossible to conduct image retrieval at a fine-grained level, (e.g., finding a particular type of shoe at Christmas, but not any shoe).”

In the past, some researchers tried to devise models that retrieve images based on text or voice descriptions. Text may be easier to produce, yet it was found to work only at a coarse level: it becomes ambiguous and ineffective when asked to describe fine-grained details.
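
To make the general recipe concrete, here is a minimal, hypothetical sketch in PyTorch of how sketch-based image retrieval is typically framed: embed sketches and photos into a shared space with a ranking loss, then return the photo whose embedding lies closest to the query sketch. This illustrates the broad technique only, not the meta-learning model in the Surrey paper; the network sizes and variable names are invented.

```python
# Illustrative only: a generic shared-embedding retrieval setup,
# NOT the SketchX meta-learning model. All sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny CNN mapping a 1x64x64 image (sketch or photo) to a
    unit-length 128-d embedding; real systems use deep backbones."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

sketch_enc, photo_enc = Encoder(), Encoder()
triplet = nn.TripletMarginLoss(margin=0.2)

# Dummy batch: each sketch's matching photo is the positive,
# a photo of a different item is the negative.
sketches, pos, neg = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = triplet(sketch_enc(sketches), photo_enc(pos), photo_enc(neg))
loss.backward()  # one illustrative training step (optimizer omitted)

# Retrieval: rank a photo gallery by cosine similarity to the sketch.
with torch.no_grad():
    gallery = photo_enc(torch.randn(100, 1, 64, 64))  # 100 photos
    query = sketch_enc(torch.randn(1, 1, 64, 64))     # 1 query sketch
    best = (query @ gallery.T).argmax(dim=-1)         # top-1 match
print("retrieved photo index:", best.item())
```

In a production system the encoders would be pretrained deep networks and the gallery would hold real product photos, but the retrieval step remains a nearest-neighbor search in the shared space.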

We live in an increasingly connected world, a fact underscored by the swift spread of the coronavirus around the globe. Underlying this connectivity are complex networks—global air transportation, the internet, power grids, financial systems and ecological networks, to name just a few. The need to ensure the proper functioning of these systems also is increasing, but control is difficult.

Now a Northwestern University research team has discovered a ubiquitous property of complex networks and developed a novel computational method that is the first to systematically exploit that property to control a whole network using only local information. The method weighs computational time and information-communication costs to produce the optimal choice.

The same connections that provide functionality in networks can also serve as conduits for the propagation of failures and instabilities. In such dynamic networks, gathering and processing all the information needed for the best decision can take too long. The goal is to diagnose a problem and act before it grows into a system-wide issue, and that may mean acting on less information, as long as the action is timely.
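
The trade-off between information completeness and timeliness can be made concrete with a toy simulation. The sketch below is not the Northwestern method; the spread rule, delay and parameters are all invented for illustration. It contrasts a controller where each node protects itself the moment a direct neighbor fails (purely local information, acting instantly) with a centralized controller that sees the whole network but needs a few time steps to gather that picture.

```python
# Toy cascade model (invented for illustration; not the paper's method).
import networkx as nx

G = nx.erdos_renyi_graph(n=60, p=0.08, seed=1)

def simulate(G, seed_node, mode, global_delay=3):
    """Failure spreads to unprotected neighbors once per step.
    mode='local':  a node shields itself as soon as any direct
                   neighbor fails (local information, no delay).
    mode='global': a central controller shields every healthy node,
                   but only after `global_delay` steps spent
                   gathering network-wide information."""
    failed, protected = {seed_node}, set()
    for t in range(30):
        if mode == "local":
            for u in failed:
                for v in G.neighbors(u):
                    if v not in failed:
                        protected.add(v)
        elif mode == "global" and t == global_delay:
            protected |= set(G.nodes) - failed
        frontier = {v for u in failed for v in G.neighbors(u)
                    if v not in failed and v not in protected}
        if not frontier:
            break
        failed |= frontier
    return len(failed)

for mode in ("local", "global"):
    print(f"{mode:6s} control -> {simulate(G, 0, mode)} failed nodes")
```

Here the local controller contains the failure at a single node, while the delayed global controller loses everything within a few hops of the seed. The specific numbers are artifacts of the toy model; the point is that partial, local information acted on quickly can beat complete information that arrives late.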

Can virtual reality become indistinguishable from actual reality? Swave Photonics, a spinoff of Imec and Vrije Universiteit Brussel, has designed holographic chips built on proprietary diffractive optics technology to “bring the metaverse to life.” The Leuven, Belgium–based startup has raised €7 million in seed funding to accelerate the development of its multi-patented Holographic eXtended Reality (HXR) technology.

“Our vision is to empower people to visualize the impossible, collaborate, and accomplish more,” Théodore Marescaux, CEO and founder of Swave Photonics, told EE Times Europe. “With our HXR technology, we want to make that extended reality practically indistinguishable from the real world.”

What does it mean to project images that are indistinguishable from reality? “It means a very wide field of view, colors, high dynamic range, the ability to move your head around an object and see it from different angles, and the ability to focus,” he said.
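
A back-of-the-envelope calculation suggests why that wish list pushes toward new chip technology rather than conventional display pixels. For a pixelated hologram, standard diffraction physics caps the steering half-angle at sin θ = λ/(2p), where p is the pixel pitch, so a wide field of view forces pitches down toward the wavelength of light. The numbers below are generic optics, not Swave’s published specifications.

```python
# Generic grating-equation estimate (not Swave's specs): field of
# view achievable by a pixelated hologram vs. its pixel pitch.
import math

wavelength_nm = 532  # green light

for pitch_nm in (8000, 3000, 1000, 500, 300):  # hypothetical pitches
    s = wavelength_nm / (2 * pitch_nm)          # sin(theta) = lambda/2p
    fov_deg = 2 * math.degrees(math.asin(min(s, 1.0)))
    print(f"pitch {pitch_nm:>5} nm -> field of view ~{fov_deg:5.1f} deg")
```

An 8 µm pitch, typical of today’s spatial light modulators, yields only a few degrees of viewing angle, while sub-micron pitches open it up to tens of degrees, which is the regime diffractive-optics chips aim for.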

A new miracle drug could extend the human lifespan to as much as 200 years. Dr. Andrew Steele, a British computational biologist, recently published a book on human longevity. In it, he argues that it is entirely feasible for humans to live beyond the standard 100-year lifespan thanks to a new type of drug.