
When video chatting with colleagues, coworkers, or family, many of us have grown accustomed to using virtual backgrounds and background filters. These features offer more control over our surroundings, reducing distractions, preserving the privacy of those around us, and even livening up our virtual presentations and get-togethers. However, background filters don’t always work as expected or perform well for everyone.

Image segmentation is a computer vision technique that separates the different components of a photo or video. It has been widely used to improve backdrop blurring, virtual backgrounds, and other augmented reality (AR) effects. Despite advances in algorithms, achieving highly accurate person segmentation remains challenging.
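
As a rough illustration of how a person-segmentation mask gets used for a backdrop effect, here is a minimal sketch that composites a sharp foreground over a blurred background. The `blur_background` helper and its parameters are hypothetical; the mask could come from any off-the-shelf segmentation model:

```python
import cv2
import numpy as np

def blur_background(frame: np.ndarray, mask: np.ndarray, ksize: int = 55) -> np.ndarray:
    """Composite a sharp person over a blurred version of the frame.

    frame: HxWx3 uint8 image; mask: HxW float in [0, 1], where 1 = person.
    """
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    alpha = mask[..., None]  # add a channel axis so the mask broadcasts over 3 channels
    # A soft alpha blend feathers the person's edges, so small mask errors
    # fade out instead of showing up as hard, distracting artifacts.
    out = alpha * frame.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```

Blending with a fractional mask rather than a hard binary cutout is a deliberate choice here: it hides small boundary errors, which matters given how hard accurate person segmentation is in practice.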

A model used for image segmentation must be highly consistent and lag-free; inefficient algorithms result in a bad experience for users. During a video conference, for instance, artifacts generated by erroneous segmentation output can easily distract people using virtual-background programs. More importantly, segmentation errors may expose parts of people’s physical environments that backdrop effects were meant to conceal.
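
One generic way to improve frame-to-frame consistency is to smooth successive masks with an exponential moving average, so brief segmentation errors fade rather than flicker. This is a common technique, not the method of any particular product; the `MaskSmoother` name and decay value below are illustrative assumptions:

```python
import numpy as np

class MaskSmoother:
    """Exponential moving average over per-frame segmentation masks."""

    def __init__(self, decay: float = 0.6):
        self.decay = decay   # higher = smoother output but laggier edges
        self.state = None    # running average of past masks

    def update(self, mask: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = mask.astype(np.float32)
        else:
            # Blend the new mask with accumulated history to damp
            # frame-to-frame flicker at object boundaries.
            self.state = self.decay * self.state + (1.0 - self.decay) * mask
        return self.state
```

The decay factor trades smoothness against responsiveness: too high and the mask lags behind a fast-moving subject, too low and flicker returns.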

Neuralink! Also known as Elon Musk’s other (other) project.

You’ve probably heard of Elon Musk’s plan to implant chips into the human brain, finally bridging the gap between human and machine. Academically, this is known as a brain-machine interface, and it’s actually not that novel an idea!

In this video, I’ll refer to two key scientific studies and discuss the threat of AI technology, why Elon Musk wants to put a chip in your brain, and whether or not it’s feasible.

Once you’re done watching the video, feel free to tell me in the comments whether you would ever consider getting the N1 (or whatever future iteration) chip.

Could this be the future of humanity?

Video Of The Rats & References On Substack.

The rising visibility of Ethical AI or AI Ethics is doing great good; meanwhile, some believe it isn’t enough, and a semblance of embracing Radical Ethical AI is appearing. This is closely examined herein, including for AI-based self-driving cars.

Has the prevailing tenor and attention of today’s widely emerging semblance of AI Ethics gotten into a veritable rut? Some seem to decidedly think so.

Let’s unpack this. You might generally be aware that there has been a rising tide of interest in the ethical ramifications of AI. This is often referred to as either AI Ethics or Ethical AI, two monikers we’ll treat herein as predominantly equivalent and interchangeable (I suppose some might quibble about that assumption, but I’d like to suggest we not get distracted by the potential differences, if any, for the purposes of this discussion).

For something so small, neurons can be quite complex—not only because there are billions of them in a brain, but because their function can be influenced by many factors, like their shape and genetic makeup.

A research team led by Daifeng Wang, a Waisman Center professor of biostatistics and medical informatics and computer sciences at the University of Wisconsin–Madison, is adapting machine learning and artificial intelligence techniques to better understand how a variety of traits together affect the way neurons work and behave.

Called manifold learning, the approach may help researchers better understand and even predict brain disorders by looking at specific neuronal properties. The Wang lab recently published its findings in two studies.
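
To make the general idea concrete, here is a minimal sketch of manifold learning using scikit-learn’s Isomap on synthetic per-neuron features. The data, feature count, and choice of estimator are illustrative assumptions, not the Wang lab’s actual pipeline:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# 300 hypothetical neurons, each described by 50 measured properties
# (e.g. morphology, electrophysiology, gene expression) -- synthetic here.
features = rng.normal(size=(300, 50))

# Embed the 50-dimensional measurements into 2 dimensions while trying to
# preserve the geometry of the underlying data manifold, so neurons with
# similar overall property profiles land near each other.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
print(embedding.shape)  # (300, 2): one low-dimensional point per neuron
```

In a low-dimensional embedding like this, clusters and gradients among neurons become visible, which is what makes the approach useful for relating neuronal properties to, say, disorder risk.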

Soft sensing technologies have the potential to revolutionize wearable devices, haptic interfaces, and robotic systems. However, most soft sensing technologies aren’t durable and consume large amounts of energy.

Now, researchers at the University of Cambridge have developed self-healing, biodegradable, 3D-printed materials that could be used in the development of realistic artificial hands and other soft robotics applications. The low-cost jelly-like materials can sense strain, temperature, and humidity. And unlike earlier self-healing robots, they can also partially repair themselves at room temperature.

“Incorporating soft sensors into robotics allows us to get a lot more information from them, like how strain on our muscles allows our brains to get information about the state of our bodies,” said David Hardman from Cambridge’s Department of Engineering, the paper’s first author.