Blog

Nov 26, 2022

Ikea Is Replacing Styrofoam Packaging With Compostable Mushroom-Foam

Posted by in category: sustainability

Mushroom-foam is as cheap as Styrofoam, requires no fossil fuel, and creates no plastic pollution, biodegrading in your garden in just a couple of weeks. Ikea is switching to a new mushroom-based, biodegradable alternative to polystyrene (Styrofoam) packaging for its furniture and home decor. Known as Mycofoam, the product is […].

Nov 26, 2022

A Boiling Cauldron: Cybersecurity Trends, Threats, And Predictions For 2023

Posted by in categories: cybercrime/malcode, information science, internet, quantum physics

By Chuck Brooks


There are many other interesting trends to watch for in 2023. These include the expanded use of Software Bills of Materials (SBOMs), the integration of more 5G networks to reduce the latency of data delivery, more deepfakes used for fraud, low-code tools for citizen developers, more computing at the edge, and the initial stages of implementing quantum technologies and algorithms.

When all is said and done, 2023 will face a boiling concoction of new and old cyber threats. It will be an especially challenging year for everyone trying to protect their data, and for geopolitical stability.

Nov 26, 2022

Artificial Intelligence (AI) Researchers from Cornell University Propose a Novel Neural Network Framework to Address the Video Matting Problem

Posted by in categories: mapping, robotics/AI

Image and video editing are two of the most popular applications for computer users. With the advent of Machine Learning (ML) and Deep Learning (DL), image and video editing have been progressively studied through several neural network architectures. Until very recently, most DL models for image and video editing were supervised and, more specifically, required training data containing paired inputs and outputs from which to learn the details of the desired transformation. Lately, end-to-end learning frameworks have been proposed that require only a single input image to learn the mapping to the desired edited output.

Video matting is a specific task belonging to video editing. The term “matting” dates back to the 19th century, when glass plates of matte paint were set in front of a camera during filming to create the illusion of an environment that was not present at the filming location. Nowadays, the composition of multiple digital images follows a similar procedure: a compositing formula is used to blend the intensities of the foreground and background of each image, expressing every pixel as a linear combination of the two components.
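For intuition, this is the standard per-pixel alpha-compositing relation, shown here as a minimal NumPy sketch (the array names and sizes are illustrative, not taken from the paper):

```python
# Alpha compositing: each composite pixel is a linear combination of a
# foreground and a background layer,
#   C = alpha * F + (1 - alpha) * B.
# Matting is the inverse problem: recovering alpha (and F) from C.
import numpy as np

h, w = 4, 4
foreground = np.random.rand(h, w, 3)   # F: foreground color layer
background = np.random.rand(h, w, 3)   # B: background color layer
alpha = np.random.rand(h, w, 1)        # per-pixel opacity in [0, 1]

composite = alpha * foreground + (1.0 - alpha) * background
print(composite.shape)                  # (4, 4, 3)
```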

Although powerful, this process has some limitations. It requires an unambiguous factorization of the image into foreground and background layers, which are then assumed to be independently treatable. In settings like video matting, where the frames of a sequence are dependent in both time and space, this layer decomposition becomes a complex task.

Nov 26, 2022

Updating the Great Pyramid Internal Ramp Theory

Posted by in category: space

The Internal Ramp Theory for the Great Pyramid of Egypt is one of the most interesting ideas ever proposed for its construction. French architect Jean-Pierre Houdin has spent more than 20 years developing and refining this idea.

In October of 2022, Houdin published an update to his theory which reflects the ScanPyramids findings from the past six years. The ScanPyramids ‘Big Void’ is an intriguing clue that Houdin may be correct with his notion of the Grand Gallery being used as a counterweight ramp for the largest pyramid stones.

The ‘Big Void’ may be another Grand Gallery-like space that could have been used for the same purpose. Institutional Egyptology remains unreceptive both to Houdin’s publications and to the highly confident results from the ScanPyramids mission.

Nov 26, 2022

Good and bad memories are stored in different neurons, study finds

Posted by in categories: biological, neuroscience

Memories are stored in all different areas across the brain as networks of neurons called engrams. In addition to collecting information about incoming stimuli, these engrams capture emotional information. In a new study, Steve Ramirez, a neuroscientist at Boston University, discovered where the brain stores positive and negative memories and uncovered hundreds of markers that differentiate positive-memory neurons from negative-memory neurons.

In 2019, Ramirez found evidence that good and bad memories are stored in different regions of the hippocampus, a cashew-shaped structure that holds sensory and emotional information necessary for forming and retrieving memories. The top part of the hippocampus activated when mice underwent enjoyable experiences, but the bottom region activated when they had negative experiences.

His team also found that they could manipulate memories by activating these regions. When he and his team activated the top area of the hippocampus, bad memories became less traumatic. Conversely, when they activated the bottom part, mice exhibited signs of long-lasting anxiety-related behavioral changes. Ramirez suspected this difference in effect arose because the neurons that store good and bad memories have different functions beyond simply encoding positive and negative emotions. However, before he could unravel this difference, he needed to identify which cells store good and bad memories. The results were published in the journal Communications Biology.

Nov 26, 2022

Scientists Discover a Gene That Could Prevent Alzheimer’s Disease

Posted by in categories: biotech/medical, neuroscience

Researchers at the University of Colorado Anschutz find that the overexpression of a gene improves learning and memory in Alzheimer’s.

Alzheimer’s is a disease that attacks the brain, causing a decline in mental ability that worsens over time. It is the most common form of dementia and accounts for 60 to 80 percent of dementia cases. There is currently no cure for Alzheimer’s disease, but there are medications that can help ease the symptoms.

Nov 26, 2022

Evolution of Human Consciousness SOLVED! — Yet Again, It Seems… | Mind Matters

Posted by in categories: biological, evolution, neuroscience

Nothing in biology makes sense except in the light of evolution. The gradualism of evolution has explained and dissolved life’s mysteries—life’s seemingly irreducible complexity and the illusion that living things possess some sort of mysterious vitalizing essence. So, too, evolution is likely to be key to demystifying the seemingly inexplicable, ethereal nature of consciousness.

First, what does it even mean to say that “Nothing in biology makes sense except in the light of evolution”? If the chosen topic is human consciousness, Martin Luther King and Mother Teresa come quickly to mind. But then what does the term “evolution” contribute to the discussion of the origin of human consciousness? Is it something useful or something theorists are stuck with, come what may?

Science theories should make predictions. Who predicted either King or Mother Teresa?

Nov 26, 2022

Fluxonium qubits bring the creation of a quantum computer closer

Posted by in categories: computing, information science, quantum physics

Russian scientists from the University of Science and Technology MISIS and Bauman Moscow State Technical University were among the first in the world to implement a two-qubit operation using superconducting fluxonium qubits. Fluxoniums have longer lifetimes and greater operational precision, so they can be used to run longer algorithms. An article on the research, which brings the creation of a quantum computer closer to reality, has been published in npj Quantum Information.

One of the main questions in the development of a universal quantum computer concerns the choice of physical platform: namely, which quantum objects are best suited for building quantum processors, whether electrons, photons, ions, superconductors, or other “quantum transistors.” Superconducting qubits have become one of the most successful platforms for quantum computing during the past decade. To date, the most commercially successful superconducting qubits are transmons, which are actively investigated and used in the quantum developments of Google, IBM and other world-leading laboratories.

The main task of a qubit is to store and process information without errors. Accidental noise and even mere observation can lead to the loss or alteration of data. The stable operation of qubits often requires extremely low ambient temperatures, close to zero kelvin, hundreds of times colder than open space.

Nov 26, 2022

Researchers From Stanford And Microsoft Have Proposed An Artificial Intelligence (AI) Approach That Uses Declarative Statements As Corrective Feedback For Neural Models With Bugs

Posted by in category: robotics/AI

The methods currently used to correct systematic issues in NLP models are either fragile or time-consuming and prone to shortcuts. Humans, on the other hand, frequently correct one another using natural language. This inspired recent research on natural language patches: declarative statements that enable developers to deliver corrective feedback at the appropriate level of abstraction, either by overriding the model or by adding information the model may be missing.

Instead of relying solely on labeled examples, there is a growing body of research on using language to provide instructions, supervision, and even inductive biases to models, such as building neural representations from language descriptions (Andreas et al., 2018; Murty et al., 2020; Mu et al., 2020) or language-based zero-shot learning (Brown et al., 2020; Hanjie et al., 2022; Chen et al., 2021). However, language has yet to be properly utilized for corrective purposes, where the user interacts with an existing model to enhance it.

The neural language patching model has two heads: a gating head that determines whether a patch should be applied and an interpreter head that predicts the output based on the information in the patch. The model is trained in two stages: first on a tagged dataset and then through task-specific fine-tuning. A set of patch templates is used to create patches and synthetic labeled samples during the second fine-tuning stage.
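As a rough illustration of that two-headed design, the sketch below shows how a gating head and an interpreter head might be combined with a base model's prediction. This is not the authors' implementation: the module names, the simple linear encoder, and the gated blending rule are all assumptions made for the example.

```python
# Hypothetical sketch of a two-headed "language patch" model.
# A gating head scores whether the patch applies to the input; an
# interpreter head predicts the label implied by the patch text.
# The final prediction blends the patch-conditioned distribution with
# the base model's distribution according to the gate.
import torch
import torch.nn as nn

class PatchingModelSketch(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # Stand-in for a shared text encoder (e.g., a pretrained transformer).
        self.encoder = nn.Linear(hidden_dim, hidden_dim)
        # Gating head: probability that the patch is relevant to this input.
        self.gate_head = nn.Linear(2 * hidden_dim, 1)
        # Interpreter head: label distribution implied by the patch.
        self.interpreter_head = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, input_repr, patch_repr, base_logits):
        x = self.encoder(input_repr)
        joint = torch.cat([x, patch_repr], dim=-1)
        gate = torch.sigmoid(self.gate_head(joint))                  # (batch, 1)
        patch_probs = torch.softmax(self.interpreter_head(joint), dim=-1)
        base_probs = torch.softmax(base_logits, dim=-1)
        # Follow the patch where the gate fires; keep the base model otherwise.
        return gate * patch_probs + (1.0 - gate) * base_probs

# Toy usage with random features standing in for encoded text.
model = PatchingModelSketch(hidden_dim=16, num_labels=2)
inputs = torch.randn(4, 16)       # encoded task inputs
patch = torch.randn(4, 16)        # encoded patch statement
base_logits = torch.randn(4, 2)   # original (buggy) model's logits
print(model(inputs, patch, base_logits).shape)  # torch.Size([4, 2])
```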

Nov 26, 2022

Application: Quantum mechanics on curved spaces — Lec 26 — Frederic Schuller

Posted by in category: quantum physics

This is from a series of lectures, “Lectures on the Geometric Anatomy of Theoretical Physics,” delivered by Dr. Frederic P. Schuller.