Blog

Mar 29, 2022

Russia, China can’t take down Starlink’s 2,000+ satellites, says Elon Musk

Posted by in categories: Elon Musk, internet

Mar 29, 2022

‘Informational simplicity’ may explain why nature favors symmetry

Posted by in categories: biological, evolution

In biology, symmetry is typically the rule rather than the exception. Our bodies have left and right halves, starfish radiate from a central point and even trees, though largely asymmetrical overall, still produce symmetrical flowers. In fact, asymmetry in biology seems quite rare by comparison.

Does this mean that evolution has a preference for symmetry? In a new study, an international group of researchers, led by Iain Johnston, a professor in the Department of Mathematics at the University of Bergen in Norway, says it does.

Mar 28, 2022

Japan Wants to Make Half Its Cargo Ships Autonomous by 2040

Posted by in categories: drones, economics, robotics/AI

On top of the environmental concerns, Japan has an added motivation for this push towards automation — its aging population and concurrent low birth rates mean its workforce is rapidly shrinking, and the implications for the country’s economy aren’t good.

Thus it behooves the Japanese to automate as many job functions as they can (and the rest of the world likely won’t be far behind, though they won’t have quite the same impetus). According to the Nippon Foundation, more than half of Japanese ship crew members are over the age of 50.

In partnership with Mitsui OSK Lines Ltd., the foundation recently completed two tests of autonomous ships. The first was a 313-foot container ship called the Mikage, which sailed 161 nautical miles from Tsuruga Port, north of Kyoto, to Sakai Port near Osaka. Upon reaching its destination port the ship was even able to steer itself into its designated bay, with drones dropping its mooring line.

Mar 28, 2022

CRISPR/Cas9-Mediated Genome Editing as a Therapeutic Approach for Leber Congenital Amaurosis 10

Posted by in categories: biotech/medical, genetics

Circa 2017 😀


As the most common subtype of Leber congenital amaurosis (LCA), LCA10 is a severe retinal dystrophy caused by mutations in the CEP290 gene. The most frequent mutation found in patients with LCA10 is a deep intronic mutation in CEP290 that generates a cryptic splice donor site. The large size of the CEP290 gene prevents its use in adeno-associated virus (AAV)-mediated gene augmentation therapy. Here, we show that targeted genomic deletion using the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system represents a promising therapeutic approach for the treatment of patients with LCA10 bearing the CEP290 splice mutation. We generated a cellular model of LCA10 by introducing the CEP290 splice mutation into 293FT cells and we showed that guide RNA pairs coupled with SpCas9 were highly efficient at removing the intronic splice mutation and restoring the expression of wild-type CEP290. In addition, we demonstrated that a dual AAV system could effectively delete an intronic fragment of the Cep290 gene in the mouse retina. To minimize the immune response to prolonged expression of SpCas9, we developed a self-limiting CRISPR/Cas9 system that minimizes the duration of SpCas9 expression. These results support further studies to determine the therapeutic potential of CRISPR/Cas9-based strategies for the treatment of patients with LCA10.

Keywords: CEP290; CRISPR/Cas9; LCA10.

Continue reading “CRISPR/Cas9-Mediated Genome Editing as a Therapeutic Approach for Leber Congenital Amaurosis 10” »

Mar 28, 2022

BMW and Lego have built the hoverbike of our dreams

Posted by in category: futurism

Mar 28, 2022

The biggest problem in AI? Machines have no common sense

Posted by in category: robotics/AI

What most people define as common sense is actually common learning, and much of that is biased.

The biggest short-term problem in AI, as mentioned in the video clip: an over-emphasis on dataset size, irrespective of accuracy, representation, or accountability.

The biggest long-term problem in AI: instead of trying to replace us, AI should seek to complement us. A merge is neither necessary nor advisable.

Continue reading “The biggest problem in AI? Machines have no common sense” »

Mar 28, 2022

James Webb Space Telescope unfolds mirrors, completes deployment

Posted by in categories: futurism, space

Spreading its mirror wings was the telescope’s last big step in its complicated deployment.


NASA has pulled off the most technically audacious part of bringing its newest flagship observatory online: unfolding it.

On Saturday, Jan. 8, the operations team for the James Webb Space Telescope (JWST) announced that the observatory’s primary mirror had successfully unfolded its segments — the last major step of the telescope’s complicated deployment.

Continue reading “James Webb Space Telescope unfolds mirrors, completes deployment” »

Mar 28, 2022

1000X More Efficient Neural Networks: Building An Artificial Brain With 86 Billion Physical (But Not Biological) Neurons

Posted by in categories: biological, robotics/AI

Which, to me, sounds both unimaginably complex and sublimely simple.

Sort of like, perhaps, our brains.

Building chips with analogs of biological neurons and dendrites and neural networks like our brains is also key to the massive efficiency gains Rain Neuromorphics is claiming: 1,000 times more efficient than existing digital chips from companies like Nvidia.

Mar 28, 2022

Sanctuary claims it’s creating robots with human-level intelligence, but experts are skeptical

Posted by in category: robotics/AI

Sanctuary, a startup developing human-like robots, has raised tens of millions in capital. But experts are skeptical it can deliver on its promises.

Mar 28, 2022

Explainable AI (XAI) with Class Maps

Posted by in categories: biotech/medical, information science, robotics/AI

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify to which groups a set of observations belong. A machine learning practitioner typically builds multiple models and selects a final classifier to be one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
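The model-selection workflow described above can be sketched in Python. This is a minimal illustration only: the threshold "models", the toy held-out test set, and all names are invented for this sketch; in practice the candidates would be trained estimators evaluated on real held-out data.

```python
# Sketch: choosing a final classifier by accuracy on a held-out test set.
# The "models" here are toy threshold rules over a single risk score.

def make_threshold_model(threshold):
    """Classify a patient as high-risk (1) when their score exceeds threshold."""
    return lambda score: 1 if score > threshold else 0

def accuracy(model, examples):
    """Fraction of (score, label) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in examples) / len(examples)

# Held-out test set: (risk score, true label) pairs, invented for illustration.
test_set = [(0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.3, 0), (0.9, 1)]

# Build several candidate models and keep the one with the best held-out accuracy.
candidates = {t: make_threshold_model(t) for t in (0.1, 0.5, 0.7)}
best_threshold = max(candidates, key=lambda t: accuracy(candidates[t], test_set))
print(best_threshold, accuracy(candidates[best_threshold], test_set))
```

The point of the sketch is only the selection step: every candidate is scored on the same held-out data, and the final classifier is the one that optimizes the chosen accuracy metric.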

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].
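The transparency contrast above can be made concrete with a toy sketch, assuming a hypothetical medical-risk feature: a one-split decision tree whose rule is inspectable, next to a "black box" that makes the same prediction but exposes no rationale. All names and thresholds here are invented for illustration.

```python
# Sketch: a "transparent" one-split decision tree versus a "black box".
# Both classify the same inputs; only the tree can explain its decision.

class DecisionStump:
    """A one-level decision tree: transparent because its rule is inspectable."""
    def __init__(self, feature_name, threshold):
        self.feature_name = feature_name
        self.threshold = threshold

    def predict(self, features):
        return 1 if features[self.feature_name] > self.threshold else 0

    def explain(self):
        return f"predict 1 if {self.feature_name} > {self.threshold}, else 0"

# A black box with identical behaviour, but no way to ask "why?".
black_box = lambda features: 1 if features["blood_pressure"] > 140 else 0

stump = DecisionStump("blood_pressure", 140)
patient = {"blood_pressure": 150}

print(stump.predict(patient), black_box(patient))  # same prediction
print(stump.explain())                             # only the stump can say why
```

Real black boxes are deep networks or large ensembles rather than lambdas, but the asymmetry is the same: identical predictions, very different ability to justify them — which is exactly the gap that tools like class maps aim to close.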

Continue reading “Explainable AI (XAI) with Class Maps” »