
Circa 2017 😀


As the most common subtype of Leber congenital amaurosis (LCA), LCA10 is a severe retinal dystrophy caused by mutations in the CEP290 gene. The most frequent mutation found in patients with LCA10 is a deep intronic mutation in CEP290 that generates a cryptic splice donor site. The large size of the CEP290 gene prevents its use in adeno-associated virus (AAV)-mediated gene augmentation therapy. Here, we show that targeted genomic deletion using the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system represents a promising therapeutic approach for the treatment of patients with LCA10 bearing the CEP290 splice mutation. We generated a cellular model of LCA10 by introducing the CEP290 splice mutation into 293FT cells and we showed that guide RNA pairs coupled with SpCas9 were highly efficient at removing the intronic splice mutation and restoring the expression of wild-type CEP290. In addition, we demonstrated that a dual AAV system could effectively delete an intronic fragment of the Cep290 gene in the mouse retina. To minimize the immune response to prolonged expression of SpCas9, we developed a self-limiting CRISPR/Cas9 system that minimizes the duration of SpCas9 expression. These results support further studies to determine the therapeutic potential of CRISPR/Cas9-based strategies for the treatment of patients with LCA10.

Keywords: CEP290; CRISPR/Cas9; LCA10.


What most people define as common sense is actually common learning, and much of that is biased.

The biggest short-term problem in AI: as mentioned in the video clip, an over-emphasis on dataset size, regardless of accuracy, representation, or accountability.

The biggest long-term problem in AI: instead of trying to replace us, AI should be built to complement us. A merge is neither necessary nor advisable.

If we think about it, building a machine to think like a human is like buying a racehorse and insisting that it function like a camel. It is doomed to fail, because there are only two possible outcomes: either humans are replaced or they are not. If we are replaced, then we have failed. If we are not replaced, then the AI development has failed.

Time for a change of direction.

Spreading its mirror wings was the telescope’s last big step in its complicated deployment.


NASA has pulled off the most technically audacious part of bringing its newest flagship observatory online: unfolding it.

On Saturday, Jan. 8, the operations team for the James Webb Space Telescope (JWST) announced that the observatory’s primary mirror had successfully unfolded its segments — the last major step of the telescope’s complicated deployment.

It was a euphoric moment of validation for the entire team. “We’re on an incredible high right now,” said Bill Ochs, JWST’s project manager, at a press conference. “Today represents the beginning of a journey for this incredible machine, to its discoveries that we’ll be making in the future.”

Which, to me, sounds both unimaginably complex and sublimely simple.

Sort of like, perhaps, our brains.

Building chips with analogs of biological neurons, dendrites, and neural networks, much like our brains, is also key to the massive efficiency gains Rain Neuromorphics is claiming: 1,000 times more efficient than existing digital chips from companies like Nvidia.

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify to which groups a set of observations belong. A machine learning practitioner typically builds multiple models and selects a final classifier to be one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
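
As a rough illustration of that workflow, the sketch below trains a few candidate classifiers and keeps whichever scores best on a held-out test set. It is only a minimal example in Python, assuming scikit-learn and a synthetic dataset; the article's own data, models, and metrics may differ.

```python
# Hypothetical sketch: build several candidate classifiers and select the one
# that optimizes accuracy on a held-out test set (synthetic data for brevity).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best_name = max(scores, key=scores.get)
print(scores)
print("selected classifier:", best_name)
```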

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].
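
To make "transparent" concrete, a single decision tree can print the exact rules it learned, something an ensemble or neural network cannot expose in the same way. The snippet below is an illustrative sketch only, assuming scikit-learn and one of its bundled toy datasets rather than anything used in this article.

```python
# A transparent model: the learned decision rules can be read off directly.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the tree's decision rules feature by feature,
# which is exactly the "mechanism" a transparent model exposes.
print(export_text(tree, feature_names=list(data.feature_names)))
```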

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
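
One widely used XAI technique (not the visual tool introduced in this article) is permutation importance, which estimates how much a fitted black-box model relies on each feature. The sketch below assumes scikit-learn and a toy dataset, purely for illustration.

```python
# Explain a black-box classifier by shuffling each feature on held-out data
# and measuring the resulting drop in accuracy: larger drops mean the model
# depends more heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model leans on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.3f}")
```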

A four-legged robot called Spot has been deployed to wander around the ruins of ancient Pompeii, identifying structural and safety issues while delving underground to inspect tunnels dug by relic thieves.

The dog-like robot is the latest in a series of technologies used as part of a broader project to better manage the archaeological park since 2013, when Unesco threatened to add Pompeii to a list of world heritage sites in peril unless Italian authorities improved its preservation.

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton from DeepMind titled “Reward is Enough.” The authors argue that “maximizing reward is enough to drive behavior that exhibits most if not all attributes of intelligence.” However, reward is not enough. The statement itself is simplistic, vague, circular, and explains little because the assertion is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: suppose you or your team talk about how intelligent or cognitively plausible your solution is. I see this kind of argument quite a bit. If so, you are not thinking enough about the specific problem or the people affected by it. Practitioners and business-minded leaders need to know about cognitive plausibility because it reflects the wrong culture. Real-world problem solving addresses the problems the world presents to intelligence, and the solutions to those problems are rarely, if ever, cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal and you seek an optimal solution to accomplish that goal, then how “cognitively plausible” the solution is does not matter. How a problem is solved is always secondary to whether it is solved, and if you don’t care how, you can solve just about anything. The goal itself, and how optimal a solution is for the problem, matter more than how the goal is accomplished, whether the solution was self-referencing, or what the solution looked like when it failed to solve the problem.

This conference will take place at EMBL Heidelberg, with a live streaming option for virtual participants free of charge. Proof of COVID-19 vaccination or recovery is required for on-site attendance. Please see EMBL’s COVID-19 terms and conditions.

Workshop registration is available only to EIROforum members. Please note that the workshop is an on-site-only event; contact Iva Gavran for more information or use this link to register.