
The metaverse was a huge, company-destroying blunder. The smart move this decade is to drop everything else and chase AGI.


VR pioneer John Carmack is leaving Meta for good. With his departure, the industry loses a visionary and an important voice.

Carmack published his farewell letter on Facebook after parts of the email were leaked to the press.

In his message to employees, Carmack, as usual, doesn’t mince words. He cites a lack of efficiency at Meta, and his powerlessness to change that, as his reasons for leaving.

Synchron, a neurovascular bioelectronics medicine company, today announced publication of a first-in-human study demonstrating successful use of the Stentrode™ brain-computer interface (BCI), or neuroprosthesis. Specifically, the study shows the Stentrode’s ability to enable patients with severe paralysis to resume daily tasks, including texting, emailing, shopping and banking online, through direct thought, and without the need for open brain surgery. The study is the first to demonstrate that a BCI implanted via the patient’s blood vessels is able to restore the transmission of brain impulses out of the body, and did so wirelessly. The patients were able to use their brain impulses to control digital devices without the need for a touchscreen, mouse, keyboard or voice activation technology. This feasibility study was published in the Journal of NeuroInterventional Surgery (JNIS), the leading international peer-reviewed journal for the clinical field of neurointerventional surgery, and official journal of the Society of NeuroInterventional Surgery (SNIS).

Is digital immortality possible by uploading your mind? Dr. Paul Thagard discusses Neuralink, artificial intelligence, mind uploading, simulation theory, and the challenges involved with whole brain emulation.

Dr. Paul Thagard is a philosopher, cognitive scientist, and author of many interdisciplinary books. He currently teaches as a Distinguished Professor Emeritus of Philosophy at the University of Waterloo, where he founded and directed the Cognitive Science Program.

Dr. Thagard is a graduate of the Universities of Saskatchewan, Cambridge, Toronto (with a PhD in philosophy) and Michigan (with an MS in computer science). He is a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. The Canada Council awarded him a Molson Prize in 2007 and a Killam Prize in 2013.



On September 15, 2022, the Ethereum network migrated from a proof-of-work to a proof-of-stake consensus mechanism in an event known as the Merge. Apart from reducing energy consumption by 99%, the Merge laid the foundations for building a highly secure and scalable blockchain. Despite these benefits, however, the Merge also marks a regression in privacy, which is a significant concern for Ethereum users.

Privacy generally takes a backseat to other core blockchain topics such as decentralization and scalability. In fact, blockchain networks’ zeal for data transparency often comes at the cost of compromising individual and enterprise privacy. But without a privacy-focused approach — even one that gives users optional privacy — Ethereum decentralized applications (dapps) will repeat the same mistakes of Web2 applications.

https://www.riffusion.com.
Get The Memo: https://lifearchitect.ai/memo.

Examples:
https://www.riffusion.com/?&prompt=bach+on+electone.
https://www.riffusion.com/?&prompt=eminem+style+anger+rap.

Code: https://github.com/hmartiro/riffusion-app.

Spectrogram demo: https://musiclab.chromeexperiments.com/Spectrogram.

Both animals and people use high-dimensional inputs (like eyesight) to accomplish various shifting survival-related objectives. A crucial aspect of this is learning from mistakes. A brute-force approach to trial and error, performing every action for every potential goal, is intractable even in the smallest settings. The difficulty of this search motivates memory-based methods for compositional thinking. These processes include, for instance, the ability to (i) recall pertinent portions of prior experience, (ii) reassemble them into new counterfactual plans, and (iii) carry out such plans as part of a focused search strategy. Compared to sampling every action equally, such techniques for recycling prior successful behavior can considerably speed up trial and error. This is because the intrinsic compositional structure of real-world objectives, and the similarity of the physical laws that govern real-world settings, allow the same behavior (i.e., sequence of actions) to remain valid for many purposes and situations.

What guiding principles enable memory processes to retain and reassemble experience fragments? This question is closely connected to the idea of dynamic programming (DP), which, via the principle of optimality, significantly lowers the computational cost of trial and error. The idea may be expressed informally as treating new, complicated problems as a recomposition of previously solved, smaller subproblems.
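To make the principle of optimality concrete, here is a minimal sketch (not from the paper; the graph and costs are hypothetical) of dynamic programming reusing solved subproblems: the cheapest cost of reaching a goal from any state is recomposed from the already-computed costs of its successors, and memoization ensures each subproblem is solved only once.

```python
# Minimal illustration: the principle of optimality says the cheapest path through
# an intermediate state must itself contain the cheapest path from that state.
# Memoization lets each subproblem be solved once and reused.
from functools import lru_cache

# A small hypothetical directed graph: edges[state] = list of (next_state, cost).
edges = {
    "start": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 1.0), ("goal", 5.0)],
    "b": [("goal", 1.0)],
    "goal": [],
}

@lru_cache(maxsize=None)
def cheapest_cost(state: str) -> float:
    """Minimum total cost from `state` to 'goal', built from solved subproblems."""
    if state == "goal":
        return 0.0
    candidates = [cost + cheapest_cost(nxt) for nxt, cost in edges[state]]
    return min(candidates) if candidates else float("inf")

print(cheapest_cost("start"))  # 3.0: start -> a -> b -> goal
```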

This viewpoint has recently been used to create hierarchical reinforcement learning (RL) algorithms for goal-reaching tasks. These techniques build edges between states in a planning graph using a distance regression model, compute the shortest paths across it using DP-based graph search, and then use a learning-based local policy to follow those paths. The new paper advances this line of work. Its contributions can be summarized as follows: it provides a strategy for long-horizon planning that acts directly on the high-dimensional sensory data an agent observes on its own (e.g., images from an onboard camera). The approach blends traditional sampling-based planning algorithms with learning-based perceptual representations to retrieve and reassemble previously recorded state transitions stored in a replay buffer.

A two-step method makes this possible. First, they learn a latent space in which the distance between two states measures how many timesteps an optimal policy needs to move from one state to the other. These contrastive representations are learned from goal-conditioned Q-values acquired through offline hindsight relabeling. Second, they threshold this learned latent distance metric to establish a neighborhood criterion across states. They then design sampling-based planning algorithms that scan the replay buffer for trajectory segments, i.e., previously recorded successions of transitions, whose endpoints are neighboring states.
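As a rough illustration of this retrieve-and-recompose idea (all names here are hypothetical: `embed` stands in for the learned contrastive encoder, and the latent L2 distance stands in for the learned goal-conditioned temporal distance), the sketch below embeds replay-buffer states, thresholds pairwise latent distances to connect neighboring states, and runs a Dijkstra-style graph search to stitch stored transitions into a plan.

```python
# Rough sketch only: the real method learns the encoder and temporal distance
# from offline data via goal-conditioned Q-learning with hindsight relabeling;
# here both are replaced by simple placeholders.
import heapq
import numpy as np

rng = np.random.default_rng(0)
replay_states = rng.normal(size=(50, 8))   # hypothetical stored observations
THRESHOLD = 1.5                            # neighborhood cutoff in latent space

def embed(x: np.ndarray) -> np.ndarray:
    """Placeholder encoder; a trained network would map observations to latents."""
    return x[:, :4]  # pretend the first half of each observation is the latent

def build_graph(states: np.ndarray) -> dict:
    """Connect pairs of states whose latent distance falls below the threshold."""
    z = embed(states)
    graph = {i: [] for i in range(len(states))}
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            d = float(np.linalg.norm(z[i] - z[j]))
            if d < THRESHOLD:
                graph[i].append((j, d))
                graph[j].append((i, d))
    return graph

def shortest_path(graph: dict, start: int, goal: int) -> list:
    """Dijkstra search over the neighborhood graph (DP-style reuse of subpaths)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return []  # no path found under the current threshold

graph = build_graph(replay_states)
print(shortest_path(graph, start=0, goal=10))
```

The thresholded graph plays the role of the planning graph described above: graph search recomposes short, previously observed hops into a long-horizon route, and a local policy would then be responsible for executing each hop.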

Fortunately, neuroscience can help — both to reassure you that you’re normal, and to provide support for the idea that there are specific habits and practices people can learn in order to improve memory when they need it most. Here are 8 of the most interesting findings I’ve come across over the last couple of years:

Let’s start with this one, because it’s oh-so-easy. Michigan State University researchers studied whether Nile grass rats exhibited better memory when they were kept in an environment where the lighting resembled a corporate office (think dim fluorescent lighting), or where the lighting resembled a sunny day outside.

Sure enough, the study found that rats in dim lighting “lost about 30 percent of capacity in the hippocampus, a critical brain region for learning and memory, and performed poorly on a spatial task they had trained on previously.”