I've been quite impressed so far. And if they can be improved overnight, I would love to see it.


With long-term memory, language models could be even more specific – or more personal. MemoryGPT gives a first impression.

Right now, interaction with language models is limited to single instances, e.g. a single chat in ChatGPT. Within that chat, the language model can, to some extent, take the context of earlier input into account when generating new text and replies.

In the currently most powerful version of GPT-4, this context window is up to 32,000 tokens, about 50 pages of text. This makes it possible, for example, to chat about the contents of a long paper, or for developers to query a larger codebase for new solutions. The context window is an important building block for the practical use of large language models, an innovation made possible by Transformer networks.
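As a rough sanity check on the 50-page figure, here is a minimal back-of-the-envelope sketch in Python. The conversion factors (about 0.75 English words per token and about 500 words per page) are common rules of thumb, not numbers from the article:

```python
# Rough check of the "32,000 tokens ≈ 50 pages" claim.
# Assumed rules of thumb (not from the article):
#   ~0.75 English words per token, ~500 words per page.
TOKENS = 32_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

words = TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"{TOKENS} tokens ~ {words:.0f} words ~ {pages:.0f} pages")
# 32000 tokens ~ 24000 words ~ 48 pages
```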

A fascinating proposal for a methodology.


Models are scientific models, theories, hypotheses, formulas, equations, naïve models based on personal experiences, superstitions (!), and traditional computer programs. In a Reductionist paradigm, these Models are created by humans, ostensibly by scientists, and are then used, ostensibly by engineers, to solve real-world problems. Model creation and Model use both require that these humans Understand the problem domain, the problem at hand, the previously known shared Models available, and how to design and use Models. A Ph.D. degree could be seen as a formal license to create new Models[2]. Mathematics can be seen as a discipline for Model manipulation.

But now, by avoiding the use of human-made Models and switching to Holistic Methods, data scientists, programmers, and others do not themselves have to Understand the problems they are given. They are no longer asked to provide a computer program or to otherwise solve a problem in a traditional Reductionist or scientific way. Holistic Systems like DNNs can provide solutions to many problems by first learning about the domain from data and solved examples, and then, in production, matching new situations to this gathered experience. These matches are guesses, but with sufficient learning the results can be highly reliable.
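To make the learn-then-match pattern concrete, here is a minimal sketch. The dataset and the k-nearest-neighbors model are illustrative choices, not anything specified in the text; the point is only the two phases, learning from solved examples and then matching new situations against them:

```python
# Minimal sketch of "learn from solved examples, then match new
# situations". Dataset and model choice are illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)               # solved examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)                       # learning phase

# Production phase: each prediction is a guess made by matching the
# new input against gathered experience; with enough examples the
# guesses become highly reliable.
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2%}")
```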

In an advance they consider a breakthrough in computational chemistry research, University of Wisconsin–Madison chemical engineers have developed a model of how catalytic reactions work at the atomic scale. This understanding could allow engineers and chemists to develop more efficient catalysts and tune industrial processes, potentially with enormous energy savings, given that 90% of the products we encounter in our lives are produced, at least partially, via catalysis.

Catalyst materials accelerate chemical reactions without undergoing changes themselves. They are critical for refining petroleum products and for manufacturing pharmaceuticals, plastics, food additives, fertilizers, green fuels, industrial chemicals and much more.
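To see why a catalyst's acceleration matters so much, here is a minimal sketch using the Arrhenius equation; the activation energies and temperature below are illustrative values, not numbers from the UW–Madison study:

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)).
# A catalyst accelerates a reaction by lowering the activation
# energy Ea. All numbers below are illustrative.
R = 8.314            # gas constant, J/(mol*K)
T = 500.0            # temperature, K

def rate_constant(Ea, A=1e13):
    """Arrhenius rate constant for activation energy Ea in J/mol."""
    return A * math.exp(-Ea / (R * T))

k_uncat = rate_constant(150e3)   # uncatalyzed: Ea = 150 kJ/mol
k_cat = rate_constant(100e3)     # catalyzed:   Ea = 100 kJ/mol
print(f"speedup from lowering Ea by 50 kJ/mol: ~{k_cat / k_uncat:.1e}x")
# ~1.7e+05x
```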

Scientists and engineers have spent decades fine-tuning catalytic reactions, yet because it's currently impossible to directly observe those reactions at the temperatures and pressures often involved in industrial-scale catalysis, they haven't known exactly what takes place on the nano and atomic scales. This new research helps unravel that mystery, with potentially major ramifications for industry.

Researchers have discovered that in the exotic conditions of the early universe, waves of gravity may have shaken space-time so hard that they spontaneously created radiation.

The physical concept of resonance surrounds us in everyday life. When you're sitting on a swing and want to go higher, you naturally start pumping your legs back and forth. You very quickly find exactly the right rhythm to make the swing go higher; if you go off rhythm, the swing stops gaining height. This particular kind of phenomenon is known in physics as parametric resonance.

Your legs act as an external pumping mechanism. When they match the resonant frequency of the system, in this case your body sitting on a swing, they are able to transfer energy to the system, making the swing go higher.
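Here is a minimal numerical sketch of that on-rhythm versus off-rhythm effect, with the swing reduced to a Mathieu-type oscillator whose stiffness is "pumped" periodically. All parameters are illustrative; the key point is that pumping at twice the natural frequency (as your legs do, twice per swing cycle) makes the amplitude grow, while an off-rhythm pump does not:

```python
import numpy as np

# Parametric resonance toy model (Mathieu equation):
#   x'' + w0^2 * (1 + eps * cos(pump_freq * t)) * x = 0
# Pumping the "stiffness" at 2*w0 makes the amplitude grow.
w0, eps, dt = 1.0, 0.2, 0.001
times = np.arange(0.0, 60.0, dt)

def peak_amplitude(pump_freq):
    x, v = 0.01, 0.0                 # small initial push
    peak = 0.0
    for t in times:
        a = -w0**2 * (1.0 + eps * np.cos(pump_freq * t)) * x
        v += a * dt                  # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(f"on rhythm  (pump at 2.0*w0): peak = {peak_amplitude(2.0 * w0):.3f}")
print(f"off rhythm (pump at 1.3*w0): peak = {peak_amplitude(1.3 * w0):.3f}")
```

Run it and the on-rhythm oscillator's amplitude grows by an order of magnitude or more, while the off-rhythm one stays near its starting amplitude, which is exactly the swing-set experience.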

Origami robots are autonomous machines that are constructed by folding two-dimensional materials into complex, functional three-dimensional structures. These robots are highly versatile. They can be designed to perform a wide range of tasks, from manipulating small objects to navigating difficult terrain. Their compact size and flexibility allow them to move in ways that traditional robots cannot, making them ideal for use in environments that are hard to reach.

Another notable feature of origami-based robots is their low cost. Because they are constructed using simple materials and techniques, they can be produced relatively inexpensively. This makes them an attractive option for many researchers and companies looking to develop new robotics applications.

There are many potential applications for origami robots. They could be used in search and rescue missions, where their small size and flexibility would allow them to navigate through rubble and debris. They could also be used in manufacturing settings, where their ability to manipulate small objects could be put to use in assembly lines.

Retrocausality, a mind-bending quantum concept, proposes that future events can influence the past, challenging time's traditional flow and suggesting interconnected temporal relationships. Can the universe communicate with its past self?

00:00 What is Retrocausality?
00:55 The Layers of the Universe.
02:17 The Universe Is Not Real.
04:32 The Role of Quantum Entanglement.
08:02 Does Time Travel Explain the Mysteries of the Universe?

As chatbots like ChatGPT bring his work to widespread attention, we spoke to Hinton about the past, present and future of AI.

CBS Saturday Morning’s Brook Silva-Braga interviewed him at the Vector Institute in Toronto on March 1, 2023.

The growing presence of Russian submarines off the coast of the United States has sparked Cold War comparisons from military observers and a retired NATO admiral.

Russian President Vladimir Putin has been set on expanding Russia’s underwater capabilities. Over the past several years, Moscow has been producing a series of submarines capable of reaching the most critical targets in the U.S. or continental Europe.