The public perceives OpenAI’s ChatGPT as revolutionary, but the same techniques are being used and the same kind of work is going on at many research labs, says the deep learning pioneer.
Category: robotics/AI
The concept of emergence is controversial to some. Eliezer Yudkowsky, for example, who favors reductionism, wrote a critique at LessWrong (see link below). Do reductionists often dismiss emergence?
Ben formalized emergence in his book ‘The Evolving Mind’ as patterns that appear when two or more things are put together but that are not present in any of the individual parts.
Book — ‘The Evolving Mind’: http://www.goertzel.org/books/mind/contents.html | http://www.amazon.com/Evolving-Futures-General-Evolution-Stu…atfound-20
Eliezer’s post on ‘The Futility of Emergence’ at Less Wrong: http://lesswrong.com/lw/iv/the_futility_of_emergence/
Talking points: EMERGENCE & REDUCTION; CONFUSION REGARDING EMERGENCE; LARGE-SCALE NEUROSCIENCE PROJECTS; EMERGENCE AS A WAY OF EXPLAINING AWAY COMPLEXITY; IS CONSCIOUSNESS AN EMERGENT EFFECT?; EMERGENCE & ARTIFICIAL INTELLIGENCE.
The Attack of the Aliens from Vector Space: Steps Toward a Complex Systems Theory of Categorization and Similarity: http://www.goertzel.org/papers/catpap.html (Emergence & Compression) Extract: “The important concept of emergent pattern is defined: a pattern emerges between two entities if it is present in the combination of the two entities, but not in either of the entities separately. And the structural complexity of an entity is defined as the ‘total amount’ of pattern in it. If the Metapattern is accepted, then these two concepts become essential to any analysis of biological reality.
We turn from these abstractions to a concrete biological example: the mammalian immune system. The theory of clonal selection states that immune systems evolve by natural selection; using the computer simulations of Alan Perelson, Rob de Boer and their colleagues as a guide, we inquire as to the exact nature of this evolution.”
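That compression-flavored definition is easy to make concrete. The sketch below is a loose illustration, not Goertzel’s actual formalism: it uses zlib compressibility as a crude proxy for ‘amount of pattern’, so a pattern emerges between two entities when their combination compresses better than the parts do separately. The function names and the choice of zlib are my assumptions.

```python
import os
import zlib

def complexity(data: bytes) -> int:
    """Crude proxy for structural complexity: length of the compressed form."""
    return len(zlib.compress(data, 9))

def emergent_pattern(x: bytes, y: bytes) -> int:
    """Pattern present in the combination of x and y but not in either alone."""
    return complexity(x) + complexity(y) - complexity(x + y)

a = os.urandom(1000)   # an incompressible 'entity' with no internal pattern
b = a                  # a second entity identical to the first
c = os.urandom(1000)   # an unrelated entity, for contrast

print("related pair:  ", emergent_pattern(a, b))  # large: the repetition is a
                                                  # pattern only the whole contains
print("unrelated pair:", emergent_pattern(a, c))  # near zero: no shared pattern
```

The unrelated pair still scores slightly above zero because each separate compression pays its own stream overhead; what matters for the illustration is the order-of-magnitude gap between the two cases.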
Researchers have developed a robot that brings speed, agility and reproducibility to laboratory-scale coin cell batteries.
Until now, laboratories studying battery technology have had to choose between the freedom to iterate and optimise battery chemistry by manually assembling each individual cell, and the reproducibility and speed of large-scale production. AutoBass (Automated battery assembly system), the first laboratory-scale coin cell assembly robot of its kind, is designed to bridge this gap.
Developed by a team from Helmholtz Institute Ulm and Karlsruhe Institute of Technology in Germany, AutoBass promises to improve characterisation of coin cell batteries and promote reproducibility by photographing each individual cell at key points in the assembly process. It produces batches of 64 cells a day.
Polymorphic malware could be created with ease using ChatGPT. With relatively little effort or expense on the attacker’s part, such malware can readily elude security tools and make mitigation difficult.
Polymorphic malware is malicious software that can alter its own source code to avoid detection by antivirus tools. This makes it a potent threat: it can mutate and propagate before security systems catch up.
According to the researchers, the first step is getting around the content filters that prevent the chatbot from producing dangerous software. They instructed the bot to complete the task while adhering to a number of constraints, and received working code as the output.
Google Research spinout Osmo wants to find substitutes for hard-to-source aromas. The tech could inspire new perfumes—and help combat mosquito-borne diseases.
ChatGPT, the powerful new AI chatbot tool that interacts with users in an eerily convincing and conversational way, is being met with mixed reactions. CNN’s Vanessa Yurkevich looks at the risks and benefits of OpenAI’s ambitious project.
Use ChatGPT to Create INSANE Wealth.
In this video, Dr. Jordan Peterson discusses the recent release of GPT (Generative Pre-trained Transformer), a large language model released about a week earlier. He explains that the system is trained on a massive corpus of spoken and written text, which allows it to derive models of the world from the analysis of human language. He notes that the technology is still in its early stages but is expected to advance rapidly over the next year. Dr. Peterson also shares examples of how GPT has been used, including writing essays, bullet points, and computer code, as well as grading papers and creating character descriptions and images. He concludes that GPT is already smarter than most people, that it will be far smarter within a few years, and advises the audience to be prepared for this technological revolution.
Posted in innovation, robotics/AI
To be clear: I’m not criticizing OpenAI’s work nor their claims.
I’m trying to correct a *perception* by the public & the media who see ChatGPT as this incredibly new, innovative, & unique technological breakthrough that is far ahead of everyone else.
It’s just not.
The brain is often regarded as a soft-matter chemical computer, but the way it processes information is very different to that of conventional silicon circuits. Three groups now describe chemical systems capable of storing information in a manner that resembles the way that neurons communicate with one another at synaptic junctions. Such ‘neuromorphic’ devices could provide very-low-power computation and act as interfaces between conventional electronics and ‘wet’ chemical systems, potentially including neurons and other living cells themselves.
At a synapse, the electrical pulse or action potential that travels along a neuron triggers the release of neurotransmitter molecules that bridge the junction to the next neuron, altering the state of the second neuron by making it more or less likely to fire its own action potential. If one neuron repeatedly influences another, the connection between them may become strengthened. This is how information is thought to become imprinted as a memory, a process called Hebbian learning. The ability of synapses to adjust their connectivity in response to input signals is called plasticity, and in neural networks it typically happens on two timescales. Short-term plasticity (STP) creates connectivity patterns that fade quite fast and are used to filter and process sensory signals, while long-term plasticity (LTP, also called long-term potentiation) imprints more long-lived memories. Both biological processes are still imperfectly understood.
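As a rough feel for those two timescales, here is a toy model (my own construction, not anything from the papers discussed) in which a Hebbian term drives a fast-decaying short-term trace that slowly consolidates into a long-term one:

```python
# Toy two-timescale Hebbian plasticity: an assumed illustration, not a
# biophysical model. Both neurons fire together for 50 steps, then go quiet.
eta = 0.5       # Hebbian learning rate
tau_stp = 20.0  # short-term trace decays over ~20 time steps
w_stp = 0.0     # short-term plasticity (STP) trace: filters transient input
w_ltp = 0.0     # long-term plasticity (LTP) trace: stores lasting memory

for step in range(200):
    pre = post = 1.0 if step < 50 else 0.0       # coincident pre/post activity
    w_stp += eta * pre * post - w_stp / tau_stp  # Hebbian growth, fast decay
    w_ltp += 0.01 * w_stp                        # sustained STP consolidates
    if step in (49, 199):
        print(f"step {step:3d}: STP = {w_stp:6.3f}, LTP = {w_ltp:6.3f}")

# By step 199 the STP trace has decayed back toward zero, while the LTP
# trace persists: neurons that fired together stay wired together.
```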
Neuromorphic circuits that display such learning behaviour have been developed previously using solid-state electronic devices called memristors, two-terminal devices in which the relationship between the current that passes through and the voltage applied depends on the charge that passed through previously. Memristors may retain this memory even when no power is applied – they are ‘non-volatile’ – meaning that neuromorphic circuits can potentially process information with very low power consumption, a feature crucial to the way our brains can function without overheating. Typically, memristor behaviour manifests as a current–voltage relationship on a loop, and the response varies depending on whether the voltage is increasing or decreasing: a property called hysteresis, which itself represents a kind of memory as the device behaviour is contingent on its history.
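To make the pinched-hysteresis picture tangible, here is a minimal simulation using the textbook linear ion-drift memristor model (the Strukov/HP form, not a model of the chemical devices described here); the parameter values are illustrative choices, picked so the loop is visible at a 1 Hz drive.

```python
import numpy as np

# Linear ion-drift memristor, illustrative parameters only.
R_on, R_off = 100.0, 16e3     # fully-on / fully-off resistances (ohms)
k = 1e4                       # state-update coefficient mu*R_on/D^2 (1/coulomb)

dt = 1e-4                     # time step (s)
t = np.arange(0.0, 1.0, dt)   # one second = one period of the drive
v = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz, 1 V sinusoidal voltage

x = 0.1                       # normalised internal state in [0, 1]
i = np.empty_like(t)
for n, vn in enumerate(v):
    M = R_on * x + R_off * (1.0 - x)            # memristance set by past charge
    i[n] = vn / M
    x = float(np.clip(x + k * i[n] * dt, 0.0, 1.0))

# Plotting i against v (e.g. with matplotlib) traces a pinched hysteresis
# loop: current differs on the rising and falling branches because the
# device state (and hence its resistance) depends on its history.
```

Because the internal state is driven by the accumulated charge, the loop collapses toward a single curve as the drive frequency rises, which is another standard signature of memristive behaviour.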
Gautam Adani, the world’s third richest person, is apparently hooked on OpenAI’s ChatGPT. He said so himself, in a post-Davos blog post on LinkedIn.
ChatGPT “was the buzzword at this year’s event,” Adani wrote in the post, caveating that he “must admit to some addiction since I started using it.”