Researchers have developed a robot that brings speed, agility and reproducibility to laboratory-scale coin cell batteries.

Until now, laboratories studying battery technology have had to choose between the freedom to iterate and optimise battery chemistry by manually assembling each individual cell, and the reproducibility and speed of large-scale production. AutoBass (Automated battery assembly system), the first laboratory-scale coin cell assembly robot of its kind, is designed to bridge this gap.

Developed by a team from Helmholtz Institute Ulm and Karlsruhe Institute of Technology in Germany, AutoBass promises to improve characterisation of coin cell batteries and promote reproducibility by photographing each individual cell at key points in the assembly process. It produces batches of 64 cells a day.

Polymorphic malware could easily be created using ChatGPT. With relatively little effort or expense on the part of the attacker, such malware can readily evade security tools and make mitigation difficult.

Polymorphic malware is malicious software that can alter its own source code in order to avoid detection by antivirus tools. It is a particularly potent threat because it can mutate and propagate before security systems catch up with it.

According to the researchers, the first step is getting around the content filters that prevent the chatbot from producing dangerous software. They instructed the bot to complete the task while adhering to a number of constraints, and it returned working code as a result.

Use CHAT GPT to Create INSANE Wealth.
In this video, Dr. Jordan Peterson discusses the recent release of GPT (Generative Pre-trained Transformer), a large language model released about a week earlier. He explains that the AI system is trained on a massive corpus of spoken and written text, which allows it to derive models of the world from the analysis of human speech. He notes that the technology is still in its early stages but is expected to advance rapidly in the next year. Dr. Peterson also shares some examples of how GPT has been used, including writing essays, bullet points and computer code, as well as grading papers and creating character descriptions and images. He concludes that GPT is already smarter than most people and will be a lot smarter in the next few years, and he advises the audience to be prepared for this technological revolution.

To be clear: I’m not criticizing OpenAI’s work nor their claims.

I’m trying to correct a *perception* by the public & the media who see chatGPT as this incredibly new, innovative, & unique technological breakthrough that is far ahead of everyone else.

It’s just not.

The brain is often regarded as a soft-matter chemical computer, but the way it processes information is very different to that of conventional silicon circuits. Three groups now describe chemical systems capable of storing information in a manner that resembles the way that neurons communicate with one another at synaptic junctions. Such ‘neuromorphic’ devices could provide very-low-power computation and act as interfaces between conventional electronics and ‘wet’ chemical systems, potentially including neurons and other living cells themselves.

At a synapse, the electrical pulse or action potential that travels along a neuron triggers the release of neurotransmitter molecules that bridge the junction to the next neuron, altering the state of the second neuron by making it more or less likely to fire its own action potential. If one neuron repeatedly influences another, the connection between them may become strengthened. This is how information is thought to become imprinted as a memory, a process called Hebbian learning. The ability of synapses to adjust their connectivity in response to input signals is called plasticity, and in neural networks it typically happens on two timescales. Short-term plasticity (STP) creates connectivity patterns that fade quite fast and are used to filter and process sensory signals, while long-term plasticity (LTP, also called long-term potentiation) imprints more long-lived memories. Both biological processes are still imperfectly understood.
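To make the distinction concrete, the toy calculation below sketches a rate-based Hebbian update with a fast-fading and a persistent weight component. It is purely illustrative and not taken from any of the work described here; the variable names (`w_short`, `w_long`) and all parameter values are assumptions chosen for the example.

```python
import numpy as np

# Toy illustration of Hebbian learning with two plasticity timescales.
# w_short decays quickly (short-term plasticity, STP);
# w_long accumulates slowly and persists (long-term potentiation, LTP).
# All names and values here are illustrative assumptions.

rng = np.random.default_rng(0)

eta_short, eta_long = 0.5, 0.02   # learning rates for the two components
tau_short = 5.0                   # decay time constant (in steps) for the fast trace
w_short, w_long = 0.0, 0.0

for t in range(100):
    pre = 1.0 if t < 50 else 0.0                             # input active, then silent
    post = 1.0 if pre > 0 and rng.random() < 0.8 else 0.0    # noisy postsynaptic response

    hebb = pre * post                                        # Hebbian coincidence term
    w_short += eta_short * hebb - w_short / tau_short        # fast, fading trace
    w_long += eta_long * hebb                                # slow, persistent trace

    if t % 20 == 0:
        print(f"t={t:3d}  w_short={w_short:.3f}  w_long={w_long:.3f}")
```

Running the loop shows the short-term trace rising while the input is active and fading once it stops, whereas the long-term trace keeps the value it accumulated, mirroring the STP/LTP distinction described above.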

Neuromorphic circuits that display such learning behaviour have been developed previously using solid-state electronic devices called memristors, two-terminal devices in which the relationship between the current that passes through and the voltage applied depends on the charge that passed through previously. Memristors may retain this memory even when no power is applied – they are ‘non-volatile’ – meaning that neuromorphic circuits can potentially process information with very low power consumption, a feature crucial to the way our brains can function without overheating. Typically, memristor behaviour manifests as a current–voltage relationship on a loop, and the response varies depending on whether the voltage is increasing or decreasing: a property called hysteresis, which itself represents a kind of memory as the device behaviour is contingent on its history.
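That history dependence can be made concrete with a rough numerical sketch based on the widely used linear dopant-drift (HP-style) memristor model; it is not drawn from the work described here, and all parameter values below are arbitrary choices for illustration. Driving the device with a sinusoidal voltage, the same applied voltage produces a different current on the way up than on the way down, because the internal state has changed in the meantime.

```python
import numpy as np

# Simplified memristor sketch (illustrative only): the resistance interpolates
# between R_on and R_off according to an internal state x in [0, 1] that
# integrates the charge that has flowed through the device -- its "memory".

R_on, R_off = 100.0, 16_000.0    # ohms (illustrative values)
mu_factor = 1e4                  # lumped state-change rate, mu_v * R_on / D**2 (illustrative)
dt = 1e-4                        # time step, seconds

x = 0.1                          # initial internal state
t = np.arange(0.0, 1.0, dt)
v = np.sin(2 * np.pi * 2.0 * t)  # 2 Hz, 1 V amplitude sinusoidal drive

currents = []
for vt in v:
    R = R_on * x + R_off * (1.0 - x)    # state-dependent resistance
    i = vt / R                          # Ohm's law at this instant
    x += mu_factor * i * dt             # state drifts with the charge passed
    x = min(max(x, 0.0), 1.0)           # keep the state physically bounded
    currents.append(i)

# The drive passes +0.5 V once while rising and once while falling in the
# first positive half-cycle; the currents differ because x has moved on.
print(f"I at +0.5 V (voltage rising) : {currents[417]:.3e} A")
print(f"I at +0.5 V (voltage falling): {currents[2083]:.3e} A")
```

Plotting current against voltage for the full run would trace the pinched hysteresis loop described above; here, the two printed currents at the same applied voltage already show that the device's response depends on what has flowed through it before.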

The academic community is growing increasingly concerned about students using ChatGPT for less than honest purposes: the chatbot has been found capable not only of writing essays for high-school students but also of passing some exams, including parts of those used to license doctors and grant MBAs.

In two new papers posted on preprint servers, a team and an independent researcher separately tested ChatGPT’s ability to take and pass exams. In the first, a team with members from AnsibleHealth, Inc., Brown University and OpenAI, Inc. describes testing how well ChatGPT could do on the United States Medical Licensing Examination (USMLE); the results were posted on the medRxiv preprint server.

In the second, Christian Terwiesch, the Andrew M. Heller Professor at the Wharton School of the University of Pennsylvania, posted a paper on Wharton’s preprint site describing how he tested the chatbot’s performance on the final exam of a typical Operations Management MBA core course and what he found.