Olaf Stapledon’s Cosmology of Peace
Posted in cosmology
In his science fiction classic Star Maker, he imagines a way to overcome fascism on a galactic scale.
Artificial intelligence has entered our daily lives. First, it was ChatGPT. Now, it’s AI-generated pizza and beer commercials. While we can’t trust AI to be perfect, it turns out that sometimes we can’t trust ourselves with AI either.
Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo has found that scientists using popular computational tools to interpret AI predictions are picking up too much “noise,” or extra information, when analyzing DNA. And he’s found a way to fix this. Now, with just a couple new lines of code, scientists can get more reliable explanations out of powerful AIs known as deep neural networks. That means they can continue chasing down genuine DNA features. Those features might just signal the next breakthrough in health and medicine. But scientists won’t see the signals if they’re drowned out by too much noise.
So, what causes the meddlesome noise? It comes from a mysterious, invisible source, a kind of digital “dark matter.” Physicists and astronomers believe most of the universe is filled with dark matter, a material that exerts gravitational effects but that no one has yet seen. Similarly, Koo and his team discovered that the data AI is being trained on lacks critical information, leading to significant blind spots. Even worse, those blind spots get factored in when interpreting AI predictions of DNA function. The study is published in the journal Genome Biology.
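The “couple new lines of code” refers to correcting gradient-based attribution maps. A minimal sketch of that idea, assuming a zero-mean correction across the four nucleotide channels (the function name and toy data here are illustrative, not from the paper): because DNA is one-hot encoded, gradient components pointing “off” the set of valid one-hot inputs carry no usable signal, and removing the per-position mean strips that noise.

```python
import numpy as np

def correct_gradients(grads):
    """Remove off-simplex gradient noise for one-hot DNA inputs.

    grads: array of shape (sequence_length, 4), holding attribution
    scores for the four nucleotide channels (A, C, G, T) at each
    position. Subtracting the per-position mean across channels
    removes the gradient component that one-hot data can never
    realize, leaving the part that distinguishes nucleotides.
    """
    return grads - grads.mean(axis=-1, keepdims=True)

# Toy example: a uniform offset across all four channels says nothing
# about which nucleotide matters, so the correction zeroes it out.
raw = np.array([[0.9, 0.1, 0.1, 0.1],
                [0.5, 0.5, 0.5, 0.5]])
corrected = correct_gradients(raw)
```

After correction, the second position (a uniform offset) contributes nothing, while the first position keeps its preference for the first nucleotide.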
The sex of human and other mammal babies is decided by a male-determining gene on the Y chromosome. But the human Y chromosome is degenerating and may disappear in a few million years, leading to our extinction unless we evolve a new sex gene.
The good news is two branches of rodents have already lost their Y chromosome and have lived to tell the tale.
A recent paper in Proceedings of the National Academy of Sciences shows how the spiny rat has evolved a new male-determining gene.
Physicists have achieved a significant milestone in the world of quantum physics by recreating the famous double-slit experiment in time.
A polymorphic defense, backed by a hyperintelligence that could always adapt to rapid malware changes, would be needed, much like sending Vision to counter the Ultron threat in the Avengers films. Another scenario is a ChatGPT-based defensive antivirus that runs locally, like the antivirus tools we have today. The dark side of this AI is still a “chaos” ChatGPT that is always changing, not just polymorphically but in every way; even so, an AI cyberdefense would lower this threat.
Mutating, or polymorphic, malware can be built using the ChatGPT API at runtime to effect advanced attacks that can evade endpoint detection and response (EDR) applications.
To sum up: Chaos-GPT poses a serious threat to current cyberdefenses, but there is also great promise in a god-like AI that could be a powerful force for good. This could escalate into a larger AI arms race demanding even stronger security measures, such as an AI capable of countering state-level or nation-level threats. I think this threat will arrive regardless as we push toward AGI, but ChatGPT also shows promising results: it could serve as such a god-like AI, wielded by human coders as well as AI coders.
Chaos-GPT, an autonomous implementation of ChatGPT, has been unveiled, and its objectives are as terrifying as they are well-structured.
Perhaps not surprisingly, the AI was the most helpful for the least-skilled workers and those who had been with the company for the shortest time. Meanwhile, the highest-skilled and most experienced agents didn’t benefit much from using the AI. This makes sense, since the tool was trained on conversations from these workers; they already know what they’re doing.
“High-skilled workers may have less to gain from AI assistance precisely because AI recommendations capture the knowledge embodied in their own behaviors,” said study author Erik Brynjolfsson, director of the Stanford Digital Economy Lab.
The AI enabled employees with only two months of experience to perform as well as those who’d been in their roles for six months. That’s some serious skill acceleration. But is it “cheating”? Are the employees using the AI skipping over valuable first-hand training, missing out on learning by doing? Would their skills grind to a halt if the AI were taken away, since they’ve been repeating its suggestions rather than thinking through responses on their own?
Is there anything ChatGPT can’t do? Yes, of course, but the list appears to be getting smaller and smaller. Now, researchers have used the large language model to help them design and construct a tomato-picking robot.
Large language models (LLMs) can process and internalize huge amounts of text data, using this information to answer questions. OpenAI’s ChatGPT is one such LLM.
In a new case study, researchers from the Delft University of Technology in the Netherlands and the Swiss Federal Institute of Technology (EPFL) enlisted the help of ChatGPT-3 to design and construct a robot, which might seem strange considering that ChatGPT is a language model.
Recently, the theory of Hawking radiation from a black hole has been tested in several analogue platforms. Shi et al. report a fermionic-lattice realization of an analogue black hole using a chain of superconducting transmon qubits with tuneable couplers, and demonstrate stimulated Hawking radiation.