
GPT-4 is here, and you’ve probably heard a good bit about it already. It’s a smarter, faster, more powerful engine for AI programs such as ChatGPT. It can turn a hand-sketched design into a functional website and help with your taxes. It got a 5 on the AP Art History test. There were already fears about AI coming for white-collar work, disrupting education, and so much else, and there was some healthy skepticism about those fears. So where does a more powerful AI leave us?

Perhaps overwhelmed or even tired, depending on your leanings. I feel both at once. It’s hard to argue that new large language models, or LLMs, aren’t a genuine engineering feat, and it’s exciting to experience advancements that feel magical, even if they’re just computational. But nonstop hype around a technology that is still nascent risks grinding people down, because being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI’s newest model inevitably sidesteps crucial questions—ones that simply don’t fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we’re still grappling with their quite novel, but certainly less powerful, predecessors, including ChatGPT?

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

There’s a theory that’s in vogue in astrochemistry called “Assembly Theory.” It posits that highly complex molecules—many acids, for example—could only come from living beings. The molecules are either part of living beings, or they’re things that intelligent living beings manufacture.

If Assembly Theory holds up, we could use it to search for aliens—by scanning distant planets and moons for complex molecules that should be evidence of living beings. That’s the latest idea from Assembly Theory’s originator, University of Glasgow chemist Leroy Cronin. “This is a radical new approach,” Cronin told The Daily Beast.

But not every expert agrees it would work—at least not anytime soon. To take chemical readings of faraway planets, scientists rely on spectroscopy. This is the process of interpreting a planet’s color palette to assess the possible mix of molecules in its atmosphere, land, and oceans.
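To make that interpretation step concrete, here is a toy sketch of what spectroscopic matching amounts to: comparing absorption features in an observed spectrum against the known signatures of candidate molecules. The wavelengths and tolerance below are illustrative values chosen for the example, not any mission’s actual pipeline.

```python
# Toy illustration of spectroscopic matching: compare absorption dips
# observed in a planet's spectrum against known molecular signatures.
# Signature wavelengths (micrometers) and the tolerance are illustrative.

KNOWN_SIGNATURES = {
    "water":   [1.4, 1.9, 2.7],
    "methane": [1.7, 2.3, 3.3],
    "ozone":   [4.7, 9.6],
}

def candidate_molecules(observed_dips, tolerance=0.05):
    """Return molecules whose every signature line matches some observed dip."""
    matches = []
    for molecule, lines in KNOWN_SIGNATURES.items():
        if all(any(abs(line - dip) <= tolerance for dip in observed_dips)
               for line in lines):
            matches.append(molecule)
    return matches

# Dips extracted from a (hypothetical) exoplanet spectrum:
print(candidate_molecules([1.38, 1.92, 2.68]))  # -> ['water']
```

Real pipelines fit full radiative-transfer models to the spectrum rather than matching line lists, but the underlying logic is the same: the planet’s color palette constrains which molecules can be present.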

Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. His position was marginal. “I don’t think the university was paying him on a regular basis,” recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years.

From the age of 6, Jaynes had been transfixed by the singularity of conscious experience. Gazing at a yellow forsythia flower, he’d wondered how he could be sure that others saw the same yellow as he did. As a young man, serving three years in a Pennsylvania prison for declining to support the war effort, he watched a worm in the grass of the prison yard one spring, wondering what separated the unthinking earth from the worm and the worm from himself. It was the kind of question that dogged him for the rest of his life, and the book he was working on would grip a generation beginning to ask themselves similar questions.

The Origin of Consciousness in the Breakdown of the Bicameral Mind, when it finally came out in 1976, did not look like a best-seller. But sell it did. It was reviewed in science magazines and psychology journals, Time, The New York Times, and the Los Angeles Times. It was nominated for a National Book Award in 1978. New editions continued to come out, as Jaynes went on the lecture circuit. Jaynes died of a stroke in 1997; his book lived on. In 2000, another new edition hit the shelves. It continues to sell today.

In this video, we explore the groundbreaking AI model CICERO developed by Meta AI. CICERO has achieved human-level performance in the complex natural language strategy game Diplomacy by combining strategic reasoning and natural language processing. Join us as we delve into the inner workings of this AI diplomat and discover how it outwits humans in the game of Diplomacy.

https://ai.facebook.com/research/cicero/
https://ai.facebook.com/blog/cicero-ai-negotiates-persuades-…th-people/

#ai #artificialintelligence #meta #cicero

In the first day after it was unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams, and build a working website from a hand-drawn sketch.

On Tuesday, OpenAI announced the next-generation version of the artificial intelligence technology that underpins its viral chatbot tool, ChatGPT. The more powerful GPT-4 promises to blow previous iterations out of the water, potentially changing the way we use the internet to work, play and create. But it could also add to challenging questions around how AI tools can upend professions, enable students to cheat, and shift our relationship with technology.

GPT-4 is an updated version of the company’s large language model, which is trained on vast amounts of online data to generate complex responses to user prompts. It is now available via a waitlist and has already made its way into some third-party products, including Microsoft’s new AI-powered Bing search engine. Some users with early access to the tool are sharing their experiences and highlighting some of its most compelling use cases.

So, a 22 percent increase—roughly the equivalent of a 120-year-old person—which means that if this translated literally to people, it would amount to maxing out our current lifespan. The rest is just a rundown of Aubrey’s experiment.
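The back-of-the-envelope arithmetic behind that claim can be sketched as follows. The 1,464 days is from the experiment; the ~1,200-day baseline for the longest-lived Sprague-Dawley rats and the ~100-year human maximum are rough assumptions used only to illustrate the calculation.

```python
# Rough arithmetic behind the "22% increase" claim. The baseline of
# ~1,200 days for the longest-lived Sprague-Dawley rats is an assumption
# used here only to illustrate the calculation.
sima_days = 1464
baseline_record_days = 1200            # assumed prior record lifespan
increase = sima_days / baseline_record_days - 1
print(f"{increase:.0%}")               # -> 22%

# Scaling the same relative gain to an assumed ~100-year human maximum:
human_equivalent = 100 * (1 + increase)
print(f"{human_equivalent:.0f} years")  # -> 122 years
```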


Dr. Katcher’s lifespan experiment has come to an end, as the last remaining rat, Sima, has died. She was 1,464 days old, which is a record for Sprague-Dawley rats. We also talk about the exciting Robust Mouse Rejuvenation project at the LEV Foundation.

Longevity Escape Velocity Foundation projects web page: https://www.levf.org/projects

Designing, building, and launching a spacecraft is hugely expensive. That’s why NASA missions to Mars are designed with the hope that they’ll last as long as possible: the famous Opportunity rover, for example, was supposed to last for 90 days and managed to keep going for 15 years. The longer a mission can keep running, the more data it can collect, and the more we can learn from it.

That’s true for the orbiters that travel around Mars as well as the rovers that explore its surface. The Mars Odyssey spacecraft, launched in 2001, has been in orbit around Mars for more than 20 years. But the orbiter can’t keep going forever; it will eventually run out of fuel, so figuring out exactly how much fuel is left is important. That turned out to be more complicated than NASA’s engineers were expecting.

Odyssey started out with nearly 500 pounds of hydrazine fuel, though last year it looked as if the spacecraft was running much lower on fuel than had been predicted.
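The excerpt doesn’t spell out the method, but one general way to gauge propellant in zero gravity is thermal: switch on the tank heaters and time how quickly the tank warms, since a tank holding more hydrazine has more thermal mass and warms more slowly. Below is a minimal sketch of that idea; the lumped heat-capacity model and every number in it are illustrative assumptions, not Odyssey’s actual figures.

```python
# Minimal sketch of thermal propellant gauging: a tank holding more
# hydrazine has more thermal mass, so it warms more slowly for the same
# heater power. All values below are illustrative assumptions.

HEATER_POWER_W = 20.0                 # assumed heater power, watts
TANK_HEAT_CAPACITY_J_PER_K = 5000.0   # assumed empty-tank heat capacity
HYDRAZINE_SPECIFIC_HEAT = 3080.0      # J/(kg*K), approximate

def estimate_fuel_kg(warming_rate_k_per_s):
    """Infer fuel mass from how fast the tank warms under a known heater.

    Lumped model: power = (C_tank + m_fuel * c_hydrazine) * dT/dt.
    """
    total_heat_capacity = HEATER_POWER_W / warming_rate_k_per_s
    fuel_heat_capacity = total_heat_capacity - TANK_HEAT_CAPACITY_J_PER_K
    return max(fuel_heat_capacity, 0.0) / HYDRAZINE_SPECIFIC_HEAT

# A slower warming rate implies more remaining fuel:
print(estimate_fuel_kg(0.0005))  # ~11.4 kg under these assumed values
```

An estimate like this is only as good as the thermal model behind it, which is one reason such gauging can disagree with bookkeeping based on logged thruster firings—and why the remaining-fuel question proved harder than expected.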