
It’s called the Einstein Probe and it’s meant to observe the changing universe.

China has ambitious plans to launch a new X-ray astronomy satellite called the Einstein Probe (EP) at the end of this year, according to a report by China Daily.

“The satellite has entered the final stage of development,” according to remarks reported from the recent 35th National Symposium on Space Exploration.

EP will pursue several science objectives, including capturing the first light from supernova explosions, helping to search for the sources of gravitational waves, and observing transient phenomena in the universe.


Color me surprised… a new bipartisan House caucus on longevity.


The longevity field has gained powerful allies on Capitol Hill with the creation of the bipartisan Congressional Caucus for Longevity Science. We had the opportunity to put questions to one of its co-chairs.

The fight against aging must become one of humanity’s main priorities if we want to see meaningful progress on a global scale. This requires recruiting allies among politicians and other decision makers.

Recently, a major step in that direction was made. Reps. Gus Bilirakis (R-FL) and Paul Tonko (D-NY) announced the formation of the Congressional Caucus for Longevity Science. According to the press release, the caucus “aims to educate Members about the growing field of aging and longevity biotechnology, and promote initiatives aimed at increasing the healthy average lifespan of all Americans.”

The Wow! signal is a radio signal detected by astronomer Jerry R. Ehman on August 15, 1977, while he was analyzing data from Ohio State University’s Big Ear radio telescope.

When Ehman spotted the signal, he was so struck by it that he wrote “Wow!” in the margin of the computer printout. That note is how the mysterious signal came to be called the Wow! signal.

The signal appeared to come from the Sagittarius constellation, and it lasted for 72 seconds. The signal was unusual because it had a narrow bandwidth, was significantly stronger than background noise, and appeared to come from a fixed point in space.

GPT-4 is here, and you’ve probably heard a good bit about it already. It’s a smarter, faster, more powerful engine for AI programs such as ChatGPT. It can turn a hand-sketched design into a functional website and help with your taxes. It got a 5 on the AP Art History test. There were already fears about AI coming for white-collar work, disrupting education, and so much else, and there was some healthy skepticism about those fears. So where does a more powerful AI leave us?

Perhaps overwhelmed or even tired, depending on your leanings. I feel both at once. It’s hard to argue that new large language models, or LLMs, aren’t a genuine engineering feat, and it’s exciting to experience advancements that feel magical, even if they’re just computational. But nonstop hype around a technology that is still nascent risks grinding people down because being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI’s newest model inevitably sidesteps crucial questions—ones that simply don’t fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we’re still grappling with their still quite novel, but certainly less powerful, predecessors, including ChatGPT?

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

There’s a theory that’s in vogue in astrochemistry called “Assembly Theory.” It posits that highly complex molecules—many acids, for example—could only come from living beings. The molecules are either part of living beings, or they’re things that intelligent living beings manufacture.

If Assembly Theory holds up, we could use it to search for aliens—by scanning distant planets and moons for complex molecules that should be evidence of living beings. That’s the latest idea from Assembly Theory’s originator, University of Glasgow chemist Leroy Cronin. “This is a radical new approach,” Cronin told The Daily Beast.

But not every expert agrees it would work—at least not anytime soon. To take chemical readings of faraway planets, scientists rely on spectroscopy. This is the process of interpreting a planet’s color palette to assess the possible mix of molecules in its atmosphere, land, and oceans.
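
To make the spectroscopy idea a little more concrete, here is a deliberately simplified Python sketch of how a molecule hunt can work in principle. Everything in it is a hypothetical illustration: the band centres are rounded values chosen for the example, the spectrum is synthetic, and real exoplanet analyses fit full atmospheric retrieval models rather than simple threshold checks like this one.

import numpy as np

# Toy illustration only (hypothetical band centres, synthetic data).
# A transiting planet blocks slightly more starlight at wavelengths where a
# molecule in its atmosphere absorbs, so excess transit depth near a known
# absorption band hints that the molecule may be present.

ABSORPTION_BANDS_UM = {      # rounded, illustrative band centres in micrometres
    "H2O": [1.15, 1.4, 1.9],
    "CO2": [2.0, 4.3],
    "CH4": [2.3, 3.3],
}

def candidate_molecules(wavelengths_um, transit_depth, noise, band_half_width=0.1):
    """Flag molecules whose bands coincide with excess transit depth well above the noise."""
    baseline = np.median(transit_depth)
    hits = {}
    for molecule, bands in ABSORPTION_BANDS_UM.items():
        matched = 0
        for center in bands:
            in_band = np.abs(wavelengths_um - center) < band_half_width
            if in_band.any() and transit_depth[in_band].max() - baseline > 5 * noise:
                matched += 1
        if matched:
            hits[molecule] = matched
    return hits

# Synthetic spectrum: flat baseline plus one injected water-like feature at 1.4 microns.
rng = np.random.default_rng(0)
wavelengths = np.linspace(1.0, 5.0, 400)
depth = 0.0100 + rng.normal(0.0, 1e-5, wavelengths.size)
depth[np.abs(wavelengths - 1.4) < 0.05] += 2e-4   # the injected feature

print(candidate_molecules(wavelengths, depth, noise=1e-5))   # expected: {'H2O': 1}

The point of the sketch is only the shape of the inference: a library of known absorption features plus an observed spectrum yields a list of candidate molecules, and the harder question is how confidently such signatures can be tied to life rather than to ordinary chemistry.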

Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. His position was marginal. “I don’t think the university was paying him on a regular basis,” recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years.

From the age of 6, Jaynes had been transfixed by the singularity of conscious experience. Gazing at a yellow forsythia flower, he’d wondered how he could be sure that others saw the same yellow as he did. As a young man, serving three years in a Pennsylvania prison for declining to support the war effort, he watched a worm in the grass of the prison yard one spring, wondering what separated the unthinking earth from the worm and the worm from himself. It was the kind of question that dogged him for the rest of his life, and the book he was working on would grip a generation beginning to ask themselves similar questions.

The Origin of Consciousness in the Breakdown of the Bicameral Mind, when it finally came out in 1976, did not look like a best-seller. But sell it did. It was reviewed in science magazines and psychology journals, Time, The New York Times, and the Los Angeles Times. It was nominated for a National Book Award in 1978. New editions continued to come out, as Jaynes went on the lecture circuit. Jaynes died of a stroke in 1997; his book lived on. In 2000, another new edition hit the shelves. It continues to sell today.

In this video, we explore the groundbreaking AI model CICERO developed by Meta AI. CICERO has achieved human-level performance in the complex natural language strategy game Diplomacy by combining strategic reasoning and natural language processing. Join us as we delve into the inner workings of this AI diplomat and discover how it outwits humans in the game of Diplomacy.

https://ai.facebook.com/research/cicero/
https://ai.facebook.com/blog/cicero-ai-negotiates-persuades-…th-people/

#ai #artificialintelligence #meta #cicero

Within a day of being unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

On Tuesday, OpenAI announced the next-generation version of the artificial intelligence technology that underpins its viral chatbot tool, ChatGPT. The more powerful GPT-4 promises to blow previous iterations out of the water, potentially changing the way we use the internet to work, play and create. But it could also add to challenging questions around how AI tools can upend professions, enable students to cheat, and shift our relationship with technology.

GPT-4 is an updated version of the company’s large language model, which is trained on vast amounts of online data to generate complex responses to user prompts. It is now available via a waitlist and has already made its way into some third-party products, including Microsoft’s new AI-powered Bing search engine. Some users with early access to the tool are sharing their experiences and highlighting some of its most compelling use cases.