So, you’ve set aside a chunk of change to build a new gaming PC and are just waiting for AMD and Nvidia to launch their next-gen GPUs, is that it? A solid plan, except for one thing—your next build is already obsolete. That’s because whatever you spec’d out is undoubtedly sitting on an AMD or Intel foundation, and didn’t you hear, x86 computing is basically dead. Finished. Kaput. We’re on the cusp of the end of an era, and all because Apple is dumping Intel for ARM.

Okay, maybe not, but that’s essentially the case made by Jean-Louis Gassée, a former Apple executive who led the development of Mac computers in the late 1980s. In no uncertain terms, he says Apple’s decision to phase out Intel CPUs in favor of its own silicon based on ARM will force “PC OEMs to reconsider their allegiance to x86 silicon…and that will have serious consequences for the old Wintel partnership.”

Ancient Egyptians used hieroglyphs over four millennia ago to engrave and record their stories. Today, only a select group of people know how to read or interpret those inscriptions.

To read and decipher the ancient hieroglyphic writing, researchers and scholars have been using the Rosetta Stone, an irregularly shaped black granite stone.

In 2017, game developer Ubisoft launched an initiative to use AI and machine learning to understand the written language of the Pharaohs.

Queen’s University researchers uncover brain-based marker of new thoughts and discover we have more than 6,000 thoughts each day.

Researchers at Queen’s University have established a method that, for the first time, can detect indirectly when one thought ends and another begins. Dr. Jordan Poppenk (Psychology) and his master’s student, Julie Tseng, devised a way to isolate “thought worms,” consisting of consecutive moments when a person is focused on the same idea. This research was recently published in Nature Communications.

“What we call thought worms are adjacent points in a simplified representation of activity patterns in the brain. The brain occupies a different point in this ‘state space’ at every moment. When a person moves onto a new thought, they create a new thought worm that we can detect with our methods,” explains Dr. Poppenk, who is the Canada Research Chair in Cognitive Neuroscience. “We also noticed that thought worms emerge right as new events do when people are watching movies. Drilling into this helped us validate the idea that the appearance of a new thought worm corresponds to a thought transition.”

A candid conversation with the pioneering creator of 2001: A Space Odyssey, Dr. Strangelove and Lolita

Throughout his 17-year career as a moviemaker, Stanley Kubrick has committed himself to pushing the frontiers of film into new and often controversial regions—despite the box-office problems and censorship battles that such a commitment invariably entails. Never a follower of the safe, well-traveled road to Hollywood success, he has consistently struck out on his own, shattering movie conventions and shibboleths along the way. In many respects, his latest film, the epic 2001: A Space Odyssey, stands as a metaphor for Kubrick himself. A technically flawless production that took three years and $10,500,000 to create, 2001 could have been just a super-spectacle of exotic gadgetry and lavish special effects; but with the collaboration of Arthur C. Clarke…

She can’t get sick or be late to the set, and her hair and makeup needs are minimal: Her name is Erica, and Hollywood is hoping that a sophisticated robot can be its next big star. The synthetic actor has been cast in “b,” a $70 million science-fiction movie which producer Sam Khoze describes as “a James Bond meets Mission Impossible story with heart.”

Screenwriter Tarek Zohdy (“1st Born”) says the story is about scientists who create an AI robot named Erica, then quickly realize the danger of this top-secret program, which is trying to perfect a human through a non-human form.

Variety caught up with filmmakers Zohdy and Khoze to discuss “b,” the $70 million film, which plans to finish shooting next year after a director and human star have been brought on.

Facebook researchers believe the game NetHack is well-tailored to training, testing, and evaluating AI models. Today, they released the NetHack Learning Environment, a research tool for benchmarking the robustness and generalization of reinforcement learning agents.

NetHack, which was first released in 1987, is more sophisticated than might be assumed. It tasks players with descending more than 50 dungeon levels to retrieve a magical amulet, during which they must use hundreds of items and fight monsters while contending with rich interactions between the two. Levels in NetHack are procedurally generated and every game is different, which the Facebook researchers note tests the generalization limits of current state-of-the-art AI.

For decades, games have served as benchmarks for AI. But things really kicked into gear in 2013 — the year Google subsidiary DeepMind demonstrated an AI system that could play Pong, Breakout, Space Invaders, Seaquest, Beamrider, Enduro, and Q*bert at superhuman levels. The advancements aren’t merely improving game design, according to folks like DeepMind cofounder Demis Hassabis. Rather, they’re informing the development of systems that might one day diagnose illnesses, predict complicated protein structures, and segment CT scans.
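The NetHack Learning Environment exposes the game through the standard OpenAI Gym interface, so an agent loop looks like any other Gym environment. The sketch below runs a random agent for a single episode; the nle package name and the “NetHackScore-v0” task id are taken as assumptions from the tool’s public documentation rather than details stated in this article.

    import gym
    import nle  # registers the NetHack tasks with Gym (assumed package name)

    # "NetHackScore-v0" is the task id used in NLE's public docs; an assumption here.
    env = gym.make("NetHackScore-v0")

    obs = env.reset()  # each reset procedurally generates a fresh dungeon
    done = False
    score = 0.0
    while not done:
        # A random policy stands in for a real reinforcement learning agent.
        obs, reward, done, info = env.step(env.action_space.sample())
        score += reward

    print("episode reward:", score)
    env.close()

Because every reset produces a different dungeon, an agent’s average score over many such episodes measures generalization rather than memorization of a single level layout.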