
Our Gattaca Exclusive Confirmed By The Hollywood Reporter

Our trusted and proven sources were correct once again, as just hours after we broke the news that a Gattaca series is in development at Showtime, The Hollywood Reporter confirmed our exclusive. One of our writers here at Giant Freakin Robot wrote just two weeks ago that the 1997 dystopian sci-fi classic would be perfect as a television series, and it’s remarkable how quickly we went from hoping it would happen to confirming that it is. The new series will come from Howard Gordon and Alex Gansa, the creators of Homeland.

As noted in our initial report, this is not the first time the film, starring Ethan Hawke, Uma Thurman, and Jude Law, has been optioned as a series. Back in 2009, Sony attempted to turn the movie into a procedural from Gil Grant, a writer on 24 and NCIS. The underrated cult-classic movie is ideal for transforming into a prestige series on a premium network: its themes of transhumanism, genetic manipulation, and a stratified society grow more relevant with every leap technology takes.

In Gattaca, eugenics separates society into “valids” and “in-valids.” Although genetic discrimination is illegal, that hasn’t stopped businesses from profiling, giving the best jobs to the former and only menial labor opportunities to the latter. Ethan Hawke plays Vincent, an in-valid with a heart defect who uses samples from Jude Law’s Jerome Morrow, a paralyzed former swimming star who is also a valid. Using the purloined DNA, Vincent cons his way into a job at the Gattaca Aerospace Corporation and is eventually selected as a navigator for a trip to Saturn’s moon, Titan.

How a Beam of Pellets Could Blast a Probe Into Deep Space

It’s a theoretical concept, but realistic enough that NASA’s Innovative Advanced Concepts program has given Davoyan’s group $175,000 to show that the technology is feasible. “There’s rich physics in there,” says Davoyan, a mechanical and aerospace engineer at UCLA. To create propulsion, he continues, “you either throw the fuel out of the rocket or you throw the fuel at the rocket.” From a physics perspective, they work the same: Both impart momentum to a moving object.
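The momentum bookkeeping behind that statement is simple enough to sketch. Here is a toy Python estimate of the "throw the fuel at the rocket" case; all masses, speeds, and counts are invented for illustration and are not Davoyan's actual design parameters:

```python
# Toy momentum-transfer estimate for a pellet beam. A pellet that bounces
# elastically off the probe transfers up to p = 2 * m * v of momentum. We
# ignore relativity and the falling transfer efficiency as the probe speeds up.
pellet_mass = 1e-4        # kg: a 0.1 g pellet (invented value)
pellet_speed = 120_000.0  # m/s: speed the beam gives each pellet (invented)
probe_mass = 1.0          # kg: mass of the probe being pushed (invented)

n_pellets = 50_000
total_momentum = 2 * pellet_mass * pellet_speed * n_pellets  # kg*m/s delivered
delta_v = total_momentum / probe_mass                        # m/s gained by probe
print(f"Probe speeds up by roughly {delta_v / 1000:,.0f} km/s")  # ~1,200 km/s
```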

His team’s project could transform long-distance space exploration, dramatically expanding the astronomical neighborhood accessible to us. After all, we’ve only sent a few robotic visitors to scope out Uranus, Neptune, Pluto, and their moons. We know even less about objects lurking farther away. The even smaller handful of NASA craft en route to interstellar space includes Pioneer 10 and 11, which blasted off in the early 1970s; Voyager 1 and 2, which were launched in 1977 and continue their mission to this day; and the more recent New Horizons, which took nine years to fly by Pluto in 2015, glimpsing the dwarf planet’s now-famous heart-shaped plain. Over its 46-year journey, Voyager 1 has ventured farthest from home, but a pellet-beam-powered craft could overtake it in just five years, Davoyan says.

He takes inspiration from Breakthrough Starshot, a $100 million initiative announced in 2016 by Russian-born philanthropist Yuri Milner and British cosmologist Stephen Hawking to use a 100-gigawatt laser beam to blast a miniature probe toward Alpha Centauri. (The star nearest our solar system, it resides “only” 4 light-years away.) The Starshot team is exploring how they could hurl a 1-gram craft attached to a lightsail into interstellar space, using the laser to accelerate it to 20 percent of the speed of light, which is ludicrously fast and would reduce travel time from millennia to decades. “I’m increasingly optimistic that later this century, humanity’s going to be including nearby stars in our reach,” says Pete Worden, Breakthrough Starshot’s executive director.
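The arithmetic behind "from millennia to decades" is easy to check. A quick Python sanity check with round numbers, using Voyager 1's roughly 17 km/s cruise speed as the conventional-launch baseline:

```python
# Travel time to Alpha Centauri: Voyager-like speed vs. 20 percent of light speed.
C = 299_792_458.0                   # speed of light, m/s
LIGHT_YEAR = 9.4607e15              # meters in one light-year
distance = 4.25 * LIGHT_YEAR        # Alpha Centauri is ~4.25 light-years away

voyager_speed = 17_000.0            # m/s, roughly Voyager 1's cruise speed
starshot_speed = 0.20 * C           # Starshot's target speed

SECONDS_PER_YEAR = 3.156e7
print(f"Voyager-style:  {distance / voyager_speed / SECONDS_PER_YEAR:,.0f} years")   # ~75,000
print(f"Starshot-style: {distance / starshot_speed / SECONDS_PER_YEAR:.0f} years")   # ~21
```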

How Can Meta-Learning, Self-Attention And JAX Power The Next Generation of Evolutionary Optimizers?

Black box optimization methods are used in every domain, from artificial intelligence and machine learning to engineering and finance. They are designed for problems where no algebraic model of the objective is available: the structure of the objective function, or of the constraints defining the feasible set, is unknown or inexplicable. Given a set of input parameters, a black box optimizer searches for the optimal value of a function by iteratively evaluating it at multiple points in the input space until it finds the point that produces the best output.
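As a minimal illustration of that loop, here is a plain random-search optimizer in Python that treats the objective purely as a black box, only ever reading its outputs; the quadratic objective is an invented stand-in for a function whose form the optimizer never sees:

```python
import random

def black_box(x):
    # Stand-in objective; the optimizer never inspects its algebraic form.
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def random_search(f, bounds, n_evals=1000):
    """Minimize f by sampling random points and keeping the best one seen."""
    best_x, best_y = None, float("inf")
    for _ in range(n_evals):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        y = f(x)                      # the only access we have to the objective
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

print(random_search(black_box, bounds=[(-10, 10), (-10, 10)]))
```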

Though gradient descent is the most widely used optimization approach for deep learning models, it is not suitable for every problem. In cases where gradients cannot be calculated directly, or where an accurate analytical form of the objective function is unknown, other approaches such as Evolution Strategies (ES) are used. Evolution Strategies come from evolutionary algorithms, a family of population-based optimization algorithms inspired by natural selection. At its core, ES is a type of black box optimization method that operates by refining a sampling distribution based on the fitness of candidate solutions, updating the distribution with fixed, hand-designed rules.
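A stripped-down sketch of that recipe, assuming an isotropic Gaussian sampling distribution with a fixed step size (real ES variants also adapt sigma and use more refined update equations):

```python
import numpy as np

def sphere(x):
    """Toy fitness function (lower is better); stands in for any black box."""
    return float(np.sum(x ** 2))

def simple_es(f, dim=5, pop=32, elite=8, sigma=0.5, iters=200, seed=0):
    """Sample candidates around the mean, rank them by fitness, and move the
    mean toward the average of the fittest: the hand-designed update rule."""
    rng = np.random.default_rng(seed)
    mean = rng.normal(size=dim)
    for _ in range(iters):
        candidates = mean + sigma * rng.normal(size=(pop, dim))
        fitness = np.array([f(c) for c in candidates])
        elite_idx = np.argsort(fitness)[:elite]      # indices of the fittest
        mean = candidates[elite_idx].mean(axis=0)    # refine the sampling distribution
    return mean, f(mean)

best_x, best_f = simple_es(sphere)
print(best_f)  # should be small compared to the random starting point
```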

In a new AI paper, researchers from DeepMind have introduced a way to use machine learning to learn the update rules themselves from data, called meta-black-box optimization (MetaBBO), to make ES more flexible, adaptable, and scalable. MetaBBO works by meta-learning a neural network parametrization of a BBO update rule. The researchers used MetaBBO to discover a new type of ES called the learned evolution strategy (LES). LES is built on a Set Transformer, so its updates depend on the fitness of the candidates but not on the ordering of candidate solutions within the black box evaluations. After meta-training, LES can learn to select the best-performing solution or to update solutions based on a moving average.
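What a learned, attention-based update could look like is sketched below: per-candidate fitness features feed a single self-attention head whose outputs become recombination weights for the mean update. This is a loose illustration with random, untrained weights standing in for meta-trained parameters, not DeepMind's actual LES architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness_features(fitness):
    """Order-invariant per-candidate features: z-scored fitness plus a
    centered rank. (An illustrative choice, not the paper's exact features.)"""
    z = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
    ranks = np.argsort(np.argsort(fitness)) / (len(fitness) - 1) - 0.5
    return np.stack([z, ranks], axis=1)               # shape (pop, 2)

def attention_recombination(feats, d=8):
    """One self-attention head over the candidate set. Attention is a set
    operation: permuting the candidates permutes the outputs identically,
    which gives the order invariance the LES design relies on."""
    Wq, Wk, Wv = (rng.normal(size=(feats.shape[1], d)) for _ in range(3))
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    attn = np.exp(q @ k.T / np.sqrt(d))
    attn /= attn.sum(axis=1, keepdims=True)           # attention weights
    logits = (attn @ v).sum(axis=1)                   # one scalar per candidate
    w = np.exp(logits - logits.max())
    return w / w.sum()                                # recombination weights

pop, dim = 16, 4
candidates = rng.normal(size=(pop, dim))
fitness = (candidates ** 2).sum(axis=1)               # toy objective, lower is better
w = attention_recombination(fitness_features(fitness))
new_mean = w @ candidates                             # learned-style mean update
print(new_mean)
```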

OpenAI Releases ChatGPT-4 And Performs Impressive Demonstration

OpenAI has released a new version of ChatGPT, claiming that the new large language model is capable of passing – and even excelling in – a variety of academic exams.

ChatGPT-4, which will be available on Bing as well as the OpenAI website, is more reliable and more creative than its predecessor, according to OpenAI. The team tested the model on a number of exams designed for humans, from the bar exam to biology, using publicly available papers. While no additional training was given to the model ahead of the tests, it was able to perform well on most subjects, scoring in the estimated 90th percentile for the bar exam and the 86th to 100th percentile in art history.

Just as the previous model was accused of being bad at math, this version struggled most with calculus, scoring in the 43rd to 59th percentile.

An energy-efficient text-to-audio AI

Generative artificial intelligence (AI) systems will inspire an explosion of creativity in the music industry and beyond, according to the University of Surrey researchers who are inviting the public to test out their new text-to-audio model.

AudioLDM is a new AI-based system from Surrey that allows users to submit a text prompt, which is then used to generate a corresponding audio clip. The system can process prompts and deliver clips using less computing power than current AI systems, without compromising sound quality or the users’ ability to manipulate clips.
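For readers who would rather script it than use the web demo, here is a minimal sketch using the AudioLDM pipeline in Hugging Face's diffusers library; the checkpoint name and generation settings are illustrative assumptions, so check the project's GitHub for the authoritative usage:

```python
# Minimal text-to-audio sketch with AudioLDM via diffusers.
# Assumes: pip install diffusers transformers torch scipy
# The checkpoint name below is an assumption about the team's published weights
# on the Hugging Face Hub.
import scipy.io.wavfile
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2")

prompt = "birds singing in a forest, with a stream flowing in the background"
result = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0)

# AudioLDM produces 16 kHz mono audio as a NumPy array.
scipy.io.wavfile.write("birds.wav", rate=16000, data=result.audios[0])
```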

The public is able to try out AudioLDM by visiting its Hugging Face space. The code is also open-sourced on GitHub, where it has more than 1,000 stars.

Neuroscientist explores how ChatGPT mirrors its users to appear intelligent

The artificial intelligence (AI) language model ChatGPT has captured the world’s attention in recent months. This trained computer chatbot can generate text, answer questions, provide translations, and learn based on the user’s feedback. Large language models like ChatGPT may have many applications in science and business, but how much do these tools understand what we say to them, and how do they decide what to say back?

In a new paper published in Neural Computation on February 17, 2023, Salk Professor Terrence Sejnowski, author of “The Deep Learning Revolution,” explores the relationship between the human interviewer and language models to uncover why chatbots respond in particular ways, why those responses vary, and how to improve them in the future.

According to Sejnowski, language models reflect the intelligence and diversity of their interviewer.

High-resolution image reconstruction with latent diffusion models from human brain activity

Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from human brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs while preserving their high generative performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the latent image vector Z, the conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that our proposed method can reconstruct high-resolution images with high fidelity in a straightforward fashion, without the need for any additional training or fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of different LDM components from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from human brain activity, and provides a new framework for understanding DMs. Please check out our webpage at this https URL.
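The decoding pipeline the abstract describes can be caricatured in a few lines: fit linear maps from fMRI voxel patterns to the LDM's latent vector Z and conditioning C, then hand the predictions to the frozen Stable Diffusion model. The sketch below uses synthetic data and plain ridge regression; the toy dimensions and the regression choice are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels = 200, 1000     # fMRI samples x voxels (toy sizes)
z_dim, c_dim = 256, 128           # toy sizes; the real Z is the 4x64x64 image
                                  # latent, C the 77x768 text conditioning

X = rng.normal(size=(n_train, n_voxels))   # brain responses to training images
Z = rng.normal(size=(n_train, z_dim))      # latents of those images (VAE encoder)
C = rng.normal(size=(n_train, c_dim))      # conditioning embeddings of captions

reg_z = Ridge(alpha=100.0).fit(X, Z)       # voxels -> image latent
reg_c = Ridge(alpha=100.0).fit(X, C)       # voxels -> semantic conditioning

x_new = rng.normal(size=(1, n_voxels))     # held-out brain activity
z_pred, c_pred = reg_z.predict(x_new), reg_c.predict(x_new)
# In the actual method, z_pred and c_pred would be reshaped and passed through
# the frozen LDM's denoising U-Net and VAE decoder to reconstruct the seen
# image, with no training or fine-tuning of the generative model itself.
```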


OpenAI co-founder on company’s past approach to openly sharing research: “We were wrong”

OpenAI announced its latest language model, GPT-4, but many in the AI community were disappointed by the lack of public information. Their complaints track increasing tensions in the AI world over safety.

Yesterday, OpenAI announced GPT-4, its long-awaited next-generation AI language model.



Many in the AI community have criticized OpenAI’s decision to withhold details about how the model was built, noting that it undermines the company’s founding ethos as a research org and makes it harder for others to replicate its work. Perhaps more significantly, some say it also makes it difficult to develop safeguards against the sort of threats posed by AI systems like GPT-4, with these complaints coming at a time of increasing tension and rapid progress in the AI world.

“I think we can call it shut on ‘Open’ AI: the 98 page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set,” tweeted Ben Schmidt, VP of information design at Nomic AI, in a thread on the topic.
