
Mathematicians Prove 30-Year-Old André-Oort Conjecture

“The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London.

The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x³ + y³ = z³ have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century.

In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties.

Chip designer mimicking brain, backed by Sam Altman, gets $25 million funding

(Reuters) — Rain Neuromorphics Inc., a startup designing chips that mimic the way the brain works and aims to serve companies using artificial intelligence (AI) algorithms, said on Wednesday it raised $25 million.

Gordon Wilson, CEO and co-founder of Rain, said that while most AI chips on the market today are digital, his company’s technology is analogue. Digital chips read 1s and 0s while analogue chips can decipher incremental information such as sound waves.

This AI Learned the Design of a Million Algorithms to Help Build New AIs Faster

Might there be a better way? Perhaps.

A new paper published on the preprint server arXiv describes how a type of algorithm called a “hypernetwork” could make the training process much more efficient. The hypernetwork in the study learned the internal connections (or parameters) of a million example algorithms so it could pre-configure the parameters of new, untrained algorithms.

The AI, called GHN-2, can predict and set the parameters of an untrained neural network in a fraction of a second. And in most cases, the algorithms using GHN-2’s parameters performed as well as algorithms that had cycled through thousands of rounds of training.
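The core idea can be sketched in a toy form. Everything below is a hypothetical stand-in, not GHN-2 itself: the “hypernet” here is just an untrained random linear map that takes a description of a target network’s architecture and emits a flat vector, which is then reshaped into that network’s weights so the target can run without any training of its own.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_forward(x, params):
    """Tiny 2-layer target network whose weights come from the hypernet."""
    W1, b1, W2, b2 = params
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

def unpack(theta, d_in=4, d_hid=8, d_out=2):
    """Reshape the hypernet's flat output vector into weight tensors."""
    i = 0
    def take(shape):
        nonlocal i
        n = int(np.prod(shape))
        out = theta[i:i + n].reshape(shape)
        i += n
        return out
    return take((d_in, d_hid)), take((d_hid,)), take((d_hid, d_out)), take((d_out,))

# The "hypernetwork" is a random linear map from an architecture descriptor
# to a flat parameter vector -- a stand-in for GHN-2's trained predictor.
n_params = 4 * 8 + 8 + 8 * 2 + 2
arch_descriptor = np.array([4.0, 8.0, 2.0])    # layer widths of the target net
H = rng.normal(0, 0.1, size=(3, n_params))     # hypernet weights (untrained here)
theta = arch_descriptor @ H                    # predicted target parameters

x = rng.normal(size=(5, 4))
y = target_forward(x, unpack(theta))           # target runs with predicted weights
```

In the real system, the hypernet is itself trained across many example architectures, so its predicted parameters are useful rather than random; the sketch only shows the mechanics of predicting and installing another network’s weights in one shot.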

AI nanny created by Chinese scientists to grow humans in robot wombs

The AI nanny is here! In a new feat for science, robots and AI can now be paired to optimise the creation of human life. In a Matrix-esque reality, robotics and artificial intelligence can now help to develop babies with algorithms and artificial wombs.

As reported by the South China Morning Post, Chinese scientists in Suzhou have developed the new technology. However, there are worries surrounding the ethics of artificially growing human babies.

IBM and CERN use quantum computing to hunt elusive Higgs boson

That is not to say that the advantage has been proven yet. The quantum algorithm developed by IBM performed comparably to classical methods on the limited quantum processors that exist today – but those systems are still in their very early stages.

And with only a small number of qubits, today’s quantum computers are not capable of carrying out computations that are useful. They also remain crippled by the fragility of qubits, which are highly sensitive to environmental changes and are still prone to errors.

Rather, IBM and CERN are banking on future improvements in quantum hardware to demonstrate tangibly, and not only theoretically, that quantum algorithms have an advantage.

What jobs are affected by AI? Better-paid, better-educated workers face the most exposure

In part because the technologies have not yet been widely adopted, previous analyses have had to rely either on case studies or subjective assessments by experts to determine which occupations might be susceptible to a takeover by AI algorithms. What’s more, most research has concentrated on an undifferentiated array of “automation” technologies including robotics, software, and AI all at once. The result has been a lot of discussion—but not a lot of clarity—about AI, with prognostications that range from the utopian to the apocalyptic.

Given that, the analysis presented here demonstrates a new way to identify the kinds of tasks and occupations likely to be affected by AI’s machine learning capabilities, rather than by automation’s robotics and software impacts on the economy. By employing a novel technique developed by Stanford University Ph.D. candidate Michael Webb, the new report establishes job exposure levels by analyzing the overlap between AI-related patents and job descriptions. In this way, the following paper homes in on the impacts of AI specifically, and does so by studying empirical statistical associations as opposed to expert forecasting.
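Webb’s actual technique is more sophisticated, but the core idea of scoring an occupation’s exposure by textual overlap with patent language can be illustrated with a toy example. The job descriptions, patent titles, and bag-of-words cosine score below are all invented for illustration:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented stand-ins for AI patent titles and occupational task descriptions.
patents = ["classify images using neural network",
           "predict demand with machine learning"]
jobs = {
    "radiologist": "classify medical images and report findings",
    "carpenter": "cut and join wood to build structures",
}

patent_vec = bow(" ".join(patents))
exposure = {job: cosine(bow(tasks), patent_vec) for job, tasks in jobs.items()}
# A higher score means more textual overlap with AI patent language,
# which the report treats as a proxy for AI exposure.
```

On this toy data the radiologist’s tasks share words like “classify” and “images” with the patents while the carpenter’s share none, which is the intuition behind why the method flags better-educated, analytic occupations as more exposed.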

The danger of AI micro-targeting in the metaverse

Artificial intelligence will soon become one of the most important, and likely most dangerous, aspects of the metaverse. I’m talking about agenda-driven artificial agents that look and act like any other users but are virtual simulations that will engage us in “conversational manipulation,” targeting us on behalf of paying advertisers.

This is especially dangerous when the AI algorithms have access to data about our personal interests, beliefs, habits and temperament, while also reading our facial expressions and vocal inflections. Such agents will be able to pitch us more skillfully than any salesman. And it won’t just be to sell us products and services – they could easily push political propaganda and targeted misinformation on behalf of the highest bidder.

And because these AI agents will look and sound like anyone else in the metaverse, our natural skepticism to advertising will not protect us. For these reasons, we need to regulate some aspects of the coming metaverse, especially AI-driven agents. If we don’t, promotional AI-avatars will fill our lives, sensing our emotions in real time and quickly adjusting their tactics for a level of micro-targeting never before experienced.

Xenobots — Novel Synthetic Life Forms At The Intersection Of Biology & Information Science

Learnings For Regenerative Morphogenesis, Astro-Biology And The Evolution Of Minds — Dr. Michael Levin, Tufts University, and Dr. Josh Bongard, University of Vermont.


Xenobots are living micro-robots built from cells and designed and programmed by a computer (an evolutionary algorithm). In the laboratory they have been demonstrated to move toward a target, pick up a payload, heal themselves after being cut, and reproduce via a process called kinematic self-replication.
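The evolutionary algorithm mentioned above can be illustrated in miniature. The bit-string “designs” and toy fitness function below are invented stand-ins for the team’s actual physics simulator, which evolves candidate body plans in much the same mutate-and-select loop:

```python
import random

random.seed(1)

def fitness(design):
    """Toy objective: reward designs matching a fixed target pattern.
    In the real pipeline, fitness comes from simulating the body plan."""
    target = [1, 0, 1, 1, 0, 1, 0, 1]
    return sum(d == t for d, t in zip(design, target))

def mutate(design, rate=0.1):
    """Flip each gene with a small probability."""
    return [1 - g if random.random() < rate else g for g in design]

# Random initial population of 8-gene designs.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                   # keep the fitter half
    children = [mutate(random.choice(survivors)) for _ in range(10)]
    pop = survivors + children                             # next generation

best = max(pop, key=fitness)
```

After a few dozen generations the best design closely matches the target; in the xenobot work, the analogous winners are cell configurations that walk, push pellets, or self-replicate when built from real frog cells.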

Beyond the future potential that has been mentioned in the press, including Xenobot applications for cleaning up radioactive waste, collecting micro-plastics in the oceans, and even helping terraform planets, Xenobot research offers a completely new tool kit for understanding how complex tissues, organs, and body segments are cooperatively formed during morphogenesis and how minds develop. It even offers glimpses of what novel life forms we may one day encounter in the cosmos.

This cutting-edge Xenobot research has been conducted by an interdisciplinary team of scientists from the University of Vermont, Tufts University, and Harvard, and our show is honored to be joined by two members of this team today.

Dr. Josh Bongard, Ph.D. (https://www.uvm.edu/cems/cs/profiles/josh_bongard), is Professor, Morphology, Evolution & Cognition Laboratory, Department of Computer Science, College of Engineering and Mathematical Sciences, University of Vermont.

Meta Is Making a Monster AI Supercomputer for the Metaverse

Though Meta didn’t give numbers on RSC’s current top speed, in terms of raw processing power it appears comparable to the Perlmutter supercomputer, ranked fifth fastest in the world. At the moment, RSC runs on 6,800 NVIDIA A100 graphics processing units (GPUs), a specialized chip once limited to gaming but now used more widely, especially in AI. Already, the machine is processing computer vision workflows 20 times faster and large language models (like GPT-3) 3 times faster. The faster a company can train models, the more of them it can complete and improve in any given year.

In addition to pure speed, RSC will give Meta the ability to train algorithms on its massive hoard of user data. In a blog post, the company said that they previously trained AI on public, open-source datasets, but RSC will use real-world, user-generated data from Meta’s production servers. This detail may make more than a few people blanch, given the numerous privacy and security controversies Meta has faced in recent years. In the post, the company took pains to note the data will be carefully anonymized and encrypted end-to-end. And, they said, RSC won’t have any direct connection to the larger internet.

To accommodate Meta’s enormous training data sets and further increase training speed, the installation will grow to include 16,000 GPUs and an exabyte of storage—equivalent to 36,000 years of high-quality video—later this year. Once complete, Meta says RSC will serve training data at 16 terabytes per second and operate at a top speed of 5 exaflops.
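The storage comparison holds up under the usual assumptions. Taking an exabyte as 10^18 bytes and dividing by 36,000 years gives a sustained bitrate of roughly 7 Mbit/s, a typical HD streaming rate, which is presumably what Meta means by “high-quality video”:

```python
# Sanity check of the "exabyte = 36,000 years of high-quality video" claim.
# Assumptions: an exabyte is 10**18 bytes; a year is 365.25 days.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
exabyte_bytes = 10**18

bitrate_bps = exabyte_bytes * 8 / (36_000 * SECONDS_PER_YEAR)
print(f"{bitrate_bps / 1e6:.1f} Mbit/s")   # roughly 7 Mbit/s, a typical HD rate
```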
