A second problem is the risk of technological job loss. This is not a new worry; people have been complaining about it since the loom, and the arguments surrounding it have become stylized: critics are Luddites who hate progress. Whither the chandlers, the lamplighters, the hansom cabbies? When technology closes one door, it opens another, and the flow of human energy and talent is simply redirected. As Joseph Schumpeter famously said, it is all just part of the creative destruction of capitalism. Even the looming prospect of self-driving trucks putting 3.5 million US truck drivers out of a job is business as usual. Unemployed truckers can just learn to code instead, right?
Those familiar replies make sense only if there are always things left for people to do, jobs that can’t be automated or done by computers. Now AI is coming for the knowledge economy as well, and the domain of humans-only jobs is dwindling absolutely, not merely morphing into something new. The truckers can learn to code, and when AI takes that over, coders can… do something or other. On the other hand, while technological unemployment may be a long-term trend, the problem it poses might be short-lived. If our AI future is genuinely as unpredictable and as revolutionary as I suspect, then even the sort of economic system we will have in that future is unknown.
A third problem is the threat of student dishonesty. During a conversation about GPT-3, a math professor told me “welcome to my world.” Mathematicians have long fought a losing battle against tools like Photomath, which allows students to snap a photo of their homework and then instantly solves it for them, showing all the needed steps. Now AI has come for the humanities and indeed for everyone. I have seen many university faculty insist that AI surely could not respond to their hyper-specific writing prompts, or assert that at best an AI could only write a barely passing paper, or appeal to this or that software that claims to spot AI products. Other researchers are trying to develop encrypted watermarks to identify AI output. All of this desperate optimism smacks of nothing more than the first stage of grief: denial.
A fully-connected annealer extendable to a multi-chip system and featuring a multi-policy mechanism has been designed by Tokyo Tech researchers to solve a broad class of combinatorial optimization (CO) problems relevant to real-world scenarios quickly and efficiently. Named Amorphica, the annealer has the ability to fine-tune parameters according to a specific target CO problem and has potential applications in logistics, finance, machine learning, and so on.
The modern world has grown accustomed to the efficient delivery of goods right to our doorsteps. But did you know that achieving such efficiency requires solving a mathematical problem, namely: what is the best possible route through all the destinations? Known as the “traveling salesman problem,” this belongs to a class of mathematical problems known as “combinatorial optimization” (CO) problems.
As the number of destinations increases, the number of possible routes grows exponentially, and a brute force method based on exhaustive search for the best route becomes impractical. Instead, an approach called “annealing computation” is adopted to find the best route quickly without an exhaustive search.
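Annealing computation can be illustrated in software with simulated annealing, the classical algorithm the hardware approach is named after. The sketch below is my own minimal illustration (the function names, cooling schedule, and parameters are illustrative assumptions, not Amorphica's actual algorithm): it improves a tour with random 2-opt moves, occasionally accepting a worse tour with probability exp(-delta/T) so the search can escape local minima as the "temperature" T cools.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of the closed tour under the distance matrix `dist`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def anneal_tsp(dist, steps=20000, t_start=10.0, t_end=1e-3, seed=0):
    """Simulated annealing for TSP with 2-opt moves and geometric cooling."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    for step in range(steps):
        # geometric cooling: temperature decays from t_start toward t_end
        t = t_start * (t_end / t_start) ** (step / steps)
        i, j = sorted(rng.sample(range(n), 2))
        # 2-opt move: reverse the segment tour[i..j]
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        # always accept improvements; accept worse tours with prob exp(-delta/t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
    return best, best_len
```

For a handful of cities this reliably recovers the optimal tour; the point of dedicated annealing hardware is to run this kind of stochastic search massively in parallel on problems far too large for exhaustive enumeration.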
Large language models still struggle with basic reasoning tasks. Two new papers that apply machine learning to math provide a blueprint for how that could change.
Say you have a cutting-edge gadget that can crack any safe in the world—but you haven’t got a clue how it works. What do you do? You could take a much older safe-cracking tool—a trusty crowbar, perhaps. You could use that lever to pry open your gadget, peek at its innards, and try to reverse-engineer it. As it happens, that’s what scientists have just done with mathematics.
Researchers have examined a deep neural network—a type of artificial intelligence that’s notoriously enigmatic on the inside—with a well-worn type of mathematical analysis that physicists and engineers have used for decades. The researchers published their results in the journal PNAS Nexus on January 23. The results hint that the AI is doing many of the same calculations that humans have long done themselves.
The paper’s authors typically use deep neural networks to predict extreme weather events or for other climate applications. While better local forecasts can help people schedule their park dates, predicting the wind and the clouds can also help renewable energy operators plan what to put into the grid in the coming hours.
The “rank” of a graph is the number of loops it has; for each rank of graphs, there exists a moduli space. The size of this space grows quickly — if you fix the lengths of the graph’s edges, there are three graphs of rank 2, 15 of rank 3, 111 of rank 4, and 2,314,204,852 of rank 10. On the moduli space, these lengths can vary, introducing even more complexity.
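The rank in question is the graph's first Betti number, which can be computed directly as |E| − |V| + (number of connected components). A minimal sketch (the function and helper names are my own, for illustration):

```python
def graph_rank(vertices, edges):
    """First Betti number (number of independent loops): |E| - |V| + #components."""
    # union-find with path halving to count connected components
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)
    components = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + components
```

For example, a "figure eight" (one vertex with two self-loops) has rank 2, matching the rank-2 graphs counted above, and any tree has rank 0.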
The shape of the moduli space for graphs of a given rank is determined by relationships between the graphs. As you walk around the space, nearby graphs should be similar, and should morph smoothly into one another. But these relationships are complicated, leaving the moduli space with mathematically unsettling features, such as regions where three walls of the moduli space pass through one another.
Mathematicians can study the structure of a space or shape using objects called cohomology classes, which can help reveal how a space is put together. For instance, consider one of mathematicians’ favorite shapes, the doughnut. On the doughnut, cohomology classes are simply loops.
In Michael Crichton’s novel “Prey,” a swarm of nanobots acts with rudimentary intelligence, learning, evolving and communicating with itself to grow more powerful.
A new model by a team of researchers led by Penn State and inspired by Crichton’s novel describes how biological or technical systems form complex structures equipped with signal-processing capabilities that allow the systems to respond to stimulus and perform functional tasks without external guidance.
“Basically, these little nanobots become self-organized and self-aware,” said Igor Aronson, Huck Chair Professor of Biomedical Engineering, Chemistry, and Mathematics at Penn State, explaining the plot of Crichton’s book. The novel inspired Aronson to study the emergence of collective motion among interacting, self-propelled agents. The research was recently published in Nature Communications.
Quantum computing has entered a bit of an awkward period. There have been clear demonstrations that we can successfully run quantum algorithms, but the qubit counts and error rates of existing hardware mean that we can’t solve any commercially useful problems at the moment. So, while many companies are interested in quantum computing and have developed software for existing hardware (and have paid for access to that hardware), the efforts have been focused on preparation. They want the expertise and capability needed to develop useful software once the computers are ready to run it.
For the moment, that leaves them waiting for hardware companies to produce sufficiently robust machines—machines that don’t currently have a clear delivery date. It could be years; it could be decades. Beyond learning how to develop quantum computing software, there’s nothing obvious to do with the hardware in the meantime.
But a company called QuEra may have found a way to do something that’s not as obvious. The technology it is developing could ultimately provide a route to quantum computing. But until then, it’s possible to solve a class of mathematical problems on the same hardware, and any improvements to that hardware will benefit both types of computation. And in a new paper, the company’s researchers have expanded the types of computations that can be run on their machine.
Antibiotic resistance is a global threat to public health and associated with the overuse of antibiotics. Although non-antibiotic drugs occupy 95% of the drug market, their impact on the emergence and spread of antibiotic resistance remains unclear. Here we demonstrate that antidepressants, one of the most frequently prescribed drugs, can induce antibiotic resistance and persistence. Such effects are associated with increased reactive oxygen species, enhanced stress signature responses, and stimulation of efflux pump expression. Mathematical modeling also supported a role for antidepressants in the occurrence of antibiotic-resistant mutants and persister cells. Considering the high consumption of antidepressants (16,850 kg annually in the United States alone), our findings highlight the need to re-evaluate the antibiotic-like side effects of antidepressants.
Stephen Hawking was one of the greatest scientific and analytical minds of our time, says NASA’s Michelle Thaller. She posits that Hawking might be one of the parents of an entirely new school of physics because he was working on some incredible stuff concerning quantum entanglement right before he died. He was even humble enough to go back to his old work about black holes and rethink his hypotheses based on new information. Not many great minds would do that, she says, relaying just one of the reasons Stephen Hawking will be so deeply missed. You can follow Michelle Thaller on Twitter at @mlthaller.
MICHELLE THALLER: Dr. Michelle Thaller is an astronomer who studies binary stars and the life cycles of stars. She is Assistant Director of Science Communication at NASA. She went to college at Harvard University, completed a post-doctoral research fellowship at the California Institute of Technology (Caltech) in Pasadena, Calif., then started working for the Jet Propulsion Laboratory’s (JPL) Spitzer Space Telescope. After a hugely successful mission, she moved on to NASA’s Goddard Space Flight Center (GSFC) in the Washington, D.C. area. In her off-hours, she often puts on about 30 lbs of Elizabethan garb and performs intricate Renaissance dances. For more information, visit NASA.
TRANSCRIPT: Michelle Thaller: Yes Jeremy, a lot of us were really sad with the passing of Stephen Hawking. He was definitely an inspiration. He was one of the most brilliant theoretical physicists in the world, and of course, he overcame this incredible disability, his life was very difficult and very dramatic and I for one am really going to miss having him around. And I certainly miss him as a scientist too. He made some incredible contributions. Now, Stephen Hawking was something that we call a theoretical physicist, and what that means is that people use the mathematics of physics to explore areas of the universe that we can’t get to very easily. For example, conditions right after the Big Bang, the beginning of the universe, what were things like when the universe was a fraction of a second old? That’s not something we can create very easily in a laboratory or any place we can go to, but we can use our mathematics to predict what that would have been like and then test our assumptions based on how the universe changed over time. And one of the places that is also very difficult to go to is, could we explore a black hole? And this is what Stephen Hawking was best known for. Now, black holes are massive objects they’re made from collapsed dead stars, and the nearest black hole to us is about 3,000 light years away. That one is not particularly large, it’s only a couple times the mass of the sun. The biggest black hole that’s in our galaxy is about four million times the mass of the sun and that actually sits right in the heart of the Milky Way Galaxy. And right now you and I are actually orbiting that giant black hole at half a million miles an hour. These are incredibly exotic objects. The reason we call them black holes is that the gravity is so intense it can suck in everything, including light. 
Not even light, going through space freely at the speed of light, can escape a black hole, so talk about dramatic exotic objects that are difficult to do experiments on. Stephen Hawking laid down some of our basic understanding of how a black hole works. And one of the things he actually did was he even predicted that black holes can die. You would think that a collapsed star that forms a bottomless pit of gravity would exist forever, but Stephen Hawking used the laws of quantum mechanics and something called thermodynamics, how heat behaves in the universe, to prove that maybe black holes can evaporate over time. And of course, that’s a hugely significant thing. One of the reasons I think it’s very unfortunate he died is we’re actually right on the cusp of being able to do actual experiments with black holes. And I know that sounds like a strange thing to say, but there are some particle accelerators, I mean specifically the Large Hadron Collider, which is in Europe, that are about to get to high enough energies they’re going to smash particles together so hard that so much energy is generated they might be able to make tiny little black holes. Read full transcript on: https://bigthink.com/videos/michelle-thaller-ask-a-nasa-astr…-the-world
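The evaporation the transcript describes can be made quantitative. These are the standard textbook formulas for Hawking's result, not equations from the transcript itself: a black hole of mass M radiates at the Hawking temperature, and integrating the resulting energy loss gives a finite lifetime that grows as the cube of the mass.

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
\qquad
t_{\mathrm{evap}} \approx \frac{5120 \, \pi G^2 M^3}{\hbar c^4}
```

The cubic dependence on M is why only tiny black holes, like the hypothetical ones the Large Hadron Collider might create, would evaporate quickly enough to observe, while a stellar-mass black hole outlives the current age of the universe by many orders of magnitude.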