
Machine learning can work wonders, but it’s only one tool among many.

Artificial intelligence is among the most poorly understood technologies of the modern era. To many, AI exists as both a tangible but ill-defined reality of the here and now and an unrealized dream of the future, a marvel of human ingenuity, as exciting as it is opaque.

It’s this indistinct picture of both what the technology is and what it can do that might engender a look of uncertainty on someone’s face when asked the question, “Can AI solve climate change?” “Well,” we think, “it must be able to do *something*,” while entirely unsure of just how algorithms are meant to pull us back from the ecological brink.

Such ambivalence is understandable. The question is loaded, faulty in its assumptions, and more than a little misleading. It is a vital one, however, and the basic premise of utilizing one of the most powerful tools humanity has ever built to address the most existential threat it has ever faced is one that warrants our genuine attention.


Machine learning, a form of artificial intelligence, vastly speeds up computational tasks and enables new technology in areas as diverse as speech and image recognition, self-driving cars, stock market trading, and medical diagnosis.

Before going to work on a given task, algorithms typically need to be trained on pre-existing data so they can learn to make fast and accurate predictions about future scenarios on their own. But what if the job is a completely new one, with no data available for training?
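That conventional workflow, fitting a model to pre-existing data and then predicting on inputs it has never seen, can be sketched in a few lines. This is a minimal illustrative example (a least-squares fit on synthetic data), not the accelerator-control method described below:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pre-existing training data: inputs X and known outcomes y (a noisy line).
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 0.5 + rng.normal(scale=0.05, size=100)

# "Training": least-squares fit of slope and intercept to the data.
A = np.column_stack([X[:, 0], np.ones(len(X))])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Prediction on a future scenario the model never saw during training.
prediction = slope * 10.0 + intercept
print(prediction)  # ≈ 20.5
```

The catch, as the question above suggests, is that this only works when relevant training data already exists.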

Now, researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated that they can use machine learning to optimize the performance of particle accelerators by teaching the algorithms the basic principles behind operations—no prior data needed.

Speech is more than just a form of communication. A person’s voice conveys emotions and personality and is a unique trait we can recognize. Our use of speech as a primary means of communication is a key reason for the development of voice assistants in smart devices and technology. Typically, virtual assistants analyze speech and respond to queries by converting the received speech signals into a model they can understand and process to generate a valid response. However, they often have difficulty capturing and incorporating the complexities of human speech and end up sounding very unnatural.

Now, in a study published in the journal IEEE Access, Professor Masashi Unoki from Japan Advanced Institute of Science and Technology (JAIST), and Dung Kim Tran, a doctoral student at JAIST, have developed a system that can capture the information in speech signals similarly to how humans perceive speech.

“In humans, the auditory periphery converts the information contained in input speech signals into neural activity patterns (NAPs) that the brain can identify. To emulate this function, we used a matching pursuit algorithm to obtain sparse representations of speech signals, or signal representations with the minimum possible significant coefficients,” explains Prof. Unoki. “We then used psychoacoustic principles, such as the equivalent rectangular bandwidth scale, gammachirp function, and masking effects to ensure that the auditory sparse representations are similar to that of the NAPs.”
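The matching pursuit idea the researchers mention can be sketched in a few lines: greedily pick the dictionary atom that best matches the remaining signal, record its coefficient, and subtract its contribution. This toy sketch uses a random orthonormal dictionary so the recovery is exact; the paper's auditory dictionaries (e.g. gammachirp atoms) are overcomplete and psychoacoustically motivated:

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    """Greedy matching pursuit: approximate x using as few
    dictionary atoms (columns of D) as possible."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual             # how well each atom matches
        i = int(np.argmax(np.abs(correlations)))  # best-matching atom
        coeffs[i] += correlations[i]              # record its contribution
        residual -= correlations[i] * D[:, i]     # strip it from the residual
    return coeffs, residual

# Orthonormal toy dictionary of 16 atoms in 16 dimensions.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(16, 16)))

# A signal built from just two atoms...
x = 3.0 * D[:, 5] - 2.0 * D[:, 8]
# ...is represented with just two significant coefficients.
coeffs, residual = matching_pursuit(x, D, n_iter=2)
print(np.count_nonzero(np.abs(coeffs) > 1e-9))  # 2
```

The sparsity is the point: a handful of significant coefficients stand in for the whole signal, much as neural activity patterns compactly encode input speech.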

Decision-making has mostly revolved around learning from mistakes and making gradual, steady improvements. For centuries, this evolutionary experience has served humans well, so it is safe to say that most decisions human beings make are based on trial and error. Humans also rely heavily on data to make key decisions: the larger the amount of high-integrity data available, the more balanced and rational their decisions will be. In the age of big data analytics, however, businesses and governments around the world are reluctant to rely on basic human instinct and know-how for major decisions, and a large percentage of companies globally use big data for this purpose instead. As a result, AI is being applied to decision-making more widely today than ever before.

However, several aspects of using AI in decision-making are debatable. Firstly, are *all* the decisions made with input from AI algorithms correct? And does involving AI in decision-making cause avoidable problems? Read on to find out. The involvement of AI in decision-making simplifies strategy-making for businesses and governments around the world, but AI has also had its fair share of missteps.

“The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London.

The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x³ + y³ = z³ have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century.
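To make the question concrete, one can brute-force the cubic case over a small range (a hypothetical sketch, not part of the paper). The search comes up empty, consistent with Euler's proof that the n = 3 case has no positive-integer solutions:

```python
# Brute-force search for positive-integer solutions of x^3 + y^3 = z^3.
limit = 50
solutions = [
    (x, y, z)
    for x in range(1, limit + 1)
    for y in range(x, limit + 1)   # y >= x avoids duplicate pairs
    for z in range(1, limit + 1)
    if x**3 + y**3 == z**3
]
print(solutions)  # [] — no solutions, as Fermat's Last Theorem guarantees
```

Of course, an empty search over a finite range proves nothing by itself; the whole difficulty, and the reason for the abstract machinery described next, is proving such statements for *all* integers.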

In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties.

(Reuters) — Rain Neuromorphics Inc., a startup that designs chips mimicking the way the brain works and aims to serve companies using artificial intelligence (AI) algorithms, said on Wednesday it raised $25 million.

Gordon Wilson, CEO and co-founder of Rain, said that while most AI chips on the market today are digital, his company’s technology is analogue. Digital chips read 1s and 0s, while analogue chips can process continuous signals such as sound waves.

Might there be a better way? Perhaps.

A new paper published on the preprint server arXiv describes how a type of algorithm called a “hypernetwork” could make the training process much more efficient. The hypernetwork in the study learned the internal connections (or parameters) of a million example algorithms so it could pre-configure the parameters of new, untrained algorithms.

The AI, called GHN-2, can predict and set the parameters of an untrained neural network in a fraction of a second. And in most cases, the algorithms using GHN-2’s parameters performed as well as algorithms that had cycled through thousands of rounds of training.
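The core idea can be sketched with a toy example. A hypernetwork is itself a network whose *output* is the parameter vector of another (target) network, so configuring the target takes one forward pass instead of a training loop. Everything below (the linear hypernetwork, the embedding, the tiny ReLU target) is an illustrative assumption, far simpler than GHN-2's graph-based architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hypernetwork": a single linear map from an architecture embedding
# to the flat parameter vector of a small target network.
emb_dim = 8
n_params = 4 * 2 + 4  # target layer: 2 inputs -> 4 units, plus 4 biases
H = rng.normal(scale=0.1, size=(n_params, emb_dim))  # hypernetwork weights

def predict_params(arch_embedding):
    """One forward pass yields every target parameter at once —
    no gradient-descent training loop for the target network."""
    flat = H @ arch_embedding
    return flat[:8].reshape(4, 2), flat[8:]  # (weights, biases)

def target_forward(x, W, b):
    """The target network, configured with the predicted parameters."""
    return np.maximum(0.0, W @ x + b)  # a single ReLU layer

emb = rng.normal(size=emb_dim)  # stands in for an encoding of the architecture
W, b = predict_params(emb)
out = target_forward(rng.normal(size=2), W, b)
print(W.shape, out.shape)  # (4, 2) (4,)
```

In the real system, the hypernetwork is trained across many architectures so that the parameters it emits already perform well, which is what makes skipping per-network training plausible.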

The AI nanny is here! In a new feat for science, robotics and artificial intelligence can now be paired to optimise the creation of human life, helping to develop babies with algorithms and artificial wombs in a Matrix-esque turn.

As reported by the South China Morning Post, Chinese scientists in Suzhou have developed the new technology. However, there are concerns about the ethics of artificially growing human babies.