Archive for the ‘information science’ category

Feb 6, 2022

AI learns physics to optimize particle accelerator performance

Posted by in categories: biotech/medical, finance, information science, robotics/AI

Machine learning, a form of artificial intelligence, vastly speeds up computational tasks and enables new technology in areas as broad as speech and image recognition, self-driving cars, stock market trading and medical diagnosis.

Before going to work on a given task, algorithms typically need to be trained on pre-existing data so they can learn to make fast and accurate predictions about future scenarios on their own. But what if the job is a completely new one, with no data available for training?

Now, researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated that they can use machine learning to optimize the performance of particle accelerators by teaching the algorithms the basic principles behind operations—no prior data needed.

Feb 4, 2022

Removing water from underwater photography

Posted by in category: information science

A new algorithm for underwater photography makes marine life appear as clear as it would on land, and it’s helping scientists understand the ocean better.

Feb 3, 2022

Mimicking the brain to realize ‘human-like’ virtual assistants

Posted by in categories: information science, robotics/AI

Speech is more than just a form of communication. A person’s voice conveys emotions and personality and is a unique trait we can recognize. Our use of speech as a primary means of communication is a key reason for the development of voice assistants in smart devices and technology. Typically, virtual assistants analyze speech and respond to queries by converting the received speech signals into a model they can understand and process to generate a valid response. However, they often have difficulty capturing and incorporating the complexities of human speech and end up sounding very unnatural.

Now, in a study published in the journal IEEE Access, Professor Masashi Unoki from the Japan Advanced Institute of Science and Technology (JAIST) and Dung Kim Tran, a doctoral student at JAIST, have developed a system that can capture the information in speech signals similarly to how humans perceive speech.

“In humans, the auditory periphery converts the information contained in input speech signals into neural activity patterns (NAPs) that the brain can identify. To emulate this function, we used a matching pursuit algorithm to obtain sparse representations of speech signals, or signal representations with the minimum possible significant coefficients,” explains Prof. Unoki. “We then used psychoacoustic principles, such as the equivalent rectangular bandwidth scale, gammachirp function, and masking effects to ensure that the auditory sparse representations are similar to that of the NAPs.”
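The matching pursuit step Prof. Unoki describes can be sketched in a few lines: at each iteration, pick the dictionary atom most correlated with the current residual and subtract its contribution, leaving a representation with few significant coefficients. This is a generic illustration of the algorithm, not JAIST's implementation; the random dictionary and atom budget below are arbitrary stand-ins for the psychoacoustically motivated dictionary in the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy sparse approximation over a dictionary of unit-norm atoms."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual      # inner product with each atom
        k = np.argmax(np.abs(correlations))         # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy example: a signal built from two atoms of a random unit-norm
# dictionary; matching pursuit recovers a sparse code for it.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 32))
D /= np.linalg.norm(D, axis=0)                      # normalize each atom
x = 3.0 * D[:, 5] - 2.0 * D[:, 17]
c, r = matching_pursuit(x, D, n_atoms=20)
print(np.linalg.norm(r))                            # residual shrinks toward zero
```

The sparsity comes from the greedy selection: only the atoms actually chosen receive nonzero coefficients, mirroring how only a few neural activity patterns dominate for a given sound.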

Feb 3, 2022

Does AI Improve Human Judgment?

Posted by in categories: business, information science, robotics/AI

Decision-making has mostly revolved around learning from mistakes and making gradual, steady improvements. This evolutionary experience has served humans well for centuries, so it is safe to say that most decisions human beings make are based on trial and error. Humans also rely heavily on data to make key decisions: the larger the amount of high-integrity data available, the more balanced and rational those decisions will be. In the age of big data analytics, however, businesses and governments around the world are reluctant to rely on basic human instinct and know-how alone when making major decisions, and a large percentage of companies globally now use big data for the purpose. As a result, AI is being applied to decision-making more widely today than ever before.

However, there are several debatable aspects of using AI in decision-making. First, are *all* the decisions made with input from AI algorithms correct? And does involving AI in decision-making cause avoidable problems? Read on to find out. The involvement of AI in decision-making simplifies strategy-making for businesses and governments around the world, but AI has also had its fair share of missteps.

Feb 3, 2022

Mathematicians Prove 30-Year-Old André-Oort Conjecture

Posted by in categories: information science, mathematics

“The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London.

The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x³ + y³ = z³ have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century.
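As a small illustration of the question, a brute-force search over nonzero integers finds no solutions to x³ + y³ = z³, just as Euler's proof of the cubic case of Fermat's Last Theorem guarantees (the search bound of 30 is an arbitrary choice for this sketch):

```python
# Brute-force check of x^3 + y^3 = z^3 over nonzero integers in [-N, N].
# The n = 3 case of Fermat's Last Theorem says the search must come up empty.
N = 30
cubes = {n**3: n for n in range(-N, N + 1) if n != 0}   # cube -> root lookup
solutions = [
    (x, y, cubes[x**3 + y**3])
    for x in range(-N, N + 1) if x != 0
    for y in range(-N, N + 1) if y != 0
    if x**3 + y**3 in cubes
]
print(solutions)  # [] -- no nonzero integer solutions
```

Restricting to nonzero integers matters: allowing zeros admits trivial solutions such as 1³ + (−1)³ = 0³, which are exactly the cases the theorem sets aside.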

In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties.

Feb 2, 2022

Chip designer mimicking brain, backed by Sam Altman, gets $25 million funding

Posted by in categories: information science, robotics/AI

(Reuters) — Rain Neuromorphics Inc., a startup that designs chips mimicking the way the brain works, aimed at companies using artificial intelligence (AI) algorithms, said on Wednesday it had raised $25 million.

Gordon Wilson, CEO and co-founder of Rain, said that while most AI chips on the market today are digital, his company’s technology is analogue. Digital chips read 1s and 0s while analogue chips can decipher incremental information such as sound waves.

Feb 1, 2022

This AI Learned the Design of a Million Algorithms to Help Build New AIs Faster

Posted by in categories: information science, robotics/AI

Might there be a better way? Perhaps.

A new paper published on the preprint server arXiv describes how a type of algorithm called a “hypernetwork” could make the training process much more efficient. The hypernetwork in the study learned the internal connections (or parameters) of a million example algorithms so it could pre-configure the parameters of new, untrained algorithms.

The AI, called GHN-2, can predict and set the parameters of an untrained neural network in a fraction of a second. And in most cases, the algorithms using GHN-2’s parameters performed as well as algorithms that had cycled through thousands of rounds of training.
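In rough outline, a hypernetwork is a network whose output is the parameter set of another network, so a single forward pass replaces many rounds of training. The sketch below is a deliberately minimal NumPy illustration of that idea, not GHN-2 itself (which encodes the target architecture with a graph neural network); the linear hypernetwork, embedding size, and small target MLP here are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_hypernetwork(embed_dim, target_params):
    """The hypernetwork is itself a (tiny) network: here, one linear map
    from a target-architecture embedding to a flat parameter vector."""
    return {
        "W": rng.standard_normal((target_params, embed_dim)) * 0.1,
        "b": np.zeros(target_params),
    }

def predict_parameters(hyper, arch_embedding):
    """One forward pass: produce a full weight vector, no gradient steps."""
    return hyper["W"] @ arch_embedding + hyper["b"]

def run_target_network(flat_params, x, hidden=8):
    """Unflatten the predicted vector into a one-hidden-layer MLP and
    evaluate it on input x."""
    d = x.shape[0]
    W1 = flat_params[: d * hidden].reshape(hidden, d)
    b1 = flat_params[d * hidden : d * hidden + hidden]
    w2 = flat_params[d * hidden + hidden :]
    return w2 @ np.tanh(W1 @ x + b1)

d, hidden = 4, 8
n_params = d * hidden + hidden + hidden        # W1 + b1 + w2
hyper = init_hypernetwork(embed_dim=16, target_params=n_params)
embedding = rng.standard_normal(16)            # stands in for a learned encoding
params = predict_parameters(hyper, embedding)  # instant parameter prediction
y = run_target_network(params, rng.standard_normal(d))
print(params.shape)                            # (48,)
```

In GHN-2 the mapping from architecture to parameters is learned from the million training examples; the point of the sketch is only the shape of the trick, that weights are predicted rather than optimized.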

Feb 1, 2022

Will brains or algorithms rule the kingdom of science?

Posted by in categories: information science, neuroscience, science

Science today stands at a crossroads: will its progress be driven by human minds or by the machines that we’ve created?

Feb 1, 2022

AI nanny created by Chinese scientists to grow humans in robot wombs

Posted by in categories: biotech/medical, ethics, information science, robotics/AI

The AI nanny is here! In a new feat for science, robots and AI can now be paired to optimise the creation of human life. In a Matrix-esque reality, robotics and artificial intelligence can now help to develop babies with algorithms and artificial wombs.

As reported by the South China Morning Post, scientists in Suzhou, China, developed the new technology. However, there are concerns about the ethics of artificially growing human babies.

Jan 31, 2022

IBM and CERN use quantum computing to hunt elusive Higgs boson

Posted by in categories: computing, finance, information science, particle physics, quantum physics

That is not to say that the advantage has been proven yet. The quantum algorithm developed by IBM performed comparably to classical methods on the limited quantum processors that exist today – but those systems are still in their very early stages.

And with only a small number of qubits, today’s quantum computers are not capable of carrying out computations that are useful. They also remain crippled by the fragility of qubits, which are highly sensitive to environmental changes and are still prone to errors.

Rather, IBM and CERN are banking on future improvements in quantum hardware to demonstrate tangibly, and not only theoretically, that quantum algorithms have an advantage.