Archive for the ‘information science’ category: Page 38

Apr 29, 2023

We’re still in the dark about a key black hole paradox

Posted by in categories: cosmology, information science, physics, singularity

Within a year, Karl Schwarzschild, who was “a lieutenant in the German army, by conscription, but a theoretical astronomer by profession,” as Mann puts it, heard of Einstein’s theory. He was the first person to work out a solution to Einstein’s equations, which showed that a singularity could form, and that nothing, once it got too close, could move fast enough to escape the singularity’s pull.
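The “nothing can escape” condition can be made concrete: Schwarzschild’s solution defines a radius at which the escape velocity reaches the speed of light. A minimal sketch, using standard values for G and c (the solar mass below is approximate):

```python
# Schwarzschild radius r_s = 2GM/c^2: inside this radius,
# escape velocity exceeds the speed of light.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (in meters) below which nothing can escape a given mass."""
    return 2 * G * mass_kg / c**2

sun_mass = 1.989e30  # kg, approximate
print(f"Sun: {schwarzschild_radius(sun_mass):.0f} m")  # ~2.95 km
```

Compressing the Sun inside roughly 3 km would, on this account, produce exactly the object Schwarzschild’s mathematics described.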

Then, in 1939, physicists Robert Oppenheimer (of Manhattan Project fame, or infamy) and Hartland Snyder tried to find out whether a star could create Schwarzschild’s impossible-sounding object. They reasoned that, given a big enough sphere of dust, gravity would cause the mass to collapse and form a singularity, and their calculations bore this out. But once World War II broke out, progress in this field stalled until the late 1950s, when people started trying to test Einstein’s theories again.

Physicist John Wheeler, thinking about the implications of a black hole, asked one of his graduate students, Jacob Bekenstein, a question that stumped physicists at the time. As Mann paraphrased it: “What happens if you pour hot tea into a black hole?”

Apr 29, 2023

Google Reveals Major Hidden Weakness In Machine Learning

Posted by in categories: information science, robotics/AI

Deep learning algorithms are prone to a previously unknown problem, say a team of computer scientists at Google.

Apr 28, 2023

Amazon is developing an improved LLM to power Alexa

Posted by in categories: information science, robotics/AI

Amazon is building a more “generalized and capable” large language model (LLM) to power Alexa, said Amazon CEO Andy Jassy during the company’s first-quarter earnings call yesterday. An LLM, like ChatGPT, is a deep learning algorithm that can recognize, summarize and generate text and other content based on knowledge from enormous amounts of text data.

Jassy said that although Amazon has had an LLM powering Alexa, the tech giant is working on one that is more capable than the current one. The Amazon executive believes that the addition of an improved LLM will help Amazon work toward its goal of building “the world’s best personal assistant,” but acknowledged that it will be difficult to do so across many domains.

“I think when people often ask us about Alexa, what we often share is that if we were just building a smart speaker, it would be a much smaller investment,” said Jassy during the call. “But we have a vision, which we have conviction about that we want to build the world’s best personal assistant. And to do that, it’s difficult. It’s across a lot of domains and it’s a very broad surface area. However, if you think about the advent of Large Language Models and generative AI, it makes the underlying models that much more effective such that I think it really accelerates the possibility of building that world’s best personal assistant.”

Apr 28, 2023

Algorithm of quantum engineering of large-amplitude high-fidelity Schrödinger cat states

Posted by in categories: engineering, information science, quantum physics

We present an algorithm for the quantum engineering of large-amplitude (≥ 5), high-fidelity (≥ 0.99) even/odd Schrödinger cat states (SCSs) using a single-mode squeezed vacuum (SMSV) state as a resource. A set of k beam splitters (BSs) with arbitrary transmittance and reflectance coefficients, arranged in sequence, acts as a hub that redirects a multiphoton state into measuring modes that are simultaneously monitored by photon-number-resolving (PNR) detectors. We show that splitting the multiphoton state significantly increases the success probability of the SCS generator compared with its single-PNR-detector implementation, and imposes fewer requirements on the ideality of the PNR detectors.

Apr 27, 2023

Metamaterial Provides Underwater Stealth

Posted by in categories: genetics, information science

A lightweight structure made of rubber and metal layers can provide an object with underwater acoustic stealth over a broad frequency range.

An acoustic “cloak” could hide an underwater object from detection by sonar devices or by echolocating marine animals. Much like camouflage clothing allows figures to fade into a background, acoustic camouflage can make an object indistinguishable from the surrounding water. Underwater acoustic cloaks have previously been demonstrated, but they typically work over a narrow range of frequencies or are too bulky to be practical. Now Hao-Wen Dong at the Beijing Institute of Technology and colleagues demonstrate a lightweight, broadband cloak made of a thin shell of layered material. The cloak achieves acoustic stealth by both blocking the reflection of sonar pings off the surface and preventing the escape of sound generated from within the cloaked object [1].

Dong and colleagues designed a 4-cm-thick structure—combining an outer rubber layer and a “metamaterial” made of porous aluminum—which covered a steel plate. Using a genetic algorithm, they optimized the metamaterial’s elastic properties to tailor the interaction with underwater sound waves. Specifically, the metamaterial converts impinging longitudinal sound waves, which can travel long distances underwater, to transverse sound waves, which cannot propagate through water. These transverse waves get trapped in the rubber layer, where they get absorbed, eliminating reflected and transmitted waves simultaneously. The researchers built and tested a prototype cloak, confirming that it behaved as predicted. In particular, it absorbed 80% of the energy of incoming sound waves while offering 100-fold attenuation of acoustic noise produced on the side of the steel plate.
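The optimization step described above follows the standard genetic-algorithm loop: evaluate a population of candidate parameter sets, keep the fittest, and produce offspring by crossover and mutation. A generic sketch of that loop, with a toy fitness function standing in for the authors’ acoustic model (population size, mutation rate, and the target vector are illustrative choices, not values from the paper):

```python
import random

def genetic_optimize(fitness, n_params, pop_size=40, generations=100,
                     mutation_rate=0.1):
    """Minimal genetic algorithm: evolve real-valued parameter vectors
    in [0, 1] toward higher fitness."""
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                     if random.random() < mutation_rate else g
                     for g in child]                # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: stand-in for "minimize reflected + transmitted acoustic energy".
best = genetic_optimize(lambda p: -sum((g - 0.3) ** 2 for g in p), n_params=5)
```

In the real design problem, the fitness evaluation would be a full elastic-wave simulation of the layered structure rather than this closed-form stand-in.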

Apr 27, 2023

A new quantum approach to solve electronic structures of complex materials

Posted by in categories: chemistry, computing, engineering, information science, quantum physics

If you know the atoms that compose a particular molecule or solid material, the interactions between those atoms can be determined computationally, by solving quantum mechanical equations—at least, if the molecule is small and simple. However, solving these equations, critical for fields from materials engineering to drug design, requires a prohibitively long computational time for complex molecules and materials.

Now, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago’s Pritzker School of Molecular Engineering (PME) and Department of Chemistry have explored the possibility of solving these electronic structures using a quantum computer.

The research, which uses a combination of new computational approaches, was published online in the Journal of Chemical Theory and Computation. It was supported by Q-NEXT, a DOE National Quantum Information Science Research Center led by Argonne, and by the Midwest Integrated Center for Computational Materials (MICCoM).

Apr 24, 2023

Auto-GPT May Be The Strong AI Tool That Surpasses ChatGPT

Posted by in categories: information science, robotics/AI, transportation

Like many people, you may have had your mind blown recently by the possibility of ChatGPT and other large language models like the new Bing or Google’s Bard.

For anyone who somehow hasn’t come across them (unlikely, as ChatGPT is reportedly the fastest-growing app of all time), here’s a quick recap:


Apr 23, 2023

On theoretical justification of the forward–backward algorithm for the variational learning of Bayesian hidden Markov models

Posted by in categories: computing, information science

The hidden Markov model (HMM) [1, 2] is a powerful model for describing sequential data and has been widely used in speech signal processing [3-5], computer vision [6-8], longitudinal data analysis [9], social networks [10-12] and so on. An HMM typically assumes the system has K internal states, and the transition of states forms a Markov chain. The system state cannot be observed directly, so we must infer the hidden states and system parameters from observations. Because of the latent variables, the Expectation-Maximisation (EM) algorithm [13, 14] is often used to learn an HMM. The main difficulty is calculating site marginal distributions and pairwise marginal distributions under the posterior distribution of the latent variables. The forward-backward algorithm was designed specifically to tackle this problem. Its derivation relies heavily on HMM assumptions and probabilistic relationships between quantities, and thus requires the parameters in the posterior distribution to have explicit probabilistic meanings.
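The recursion the paragraph refers to can be written compactly. A generic sketch for a discrete HMM with scaled messages (variable names are illustrative): the forward pass computes alpha, the backward pass computes beta, and combining them yields the site marginals p(z_t | x) and pairwise marginals that the E-step needs.

```python
import numpy as np

def forward_backward(pi, A, B_obs):
    """pi: (K,) initial distribution; A: (K, K) transition matrix;
    B_obs: (T, K) emission likelihoods p(x_t | z_t = k).
    Returns site marginals gamma (T, K) and pairwise marginals xi (T-1, K, K)."""
    T, K = B_obs.shape
    alpha = np.zeros((T, K))   # scaled forward messages
    beta = np.zeros((T, K))    # scaled backward messages
    scale = np.zeros(T)

    # Forward pass, normalizing at each step for numerical stability.
    alpha[0] = pi * B_obs[0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B_obs[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward pass, reusing the forward scaling factors.
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B_obs[t + 1] * beta[t + 1])) / scale[t + 1]

    gamma = alpha * beta       # site marginals p(z_t | x_{1:T})
    xi = (alpha[:-1, :, None] * A[None] * (B_obs[1:] * beta[1:])[:, None, :]
          / scale[1:, None, None])   # pairwise marginals p(z_t, z_{t+1} | x_{1:T})
    return gamma, xi
```

Both outputs are properly normalized distributions: each row of gamma sums to one, as does each K×K slice of xi.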

The Bayesian HMM [15-22] further imposes priors on the parameters of the HMM, and the resulting model is more robust. It has been demonstrated that Bayesian HMMs often outperform HMMs in applications. However, learning a Bayesian HMM is more challenging because the posterior distribution of the latent variables is intractable. Mean-field variational inference is therefore often used in the E-step of the EM algorithm; it seeks an optimal approximation of the posterior distribution within a factorised family. The variational inference iteration also involves computing site marginal distributions and pairwise marginal distributions given the joint distribution of the system-state indicator variables. Existing works [15-23] directly apply the forward-backward algorithm to obtain these values without justification. This is not theoretically sound, and the result is not guaranteed to be correct, since the requirements of the forward-backward algorithm are not met in this case.

In this paper, we prove that the forward-backward algorithm can be applied in more general cases where the parameters have no probabilistic meanings. The first proof converts the general case to an HMM and uses the correctness of the forward-backward algorithm on HMM to prove the claim. The second proof is model-free, which derives the forward-backward algorithm in a totally different way. The new derivation does not rely on HMM assumptions and merely utilises matrix techniques to rewrite the desired quantities. Therefore, this derivation naturally proves that it is unnecessary to make probabilistic requirements on the parameters of the forward-backward algorithm. Specifically, we justify that heuristically applying the forward-backward algorithm in the variational learning of Bayesian HMM is theoretically sound and guaranteed to return the correct result.

Apr 23, 2023

Quantum circuit learning as a potential algorithm to predict experimental chemical properties

Posted by in categories: chemistry, information science, quantum physics

We introduce quantum circuit learning (QCL) as an emerging regression algorithm for chemo- and materials-informatics. The supervised model, operating on the rules of quantum mechanics, can process linear and smooth non-linear functions from small datasets (100 records). Compared with conventional algorithms, such as random forest, support vector machine, and linear regression, QCL can offer better predictions for some one-dimensional functions and experimental chemical databases. QCL could make the virtual exploration of new molecules and materials more efficient through its superior prediction performance.
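The QCL pattern the abstract describes is: encode the input as a rotation angle in a parameterized circuit, read out an expectation value as the model prediction, and tune the circuit parameters with a classical optimizer. A deliberately reduced one-qubit toy of that pattern (this is not the paper’s circuit; here the model collapses to f(x) = cos(wx + b), trained by finite-difference gradient descent):

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def model(x, w, b):
    """QCL pattern on one qubit: encode x as a rotation RY(w*x),
    apply a trainable rotation RY(b), output <Z> of the final state."""
    state = ry(b) @ ry(w * x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2   # <Z> = |amp0|^2 - |amp1|^2

# Fit f(x) ~ sin(x) on a tiny dataset, as QCL does with a classical optimizer.
xs = np.linspace(-1, 1, 20)
ys = np.sin(xs)

def loss(w, b):
    return np.mean([(model(x, w, b) - y) ** 2 for x, y in zip(xs, ys)])

w, b, lr, eps = 1.0, 0.1, 0.2, 1e-4
for _ in range(500):
    gw = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
    gb = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
    w, b = w - lr * gw, b - lr * gb
```

The circuits in the paper use more qubits, entangling gates, and hardware-friendly gradient rules, but the structure (quantum feature encoding, expectation-value readout, classical parameter updates) is the same.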

Apr 22, 2023

The Multiverse: Our Universe Is Suspiciously Unlikely to Exist—Unless It Is One of Many

Posted by in categories: alien life, information science, particle physics

But we expect that it’s in that first tiny fraction of a second that the key features of our universe were imprinted.

The conditions of the universe can be described through its “fundamental constants”—fixed quantities in nature, such as the gravitational constant (called G) or the speed of light (called c). There are about 30 of these, representing the sizes and strengths of parameters such as particle masses, forces, or the universe’s expansion. But our theories don’t explain what values these constants should have. Instead, we have to measure them and plug their values into our equations to accurately describe nature.

