Archive for the ‘information science’ category

Jul 1, 2020

Scientists Fire Up a Commercially Available Desktop Quantum Computer

Posted by in categories: computing, education, information science, quantum physics

Scientists suggest a desktop quantum computer based on nuclear magnetic resonance (NMR) could soon be on its way to a classroom near you. Although the device might not be suited to handle large quantum applications, the makers say it could help students learn about quantum computing.

SpinQ Chief Scientist Prof. Bei Zeng of the University of Guelph announced the SpinQ Gemini, a two-qubit desktop quantum computer, at the industry session of the Quantum Information Processing (QIP2020) conference, held recently in Shenzhen, China. According to the researchers, it is the first time a desktop quantum computer has been commercially available.

SpinQ Gemini is built around state-of-the-art permanent-magnet technology, providing a 1 T magnetic field while running at room temperature and requiring no maintenance. It demonstrates quantum algorithms such as Deutsch’s algorithm and Grover’s algorithm for teaching quantum computing to university and high school students, and it also provides advanced models for quantum circuit design and control sequence design for researchers.
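
For a sense of what running a textbook routine on two qubits looks like, here is a minimal sketch of Deutsch’s algorithm as a plain NumPy state-vector simulation. It is generic teaching material, not SpinQ’s own software or API.

```python
# Deutsch's algorithm on two qubits, simulated with a plain state vector.
# One oracle call decides whether a one-bit function f is constant or balanced.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """Build U_f : |x>|y> -> |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

def deutsch(f):
    state = np.kron(np.array([1, 0]), np.array([0, 1]))  # start in |0>|1>
    state = np.kron(H, H) @ state                         # create superposition
    state = oracle(f) @ state                             # single oracle query
    state = np.kron(H, I) @ state                         # interfere
    p1 = np.sum(np.abs(state[2:]) ** 2)  # probability that the first qubit reads 1
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant function -> "constant"
print(deutsch(lambda x: x))      # identity          -> "balanced"
print(deutsch(lambda x: 1 - x))  # negation          -> "balanced"
```

Grover’s algorithm on two qubits can be simulated the same way by swapping in its oracle and diffusion matrices.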

Continue reading “Scientists Fire Up a Commercially Available Desktop Quantum Computer” »

Jun 29, 2020

NASA’s New Moon-Bound Space Suits Will Get a Boost From AI

Posted by in categories: information science, robotics/AI, space

Engineers are turning to generative design algorithms to build components for NASA’s next-generation space suit—the first major update in decades.
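
As a rough, purely illustrative sketch of what a generative design loop does (the mass and strength models below are invented for the example and have nothing to do with NASA’s actual tooling), the idea is to propose many candidate geometries, discard those that fail an engineering constraint, and keep the lightest survivor:

```python
# Toy generative-design loop: sample many candidate parts, reject infeasible
# ones, keep the lightest. All numbers are made up for illustration.
import random

def mass(thickness_mm, rib_count):
    # hypothetical mass model for a bracket-like part
    return 40.0 * thickness_mm + 6.0 * rib_count

def strength(thickness_mm, rib_count):
    # hypothetical strength surrogate
    return 90.0 * thickness_mm + 25.0 * rib_count

REQUIRED_STRENGTH = 400.0
best = None
for _ in range(10_000):                        # generate many candidate designs
    t = random.uniform(0.5, 6.0)               # wall thickness in mm
    ribs = random.randint(0, 12)               # number of stiffening ribs
    if strength(t, ribs) < REQUIRED_STRENGTH:  # reject infeasible candidates
        continue
    if best is None or mass(t, ribs) < mass(*best):
        best = (t, ribs)

print("lightest feasible design:", best, "mass:", round(mass(*best), 1))
```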

Jun 29, 2020

How Chinese tech giants are disrupting insurance industry with pooled funds

Posted by in categories: biotech/medical, finance, health, information science, internet, mobile phones

However, the situation has been improving as Chinese tech giants including e-commerce company Alibaba, search engine Baidu, on-demand delivery company Meituan Dianping, ride-hailing operator Didi Chuxing and smartphone maker Xiaomi now offer more affordable health care plans via mutual aid platforms, which operate as a collective claim-sharing mechanism.


China’s online mutual aid platforms are disrupting old-school insurance companies by leveraging big data and internet finance technologies to offer low-cost medical coverage.

Continue reading “How Chinese tech giants are disrupting insurance industry with pooled funds” »

Jun 28, 2020

Mathematical Breakthrough Makes It Easier to Explore Quantum Entanglement

Posted by in categories: information science, mathematics, particle physics, quantum physics

Updated mathematical techniques that can distinguish between two types of ‘non-Gaussian curve’ could make it easier for researchers to study the nature of quantum entanglement.

Quantum entanglement is perhaps one of the most intriguing phenomena known to physics. It describes how the fates of multiple particles can become entwined, even when separated by vast distances. Importantly, the probability distributions needed to define the quantum states of these particles deviate from the bell-shaped, or ‘Gaussian’, curves which underlie many natural processes. Non-Gaussian curves don’t arise in quantum systems alone, however: they can also be produced by mixtures of ordinary Gaussian curves, which creates difficulties for physicists studying quantum entanglement.

In new research published in EPJ D, Shao-Hua Xiang and colleagues at Huaihua University in China propose a solution to this problem. They suggest an updated set of equations that allows physicists to easily check whether or not a non-Gaussian state is genuinely quantum.
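
The classical side of that difficulty is easy to reproduce numerically. The short sketch below illustrates the problem rather than the authors’ criterion: a 50/50 mixture of two Gaussian curves is itself non-Gaussian, so a non-Gaussian signature alone cannot certify quantum behaviour.

```python
# A 50/50 mixture of two Gaussians is non-Gaussian (nonzero excess kurtosis),
# even though every ingredient is an ordinary Gaussian curve.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mix = np.where(rng.random(n) < 0.5,
               rng.normal(-2.0, 1.0, n),    # Gaussian component 1
               rng.normal(+2.0, 1.0, n))    # Gaussian component 2
pure = rng.normal(0.0, mix.std(), n)        # a single Gaussian with the same variance

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0            # zero for a true Gaussian

print("mixture :", round(excess_kurtosis(mix), 2))   # about -1.3, clearly non-Gaussian
print("gaussian:", round(excess_kurtosis(pure), 2))  # approximately 0
```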

As physicists make more discoveries about the nature of quantum entanglement, they are rapidly making progress towards advanced applications in the fields of quantum communication and computation. The approach taken in this study could help speed up the pace of these advances. Xiang and colleagues acknowledge that while previous efforts to distinguish between the two types of non-Gaussian curve have had some success, their reliance on Gaussian curves as a starting point has so far meant that no single approach has proven completely effective. Based on the argument that there can’t be any truly reliable Gaussian reference for a genuinely quantum non-Gaussian state, the researchers present a new theoretical framework.

Continue reading “Mathematical Breakthrough Makes It Easier to Explore Quantum Entanglement” »

Jun 27, 2020

Future shocks: 17 technology predictions for 2025

Posted by in categories: biotech/medical, information science, robotics/AI

1. AI-optimized manufacturing

Paper and pencil tracking, luck, significant global travel and opaque supply chains are part of today’s status quo, resulting in large amounts of wasted energy, materials and time. Accelerated in part by the long-term shutdown of international and regional travel by COVID-19, companies that design and build products will rapidly adopt cloud-based technologies to aggregate, intelligently transform, and contextually present product and process data from manufacturing lines throughout their supply chains. By 2025, this ubiquitous stream of data and the intelligent algorithms crunching it will enable manufacturing lines to continuously optimize towards higher levels of output and product quality – reducing overall waste in manufacturing by up to 50%. As a result, we will enjoy higher quality products, produced faster, at lower cost to our pocketbooks and the environment.

Anna-Katrina Shedletsky, CEO and Founder of Instrumental.

Jun 27, 2020

Pagaya raises $102 million to manage assets with AI

Posted by in categories: finance, information science, robotics/AI, transportation

Pagaya, an AI-driven institutional asset manager that focuses on fixed income and consumer credit markets, today announced it raised $102 million in equity financing. CEO Gal Krubiner said the infusion will enable Pagaya to grow its data science team, accelerate R&D, and continue its pursuit of new asset classes including real estate, auto loans, mortgages, and corporate credit.

Pagaya applies machine intelligence to securitization — the conversion of an asset (usually a loan) into marketable securities (e.g., mortgage-backed securities) that are sold to other investors — and loan collateralization. It eschews the traditional method of securitizing pools of previously assembled asset-backed securities (ABS) for a more bespoke approach, employing algorithms to compile discretionary funds for institutional investors such as pension funds, insurance companies, and banks. Pagaya selects and buys individual loans by analyzing emerging alternative asset classes, after which it assesses their risk and draws on “millions” of signals to predict their returns.

Pagaya’s data scientists can build algorithms to track activities such as auto loans made to residents of particular cities or even specific neighborhoods. The company is limited only by the amount of data publicly available; on average, Pagaya looks at decades of information on borrowers and evaluates thousands of variables.
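
As a purely illustrative sketch of that general workflow (the features, coefficients, and model below are invented for the example and do not reflect Pagaya’s actual algorithms), one might score each loan’s default risk from a handful of borrower signals and rank loans by expected return:

```python
# Toy loan-selection sketch: estimate default risk from borrower signals,
# then rank loans by a simple expected-return figure. All numbers invented.
from dataclasses import dataclass
import math

@dataclass
class Loan:
    rate: float          # annual interest rate
    fico: int            # borrower credit score
    dti: float           # debt-to-income ratio

def default_probability(loan: Loan) -> float:
    # toy logistic model with made-up coefficients
    z = -2.0 - 0.01 * (loan.fico - 700) + 3.0 * loan.dti
    return 1.0 / (1.0 + math.exp(-z))

def expected_return(loan: Loan, loss_given_default: float = 0.6) -> float:
    p = default_probability(loan)
    return (1 - p) * loan.rate - p * loss_given_default

loans = [Loan(0.11, 720, 0.25), Loan(0.18, 640, 0.45), Loan(0.09, 780, 0.15)]
for loan in sorted(loans, key=expected_return, reverse=True):
    print(loan, round(expected_return(loan), 4))
```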

Jun 21, 2020

The case for self-explainable AI

Posted by in categories: biotech/medical, information science, robotics/AI

For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because it found malignant patterns in the mole or is it because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers?

Researchers have developed various interpretability techniques that help investigate the decisions made by machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging.

Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions by themselves, as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently published on the arXiv preprint server, expands on this idea.
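
A toy way to see the distinction, offered here as a sketch of the general idea rather than Elton’s proposal: instead of bolting an interpretation tool onto a trained model after the fact, a self-explaining model returns its decision together with the evidence behind it. In the sketch below the “explanation” is simply the per-feature contribution of a linear scorer, and the mole-screening feature names are hypothetical.

```python
# A model whose predict() returns both a decision and its own explanation.
# Here the explanation is the per-feature contribution of a linear score.
import numpy as np

class SelfExplainingLinearClassifier:
    def __init__(self, weights, bias, feature_names):
        self.w, self.b, self.names = np.asarray(weights), bias, feature_names

    def predict(self, x):
        contributions = self.w * np.asarray(x)        # per-feature evidence
        score = contributions.sum() + self.b
        label = int(score > 0)
        explanation = sorted(zip(self.names, contributions),
                             key=lambda kv: -abs(kv[1]))
        return label, explanation                     # decision plus its reasons

# hypothetical mole-screening features; weights chosen only for illustration
clf = SelfExplainingLinearClassifier(
    weights=[1.8, 1.2, 0.1], bias=-1.0,
    feature_names=["asymmetry", "border_irregularity", "image_brightness"])
label, why = clf.predict([0.7, 0.4, 0.9])
print("malignant" if label else "benign", why)
```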

Jun 19, 2020

Scientists built a new quantum computer. It’s made of five atoms and “self-destructs” after each use

Posted by in categories: computing, information science, particle physics, quantum physics

Scientists have managed another breakthrough. They built a quantum computer that can execute the difficult Shor’s algorithm. It’s just five atoms big, but the experts claim it will be easy to scale up.
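
For context on why Shor’s algorithm is considered difficult yet valuable, the sketch below walks through its classical number-theoretic step for the textbook case of factoring 15. The period is found here by brute force; finding that period efficiently for large numbers is exactly the part that needs the quantum hardware.

```python
# Classical post-processing step of Shor's algorithm, with the period found
# by brute force (the quantum computer's job is to find it efficiently).
from math import gcd

def factor_via_period(N: int, a: int):
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)   # lucky guess already shares a factor
    r = 1
    while pow(a, r, N) != 1:               # brute-force period finding
        r += 1
    if r % 2 or pow(a, r // 2, N) == N - 1:
        return None                        # unlucky choice of a; try another
    p = gcd(pow(a, r // 2) - 1, N)
    return p, N // p

print(factor_via_period(15, 7))  # textbook case -> (3, 5)
```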

Jun 18, 2020

OpenAI’s New Text Generator Writes Even More Like a Human

Posted by in categories: information science, robotics/AI

The data came from Common Crawl, a non-profit that scans the open web every month and downloads content from billions of HTML pages, then makes it available in a special format for large-scale data mining. In 2017 the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011, and has petabytes of data in over 40 different languages. The OpenAI team applied some filtering techniques to improve the overall quality of the data, including adding curated datasets like Wikipedia.

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
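
Below is a compact NumPy sketch of the scaled dot-product self-attention behind that description; the sentence length, embedding size, and random vectors are illustrative, and this is the generic mechanism rather than OpenAI’s code.

```python
# Scaled dot-product self-attention: every word scores every other word,
# and those scores become the weights of a weighted mix of word vectors.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # every word attends to every word at once
    weights = softmax(scores, axis=-1)  # "attention scores" become weights
    return weights @ V, weights         # weighted mix of the other word vectors

rng = np.random.default_rng(0)
n_words, d_model = 5, 8                 # a 5-word sentence, 8-dim embeddings
X = rng.normal(size=(n_words, d_model))
out, weights = attention(X, X, X)       # self-attention: Q = K = V = X
print(weights.round(2))                 # attention weights; each row sums to 1
```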

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.

Continue reading “OpenAI’s New Text Generator Writes Even More Like a Human” »

Jun 16, 2020

The Higgs Boson –“Gateway” to the Dark Universe?

Posted by in categories: cosmology, information science, particle physics

The cosmos contains a Higgs field—similar to an electric field—generated by Higgs bosons in the vacuum. Particles interact with the field to gain energy and, through Albert Einstein’s iconic equation E=mc², mass. The Standard Model of particle physics, although successful at describing elementary particles and their interactions at low energies, does not include a viable dark-matter particle. Its only possible candidates, neutrinos, do not have the right properties to explain the observed dark matter.

“One particularly interesting possibility is that these long-lived dark particles are coupled to the Higgs boson in some fashion—that the Higgs is actually a portal to the dark world. We know for sure there’s a dark world, and there’s more energy in it than there is in ours. It’s possible that the Higgs could actually decay into these long-lived particles,” said LianTao Wang, a University of Chicago physicist, in 2019. Wang was referring to the Higgs boson, the last holdout particle in physicists’ grand theory of how the universe works; its discovery at the LHC in 2012 filled the final gap in the Standard Model of fundamental particles and forces. Since then, the Standard Model has stood up to every test, yielding no hints of new physics.
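
One commonly studied way to write such a coupling down, shown here only as a generic benchmark from the literature and not necessarily the specific model Wang has in mind, is a “Higgs portal” term in which a new scalar S interacts with the Standard Model only through the Higgs doublet H:

```latex
% Generic scalar Higgs-portal interaction (benchmark form; an assumption, not Wang's model)
\mathcal{L} \supset -\frac{\lambda_{hs}}{2}\,\bigl(H^{\dagger}H\bigr)\,S^{2}
```

If the new particle is light enough, a term of this form lets the Higgs decay into pairs of such particles, which is the kind of exotic decay the quoted passage alludes to.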

The dark world makes up more than 95 percent of the universe, but scientists only know it exists from its effects—“like a poltergeist you can only see when it pushes something off a shelf.” We know dark matter is there because, like the poltergeist, we can see the effect of its gravity, which keeps galaxies from flying apart.

Continue reading “The Higgs Boson –‘Gateway’ to the Dark Universe?” »
