
What Is Quantum Computing (Quantum Computers Explained)

This video is the ninth in a multi-part series discussing computing and the second discussing non-classical computing. In this video, we’ll be discussing what quantum computing is, how it works and the impact it will have on the field of computing.

[0:28–6:14] Starting off, we’ll discuss what quantum computing is: more specifically, the basics of quantum mechanics and how quantum algorithms run on quantum computers.
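As a rough illustration of the basics the video covers, the sketch below simulates a single qubit in Python with NumPy: a Hadamard gate places the qubit in an equal superposition of 0 and 1, and repeated measurement reproduces the 50/50 statistics. This is a hand-rolled state-vector toy for illustration only, not material from the video.

```python
import numpy as np

# Single-qubit state vector, starting in |0> = [1, 0]
state = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>) / sqrt(2)
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ state

# Measurement probabilities are the squared amplitudes
probs = np.abs(state) ** 2
print("P(0), P(1) =", probs)          # -> [0.5, 0.5]

# Sampling 1000 measurements reproduces the 50/50 statistics
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print("measured 1s:", samples.sum())  # roughly 500
```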

[6:14–9:42] Following that, we’ll look at the advantages quantum computing offers over classical computers in terms of the P vs NP problem and optimization problems, and how this relates to AI.

[9:42–14:00] To conclude, we’ll discuss current quantum computing initiatives to reach quantum supremacy and ways you can access the power of quantum computers now!

Thank you to the patron(s) who supported this video ➤

Wyldn Pearson

One chip to rule them all: It natively runs all types of AI software

We tend to think of AI as a monolithic entity, but it has actually developed along multiple branches. One of the main branches involves performing traditional calculations but feeding the results into another layer that takes input from multiple calculations and weighs them before performing its calculations and forwarding those on. Another branch involves mimicking the behavior of traditional neurons: many small units communicating in bursts of activity called spikes, and keeping track of the history of past activity.
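For concreteness, here is a minimal Python/NumPy sketch of the two branches described above: a conventional layer that weighs the outputs of many upstream calculations before forwarding them, and a leaky integrate-and-fire unit that communicates in spikes and keeps a memory of past activity. Both are toy illustrations, not the Tianjic design.

```python
import numpy as np

# Branch 1: a conventional neural-network layer. Each output weighs
# the results of many upstream calculations before forwarding them.
def dense_layer(inputs, weights, bias):
    return np.maximum(0.0, weights @ inputs + bias)  # ReLU activation

# Branch 2: a leaky integrate-and-fire neuron. It accumulates input
# over time (keeping a history of past activity) and emits a spike
# whenever its membrane potential crosses a threshold.
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for current in input_current:
        potential = leak * potential + current   # decay plus new input
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                      # reset after a spike
        else:
            spikes.append(0)
    return spikes

x = np.array([0.2, 0.7, 0.1])
w = np.random.default_rng(1).normal(size=(4, 3))
print(dense_layer(x, w, np.zeros(4)))
print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))
```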

Each of these, in turn, has different branches based on the structure of its layers and communications networks, types of calculations performed, and so on. Rather than being able to act in a manner we would recognize as intelligent, many of these are very good at specialized problems, like pattern recognition or playing poker. And processors that are meant to accelerate the performance of the software can typically only improve a subset of them.

That last division may have come to an end with the development of Tianjic by a large team of researchers primarily based in China. Tianjic is engineered so that its individual processing units can switch from spiking communications back to binary and perform a large range of calculations, in almost all cases faster and more efficiently than a GPU can. To demonstrate the chip’s abilities, the researchers threw together a self-driving bicycle that ran three different AI algorithms on a single chip simultaneously.

Here’s how researchers are making machine learning more efficient and affordable for everyone

The research and development of neural networks is flourishing thanks to recent advancements in computational power, the discovery of new algorithms, and an increase in labelled data. Before the current explosion of activity in the space, the practical applications of neural networks were limited.

While much of the recent research has allowed for broad application, the heavy computational requirements of machine learning models still keep them from truly entering the mainstream. Now, emerging algorithms are on the cusp of pushing neural networks into more conventional applications through dramatically increased efficiency.
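The article does not name specific algorithms, but one common route to the efficiency gains it describes is quantization: storing and computing with low-precision numbers instead of 32-bit floats. A minimal sketch of 8-bit weight quantization, offered only as a generic example of the idea:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights onto 8-bit integers plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(f"4x smaller storage, mean absolute error {error:.5f}")
```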

How to Hack a Face: From Facial Recognition to Facial Recreation

Given that going viral on the Internet is often cyclical, it should come as no surprise that an app that made its debut in 2017 has once again surged in popularity. FaceApp applies various transformations to the image of any face, but the option that ages facial features has been especially popular. However, the fun has been accompanied by controversy; with biometric systems replacing access passwords, is it wise to freely offer up our image and our personal data? The truth is that the face is ceasing to be as non-transferable as it once was, and in just a few years it could be more hackable than a lifelong password.

Our countenance is the most recognisable key to social relationships. We might have doubts when hearing a voice on the phone, but never when looking at the face of a familiar person. In the 1960s, a handful of pioneering researchers began training computers to recognise human faces, although it was not until the 1990s that this technology really began to take off. Facial recognition algorithms have improved to such an extent that since 1993 their error rate has been halved every two years. When it comes to recognising unfamiliar faces in laboratory experiments, today’s systems outperform human capabilities.
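Modern facial recognition systems typically work by mapping each face image to a numerical embedding vector and comparing vectors; two images of the same person land close together. The article does not go into this, so the snippet below is only a generic illustration, with the embedding network treated as an assumed black box and the threshold value made up for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors would come from a trained network
# (roughly: embedding = model(face_image)); here they are stand-ins.
enrolled_face = np.random.default_rng(42).normal(size=128)
probe_face = enrolled_face + np.random.default_rng(1).normal(scale=0.1, size=128)

THRESHOLD = 0.8  # in practice tuned on validation data
score = cosine_similarity(enrolled_face, probe_face)
print("match" if score > THRESHOLD else "no match", round(score, 3))
```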

Nowadays these systems are among the most widespread applications of Artificial Intelligence (AI). Every day, our laptops, smartphones and tablets greet us by name as they recognise our facial features, but at the same time, the uses of this technology have set off alarm bells over the invasion of privacy. In China, the world leader in facial recognition systems, the pairing of this technology with surveillance cameras to identify even pedestrians has been viewed in the West as another step towards the Big Brother dystopia, the eye of the all-watching state that George Orwell portrayed in 1984.

Facebook funds AI mind-reading experiment

Facebook has announced a breakthrough in its plan to create a device that allows people to type just by thinking.

It has funded a study that developed machine-learning algorithms capable of turning brain activity into speech.

It worked with epilepsy patients who had already had recording electrodes placed on their brains to assess the origins of their seizures ahead of surgery.
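The study’s actual models are not described here, so the following is only a schematic of the general idea: treat windows of recorded brain activity as feature vectors and train a classifier to map them to candidate words. All names and data below are placeholders, not Facebook’s method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: each row stands in for a window of electrode
# recordings, each label for the word spoken or heard in that window.
n_windows, n_features = 200, 64
X = rng.normal(size=(n_windows, n_features))
words = rng.integers(0, 4, size=n_windows)  # 4 candidate words

decoder = LogisticRegression(max_iter=1000).fit(X, words)
print("decoded word id:", decoder.predict(X[:1])[0])
```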

[hep-th/0510126] On Closed String Tachyon Dynamics

We study the condensation of closed string tachyons as a time-dependent process. In particular, we study tachyons whose wave functions are either space-filling or localized in a compact space, and whose masses are small in string units; our analysis is otherwise general and does not depend on any specific model. Using world-sheet methods, we calculate the equations of motion for the coupled tachyon-dilaton system, and show that the tachyon follows geodesic motion with respect to the Zamolodchikov metric, subject to a force proportional to its beta function and friction proportional to the time derivative of the dilaton.
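In equation form, the abstract’s statement can be summarised schematically as follows: the tachyon couplings T^i move along geodesics of the Zamolodchikov metric G_ij, driven by a force proportional to the beta function and damped by the rolling dilaton. The display below is a paraphrase of that sentence, with conventions and coefficients not taken from the paper.

```latex
% Schematic equation of motion for the coupled tachyon-dilaton system,
% paraphrasing the abstract; signs and coefficients are indicative only.
\ddot{T}^{i} + \Gamma^{i}_{\;jk}\,\dot{T}^{j}\dot{T}^{k}
  \;+\; c_{1}\,\dot{\Phi}\,\dot{T}^{i}
  \;=\; -\,c_{2}\,G^{ij}\,\beta_{j}(T),
\qquad
\Gamma^{i}_{\;jk}\ \text{built from the Zamolodchikov metric } G_{ij}.
```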

An Israeli Scientist Paves the Way to Alzheimer’s Cure, One Algorithm at a Time


CTech – When chemistry Nobel laureate Michael Levitt met his wife two years ago, he didn’t know it would lead to a wonderful friendship with a young Israeli scientist. When Israeli scientist Shahar Barbash decided to found a startup with the aim of cutting down the time needed to develop new medicine, he didn’t know that a friend’s wedding would help him score a meeting with a man many want to meet but few do. But Levitt’s wife is an old friend of Barbash’s parents, and the rest, as they say, is history.

“One of the joys of being an old scientist is to encourage extraordinary young ones,” Levitt, an American-British-Israeli biophysicist and a professor at Stanford University since 1987, said in a recent interview with Calcalist. He might have met Barbash because his wife knew his family, but that is not enough to make him go into business with someone, Levitt said. “I got on board because his vision excited me, even though I thought it would be very hard to realize.”

Virginia Tech researchers lead breakthrough in quantum computing

Abstract: The large, error-correcting quantum computers envisioned today could be decades away, yet experts are vigorously trying to come up with ways to use existing and near-term quantum processors to solve useful problems despite limitations due to errors or “noise.”

A key envisioned use is simulating molecular properties. In the long run, this can lead to advances in materials improvement and drug discovery. But not with noisy calculations confusing the results.

Now, a team of Virginia Tech chemistry and physics researchers has advanced quantum simulation by devising an algorithm that can more efficiently calculate the properties of molecules on a noisy quantum computer. Virginia Tech College of Science faculty members Ed Barnes, Sophia Economou, and Nick Mayhall recently published a paper in Nature Communications detailing the advancement.
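The paper’s specific algorithm is not reproduced here, but it belongs to the family of variational quantum eigensolvers (VQE), in which a classical optimizer tunes the parameters of a short quantum circuit so as to minimise the expected energy of a molecular Hamiltonian, an approach that tolerates some noise. A toy, purely classical simulation of that loop, with a stand-in 2x2 Hamiltonian, looks like this:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "molecular" Hamiltonian (2x2 Hermitian matrix for illustration).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz_state(theta):
    """One-parameter trial state, mimicking a shallow quantum circuit."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(params):
    psi = ansatz_state(params[0])
    return float(psi @ H @ psi)  # expectation value <psi|H|psi>

# The classical optimizer adjusts the circuit parameter to minimise the energy.
result = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H).min()
print(f"VQE estimate: {result.fun:.4f}, exact ground state: {exact:.4f}")
```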

Why ‘upgrading’ humanity is a transhumanist myth


Though some computer engineers claim to know what human consciousness is, many neuroscientists say that we’re nowhere close to understanding what it is — or its source.

In this video, bestselling author Douglas Rushkoff gives the “transhumanist myth” — the belief that A.I. will replace humans — a reality check. Is it hubristic to upload people’s minds to silicon chips, or re-create their consciousness with algorithms, when we still know so little about what it means to be human?

You can read more about Rushkoff’s perspective on this issue in his new book, Team Human.

Microsoft, Google and the Artificial Intelligence Race

Both Microsoft and Google want to be central to the development of the thinking machine.


The decision by Microsoft to invest $1 billion in OpenAI, a company co-founded by Elon Musk, brings closer the time when machines threaten to replace humans in the tasks they perform today.

OpenAI, which was founded just four years ago, has pioneered a range of technologies that push the frontiers of massive data processing beyond the physical and computational limits that governed such developments for generations.

Now, with the investment from Microsoft, the pace of technological change is likely to accelerate rapidly. Today, Artificial Intelligence is at the level of what is known as ‘weak AI’ and relies on humans to create the algorithms that allow for the crunching of massive amounts of data to produce new and often predictive results. Artificial General Intelligence, or Strong AI, will herald a new era in which robots will essentially be able to think for themselves.