


Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.

“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn and his colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.

“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, Anton Zeilinger of the University of Vienna and their colleagues have refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.


There is a big confluence between AI and social media. It is a two-way relationship: AI not only affects social media, but social media also plays a great role in the development of AI.

AI is developed through data, large volumes of data (big data), and one of the easiest ways to generate and source data at this scale is from the content and interactions on social media.

Most social media platforms operate at scale, so for tasks such as monitoring or censoring what is being posted, the administrators of these platforms have to rely on automation and AI for management and policing.
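To make that concrete, here is a minimal sketch of an automated moderation decision. Real platforms use trained classifiers over far richer signals; the blocklist, the `toxicity_score` input, and the action names below are all illustrative assumptions, not any platform's actual API.

```python
# Toy moderation pipeline: a hard keyword rule plus a (model-produced)
# toxicity score decide whether a post is removed, escalated, or allowed.
# BLOCKLIST terms and thresholds are made up for illustration.

BLOCKLIST = {"spamlink", "scamoffer"}  # hypothetical known-abuse terms

def moderate(post: str, toxicity_score: float, threshold: float = 0.8) -> str:
    """Return an action for a post given a classifier's toxicity score."""
    words = set(post.lower().split())
    if words & BLOCKLIST:
        return "remove"   # hard rule: post contains a known abuse term
    if toxicity_score >= threshold:
        return "review"   # borderline: escalate to a human moderator
    return "allow"

print(moderate("check this scamoffer now", 0.1))  # remove
print(moderate("borderline rant", 0.9))           # review
print(moderate("nice photo!", 0.05))              # allow
```

The point of the two-stage design is scale: cheap deterministic rules handle the obvious cases, and only the uncertain middle band consumes human reviewer time.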

AI algorithms such as sentiment analysis and recommendation engines (used by Facebook and YouTube to suggest posts based on the AI's understanding of what you will like) are an integral part of any social platform's architecture.
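As a rough sketch of the recommendation idea: score each candidate item by how well its tags overlap with a user's interest profile, then surface the top scorers. Production engines at companies like Facebook or YouTube use learned embeddings and many more signals; the weights, tags, and function names here are invented for illustration.

```python
# Content-based recommendation sketch: rank items by the sum of the
# user's interest weights over each item's tags. All data is made up.

def score(user_interests: dict, item_tags: set) -> float:
    """Dot-product-style match between a user profile and an item's tags."""
    return sum(user_interests.get(tag, 0.0) for tag in item_tags)

def recommend(user_interests: dict, items: list, k: int = 2) -> list:
    """Return the ids of the k highest-scoring items."""
    ranked = sorted(items,
                    key=lambda it: score(user_interests, it["tags"]),
                    reverse=True)
    return [it["id"] for it in ranked[:k]]

user = {"robotics": 0.9, "space": 0.7, "cooking": 0.1}
items = [
    {"id": "a", "tags": {"space", "robotics"}},  # score 1.6
    {"id": "b", "tags": {"cooking"}},            # score 0.1
    {"id": "c", "tags": {"space"}},              # score 0.7
]
print(recommend(user, items))  # ['a', 'c']
```

The same scoring loop, run continuously over fresh content, is what produces the endless personalized feed the answer goes on to describe.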

AI is integral to how and when adverts are delivered to you on social media. AI also shapes the engagement on your posts and ensures that people who are most likely interested in your topics or communities get recommended to you as connections, because engagement is a key goal for every social media platform.

So as you can see, AI plays a very critical role in social media. Beyond this, it is also important to mention that not all of AI's effects on social media are positive. For example, recommendation engines ensure a never-ending supply of suggested content that can keep you engrossed in social media, spending time unproductively.

After the program was first revealed in 2019, Will Roper, then the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, said he wanted to see operational demonstrations within two years. The latest test flight of the Skyborg-equipped Avenger shows the service has clearly hit that benchmark.

The General Atomics Avenger was used in experiments with another autonomy system in 2020, developed as part of the Defense Advanced Research Projects Agency’s (DARPA) Collaborative Operations in Denied Environment (CODE) program that sought to develop drones that could demonstrate “collaborative autonomy,” or the ability to work cooperatively.

Over the past few decades, roboticists have created increasingly advanced and sophisticated robotic systems. While some of these systems are highly efficient and have achieved remarkable results, they still perform far more poorly than humans on several tasks, including those that involve grasping and manipulating objects.

Researchers from Guangdong University of Technology, Politecnico di Milano, the University of Sussex and the Bristol Robotics Laboratory (BRL) at the University of the West of England have recently developed a model that could help to improve robot manipulation. This model, presented in a paper published in IEEE Transactions on Industrial Informatics, draws inspiration from how humans adapt their manipulation strategies to the task they are trying to complete.

“Humans have the remarkable ability to deal with and complete dynamic tasks, such as curving, cutting and assembly, optimally and compliantly,” Professor Chenguang Yang, the paper’s corresponding author, who works at BRL, told TechXplore. “Although these tasks are easy for humans, they are quite challenging for robots to perform, even advanced ones.”

Back in May, China became just the third nation to land on the surface of Mars when it touched down with its Tianwen-1 probe. Packed aboard was the country’s first interplanetary rover, named Zhurong, which can be seen and heard making its very first movements on the Red Planet in newly released recordings.

China’s Tianwen-1 mission set off for Mars last July and came to land on a plain in the planet’s northern hemisphere called Utopia Planitia following a 10-month journey. The Zhurong rover remained aboard the lander module for around a week surveying its surroundings and checking its instruments, before rolling down to the dusty surface to begin its explorations.

Recordings gathered by the China National Space Administration and shared by the state-funded broadcaster CCTV show the rover’s start to life on Mars in an intriguing new light. One clip includes the first audio ever collected by a Chinese Mars rover: an onboard recording device captured the sounds of the engine starting, of the Martian winds, and of the machinery of the robot as it made its way down to the surface.

Amazon announced Thursday that it plans to develop new technology for its autonomous delivery vehicles in Helsinki, Finland.

The Seattle-headquartered tech giant said in a blog post that it is setting up a new “Development Center” to support Amazon Scout, which is a fully electric autonomous delivery robot that is being tested in four U.S. locations.

Two dozen engineers will be based at the Amazon Scout Development Center in Helsinki initially, the company said, adding that they will be focused on research and development.