
A model for the automatic extraction of content from websites and apps

Content management systems or CMSs are the most popular tool for creating content on the internet. In recent years, they have evolved to become the backbone of an increasingly complex ecosystem of websites, mobile apps and platforms. In order to simplify processes, a team of researchers from the Internet Interdisciplinary Institute (IN3) at the Universitat Oberta de Catalunya (UOC) has developed an open-source model to automate the extraction of content from CMSs. Their associated research is published in Research Challenges in Information Science.

The open-source model is a fully functional scientific prototype that extracts the data structure and libraries of each CMS and generates a piece of software that acts as an intermediary between the content and the so-called front end (the final application the user interacts with). Because the entire process runs automatically, it avoids manual errors and scales well: it can be repeated many times without increasing its cost.
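To make the idea concrete, here is a minimal sketch of the pattern the prototype automates. All names and the toy schema are assumptions for illustration, not the researchers' actual code: given a data structure extracted from a CMS, an intermediary layer is generated per content type, so any front end can consume the content without knowing the CMS internals.

```python
# Illustrative sketch (hypothetical schema and data, not the UOC prototype):
# generate one read-only "endpoint" per content type, exposing only the
# fields declared in the schema extracted from the CMS.

# A toy "extracted" schema: content type -> field names
extracted_schema = {
    "article": ["title", "body", "published_at"],
    "author": ["name", "bio"],
}

# Simulated raw CMS rows (in practice these would come from the CMS database)
cms_rows = {
    "article": [
        {"title": "Hello", "body": "Sample body", "published_at": "2022-06-01"},
    ],
}

def make_endpoint(content_type, fields):
    """Build a function that returns rows trimmed to the schema's fields."""
    def endpoint():
        rows = cms_rows.get(content_type, [])
        return [{f: row.get(f) for f in fields} for row in rows]
    return endpoint

# Generate the whole intermediary automatically, one endpoint per type
api = {ct: make_endpoint(ct, fields) for ct, fields in extracted_schema.items()}

print(api["article"]())
```

Because the intermediary is derived mechanically from the extracted schema, regenerating it for another CMS, or after a schema change, costs nothing extra, which is the scalability claim in the article.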

I, Chatbot: The perception of consciousness in conversational AI

So how can LaMDA provide responses that might be perceived by a human user as conscious thought or introspection? Ironically, this is due to the corpus used to train LaMDA and the associativity between potential human questions and possible machine responses. It all boils down to probabilities. The question is how those probabilities evolve such that a rational human interrogator can be confused as to the functionality of the machine.
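The "boils down to probabilities" point can be sketched in a few lines. The probability values below are made up for illustration; a real model learns distributions over hundreds of thousands of tokens, but the mechanism is the same: the machine samples from a distribution shaped by what humans wrote, so its answers sound human without involving any inner experience.

```python
import random

# Toy next-word probabilities, with made-up values standing in for what a
# large language model would learn from its training corpus.
next_word_probs = {
    ("are", "you"): {"conscious": 0.05, "sure": 0.40, "there": 0.30, "okay": 0.25},
}

def sample_next(context, rng):
    """Pick the next word by sampling from the learned distribution."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = list(probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_next(("are", "you"), rng))
```

Note that "conscious" is a possible continuation simply because humans have written such sentences; the model reproduces the statistics of its corpus, not a mental state.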

This brings us to the need for improved “explainability” in AI. Complex artificial neural networks, the basis for a variety of useful AI systems, are capable of computing functions that are beyond the capabilities of a human being. In many cases, the neural network incorporates learning functions that enable adaptation to tasks outside the initial application for which the network was developed. However, the reasons why a neural network provides a specific output in response to a given input are often unclear, even indiscernible, leading to criticism of human dependence upon machines whose intrinsic logic is not properly understood. The size and scope of training data also introduce bias to the complex AI systems, yielding unexpected, erroneous, or confusing outputs to real-world input data. This has come to be referred to as the “black box” problem where a human user, or the AI developer, cannot determine why the AI system behaves as it does.

The case of LaMDA’s perceived consciousness appears no different from the case of Tay’s learned racism. Without sufficient scrutiny and understanding of how AI systems are trained, and without sufficient knowledge of why AI systems generate their outputs from the provided input data, it is possible for even an expert user to be uncertain as to why a machine responds as it does. Unless the need for an explanation of AI behavior is embedded throughout the design, development, testing, and deployment of the systems we will depend upon tomorrow, we will continue to be deceived by our inventions, like the blind interrogator in Turing’s game of deception.

Teaching Physics to AI Can Allow It To Make New Discoveries All on Its Own

Incorporating established physics into neural network algorithms helps them to uncover new insights into material properties

According to researchers at Duke University, incorporating known physics into machine learning algorithms can help the enigmatic black boxes attain new levels of transparency and insight into the characteristics of materials.

Researchers used a sophisticated machine learning algorithm in one of the first efforts of its type to identify the characteristics of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
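The general idea behind "incorporating known physics" can be illustrated with a hedged sketch; this is not the Duke team's actual code, and the conservation rule and numbers are assumptions chosen for simplicity. A physics-informed loss adds a penalty whenever the model's prediction violates a relationship that physics guarantees, here a toy rule that reflected plus transmitted power must equal the incident power for a lossless material.

```python
# Hedged sketch of a physics-informed loss (illustrative, not the study's code).

def data_loss(pred, target):
    """Ordinary squared error against measured values."""
    return (pred["reflected"] - target["reflected"]) ** 2 + \
           (pred["transmitted"] - target["transmitted"]) ** 2

def physics_residual(pred, incident_power=1.0):
    """Known physics (lossless case assumed): R + T = incident power."""
    return (pred["reflected"] + pred["transmitted"] - incident_power) ** 2

def total_loss(pred, target, weight=10.0):
    """Data fit plus a weighted penalty for violating the physical law."""
    return data_loss(pred, target) + weight * physics_residual(pred)

pred = {"reflected": 0.3, "transmitted": 0.6}      # violates R + T = 1
target = {"reflected": 0.35, "transmitted": 0.65}
print(total_loss(pred, target))
```

Training against such a loss steers the network toward physically consistent predictions, which is one reason physics-informed models can be more transparent than unconstrained black boxes.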

Elon Musk’s Twitter content policy will make raising a ‘troll army’ more expensive

Elon Musk is finally revealing some specifics of his Twitter content moderation policy. Assuming he completes the $44 billion buyout he initiated in April, the tech billionaire and Tesla CEO appears open to a “hands-on” approach — something many didn’t expect, according to an initial report from The Verge.

During an all-hands meeting with Twitter’s staff on Thursday, in reply to an employee-submitted question about his intentions for content moderation, Musk said he thinks users should be allowed to “say pretty outrageous things within the law”.

Elon Musk views Twitter as a platform for ‘self-expression’

According to the report, this echoes a distinction initially popularized by Renée DiResta, an authority on disinformation. But, during the meeting, Musk said he wants Twitter to impose a stricter standard against bots and spam, adding that “it needs to be much more expensive to have a troll army.”

All these images were generated by AI

Text-to-image AI systems are going to be huge.

Google has released its latest text-to-image AI system, named Imagen, and the results are extremely impressive. However, the company warns the system is also prone to racial and gender biases, and isn’t releasing Imagen publicly.

Automating semiconductor research with machine learning

The semiconductor industry has been growing steadily ever since its first steps in the mid-twentieth century and, thanks to the high-speed information and communication technologies it enabled, it has driven the rapid digitalization of society. Today, with global energy demand tightening, there is a growing need for faster, more integrated, and more energy-efficient semiconductor devices.

However, modern semiconductor processes have already reached the nanometer scale, and the design of novel high-performance materials now involves the structural analysis of semiconductor nanofilms. Reflection high-energy electron diffraction (RHEED) is a widely used analytical method for this purpose. RHEED can be used to determine the structures that form on the surface of thin films at the atomic level and can even capture structural changes in real time as the thin film is being synthesized!

Unfortunately, for all its benefits, RHEED is sometimes hindered by the fact that its output patterns are complex and difficult to interpret. In virtually all cases, a highly skilled experimenter is needed to make sense of the huge amounts of data that RHEED can produce in the form of diffraction patterns. But what if we could make machine learning do most of the work when processing RHEED data?
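One plausible way machine learning could take over routine RHEED interpretation, offered here as an assumption rather than the study's actual pipeline, is to reduce each diffraction pattern to a small feature vector (for example, streak spacing and intensity statistics) and label new patterns by comparison with expert-labeled references. All feature values and labels below are hypothetical.

```python
# Hypothetical illustration: nearest-neighbor labeling of RHEED patterns
# from hand-picked features, so an expert labels a reference set once and
# routine frames are classified automatically.
import math

# Labeled reference patterns: (feature vector) -> surface structure label
references = [
    ((1.0, 0.8), "2x1 reconstruction"),
    ((2.1, 0.3), "amorphous"),
    ((1.1, 0.9), "2x1 reconstruction"),
]

def classify(features):
    """Return the label of the nearest labeled reference pattern."""
    def dist(ref):
        return math.dist(features, ref[0])
    return min(references, key=dist)[1]

print(classify((1.05, 0.85)))
```

In practice a study like this would more likely train a neural network directly on the diffraction images, but the workflow is the same: experts label a reference set once, and the model handles the flood of real-time data.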

MEDUSA ‘dual robot’ drone flies and dives to collect aquatic data

Researchers at Imperial College London have developed a new dual drone that can both fly through air and land on water to collect samples and monitor water quality. They designed it to make aquatic monitoring faster and more versatile.

The ‘dual robot’ drone, tested at Empa and the aquatic research institute Eawag in Switzerland, has successfully measured water in lakes for signs of microorganisms and algal blooms, which can pose hazards to human health, and could in the future be used to monitor climate clues like temperature changes in Arctic seas.

The unique design, called Multi-Environment Dual robot for Underwater Sample Acquisition (MEDUSA), could also facilitate monitoring and maintenance of offshore infrastructure such as subsea pipelines and floating wind turbines.

Meet The High-Tech Urban Farmer Growing Vegetables Inside Hong Kong’s Skyscrapers

Hong Kong, a densely populated city where agriculture space is limited, is almost totally dependent on the outside world for its food supply. More than 90% of the skyscraper-studded city’s food, especially fresh produce like vegetables, is imported, mostly from mainland China. “During the pandemic, we all noticed that the productivity of locally grown vegetables is very low,” says Gordon Tam, cofounder and CEO of vertical farming company Farm66 in Hong Kong. “The social impact was huge.”

Tam estimates that only about 1.5% of vegetables in the city are locally produced. But he believes vertical farms like Farm66, with the help of modern technologies, such as IoT sensors, LED lights and robots, can bolster Hong Kong’s local food production—and export its know-how to other cities. “Vertical farming is a good solution because vegetables can be planted in cities,” says Tam in an interview at the company’s vertical farm in an industrial estate. “We can grow vegetables ourselves so that we don’t have to rely on imports.”

Tam says he started Farm66 in 2013 with his cofounder Billy Lam, who is COO of the company, as a high-tech vertical farming pioneer in Hong Kong. “Our company was the first to use energy-saving LED lighting and wavelength technologies in a farm,” he says. “We found out that different colors on the light spectrum help plants grow in different ways. This was our technological breakthrough.” For example, red LED light will make the stems grow faster, while blue LED light encourages plants to grow larger leaves.
