
Engineers use artificial intelligence to capture the complexity of breaking waves

Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.
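As an illustration only, the kind of hybrid approach described here can be sketched as fitting a learned correction on top of a conventional equation. Everything below (the feature names, the linear correction term, and the placeholder "wave-tank" arrays) is an assumption for demonstration, not the MIT team's actual model.

```python
# Illustrative sketch: learn a data-driven correction to a conventional
# wave model's breaking-steepness estimate. Placeholder data throughout.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for wave-tank measurements (placeholders, not real data):
# each row is one wave; columns are features a conventional model might use,
# e.g. wave height, wavelength, water depth.
features = rng.uniform(size=(200, 3))

# Conventional-model estimate of steepness just before breaking (placeholder form).
baseline_estimate = 0.4 * features[:, 0] / (features[:, 1] + 0.1)

# "Observed" steepness from the tank experiments (synthetic placeholder).
observed = baseline_estimate + 0.05 * features[:, 2] + rng.normal(0, 0.01, 200)

# Fit a small correction term on top of the conventional equation,
# here by ordinary least squares on the residuals.
residual = observed - baseline_estimate
X = np.column_stack([features, np.ones(len(features))])
coeffs, *_ = np.linalg.lstsq(X, residual, rcond=None)

# Corrected prediction = traditional equation + learned correction.
corrected = baseline_estimate + X @ coeffs
print("mean abs error before:", np.mean(np.abs(observed - baseline_estimate)))
print("mean abs error after: ", np.mean(np.abs(observed - corrected)))
```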

Towards practical and robust DNA-based data archiving using the yin–yang codec system

The yin–yang codec (YYC) transcoding algorithm is proposed to improve the practicality and robustness of DNA data storage.


Given these results, YYC offers the opportunity to generate DNA sequences that are highly amenable to both the ‘writing’ (synthesis) and ‘reading’ (sequencing) processes while maintaining a relatively high information density. This is crucially important for improving the practicality and robustness of DNA data storage. The DNA Fountain and YYC algorithms are the only two known coding schemes that combine transcoding rules and screening into a single process to ensure that the generated DNA sequences meet the biochemical constraints. The comparison hereinafter thus focuses on the YYC and DNA Fountain algorithms because of the similarity in their coding strategies.
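To make the idea of biochemical constraints concrete, here is a minimal, hypothetical screening function of the sort such coding schemes apply to candidate sequences. The specific thresholds (40–60% GC content, homopolymer runs of at most 4 nt) are common rules of thumb assumed for illustration, not the published YYC parameters.

```python
# Hypothetical screening of candidate DNA sequences against simple
# biochemical constraints: GC content and maximum homopolymer run length.
def passes_constraints(seq: str, gc_min=0.4, gc_max=0.6, max_homopolymer=4) -> bool:
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if not gc_min <= gc <= gc_max:
        return False
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_homopolymer:
            return False
    return True

print(passes_constraints("ACGTACGTACGT"))     # balanced GC, short runs -> True
print(passes_constraints("AAAAAGGGGGTTTTT"))  # long homopolymer runs -> False
```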

The robustness of data storage in DNA is primarily affected by errors introduced during ‘writing’ and ‘reading’. There are two main types of errors: random and systematic errors. Random errors are often introduced by synthesis or sequencing errors in a few DNA molecules and can be redressed by mutual correction using an increased sequencing depth. Systematic errors refer to mutations observed in all DNA molecules, including insertions, deletions and substitutions, which are introduced during synthesis and PCR amplification (referred to as common errors), or the loss of partial DNA molecules. In contrast to substitutions (single-nucleotide variations, SNVs), insertions and deletions (indels) change the length of the DNA sequence encoding the data and thus introduce challenges regarding the decoding process. In general, it is difficult to correct systematic errors, and thus they will lead to the loss of stored binary information to varying degrees.

To test the robustness baseline of YYC against systematic errors, we randomly introduced the three most commonly seen errors into the DNA sequences at an average rate ranging from 0.01% to 1% and analysed the corresponding data recovery rate in comparison with the most well-recognized coding scheme (DNA Fountain), without introducing an error correction mechanism. The results show that, in the presence of either indels (Fig. 2a) or SNVs (Fig. 2b), YYC exhibits better data recovery performance in comparison with DNA Fountain, with the data recovery rate remaining fairly steady at a level above 98%. This difference between DNA Fountain and other algorithms, including YYC, occurs because uncorrectable errors can affect the retrieval of other data packets through error propagation when using the DNA Fountain algorithm.
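The robustness test described above can be approximated with a simple simulation: inject substitutions or indels into synthetic sequences at a chosen per-base rate and count how many sequences survive unchanged. The sketch below is only illustrative; it is not the authors' evaluation pipeline, and it measures raw sequence survival rather than recovery of the encoded file.

```python
# Illustrative simulation: inject SNVs or indels at a given per-base rate
# and report the fraction of sequences that remain identical to the originals.
import random

BASES = "ACGT"

def inject_errors(seq: str, rate: float, kind: str, rng: random.Random) -> str:
    out = []
    for base in seq:
        if rng.random() < rate:
            if kind == "snv":
                out.append(rng.choice([b for b in BASES if b != base]))
            elif kind == "insertion":
                out.append(base)
                out.append(rng.choice(BASES))
            elif kind == "deletion":
                continue  # drop this base
        else:
            out.append(base)
    return "".join(out)

rng = random.Random(42)
originals = ["".join(rng.choice(BASES) for _ in range(200)) for _ in range(1000)]

for rate in (0.0001, 0.001, 0.01):  # 0.01% to 1% per-base error rate
    mutated = [inject_errors(s, rate, "snv", rng) for s in originals]
    intact = sum(m == o for m, o in zip(mutated, originals))
    print(f"rate={rate:.2%}: {intact / len(originals):.1%} sequences intact")
```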

Elon Musk acquires Twitter for roughly $44 billion

The company’s board and the Tesla CEO hammered out the final details of his $54.20 a share bid.

The agreement marks the close of a dramatic courtship and a sharp change of heart at the social-media network.

Elon Musk acquired Twitter for $44 billion on Monday, the company announced, giving the world’s richest person command of one of the world’s most influential social media sites — which serves as a platform for political leaders, a sounding board for experts across industries and an information hub for millions of everyday users.

The acquisition followed weeks of evangelizing on the necessity of “free speech,” as the Tesla CEO seized on Twitter’s role as the “de facto town square” and took umbrage at content moderation efforts he views as an escalation toward censorship. He said he sees Twitter as essential to the functioning of democracy and said the economics are not a concern.

Ownership of Twitter gives Musk power over hugely consequential societal and political issues, perhaps most significantly the ban on former president Donald Trump that the website enacted in response to the Jan. 6 riots.

Under the terms of the deal, Twitter will become a private company and shareholders will receive $54.20 per share, the company said in a news release. The deal is expected to close this year.

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” Musk said in the release. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential — I look forward to working with the company and the community of users to unlock it.”

Quantifying Human Consciousness With the Help of AI

A new deep learning algorithm is able to quantify arousal and awareness in humans at the same time.


Source: CORDIS

New research supported by the EU-funded HBP SGA3 and DoCMA projects is giving scientists new insight into human consciousness.

Led by Korea University and projects’ partner University of Liège (Belgium), the research team has developed an explainable consciousness indicator (ECI) to explore different components of consciousness.

Growing Anomalies at the Large Hadron Collider Raise Hopes

Amid the chaotic chains of events that ensue when protons smash together at the Large Hadron Collider in Europe, one particle has popped up that appears to go to pieces in a peculiar way.

All eyes are on the B meson, a yoked pair of quark particles. Having caught whiffs of unexpected B meson behavior before, researchers with the Large Hadron Collider beauty experiment (LHCb) have spent years documenting rare collision events featuring the particles, in hopes of conclusively proving that some novel fundamental particle or effect is meddling with them.

In their latest analysis, first presented at a seminar in March, the LHCb physicists found that several measurements involving the decay of B mesons conflict slightly with the predictions of the Standard Model of particle physics — the reigning set of equations describing the subatomic world. Taken alone, each oddity looks like a statistical fluctuation, and they may all evaporate with additional data, as has happened before. But their collective drift suggests that the aberrations may be breadcrumbs leading beyond the Standard Model to a more complete theory.

Quasiparticles used to generate millions of truly random numbers a second

This could lead to a truly random number generator making things much more secure.


Random numbers are crucial for computing, but our current algorithms aren’t truly random. Researchers at Brown University have now found a way to tap into the fluctuations of quasiparticles to generate millions of truly random numbers per second.

Random number generators are key parts of computer software, but technically they don’t quite live up to their name. Algorithms that generate these numbers are still deterministic, meaning that anyone with enough information about how they work could potentially find patterns and predict the numbers produced. These pseudo-random numbers suffice for low-stakes uses like gaming, but for scientific simulations or cybersecurity, truly random numbers are important.
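A quick way to see the difference: a seeded software generator reproduces exactly the same "random" sequence every time, whereas bytes drawn from the operating system's entropy pool are not reproducible from a seed. The snippet below uses Python's standard library purely for illustration.

```python
# Demonstration of why algorithmic generators are "pseudo" random:
# the same seed always reproduces the same sequence, whereas os.urandom
# draws on entropy collected by the operating system.
import os
import random

a = random.Random(1234)
b = random.Random(1234)
print([a.randint(0, 9) for _ in range(8)])  # same seed ->
print([b.randint(0, 9) for _ in range(8)])  # same "random" numbers

print(os.urandom(8).hex())  # OS entropy pool; not reproducible from a seed
```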

In recent years scientists have turned to the strange world of quantum physics for true randomization, using photons to generate strings of random ones and zeroes or tapping into the quantum vibrations of diamond. And for the new study, the Brown scientists tried something similar.

Scientists create algorithm to assign a label to every pixel in the world, without human supervision

Labeling data can be a chore. It’s the main source of sustenance for computer-vision models; without it, they’d have a lot of difficulty identifying objects, people, and other important image characteristics. Yet producing just an hour of tagged and labeled data can take a whopping 800 hours of human time. Machines develop a higher-fidelity understanding of the world as they better perceive and interact with their surroundings. But they need more help.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University have attempted to solve this problem plaguing vision models by creating “STEGO,” an algorithm that can jointly discover and segment objects without any human labels at all, down to the pixel.

STEGO learns something called “semantic segmentation”—fancy speak for the process of assigning a label to every pixel in an image. Semantic segmentation is an important skill for today’s computer-vision systems because images can be cluttered with objects. Even more challenging is that these objects don’t always fit into literal boxes; algorithms tend to work better for discrete “things” like people and cars as opposed to “stuff” like vegetation, sky, and mashed potatoes. A previous system might simply perceive a nuanced scene of a dog playing in the park as just a dog, but by assigning every pixel of the image a label, STEGO can break the image into its main ingredients: a dog, sky, grass, and its owner.
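For readers unfamiliar with the term, the output of semantic segmentation is simply a class label for every pixel. The toy snippet below illustrates that output format with made-up class names and random scores; it is not STEGO's method, which learns such assignments without human labels.

```python
# Toy illustration of semantic segmentation output: every pixel gets a class
# label. The class names and the random "logits" are placeholders.
import numpy as np

CLASSES = ["dog", "sky", "grass", "person"]  # hypothetical label set
rng = np.random.default_rng(0)

# Pretend these are per-pixel class scores from a segmentation model,
# shape (height, width, num_classes).
logits = rng.normal(size=(4, 6, len(CLASSES)))

# A per-pixel label map is just the argmax over the class dimension.
label_map = logits.argmax(axis=-1)
print(label_map)                 # integer class id for every pixel
print(CLASSES[label_map[0, 0]])  # name of the class at pixel (0, 0)
```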
