Hello and welcome! My name is Anton and in this video, we will talk about theories potentially explaining early stars and what they were like.
Links:
https://arxiv.org/abs/2301.

0:00 Intro to first stars in the universe.
1:05 The most massive stars we've discovered so far.
2:35 Hypothetical early stars and how they differ from modern stars.
4:05 Why stars have mass limits.
5:15 How ancient gas was different.

The team’s measurement of the proton’s radius was 0.73 femtometer, even smaller than the 0.84-femtometer electric charge radius. In either case, it is almost 10,000 times smaller than a hydrogen atom.

To be clear, this apparent 13 percent shrinkage is not a blow to the electric charge radius measurements and not as shocking as it may seem. The two measurements are complementary and work together to offer a big picture view of the little proton. Because they measure different distributions of matter, the discrepancy does not challenge our understanding of the proton the same way its previous 4 percent shrinkage did. Instead it adds to that understanding.
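As a quick sanity check (not from the article itself), the quoted 13 percent follows directly from the two radii above:

```python
# Quick arithmetic check of the quoted "13 percent" shrinkage, using the
# two radii reported above (values in femtometers). This is only a
# back-of-the-envelope check, not part of the MINERvA analysis.
weak_radius = 0.73     # radius reported by the team, fm
charge_radius = 0.84   # electric charge radius, fm

shrinkage = (charge_radius - weak_radius) / charge_radius
print(f"relative difference: {shrinkage:.1%}")  # about 13.1%
```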

“The thing that makes this measurement really interesting is not whether or not it agrees with the electron measurements of the electromagnetic proton radius but the fact that it didn’t have to agree at all,” says Deborah Harris, co-spokesperson for the MINERvA experiment. This is because the way neutrinos interact with up quarks versus down quarks is very different from the way electrons interact with those quarks. Instead of an electromagnetic interaction, neutrinos interact via a different force called the weak force. (But don’t let its name fool you: the weak force is quite strong across subatomic distances!)

With each of these splittings, the universe completely remolded itself. New particles arose to replace ones that could exist only under the earlier, more extreme conditions. The fundamental quantum fields of space-time that dictate how particles and forces interact with one another reconfigured themselves. We do not know how smoothly or roughly these phase transitions took place, but it is perfectly possible that with each splitting, the universe settled into multiple identities at once.

This fracturing isn’t as exotic as it sounds. It happens in all kinds of phase transitions, like water turning into ice. Different patches of water can form ice crystals with different orientations. All the water turns into ice regardless, but different domains can end up with differing molecular arrangements. Where those domains meet, walls and imperfections form, and that is where the fracturing appears.

Physicists are especially interested in the so-called GUT phase transition of our universe. GUT is short for “grand unified theory,” a hypothetical model of physics that merges the strong nuclear force with electromagnetism and the weak nuclear force. These theories are just beyond the reach of current experiments, so physicists and astronomers turn to the conditions of the early universe to study this important transition.

Summary: Text-to-image generation deep learning models like OpenAI’s DALL-E 2 can be a promising new tool for image augmentation, generation, and manipulation in a healthcare setting.

Source: JMIR Publications

A new paper published in the Journal of Medical Internet Research describes how generative models such as DALL-E 2, a novel deep learning model for text-to-image generation, could represent a promising future tool for image generation, augmentation, and manipulation in health care.
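For context, a minimal sketch of how such text-to-image generation can be invoked programmatically, assuming the pre-1.0 `openai` Python package and an API key in the environment; the prompt and image size are illustrative and not taken from the paper:

```python
# Minimal sketch of text-to-image generation with OpenAI's image API
# (DALL-E). Assumes the `openai` Python package (pre-1.0 interface) and
# an OPENAI_API_KEY set in the environment; the prompt is illustrative
# and not taken from the JMIR paper.
import openai

response = openai.Image.create(
    prompt="synthetic chest X-ray, frontal view, for data augmentation",
    n=1,                 # number of images to generate
    size="512x512",      # supported sizes: 256x256, 512x512, 1024x1024
)
print(response["data"][0]["url"])  # URL of the generated image
```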

The NLP community has recently found that pretrained language models can accomplish many real-world tasks with only minor fine-tuning or direct prompting. Performance also tends to improve as model size grows, and modern language models, continuing this trend, often contain hundreds of billions of parameters. Several research groups have released pretrained LLMs with more than 100B parameters. Most recently, the BigScience project made BLOOM available, a 176-billion-parameter model that supports 46 natural languages and 13 programming languages. Public availability makes 100B+ parameter models more accessible, yet most academics and practitioners still find them difficult to use because of their memory and computational costs: for inference alone, OPT-175B and BLOOM-176B require more than 350 GB of accelerator memory, and fine-tuning requires even more.
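A rough check of that figure, assuming the weights are stored in 16-bit precision (an assumption; the exact number depends on the storage format):

```python
# Back-of-the-envelope memory estimate for serving a 175B/176B-parameter
# model, assuming 16-bit (2-byte) weights; optimizer states and
# activations for fine-tuning would add substantially more.
params = 176e9          # BLOOM-176B parameter count
bytes_per_param = 2     # fp16 / bf16

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # ~352 GB, matching the >350 GB figure
```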

As a result, running these LLMs typically requires several powerful GPUs or a multi-node cluster. Both options are quite expensive, which restricts the potential research topics and applications of these language models. Several recent efforts seek to democratize LLMs by “offloading” model parameters to slower but more affordable memory and executing the model on the accelerator layer by layer. By loading parameters from RAM just in time for each forward pass, this technique makes it possible to run LLMs with a single low-end accelerator. Although offloading has high latency, it can process several tokens in parallel. For instance, producing one token with BLOOM-176B takes at least 5.5 seconds with the fastest RAM offloading setup and 22 seconds with the fastest SSD offloading setup.
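A minimal sketch of the layer-by-layer offloading idea described above, written with PyTorch; the module sizes are toy stand-ins, and real offloading systems also overlap data transfers with computation:

```python
# Minimal sketch of layer-by-layer offloading: weights live in CPU RAM
# and each block is copied to the accelerator only for the duration of
# its forward pass, so only one block occupies GPU memory at a time.
import torch
import torch.nn as nn

class OffloadedStack(nn.Module):
    def __init__(self, layers: nn.ModuleList, device="cuda"):
        super().__init__()
        self.layers = layers          # parameters stay on the CPU
        self.device = device

    @torch.no_grad()
    def forward(self, hidden):
        hidden = hidden.to(self.device)
        for layer in self.layers:
            layer.to(self.device)     # load this block's weights just in time
            hidden = layer(hidden)
            layer.to("cpu")           # free accelerator memory for the next block
        return hidden

# Toy usage: 4 small blocks standing in for the dozens of blocks in a real LLM.
blocks = nn.ModuleList(nn.Linear(1024, 1024) for _ in range(4))
model = OffloadedStack(blocks, device="cuda" if torch.cuda.is_available() else "cpu")
out = model(torch.randn(1, 1024))
```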

Additionally, many machines lack sufficient RAM to offload 175B parameters. LLMs can also be made more widely available through public inference APIs, where one party hosts the model and lets others query it over the internet. This is a fairly user-friendly option, since the API owner handles most of the engineering work. However, APIs are often too rigid for research use: they do not let researchers change the model’s control flow or access its internal states. Moreover, given current API pricing, the cost of some research projects can be prohibitive. In this study, the authors investigate a different approach, motivated by prior work on crowdsourced training of neural networks from scratch.
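For illustration, querying a publicly hosted BLOOM model through the Hugging Face Inference API looks roughly like this (the access token is a placeholder); note that, as the paragraph above points out, the caller only receives generated text and never sees the model's internal states:

```python
# Illustration of the "public inference API" option mentioned above:
# querying a hosted BLOOM model through the Hugging Face Inference API.
# The token is a placeholder; the prompt is illustrative.
import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": "Bearer hf_..."}  # placeholder access token

payload = {"inputs": "The first stars in the universe were"}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # generated text only; no access to internal states
```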