
My chapter, titled ‘’, has been published in the ‘Handbook of Real-Time Computing’ by Springer Nature. The chapter provides information on satellite communication networks for different orbits, along with use cases, scenarios, link budget analyses, history, and future developments.


Software-defined radio (SDR) is one of many new technologies being adopted in satellite communication to lower both operational and capital costs: it reduces the amount of radio equipment in the communication chain and allows remote configuration and regular firmware updates. An SDR essentially replaces most of the radio equipment with a single computing device running software that performs the functions of the replaced hardware. SDRs are being introduced not only in terrestrial gateways and ground stations; the next generations of LEO and GEO satellites are already adopting the technology. Previously, a satellite radio link was limited to the configuration of the radio equipment installed during manufacturing, which could not be modified over the satellite’s lifespan.

Figure 15 shows a generic digital communication transmit and receive RF chain at the physical layer, with binary, sampled, and analogue data streams. At the transmit end, binary data collected from the data source arrives from the higher layers; it is then channel-coded, modulated into a sampled waveform, and converted to an analogue waveform by a digital-to-analogue converter before being sent to the antenna for transmission over the air interface at the required transmit power. At the receive end, the wireless signal is received as an analogue waveform, converted to samples for demodulation, decoded back to binary, and passed to the data sink for the upper layers. The coding/decoding and modulation/demodulation stages, commonly referred to as MOD/COD, are programmable functions and can be implemented by an SDR on a processing device. This can be done at ground stations, at the gateway, at user terminals, and on the satellite itself using on-board processing.
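As a rough sketch of how these programmable MOD/COD stages look in software, the example below (my own illustration, not taken from the chapter; it assumes only NumPy, a toy rate-1/3 repetition code, and BPSK modulation) walks a bit stream through coding, modulation, pulse shaping, an additive-noise channel, demodulation, and decoding:

```python
# Minimal SDR-style MOD/COD sketch, assuming NumPy only. The repetition code
# and BPSK mapping are illustrative stand-ins for the coding and modulation
# schemes a real SDR would load as software modules.
import numpy as np

rng = np.random.default_rng(0)

# --- Transmit chain ------------------------------------------------------
bits = rng.integers(0, 2, 64)               # binary data from higher layers
coded = np.repeat(bits, 3)                  # toy rate-1/3 repetition code
symbols = 1.0 - 2.0 * coded                 # BPSK mapping: 0 -> +1, 1 -> -1
sps = 8                                     # samples per symbol
waveform = np.repeat(symbols, sps)          # rectangular pulse shaping
# (a real chain would continue with a DAC, up-conversion, and a power amplifier)

# --- Channel -------------------------------------------------------------
rx = waveform + 0.5 * rng.standard_normal(waveform.size)   # additive noise

# --- Receive chain -------------------------------------------------------
matched = rx.reshape(-1, sps).mean(axis=1)  # integrate over each symbol period
hard = (matched < 0).astype(int)            # BPSK hard decision
decoded = (hard.reshape(-1, 3).sum(axis=1) >= 2).astype(int)  # majority vote

print("bit errors:", int(np.sum(decoded != bits)))
```

In an actual SDR, each of these stages would be a replaceable software block, which is what makes remote reconfiguration and firmware updates of the radio link possible.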

Circa 2020


We discuss the possibility of predicting the QCD axion mass in the context of grand unified theories. We investigate the implementation of the DFSZ mechanism in renormalizable SU(5) theories. In the simplest theory, the axion mass can be predicted with good precision in the range m_a = (2–16) neV, and there is a strong correlation between the predictions for the axion mass and the proton decay rates. In this context, we predict upper bounds for the proton decay channels with antineutrinos, τ(p → K⁺ν̄) ≲ 4 × 10³⁷ yr and τ(p → π⁺ν̄) ≲ 2 × 10³⁶ yr. This theory can be considered the minimal realistic grand unified theory with the DFSZ mechanism, and it can be fully tested by proton decay and axion experiments.

New satellite images show that Tesla has significantly expanded the rooftop solar array at Gigafactory Nevada, which the company aims to make the world’s biggest.

In 2017, Tesla announced plans for a giant 70 MW rooftop array at Gigafactory Nevada, which would be the largest in the world by a wide margin. The project has been lagging for a long time. Tesla finally started construction of the solar array in 2018 and expanded it over the following years, but it never grew to anywhere near the size Tesla had been talking about.

Last summer, the automaker said that it had deployed 3.2 MW at the site. At the time, Tesla also changed its goal from 70 MW to 24 MW on the rooftop of the factory, which is itself now smaller than originally planned. The company said it believes this would still be enough to be the largest rooftop solar deployment in the world.

While the Large Hadron Collider (LHC) at CERN is well known for smashing protons together, it is actually the quarks and gluons inside the protons—collectively known as partons—that are really interacting. Thus, in order to predict the rate of a process occurring in the LHC—such as the production of a Higgs boson or a yet-unknown particle—physicists have to understand how partons behave within the proton. This behavior is described in parton distribution functions (PDFs), which describe what fraction of a proton’s momentum is taken by its constituent quarks and gluons.

Knowledge of these PDFs has traditionally come from lepton–proton colliders, such as HERA at DESY. These machines use point-like particles, such as electrons, to directly probe the partons within the proton. Their research revealed that, in addition to the well-known up and down valence quarks that are inside a proton, there is also a sea of quark–antiquark pairs in the proton. This sea is theoretically made of all types of quarks, bound together by gluons. Now, studies of the LHC’s proton–proton collisions are providing a detailed look into PDFs, in particular the proton’s gluon and quark-type composition.

Physicists at CERN’s ATLAS Experiment have just released a new paper combining LHC and HERA data to determine PDFs. The result uses ATLAS data from several different Standard Model processes, including the production of W and Z bosons, pairs of top quarks, and hadronic jets (collimated sprays of particles). It was traditionally thought that the strange-quark PDF would be suppressed by a factor of ~2 compared with those of the lighter up- and down-type quarks, because of the strange quark’s larger mass. The new paper confirms a previous ATLAS result, which found that the strange quark is not substantially suppressed at small momentum fractions, and extends this result to show how the suppression sets in at higher momentum fractions (x ≳ 0.05), as shown in Figure 1.
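To make the strange-quark suppression concrete, here is a short sketch, assuming the LHAPDF 6 Python bindings and a locally installed public PDF set (the set name and scale below are illustrative choices, not those of the ATLAS fit), that evaluates the ratio (s + s̄)/(ū + d̄) at a few momentum fractions; values near 1 indicate no suppression, while values near 0.5 match the traditional expectation:

```python
# Sketch: query a PDF set numerically with the LHAPDF 6 Python bindings.
# The set name and scale are illustrative, not the ones used by ATLAS.
import lhapdf

pdf = lhapdf.mkPDF("NNPDF31_nnlo_as_0118", 0)   # central member of an example set
Q = 1.38                                        # GeV, i.e. Q^2 of about 1.9 GeV^2

for x in (0.01, 0.05, 0.1):
    # xfxQ(pid, x, Q) returns x*f(x, Q); PDG ids: 3 = s, -3 = sbar, -2 = ubar, -1 = dbar
    strange = pdf.xfxQ(3, x, Q) + pdf.xfxQ(-3, x, Q)
    light_sea = pdf.xfxQ(-2, x, Q) + pdf.xfxQ(-1, x, Q)
    print(f"x = {x:5.3f}   (s + sbar)/(ubar + dbar) = {strange / light_sea:.2f}")
```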

The pace of replacing humans with robots across Chinese industry has accelerated rapidly in the past couple of years, with observations on the ground suggesting that most industrial-robotics and intelligent-manufacturing integrated-service companies at least doubled their annual sales in 2021.


A pandemic-led manufacturing export boom, concerns over China’s rapidly ageing society, and a desire to save money have all contributed to the trend of replacing workers with machines.

Catherine Labadia, an archaeologist at the State Historic Preservation Office, was on vacation when the first text came in from fellow archaeologist David Leslie. The picture on her phone was of a channel flake, a stone remnant associated with the creation of spear points used by Paleoindians, the first humans known to enter the region more than 10,000 years ago. “I responded, ‘Is this what I think it is?’” “It most definitely is,” texted back Leslie, who was on site at the Avon excavation with Storrs-based Archaeological and Historical Services (AHS). “It was all mind-blowing emojis after that,” Labadia says.

But that first picture was just the beginning. By the time the excavation on Old Farms Road was completed after a whirlwind three months in the winter of 2019, the AHS team had uncovered 15,000 Paleoindian artifacts and 27 cultural features. Prior to this dig, according to Leslie, only 10–15 cultural features — non-movable items such as hearths and posts that can provide behavioral and environmental insights — had been found in all of New England.

The site is significant for more than the quantity and types of artifacts and features found. Early analyses are already changing the way archaeologists think of the Paleoindian period, an epoch spanning from about 13,000 to 10,000 years ago of which little is known due to relatively scant archaeological evidence. The forests of that time, for instance, were likely made up of more diverse species of trees than previously thought. And that opens up new interpretations for what Paleoindians ate. Remains found at the excavation also suggest — for the first time — that Paleoindians and mastodons might have overlapped in the region.

How does our visual system achieve this apparent stability? New research by UC Berkeley scientists reveals that our brains are constantly uploading rich visual stimuli. We see earlier versions instead of seeing the latest image because our brain’s refresh time is about 15 seconds.…

The Neuro-Network.

Our brains take a little while to update, study

Objects appear to be stable despite constant changes in their retinal images. This happens due to many sources o…

I’ve posted some videos of her before. But here she says, at 3:52, that she thinks stopping the aging process is far-fetched.


Dr. Morgan Levine, a professor who specializes in the biology of aging, answers the internet’s burning questions about aging. Is there any way to stop aging? Is aging a disease? Do you age slower in space? Dr. Levine answers all these questions and many more!
