The recently detected gravitational waves are a muddled mix of various sources, new study finds.
Rocket propulsion technology has progressed in leaps and bounds since the first weaponized rockets of the Chinese and Mongolian empires. Those early weapons were little more than rocket-powered arrows and spears, but they laid the foundations for our exploration of space. Liquid propellants, ion engines and solar sails have all hit the headlines as we strive for more efficient methods of travel, but one team has taken the next leap with a palm-sized thruster system that could boost future tiny spacecraft across the gulf of space.
Palm-sized thrusters are quite different from the gargantuan rockets we are used to, such as the 110-metre-tall Saturn V that took the Apollo astronauts to the Moon. The ATHENA thrusters are instead designed to maneuver and propel CubeSats and small satellites once they are already in space, rather than to lift rockets from the surface of the Earth.
The team, led by Daniel Perez Grande, CEO and co-founder of IENAI Space in Spain, has called its palm-sized thruster “ATHENA”: not the catchiest title, but it neatly represents what the device does, standing for the Adaptable THruster based on Electrospray powered NAnotechnology. The technology has been developed for ESA and, following a successful design stage, a prototype should be available by the end of 2024 if all goes to plan.
The first satellites capable of providing direct-to-cellular service via SpaceX’s Starlink network and T-Mobile’s cellular network have been sent into orbit aboard a SpaceX Falcon 9 rocket.
Six of the cell-capable satellites were among a batch of 21 Starlink satellites launched from Vandenberg Space Force Base in California at 7:44 p.m. PT Tuesday. The satellites were deployed successfully, and the rocket’s first-stage booster made a routine landing on a drone ship in the Pacific Ocean.
SpaceX plans to launch hundreds of the upgraded satellites in the months ahead, with the aim of beginning satellite-enabled texting later this year. 4G LTE satellite connectivity for voice and data via unmodified mobile devices would follow in 2025, pending regulatory approval.
A new thermal transistor can control heat as precisely as an electrical transistor can control electricity.
By Rachel Nuwer
Scientists have fused brain-like tissue with electronics to make an ‘organoid neural network’ that can recognise voices and solve a complex mathematical problem. Their invention extends neuromorphic computing – the practice of modelling computers after the human brain – to a new level by directly including brain tissue in a computer.
The system was developed by a team of researchers from Indiana University, Bloomington; the University of Cincinnati and Cincinnati Children’s Hospital Medical Centre, Cincinnati; and the University of Florida, Gainesville. Their findings were published on December 11.
Artificial Intelligence.
AI and echoes of the Enlightenment.
Personal Perspective: How today’s Cognitive Age is a second Enlightenment.
Exploring pre-trained models for research often poses a challenge in Machine Learning (ML) and Deep Learning (DL). Visualizing the architecture of these models usually demands setting up the specific framework they were trained on, which can be quite laborious. Without this framework, comprehending the model’s structure becomes cumbersome for AI researchers.
Some tools do enable model visualization, but they require setting up the entire framework the model was trained in. This process can be time-consuming and intricate, deterring quick access to model architectures.
One solution to simplify the visualization of ML/DL models is the open-source tool called Netron. This tool functions as a viewer specifically designed for neural networks, supporting frameworks like TensorFlow Lite, ONNX, Caffe, Keras, etc. Netron bypasses the need to set up individual frameworks by directly presenting the model architecture, making it accessible and convenient for researchers.
The absence, not the presence, of the most important element for life in planets’ atmospheres may be what we should be seeking.
Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have set out a new principle to explain how the brain adjusts connections between neurons during learning. This new insight may guide further research on learning in brain networks and may inspire faster and more robust learning algorithms in artificial intelligence.
The essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in output. In artificial intelligence, this is achieved by backpropagation: the output error is propagated backwards through the network to determine how each of the model’s parameters should be adjusted to reduce it. Many researchers believe that the brain employs a similar learning principle.
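The error-driven parameter adjustment described above can be sketched in the simplest possible case: a single linear “neuron” trained by gradient descent. This is a minimal illustration of the principle, not the models used in the study; the training data below are invented for the example.

```python
# Minimal sketch: fit y = w*x + b by backpropagating the output error
# to the two parameters via gradient descent on squared error.
def train(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            y = w * x + b        # forward pass: compute the output
            err = y - target     # output error
            # backward pass: gradient of 0.5*err**2 w.r.t. w and b
            w -= lr * err * x
            b -= lr * err
    return w, b

# Learn the mapping y = 2x + 1 from three examples
w, b = train([(0, 1), (1, 3), (2, 5)])
```

After training, `w` and `b` converge close to 2 and 1: each parameter is nudged in proportion to how much it contributed to the error, which is the core idea that backpropagation scales up to deep networks.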
However, the biological brain remains superior to current machine learning systems. For example, we can learn new information after seeing it just once, while artificial systems need to be trained hundreds of times on the same material. Furthermore, we can learn new information while retaining the knowledge we already have, whereas learning new information in artificial neural networks often interferes with existing knowledge and degrades it rapidly.