
Baltimore, Md. (January 11, 2022) – In a breakthrough that holds significant promise for early diagnosis and better treatment of psychiatric illness, researchers have for the first time used neurons derived from human stem cells to predict cardinal features of a psychiatric illness, such as psychosis and cognitive deficits, in patients with schizophrenia.

A study published today in the Proceedings of the National Academy of Sciences (PNAS) by scientists at the Lieber Institute for Brain Development/Maltz Research Laboratories (LIBD) shows that the clinical symptoms of individuals with schizophrenia can be predicted by the activity of neurons derived from the patients’ own stem cells.

This connection — between the physiology of cells and symptoms like delusions, hallucinations and altered cognition — has never been made before. That is, no other study has demonstrated a robust association between neuronal models derived from a patient’s stem cells and clinically relevant features of the psychiatric disorder in the same person.

Scientists have discovered how to reverse time inside a quantum system. From a teenager taking a stylish DeLorean to 88 miles per hour to two-hearted alien creatures flying a blue police box, our fiction has been filled with fun stories about time travel. Now, however, it looks like time travel is no longer a matter of science fiction but of science fact.

Engineers from UNSW Sydney have developed a miniature and flexible soft robotic arm which could be used to 3D print biomaterial directly onto organs inside a person’s body.

3D bioprinting is a process whereby biomedical parts are fabricated from so-called bioink to construct natural tissue-like structures.

Bioprinting is predominantly used for research purposes such as tissue engineering and in the development of new drugs — and normally requires the use of large 3D printing machines to produce cellular structures outside the living body.


Researchers have discovered that channeling ions into defined pathways in perovskite materials improves the stability and operational performance of perovskite solar cells. The finding paves the way for a new generation of lighter, more flexible, and more efficient solar cell technologies suitable for practical use.

Perovskite materials, which are defined by their crystalline structure, are better at absorbing light than silicon is. That means perovskite solar cells can be thinner and lighter than silicon solar cells without sacrificing the cell’s ability to convert light into electricity.

“That opens the door to a host of new technologies, such as flexible, lightweight solar cells, or layered solar cells (known as tandems) that can be far more efficient than the solar harvesting technology used today in so-called solar farms,” says Aram Amassian, corresponding author of a paper on the discovery. “There’s interest in integrating perovskite materials into silicon solar cell technologies, which would improve their efficiency from 25% to 40% while also making use of existing infrastructure.” Amassian is a professor of materials science and engineering at North Carolina State University.
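As a rough back-of-the-envelope illustration of that efficiency jump (a sketch of my own, not a figure from the study), the snippet below compares the electrical power delivered per square metre at the quoted 25% and 40% efficiencies, assuming the standard test irradiance of 1000 W/m²:

```python
# Illustrative comparison (not from the paper): power per square metre for a
# 25%-efficient silicon cell vs a 40%-efficient perovskite-on-silicon tandem,
# under an assumed standard test irradiance of 1000 W/m^2.

IRRADIANCE_W_PER_M2 = 1000  # standard test condition, assumed for illustration

def power_per_m2(efficiency: float) -> float:
    """Electrical power output per square metre at the given conversion efficiency."""
    return efficiency * IRRADIANCE_W_PER_M2

silicon = power_per_m2(0.25)  # ~250 W/m^2
tandem = power_per_m2(0.40)   # ~400 W/m^2

print(f"Silicon-only (25%): {silicon:.0f} W/m^2")
print(f"Perovskite tandem (40%): {tandem:.0f} W/m^2")
print(f"Relative gain: {tandem / silicon - 1:.0%}")  # ~60% more power per unit area
```

At those figures, the tandem cell would deliver roughly 60% more power from the same area.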

The future of artificial intelligence is the question on all of our minds right now. AI has the potential to replace us in every conceivable industry, leading to a possible dystopia. Humanity is suddenly gripped by this massive anxiety, but this is also our greatest opportunity.

Will this be the end of meaning?

Or is this humanity’s greatest gift in the fulfillment of a larger purpose?

What will be the fate of human value?

At the 2023 IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco this week, Irvine, Calif.–based Syntiant detailed the NDP200, an ultralow-power chip designed to run neural networks that monitor video and wake other systems when it spots something important. That may be its core purpose, but the NDP200 can also mow down the spawn of hell, if properly trained.

The exponentially expanding scale of deep learning models is a major force advancing the state of the art and a growing source of worry over the energy consumption, speed, and therefore feasibility of massive-scale deep learning. Researchers from Cornell recently examined Transformer architectures, in particular how their performance improves dramatically when they are scaled up to billions or even trillions of parameters, driving an exponential rise in deep learning compute. These large-scale Transformers are a popular but expensive solution for many tasks, because digital hardware’s energy efficiency has not kept up with the rising FLOP requirements of cutting-edge deep learning models. They also perform increasingly impressively in other domains, such as computer vision, graphs, and multi-modal settings.

They also exhibit transfer-learning abilities that let them generalize quickly to new tasks, sometimes in a zero-shot setting with no additional training required. The cost of these models and their general machine-learning capabilities are major driving forces behind the development of hardware accelerators for efficient, fast inference. Deep learning hardware has previously been developed extensively in digital electronics, including GPUs, mobile accelerator chips, FPGAs, and large-scale AI-dedicated accelerator systems. Optical neural networks, among other approaches, have been proposed as solutions offering better efficiency and latency than neural-network implementations on digital computers. At the same time, there is also significant interest in analog computing.

Even though these analog systems are susceptible to noise and error, neural-network operations can frequently be carried out optically at much lower cost, with the dominant expense typically being the electrical overhead of loading the weights and data, which is amortized over large linear operations. Accelerating large-scale models like Transformers is therefore especially promising: in theory, the energy per MAC scales asymptotically better than in digital systems. The authors show that Transformers exploit this scaling more and more as they grow. They sampled operations from a real Transformer language model and ran them on a real spatial-light-modulator-based experimental system, then used the results to build a calibrated simulation of a full Transformer running optically, demonstrating that Transformers can run on these systems despite their noise and error characteristics.
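To make the amortization argument concrete, here is a minimal numerical sketch (with energy figures I have assumed for illustration, not taken from the paper): for a d-by-d matrix-vector product, a digital accelerator pays a roughly fixed energy per multiply-accumulate (MAC), while an optical processor mainly pays the electronic cost of loading and reading out the O(d) inputs and outputs, so its effective energy per MAC falls as roughly 1/d.

```python
# Minimal sketch of the amortization argument (illustrative numbers, not from
# the paper): digital hardware pays a fixed energy per MAC, while an optical
# processor pays for 2*d electronic load/readout operations per d x d
# matrix-vector product (weights assumed loaded once and reused across a batch),
# amortized over d*d nearly-free optical MACs.

E_MAC_DIGITAL = 1e-12    # assumed energy per digital MAC, joules (illustrative)
E_IO_ELECTRICAL = 1e-11  # assumed energy per electronic load/readout, joules (illustrative)

def energy_per_mac_digital(d: int) -> float:
    """Digital: every MAC costs the same, independent of problem size."""
    return E_MAC_DIGITAL

def energy_per_mac_optical(d: int) -> float:
    """Optical: 2*d electronic I/O operations amortized over d*d optical MACs."""
    return (2 * d * E_IO_ELECTRICAL) / (d * d)

for d in (128, 1024, 8192, 65536):
    print(f"d={d:>6}: digital {energy_per_mac_digital(d):.1e} J/MAC, "
          f"optical {energy_per_mac_optical(d):.1e} J/MAC")
```

Under these assumed numbers, the optical advantage grows linearly with layer width, which is the sense in which larger Transformers benefit more and more from this scaling.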

Physics progresses by breaking our intuitions, but we’re now at a point where further progress may require us to do away with the most intuitive and seemingly fundamental concepts of all: space and time.