In a new study, scientists have uncovered the mechanics of the blood-tumor barrier, one of the most significant obstacles to improving treatment efficacy and preventing the return of cancerous cells. The research team, led by Dr. Xi Huang, a Senior Scientist in the Developmental & Stem Cell Biology program at The Hospital for Sick Children (SickKids), lays the foundation for more effectively treating medulloblastoma, the most common malignant pediatric brain tumor.

“Despite decades of research on brain cancer, the mechanisms that govern the formation and function of the blood-tumor barrier have remained poorly understood,” says Huang, who is also a Principal Investigator at the Arthur and Sonia Labatt Brain Tumor Research Center and Canada Research Chair in Cancer Biophysics. “Our discoveries represent a breakthrough in the understanding of how the blood-tumor barrier forms and works.”

In a paper published today in Neuron, the research team identifies a way to reduce the impact of the blood-tumor barrier on medulloblastoma treatment.

We’ve already seen systems that wirelessly transmit data via patterns of flickering light. A Saudi Arabian team has created a less energy-intensive alternative that could use modulated sunlight in place of traditional Wi-Fi.

Currently in development at the King Abdullah University of Science and Technology (KAUST), the system utilizes “smart glass” elements known as Dual-cell Liquid Crystal Shutters (DLSs). These rapidly alter the polarization of sunlight passing through them, and could conceivably be used in the plate glass windows of large rooms such as offices.

The back-and-forth changes in polarization serve the same purpose as the 1s and 0s in binary code, and are reportedly not perceptible to the human eye … although tests have shown that they can be detected and decoded by smartphone cameras. By contrast, changes in the intensity of artificial light – utilized in some other proposed systems – can be visually perceived as an unpleasant flickering effect if the frequency of the changes is too low.
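The scheme amounts to a simple form of binary keying. As a rough illustration only, not KAUST's actual implementation, the Python sketch below encodes bits as one of two polarization angles and recovers them from the intensity seen through a fixed polarizing filter, per Malus's law (I = I0 cos² θ); the angle mapping and detection threshold are assumptions.

```python
import math

# Illustrative only: a toy model of polarization-shift keying, not the
# KAUST DLS system. Each bit maps to one of two polarization angles;
# a receiver behind a fixed (0-degree) polarizer sees high or low intensity.

ANGLE_FOR_BIT = {0: 90.0, 1: 0.0}   # degrees; assumed symbol mapping
I0 = 1.0                            # incident intensity (arbitrary units)

def transmit(bits):
    """Map bits to the intensity seen through a 0-degree analyzer (Malus's law)."""
    return [I0 * math.cos(math.radians(ANGLE_FOR_BIT[b])) ** 2 for b in bits]

def receive(samples, threshold=0.5):
    """Threshold the measured intensity back into bits."""
    return [1 if s > threshold else 0 for s in samples]

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert receive(transmit(message)) == message
print("decoded:", receive(transmit(message)))
```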

Today, Replit announced Ghostwriter, an AI-powered programming assistant that offers suggestions to make coding easier. It works within Replit’s online development environment and resembles GitHub Copilot in its ability to recognize and compose code in various programming languages to accelerate the development process.

According to Replit, Ghostwriter works by using a large language model trained on millions of lines of publicly available code. This baked-in data allows Ghostwriter to make suggestions based on what you’ve already typed while programming in Replit’s IDE. When you see a suggestion you like, you can “autocomplete” the code by pressing the Tab key.
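Replit has not published Ghostwriter’s internals, but the interaction it describes, send the code typed so far to a model, render the continuation as a greyed-out suggestion, and splice it in on Tab, can be sketched as follows. `suggest_completion` here is a hypothetical stand-in for the model call, not Replit’s API.

```python
# A rough sketch of the prefix -> suggestion -> accept-on-Tab loop that
# editor assistants like Ghostwriter describe. `suggest_completion` is a
# hypothetical placeholder for a large-language-model call.

def suggest_completion(prefix: str) -> str:
    """Stand-in for a model that continues the code typed so far."""
    canned = {"def fib(": "n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)"}
    return next((v for k, v in canned.items() if prefix.endswith(k)), "")

def on_keystroke(buffer: str) -> str:
    """Fetch a suggestion for the current buffer; the editor renders it greyed out."""
    return suggest_completion(buffer)

def on_tab(buffer: str, suggestion: str) -> str:
    """Pressing Tab splices the accepted suggestion into the buffer."""
    return buffer + suggestion

buf = "def fib("
ghost = on_keystroke(buf)
print(on_tab(buf, ghost))
```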

Greg Brockman, President and Co-Founder of @OpenAI, joins Alexandr Wang, CEO and Founder of Scale, to discuss the role of foundation models like GPT-3 and DALL·E 2 in research and in the enterprise. Foundation models make it possible to replace task-specific models with those that are generalized in nature and can be used for different tasks with minimal fine-tuning.

In January 2021, OpenAI introduced DALL·E, a text-to-image generation program. One year later, it introduced DALL·E 2, which generates more realistic, accurate, lower-latency images with four times greater resolution than its predecessor. At the same time, it released InstructGPT, a large language model (LLM) explicitly designed to follow instructions. InstructGPT makes it practical to leverage the OpenAI API to revise existing content, such as rewriting a paragraph of text or refactoring code.
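As a concrete illustration, a rewrite request against an InstructGPT-class model looked roughly like the sketch below, using the 0.x-era openai Python library; the model name, prompt, and parameters are illustrative choices, not prescribed by OpenAI.

```python
import os
import openai  # pip install openai (0.x-era interface shown here)

openai.api_key = os.environ["OPENAI_API_KEY"]

paragraph = (
    "The blood-tumor barrier is a major obstacle to delivering "
    "chemotherapy to brain tumors."
)

# Instruction-following completion: ask the model to rewrite existing text.
# "text-davinci-002" is used as a representative InstructGPT-class model;
# substitute whatever model your account can access.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=f"Rewrite the following paragraph for a general audience:\n\n{paragraph}",
    max_tokens=120,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```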

Before creating OpenAI, Brockman was the CTO of Stripe, which he helped build from four to 250 employees. Watch this talk to learn how foundation models enable businesses to create applications more quickly than past generations of AI tools allowed.

Thermodynamic phases governed by the strong nuclear force have been linked together using multiple theoretical tools.

Quantum chromodynamics (QCD) is the theory of the strong nuclear force. On a fundamental level, it describes the dynamics of quarks and gluons. Like more familiar systems, such as water, a many-body system of quarks and gluons can exist in very different thermodynamic phases depending on the external conditions. Researchers have long sought to map the different corners of the corresponding phase diagram. New experimental probes of QCD—first and foremost the detection of gravitational waves from neutron-star mergers—allow for a more comprehensive view of this phase structure than was previously possible. Now Tuna Demircik at the Asia Pacific Center for Theoretical Physics, South Korea, and colleagues have put together models originally used in very different contexts to push forward a global understanding of the phases of QCD [1].

Phase transitions governed by the strong force require extreme conditions such as high temperatures and high baryon densities (baryons are three-quark particles such as protons and neutrons). The region of the QCD phase diagram corresponding to high temperatures and relatively low baryon densities can be probed by colliding heavy ions. By contrast, the region associated with high baryon densities and relatively low temperatures can be studied by observing single neutron stars. For a long time, researchers lacked experimental data for the phase space between these two regions, not least because it is very difficult to create matter under neutron-star conditions in the laboratory. This difficulty still exists, although collider facilities are being constructed that are intended to produce matter at higher baryon densities than is currently possible.

Imagine the booming chords from a pipe organ echoing through the cavernous sanctuary of a massive, stone cathedral.

The sound a cathedral-goer will hear is affected by many factors, including the location of the organ, where the listener is standing, whether any columns, pews, or other obstacles stand between them, what the walls are made of, the locations of windows or doorways, etc. Hearing a sound can help someone envision their environment.

Researchers at MIT and the MIT-IBM Watson AI Lab are exploring the use of spatial acoustic information to help machines better envision their environments, too. They developed a machine-learning model that can capture how any sound in a room will propagate through the space, enabling the model to simulate what a listener would hear at different locations.
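This summary does not spell out the model’s architecture, so the sketch below only illustrates the general idea under an assumed design: an implicit neural field, i.e. an MLP that maps an (emitter, listener) position pair to a short room impulse response, which can then be convolved with any source sound. All layer sizes and the response length are assumptions, not the authors’ exact setup.

```python
import torch
import torch.nn as nn

# A minimal sketch of a spatial-acoustics model as an implicit neural
# field. This is an assumed architecture, not the MIT/IBM authors' design.

IR_LEN = 256  # number of impulse-response samples to predict (assumption)

class AcousticField(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, IR_LEN),
        )

    def forward(self, emitter_xy: torch.Tensor, listener_xy: torch.Tensor):
        # Concatenate the two positions and predict the impulse response.
        return self.net(torch.cat([emitter_xy, listener_xy], dim=-1))

model = AcousticField()
emitter = torch.tensor([[1.0, 2.0]])    # sound source position (toy values)
listener = torch.tensor([[5.0, 3.0]])   # where the listener stands
ir = model(emitter, listener)           # predicted impulse response

# Rendering: filter any dry source signal with the predicted response
# to simulate what the listener would hear at that spot.
source = torch.randn(1, 1, 1024)
heard = torch.nn.functional.conv1d(source, ir.view(1, 1, -1), padding=IR_LEN - 1)
print(heard.shape)
```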