New signal-processing algorithms have been shown to mitigate the impact of turbulence in free-space optical experiments, potentially bringing “free space” internet a step closer to reality.
The team of researchers, from Aston University’s Aston Institute of Photonic Technologies and Glasgow University, used commercially available photonic lanterns, a commercial transponder, and a spatial light modulator to emulate turbulence. By applying a successive interference cancellation digital signal processing algorithm, they achieved record results.
The findings are published in the Journal of Lightwave Technology.
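Successive interference cancellation is a general receiver technique: detect the most reliable data stream, subtract its estimated contribution from the received signal, and repeat on the residue. The numpy sketch below is our own toy zero-forcing SIC loop for a small multi-stream system, not the team’s implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))                  # known channel matrix (4 rx, 3 streams)
x = rng.choice([-1.0, 1.0], size=3)          # transmitted BPSK symbols
y = H @ x + 0.05 * rng.normal(size=4)        # received signal with mild noise

detected = np.zeros(3)
residual = y.copy()
remaining = list(range(3))
while remaining:
    # zero-forcing estimates of all still-undetected streams
    est, *_ = np.linalg.lstsq(H[:, remaining], residual, rcond=None)
    i = int(np.argmax(np.abs(est)))          # detect the most reliable stream
    k = remaining.pop(i)
    detected[k] = np.sign(est[i])            # hard BPSK decision
    residual -= H[:, k] * detected[k]        # cancel its contribution
print(np.array_equal(detected, x))           # True at this noise level
```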
In spin-based quantum processors, each quantum dot hosting a qubit is populated by exactly one electron, which requires careful tuning of each gate voltage so that it lies inside the charge-stability region (the “Coulomb diamond”) associated with the dot array. However, mapping the boundary of a multidimensional Coulomb diamond by traditional dense raster scanning would take years, so the authors develop a sparse acquisition technique that autonomously learns Coulomb-diamond boundaries from a small number of measurements. The technique performs hardware-triggered line searches in the gate-voltage space of a silicon quadruple quantum dot, with search directions proposed by an active-learning algorithm.
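The core acquisition primitive is easy to sketch: starting from a point known to lie inside the region, bisect along a proposed direction until the boundary is located to a set tolerance. A minimal numpy sketch, with a made-up ellipsoidal `inside` test standing in for the hardware-triggered charge-sensor readout:

```python
import numpy as np

def inside(v):
    """Stand-in for the triggered charge-sensor readout: True if the
    gate-voltage point v lies inside the target charge-stability region.
    Here the region is a toy ellipsoid."""
    return np.sum((v / np.array([1.0, 0.8, 1.2, 0.9])) ** 2) < 1.0

def boundary_along(origin, direction, t_max=3.0, tol=1e-3):
    """Bisection line search for the point where the ray origin + t*direction
    crosses the region boundary."""
    direction = direction / np.linalg.norm(direction)
    lo, hi = 0.0, t_max                      # inside at lo, outside at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(origin + mid * direction) else (lo, mid)
    return origin + 0.5 * (lo + hi) * direction

rng = np.random.default_rng(0)
origin = np.zeros(4)                         # a point known to be inside
# an active learner would propose informative directions; random ones here
points = [boundary_along(origin, rng.normal(size=4)) for _ in range(20)]
print(np.round(points[0], 3))
```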
How programmers turned the internet into a paintbrush. DALL-E 2, Midjourney, Imagen, explained.
Beginning in January 2021, advances in AI research have produced a plethora of deep-learning models capable of generating original images from simple text prompts, effectively extending the human imagination. Researchers at OpenAI, Google, Facebook, and others have developed text-to-image tools that they have not yet released to the public, and similar models have proliferated online in the open-source arena and at smaller companies like Midjourney.
These tools represent a massive cultural shift because they remove the requirement for technical labor from the process of image-making. Instead, they select for creative ideation, skillful use of language, and curatorial taste. The ultimate consequences are difficult to predict, but, like the invention of the camera and later the digital camera, these algorithms herald a new, democratized form of expression that will set off another explosion in the volume of imagery produced by humans. And, like other automated systems trained on historical data and internet images, they come with risks that have not been resolved.
The laws of physics do not exist, theoretical physicist Sankar Das Sarma argues in a new column published by New Scientist. While we tend to treat them as the “ultimate laws” of our universe, Das Sarma says they are merely working descriptions: mathematical equations that match parts of nature.
Both animals and humans use high-dimensional inputs (like eyesight) to accomplish various shifting survival-related objectives. A crucial aspect of this is learning via mistakes. A brute-force approach to trial and error, performing every action for every potential goal, is intractable even in the smallest settings. The difficulty of this search motivates memory-based methods for compositional thinking. These processes include, for instance, the ability to: (i) recall pertinent portions of prior experience; (ii) reassemble them into new counterfactual plans; and (iii) carry out such plans as part of a focused search strategy. Compared to sampling every action uniformly, such techniques for recycling prior successful behavior can considerably speed up trial and error. This is because the intrinsic compositional structure of real-world objectives, and the similarity of the physical laws that govern real-world settings, allow the same behavior (i.e., sequence of actions) to remain valid for many purposes and situations. What guiding principles enable memory processes to retain and reassemble experience fragments? This question is closely connected to the idea of dynamic programming (DP), which uses the principle of optimality to significantly lower the computational cost of trial and error. Informally, the principle says to treat new, complicated problems as recompositions of previously solved, smaller subproblems.
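A minimal sketch of the principle of optimality (ours, not from the paper): memoized costs-to-go on a tiny graph, so each subproblem is solved once and reused when composing larger solutions.

```python
from functools import lru_cache

# Edge costs of a tiny directed graph: node -> {neighbor: cost}.
GRAPH = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}

@lru_cache(maxsize=None)
def cost_to_go(node, goal="D"):
    """Bellman recursion: the optimal cost from `node` is the best one-step
    cost plus the already-solved optimal cost of the smaller subproblem."""
    if node == goal:
        return 0
    return min(c + cost_to_go(nxt, goal) for nxt, c in GRAPH[node].items())

print(cost_to_go("A"))  # 3, via A -> B -> C -> D; each subproblem solved once
```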
This viewpoint has recently been used to create hierarchical reinforcement learning (RL) algorithms for goal-reaching tasks. These techniques build edges between states in a planning graph using a distance regression model, compute the shortest paths across it using DP-based graph search, and then use a learning-based local policy to follow those paths. Their paper advances this line of work. The following is a summary of their contributions: they provide a strategy for long-term planning that acts directly on high-dimensional sensory data that an agent may observe on its own (e.g., images from an onboard camera). Their solution blends traditional sampling-based planning algorithms with learning-based perceptual representations to recover and reassemble previously recorded state transitions in a replay buffer.
A two-step method makes this possible. First, they learn a latent space in which the distance between two states measures how many timesteps an optimal policy needs to travel from one to the other; these contrastive representations are learned using goal-conditioned Q-values acquired through offline hindsight relabeling. Second, they threshold this learned latent distance metric to establish a neighborhood criterion across states. They then design sampling-based planning algorithms that scan the replay buffer for trajectory segments (previously recorded successions of transitions) whose endpoints are neighboring states.
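A rough sketch of the planning step, with a stand-in Euclidean embedding in place of the learned contrastive representation (the names and threshold here are hypothetical):

```python
import heapq
import numpy as np

def embed(state):
    """Stand-in embedding; in the paper this comes from goal-conditioned
    Q-values learned with offline hindsight relabeling."""
    return np.asarray(state, dtype=float)

def latent_dist(s1, s2):
    return np.linalg.norm(embed(s1) - embed(s2))

def build_graph(states, threshold=1.5):
    """Connect replay-buffer states whose latent distance falls under the
    neighborhood threshold; the edge weight approximates timesteps-to-go."""
    adj = {i: [] for i in range(len(states))}
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            d = latent_dist(states[i], states[j])
            if d < threshold:
                adj[i].append((j, d))
                adj[j].append((i, d))
    return adj

def shortest_path(adj, start, goal):
    """Plain Dijkstra (DP-based graph search) over the replay-buffer graph."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

states = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # toy recorded states
print(shortest_path(build_graph(states), 0, 4))    # [0, 1, 3, 4]
```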
Switch-Science has just announced a trio of quantum computing products that the company claims are the world’s first portable quantum computers. Sourced from SpinQ Technology, a Chinese quantum computing company based in Shenzhen, the new quantum computing products have been designed for educational purposes. The aim is to democratize access to physical quantum computing solutions that can be deployed (and redeployed) at will. But considering the actual quantum machinery on offer, none of these (which we’re internally calling “quantops”) are likely to be a part of the future of quantum.
That the new products were developed with education in mind shows in their qubit counts, which top out at three (compare that to Google’s 53-qubit Sycamore or IBM’s 433-qubit Osprey quantum processing units, both based on superconducting qubits). Three qubits is not enough for any viable, problem-solving quantum computing to take place within these machines, but it is enough for users to program and run quantum circuits, either the integrated educational ones or a single custom algorithm.
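For scale, this is the kind of circuit a three-qubit machine can execute: a GHZ entangling circuit, written here with Qiskit purely for illustration (SpinQ ships its own programming software).

```python
# A three-qubit GHZ circuit, about the largest such machines can run.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 3)
qc.h(0)                          # put qubit 0 into superposition
qc.cx(0, 1)                      # entangle qubit 1 with qubit 0
qc.cx(1, 2)                      # entangle qubit 2 with qubit 1
qc.measure(range(3), range(3))   # read all three qubits out
print(qc)                        # ASCII drawing of the circuit
```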
Self-supervised learning is a form of unsupervised learning in which a supervised learning task is constructed from raw, unlabeled data. Supervised learning is effective but usually requires a large amount of labeled data, and getting high-quality labels is time-consuming and resource-intensive, especially for sophisticated tasks like object detection and instance segmentation, where more detailed annotations are required.
Self-supervised learning aims to first learn usable representations of the data from an unlabeled pool via self-supervision, and then to refine these representations with a few labels for supervised downstream tasks such as image classification, semantic segmentation, and so on.
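One classic way to construct that supervision signal is a pretext task such as rotation prediction: rotate each unlabeled image and ask the network to classify the rotation. A minimal PyTorch sketch, our illustration rather than any specific paper’s recipe:

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each unlabeled image by 0/90/180/270 degrees; the rotation
    index becomes a free label for the pretext task."""
    rots = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = [torch.full((images.shape[0],), k) for k in range(4)]
    return torch.cat(rots), torch.cat(labels)

encoder = nn.Sequential(             # the representation to be reused later
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
pretext_head = nn.Linear(16, 4)      # discarded after pretraining

images = torch.randn(8, 3, 32, 32)   # stand-in for unlabeled data
x, y = make_rotation_batch(images)
loss = nn.functional.cross_entropy(pretext_head(encoder(x)), y)
loss.backward()                      # one self-supervised training step
```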
Self-supervised learning is at the heart of many recent advances in artificial intelligence. However, existing algorithms tend to focus on a single modality (such as images or text) and require large amounts of computing resources. Humans, by contrast, appear to learn far more efficiently than current AI, and to learn consistently from diverse types of information rather than requiring distinct learning systems for text, speech, and other modalities.
DALL-E debuted in early 2021, and its successor, DALL-E 2, was launched this year.
DALL-E and DALL-E 2 are machine-learning models created by OpenAI to produce images from language descriptions, known as prompts. Given a short phrase from the user, the system generates a realistic picture of the described scene. It comprehends language through textual descriptions and from “learning” information provided in its training datasets by users and developers.
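In practice, prompting such a model is a one-call affair. A hypothetical sketch using the OpenAI Python SDK; the model name, parameters, and response shape reflect the public API at the time of writing and may change:

```python
# Hypothetical sketch of prompting DALL-E 2 via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.images.generate(
    model="dall-e-2",
    prompt="an astronaut riding a horse in a photorealistic style",
    n=1,
    size="512x512",
)
print(response.data[0].url)  # link to the generated image
```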
Researchers have developed a new all-optical method for driving multiple highly dense nanolaser arrays. The approach could enable chip-based optical communication links that process and move data faster than today’s electronic-based devices.
“The development of optical interconnects equipped with high-density nanolasers would improve information processing in the data centers that move information across the internet,” said research team leader Myung-Ki Kim from Korea University.
“This could allow streaming of ultra-high-definition movies, enable larger-scale interactive online encounters and games, accelerate the expansion of the Internet of Things and provide the fast connectivity needed for big data analytics.”
Turing Award winner and deep learning pioneer Geoffrey Hinton, one of the original proponents of backpropagation, has argued in recent years that backpropagation does not explain how the brain works. In his NeurIPS 2022 keynote speech, Hinton proposes a new approach to neural network learning: the Forward-Forward algorithm.
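In the Forward-Forward algorithm, each layer is trained locally with two forward passes, one on real (“positive”) data and one on fabricated (“negative”) data, adjusting its weights so that a “goodness” score (the sum of squared activities) rises above a threshold for positive inputs and falls below it for negative ones. A minimal numpy sketch of one layer’s local update, our illustration rather than Hinton’s reference code:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(784, 256))  # weights of a single layer
THRESHOLD, LR = 2.0, 0.03                   # goodness threshold, step size

def goodness(h):
    return (h ** 2).sum(axis=1)             # sum of squared activities

def local_update(x, W, positive):
    """One Forward-Forward step for one layer: push goodness above the
    threshold for positive data and below it for negative data. No error
    signal is backpropagated through other layers."""
    h = np.maximum(x @ W, 0.0)              # ReLU forward pass
    sign = 1.0 if positive else -1.0
    p = 1.0 / (1.0 + np.exp(-sign * (goodness(h) - THRESHOLD)))
    grad_g = sign * (p - 1.0)               # d(logistic loss)/d(goodness)
    grad_h = 2.0 * h * grad_g[:, None]      # d(goodness)/dh = 2h; the ReLU
                                            # gate is implicit (zero where h=0)
    return W - LR * (x.T @ grad_h) / len(x)

x_pos = rng.normal(size=(32, 784))          # stand-in for real data
x_neg = rng.permuted(x_pos, axis=1)         # corrupted "negative" data
W = local_update(x_pos, W, positive=True)   # raise goodness on positives
W = local_update(x_neg, W, positive=False)  # lower goodness on negatives
```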