Part of the problem mirrors the rise of automation in any other industry — performers told Input that they’re nervous that game studios might try to replace them with sophisticated algorithms in order to save a few bucks. But the game modder’s decision also raises questions about the agency that performers have over their own voices, as well as the artistry involved in bringing characters to life.
“If this is true, this is just heartbreaking,” video game voice actor Jay Britton tweeted about the mod. “Yes, AI might be able to replace things but should it? We literally get to decide. Replacing actors with AI is not only a legal minefield but an utterly soulless choice.”
“Why not remove all human creativity from games and use AI…” he added.
Neil deGrasse Tyson explains the early state of our Universe. At the beginning of the universe, ordinary space and time developed out of a primeval state in which all the matter and energy of the entire visible universe was contained in a hot, dense point called a gravitational singularity, a billionth the size of a nuclear particle.
While we cannot imagine the entire visible universe being a billion times smaller than a nuclear particle, that shouldn't deter us from wondering about the early state of our universe. Dealing with such extreme scales is immensely counter-intuitive, however, and our evolved brains and senses have no capacity to grasp the depths of reality at the beginning of cosmic time. Scientists therefore develop mathematical frameworks to describe the early universe.
Neil deGrasse Tyson also mentions that our senses are not necessarily the best tools to use in science when uncovering the mysteries of the Universe.
It is interesting to note that in the early universe, high densities and heterogeneous conditions could have led sufficiently dense regions to undergo gravitational collapse, forming black holes. Such primordial black holes are hypothesized to have formed soon after the Big Bang. Going from one mystery to the next, some evidence suggests a possible link between primordial black holes and dark matter.
In modern physics, antimatter is made up of elementary antiparticles, each of which has the same mass as its corresponding matter counterpart (protons, neutrons, and electrons) but the opposite charge and magnetic properties.
A collision between any particle and its antiparticle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays) and neutrinos. The majority of the total energy of annihilation emerges in the form of ionizing radiation. If surrounding matter is present, the energy content of this radiation will be absorbed and converted into other forms of energy, such as heat or light. The amount of energy released is proportional to the total mass of the collided matter and antimatter, in accordance with Einstein's mass–energy equivalence, E = mc².
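To put a number on that equivalence, consider the simplest case: an electron annihilating with a positron at rest. A quick back-of-the-envelope calculation (our own illustration, using standard physical constants) shows how much energy the rest masses convert into:

```python
# Back-of-the-envelope annihilation energy via E = mc^2 (illustrative):
# an electron and a positron annihilating at rest.
M_ELECTRON = 9.109e-31     # rest mass of the electron (and positron), kg
C = 2.998e8                # speed of light, m/s
EV_PER_JOULE = 6.242e18    # electron-volts per joule

energy_j = 2 * M_ELECTRON * C**2            # both rest masses convert
energy_mev = energy_j * EV_PER_JOULE / 1e6

print(f"{energy_j:.3e} J = {energy_mev:.3f} MeV")
# ~1.022 MeV in total, carried away as two 511 keV gamma-ray photons
```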
The artificial intelligence revolution is just getting started. But it is already transforming conflict. Militaries, from the superpowers down to tiny states, are seizing on autonomous weapons as essential to surviving the wars of the future. But this mounting arms-race dynamic could lead the world to dangerous places, with algorithms interacting so fast that they are beyond human control: uncontrolled escalation, even wars that erupt without any human input at all.
DW maps out the future of autonomous warfare, based on conflicts we have already seen – and predictions from experts of what will come next.
For more on the role of technology in future wars, check out the extended version of this video – which includes a blow-by-blow scenario of a cyber attack against nuclear weapons command and control systems: https://youtu.be/TmlBkW6ANsQ
MIT researchers demonstrate a way to sharply reduce errors in two-qubit gates, a significant advance toward fully realizing quantum computation.
MIT researchers have made a significant advance on the road toward the full realization of quantum computation, demonstrating a technique that eliminates common errors in the most essential operation of quantum algorithms, the two-qubit operation or “gate.”
“Despite tremendous progress toward being able to perform computations with low error rates with superconducting quantum bits (qubits), errors in two-qubit gates, one of the building blocks of quantum computation, persist,” says Youngkyu Sung, an MIT graduate student in electrical engineering and computer science who is the lead author of a paper on this topic published on June 16, 2021, in Physical Review X. “We have demonstrated a way to sharply reduce those errors.”
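To see why two-qubit error rates are the bottleneck, note that gate errors compound multiplicatively over a circuit. A rough sketch (our own illustration, with hypothetical error rates rather than figures from the paper):

```python
# Illustrative sketch (not from the paper): assuming independent errors,
# a circuit's success probability decays as the product of per-gate
# fidelities, so the worst gate type dominates at scale.

def circuit_fidelity(n_1q, err_1q, n_2q, err_2q):
    return (1 - err_1q) ** n_1q * (1 - err_2q) ** n_2q

# Hypothetical rates: 0.05% per single-qubit gate, 0.5% per two-qubit gate.
for n in (10, 100, 1000):
    f = circuit_fidelity(n_1q=n, err_1q=5e-4, n_2q=n, err_2q=5e-3)
    print(f"{n:>5} gates of each kind -> fidelity ~ {f:.3f}")
# Already at 100 gates, the two-qubit term accounts for most of the loss,
# which is why reducing two-qubit error matters so much.
```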
Advanced uses of time in image rendering and reconstruction have been the focus of much scientific research in recent years. The motivation comes from the equivalence between space and time given by the finite speed of light c. This equivalence leads to correlations between the time evolution of electromagnetic fields at different points in space. Applications exploiting such correlations, known as time-of-flight (ToF) [1] and light-in-flight (LiF) [2] cameras, operate in regimes ranging from radio [3,4] to optical [5] frequencies. Time-of-flight imaging focuses on reconstructing a scene by measuring delayed stimulus responses via continuous-wave modulation, impulses, or pseudo-random binary sequence (PRBS) codes [1]. Light-in-flight imaging, also known as transient imaging [6], explores light transport and detection [2,7]. The combination of ToF and LiF has recently brought higher accuracy and detail to the reconstruction process, especially for non-line-of-sight images, through the inclusion of higher-order scattering and physical processes such as Rayleigh–Sommerfeld diffraction [8] in the modeling. However, these methods require experimental characterization of the scene followed by large computational overheads that produce images at low frame rates in the optical regime. In the radio-frequency (RF) regime, 3D images at frame rates of 30 Hz have been produced with an array of 256 wide-band transceivers [3]. Microwave imaging has the additional capability of sensing through optically opaque media such as walls. Nonetheless, synthetic aperture radar reconstruction algorithms such as the one proposed in [3] required each transceiver in the array to operate individually, thus leaving room for improvements in image frame rates from continuous transmit-receive captures. Constructions using beamforming face similar challenges [9], where a narrow focused beam scans a scene using an array of antennas and frequency-modulated continuous-wave (FMCW) techniques.
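The PRBS approach can be made concrete with a minimal sketch (ours, not code from the works cited above): the receiver cross-correlates the captured signal against the transmitted code, and the correlation peak lands at the round-trip delay, which maps directly to range.

```python
# Minimal PRBS time-of-flight ranging sketch (illustrative): the
# cross-correlation of a noisy, attenuated echo with the transmitted
# pseudo-random code peaks at the round-trip delay.
import numpy as np

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 1023) * 2 - 1       # +/-1 pseudo-random code

true_delay = 137                               # delay in samples (assumed)
echo = np.zeros(2048)
echo[true_delay:true_delay + code.size] = 0.2 * code   # attenuated echo
echo += 0.05 * rng.standard_normal(echo.size)          # receiver noise

corr = np.correlate(echo, code, mode="valid")  # slide code over capture
print("estimated delay:", int(np.argmax(corr)))  # -> 137
```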
In this article, we develop an inverse light transport model [10] for microwave signals. The model uses a spatiotemporal mask generated by multiple sources, each emitting different PRBS codes, and a single detector, all operating in continuous synchronous transmit-receive mode. This model allows image reconstructions with capture times of the order of microseconds and no prior scene knowledge. For first-order reflections, the algorithm reduces to a single dot product between the reconstruction matrix and captured signal, and can be executed in a few milliseconds. We demonstrate this algorithm through simulations and measurements performed using realistic scenes in a laboratory setting. We then use the second-order terms of the light transport model to reconstruct scene details not captured by the first-order terms.
We start by estimating the information capacity of the scene and develop the light transport equation for the transient imaging model, with arguments borrowed from basic information theory and electromagnetic field theory. Next, we describe the image reconstruction algorithm as a series of approximations corresponding to multiple scatterings of the spatiotemporal illumination matrix. Specifically, we show that in the first-order approximation, the value of each pixel is the dot product between the captured time series and a unique time signature generated by the spatiotemporal electromagnetic field mask. Next, we show how the second-order approximation recovers hidden features not accessible in the first-order image. Finally, we apply the reconstruction algorithm to simulated and experimental data and discuss the performance, strengths, and limitations of this technique.
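As a rough sketch of that first-order step (our own paraphrase, with made-up dimensions rather than the authors' actual signal model), reconstruction amounts to one matrix–vector product between a precomputed bank of per-pixel time signatures and the captured trace:

```python
# First-order reconstruction sketch (illustrative shapes): each scene
# pixel p has a unique time signature s_p, built once from the sources'
# PRBS codes and the source->pixel->detector geometry. A pixel's value
# is then the dot product of its signature with the captured trace.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_samples = 64, 4096

# Hypothetical signature bank standing in for the spatiotemporal mask.
signatures = rng.standard_normal((n_pixels, n_samples))
signatures /= np.linalg.norm(signatures, axis=1, keepdims=True)

# Simulate a capture: two reflective pixels plus receiver noise.
reflectivity = np.zeros(n_pixels)
reflectivity[[5, 40]] = [1.0, 0.6]
capture = reflectivity @ signatures + 0.01 * rng.standard_normal(n_samples)

image = signatures @ capture     # one matrix-vector product per frame
print(np.argsort(image)[-2:])    # two brightest pixels -> [40  5]
```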
In 1642, famous Dutch painter Rembrandt van Rijn completed a large painting called Militia Company of District II under the Command of Captain Frans Banninck Cocq — today, the painting is commonly referred to as The Night Watch. It was the height of the Dutch Golden Age, and The Night Watch brilliantly showcased that.
The painting measured 363 cm × 437 cm (11.91 ft × 14.34 ft) — so big that the characters in it were almost life-sized, but that’s only the start of what makes it so special. Rembrandt made dramatic use of light and shadow and also created the perception of motion in what would normally be a stationary military group portrait. Unfortunately, though, the painting was trimmed in 1715 to fit between two doors at Amsterdam City Hall.
For over 300 years, the painting has been missing 60 cm (2 ft) from the left, 22 cm from the top, 12 cm from the bottom, and 7 cm from the right. Now, computer software has restored the missing parts.
A team of researchers working at Johannes Kepler University has developed an autonomous drone with a new type of technology to improve search-and-rescue efforts. In their paper published in the journal Science Robotics, the group describes their drone modifications. Andreas Birk with Jacobs University Bremen has published a Focus piece in the same journal issue outlining the work by the team in Austria.
Finding people lost (or hiding) in the forest is difficult because of the tree cover. People in planes and helicopters have difficulty seeing through the canopy to the ground below, where people might be walking or even lying down. The same problem exists for thermal applications: heat sensors cannot pick up readings adequately through the canopy. Efforts have been made to add drones to search-and-rescue operations, but they suffer from the same problems, because the pilots who control them remotely face the same obstructed view of the ground below. In this new effort, the researchers have added new technology that helps both to see through the tree canopy and to highlight people who might be under it.
The new technology is based on what the researchers describe as an airborne optical sectioning (AOS) algorithm: it uses computation to defocus occluding objects such as the tops of trees. The second part of the new device uses thermal imaging to highlight the heat emitted by a warm body. A machine-learning application then determines whether the heat signals come from humans, animals, or other sources. The new hardware was then affixed to a standard autonomous drone. The computer in the drone uses both positional data to determine where to search and cues from the AOS and thermal sensors. If a possible match is made, the drone automatically moves closer to the target to get a better look. If its sensors indicate a match, it signals the research team, giving them the coordinates. In testing the newly outfitted drone over 17 field experiments, the researchers found it was able to locate 38 of 42 people hidden below tree canopies.
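The core intuition behind airborne optical sectioning can be captured in a toy simulation (our own sketch, not the team's code): when thermal frames taken along the flight path are registered to the ground plane, a warm body on the ground stays at a fixed pixel, while canopy clutter, sitting at a different height, lands somewhere new in every frame and averages away.

```python
# Toy AOS illustration: averaging ground-registered thermal frames
# suppresses occluders (which shift frame to frame due to parallax)
# while reinforcing a person fixed on the ground plane.
import numpy as np

rng = np.random.default_rng(2)
H = W = 64
person = (32, 32)                      # assumed ground-truth hot spot

frames = []
for _ in range(30):                    # 30 registered thermal frames
    frame = np.zeros((H, W))
    frame[person] = 1.0                # person: fixed after registration
    # canopy clutter lands at new random positions in every frame
    occ = rng.integers(0, H, (200, 2))
    frame[occ[:, 0], occ[:, 1]] += rng.uniform(0.5, 1.0, 200)
    frames.append(frame)

integral = np.mean(frames, axis=0)     # the "synthetic aperture" average
print(np.unravel_index(np.argmax(integral), integral.shape))  # -> (32, 32)
```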
Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them—and for the first time, the technology works regardless of seasonal changes to that terrain.
Details about the process were published on June 23 in the journal Science Robotics.
The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
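In its simplest form, that comparison is template matching: slide the onboard camera view across a georeferenced satellite map and keep the best-correlating offset. A bare-bones sketch follows (our illustration; the Caltech system replaces this naive matching with learned features precisely because seasonal changes, snow cover, or foliage break the raw pixel correlation used here):

```python
# Naive visual terrain-relative navigation sketch: localize by sliding
# the onboard view over a satellite map and maximizing the normalized
# cross-correlation. All data here is synthetic stand-in terrain.
import numpy as np

def locate(view, sat_map):
    vh, vw = view.shape
    v = (view - view.mean()) / view.std()
    best, best_xy = -np.inf, None
    for y in range(sat_map.shape[0] - vh + 1):
        for x in range(sat_map.shape[1] - vw + 1):
            patch = sat_map[y:y + vh, x:x + vw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = (v * p).mean()          # normalized cross-correlation
            if score > best:
                best, best_xy = score, (y, x)
    return best_xy

rng = np.random.default_rng(3)
sat_map = rng.standard_normal((128, 128))           # stand-in terrain map
view = sat_map[40:72, 90:122] + 0.1 * rng.standard_normal((32, 32))
print(locate(view, sat_map))                        # -> (40, 90)
```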
If you walk down the street shouting out the names of every object you see (garbage truck! bicyclist! sycamore tree!), most people would not conclude you are smart. But if you work your way through an obstacle course, navigating a series of challenges to reach the end unscathed, they would.
Most machine learning algorithms are shouting names in the street. They perform perceptive tasks that a person can do in under a second. But another kind of AI — deep reinforcement learning — is strategic. It learns how to take a series of actions in order to reach a goal. That’s powerful and smart — and it’s going to change a lot of industries.
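The textbook version of that idea is tabular Q-learning, where an agent learns the value of each action in each state by trial and error. A minimal sketch (our own toy example, not tied to any product mentioned here):

```python
# Minimal tabular Q-learning sketch: an agent on a ten-cell corridor
# learns, by trial and error, that stepping right from every cell is
# the action sequence that reaches the reward at the far end.
import numpy as np

N_STATES, GOAL = 10, 9          # cells 0..9, reward on reaching cell 9
q = np.zeros((N_STATES, 2))     # action 0 = step left, 1 = step right
rng = np.random.default_rng(4)

for _ in range(300):                         # training episodes
    s = 0
    while s != GOAL:
        a = int(rng.integers(2))             # explore randomly (off-policy)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Q-update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        q[s, a] += 0.1 * (r + 0.9 * q[s2].max() - q[s, a])
        s = s2

print(q[:GOAL].argmax(axis=1))  # -> [1 1 1 1 1 1 1 1 1]: always go right
```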
Two industries on the cusp of AI transformations are manufacturing and supply chain. The ways we make and ship stuff are heavily dependent on groups of machines working together, and the efficiency and resiliency of those machines are the foundation of our economy and society. Without them, we can’t buy the basics we need to live and work.