
Musk’s tweet offering guidance on timing for an anticipated increase in Tesla’s market cap has since been deleted, but screenshots were widely shared on Twitter.

The U.S. Securities and Exchange Commission has clashed with Musk and Tesla over the CEO’s unfettered use of Twitter before.

In the third quarter of 2018, Musk faced securities fraud charges from the SEC after he tweeted to his tens of millions of followers that he was planning to take Tesla private at $420 a share and had secured funding to do so. Tesla’s stock price jumped more than 6 percent that day.

A “self-portrait” by humanoid robot Sophia, who “interpreted” a depiction of her own face, has sold at auction for over $688,000.


A hand-painted “self-portrait” by the world-famous humanoid robot, Sophia, has sold at auction for over $688,000.

The work, which saw Sophia “interpret” a depiction of her own face, was offered as a non-fungible token, or NFT, an encrypted digital signature that has revolutionized the art market in recent months.

Titled “Sophia Instantiation,” the image was created in collaboration with Andrea Bonaceto, an artist and partner at blockchain investment firm Eterna Capital. Bonaceto began the process by producing a brightly colored portrait of Sophia, which was processed by the robot’s neural networks. Sophia then painted an interpretation of the image.

PowerLight Technologies is turning wireless power transmission from science fiction into science fact… with frickin’ laser beams.


Wireless power transmission has been the stuff of science fiction for more than a century, but now PowerLight Technologies is turning it into science fact … with frickin’ laser beams.

“Laser power is closer than you think,” PowerLight CEO Richard Gustafson told GeekWire this week.

Artificial microswimmers that can replicate the complex behavior of active matter are often designed to mimic the self-propulsion of microscopic living organisms. However, compared with their living counterparts, artificial microswimmers have a limited ability to adapt to environmental signals or to retain a physical memory to yield optimized emergent behavior. Different from macroscopic living systems and robots, both microscopic living organisms and artificial microswimmers are subject to Brownian motion, which randomizes their position and propulsion direction. Here, we combine real-world artificial active particles with machine learning algorithms to explore their adaptive behavior in a noisy environment with reinforcement learning. We use a real-time control of self-thermophoretic active particles to demonstrate the solution of a simple standard navigation problem under the inevitable influence of Brownian motion at these length scales. We show that, with external control, collective learning is possible. Concerning the learning under noise, we find that noise decreases the learning speed, modifies the optimal behavior, and also increases the strength of the decisions made. As a consequence of time delay in the feedback loop controlling the particles, an optimum velocity, reminiscent of optimal run-and-tumble times of bacteria, is found for the system, which is conjectured to be a universal property of systems exhibiting delayed response in a noisy environment.
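The role of Brownian motion described in the abstract can be illustrated with a minimal simulation. The sketch below is an illustrative assumption rather than the authors’ code: it propagates a self-propelled particle whose position and propulsion direction are both perturbed by Gaussian noise, so that even a fixed steering command is gradually randomized.

```python
import numpy as np

# Minimal active Brownian particle sketch (illustrative parameter values,
# not the paper's). Translational diffusion D_t randomizes the position,
# rotational diffusion D_r randomizes the propulsion direction.
def simulate(steps=1000, dt=0.01, v=1.0, D_t=0.1, D_r=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((steps, 2))      # particle positions over time
    phi = 0.0                     # propulsion direction (angle)
    for i in range(1, steps):
        # self-propulsion along the current orientation
        drift = v * np.array([np.cos(phi), np.sin(phi)]) * dt
        # translational Brownian noise randomizes the position
        noise = np.sqrt(2 * D_t * dt) * rng.standard_normal(2)
        x[i] = x[i - 1] + drift + noise
        # rotational Brownian noise randomizes the propulsion direction
        phi += np.sqrt(2 * D_r * dt) * rng.standard_normal()
    return x

trajectory = simulate()
print(trajectory[-1])  # final position after a noisy, self-propelled walk
```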

Living organisms adapt their behavior according to their environment to achieve a particular goal. Information about the state of the environment is sensed, processed, and encoded in biochemical processes in the organism to provide appropriate actions or properties. These learning or adaptive processes occur within the lifetime of a generation, over multiple generations, or over evolutionarily relevant time scales. They lead to specific behaviors of individuals and collectives. Swarms of fish or flocks of birds have developed collective strategies adapted to the existence of predators (1), and collective hunting may represent a more efficient foraging tactic (2). Birds learn how to use convective air flows (3). Sperm have evolved complex swimming patterns to explore chemical gradients in chemotaxis (4), and bacteria express specific shapes to follow gravity (5).

Inspired by these optimization processes, learning strategies that reduce the complexity of the physical and chemical processes in living matter to a mathematical procedure have been developed. Many of these learning strategies have been implemented in robotic systems (7–9). One particular framework is reinforcement learning (RL), in which an agent gains experience by interacting with its environment (10). The value of this experience relates to rewards (or penalties) connected to the states that the agent can occupy. The learning process then maximizes the cumulative reward for a chain of actions to obtain the so-called policy. This policy advises the agent which action to take. Recent computational studies, for example, reveal that RL can provide optimal strategies for the navigation of active particles through flows (11–13), the swarming of robots (14–16), the soaring of birds, or the development of collective motion (17).
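As a rough illustration of the RL loop sketched above, the following example (an assumed gridworld navigation task with arbitrarily chosen rewards, learning rate, and noise level, not the authors’ setup) applies tabular Q-learning to reach a goal cell while the chosen action is occasionally randomized, loosely mimicking the noisy actuation the particles experience.

```python
import numpy as np

# Tabular Q-learning on a 5x5 grid: navigate from (0, 0) to the goal cell.
# With probability `noise`, the chosen action is replaced by a random one,
# a crude stand-in for Brownian randomization of the propulsion direction.
N = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
GOAL = (N - 1, N - 1)
rng = np.random.default_rng(0)
Q = np.zeros((N, N, len(ACTIONS)))              # state-action values

def step(state, a, noise=0.2):
    if rng.random() < noise:                    # noisy actuation
        a = int(rng.integers(len(ACTIONS)))
    dr, dc = ACTIONS[a]
    r = min(max(state[0] + dr, 0), N - 1)
    c = min(max(state[1] + dc, 0), N - 1)
    reward = 1.0 if (r, c) == GOAL else -0.01   # reward for reaching the goal
    return (r, c), reward, (r, c) == GOAL

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    s, done = (0, 0), False
    for _ in range(500):                        # cap episode length
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, reward, done = step(s, a)
        # update toward the reward plus the discounted value of the next state
        Q[s + (a,)] += alpha * (reward + gamma * np.max(Q[s2]) * (not done) - Q[s + (a,)])
        s = s2
        if done:
            break

policy = np.argmax(Q, axis=2)                   # learned policy: best action per cell
print(policy)
```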

A team of researchers affiliated with several institutions in South Korea has developed a simple method for converting 2D drawings to 3D objects. In their paper published in the journal Science Advances, the group describes their technique and possible uses for it.

Over the past several decades, 3D printing has become a popular way to create objects in a relatively simple manner. Such printing allows for on-demand supply of simple products. In this new effort, the researchers have developed another way to create 3D objects without the need for a printer.

The technique involves hand-drawing (or conventionally printing) a 2D image on an object using a pen loaded with a special ink and then submerging the object in a tub of water. When the object is pulled from the water, the ink has partly detached from the object and formed into a 3D representation of the original image.