
“If we could see the air we fly in, we wouldn’t,” is a common saying among glider pilots. The invisible turbulent pockets that accompany soaring thermals present hazards to small aircraft, but today’s observational tools struggle to measure such wind features at high spatial resolutions over large distances. Now Yunpeng Zhang of the University of Science and Technology of China and his colleagues demonstrate how adapting a remote-sensing technology called pulsed coherent Doppler lidar (PCDL) enables long-range wind detection with submeter resolution [1].

PCDL senses wind speeds by detecting the frequency shift when a laser pulse scatters off dust particles in the air. By measuring the time taken for this scattered light to return to the detector, the technique allows wide-region profiling of wind speeds. This large-scale sampling comes at the cost of measurement precision, however. Measuring the laser’s travel time requires short-duration pulses, but short pulses transmit little total energy for a given laser power, and this energy is necessarily dispersed over a wide frequency range.
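The relations above can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses the standard lidar formulas (range from round-trip delay, range resolution from pulse length, line-of-sight velocity from Doppler shift); the numerical values and the 1.55-µm wavelength are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope numbers for a pulsed coherent Doppler lidar.
# All specific values here are assumptions for illustration.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # typical fiber-laser wavelength, m (assumed)

def range_from_delay(round_trip_s):
    """Distance to the scattering volume from the echo's round-trip delay."""
    return C * round_trip_s / 2

def range_resolution(pulse_s):
    """Conventional limit: range resolution is set by the pulse length."""
    return C * pulse_s / 2

def doppler_velocity(freq_shift_hz):
    """Line-of-sight wind speed from the measured Doppler frequency shift."""
    return freq_shift_hz * WAVELENGTH / 2

# A 20 ns pulse pins down range only to about 3 m, while an echo
# arriving after 2 us corresponds to a target 300 m away, and a
# ~1.3 MHz shift at 1.55 um corresponds to roughly 1 m/s of wind.
print(range_resolution(20e-9))   # 3.0 (m)
print(range_from_delay(2e-6))    # 300.0 (m)
print(doppler_velocity(1.29e6))  # ~1.0 (m/s)
```

The first formula is why the trade-off exists: shrinking the pulse to sharpen `range_resolution` spreads the pulse energy over a wider frequency band, blurring the Doppler measurement.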

To avoid this trade-off, Zhang and his colleagues imprinted a phase-modulation pattern within each transmitted pulse using an electro-optic modulator. This pattern broke the link between pulse duration and spatial resolution, allowing a more flexible pulse duration. As a result, their setup achieved a spatial resolution of 0.9 m at a distance of 700 m (compared to a 3-m resolution at 300 m for a conventional instrument) and was able to detect the wind from an electric fan on a rooftop 329 m away.
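The idea of decoupling pulse duration from resolution via intra-pulse phase modulation is closely related to pulse compression in radar: a long phase-coded pulse, when matched-filtered (correlated) against a copy of itself, collapses to a peak only one code-chip wide. The sketch below demonstrates this with a Barker-13 binary phase code; this is a textbook example of the principle, not the specific modulation pattern the authors used.

```python
import numpy as np

# A long pulse carrying a binary phase code (here Barker-13, a standard
# textbook code -- an assumption, not the authors' pattern).
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

# Matched filtering = correlating the echo with the transmitted code.
coded = np.correlate(barker13, barker13, mode="full")

# An unmodulated pulse of the same duration: a flat 13-chip rectangle,
# whose autocorrelation is a broad 13-chip-wide triangle.
flat = np.ones(13)
plain = np.correlate(flat, flat, mode="full")

# The coded pulse compresses to a sharp peak of height 13 with sidelobes
# of magnitude at most 1: the effective resolution is one chip, even
# though the pulse itself is 13 chips long.
peak = coded.argmax()
print(coded.max())                           # 13.0
print(np.abs(np.delete(coded, peak)).max())  # 1.0
```

In other words, the resolution is set by the fine structure of the phase code rather than by the overall pulse envelope, which is what lets a long, energetic pulse still localize wind features to well under a meter.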


In 2022, Elon Musk’s Neuralink tried – and failed – to secure permission from the FDA to run a human trial of its implantable brain-computer interface (BCI), according to a Reuters report published Thursday.

Citing seven current and former employees, speaking on the condition of anonymity, Reuters reported that the regulatory agency found “dozens of issues” with Neuralink’s application that the company must resolve before it can begin studying its tech in humans.

The company also showcased other executives, which could alleviate concerns that Musk has been too distracted by his other business ventures. They also discussed meat-and-potatoes topics such as cutting costs, improving margins, and EV-charging infrastructure.

The keys to winning the EV race will come down to product appeal, software and user interface, cost control, and consistent execution, he said.

“And Tesla right now is one generation ahead of the other automakers,” Fields said, though rivals like Ford and Hyundai are making a lot of progress. “Tesla still has the leg-up on the competition, and I think they demonstrated that yesterday.”

This video was created using multiple AI tools. The script was generated with ChatGPT, the narration voice with Elevenlabs.io, the background audio with the AudioLDM model, and the images with Stable Diffusion using the Illuminati Diffusion v1.1 model. The script itself served as the source of prompts at the image-generation stage.

There was still some human input. In particular, I generated several images for each part of the script and chose the most appealing ones. I also manually combined the narration with the background music. But for the most part, each step of the process could be completely automated.

You can follow me here to see more of my work:
Twitter: https://twitter.com/volotat
Github: https://github.com/volotat
Medium: https://medium.com/@AlexeyBorsky

Conor Russomanno, founder and CEO of OpenBCI; Eva Esteban, embedded software engineer at OpenBCI

Galea is an award-winning platform that merges next-generation biometrics with mixed reality. It is the first device to integrate a wide range of physiological signals, including EEG, EMG, EDA, PPG, and eye-tracking, into a single headset. In this session, Conor and Eva will provide a live demonstration of the device and its capabilities, showcasing its potential for a variety of applications, from gaming to training and rehabilitation. They will give an overview of the different hardware and software components of the system, highlighting how it can be used to analyze user experiences in real time. Attendees will get an opportunity to ask questions at the end.

John Danaher, Senior Lecturer in Law at the National University of Ireland (NUI) Galway:

“Understanding Techno-Moral Revolutions”

Talk held on August 24, 2021 for Colloquium of the Center for Humans and Machines at the Max Planck Institute for Human Development, Berlin.

It is common to use ethical norms and standards to critically evaluate and regulate the development and use of emerging technologies like AI and robotics. Indeed, the past few years have seen something of an explosion of interest in the ethical scrutiny of technology. What this emerging field of machine ethics tends to overlook, however, is the potential to use the development of novel technologies to critically evaluate our existing ethical norms and standards. History teaches us that social morality (the set of moral beliefs and practices shared within a given society) changes over time. Technology has sometimes played a crucial role in facilitating these historical moral revolutions. How will it do so in the future? Can we provide any meaningful answers to this question? This talk will argue that we can, and will outline several tools for thinking about the mechanics of technologically mediated moral revolutions.

The paper has been submitted but is not yet peer-reviewed.

The ‘presented images’ were shown to a group of human subjects. The ‘reconstructed images’ were produced by feeding the resulting fMRI signals into Stable Diffusion.

In other words, #stablediffusion literally read people’s minds.


Call it naive, call it crazy, but I think we have a real chance of tackling aging in this century. And though it’s not easy, it is very simple.

If you’ve seen the banner of this channel, it says it all. But in this video I go deeper into my personal story and motivation, so that you can understand why I’m doing what I’m doing.

So pick your role and let’s get to work!
Worst-case scenario, we’ll live an extra 20 healthy years. Best case… well, we might stop or reverse aging altogether.

Requirements to cure aging: