
Hi, if you came to this video, you’re probably wondering what would happen if a person lived 1,000 years or more. What possibilities would open up to mankind, and how much useful work could be done, if such a thing were possible? Well then, make some tea, get comfortable, and let’s go!

00:00 — Intro.
00:36 — Problems we will face.
02:07 — Is it possible to realize this?
03:19 — How to make it happen?
05:27 — Repair System.
07:26 — Is humanity ready for such a long life?
08:55 — Final.



A while ago I spotted someone working on real time AI image generation in VR and I had to bring it to your attention because frankly, I cannot express how majestic it is to watch AI-modulated AR shifting the world before us into glorious, emergent dreamscapes.

Applying AI to augmented or virtual reality isn’t a novel concept, but there have been certain limitations in practice—computing power being one of the major barriers. Stable Diffusion, however, is an image-generation model pared down to run on consumer-level hardware, and it has been released under the Creative ML OpenRAIL-M licence. That means not only can developers use the tech to create and launch programs without renting huge amounts of server silicon, but they’re also free to profit from their creations.

A new deep-learning framework developed at the Department of Energy’s Oak Ridge National Laboratory is speeding up the process of inspecting additively manufactured metal parts using X-ray computed tomography, or CT, while increasing the accuracy of the results. The reduced costs for time, labor, maintenance and energy are expected to accelerate expansion of additive manufacturing, or 3D printing.

“The scan speed reduces costs significantly,” said ORNL lead researcher Amir Ziabari. “And the quality is higher, so the post-processing analysis becomes much simpler.”

The framework is already being incorporated into software used by commercial partner ZEISS within its machines at DOE’s Manufacturing Demonstration Facility at ORNL, where companies hone 3D-printing methods.

A CREEPY image of a woman has been discovered lurking in an AI’s mind, the product of unintentional programming.

Artificial intelligence machines have always promoted efficiency, but recently many people have expressed fear of them becoming sentient.

Swedish musician Supercomposite shared that fear after discovering an AI-generated image of a woman they dubbed ‘Loab’.

Inspired by living things, the unique material is 10 times as durable as natural rubber.

For the first time, researchers use only light and a catalyst to change properties such as hardness and elasticity in molecules of the same type, according to a new study published October 13 in Science.

The ability to control the physical properties of a material using light as a trigger is potentially transformative.


Image: iStock/selimcan.

Inspired by living things like trees and shellfish, the team created a unique material that is ten times as durable as natural rubber and may lead to more flexible electronics and robots.

The E-Walker has been tried and tested on Earth, but it’s yet to prove itself in space.

Large construction projects in space may be one step closer to reality, thanks to a new walking space robot. Researchers have designed the E-Walker — a state-of-the-art walking robot — to take on the behemoth task of space construction. A robot prototype has already been tested here on Earth by assembling a 25m Large Aperture Space Telescope. The telescope itself would normally be assembled in space, which is the E-Walker’s intended future duty.

Doubling up on its potential duties, a smaller-scale prototype of the same robot has also been created and shows promise for large construction applications on Earth, such as maintenance of wind turbines.

The team’s findings were presented in the journal Frontiers in Robotics and AI.


Image: iStock/Vitaly Kusaylo.


“This exoskeleton personalizes assistance as people walk normally through the real world,” said Steve Collins, associate professor of mechanical engineering who leads the Stanford Biomechatronics Laboratory, in a press release. “And it resulted in exceptional improvements in walking speed and energy economy.”

The personalization is enabled by a machine learning algorithm, which the team trained using emulators—that is, machines that collected data on motion and energy expenditure from volunteers who were hooked up to them. The volunteers walked at varying speeds under imagined scenarios, like trying to catch a bus or taking a stroll through a park.

The algorithm drew connections between these scenarios and people’s energy expenditure, applying the connections to learn in real time how to help wearers walk in a way that’s actually useful to them. When a new person puts on the boot, the algorithm tests a different pattern of assistance each time they walk, measuring how their movements change in response. There’s a short learning curve, but on average the algorithm was able to effectively tailor itself to new users in just an hour.
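The loop described above — try an assistance pattern, measure the wearer’s response, keep what helps — can be sketched as a simple human-in-the-loop optimization. This is a minimal illustration, not Stanford’s actual method: the parameter names, starting values, and the toy `energy_cost` function standing in for a wearer’s measured metabolic cost are all hypothetical.

```python
import random

def energy_cost(torque_peak, timing):
    """Stand-in for a wearer's measured metabolic cost.

    In the real system this would come from motion and energy sensors;
    here it is a toy quadratic with a (hypothetical) optimum at
    torque_peak=0.6, timing=0.5.
    """
    return (torque_peak - 0.6) ** 2 + (timing - 0.5) ** 2

def tune_assistance(n_trials=200, step=0.05, seed=0):
    """Hill-climb over assistance parameters, keeping only changes
    that reduce the measured cost -- a crude stand-in for the
    exoskeleton's per-wearer adaptation."""
    rng = random.Random(seed)
    params = [0.3, 0.8]  # arbitrary initial torque peak and timing
    best = energy_cost(*params)
    for _ in range(n_trials):
        # Perturb each parameter slightly, as if trying a new
        # assistance pattern on the next bout of walking.
        candidate = [p + rng.uniform(-step, step) for p in params]
        cost = energy_cost(*candidate)
        if cost < best:  # wearer moved more economically: keep it
            params, best = candidate, cost
    return params, best
```

The real system replaces the toy cost function with sensor measurements from the wearer, which is why adaptation takes about an hour of walking rather than milliseconds.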

The classification performance of all-optical Convolutional Neural Networks (CNNs) is greatly influenced by component misalignment and by translation of the input images in practical applications. In this paper, we propose a free-space all-optical CNN (named Trans-ONN) which accurately classifies images translated in the horizontal, vertical, or diagonal directions. Trans-ONN takes advantage of an optical motion pooling layer which provides the translation invariance property by implementing different optical masks in the Fourier plane for classifying translated test images. Moreover, to enhance the translation invariance property, global average pooling (GAP) is used in the Trans-ONN structure in place of fully connected layers.
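The GAP design choice at the end of the abstract has a simple intuition: averaging each feature channel over all spatial positions is unchanged by any shuffling of those positions, so a (circularly) translated feature map yields the same pooled vector, whereas a fully connected layer is tied to specific positions. A minimal NumPy sketch of that property (not the paper’s optical implementation — the shapes and shift amounts here are arbitrary):

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse an H x W x C feature map to one mean per channel."""
    return feature_map.mean(axis=(0, 1))

rng = np.random.default_rng(42)
fmap = rng.standard_normal((8, 8, 4))        # toy 8x8 map, 4 channels
shifted = np.roll(fmap, shift=(2, 3), axis=(0, 1))  # circular translation

# The pooled vector is identical for the original and translated maps,
# so any classifier fed this vector is invariant to such shifts.
assert np.allclose(global_average_pool(fmap), global_average_pool(shifted))
```

Note the exact equality holds only for circular shifts; real translations that move content off the edge change the average slightly, which is why the paper pairs GAP with the optical motion pooling layer rather than relying on GAP alone.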

Power Automate is making it easier to scale hyperautomation across your enterprise. With new innovations for unattended Robotic Process Automation (RPA) in the cloud, AI-assistance, and starter kits to streamline your Center of Excellence (CoE), this is a session you won’t want to miss!

Speakers:
* Joe Fernandez
* Christy Jefson
* Mustapha Lazrek
* Ken Seong Tan
* Stephen Siciliano
* Taiki Yoshida