
Year 2021: a viable fusion reactor in a Z-pinch device compact enough to fit in a van or an airplane ✈️ 😀


The fusion Z-pinch experiment (FuZE) is a sheared-flow stabilized Z-pinch designed to study the effects of flow stabilization on deuterium plasmas with densities and temperatures high enough to drive nuclear fusion reactions. Results from FuZE show high pinch currents and neutron emission durations thousands of times longer than instability growth times. While these results are consistent with thermonuclear neutron emission, energetically resolved neutron measurements are a stronger constraint on the origin of the fusion production. This stems from the strong anisotropy in energy created in beam-target fusion, compared to the relatively isotropic emission in thermonuclear fusion. In dense Z-pinch plasmas, a potential and undesirable cause of beam-target fusion reactions is the presence of fast-growing, “sausage” instabilities. This work introduces a new method for characterizing beam instabilities by recording individual neutron interactions in plastic scintillator detectors positioned at two different angles around the device chamber. Histograms of the pulse-integral spectra from the two locations are compared using detailed Monte Carlo simulations. These models infer the deuteron beam energy based on differences in the measured neutron spectra at the two angles, thereby discriminating beam-target from thermonuclear production. An analysis of neutron emission profiles from FuZE precludes the presence of deuteron beams with energies greater than 4.65 keV with a statistical uncertainty of 4.15 keV and a systematic uncertainty of 0.53 keV. This analysis demonstrates that axial, beam-target fusion reactions are not the dominant source of neutron emission from FuZE. These data are promising for scaling FuZE up to fusion reactor conditions.
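The two-angle discrimination described above rests on standard two-body kinematics for the D(d,n)³He reaction. The sketch below is a textbook nonrelativistic calculation, not the authors' Monte Carlo analysis; only the 4.65 keV beam-energy bound is taken from the abstract, and the masses and Q-value are standard reference figures.

```python
import numpy as np

# Two-body D(d,n)3He kinematics -- a sketch of the physics behind the
# two-angle measurement, not the authors' analysis code.
# Masses in amu, energies in MeV.
M_D, M_N, M_HE3 = 2.014, 1.009, 3.016
Q_DD = 3.269  # MeV released in the D(d,n)3He branch

def neutron_energy(e_d, theta_deg):
    """Lab-frame neutron energy (MeV) for a deuteron of energy e_d (MeV)
    striking a stationary deuteron, with the neutron emitted at theta_deg."""
    c = np.cos(np.radians(theta_deg))
    s = np.sqrt(M_D * M_N * e_d) * c
    r = np.sqrt(M_D * M_N * e_d * c**2
                + (M_HE3 + M_N) * (M_HE3 * Q_DD + (M_HE3 - M_D) * e_d))
    return ((s + r) / (M_HE3 + M_N)) ** 2

# Thermonuclear limit (e_d -> 0): isotropic ~2.45 MeV neutrons at every angle.
# An axial beam at the paper's 4.65 keV upper bound would instead shift the
# forward (0 degree) neutrons by tens of keV relative to a side-on (90 degree)
# detector -- the anisotropy the pulse-integral comparison looks for.
shift = neutron_energy(0.00465, 0) - neutron_energy(0.00465, 90)
print(f"0-vs-90 degree neutron energy shift: {shift * 1e3:.0f} keV")
```

Because the angular shift grows roughly as the square root of the product of beam and neutron energies, even a few-keV deuteron beam produces a tens-of-keV neutron anisotropy, which is why comparing spectra at two angles constrains the beam energy so tightly.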

The authors would like to thank Bob Geer and Daniel Behne for technical assistance, as well as Amanda Youmans, Christopher Cooper, and Clément Goyon for advice and discussions. The authors would also like to thank Phil Kerr and Vladimir Mozin for the use of their Thermo Fisher P385 neutron generator, which was important in verifying the ability to measure neutron energy shifts via the pulse integral technique. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Nos. DE-AR-0000571, 18/CJ000/05/05, and DE-AR-0001160. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 and Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231.

99% of the following speech was written by ChatGPT. I made a few changes here and there and cut and pasted a couple of paragraphs for better flow. This is the prompt with which I started the conversation:

Write a TED Talks style speech explaining how AI will be the next cross-platform operating system, entertainment service, and search engine as well as source of news and accurate information. Elaborate further in this speech about how this future AI could produce tailored entertainment experiences for the end-user. Explain its application in creating real-time, personally-tailored and novel media including mixed reality, virtual reality, extended reality, and augmented reality media as well as in written fiction and nonfiction, music, video and spoken-word entertainment for its end users. Write a strong and compelling opening paragraph to this speech and end it memorably. Add as much detail as you can on each point. The speech should last at least 15 minutes.

I used an online service called colossyan.com to produce the clips with metahumans. I used the Reface app to put my face on some of the metahumans, but it unfortunately degraded the video. I apologize for the blurriness.

This video explores a timelapse of artificial intelligence from 2030 to 10,000 A.D. and beyond. Watch this next video about Super Intelligent AI and why it will be unstoppable: https://youtu.be/xPvo9YYHTjE
► Support This Channel: https://www.patreon.com/futurebusinesstech
► Udacity: Up To 75% Off All Courses (Biggest Discount Ever): https://bit.ly/3j9pIRZ
► Brilliant: Learn Science And Math Interactively (20% Off): https://bit.ly/3HAznLL
► Jasper AI: Write 5x Faster With Artificial Intelligence: https://bit.ly/3MIPSYp

SOURCES:
• https://www.futuretimeline.net
• The Singularity Is Near: When Humans Transcend Biology (Ray Kurzweil): https://amzn.to/3ftOhXI
• The Future of Humanity (Michio Kaku): https://amzn.to/3Gz8ffA
• Physics of the Future (Michio Kaku): https://amzn.to/33NP7f7
• Physics of the Impossible (Michio Kaku): https://amzn.to/3wSBR4D
• AI 2041: 10 Visions of Our Future (Kai-Fu Lee & Chen Qiufan): https://amzn.to/3bxWat6

Official Discord Server: https://discord.gg/R8cYEWpCzK

💡 Future Business Tech explores the future of technology and the world.

Examples of topics I cover include:
• Artificial Intelligence.
• Genetic Engineering.
• Virtual and Augmented Reality.
• Space Exploration.
• Science Fiction.

SUBSCRIBE: https://bit.ly/3geLDGO

Microsoft shared a pair of blog posts summarizing the progress and success of its HoloLens 2. The tech giant has brought together several of its popular services and capabilities to improve collaboration within augmented reality. Full Microsoft Teams integration with HoloLens 2 headlines a wave of updates that center on collaboration.

Microsoft also highlighted several partnerships, including its work with Toyota.

Kindly see my latest FORBES article:

In the piece I explore some of the emerging tech that will impact our coming year. Thank you for reading and sharing!


2022 was a transformative year for technological innovation and digital transformation. The trend will continue, as the pace of innovation and the development of potentially disruptive emerging technologies accelerate every year. The question arises: what lies ahead in tech for us to learn and experience in 2023?

While there are many impactful tech topics such as the Internet of Things, 5G, Space, Genomics, Synthetic Biology, Automation, Augmented Reality, and others, there are four tech areas to keep a keen watch on this coming year as they have promising and near-term capabilities to transform lives. They include: 1) artificial intelligence, 2) computing technologies, 3) robotics, and 4) materials science.

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
ChatGPT from OpenAI has shocked many users, as it is able to complete programming tasks from natural-language descriptions, create legal contracts, automate tasks, translate languages, write articles, answer questions, make video games, carry out customer-service tasks, and much more, all at the level of human intelligence in 99 percent of its outputs. PAL Robotics has taught its humanoid AI robots to use objects in the environment to avoid falling when losing balance.

AI News Timestamps:
0:00 Why OpenAI’s ChatGPT Has People Panicking.
3:29 New Humanoid AI Robots Technology.
8:20 Coursera Deep Learning AI

Twitter / Reddit Credits:
ChatGPT3 AR (Stijn Spanhove) https://bit.ly/3HmxPYm
Roblox game made with ChatGPT3 (codegodzilla) https://bit.ly/3HkdXoY
ChatGPT3 making text to image prompts (Manu. Vision | Futuriste) https://bit.ly/3UyyKrG
ChatGPT3 for video game creation (u/apinanaivot) https://bit.ly/3VI17oI
ChatGPT3 making video game land (Lucas Ferreira da Silva) https://bit.ly/3iMdotO
ChatGPT3 deleting blender default cube (Blender Renaissance) https://bit.ly/3FcM3rZ
ChatGPT3 responding about Matrix (Mario Reder) https://bit.ly/3UIsX2K
ChatGPT3 to write acquisition rational for the board of directors (The Secret CFO) https://bit.ly/3BhmmW5
ChatGPT3 to get job offers (Leon Noel) https://bit.ly/3UFl3qT
Automated rpa with ChatGPT3 (Sahar Mor) https://bit.ly/3W1ZkKK
ChatGPT3 making 3D web designs (Avalon•4) https://bit.ly/3UzGXf7
ChatGPT3 making a legal contract (Atri) https://bit.ly/3BljuYn
ChatGPT3 making signup program (Chris Raroque) https://bit.ly/3Hrachc

#technology #tech #ai

Good Morning, 2033 — A Sci-Fi Short Film.

What will your average morning look like in 2033? And who hacked us?

This sci-fi short film explores a number of near-future futurist predictions for the 2030s.

Sleep with a brain-sensing sleep mask that determines when to wake you. Wake up with gentle stimulation. Drink enhanced water with the nutrients, vitamins, and supplements you need. Slide on the smart glasses you wear all day. Do yoga and stretching on a smart scale that senses you, and get tips from a virtual trainer. Help yourself wake up with a 99 CRI, 500,000-lumen light. Go for a walk while your glasses scan your brain; live neurofeedback helps you meditate. Your kitchen uses biodata to determine the ideal healthy meal, and a kitchen robot makes it for you. You work in VR, AR, MR, and XR in the metaverse. You communicate with the world through your AI assistant and AI avatar. You enter a high-tech bathroom that uses UV light and robotics to clean your body for you. Ubers come in the form of flying cars, eVTOL aircraft that move at 300 km/h. Cities become a single color as every inch of road and building is covered in photovoltaic materials.

Creator: Cayden Pierce — https://caydenpierce.com

How did you make this sci-fi short film?

One of the promising technologies being developed for next-generation augmented/virtual reality (AR/VR) systems is holographic image displays that use coherent light illumination to emulate the 3D optical waves representing, for example, the objects within a scene. These holographic image displays can potentially simplify the optical setup of a wearable display, leading to compact and lightweight form factors.

On the other hand, an ideal AR/VR experience requires relatively high-resolution images to be formed within a large field-of-view to match the resolution and the viewing angles of the human eye. However, the capabilities of holographic image projection systems are restricted, mainly by the limited number of independently controllable pixels in existing image projectors and spatial light modulators.

A recent study published in Science Advances reported a deep learning-designed transmissive material that can project super-resolved images using low-resolution image displays. In their paper titled “Super-resolution image display using diffractive decoders,” UCLA researchers, led by Professor Aydogan Ozcan, used deep learning to spatially engineer transmissive diffractive layers at the wavelength scale, creating a material-based physical image decoder that achieves super-resolution image projection as the light is transmitted through its layers.
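The coherent image formation underlying such holographic displays can be illustrated with the angular spectrum method: a phase pattern on a spatial light modulator is propagated to the image plane by filtering its Fourier spectrum with a free-space transfer function. The sketch below is illustrative only; the grid size, wavelength, pixel pitch, and distance are assumptions, not parameters from the UCLA study.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex optical field a distance z (all lengths in meters)
    using the angular spectrum of plane waves."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies (1/m)
    fx2, fy2 = np.meshgrid(fx**2, fx**2)
    arg = 1.0 - wavelength**2 * (fx2 + fy2)
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    # Pure-phase transfer function for propagating waves; evanescent
    # components (arg <= 0) are discarded.
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Assumed example: a 128x128 phase-only SLM pattern, green laser light,
# 8-micron pixels, propagated 5 mm to the image plane.
n = 128
rng = np.random.default_rng(0)
slm = np.exp(1j * 2 * np.pi * rng.random((n, n)))
out = angular_spectrum(slm, wavelength=532e-9, pitch=8e-6, z=5e-3)
ratio = (np.abs(out)**2).sum() / (np.abs(slm)**2).sum()
print(f"optical power ratio after propagation: {ratio:.6f}")
```

At this pixel pitch every spatial frequency on the grid is propagating, so the transfer function is pure phase and optical power is conserved; the number of independently controllable pixels (here 128×128) is exactly the space-bandwidth bottleneck the article describes.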