
Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
ChatGPT from OpenAI has shocked many users with its ability to complete programming tasks from natural language descriptions, draft legal contracts, automate tasks, translate languages, write articles, answer questions, make video games, handle customer service queries, and much more, often producing output that reads as human-level. Meanwhile, PAL Robotics has taught its humanoid AI robots to use objects in the environment to avoid falling when they lose balance.

AI News Timestamps:
0:00 Why OpenAI's ChatGPT Has People Panicking
3:29 New Humanoid AI Robot Technology
8:20 Coursera Deep Learning AI

Twitter / Reddit Credits:
ChatGPT3 AR (Stijn Spanhove) https://bit.ly/3HmxPYm
Roblox game made with ChatGPT3 (codegodzilla) https://bit.ly/3HkdXoY
ChatGPT3 making text-to-image prompts (Manu. Vision | Futuriste) https://bit.ly/3UyyKrG
ChatGPT3 for video game creation (u/apinanaivot) https://bit.ly/3VI17oI
ChatGPT3 making video game land (Lucas Ferreira da Silva) https://bit.ly/3iMdotO
ChatGPT3 deleting the Blender default cube (Blender Renaissance) https://bit.ly/3FcM3rZ
ChatGPT3 responding about the Matrix (Mario Reder) https://bit.ly/3UIsX2K
ChatGPT3 writing an acquisition rationale for the board of directors (The Secret CFO) https://bit.ly/3BhmmW5
ChatGPT3 used to get job offers (Leon Noel) https://bit.ly/3UFl3qT
Automated RPA with ChatGPT3 (Sahar Mor) https://bit.ly/3W1ZkKK
ChatGPT3 making 3D web designs (Avalon•4) https://bit.ly/3UzGXf7
ChatGPT3 making a legal contract (Atri) https://bit.ly/3BljuYn
ChatGPT3 making a signup program (Chris Raroque) https://bit.ly/3Hrachc

#technology #tech #ai

Good Morning, 2033 — A Sci-Fi Short Film.

What will your average morning look like in 2033? And who hacked us?

This sci-fi short film explores a number of futurist predictions for the 2030s.

Sleep with a brain-sensing sleep mask that determines when to wake you. Wake up with gentle stimulation. Drink enhanced water with the nutrients, vitamins, and supplements you need. Slide on the smart glasses you wear all day. Do yoga and stretching on a smart scale that senses you, and get tips from a virtual trainer. Help yourself wake up with a 99 CRI, 500,000-lumen light. Go for a walk while your glasses scan your brain. Live neurofeedback helps you meditate. Your kitchen uses biodata to figure out the ideal healthy meal, and a kitchen robot makes it for you. You work in VR, AR, MR, and XR in the metaverse. You communicate with the world through your AI assistant and AI avatar. You enter a high-tech bathroom that uses UV light and robotics to clean your body for you. Ubers arrive as flying cars, eVTOL aircraft that move at 300 km/h. Cities become a single color as every inch of road and building is covered in photovoltaic material.

One of the promising technologies being developed for next-generation augmented/virtual reality (AR/VR) systems is holographic image displays that use coherent light illumination to emulate the 3D optical waves representing, for example, the objects within a scene. These holographic image displays can potentially simplify the optical setup of a wearable display, leading to compact and lightweight form factors.

An ideal AR/VR experience, on the other hand, requires relatively high-resolution images to be formed within a large field of view, to match the resolution and viewing angles of the human eye. However, the capabilities of holographic image projection systems remain restricted, mainly due to the limited number of independently controllable pixels in existing image projectors and spatial light modulators.

A recent study published in Science Advances reported a deep learning-designed transmissive material that can project super-resolved images using low-resolution image displays. In their paper, titled "Super-resolution image display using diffractive decoders," UCLA researchers led by Professor Aydogan Ozcan used deep learning to spatially engineer transmissive diffractive layers at the wavelength scale, creating a material-based physical image decoder that achieves super-resolution image projection as light is transmitted through its layers.
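The paper itself describes a fabricated, passive decoder; the sketch below only illustrates the general design loop behind this class of work: simulate coherent light propagating through trainable phase layers, compare the projected intensity to a target image, and update the layers by gradient descent. All parameters here (grid size, wavelength, pitch, layer spacing, layer count) are illustrative assumptions, not values from the paper.

```python
import math
import torch
import torch.nn.functional as F

N = 64             # simulation grid size (pixels per side); illustrative
WAVELEN = 0.75e-6  # illumination wavelength in meters; assumed value
PITCH = 0.5e-6     # feature pitch in meters; assumed, wavelength-scale
Z = 40e-6          # spacing between diffractive layers in meters; assumed

def angular_spectrum(field, z):
    """Propagate a complex optical field by distance z (angular spectrum method)."""
    fx = torch.fft.fftfreq(N, d=PITCH)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    k2 = (1.0 / WAVELEN) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * math.pi * torch.sqrt(torch.clamp(k2, min=0.0))
    H = torch.exp(1j * kz * z)  # transfer function; evanescent waves clamped
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Trainable phase-only diffractive layers: the "decoder" to be designed.
phases = [torch.zeros(N, N, requires_grad=True) for _ in range(3)]
opt = torch.optim.Adam(phases, lr=0.05)

# Toy task: a coarse input pattern should project to a finer target pattern.
lowres = torch.zeros(N, N)
lowres[24:40, 24:40] = 1.0   # blocky 16x16 input "pixel"
target = torch.zeros(N, N)
target[28:36, 28:36] = 1.0   # finer 8x8 target feature

for step in range(200):
    field = lowres.to(torch.complex64)
    for p in phases:                      # light passes through each layer
        field = angular_spectrum(field, Z) * torch.exp(1j * p)
    intensity = angular_spectrum(field, Z).abs() ** 2
    loss = F.mse_loss(intensity / intensity.max(), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the published approach, the optimized layers are then physically fabricated, so the decoding happens passively in the material as light propagates through it, rather than in a processor.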

Neuralink's invasive brain implant vs. Phantom Neuro's minimally invasive muscle implant: a deep dive on brain-computer interfaces, Phantom Neuro, and the future of restoring lost function.

Connor Glass.
Phantom is creating a human-machine interfacing system for lifelike control of technology. We are currently hiring skilled and forward-thinking electrical, mechanical, UI, AR/VR, and AI/ML engineers. Looking to get in touch with us? Send us an email at [email protected].

Phantom Neuro.
Phantom is a neurotechnology company, spun out of the lab at The Johns Hopkins University School of Medicine, that is enabling lifelike control of robotic orthopedic technologies, such as prosthetic limbs and exoskeletons. Phantom’s solution, the Phantom X, consists of low-risk implantable sensors, AI, and enabling software. By providing superior control of robotic orthopedic mechanisms, the Phantom X will drastically improve the lives of individuals with limb difference who have yet to see a tangible improvement in quality of life despite significant advancements in the field of robotics.
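Phantom has not published the internals of the Phantom X decoder, but the general idea behind this class of system is to map short windows of multi-channel muscle-signal samples to intended movement classes, which are then translated into prosthesis or exoskeleton commands. Below is a minimal, hypothetical PyTorch sketch of that idea; the channel count, window length, gesture classes, and network are all illustrative assumptions:

```python
import torch
import torch.nn as nn

CHANNELS = 8   # assumed number of implanted sensing channels; hypothetical
WINDOW = 200   # samples per decoding window (e.g. 200 ms at 1 kHz); hypothetical
GESTURES = 5   # assumed movement classes (open, close, pinch, ...); hypothetical

class EMGDecoder(nn.Module):
    """Small 1D CNN mapping a window of muscle signals to a movement class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(CHANNELS, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, GESTURES),
        )

    def forward(self, x):  # x: (batch, CHANNELS, WINDOW)
        return self.net(x)

model = EMGDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop shape only: random tensors stand in for real recordings.
for step in range(100):
    x = torch.randn(16, CHANNELS, WINDOW)
    y = torch.randint(0, GESTURES, (16,))
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, the decoded class would be mapped to a prosthesis command.
probs = torch.softmax(model(torch.randn(1, CHANNELS, WINDOW)), dim=-1)
```

A real implant decoder would also need per-user calibration and safety constraints; this sketch only shows the shape of the signal-to-intent mapping.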

This time I want to talk about a new concept for this Age of Artificial Intelligence and the already insipid world of social networks. Quite a few years ago, I named it "Counterpart" (long before the TV series "Counterpart" and "Black Mirror", or even the movie "Transcendence").

It was the essence of the ETER9 Project that was taking shape in my head.

Over the years, with the evolution of technology (and of human beings themselves), the "Counterpart" concept has kept improving, and with each passing day it makes more sense!

More and more companies and scientists are working to equip contact lenses with applications that not long ago still seemed like science fiction, such as the ability to record videos or diagnose and even treat diseases. Mojo Vision, an American startup, is one company that has been improving its prototypes since 2015. It is currently developing an ambitious project involving augmented reality lenses that, in addition to correcting your vision, will let you consult all kinds of information, from the trails on a ski slope to your pace when you run, all through microLED displays the size of a grain of sand.

“In the short term, it sounds like a futuristic idea, but 20 years ago we couldn’t even imagine many of the technological advances that we have today,” says Ana Belén Cisneros del Río, deputy dean of the College of Opticians-Optometrists in the Spanish region of Castilla y León, of the Mojo Vision project. However, Daniel Elies, a specialist in cornea, cataract and refractive surgery and medical director of the Institute of Ocular Microsurgery (IMO) Miranza Group in Madrid, does not believe that this type of contact lens will become part of everyday life anytime soon, “especially due to cost issues.”

One of the companies interested in manufacturing augmented reality contacts is Magic Leap. Sony, for its part, applied a few years ago for a patent for lenses that can record videos, and Samsung did the same for lenses equipped with a camera and a display that projects images directly into the user’s eye. Some researchers are trying to create robotic lenses that can zoom in and out with the blink of an eye, and yet others are working on night vision contact lenses, which could be useful in military applications.

Virtual reality (VR) and augmented reality (AR) headsets are becoming increasingly advanced, enabling ever more engaging and immersive digital experiences. To make VR and AR experiences even more realistic, engineers have been trying to create better systems that produce tactile and haptic feedback matching virtual content.

Researchers at the University of Hong Kong, City University of Hong Kong, the University of Electronic Science and Technology of China (UESTC) and other institutes in China have recently created WeTac, a miniaturized, soft and ultrathin wireless electrotactile system that produces tactile feedback on a user's skin. The system, introduced in Nature Machine Intelligence, works by delivering weak electrical currents through a user's skin.

"As the tactile sensitivity among individuals and different parts of the hand within a person varies widely, a universal method to encode tactile information into faithful feedback in hands according to sensitivity features is urgently needed," Kuanming Yao and his colleagues wrote in their paper. "In addition, existing haptic interfaces worn on the hand are usually bulky, rigid and tethered by cables, which is a hurdle for accurately and naturally providing haptic feedback."
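The quote points at the core encoding problem: the same current feels different at different sites of the hand and for different users, so stimulation has to be scaled per site from calibrated thresholds. WeTac's actual pipeline is described in the paper; the sketch below is only a hypothetical illustration of that general idea, with invented site names and threshold values:

```python
# Hypothetical sketch of sensitivity-aware encoding; the site names,
# thresholds, and linear scaling rule are illustrative, not WeTac's
# published parameters.

# Per-site (sensation threshold, discomfort threshold) in mA, as would be
# measured in a per-user calibration session. Values below are invented.
thresholds = {
    "fingertip_index": (0.6, 1.8),
    "palm_center":     (1.1, 3.0),
    "thumb_base":      (0.9, 2.5),
}

def encode(site: str, intensity: float) -> float:
    """Map a desired tactile intensity in [0, 1] to a stimulation current
    (mA) between the site's sensation and discomfort thresholds."""
    lo, hi = thresholds[site]
    intensity = min(max(intensity, 0.0), 1.0)
    return lo + intensity * (hi - lo)

# The same virtual touch maps to different currents on differently
# sensitive sites, so it should feel comparable everywhere:
for site in thresholds:
    print(site, round(encode(site, 0.5), 2), "mA")
```

Clamping the command into the calibrated band is the key design choice: it keeps every site above the sensation threshold but below discomfort, whatever the virtual content requests.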

When Eileen Brown looked at the ETER9 Project in 2015 (crazy to many, visionary to a few) and wrote an interesting article for ZDNET titled "New social network ETER9 brings AI to your interactions," she gave worldwide projection to something the world was not expecting.

Someone in a lost corner of the world (outside the United States) was risking everything he had (very little, or less than nothing) on a vision worthy of the American dream. At that time, Facebook was already beginning to annoy clearer minds looking for something different and a more innovative world.

Today, after that test bench, we see that Facebook (Meta, or whatever) is nothing but an illusion, or, I dare say, a big disappointment. No, no, no! I am not bad-mouthing Facebook just because I now have a project in hand that is seen as a potential competitor.

Over the last three decades, the digital world that we access through smartphones and computers has grown so rich and detailed that much of our physical world has a corresponding life in this digital reality. Today, the physical and digital realities are on a steady course to merging, as robots, Augmented Reality (AR) and wearable digital devices enter our physical world, and physical items get their digital twin computer representations in the digital world.

These digital twins can be uniquely identified and protected from manipulation thanks to crypto technologies like blockchains. The trust that these technologies provide is extremely powerful, helping to fight counterfeiting, increase supply chain transparency, and enable the circular economy. However, a weak point is that there is no versatile and generally applicable identifier of physical items that is as trustworthy as a blockchain. This breaks the connection between the physical and digital twins and therefore limits the potential of technical solutions.

In a new paper published in Light: Science & Applications, an interdisciplinary team of scientists led by Professors Jan Lagerwall (physics) and Holger Voos (robotics) from the University of Luxembourg, Luxembourg, and Prof. Mathew Schwartz (architecture, construction of the built environment) from the New Jersey Institute of Technology, U.S., propose an innovative solution to this problem where physical items are given unique and unclonable fingerprints realized using cholesteric spherical reflectors, or CSRs for short.
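In software terms, the role of a CSR fingerprint is to give the physical item an identifier that can be bound to its digital twin record. Assuming a stable digest can be derived from the optical reading (the hard part, which the paper addresses with CSRs), the pairing logic itself is simple. Here is a minimal Python sketch with hypothetical names and a plain dictionary standing in for the blockchain:

```python
# Minimal sketch of the pairing idea: derive a digest from a physical
# fingerprint reading and anchor it to the item's digital twin record.
# CSR feature extraction is stubbed out; all names here are hypothetical.
import hashlib

def fingerprint_digest(csr_reading: bytes) -> str:
    """Hash a (pre-processed, stable) CSR reflection pattern."""
    return hashlib.sha256(csr_reading).hexdigest()

ledger: dict[str, dict] = {}   # stand-in for a blockchain registry

def register_item(item_id: str, csr_reading: bytes, metadata: dict) -> None:
    """Bind the physical fingerprint to the item's digital twin record."""
    ledger[item_id] = {
        "fingerprint": fingerprint_digest(csr_reading),
        "twin": metadata,
    }

def verify_item(item_id: str, csr_reading: bytes) -> bool:
    """Re-read the physical fingerprint and check it against the record."""
    record = ledger.get(item_id)
    return record is not None and \
        record["fingerprint"] == fingerprint_digest(csr_reading)

register_item("pump-0042", b"stable-csr-pattern", {"model": "X1"})
print(verify_item("pump-0042", b"stable-csr-pattern"))   # True
print(verify_item("pump-0042", b"tampered-pattern"))     # False
```

Real optical readings vary with viewing conditions, so a practical system would need robust feature extraction and tolerant matching rather than an exact hash comparison; the sketch only shows where the unclonable fingerprint slots into the digital twin's chain of trust.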