
Google’s New AI Learned To See In The Dark! 🤖

Google’s new sensor denoising algorithm brings yet another game changer for low-light photography. Within a handful of years, it will join other advances coming down the pipe, giving further impetus to a revolution in night vision. The video below speaks for itself. In effect, the system takes a series of images from different angles and exposures, then accurately reconstructs what is missing.
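As a rough intuition for why combining many noisy raw captures helps, here is a minimal sketch (an illustrative toy, not the RawNeRF pipeline from the paper linked below): averaging aligned raw frames suppresses zero-mean sensor noise roughly in proportion to the square root of the number of frames.

```python
# Minimal sketch (not the RawNeRF method): averaging N aligned noisy raw
# frames of the same scene reduces zero-mean sensor noise by roughly sqrt(N),
# which is the basic reason combining many captures helps in low light.
import numpy as np

rng = np.random.default_rng(0)

clean = rng.uniform(0.0, 0.05, size=(64, 64))           # a dim scene in linear raw space
num_frames = 25
noisy = clean + rng.normal(0.0, 0.02, size=(num_frames, 64, 64))

single = noisy[0]                                        # one noisy capture
merged = noisy.mean(axis=0)                              # simple aligned-frame average

print("single-frame RMSE:", np.sqrt(np.mean((single - clean) ** 2)))
print("merged RMSE:      ", np.sqrt(np.mean((merged - clean) ** 2)))
```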


❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers.

📝 The paper “NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images” is available here:
https://bmild.github.io/rawnerf/index.html.

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers.
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join.

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:

Google’s Sergey Brin talks AI safety efforts to prevent ‘sci-fi style sentience’

Google co-founder Sergey Brin has taken a stance on artificial intelligence rather similar to that of Tesla CEO Elon Musk, emphasizing AI dangers in a recent investor communication. According to the Russian-born billionaire, the present day is an era of possibilities, but it is also a time when responsibility has to be practiced, particularly when it comes to emerging technologies.

“We’re in an era of great inspiration and possibility, but with this opportunity comes the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies,” he wrote.

Brin’s statements were outlined in Alphabet’s recent Founders’ Letter, where the 44-year-old billionaire described how Google is utilizing bleeding-edge technology for its ventures. While AI as a discipline is still an emerging field, Brin noted that there are already a lot of everyday applications for the technology. Among these are the algorithms utilized by Waymo’s self-driving cars, the smart cooling units of Google’s data centers, and of course, Google Translate and YouTube’s automatic captions.

Jeff Lichtman (Harvard) Part 2: Neuromuscular Connectomics

The Connectome and Connectomics: Seeking Neural Circuit Motifs

Talk Overview: The human brain is extremely complex, with much greater structural and functional diversity than other organs, and this complexity is determined by both one’s experiences and one’s genes. In Part 1 of his talk, Lichtman explains how mapping the connections in the brain (the connectome) may lead to a better understanding of brain function. Together with his colleagues, Lichtman has developed tools to label individual cells in the nervous system with different colors, producing beautiful and revealing maps of the neuronal connections.
Using transgenic mice with differently colored, fluorescently labeled proteins in each neuron (Brainbow mice), Lichtman and his colleagues were able to follow the formation and destruction of neuromuscular junctions during mouse development. This work is the focus of Part 2.
In Part 3, Lichtman asks whether some day it might be possible to map all of the neural connections in the brain. He describes the technical advances that have allowed him and his colleagues to begin this endeavor as well as the enormous challenges to deciphering the brain connectome.

Speaker Bio: Jeff Lichtman’s interest in how specific neuronal connections are made and maintained began while he was an MD-PhD student at Washington University in Saint Louis. Lichtman remained at Washington University for nearly 30 years. In 2004, he moved to Harvard University, where he is Professor of Molecular and Cellular Biology and a member of the Center for Brain Science.
A major focus of Lichtman’s current research is to decode the map of all the neural connections in the brain. To this end, Lichtman and his colleagues have developed exciting new tools and techniques such as “Brainbow” mice and automated ultra-thin tissue-slicing machines.

TRANSFORMERS: The Future Is Here With This Robotics Tech | New AI For Quantum Computers

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
AI researchers aim to achieve stability, speed, manipulability, and a gain in operational height for the robot by using machine learning and a 3D-printed stick on the robot’s hind legs, allowing a quadruped transformer to become a humanoid biped robot and walk. Quantum researchers designed a machine learning-based method showing how artificial controllers can discover non-intuitive pulse sequences that rapidly cool a mechanical object from high to ultracold temperatures faster than standard methods, which could be used to advance quantum computers. Researchers used deep reinforcement learning to arrange atoms into a lattice shape, which could be used to create new materials and nanodevices, including a robot arm made of atoms.
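For a flavor of the reinforcement-learning loop behind results like the atom-arrangement work, here is a minimal, hypothetical toy (tabular Q-learning on a 1-D line of positions, not the researchers’ actual setup): an agent learns to nudge an atom toward a target lattice site.

```python
# Toy illustration only: tabular Q-learning that learns to move an "atom"
# along a 1-D line of positions toward a target lattice site. The real
# research uses deep RL on far richer state spaces; this just shows the loop.
import numpy as np

n_positions, target = 10, 7
actions = [-1, +1]                              # move the atom left or right
Q = np.zeros((n_positions, len(actions)))       # value of each action at each position
rng = np.random.default_rng(2)

for episode in range(500):
    pos = int(rng.integers(n_positions))
    for _ in range(50):
        # epsilon-greedy action choice
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(Q[pos].argmax())
        new_pos = int(np.clip(pos + actions[a], 0, n_positions - 1))
        reward = 1.0 if new_pos == target else -0.01
        # standard Q-learning update
        Q[pos, a] += 0.1 * (reward + 0.9 * Q[new_pos].max() - Q[pos, a])
        pos = new_pos
        if pos == target:
            break

print("learned action per position (0 = left, 1 = right):", Q.argmax(axis=1))
```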

AI News Timestamps:
0:00 Transformers Robotics Tech.
2:39 Artificial Intelligence To Control Quantum Computer.
5:21 New Nano Scale Robot Arm.

#technology #tech #ai

Learn a Craft to Survive the Coming Robot Apocalypse

Apple Inc. recently added audiobook narration to the growing list of occupations where algorithms are poised to replace humans alongside graphic designers, college essayists and limerick writers. Luckily, the fine art of newslettering remains (ahem) far beyond the capabilities of even the most sophisticated artificial intelligence software. Still, hope is at hand for those not fortunate enough to toil in the newsletter mines but still seeking gainful employment that won’t disappear as robots take control.


To remain employed in an AI-dominated workplace, train as an artisan.

Chip that Mimics the Human Brain

In this video I discuss why neuromorphic processors are the future of AI. #NeuromorphicChips #Loihi #IntelNeuromorphic
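To make the idea concrete, here is a minimal, hypothetical sketch of a leaky integrate-and-fire neuron, the kind of event-driven unit that neuromorphic chips like Loihi implement in silicon (a simplified illustration, not Intel’s actual neuron model or API):

```python
# Toy leaky integrate-and-fire (LIF) neuron: it integrates input current,
# leaks charge over time, and emits a spike when its membrane potential
# crosses a threshold. Neuromorphic chips build hardware around such units.
import numpy as np

def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0                          # membrane potential
    spike_times = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in          # leaky integration of the input
        if v >= threshold:           # fire when the threshold is crossed
            spike_times.append(t)
            v = 0.0                  # reset after the spike
    return spike_times

rng = np.random.default_rng(1)
current = rng.uniform(0.0, 0.3, size=100)    # random input drive
print("spike times:", simulate_lif(current))
```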

➞ Register for Phenom AI Day here:
https://bit.ly/3l4CDWx.

TIMESTAMPS:
00:00 — Intro.
00:32 — Phenom AI day (Ad)
01:03 — What is Neuromorphic Chip.
03:19 — Intel Loihi explained.
07:16 — New Intel Loihi 2
09:45 — Analog Neuromorphic chip by Rain Neuromorphic.
10:45 — Other chips.

***
➞ Support me on Patreon: https://www.patreon.com/AnastasiInTech.
➞ Subscribe for new videos every week! ❤ And leave me a comment below!

This AI robot arm can do everything from making coffee to 3D printing

It can also rotate 220 degrees and lift up to 26.5 ounces of weight.

Supernova, a South Korean startup, has designed HUENIT, a robotic arm to help people with various household chores and creative tasks. Supernova showcased its AI Camera and Robot Arm at CES 2023. The company has been developing innovative robots to help people with everyday tasks.

Although many innovative technologies were showcased at CES 2023, the HUENIT robot arm captured the attention of visitors. HUENIT is an easy-to-use, AI-based, multi-functional robotic arm that combines advanced AI technologies with a modular arm to work on complex tasks with high precision. The robot can do everything from making coffee to 3D printing a prototype.

This first FDA-approved dental robot will make implants safer

The robot is used for installing implants.

Could a robot make dentistry speedier and more comfortable? That’s what the new Yomi robot aims to do for implants, according to the device’s website. “We created Yomi to tackle one of the dental procedures people fear the most, implants,” write the robot’s inventors on their site.

The first and only FDA-approved dental surgery robot.

“Yomi is the first and only FDA-cleared robotic system for dental surgery. Through a combination of haptic feedback, intuitive visualization, and audio cues, Yomi helps doctors place implants with superior accuracy and precision.”


Yomi.