
Artificial Intelligence has been improving rapidly these past few years, and according to top AI scientists such as Yann LeCun, it’s now becoming obvious that the AI of 2030 will be almost unrecognizable compared to today’s systems. It will be part of everyday life in the form of digital assistants and more. In this future predictions video, I’ll show you what other awesome abilities the best AI of the future will have.

TIMESTAMPS:
00:00 The first Human-Level AI
01:34 What are SSL World Models?
02:26 What is Self Supervised Learning?
05:24 Learning like Humans.
09:14 The Implications of Human AI
10:57 Last Words.

#ai #agi #technology

Mayo Clinic researchers have proposed a new model for mapping the symptoms of Alzheimer’s disease to brain anatomy. This model was developed by applying machine learning to patient brain imaging data. It uses the entire function of the brain rather than specific brain regions or networks to explain the relationship between brain anatomy and mental processing. The findings are reported in Nature Communications.

“This new model can advance our understanding of how the brain works and breaks down during aging and Alzheimer’s disease, providing new ways to monitor, prevent and treat disorders of the mind,” says David T. Jones, M.D., a Mayo Clinic neurologist and lead author of the study.
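The study itself is more involved, but the whole-brain approach can be illustrated in a few lines. Below is a minimal, hypothetical Python sketch (synthetic data; all names are illustrative, and this is not the Mayo Clinic team’s code): imaging data are decomposed into brain-wide patterns, and symptom measures are regressed on those pattern scores rather than on activity in single regions.

```python
# Hypothetical sketch of the whole-brain idea: decompose imaging data into
# global patterns and relate those patterns to symptom scores, rather than
# testing individual brain regions. Data and names are illustrative only;
# this is not the study's code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_patients, n_voxels = 200, 10_000
scans = rng.normal(size=(n_patients, n_voxels))  # flattened brain images
symptoms = rng.normal(size=n_patients)           # e.g., a cognition score

# Each component is a brain-wide pattern spanning all voxels at once.
pca = PCA(n_components=10)
patterns = pca.fit_transform(scans)              # per-patient pattern scores

# Relate whole-brain pattern expression to the clinical measure.
model = LinearRegression().fit(patterns, symptoms)
print("variance explained:", model.score(patterns, symptoms))
```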

Alzheimer’s disease typically has been described as a protein-processing problem. The toxic proteins amyloid and tau deposit in areas of the brain, causing neuron failure that results in clinical symptoms such as memory loss, difficulty communicating and confusion.

Nvidia’s AI model is pretty impressive: a tool that quickly turns a collection of 2D snapshots into a 3D-rendered scene. The tool is called Instant NeRF, referring to “neural radiance fields”.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The Nvidia research team has developed an approach that accomplishes this task almost instantly, making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering.

NeRFs use neural networks to represent and render realistic 3D scenes based on an input collection of 2D images.
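The production Instant NeRF relies on multiresolution hash encodings and custom CUDA kernels, but the core NeRF idea is compact enough to sketch. Below is a minimal, hypothetical PyTorch sketch (TinyNeRF, render_ray, and all parameters are illustrative, not Nvidia’s code): a small MLP maps a 3D point and viewing direction to a color and a volume density, and a pixel is rendered by compositing samples along the camera ray.

```python
# Minimal, simplified sketch of the NeRF idea (illustrative only; the real
# Instant NeRF uses hash encodings and optimized CUDA kernels). An MLP maps
# a 3D position plus viewing direction to color and density; a pixel color
# is recovered by volume rendering along the camera ray.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB color + density sigma
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])          # colors in [0, 1]
        sigma = torch.relu(out[..., 3])            # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, n_samples=64, near=2.0, far=6.0):
    """Composite color along one ray with the standard volume-rendering rule."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # sample points along the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = t[1] - t[0]                            # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)        # opacity of each segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), 0)[:-1]
    weights = alpha * trans                        # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)     # final pixel color
```

Training then amounts to rendering rays from the known camera poses and minimizing the squared error against the captured photos; Nvidia’s contribution is making that loop fast enough to finish in seconds rather than hours.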

Quantum computing startups are all the rage, but it’s unclear if they’ll be able to produce anything of use in the near future.


As a buzzword, quantum computing probably ranks only below AI in terms of hype. Large tech companies such as Alphabet, Amazon, and Microsoft now have substantial research and development efforts in quantum computing. A host of startups have sprung up as well, some boasting staggering valuations. IonQ, for example, was valued at $2 billion when it went public in October through a special-purpose acquisition company. Much of this commercial activity has happened with baffling speed over the past three years.

I am as pro-quantum-computing as one can be: I’ve published more than 100 technical papers on the subject, and many of my PhD students and postdoctoral fellows are now well-known quantum computing practitioners all over the world. But I’m disturbed by some of the quantum computing hype I see these days, particularly when it comes to claims about how it will be commercialized.

On top of the environmental concerns, Japan has an added motivation for this push towards automation: its aging population and concurrent low birth rates mean its workforce is rapidly shrinking, and the implications for the country’s economy aren’t good.

Thus it behooves the Japanese to automate as many job functions as they can (and the rest of the world likely won’t be far behind, though they won’t have quite the same impetus). According to the Nippon Foundation, more than half of Japanese ship crew members are over the age of 50.

In partnership with Mitsui O.S.K. Lines Ltd., the foundation recently completed two tests of autonomous ships. The first was a 313-foot container ship called the Mikage, which sailed 161 nautical miles from Tsuruga Port, north of Kyoto, to Sakai Port near Osaka. Upon reaching its destination port, the ship was even able to steer itself into its designated bay, with drones dropping its mooring line.

What most people define as common sense is actually common learning, and much of that is biased.

The biggest short-term problem in AI: as mentioned in the video clip, an over-emphasis on dataset size, regardless of accuracy, representation or accountability.

The biggest long-term problem in AI: instead of trying to replace humans, it should be built to complement us. A merge is neither necessary nor advisable.

If we think about it, building a machine to think like a human is like buying a racehorse and insisting that it function like a camel. It is doomed to fail, because there are only two scenarios: either humans are replaced or they are not. If we are, then we have failed. If we are not, then the development of the AI has failed.

Time for a change of direction.

I think of a superintelligent learning machine as a data addict; then it’s easy to see how we fit in.