
Breakthrough in Next-Generation Memory Technology!

A research team led by Professor Jang-Sik Lee from the Department of Materials Science and Engineering and the Department of Semiconductor Engineering at Pohang University of Science and Technology (POSTECH) has significantly enhanced the data storage capacity of ferroelectric memory devices by utilizing hafnia-based ferroelectric materials and an innovative device structure. Their findings, published on June 7 in the international journal Science Advances, mark a substantial advance in memory technology.

With the exponential growth in data production and processing due to advancements in electronics and artificial intelligence (AI), the importance of data storage technologies has surged. NAND flash memory, one of the most prevalent technologies for mass data storage, can store more data in the same area by stacking cells in a three-dimensional structure rather than a planar one. However, this approach relies on charge traps to store data, which results in higher operating voltages and slower speeds.

Recently, hafnia-based ferroelectric memory has emerged as a promising next-generation memory technology. Hafnia (hafnium oxide) enables ferroelectric memories to operate at low voltages and high speeds. However, a significant challenge has been the limited memory window for multilevel data storage.
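
To make the multilevel-storage constraint concrete, here is a rough back-of-the-envelope sketch (not from the paper; the window and margin values below are hypothetical) of how the size of the memory window caps the number of distinguishable states, and therefore the bits stored per cell.

import math

def max_levels(memory_window_v, margin_v):
    # Rough estimate: how many distinguishable states fit in the memory window
    # if adjacent states must be separated by at least margin_v volts.
    # Illustrative only; real devices also budget for noise, drift and retention loss.
    return int(memory_window_v // margin_v) + 1

def bits_per_cell(levels):
    # Storing `levels` distinguishable states yields log2(levels) bits per cell.
    return math.log2(levels)

# Hypothetical numbers, purely for illustration.
for window_v in (1.0, 2.0, 4.0):
    levels = max_levels(window_v, margin_v=0.5)
    print(f"window {window_v} V -> {levels} levels -> {bits_per_cell(levels):.1f} bits/cell")

In this toy model, doubling the memory window roughly adds one bit per cell, which is why widening the window is the key to multilevel storage.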

Philosopher David Chalmers: We Can Be Rigorous in Thinking about the Future

David is one of the world’s best-known philosophers of mind and thought leaders on consciousness. I was a freshman at the University of Toronto when I first read some of his work. Since then, Chalmers has been one of the few philosophers (together with Nick Bostrom) who have written and spoken publicly about the Matrix simulation argument and the technological singularity. (See, for example, David’s presentation at the 2009 Singularity Summit, or read his The Singularity: A Philosophical Analysis.)

During our conversation with David, we discuss topics such as: how and why Chalmers got interested in philosophy; his search for answers to what he considers some of the biggest questions, such as the nature of reality, consciousness, and artificial intelligence; the fact that academia in general, and philosophy in particular, doesn’t seem to engage with technology; our chances of surviving the technological singularity; the importance of Watson, the Turing Test, and other benchmarks on the way to the singularity; consciousness, recursive self-improvement, and artificial intelligence; the ever-shrinking domain of solely human expertise; mind uploading and what he calls the hard problem of consciousness; the usefulness of philosophy and ethics; religion, immortality, and life extension; and reverse engineering long-dead people, such as Ray Kurzweil’s father.

As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

China Demos New QINGLONG AI Robot with 43 DoF (2024 WORLD ARTIFICIAL INTELLIGENCE CONFERENCE)

Introducing the Qinglong humanoid robot with open-source AI, plus Tesla’s Optimus Gen 2 shown in public for the first time ever. The Meta HOT3D dataset is bringing robotic hands closer than ever before, plus China’s KLING is now available on a web app.

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
AI Marketplace: https://taimine.com/
Advanced Robotics, Drones, 3D Printers, & AI Tech HERE: https://bit.ly/3wNxDyA

AI news timestamps:
0:00 Qinglong humanoid robot
0:33 Specifications
1:08 Performance
1:33 AI development
1:48 Future roadmap
2:31 Tesla Optimus Gen 2
2:49 Key improvements
3:23 Roadmap
3:53 Meta HOT3D
5:10 KLING text to video web app
5:44 Meta 3D Gen
6:10 2 AI models
7:10 EMU AI

#ai #robot #technology

The Turing Lectures: The future of generative AI

With its ability to generate human-like language and complete a variety of tasks, generative AI has the potential to revolutionise the way we communicate, learn and work. But what other doors will this technology open for us, and how can we harness it to make great leaps in technological innovation? Have we finally done it? Have we cracked AI?

Join Professor Michael Wooldridge for a fascinating discussion on the possibilities and challenges of generative AI models, and their potential impact on societies of the future.

Michael Wooldridge is Director of Foundational AI Research and Turing AI World-Leading Researcher Fellow at The Alan Turing Institute. His work focuses on multi-agent systems, developing techniques for understanding their dynamics. His research draws on ideas from game theory, logic, computational complexity, and agent-based modelling. He has been an AI researcher for more than 30 years and has published over 400 scientific articles on the subject.

This lecture is part of a series of events — How AI broke the internet — that explores the various angles of large language models and generative AI in the public eye.

This series of Turing Lectures is organised in collaboration with The Royal Institution of Great Britain.

Ammo ATM? AI-powered bullet vending machines introduced in US

Vending machines are a charming old technology that supposedly makes people’s lives easier by making water, snacks and food in general readily available.


American Rounds says that it aims to redefine convenience in ammunition purchasing, as its ammo dispensers can be accessed round the clock.

The company’s website also promises a ‘hassle-free buying experience every time’ and a smooth transaction whenever a prospective buyer uses one of the machines.

The ‘smart automated’ bullet-dispensing machines use AI technology to verify the buyer’s details before allowing a purchase, according to the American Rounds website.

A first physical system to learn nonlinear tasks without a traditional computer processor

Scientists run into a lot of tradeoffs trying to build and scale up brain-like systems that can perform machine learning. For instance, artificial neural networks are capable of learning complex language and vision tasks, but the process of training computers to perform these tasks is slow and requires a lot of power.

Training machines to learn digitally but perform tasks in analog—meaning the input varies with a physical quantity, such as voltage—can reduce time and power, but small errors can rapidly compound.
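
As a rough, hypothetical illustration of that compounding effect (the per-stage error rate and stage counts below are made up, not taken from the study): if each analog stage introduces a small relative error, the accumulated deviation grows multiplicatively with depth.

# Illustrative sketch of error compounding in a deep analog pipeline.
# The 1% per-stage error and the stage counts are hypothetical, not measured values.
per_stage_error = 0.01

for stages in (10, 50, 100):
    compounded = (1 + per_stage_error) ** stages - 1
    print(f"{stages} stages: about {compounded * 100:.0f}% accumulated deviation")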

An electrical network previously designed by physics and engineering researchers at the University of Pennsylvania is more scalable, because errors don’t compound in the same way as the system grows. However, it is severely limited: it can only learn linear tasks, ones with a simple relationship between the input and output.
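
The linear-versus-nonlinear distinction is easiest to see with XOR, the textbook example of a task no purely linear model can learn. The sketch below is a generic illustration of that point, not the Penn researchers’ network: an ordinary least-squares linear fit fails on XOR, while adding a single nonlinear feature lets the same solver fit it exactly.

import numpy as np

# XOR: the textbook nonlinear task. No linear function of (x1, x2) can reproduce it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Purely linear model: y ~ w1*x1 + w2*x2 + b, fitted by least squares.
A_lin = np.hstack([X, np.ones((4, 1))])
w_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
print("linear predictions:   ", A_lin @ w_lin)        # stuck at 0.5 for every input

# Add one nonlinear feature (x1*x2) and the same solver fits XOR exactly.
A_nl = np.hstack([X, (X[:, 0] * X[:, 1]).reshape(-1, 1), np.ones((4, 1))])
w_nl, *_ = np.linalg.lstsq(A_nl, y, rcond=None)
print("nonlinear predictions:", A_nl @ w_nl)          # approximately [0, 1, 1, 0]

Nonlinear capability matters because most practical workloads, such as the language and vision tasks mentioned above, fall into this second category.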

SenseTime unveils SenseNova 5o, China’s first real-time multimodal AI model to rival GPT-4o

Chinese AI company SenseTime introduced its new multimodal AI model SenseNova 5o and the improved language model SenseNova 5.5 at the World Artificial Intelligence Conference.

SenseTime claims that SenseNova 5o is China’s first real-time multimodal model that provides multimodal AI interaction comparable to GPT-4o. It can process audio, text, image and video data, allowing users to interact with the model simply by talking to it.