
I will begin with the first point and make my way gradually to the tenth point.

I’ve already mentioned that the AI of the 1970s was toy-like compared with the far more capable and expansive AI of today. Modern generative AI, for example, draws on vast amounts of data scanned from across the Internet to pattern-match the nature of human writing. Doing so requires massive computing resources, far beyond anything available in the 1970s. This large-scale modeling, or pattern matching, is what makes contemporary generative AI seem so fluent.

It is often said that generative AI is merely mimicking or parroting human writing.
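The pattern-matching idea can be illustrated with a toy example. A minimal sketch, assuming nothing about how real generative AI is built: a bigram model that counts which word follows which in training text and then generates by sampling likely successors. Real LLMs are vastly more sophisticated, but the "parroting" intuition is the same.

```python
from collections import defaultdict
import random

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Emit words by repeatedly sampling a likely successor of the last word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        choices = list(followers)
        weights = [followers[w] for w in choices]
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

# A tiny made-up corpus stands in for "vast amounts of data."
model = train_bigrams("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))
```

The output reads fluently only because every transition was seen in the training text, which is the essence of the parroting critique.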

Passwords, Touch ID, and Face ID could all become things of the past, as Apple is working on a future where unlocking your devices is as easy as holding a future iPhone or letting your Apple Watch sense your unique heart rhythm.

Everyone’s heart has a unique rhythm, which the Apple Watch monitors through the ECG app. In a recently granted patent, Apple describes a technique for identifying users based on their unique cardiovascular measurements.

With this technology, you could unlock all your devices simply by continuing to wear your Apple Watch. Verifying your heart patterns instead of a password or a fingerprint scan both strengthens security and speeds up identification.

A team led by researchers from the California NanoSystems Institute at UCLA has designed a unique material based on a conventional superconductor—that is, a substance that enables electrons to travel through it with zero resistance under certain conditions, such as extremely low temperature. The experimental material showed properties signaling its potential for use in quantum computing, a developing technology with capabilities beyond those of classical digital computers.

Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models

Posted in futurism


Imagine observing someone scratching their arm; to understand why, additional context would be necessary.



Navigating The Looming AI Energy Crunch

Posted in robotics/AI, sustainability

Brandon Wang is vice president of Synopsys.

The rapid development of AI has driven significant growth across the computing industry, but it is also causing a huge increase in energy consumption, pushing us toward an energy crisis. Current AI models, especially large language models (LLMs), require enormous amounts of power to train and run. AI queries demand far more energy than traditional searches; asking ChatGPT a question, for example, consumes up to 25 times as much energy as a Google search. At current growth rates, AI is expected to account for up to 3.5% of global electricity demand by 2030, roughly twice the total electricity consumption of France.
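The 25x per-query figure compounds quickly at scale, which a back-of-envelope calculation makes concrete. The 0.3 Wh per Google search is a commonly cited estimate and the one-billion-queries-per-day volume is a purely hypothetical assumption; only the 25x multiplier comes from the article.

```python
# Back-of-envelope: how a 25x per-query energy gap compounds at scale.
google_wh = 0.3            # assumed energy per traditional search (Wh), a common estimate
ai_wh = google_wh * 25     # the article's claim: up to 25x per AI query
queries_per_day = 1e9      # hypothetical daily query volume

extra_wh_per_day = (ai_wh - google_wh) * queries_per_day
extra_gwh_per_day = extra_wh_per_day / 1e9  # convert Wh to GWh

print(f"AI query: {ai_wh:.1f} Wh vs. search: {google_wh:.1f} Wh")
print(f"Extra demand at 1B queries/day: {extra_gwh_per_day:.1f} GWh/day")
```

Even under these rough assumptions, shifting a billion daily searches to AI queries adds several gigawatt-hours of demand per day, which is why efficiency gains in models and hardware matter so much.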

We need to address this issue urgently before it becomes unsustainable. If we don’t, the impact could threaten sustainable growth and the widespread adoption of AI technologies themselves. Fortunately, there are a number of pathways toward more energy-efficient AI systems and computing architectures.