
Dr. Simon Stringer obtained his Ph.D. in mathematical state space control theory and has been a Senior Research Fellow at Oxford University for over 27 years. He is the director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, based within the Oxford University Department of Experimental Psychology. His department covers vision, spatial processing, motor function, language and consciousness, in particular how the primate visual system learns to make sense of complex natural scenes. Dr. Stringer's laboratory houses a team of theoreticians who are developing computer models of a range of different aspects of brain function, investigating the neural and synaptic dynamics that underpin it. An important question here is the feature-binding problem, which concerns how the visual system represents the hierarchical relationships between features: the visual system must represent these binding relations across the entire visual field, at every spatial scale and level in the hierarchy of visual primitives.

We discuss the self-organised behaviour, complex information processing, invariant sensory representations and hierarchical feature binding that emerge when you build biologically plausible neural networks with temporal spiking dynamics.

00:00:00 Tim Intro.
00:09:31 Show kickoff.
00:14:37 Hierarchical feature binding and timing of action potentials.
00:30:16 Hebb to spike-timing-dependent plasticity (STDP).
00:35:27 Encoding of shape primitives.
00:38:50 Is imagination working in the same place in the brain.
00:41:12 Compare to supervised CNNs.
00:45:59 Speech recognition, motor system, learning mazes.
00:49:28 How practical are these spiking NNs.
00:50:19 Why simulate the human brain.
00:52:46 How much computational power do you gain from differential timings.
00:55:08 Adversarial inputs.
00:59:41 Generative / causal component needed?
01:01:46 Modalities of processing i.e. language.
01:03:42 Understanding.
01:04:37 Human hardware.
01:06:19 Roadmap of NNs?
01:10:36 Interpretability methods for these new models.
01:13:03 Won’t GPT just scale and do this anyway?
01:15:51 What about trace learning and transformation learning.
01:18:50 Categories of invariance.
01:19:47 Biological plausibility.

Pod version: https://anchor.fm/machinelearningstre
“A new approach to solving the feature-binding problem in primate vision” (https://royalsocietypublishing.org/do…) by James B. Isbister, Akihiro Eguchi, Nasir Ahmad, Juan M. Galeazzi, Mark J. Buckley and Simon Stringer.
Simon's department is looking for funding; please do get in touch with him if you can facilitate this. #machinelearning #neuroscience

https://www.neuroscience.ox.ac.uk/res
https://en.wikipedia.org/wiki/Simon_S
/ simon-stringer-a3b239b4.

When astronomers detected the first long-predicted gravitational waves in 2015, it opened a whole new window into the universe. Before that, astronomy depended on observations of light in all its wavelengths.

We also use light to communicate, mostly. Could we use gravitational waves to communicate?

The idea is intriguing, though beyond our capabilities right now. Still, there’s value in exploring the hypothetical, as the future has a way of arriving sooner than we sometimes think.

By crafting an artificial brain-like environment with microscopic nanopillars, researchers have successfully guided neurons to grow in structured networks. This innovation could revolutionize how scientists study neurological conditions by offering a more accurate way to observe brain cell behavior.

As some of the lowest of the vertebrata attained a far greater size than has descended to their more highly organised living representatives, so a diminution in the size of machines has often attended their development and progress. Take the watch for instance. Examine the beautiful structure of the little animal, watch the intelligent play of the minute members which compose it; yet this little creature is but a development of the cumbrous clocks of the thirteenth century — it is no deterioration from them. The day may come when clocks, which certainly at the present day are not diminishing in bulk, may be entirely superseded by the universal use of watches, in which case clocks will become extinct like the earlier saurians, while the watch (whose tendency has for some years been rather to decrease in size than the contrary) will remain the only existing type of an extinct race.

One need only follow this progression to its logical conclusion to face the inevitable question of “what sort of creature man’s next successor in the supremacy of the earth is likely to be”:

We are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying, by all sorts of ingenious contrivances, that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

In today’s AI news, for those who were thinking Meta Platforms Inc. might back down from its heavy-spending ways in the wake of the DeepSeek news, think again. On the earnings call, Zuckerberg spoke of “the hundreds of billions of dollars” Meta will invest in AI infrastructure over the long term.

Alibaba unveiled Qwen 2.5-Max, a model it claims surpasses ChatGPT and the newly ascending DeepSeek: “Qwen 2.5-Max outperforms… almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” Alibaba said. Meanwhile, amid the market turbulence and breathless headlines, Dario Amodei, co-founder of Anthropic and one of the pioneering researchers behind today’s large language models, published a detailed analysis that offers a more nuanced perspective on DeepSeek’s achievements.

And, Microsoft is bringing Chinese AI company DeepSeek’s R1 model to its Azure AI Foundry platform and GitHub. The R1 model is now part of a model catalog on Azure AI Foundry and GitHub — allowing Microsoft’s customers to integrate it into their AI applications.

In videos, Reid Hoffman is the co-founder of LinkedIn, a legendary Silicon Valley investor, and author of the new book Superagency: What Could Possibly Go Right with Our AI Future. Hoffman joins Big Technology Podcast to discuss his optimistic case for AI, the massive investments flooding into the field, and whether they can pay off.

And, in the new episode of Moonshots, Emad Mostaque, Salim Ismail, and Peter Diamandis discuss recent DeepSeek news, the China vs. USA AI race, and more. Emad is the founder of Intelligent Internet and former CEO of Stability AI. Salim is a serial entrepreneur and strategist well known for his expertise in exponential organizations.

Then, reverse the diffusion process, and unlock the secrets of AI-generated images. Isaac Ke explores how to harness the power of diffusion models to create stunning, high-quality images from text prompts. From text-to-image generation to image-to-image editing, learn how diffusion models are being applied in various fields.
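To make the "reverse the diffusion process" idea concrete, here is a minimal NumPy sketch of the core mechanics: a forward process that gradually mixes a clean signal with Gaussian noise according to a noise schedule, and the observation that a perfect noise predictor can invert it exactly. The 1-D signal, schedule values, and step count are illustrative assumptions, not taken from any particular model; real systems replace the "ideal denoiser" line with a trained neural network that estimates the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image" and an assumed linear noise schedule over T steps.
T = 50
betas = np.linspace(1e-4, 0.2, T)   # per-step noise amounts (illustrative)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)      # cumulative signal-retention factor

def forward(x0, t):
    """Sample x_t from q(x_t | x_0): scaled signal plus accumulated noise."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))   # the clean "image"
xt, noise = forward(x0, T - 1)               # heavily noised version

# An ideal denoiser knows the exact noise; real diffusion models train a
# network to predict it. Subtracting the predicted noise recovers x0.
x0_hat = (xt - np.sqrt(1.0 - alpha_bar[T - 1]) * noise) / np.sqrt(alpha_bar[T - 1])
```

With perfect noise prediction, `x0_hat` matches `x0` to floating-point precision; the quality of a real text-to-image model comes entirely from how well its network approximates that noise estimate at every step.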

We close out with Nate B. Jones sharing the strategy behind DeepSeek’s launch of free AI products to disrupt OpenAI’s dominance, including an Operator clone, voice AI, and a likely Sora competitor. This aggressive move threatens OpenAI’s paid model and forces competitors to rethink pricing.

Also covered: Tesla’s future, with predictions of a $10 trillion valuation driven by the launch of the Optimus robot and full self-driving technology, alongside ambitious plans for a robotaxi service and significant production growth.

Key Insights.

Tesla’s vertically integrated supply chains and in-house development of Optimus components make it difficult for competitors to replicate its success in robotics and AI.

Economic Impact.

Tesla is predicted to become the world’s most valuable company, worth more than the next 5 largest companies combined, primarily due to autonomous vehicles and robots. Autonomous Tesla vehicles are expected to increase car utility by 5x, operating 55 hours/week instead of the typical 10 hours, enabling 24/7 ride-hailing and delivery services.

Safety and Technology.

Tesla’s Full Self-Driving (FSD) technology is reported to be 8 times safer per mile than human driving, with continuous improvements and updates. Tesla’s real-world AI for self-driving is reportedly so advanced that Musk jokes competitors would need a telescope to see them.

Energy and Infrastructure.

Tesla’s energy storage solutions are becoming increasingly important, potentially doubling the grid’s capacity.

The world of AI is evolving at a breakneck pace with new models constantly being created. With so much rapid innovation, it is essential to have the flexibility to quickly adapt applications to the latest models. This is where Azure Container Apps serverless GPUs come in.

Azure Container Apps is a managed serverless container platform that enables you to deploy and run containerized applications while reducing infrastructure management and saving costs.

With serverless GPU support, you get the flexibility to bring any containerized workload, including new language models, and deploy them to a platform that automatically scales with your customer demand. In addition, you get optimized cold start, per-second billing and reduced operational overhead to allow you to focus on the core components of your applications when using GPUs. All the while, you can run your AI applications alongside your non-AI apps on the same platform, within the same environment, which shares networking, observability, and security capabilities.
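As a rough sketch of what this looks like in practice, the Azure CLI steps below create a Container Apps environment with a workload profile and deploy a containerized model that scales to zero when idle. All resource names, the region, and the GPU profile type shown are illustrative assumptions; check `az containerapp env workload-profile list-supported` and the current Azure documentation for the profile names actually available in your region. These commands require an Azure subscription, so treat them as a deployment outline rather than a runnable script.

```shell
# Hypothetical names throughout; the GPU profile type is an assumption —
# verify supported profiles for your region before running.
az containerapp env create \
  --name my-gpu-env \
  --resource-group my-rg \
  --location westus3 \
  --enable-workload-profiles

az containerapp env workload-profile add \
  --name my-gpu-env \
  --resource-group my-rg \
  --workload-profile-name gpu-profile \
  --workload-profile-type Consumption-GPU-NC24-A100

az containerapp create \
  --name my-llm-app \
  --resource-group my-rg \
  --environment my-gpu-env \
  --image myregistry.azurecr.io/my-llm:latest \
  --workload-profile-name gpu-profile \
  --min-replicas 0 \
  --max-replicas 3
```

Setting `--min-replicas 0` is what makes the deployment serverless in the billing sense: replicas (and their GPUs) are released when there is no traffic, and the platform cold-starts them on demand.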