


SiMBA

Simplified Mamba-Based Architecture for Vision and Multivariate Time series.

Transformers have widely adopted attention networks for sequence mixing and MLPs for channel mixing, a combination that has played a pivotal role in breakthroughs across domains.
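The division of labor between the two mixers can be sketched with a toy NumPy block: sequence mixing (here, single-head self-attention) moves information across tokens, while channel mixing (a per-token MLP) moves it across feature dimensions. All names and shapes below are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sequence_mix(x, wq, wk, wv):
    # Self-attention: every token attends to every other token,
    # so information flows along the sequence axis.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def channel_mix(x, w1, w2):
    # Two-layer MLP applied independently to each token,
    # so information flows only along the channel axis.
    return np.maximum(x @ w1, 0.0) @ w2

rng = np.random.default_rng(0)
seq_len, dim, hidden = 4, 8, 16
x = rng.standard_normal((seq_len, dim))
wq, wk, wv = (rng.standard_normal((dim, dim)) for _ in range(3))
w1 = rng.standard_normal((dim, hidden))
w2 = rng.standard_normal((hidden, dim))

# One hypothetical mixer block: sequence mixing followed by channel mixing.
y = channel_mix(sequence_mix(x, wq, wk, wv), w1, w2)
print(y.shape)  # (4, 8): the token/channel shape is preserved
```

SiMBA's premise, per its title, is to replace the attention-based sequence mixer with a Mamba-style state-space mixer while keeping this overall block structure.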

