Organizations

@grays

Pinned

  1. For future reference but maybe not.
    # 2025
    ## January
    * ## [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446)
      > Self-attention has become a de facto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2. [Code](https://github.com/Amshaker/SwiftFormer)
    * ## [Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture | RFC 9330](https://datatracker.ietf.org/doc/rfc9330)
  2. algebra-to-co-monads.md
    # [fit] Algebra to
    # [fit] **(Co)monads**
    ---
    # **$$C^{BA} = (C^B)^A$$**
    ---
  3. resume Public


  4. dotfiles Public

    public dot files

    Shell 11
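
The SwiftFormer abstract quoted in the first pinned note describes swapping the quadratic query-key matrix product for linear, element-wise interactions built around a pooled global query. The NumPy sketch below is only an illustration of that general shape; the weight names, pooling choice, and residual term are assumptions made for this example, not the paper's reference code (see the linked repository for that).

```python
# Rough sketch of linear "additive attention": score each query token with a
# learned vector, pool the queries into one global query, and interact it
# element-wise with the keys. Illustrative only; not SwiftFormer's exact layer.
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def additive_attention(x, W_q, W_k, w_a, W_out):
    """x: (n, d) token features; returns (n, d)."""
    q = x @ W_q                                               # (n, d) queries
    k = x @ W_k                                               # (n, d) keys
    # Per-token scalar score from a learned vector: O(n·d), no n×n matrix.
    alpha = softmax(q @ w_a / np.sqrt(q.shape[-1]), axis=0)   # (n,)
    g = (alpha[:, None] * q).sum(axis=0)                      # (d,) global query
    # Element-wise global-query/key interaction, a linear layer, and the
    # queries added back as a residual term (an assumption for this sketch).
    return (g[None, :] * k) @ W_out + q                       # (n, d)


# Tiny usage example with random weights.
n, d = 8, 16
rng = np.random.default_rng(0)
x = rng.standard_normal((n, d))
out = additive_attention(
    x,
    rng.standard_normal((d, d)),
    rng.standard_normal((d, d)),
    rng.standard_normal(d),
    rng.standard_normal((d, d)),
)
print(out.shape)  # (8, 16)
```

Every step is linear in the number of tokens n (the heaviest operation is the (n, d) by (d, d) projection), so no n-by-n attention matrix is ever formed, which is the speed argument the abstract makes.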
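
The algebra-to-(co)monads deck in the second pin opens with the exponential law C^{BA} = (C^B)^A. Spelled out as the hom-set isomorphism it abbreviates (standard cartesian-closed-category notation, assumed here rather than taken from the rest of the deck), it is the adjunction between products and exponentials, i.e. currying:

```latex
% Product–exponential adjunction (currying): a map out of a product
% corresponds to a map into an exponential, which gives the law on the slide.
\[
  \mathrm{Hom}(A \times B,\; C) \;\cong\; \mathrm{Hom}\!\bigl(A,\; C^{B}\bigr)
  \qquad\text{equivalently}\qquad
  C^{B \times A} \;\cong\; \bigl(C^{B}\bigr)^{A}.
\]
```

In a typed language this is just curry/uncurry: functions B × A → C are in bijection with functions A → (B → C).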