Neuronpedia · Google DeepMind: Exploring Gemma 2 with Gemma Scope
Gemma-2-2B · Residual Stream - 16k · 22-GEMMASCOPE-RES-16K · Feature 14418
    Explanations

"common and circular phrases, as well as connections in complex ideas or discussions"
oai_token-act-pair · gpt-4o-mini · triggered by @bot

"Question words (how, what, why)"
np_acts-logits-general · gemini-2.0-flash

"what how why where"
np_acts-logits-general · gemini-2.5-flash-lite
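Explanations like these, along with the logit and activation data below, are served by Neuronpedia's public API. A minimal sketch of fetching this feature's record follows; the GET /api/feature/{model}/{source}/{index} path and the field names are assumptions based on this page's sections, not something the page itself documents.

import requests

MODEL = "gemma-2-2b"
SOURCE = "22-gemmascope-res-16k"
INDEX = 14418

# Assumed endpoint shape; check neuronpedia.org's API docs for the
# authoritative path and response schema.
url = f"https://www.neuronpedia.org/api/feature/{MODEL}/{SOURCE}/{INDEX}"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
feature = resp.json()

# "explanations" is an assumed field name mirroring the section above.
print(feature.get("explanations"))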
Configuration

SAE: google/gemma-scope-2b-pt-res/layer_22/width_16k/average_l0_72
Prompts (Dashboard): 36,864 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.22.hook_resid_post
Hook Layer: 22
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
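This configuration corresponds to a published Gemma Scope SAE. Below is a minimal sketch of loading it and reading feature 14418's activations, assuming the SAE Lens and TransformerLens libraries and that the release/sae_id strings mirror the path shown above.

import torch
from sae_lens import SAE
from transformer_lens import HookedTransformer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the Gemma Scope JumpReLU SAE trained on the layer-22 residual stream.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-2b-pt-res",
    sae_id="layer_22/width_16k/average_l0_72",
    device=device,
)

# Load the base model and cache the hook this SAE reads from.
model = HookedTransformer.from_pretrained("gemma-2-2b", device=device)
prompt = "How does this work, and why?"
_, cache = model.run_with_cache(prompt)
resid = cache["blocks.22.hook_resid_post"]  # shape: [batch, seq, d_model]

# Encode into the 16,384 SAE features and inspect feature 14418.
feature_acts = sae.encode(resid)            # shape: [batch, seq, 16384]
print(feature_acts[0, :, 14418])            # activation per token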

Negative Logits
" تعدى"       -0.50
" they"       -0.49
"tocin"       -0.48
"这样"         -0.45
" Vikipedi"   -0.45
" они"        -0.43
"EnableWeb"   -0.42
" вони"       -0.41
"ándolo"      -0.40
"Gdy"         -0.40

Positive Logits
" how"         2.00
" what"        1.53
"how"          1.51
" cómo"        1.38
" why"         1.36
"what"         1.26
" bagaimana"   1.20
" their"       1.16
"why"          1.13
" How"         1.10
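Logit tables like these are typically derived by projecting the feature's decoder direction through the model's unembedding matrix. The sketch below shows that computation under the same SAE Lens/TransformerLens assumptions as above; Neuronpedia's exact recipe (e.g. any LayerNorm folding) may differ.

import torch
from sae_lens import SAE
from transformer_lens import HookedTransformer

sae, _, _ = SAE.from_pretrained(
    release="gemma-scope-2b-pt-res",
    sae_id="layer_22/width_16k/average_l0_72",
)
model = HookedTransformer.from_pretrained("gemma-2-2b")

# Dot the feature's decoder vector with every unembedding column to get
# the direct effect of this feature on each vocabulary token's logit.
direction = sae.W_dec[14418]            # [d_model]
logit_effects = direction @ model.W_U   # [d_vocab]

top = torch.topk(logit_effects, k=10)
bottom = torch.topk(-logit_effects, k=10)
print("Positive:", [model.to_single_str_token(int(i)) for i in top.indices])
print("Negative:", [model.to_single_str_token(int(i)) for i in bottom.indices])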
Activation Density: 0.444%

    No Known Activations