Gemma-2-9B · 12-GEMMASCOPE-RES-16K · Feature 5680
Google DeepMind · Exploring Gemma 2 with Gemma Scope · Residual Stream - 16k

    Explanations

    mentions of attention and its varying applications or contexts

oai_token-act-pair · gpt-4o-mini · Triggered by @bot
Configuration
Source: google/gemma-scope-9b-pt-res/layer_12/width_16k/average_l0_130
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.12.hook_resid_post
Hook Layer: 12
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
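
The configuration above maps onto the SAE Lens loading convention: the Source path gives the release and SAE id, and the Hook Name is the TransformerLens activation point. Below is a minimal sketch (untested) of loading this SAE and reading feature 5680's activations on an arbitrary prompt; the release/sae_id strings are inferred from the Source path and may differ from the exact registry names.

```python
import torch
from sae_lens import SAE
from transformer_lens import HookedTransformer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load Gemma-2-9B and the layer-12 residual-stream SAE.
model = HookedTransformer.from_pretrained("gemma-2-9b", device=device)
sae, cfg_dict, _ = SAE.from_pretrained(
    release="gemma-scope-9b-pt-res",             # assumed release name, from the Source path
    sae_id="layer_12/width_16k/average_l0_130",  # from the Source path
    device=device,
)

prompt = "Pay close attention to the details."   # arbitrary example prompt
_, cache = model.run_with_cache(prompt)
resid = cache["blocks.12.hook_resid_post"]       # the Hook Name listed above
feature_acts = sae.encode(resid)                 # (batch, seq, 16384)
print(feature_acts[0, :, 5680])                  # this feature's activation per token
```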

Negative Logits
" onPostExecute"   -0.44
" plegable"        -0.44
"stateProvider"    -0.44
" isolato"         -0.43
"ReusableCell"     -0.43
"Прода"            -0.43
" Codable"         -0.42
" Económica"       -0.42
"lillah"           -0.42
" sèche"           -0.40

Positive Logits
" attention"       1.70
" Attention"       1.61
"attention"        1.48
"Attention"        1.48
" ATTENTION"       1.30
" attentions"      1.22
"ATTENTION"        1.10
" aandacht"        1.06
" внимание"        1.05
" atenção"         1.05
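
The logit values above are this feature's direct effect on the output vocabulary: the feature's decoder direction projected through the model's unembedding matrix (a "logit lens" on the feature). A hedged sketch of reproducing the two lists, reusing the model and sae objects from the snippet above:

```python
import torch

feature_dir = sae.W_dec[5680]            # (d_model,) decoder direction for this feature
logit_effects = feature_dir @ model.W_U  # (d_vocab,) per-token logit contribution

top_vals, top_idx = torch.topk(logit_effects, k=10)
bot_vals, bot_idx = torch.topk(-logit_effects, k=10)

print("Positive logits")
for v, i in zip(top_vals, top_idx):
    print(f"{model.to_single_str_token(i.item())!r}  {v.item():+.2f}")

print("Negative logits")
for v, i in zip(bot_vals, bot_idx):
    print(f"{model.to_single_str_token(i.item())!r}  {-v.item():+.2f}")
```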
Activation Density: 0.145%

    No Known Activations
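
Activation density is the fraction of sampled dashboard tokens on which the feature fires (activation > 0), here 0.145%. A one-line check, assuming feature_acts from the first snippet:

```python
# Fraction of tokens where feature 5680 is active (> 0).
density = (feature_acts[..., 5680] > 0).float().mean().item()
print(f"{density:.3%}")
```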