Gemma-2-2B · Attention Out - 16k · 10-GEMMASCOPE-ATT-16K · Feature 5614
Google DeepMind · Exploring Gemma 2 with Gemma Scope

Explanations

attends to phrases containing the first token marked, which can occur before or after the second token, regardless of the content type surrounding them

oai_attention-head · gpt-4o-mini · triggered by @bot
Configuration
google/gemma-scope-2b-pt-att/layer_10/width_16k/average_l0_70

Prompts (Dashboard):   36,864 prompts, 128 tokens each
Dataset (Dashboard):   monology/pile-uncopyrighted
Features:              16,384
Data Type:             float32
Hook Name:             blocks.10.attn.hook_z
Hook Layer:            10
Architecture:          jumprelu
Context Size:          1,024
Dataset:               monology/pile-uncopyrighted
Activation Function:   relu
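
The Hook Name and Architecture fields pin down how this SAE plugs into the model: it reads the layer-10 attention output (hook_z, 8 heads × 256 dims = 2,048 inputs) and encodes it into 16,384 JumpReLU features. Below is a minimal sketch of loading and applying it, assuming the Gemma Scope repo layout of one params.npz per SAE (the file path mirrors the Source line above; the shapes are inferred from this table, not confirmed against the repo):

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Download this SAE's weights; the path mirrors the Source line above.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-att",
    filename="layer_10/width_16k/average_l0_70/params.npz",
)
params = np.load(path)
W_enc = params["W_enc"]          # (2048, 16384): flattened hook_z -> features
b_enc = params["b_enc"]          # (16384,)
threshold = params["threshold"]  # (16384,): learned JumpReLU thresholds

def encode(z_flat: np.ndarray) -> np.ndarray:
    """JumpReLU encoder: keep a feature's pre-activation only where it
    exceeds that feature's learned threshold; zero it elsewhere."""
    pre = z_flat @ W_enc + b_enc
    return pre * (pre > threshold)
```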

Head Attr Weights
Head:    0     1     2     3     4     5     6     7
Weight:  0.10  0.10  0.44  0.08  0.08  0.02  0.05  0.10
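
Head 2 carries the bulk of the attribution (0.44). One way numbers like these could be reproduced, assuming (this is a guess at the method, not a confirmed reproduction of Neuronpedia's computation) that the attribution is each head's share of the decoder vector's L2 norm across the 8 head slices of hook_z:

```python
import numpy as np
from huggingface_hub import hf_hub_download

params = np.load(hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-att",
    filename="layer_10/width_16k/average_l0_70/params.npz",
))
FEATURE, N_HEADS, D_HEAD = 5614, 8, 256

# Split the feature's decoder vector back into per-head slices of hook_z,
# then take the share of total L2 norm falling in each head's slice.
dec = params["W_dec"][FEATURE].reshape(N_HEADS, D_HEAD)
norms = np.linalg.norm(dec, axis=-1)
print({h: round(float(w), 2) for h, w in enumerate(norms / norms.sum())})
```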
Negative Logits
'parsedMessage'   -0.41
' eventdata'      -0.40
' jajaja'         -0.36
' Reſ'            -0.35
' fallu'          -0.35
'JspWriter'       -0.34
'sizeCache'       -0.34
' myſelf'         -0.34
' Efq'            -0.33
' acá'            -0.32
Positive Logits
'AndEndTag'       0.45
'abase'           0.33
' Akku'           0.31
' surla'          0.29
'keyColumn'       0.29
'ACTO'            0.29
' can'            0.28
'<eos>'           0.28
'Lorenzo'         0.28
'xter'            0.28
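
These tables list the vocabulary tokens the feature's decoder direction pushes up or down most strongly. For an attention-output SAE the decoder vector lives in hook_z space, so it has to pass through the layer's output projection W_O before projecting onto the unembedding. A sketch of that computation using TransformerLens weight conventions (an assumption about the exact pipeline; it ignores the final LayerNorm, so values will differ somewhat from the table):

```python
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformer_lens import HookedTransformer

params = np.load(hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-att",
    filename="layer_10/width_16k/average_l0_70/params.npz",
))
model = HookedTransformer.from_pretrained("gemma-2-2b")

# Feature 5614's decoder vector, reshaped into per-head hook_z slices.
dec = torch.from_numpy(params["W_dec"][5614].copy()).reshape(8, 256)
resid_dir = torch.einsum("hd,hdm->m", dec, model.W_O[10])  # hook_z -> residual stream
logit_effects = resid_dir @ model.W_U                      # residual -> vocab logits

for label, sign in [("Positive", 1), ("Negative", -1)]:
    top = torch.topk(sign * logit_effects, 10)
    print(label, [(model.tokenizer.decode([i.item()]), round(sign * v.item(), 2))
                  for v, i in zip(top.values, top.indices)])
```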
Activations Density: 1.374% (fraction of sampled tokens on which the feature fires)
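
A density of 1.374% means the feature's JumpReLU activation is nonzero on roughly 1 in 73 sampled tokens. A sketch of that computation, reusing the weights loaded in the configuration sketch above (the batch of activations here is hypothetical):

```python
import numpy as np

def activation_density(z_flat: np.ndarray, params, feature: int) -> float:
    """Fraction of tokens on which one feature's JumpReLU activation is nonzero.
    z_flat: (n_tokens, 2048) flattened hook_z activations (hypothetical batch)."""
    pre = z_flat @ params["W_enc"][:, feature] + params["b_enc"][feature]
    acts = pre * (pre > params["threshold"][feature])
    return float((acts > 0).mean())
```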

No Known Activations