Neuronpedia
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Gemma-2-9B · Attention Out - 16k · 0-GEMMASCOPE-ATT-16K · Feature 1228
    Explanations

    attends to the token "additional" from tokens that include "additional" later in the sequence, as well as certain tokens related to "rotate" from related tokens

oai_attention-head · gpt-4o-mini · triggered by @bot
    Configuration
SAE: google/gemma-scope-9b-pt-att/layer_0/width_16k/average_l0_61
Prompts (Dashboard): 16,384 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.0.attn.hook_z
Hook Layer: 0
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
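The configuration above corresponds to a published Gemma Scope SAE, so it can be loaded outside the dashboard. Below is a minimal sketch using the SAELens library; the release and sae_id strings are assumptions read off the HuggingFace-style path above, and the return signature of from_pretrained varies across SAELens versions. The "jumprelu" architecture is a ReLU that is additionally zeroed below a learned per-feature threshold, which is why the activation function is still listed as relu.

    # Hedged sketch: load the SAE named in the configuration via SAELens.
    # The release/sae_id strings are inferred from the path
    # google/gemma-scope-9b-pt-att/layer_0/width_16k/average_l0_61 and are
    # assumptions, not values taken from this page.
    import torch
    from sae_lens import SAE

    sae, cfg_dict, sparsity = SAE.from_pretrained(  # some versions return only the SAE
        release="gemma-scope-9b-pt-att",
        sae_id="layer_0/width_16k/average_l0_61",
    )
    print(sae.cfg.hook_name)  # should match the hook above: blocks.0.attn.hook_z

    # JumpReLU in one line: pass pre-activations through ReLU, then zero any
    # feature whose pre-activation does not clear its learned threshold theta.
    def jumprelu(pre: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        return torch.relu(pre) * (pre > theta)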
Head Attr Weights
Head 0: 0.02    Head 1: 0.01    Head 2: 0.01    Head 3: 0.06
Head 4: 0.58    Head 5: 0.03    Head 6: 0.02    Head 7: 0.02
Head 8: 0.01    Head 9: 0.01    Head 10: 0.01   Head 11: 0.01
Head 12: 0.05   Head 13: 0.04   Head 14: 0.01   Head 15: 0.01
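A hedged note on where such weights can come from: for an SAE trained on blocks.0.attn.hook_z, the input is the concatenation of all 16 heads' outputs, so a feature's decoder vector splits into 16 per-head segments, and each segment's share of the total norm gives a natural per-head attribution (here head 4 dominates at 0.58). The sketch below illustrates that decomposition; it is an assumption about the method, not Neuronpedia's documented pipeline.

    import torch

    def head_attribution(w_dec: torch.Tensor, n_heads: int = 16) -> torch.Tensor:
        # hook_z concatenates per-head outputs, so the decoder vector
        # reshapes into (n_heads, d_head) segments.
        segments = w_dec.reshape(n_heads, -1)
        norms = segments.norm(dim=-1)
        return norms / norms.sum()  # fractions that sum to 1, like the table above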
Negative Logits (␣ marks a leading space in a token)
.       -0.73
*       -0.56
␣       -0.55
-       -0.55
\       -0.54
␣the    -0.52
␣T      -0.52
/       -0.51
␣L      -0.50
␣A      -0.49
Positive Logits
"){     1.01
")){    1.00
'){     0.98
".      0.98
'],     0.96
"],     0.96
"])     0.96
")));   0.95
"]);    0.95
'))     0.94
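A hedged sketch of how logit tables like these are typically produced: project the feature's decoder direction out of z-space through the attention output matrix into the residual stream, then through the unembedding (the "logit lens"), and read off the most-promoted and most-suppressed tokens. The matrix names below follow TransformerLens conventions, and the overall recipe is an assumption, not Neuronpedia's documented pipeline.

    import torch

    def feature_logit_weights(w_dec_z: torch.Tensor, W_O: torch.Tensor,
                              W_U: torch.Tensor, k: int = 10):
        # w_dec_z: (n_heads * d_head,) decoder vector in hook_z space
        # W_O:     (n_heads * d_head, d_model) attention output projection
        # W_U:     (d_model, d_vocab) unembedding matrix
        resid_dir = w_dec_z @ W_O    # map the feature direction into the residual stream
        logits = resid_dir @ W_U     # logit-lens projection onto the vocabulary
        pos = torch.topk(logits, k)  # top positive logits (promoted tokens)
        neg = torch.topk(-logits, k) # top negative logits (suppressed tokens)
        return pos, neg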
Activation Density: 0.729%
