Neuronpedia
© Neuronpedia 2025
Gemma-2-9B · Attention Out - 16k (0-GEMMASCOPE-ATT-16K) · Feature 1224
Google DeepMind · Exploring Gemma 2 with Gemma Scope
    Explanations

Attends to the various contexts surrounding the token "fee" from other tokens indicating conditions or related information, especially how "fee" interacts with phrases that denote payment structures or additional costs.

    oai_attention-head · gpt-4o-mini · triggered by @bot
    Configuration
        SAE: google/gemma-scope-9b-pt-att/layer_0/width_16k/average_l0_61
        Prompts (Dashboard): 16,384 prompts, 128 tokens each
        Dataset (Dashboard): monology/pile-uncopyrighted
        Features: 16,384
        Data Type: float32
        Hook Name: blocks.0.attn.hook_z
        Hook Layer: 0
        Architecture: jumprelu
        Context Size: 1,024
        Dataset: monology/pile-uncopyrighted
        Activation Function: relu
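The configuration above names a JumpReLU architecture reading from blocks.0.attn.hook_z. As a minimal sketch of how such an encoder works (all shapes, weight scales, and thresholds below are illustrative toy values, not the real Gemma Scope parameters):

```python
import numpy as np

def jumprelu(x, theta):
    # JumpReLU: keep a pre-activation only where it exceeds its learned threshold
    return x * (x > theta)

def sae_encode(act, W_enc, b_enc, theta):
    # Project an attention-output activation into the 16,384-dim feature space,
    # then gate with JumpReLU (one learned threshold per feature)
    return jumprelu(act @ W_enc + b_enc, theta)

# Illustrative shapes only; d_model here is not Gemma-2-9B's real width
rng = np.random.default_rng(0)
d_model, n_features = 64, 16_384
acts = sae_encode(rng.normal(size=d_model),
                  rng.normal(size=(d_model, n_features)) * 0.02,
                  np.zeros(n_features),
                  np.full(n_features, 0.05))
print(f"{(acts != 0).mean():.3%} of features active")
```

The threshold gate is what makes the feature activations sparse; most features stay exactly at zero for any given token.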

    Head Attr Weights
        Head:    0     1     2     3     4     5     6     7     8     9     10    11    12    13    14    15
        Weight:  0.01  0.02  0.56  0.05  0.01  0.01  0.03  0.03  0.01  0.01  0.01  0.00  0.01  0.10  0.07  0.00
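Because this SAE reads from the concatenated output of all attention heads (blocks.0.attn.hook_z), each feature's weight can be attributed across the 16 heads; these values are presumably the per-head share of the feature's weight. A tiny sketch of reading the table (weights copied from above):

```python
# Head attribution weights for heads 0-15, as listed in the table above
head_attr = [0.01, 0.02, 0.56, 0.05, 0.01, 0.01, 0.03, 0.03,
             0.01, 0.01, 0.01, 0.00, 0.01, 0.10, 0.07, 0.00]

# Find the head that dominates this feature
dominant = max(range(len(head_attr)), key=head_attr.__getitem__)
print(f"head {dominant} carries {head_attr[dominant]:.0%} of the attribution")
```

Head 2 accounts for over half of the attribution, so this feature is mostly a head-2 feature, with smaller contributions from heads 13 and 14.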
    Negative Logits
        InstanceState            -0.21
        nodeValue                -0.21
        " {"                     -0.21
        jsonwebtoken             -0.20
        HL                       -0.20
        gg                       -0.19
        (run of tab characters)  -0.19
        " Sl"                    -0.19
        eder                     -0.19
        ht                       -0.18
    Positive Logits
        " Infór"                  0.42
        uxxxx                     0.38
        Personensuche             0.36
        " Italij"                 0.35
        rungsseite                0.35
        " fédé"                   0.33
        " Cæsar"                  0.32
        " sereia"                 0.32
        " Efq"                    0.32
        " itſelf"                 0.32
    Activation Density: 0.037%

    No Known Activations