    Neuronpedia

    Google DeepMind · Exploring Gemma 2 with Gemma Scope
    Gemma-2-9B · Residual Stream - 16k · 22-GEMMASCOPE-RES-16K · Feature 6522
    Explanations

    instances of distraction and focus in various contexts

    oai_token-act-pair · gpt-4o-mini
    Configuration
        SAE: google/gemma-scope-9b-pt-res/layer_22/width_16k/average_l0_123
        Prompts (Dashboard): 24,576 prompts, 128 tokens each
        Dataset (Dashboard): monology/pile-uncopyrighted
        Features: 16,384
        Data Type: float32
        Hook Name: blocks.22.hook_resid_post
        Hook Layer: 22
        Architecture: jumprelu
        Context Size: 1,024
        Dataset: monology/pile-uncopyrighted
        Activation Function: relu
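
    The source path above corresponds to a published GemmaScope SAE, so it should be loadable directly. A minimal sketch, assuming the sae_lens package and that its release/ID naming mirrors the configuration path (the exact return signature varies across SAELens versions):

```python
from sae_lens import SAE

# Sketch only: the release/ID names are assumed to mirror the
# configuration path above; older SAELens versions return a
# (sae, cfg, sparsity) tuple rather than a bare SAE object.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-9b-pt-res",
    sae_id="layer_22/width_16k/average_l0_123",
    device="cpu",
)

print(cfg_dict["hook_name"])  # expected: blocks.22.hook_resid_post
print(sae.cfg.d_sae)          # expected: 16384 features
```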
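
    The "jumprelu" architecture entry refers to the JumpReLU activation used by the GemmaScope SAEs: a pre-activation passes through unchanged only if it exceeds a learned per-feature threshold, and is zeroed otherwise. A minimal NumPy sketch of the gate itself:

```python
import numpy as np

def jumprelu(z, theta):
    # Keep pre-activations that clear the learned threshold theta;
    # zero everything else (z times a Heaviside step at theta).
    return z * (z > theta)

z = np.array([-0.5, 0.1, 0.8])
print(jumprelu(z, theta=0.3))  # [0.  0.  0.8]
```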
    Negative Logits
        Token              Logit
        " ब्रेकडाउन"           -0.39
        " anonymously"     -0.38
        " jspb"            -0.35
        "mbggenerated"     -0.35
        "xmlhttp"          -0.35
        " anonymous"       -0.33
        "躇"               -0.33
        " transfieras"     -0.33
        "anonymous"        -0.31
        " asymptomatic"    -0.31
    Positive Logits
        Token              Logit
        " attention"       0.73
        " distraction"     0.72
        "attention"        0.71
        " distracted"      0.70
        " distra"          0.67
        " Distra"          0.65
        " focus"           0.65
        "Attention"        0.63
        " fokus"           0.62
        "focused"          0.61
    Activation Density: 0.248%

    No Known Activations
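
    Everything shown on this page can also be fetched programmatically. A minimal sketch, assuming Neuronpedia's public feature endpoint follows the /api/feature/{model}/{source}/{index} pattern built from the identifiers in this page's header (the response schema is not spelled out here, so no field names are assumed):

```python
import requests

# Assumed endpoint pattern: model ID, source/SAE ID, and feature
# index, taken from this page's header.
url = "https://www.neuronpedia.org/api/feature/gemma-2-9b/22-gemmascope-res-16k/6522"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
feature = resp.json()

# Inspect what the API actually returns rather than guessing
# at the schema.
print(sorted(feature.keys()))
```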