© Neuronpedia 2026

Neuronpedia
Gemma-2-2B · 22-gemmascope-transcoder-16k · Feature 10257
Explanations

text across multiple human languages and code. (oai_token-act-pair · gemini-2.0-flash)
why or which (np_max-act-logits · gemini-2.0-flash)
Configuration

google/gemma-scope-2b-pt-transcoders/layer_22/width_16k/average_l0_15
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.22.ln2.hook_normalized
Architecture: jumprelu_transcoder
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
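The architecture field marks this source as a JumpReLU transcoder: it reads the normalized pre-MLP residual stream at blocks.22.ln2.hook_normalized and maps it through 16,384 sparse features, of which only a handful fire per token (the average_l0_15 in the source id). A minimal sketch of the encode step, using toy dimensions and made-up weight names (not the actual GemmaScope weights or any library's API):

```python
import numpy as np

def jumprelu(pre_acts: np.ndarray, threshold: np.ndarray) -> np.ndarray:
    """JumpReLU: pass a value through only where it exceeds its per-feature threshold."""
    return pre_acts * (pre_acts > threshold)

rng = np.random.default_rng(0)
d_model, n_features = 8, 32  # toy sizes; the real transcoder is 2304 -> 16,384
W_enc = rng.normal(size=(d_model, n_features)).astype(np.float32)  # hypothetical encoder
b_enc = np.zeros(n_features, dtype=np.float32)
theta = np.full(n_features, 0.5, dtype=np.float32)  # thresholds are learned in the real model

x = rng.normal(size=(d_model,)).astype(np.float32)  # stand-in for one hook_normalized vector
acts = jumprelu(x @ W_enc + b_enc, theta)

# Sparsity: in the real transcoder roughly 15 of 16,384 features fire per token.
print(int((acts > 0).sum()), "active features out of", n_features)
```

The thresholding is what distinguishes JumpReLU from plain ReLU: small sub-threshold pre-activations are zeroed entirely rather than passed through, which is how the low L0 is achieved.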

Negative Logits
SharedCtor        -0.89
 itſelf           -0.86
 Efq              -0.82
 myſelf           -0.79
 themſelves       -0.79
 uſed             -0.79
 leſs             -0.79
 oock             -0.78
LookAnd           -0.77
 Personensuche    -0.76

Positive Logits
 why      0.84
Which     0.80
 Which    0.74
 which    0.71
why       0.70
which     0.70
 quelles  0.68
 WHY      0.66
WHY       0.65
 WHICH    0.65
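Logit tables like the ones above are typically produced by projecting the feature's output (decoder) direction through the model's unembedding matrix: tokens whose unembedding vectors align most with the direction get the top positive logits, and those that anti-align get the top negative logits. A toy sketch of that computation, with random stand-in weights (not the real Gemma-2-2B matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, vocab = 8, 10  # toy sizes; Gemma-2-2B has d_model=2304 and a ~256k vocabulary
W_U = rng.normal(size=(d_model, vocab)).astype(np.float32)  # stand-in unembedding matrix
w_dec = rng.normal(size=(d_model,)).astype(np.float32)      # the feature's decoder direction

logit_effect = w_dec @ W_U        # per-token logit change from activating this feature
order = np.argsort(logit_effect)
top_negative = order[:3]          # analogous to the "Negative Logits" list
top_positive = order[::-1][:3]    # analogous to the "Positive Logits" list
print(top_positive, top_negative)
```

This is consistent with the tables: the feature's direction boosts question words ("why", "which" in several casings, plus French "quelles") and suppresses an unrelated set of tokens.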
Activation Density: 0.402%

    No Known Activations