© Neuronpedia 2026

    Neuronpedia

    Gemma-2-2B · 24-GEMMASCOPE-TRANSCODER-16K · Feature 15719
    Explanations

    - "instances of the phrase 'thanks in advance' as well as other tokens associated with programming questions" (oai_token-act-pair · gemini-2.0-flash)
    - "Code and programming" (np_max-act-logits · gemini-2.0-flash)
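Feature records like this one can also be fetched programmatically. The sketch below only builds the request URL; the endpoint shape `/api/feature/{modelId}/{source}/{index}` is an assumption about Neuronpedia's API and may differ from the actual route.

```python
# Sketch: construct the (assumed) Neuronpedia API URL for this feature.
# The /api/feature/{modelId}/{source}/{index} path is an assumption, not
# confirmed from this page.
BASE = "https://www.neuronpedia.org/api/feature"

def feature_url(model_id: str, source: str, index: int) -> str:
    """Return the URL for a single feature's JSON record."""
    return f"{BASE}/{model_id}/{source}/{index}"

url = feature_url("gemma-2-2b", "24-gemmascope-transcoder-16k", 15719)
print(url)
```

The returned JSON (if the endpoint matches) would carry the same explanations, logits, and activation data shown on this page.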
    Configuration

    Source:               google/gemma-scope-2b-pt-transcoders/layer_24/width_16k/average_l0_37
    Prompts (Dashboard):  24,576 prompts, 128 tokens each
    Dataset (Dashboard):  monology/pile-uncopyrighted
    Features:             16,384
    Data Type:            float32
    Hook Name:            blocks.24.ln2.hook_normalized
    Architecture:         jumprelu_transcoder
    Context Size:         1,024
    Dataset:              monology/pile-uncopyrighted
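The `jumprelu_transcoder` architecture uses a JumpReLU nonlinearity: a pre-activation passes through unchanged when it exceeds a learned per-feature threshold, and is zeroed otherwise. A minimal numpy sketch, with illustrative (not trained) thresholds:

```python
import numpy as np

def jumprelu(pre_acts: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """JumpReLU: keep a pre-activation only where it exceeds its
    feature's threshold theta; output exactly zero elsewhere."""
    return np.where(pre_acts > theta, pre_acts, 0.0)

# Toy example: 3 features sharing a threshold of 0.5.
pre = np.array([0.2, 1.5, -0.4])
theta = np.array([0.5, 0.5, 0.5])
print(jumprelu(pre, theta))  # only the middle feature fires
```

Unlike a plain ReLU, values between 0 and the threshold are suppressed, which is what keeps the decomposition sparse (compare the low average L0 of 37 in the source name above).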
    Negative Logits

    Token             Logit
    ----------------  -----
     beginnetje       -0.71
    Pautan            -0.70
    CPtr              -0.68
    complexContent    -0.68
     Dunlap           -0.68
    IndentedString    -0.66
     незавершена      -0.65
    HideFlags         -0.65
    Vidite            -0.63
     Hooper           -0.62

    Positive Logits

    Token             Logit
    ----------------  -----
    nav                0.71
    tov                0.71
    ROV                0.68
    sev                0.67
    lov                0.66
    rov                0.66
    Pav                0.66
    lav                0.65
     Hov               0.64
    gow                0.64
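Logit lists like these are typically computed by projecting the feature's decoder direction through the model's unembedding: each token's score is the dot product of the decoder vector with that token's unembedding column. A self-contained sketch with random stand-in weights (real values would come from Gemma-2-2B and the transcoder, not this toy data):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 64, 1000                # toy sizes, not Gemma's real dimensions
W_U = rng.normal(size=(d_model, vocab))  # stand-in unembedding matrix
w_dec = rng.normal(size=d_model)         # stand-in decoder vector for one feature

logit_effect = w_dec @ W_U               # (vocab,): this feature's effect on each token's logit
top_pos = np.argsort(logit_effect)[::-1][:10]  # tokens pushed up most (positive logits)
top_neg = np.argsort(logit_effect)[:10]        # tokens pushed down most (negative logits)
print(top_pos, top_neg)
```

With the real weights, `top_pos` would recover tokens like "nav" and "tov", and `top_neg` the suppressed tokens above.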
    Activation Density: 12.203%
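Activation density is the fraction of token positions in the dashboard's prompt set on which the feature fires (nonzero activation). A toy sketch of the computation, using fake random activations rather than this feature's real ones:

```python
import numpy as np

rng = np.random.default_rng(1)
# Fake (prompts, tokens) activation grid: ReLU of a negatively-shifted normal,
# so most positions are zero, as for a sparse feature.
acts = np.maximum(rng.normal(loc=-1.0, size=(100, 128)), 0.0)

density = float((acts > 0).mean())  # fraction of positions with nonzero activation
print(f"{density:.3%}")
```

For this feature the same ratio over 24,576 prompts of 128 tokens comes out to 12.203%.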

    No Known Activations