© Neuronpedia 2026

    Neuronpedia

    Gemma-2-2B › 24-GEMMASCOPE-TRANSCODER-16K › Feature 3696
    Explanations

    auxiliary verbs like "is, are, was, were", especially when they appear next to personal pronouns like 'we' and 'they'.

    oai_token-act-pair · gemini-2.0-flash

    Code-related

    np_max-act-logits · gemini-2.0-flash
    Configuration

    Source: google/gemma-scope-2b-pt-transcoders/layer_24/width_16k/average_l0_37
    Prompts (Dashboard): 24,576 prompts, 128 tokens each
    Dataset (Dashboard): monology/pile-uncopyrighted
    Features: 16,384
    Data Type: float32
    Hook Name: blocks.24.ln2.hook_normalized
    Architecture: jumprelu_transcoder
    Context Size: 1,024
    Dataset: monology/pile-uncopyrighted
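The architecture listed above, jumprelu_transcoder, reads the normalized residual stream at blocks.24.ln2.hook_normalized and reconstructs the MLP output through 16,384 sparsely activating features gated by a per-feature JumpReLU threshold. A minimal sketch of the forward pass, with randomly initialized stand-in weights (the trained weights live in google/gemma-scope-2b-pt-transcoders; the d_model of 2304 for Gemma-2-2B and the threshold values here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 16,384 features matches the configuration above; d_model = 2304 is
# Gemma-2-2B's hidden size (an assumption in this sketch).
D_MODEL, N_FEATURES = 2304, 16_384

# Hypothetical stand-ins for the trained transcoder parameters.
W_enc = rng.normal(0, 0.02, (D_MODEL, N_FEATURES)).astype(np.float32)
b_enc = np.zeros(N_FEATURES, dtype=np.float32)
W_dec = rng.normal(0, 0.02, (N_FEATURES, D_MODEL)).astype(np.float32)
b_dec = np.zeros(D_MODEL, dtype=np.float32)
theta = np.full(N_FEATURES, 0.05, dtype=np.float32)  # invented thresholds

def jumprelu(z, theta):
    # JumpReLU: pass z through unchanged where it exceeds its threshold,
    # zero it elsewhere, i.e. z * Heaviside(z - theta).
    return z * (z > theta)

def transcoder_forward(x):
    # x: activation read at blocks.24.ln2.hook_normalized.
    acts = jumprelu(x @ W_enc + b_enc, theta)  # sparse feature activations
    out = acts @ W_dec + b_dec                 # reconstructed MLP output
    return acts, out

x = rng.normal(0, 1, D_MODEL).astype(np.float32)
acts, out = transcoder_forward(x)
print(acts.shape, out.shape)  # → (16384,) (2304,)
```

The thresholded gate is what distinguishes this from a plain ReLU autoencoder: activations just above zero but below theta are suppressed, which trades a little reconstruction fidelity for much sparser feature firing.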
    Negative Logits              Positive Logits
    madas           -0.48        /*                 0.80
    ontale          -0.46        iconque            0.73
     mor            -0.45        __(/*!             0.67
    ucous           -0.44        таратура           0.67
     loin           -0.44         comigo            0.66
    copa            -0.43         AssemblyCompany   0.66
     Dogg           -0.43        StructEnd          0.65
    FIX             -0.42        IntoConstraints    0.64
    rxjs            -0.42         purpoſe           0.64
    kit             -0.41        ViewFeatures       0.62
    (leading spaces are part of the token)
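The logit tables above show the tokens a feature most promotes and suppresses, which can be sketched as the projection of the feature's decoder direction through the model's unembedding matrix. A toy version (random stand-in weights, and a 1,000-token vocabulary instead of Gemma's real tokenizer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a real run would use the trained decoder row for
# feature 3696 and Gemma-2-2B's unembedding matrix over its full vocabulary.
D_MODEL, VOCAB = 2304, 1_000

d_feature = rng.normal(0, 1, D_MODEL)        # decoder direction of one feature
W_U = rng.normal(0, 0.02, (D_MODEL, VOCAB))  # toy unembedding matrix

# Each entry is the direct effect on that token's logit of increasing the
# feature's activation by one unit.
logit_effects = d_feature @ W_U

top = np.argsort(logit_effects)[-10:][::-1]  # most-promoted token ids
bottom = np.argsort(logit_effects)[:10]      # most-suppressed token ids
```

For this feature, the strongly promoted `/*` and `__(/*!` tokens are consistent with the "Code-related" explanation above: firing the feature pushes probability toward comment-opening and identifier-like code tokens.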
    Activation Density: 2.362%
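Activation density is the fraction of sampled tokens on which the feature fires (has a positive activation). A toy illustration over the same 24,576-prompt × 128-token grid the dashboard samples, with synthetic activations standing in for the real ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations: ReLU of a negatively-shifted Gaussian, so the
# feature fires on only a small fraction of tokens, as real features do.
acts = np.maximum(rng.normal(-2, 1, (24_576, 128)), 0.0)

# Density = fraction of tokens with a strictly positive activation.
density = (acts > 0).mean()
print(f"{density:.3%}")
```

A density around 2% means the feature is active on roughly 1 in 40 tokens, sparse enough to be interpretable but common enough to estimate reliably from this sample.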

    No Known Activations