© Neuronpedia 2026

    Neuronpedia

    Gemma-2-2B · 25-GEMMASCOPE-TRANSCODER-16K · Feature 9431
    Explanations

    "sentences that use the words 'can', 'will', 'could', and 'which'" (oai_token-act-pair · gemini-2.0-flash)

    "Potential negative outcomes" (np_max-act-logits · gemini-2.0-flash)
    Configuration
    google/gemma-scope-2b-pt-transcoders/layer_25/width_16k/average_l0_41
    Prompts (Dashboard): 24,576 prompts, 128 tokens each
    Dataset (Dashboard): monology/pile-uncopyrighted
    Features: 16,384
    Data Type: float32
    Hook Name: blocks.25.ln2.hook_normalized
    Architecture: jumprelu_transcoder
    Context Size: 1,024
    Dataset: monology/pile-uncopyrighted
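The configuration names a `jumprelu_transcoder` architecture: Gemma Scope transcoders use a JumpReLU nonlinearity, which passes a pre-activation through unchanged when it exceeds a learned per-feature threshold and zeroes it otherwise. A minimal numpy sketch of the encoder side (the weight names, layout, and threshold values here are illustrative placeholders, not the actual checkpoint contents):

```python
import numpy as np

def jumprelu(pre_acts, threshold):
    """JumpReLU: keep values that exceed the per-feature threshold, zero the rest."""
    return pre_acts * (pre_acts > threshold)

# Dimensions from the configuration above: Gemma-2-2B has d_model=2304,
# and this transcoder has 16,384 features.
d_model, n_features = 2304, 16_384
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((d_model, n_features)) * 0.02  # placeholder encoder weights
b_enc = np.zeros(n_features)
threshold = np.full(n_features, 0.03)  # placeholder per-feature thresholds

# One residual-stream vector, as read at blocks.25.ln2.hook_normalized.
x = rng.standard_normal(d_model)
acts = jumprelu(x @ W_enc + b_enc, threshold)
print(f"{(acts > 0).sum()} features active")
```

With trained weights the activations are far sparser than random weights suggest: the `average_l0_41` in the source path indicates roughly 41 active features per token on average.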
    Negative Logits
    LookAnd: -0.77
     NgModule: -0.62
     protoimpl: -0.55
    webElementXpaths: -0.54
    קישורים: -0.53
    __*/: -0.53
    ↘: -0.52
    сылкі: -0.52
    Зноскі: -0.52
    Източници: -0.52

    Positive Logits
    queryInterface: 0.59
    ibatis: 0.57
     تضيفلها: 0.56
     ra: 0.55
    MessageTagHelper: 0.54
    regon: 0.54
    bulin: 0.53
    zeitige: 0.52
    atap: 0.51
    estanding: 0.51
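The positive and negative logit lists show which vocabulary tokens this feature's output direction most promotes or suppresses when projected through the model's unembedding matrix. A hedged sketch of how such a table can be produced, using toy dimensions and random stand-ins for the real decoder direction and unembedding (the real model has d_model=2304 and a vocabulary of roughly 256k tokens):

```python
import numpy as np

# Toy sizes for the sketch; see the lead-in for the real dimensions.
d_model, vocab_size = 64, 1_000
rng = np.random.default_rng(0)
w_dec = rng.standard_normal(d_model)              # stand-in for the feature's decoder direction
W_U = rng.standard_normal((d_model, vocab_size))  # stand-in unembedding matrix

logit_effects = w_dec @ W_U                       # this feature's push on every token logit
top10 = np.argsort(logit_effects)[::-1][:10]      # most-promoted token ids ("positive logits")
bottom10 = np.argsort(logit_effects)[:10]         # most-suppressed token ids ("negative logits")
```

Decoding `top10` and `bottom10` with the tokenizer yields token strings like those in the table above.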
    Activations Density: 4.340%

    No Known Activations
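The activation density of 4.340% means the feature fires (activation strictly greater than zero) on roughly 1 in 23 token positions in the dashboard's sample. Computing it is a one-liner; a small self-contained sketch:

```python
import numpy as np

def activation_density(acts):
    """Fraction of token positions where the feature activation is strictly positive."""
    acts = np.asarray(acts)
    return (acts > 0).mean()

# Example: 2 active positions out of 8 -> 25% density.
print(f"{activation_density([0, 0, 1.2, 0, 0, 0.4, 0, 0]):.3%}")  # → 25.000%
```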