© Neuronpedia 2026

    Neuronpedia

    Gemma-2-2B · 24-GEMMASCOPE-TRANSCODER-16K · Feature 8436
    Explanations

    uses of the words "that", "had", "who", "which", "he", "Grant", "the town", "death", "she", "it", "Before"

    oai_token-act-pair · gemini-2.0-flash

    have

    np_max-act-logits · gemini-2.0-flash
    Configuration
    google/gemma-scope-2b-pt-transcoders/layer_24/width_16k/average_l0_37
    Prompts (Dashboard): 24,576 prompts, 128 tokens each
    Dataset (Dashboard): monology/pile-uncopyrighted
    Features: 16,384
    Data Type: float32
    Hook Name: blocks.24.ln2.hook_normalized
    Architecture: jumprelu_transcoder
    Context Size: 1,024
    Dataset: monology/pile-uncopyrighted
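The architecture listed above is a JumpReLU transcoder: it reads the normalized residual stream at `blocks.24.ln2.hook_normalized`, encodes it into 16,384 sparse features, and decodes to predict the block's MLP output. A minimal NumPy sketch of the forward pass, assuming random stand-in parameters (the names `W_enc`, `theta`, `W_dec`, and all shapes besides Gemma-2-2B's 2,304-dimensional residual stream and the 16k feature count are illustrative, not the actual Gemma Scope parameter layout):

```python
import numpy as np

d_model, n_features = 2304, 16384  # Gemma-2-2B width, 16k-feature transcoder

rng = np.random.default_rng(0)
# Illustrative random parameters; the real ones ship in
# google/gemma-scope-2b-pt-transcoders.
W_enc = rng.normal(0.0, 0.02, (d_model, n_features))
b_enc = np.zeros(n_features)
theta = np.full(n_features, 0.05)  # learned per-feature JumpReLU threshold
W_dec = rng.normal(0.0, 0.02, (n_features, d_model))
b_dec = np.zeros(d_model)

def jumprelu_transcoder(x):
    """x: (batch, d_model) activations at blocks.24.ln2.hook_normalized."""
    pre = x @ W_enc + b_enc
    # JumpReLU: pass the pre-activation through unchanged where it exceeds
    # the threshold, zero it everywhere else.
    acts = np.where(pre > theta, pre, 0.0)
    return acts @ W_dec + b_dec, acts

x = rng.normal(0.0, 1.0, (4, d_model))
out, acts = jumprelu_transcoder(x)
print(out.shape, acts.shape)  # (4, 2304) (4, 16384)
```

The `acts` row for a given token is the sparse feature vector this dashboard indexes into; position 8436 is the feature shown on this page.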

    Negative Logits
    ]--;                -0.93
     myſelf             -0.77
    setVerticalGroup    -0.75
    RegistryLite        -0.74
     MonoBehaviour      -0.74
    Tikang              -0.72
    ")));               -0.72
    __":                -0.71
    RegressionTest      -0.71
    ]++;                -0.71

    Positive Logits
     had                 2.08
    had                  1.57
    Had                  1.55
     Had                 1.52
     HAD                 1.20
     having              1.15
     have                1.06
     has                 1.05
     tenido              1.03
     có                  1.01
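Lists like the ones above come from the standard feature logit lens: take the feature's decoder direction, project it through the model's unembedding matrix, and rank vocabulary tokens by the resulting logit. A hedged sketch with tiny random stand-in matrices (`w_dec_feature`, `W_U`, and the eight-token vocabulary are all illustrative; real values come from the transcoder's decoder row for feature 8436 and Gemma-2-2B's unembedding):

```python
import numpy as np

d_model, vocab = 2304, 8  # tiny stand-in vocabulary for illustration
vocab_toks = ["]--;", " myſelf", " had", "had", "Had", " having", " have", " has"]

rng = np.random.default_rng(0)
w_dec_feature = rng.normal(size=d_model)  # decoder row for one feature (illustrative)
W_U = rng.normal(size=(d_model, vocab))   # unembedding matrix (illustrative)

# One logit per vocabulary token: how strongly this feature's output
# direction pushes toward each token.
logits = w_dec_feature @ W_U
order = np.argsort(logits)
top = [(vocab_toks[i], round(float(logits[i]), 2)) for i in order[::-1][:3]]
bottom = [(vocab_toks[i], round(float(logits[i]), 2)) for i in order[:3]]
print("top:", top)
print("bottom:", bottom)
```

With the real matrices, `top` corresponds to the Positive Logits table (variants of "had") and `bottom` to the Negative Logits table.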
    Activation Density: 16.149%

    No Known Activations
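The 16.149% activation density above is the fraction of (prompt, token) positions in the dashboard sample (24,576 prompts × 128 tokens) where this feature's activation is nonzero. A minimal sketch of that statistic, using a random stand-in activation array in place of real transcoder activations:

```python
import numpy as np

def activation_density(acts):
    """Fraction of (prompt, token) positions where the feature fires (> 0)."""
    return float(np.mean(acts > 0))

rng = np.random.default_rng(0)
# Illustrative: ReLU of Gaussian noise as a stand-in for one feature's
# activations over 100 prompts x 128 tokens each.
acts = np.maximum(rng.normal(-1.0, 1.0, (100, 128)), 0.0)
print(f"{activation_density(acts):.3%}")
```

A density this high (roughly one token in six) marks a broadly firing feature, consistent with the explanations above that describe common function words like "had" and "that".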