    Neuronpedia · GPT2-Small · Residual Stream · 1-RES-JB · Feature 4988
    Joseph Bloom · Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
    Explanations

    instances of the word "lots" being mentioned with varying degrees of emphasis

    oai_token-act-pair · gpt-3.5-turbo

    the word "lots" and its variations, indicating a focus on abundance or quantity

    oai_token-act-pair · gpt-4o-mini
    Top Features by Cosine Similarity
    Comparing With GPT2-SMALL @ 1-res-jb
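
    This panel ranks other features in the same SAE by the cosine similarity of their decoder directions with this feature's direction. A minimal sketch of that comparison in PyTorch follows; the randomly initialised W_dec is only a placeholder for the real decoder matrix (the feature count and feature index 4988 come from this page, everything else is an assumption).

        import torch
        import torch.nn.functional as F

        # Placeholder decoder matrix: one row per feature, each a direction in the residual stream.
        # Real weights would come from the SAE listed in the Configuration section below.
        W_dec = torch.randn(24_576, 768)
        FEATURE = 4988

        # Cosine similarity of this feature's decoder direction against every other feature's.
        sims = F.cosine_similarity(W_dec[FEATURE].unsqueeze(0), W_dec, dim=-1)
        sims[FEATURE] = -1.0  # exclude the feature itself
        top = torch.topk(sims, k=10)
        for idx, val in zip(top.indices.tolist(), top.values.tolist()):
            print(f"feature {idx}: cosine similarity {val:.3f}")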
    Configuration
    jbloom/GPT2-Small-SAEs-Reformatted/blocks.1.hook_resid_pre

    Prompts (Dashboard): 24,576 prompts, 128 tokens each
    Dataset (Dashboard): Skylion007/openwebtext
    Features: 24,576
    Data Type: torch.float32
    Hook Point: blocks.1.hook_resid_pre
    Architecture: standard
    Context Size: 128
    Dataset: Skylion007/openwebtext
    Hook Point Layer: 1
    Activation Function: relu
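
    The configuration pins down the geometry of this SAE: a "standard" ReLU autoencoder with 24,576 features reading 128-token contexts from blocks.1.hook_resid_pre of GPT2-Small. Below is a minimal sketch of that architecture in PyTorch, with TransformerLens supplying the activations. The pre-encoder bias subtraction, the d_model of 768, and all class and variable names are illustrative assumptions rather than details taken from this page; real weights would be loaded from jbloom/GPT2-Small-SAEs-Reformatted.

        import torch
        import torch.nn as nn
        from transformer_lens import HookedTransformer

        D_MODEL, N_FEATURES = 768, 24_576  # GPT2-Small residual width; feature count from the table above

        class StandardSAE(nn.Module):
            """Minimal "standard" (ReLU) sparse autoencoder with the shape listed above."""

            def __init__(self, d_model: int = D_MODEL, n_features: int = N_FEATURES):
                super().__init__()
                self.W_enc = nn.Parameter(torch.zeros(d_model, n_features))
                self.W_dec = nn.Parameter(torch.zeros(n_features, d_model))
                self.b_enc = nn.Parameter(torch.zeros(n_features))
                self.b_dec = nn.Parameter(torch.zeros(d_model))

            def encode(self, x: torch.Tensor) -> torch.Tensor:
                # Feature activations: ReLU over an affine map of the residual stream.
                return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

            def decode(self, feats: torch.Tensor) -> torch.Tensor:
                return feats @ self.W_dec + self.b_dec

        model = HookedTransformer.from_pretrained("gpt2")
        _, cache = model.run_with_cache("There are lots of reasons to read this.")
        resid = cache["blocks.1.hook_resid_pre"]  # (batch, pos, 768), the hook point above

        sae = StandardSAE()               # real weights would be loaded, not zero-initialised
        feature_acts = sae.encode(resid)  # (batch, pos, 24576)
        print(feature_acts[0, :, 4988])   # activation of feature 4988 at each token position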

    Negative Logits
    "rift"          -0.78
    "Adult"         -0.77
    "interstitial"  -0.77
    "heid"          -0.77
    "tein"          -0.76
    "inal"          -0.71
    "ectar"         -0.70
    "Lie"           -0.68
    "antle"         -0.68
    "zyk"           -0.68
    Positive Logits
    " lots"         0.96
    " Lots"         0.93
    "icult"         0.82
    "creen"         0.80
    " thereof"      0.72
    " loads"        0.69
    "tery"          0.68
    "lot"           0.67
    " amounts"      0.67
    " headaches"    0.66
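
    The two lists above show the output tokens this feature most directly promotes and suppresses: the feature's decoder direction is projected through GPT2-Small's unembedding matrix and the largest and smallest entries are reported. A sketch of that computation follows; the random w_dec is a stand-in for decoder row 4988 of the real SAE, and final-LayerNorm folding is ignored.

        import torch
        from transformer_lens import HookedTransformer

        model = HookedTransformer.from_pretrained("gpt2")

        # Placeholder for decoder row 4988 of the SAE; real values come from the trained weights.
        w_dec = torch.randn(model.cfg.d_model)

        # Direct effect of the feature direction on each vocabulary logit.
        logit_effects = w_dec @ model.W_U  # shape (d_vocab,)

        top = torch.topk(logit_effects, k=10)
        bottom = torch.topk(logit_effects, k=10, largest=False)
        print("positive logits:", list(zip(model.to_str_tokens(top.indices), top.values.tolist())))
        print("negative logits:", list(zip(model.to_str_tokens(bottom.indices), bottom.values.tolist())))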
    Activation Density: 0.007%

    No Known Activations
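
    The density figure is the fraction of tokens in the dashboard's sample (24,576 prompts of 128 tokens from Skylion007/openwebtext) on which this feature's activation is non-zero. A sketch of that bookkeeping is below; the feature_acts helper is a random placeholder standing in for real SAE activations such as sae.encode(resid) from the earlier sketch.

        import torch

        FEATURE = 4988

        def feature_acts(n_tokens: int) -> torch.Tensor:
            # Placeholder: in practice this would be the SAE's activations for feature 4988
            # over a batch of openwebtext tokens, e.g. sae.encode(resid)[..., FEATURE].
            return torch.relu(torch.randn(n_tokens) - 3.0)

        fired, seen = 0, 0
        for _ in range(100):               # stream over batches of tokens
            acts = feature_acts(32 * 128)  # 32 prompts of 128 tokens per batch
            fired += (acts > 0).sum().item()
            seen += acts.numel()

        print(f"activation density: {fired / seen:.4%}")  # the dashboard reports 0.007% here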