GPT2-Small · Residual Stream · 4-RES-JB · Feature 14767
Joseph Bloom · Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
Explanations

instances where attention is being emphasized or discussed
(oai_token-act-pair · gpt-3.5-turbo)

instances of the word "attention" and its variations in different contexts
(oai_token-act-pair · gpt-4o-mini, triggered by @bot)
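
These explanations can also be fetched programmatically. A minimal sketch, assuming Neuronpedia's public feature endpoint follows the /api/feature/{model}/{source}/{index} pattern and returns explanations under an "explanations" key (both details are assumptions; check Neuronpedia's API documentation):

    import requests

    # Hypothetical endpoint shape; verify against the Neuronpedia API docs.
    url = "https://www.neuronpedia.org/api/feature/gpt2-small/4-res-jb/14767"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    feature = resp.json()

    # Key names here are assumptions about the response schema.
    for exp in feature.get("explanations", []):
        print(exp.get("description"), "·", exp.get("explanationModelName"))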
Top Features by Cosine Similarity (compared with GPT2-Small @ 4-res-jb)
Configuration
SAE: jbloom/GPT2-Small-SAEs-Reformatted/blocks.4.hook_resid_pre
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): Skylion007/openwebtext
Features: 24,576
Data Type: torch.float32
Hook Point: blocks.4.hook_resid_pre
Hook Point Layer: 4
Architecture: standard
Context Size: 128
Dataset: Skylion007/openwebtext
Activation Function: relu
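
This configuration corresponds to the SAE releases loadable through the sae_lens Python library. A minimal sketch, assuming sae_lens's release/ID naming for these weights (the strings below are assumptions; the canonical weights live at jbloom/GPT2-Small-SAEs-Reformatted on Hugging Face):

    from sae_lens import SAE

    # Load the layer-4 residual stream SAE for GPT2-Small.
    # Release and ID strings are assumptions based on sae_lens conventions,
    # and config key names may differ across sae_lens versions.
    sae, cfg_dict, sparsity = SAE.from_pretrained(
        release="gpt2-small-res-jb",
        sae_id="blocks.4.hook_resid_pre",
        device="cpu",
    )

    print(cfg_dict["d_sae"])      # expect 24576 features
    print(cfg_dict["hook_name"])  # expect blocks.4.hook_resid_pre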

Negative Logits
" halves"      -0.78
" Yugoslavia"  -0.68
" Tale"        -0.66
" Recon"       -0.65
" Mehran"      -0.63
"tre"          -0.62
"iche"         -0.61
"tein"         -0.60
"ourke"        -0.60
" Townsend"    -0.58

Positive Logits
"estinal"       1.05
"orial"         0.91
"ively"         0.91
"arios"         0.88
" attention"    0.86
"atile"         0.84
" Attention"    0.79
"ibility"       0.76
" large"        0.76
"stadt"         0.76
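
These logit values are the projection of the feature's decoder direction through the model's unembedding, showing which output tokens the feature most promotes or suppresses. A minimal sketch of recomputing them with transformer_lens, reusing the sae object from the configuration sketch above (exact values depend on layernorm-folding choices, so small discrepancies from the table are expected):

    import torch
    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained("gpt2")

    # Decoder direction for feature 14767, shape (d_model,).
    feature_dir = sae.W_dec[14767]

    # Project through the unembedding: one logit effect per vocab token.
    logit_effects = feature_dir @ model.W_U  # shape (d_vocab,)

    top = torch.topk(logit_effects, k=10)
    bottom = torch.topk(logit_effects, k=10, largest=False)
    print([model.to_single_str_token(int(i)) for i in top.indices])     # e.g. " attention"
    print([model.to_single_str_token(int(i)) for i in bottom.indices])  # e.g. " halves"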
Activation Density: 0.017%

No Known Activations
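
Activation density is the fraction of tokens on which the feature fires (activation above zero); the 0.017% figure is presumably measured over the openwebtext dashboard prompts listed in the configuration. A minimal sketch of estimating it on a single prompt, reusing the model and sae objects from the sketches above (a single prompt will vary widely from the global figure):

    # Run the model and cache the residual stream at the SAE's hook point.
    tokens = model.to_tokens("The mechanism pays attention to every token.")
    _, cache = model.run_with_cache(tokens)
    resid = cache["blocks.4.hook_resid_pre"]  # (batch, pos, d_model)

    # Encode with the SAE and count how often feature 14767 is nonzero.
    acts = sae.encode(resid)                  # (batch, pos, d_sae)
    firing_rate = (acts[..., 14767] > 0).float().mean()
    print(f"density on this prompt: {firing_rate.item():.4%}")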