Neuronpedia · GPT2-Small · Residual Stream · 0-RES-JB · Feature 16978
Source: Joseph Bloom, Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
    Explanations

    legal or official terms related to the concept of attention or focus

    oai_token-act-pair · gpt-3.5-turbo

    terms related to attention and memory

    oai_token-act-pair · gpt-4o-mini
Configuration
    jbloom/GPT2-Small-SAEs-Reformatted/blocks.0.hook_resid_pre
    Prompts (Dashboard):  24,576 prompts, 128 tokens each
    Dataset (Dashboard):  Skylion007/openwebtext
    Features:             24,576
    Data Type:            torch.float32
    Hook Point:           blocks.0.hook_resid_pre
    Architecture:         standard
    Context Size:         128
    Dataset:              Skylion007/openwebtext
    Hook Point Layer:     0
    Activation Function:  relu
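
The "standard" architecture with a ReLU activation function is the single-hidden-layer sparse autoencoder design: a linear encoder followed by a ReLU, then a linear decoder back to the residual stream. Below is a minimal PyTorch sketch using the dimensions from the configuration above; the residual width d_model = 768 (GPT2-small) is an assumption, and all names are illustrative, not Neuronpedia's implementation.

```python
import torch
import torch.nn as nn

class StandardSAE(nn.Module):
    """Minimal 'standard' SAE: linear encoder + ReLU, linear decoder.

    Feature count and activation function follow the configuration above;
    d_model = 768 (GPT2-small's residual stream width) is an assumption.
    """

    def __init__(self, d_model: int = 768, n_features: int = 24_576):
        super().__init__()
        self.W_enc = nn.Parameter(torch.empty(d_model, n_features))
        self.b_enc = nn.Parameter(torch.zeros(n_features))
        self.W_dec = nn.Parameter(torch.empty(n_features, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # The ReLU makes the feature activations sparse and non-negative.
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the residual stream from the sparse feature activations.
        return self.encode(x) @ self.W_dec + self.b_dec
```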

Negative Logits
    "maker"       -0.69
    " shakes"     -0.64
    " Rolls"      -0.62
    " documents"  -0.58
    " substr"     -0.57
    " phases"     -0.56
    " counters"   -0.56
    " morph"      -0.55
    " Pumpkin"    -0.55
    " Mat"        -0.55
Positive Logits
    "ention"      4.24
    "ENTION"      2.75
    "ension"      1.25
    " Attention"  1.20
    "ersion"      1.11
    "ailability"  1.11
    "ensity"      1.10
    "inence"      1.09
    "ent"         1.04
    "ensions"     1.03
Activation Density: 0.016%
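
Activation density is the fraction of dashboard tokens on which the feature fires (activation strictly above zero). Over the 24,576 prompts × 128 tokens listed in the configuration, 0.016% works out to roughly 500 firing tokens. A one-function sketch, with illustrative names:

```python
import torch

def activation_density(feature_acts: torch.Tensor) -> float:
    """Fraction of tokens on which the feature fires (activation > 0).

    feature_acts: one feature's activations over a token batch, shape (n_tokens,).
    """
    return (feature_acts > 0).float().mean().item()
```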

    No Known Activations