    Joseph Bloom · Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
    GPT2-Small · Residual Stream · 5-RES-JB · Feature 5504
    Explanations

    The neuron seems to be looking for references to specific entities or attributes indicated by the word "that" with an emphasis on explanations or relationships

    oai_token-act-pair · gpt-3.5-turbo

    the word "that" in various contexts

    oai_token-act-pair · gpt-4o-mini · triggered by @bot
    Top Features by Cosine Similarity
    Comparing with GPT2-Small @ 5-res-jb
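    No similar-feature list was captured on this page. As an illustration only: such lists are typically built from the cosine similarity between decoder directions. A minimal sketch in PyTorch, using random stand-in tensors rather than the released SAE weights:

    import torch
    import torch.nn.functional as F

    d_model, d_sae = 768, 24_576                # GPT2-small residual width, feature count from the configuration below
    W_dec_this = torch.randn(d_sae, d_model)    # stand-in for this SAE's decoder weights
    W_dec_other = torch.randn(d_sae, d_model)   # stand-in for the comparison SAE's decoder weights

    feature_idx = 5504
    # Cosine similarity between feature 5504's decoder direction and every
    # feature direction in the comparison SAE; the largest values are the
    # "top features by cosine similarity".
    sims = F.cosine_similarity(W_dec_this[feature_idx].unsqueeze(0), W_dec_other, dim=-1)
    top = torch.topk(sims, k=10)
    print(list(zip(top.indices.tolist(), top.values.tolist())))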
    Configuration
    jbloom/GPT2-Small-SAEs-Reformatted/blocks.5.hook_resid_pre
    Prompts (Dashboard): 24,576 prompts, 128 tokens each
    Dataset (Dashboard): Skylion007/openwebtext
    Features: 24,576
    Data Type: torch.float32
    Hook Point: blocks.5.hook_resid_pre
    Architecture: standard
    Context Size: 128
    Dataset: Skylion007/openwebtext
    Hook Point Layer: 5
    Activation Function: relu
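    The configuration above describes a "standard" SAE with a ReLU activation, 24,576 features, and a hook point on GPT2-small's layer-5 residual stream (width 768). A minimal sketch of that architecture in PyTorch, as an illustration rather than the released training code; subtracting b_dec from the input before encoding is an assumption about the training setup:

    import torch
    import torch.nn as nn

    class StandardSAE(nn.Module):
        # Sketch of a "standard" sparse autoencoder: linear encoder, ReLU, linear decoder.
        def __init__(self, d_in: int = 768, d_sae: int = 24_576):
            super().__init__()
            self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
            self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
            self.b_enc = nn.Parameter(torch.zeros(d_sae))
            self.b_dec = nn.Parameter(torch.zeros(d_in))
            nn.init.kaiming_uniform_(self.W_enc)
            nn.init.kaiming_uniform_(self.W_dec)

        def encode(self, x: torch.Tensor) -> torch.Tensor:
            # Feature activations: ReLU of an affine map of the residual-stream input.
            return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

        def decode(self, feature_acts: torch.Tensor) -> torch.Tensor:
            # Reconstruct the residual stream from the (sparse) feature activations.
            return feature_acts @ self.W_dec + self.b_dec

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decode(self.encode(x))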

    Negative Logits
    "brates"    -0.76
    "ormons"    -0.76
    "rior"      -0.70
    "cycles"    -0.69
    "uously"    -0.69
    "oby"       -0.69
    "istics"    -0.69
    "ciples"    -0.68
    "asters"    -0.68
    " Leilan"   -0.66
    Positive Logits
    " pesky"            1.12
    " fateful"          1.08
    " particular"       1.04
    " same"             0.97
    " kind"             0.97
    " sort"             0.85
    " equation"         0.84
    " aforementioned"   0.84
    " elusive"          0.83
    " type"             0.81
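    Per-token logit lists like the two above are typically obtained by projecting the feature's decoder direction through the model's unembedding matrix and taking the most positive and most negative entries. A hedged sketch using TransformerLens, with a random stand-in for the trained decoder weights:

    import torch
    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained("gpt2")   # GPT2-small

    feature_idx = 5504
    W_dec = torch.randn(24_576, model.cfg.d_model)       # stand-in for the trained SAE decoder
    # Effect of this feature's decoder direction on every vocabulary logit.
    logit_effects = W_dec[feature_idx] @ model.W_U       # shape: (d_vocab,)

    top_pos = torch.topk(logit_effects, k=10)
    top_neg = torch.topk(-logit_effects, k=10)
    # With the real weights, these would recover entries like " pesky" and "brates" above.
    print([model.tokenizer.decode(i) for i in top_pos.indices.tolist()])
    print([model.tokenizer.decode(i) for i in top_neg.indices.tolist()])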
    Activation Density: 0.102%
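    Activation density is the fraction of sampled dataset tokens on which the feature fires (activation greater than zero). A toy sketch of the calculation, with a stand-in activation vector in place of real cached activations:

    import torch

    # Stand-in: feature 5504's activation on each of 100,000 sampled tokens.
    feature_acts = torch.zeros(100_000)
    feature_acts[:102] = 1.0          # pretend the feature fired on 102 of them

    density = (feature_acts > 0).float().mean().item()
    print(f"Activation density: {100 * density:.3f}%")   # -> 0.102%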

    No Known Activations