    Joseph Bloom · Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
    GPT2-Small · Residual Stream · 5-RES-JB · Feature 11736
    Explanations

    terms related to ancient Rome

    oai_token-act-pair · gpt-3.5-turbo

    references to the Roman context or culture

    oai_token-act-pair · gpt-4o-mini
    Configuration
        SAE: jbloom/GPT2-Small-SAEs-Reformatted/blocks.5.hook_resid_pre
        Prompts (Dashboard): 24,576 prompts, 128 tokens each
        Dataset (Dashboard): Skylion007/openwebtext
        Features: 24,576
        Data Type: torch.float32
        Hook Point: blocks.5.hook_resid_pre
        Architecture: standard
        Context Size: 128
        Dataset: Skylion007/openwebtext
        Hook Point Layer: 5
        Activation Function: relu
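
    The configuration above describes the layer-5 residual-stream SAE from Joseph Bloom's GPT2-Small release. A minimal sketch of loading it with the sae_lens library is shown below; the release name "gpt2-small-res-jb" and the use of SAE.from_pretrained are assumptions based on how this release is typically registered in SAELens, so check the registry of your installed version.

        # Minimal sketch, assuming a recent sae_lens that exposes SAE.from_pretrained
        # and registers Joseph Bloom's GPT2-Small residual-stream SAEs as "gpt2-small-res-jb".
        from sae_lens import SAE

        sae, cfg_dict, sparsity = SAE.from_pretrained(
            release="gpt2-small-res-jb",       # Joseph Bloom's GPT2-Small residual-stream release
            sae_id="blocks.5.hook_resid_pre",  # hook point from the configuration above
        )

        # The loaded config should match the dashboard: 24,576 features, float32,
        # ReLU activation, context size 128, trained on Skylion007/openwebtext.
        print(sae.cfg)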
    Negative Logits
        intosh        -1.10
        mble          -0.94
        ramid         -0.87
        */(           -0.86
        olulu         -0.84
        jri           -0.83
        anwhile       -0.82
        NetMessage    -0.82
        lessly        -0.82
        ickr          -0.81

    Positive Logits
         Catholic      1.03
         Reign         0.87
         numer         0.87
         Roman         0.86
         Catholicism   0.86
         Torch         0.85
         Catholics     0.81
         Emperor       0.78
         Inquisition   0.78
         Pont          0.77
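
    These lists are the vocabulary tokens this feature most promotes and suppresses: the feature's decoder direction projected through the model's unembedding matrix. A rough sketch of reproducing such a list is below, assuming the `sae` object from the earlier snippet and TransformerLens' HookedTransformer; Neuronpedia's exact normalization may differ, so treat the values as approximate.

        # Sketch: project feature 11736's decoder direction through GPT-2 small's unembedding.
        # Assumes `sae` (from sae_lens, W_dec of shape [d_sae, d_model]) is already loaded.
        import torch
        from transformer_lens import HookedTransformer

        model = HookedTransformer.from_pretrained("gpt2", device="cpu")

        feature_idx = 11736
        logit_weights = sae.W_dec[feature_idx] @ model.W_U  # shape: [d_vocab]

        top_vals, top_ids = torch.topk(logit_weights, k=10)
        bot_vals, bot_ids = torch.topk(-logit_weights, k=10)

        print("positive:", [(model.tokenizer.decode(int(i)), round(v.item(), 2))
                            for i, v in zip(top_ids, top_vals)])
        print("negative:", [(model.tokenizer.decode(int(i)), round(-v.item(), 2))
                            for i, v in zip(bot_ids, bot_vals)])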
    Activation Density: 0.014%

    No Known Activations