Neuronpedia · GPT2-Small · Residual Stream · 10-RES-JB · Feature 20745
Joseph Bloom · Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small

    Explanations

"phrases describing a shift in focus or attention from one thing to another"
    oai_token-act-pair · gpt-3.5-turbo

"words and phrases indicating a shift in focus or perspective"
    oai_token-act-pair · gpt-4o-mini
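Feature records like this one can also be fetched programmatically. A minimal sketch, assuming Neuronpedia's public feature endpoint follows the pattern /api/feature/{model}/{sae}/{index} and that the JSON field names used below exist (both are assumptions, not confirmed by this page):

```python
import requests

# Assumed endpoint for this feature (model gpt2-small, SAE 10-res-jb,
# feature index 20745); the URL pattern is an assumption.
url = "https://www.neuronpedia.org/api/feature/gpt2-small/10-res-jb/20745"
record = requests.get(url, timeout=30).json()

# Field names ("explanations", "description") are assumptions about the
# response schema; print each stored auto-interp explanation.
for exp in record.get("explanations", []):
    print(exp.get("description"))
```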
Top Features by Cosine Similarity (compared against gpt2-small @ 10-res-jb)
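This panel ranks other features in the same SAE by how closely their decoder directions align with this one. A minimal sketch of that ranking, reusing the `sae` object loaded in the sketch after the configuration table below:

```python
import torch
import torch.nn.functional as F

# Normalize every decoder direction, then take cosine similarity against
# feature 20745's direction. `sae` comes from the loading sketch below.
directions = F.normalize(sae.W_dec, dim=-1)   # (24576, d_model), unit rows
sims = directions @ directions[20745]         # cosine similarity per feature
print(torch.topk(sims, k=6))                  # feature 20745 itself tops the list at 1.0
```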
Configuration

SAE:                  jbloom/GPT2-Small-SAEs-Reformatted/blocks.10.hook_resid_pre
Prompts (Dashboard):  24,576 prompts, 128 tokens each
Dataset (Dashboard):  Skylion007/openwebtext
Features:             24,576
Data Type:            torch.float32
Hook Point:           blocks.10.hook_resid_pre
Hook Point Layer:     10
Architecture:         standard
Context Size:         128
Dataset:              Skylion007/openwebtext
Activation Function:  relu
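A minimal sketch of loading this SAE with the SAE Lens library. The release id "gpt2-small-res-jb" is an assumption (only the HuggingFace path and hook point above are given on this page), and the return signature of from_pretrained has varied across SAE Lens versions:

```python
from sae_lens import SAE

# "gpt2-small-res-jb" is an assumed SAE Lens release id for these weights;
# the sae_id is the hook point from the configuration table above. Some SAE
# Lens versions return the SAE alone rather than a 3-tuple.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",
    sae_id="blocks.10.hook_resid_pre",
    device="cpu",
)

print(sae.cfg.d_sae)      # expect 24576 features
print(sae.cfg.hook_name)  # expect "blocks.10.hook_resid_pre"
```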

Negative Logits (tokens quoted to preserve leading spaces)
    "not"           -0.69
    "NOT"           -0.67
    "nce"           -0.67
    "ilty"          -0.65
    " BUT"          -0.61
    " NOT"          -0.60
    " ineligible"   -0.60
    " no"           -0.59
    " Illegal"      -0.59
    "avier"         -0.58

Positive Logits
    " focus"           0.83
    "ocused"           0.83
    " focuses"         0.82
    " focusing"        0.80
    " focused"         0.79
    "Instead"          0.79
    " concentrate"     0.79
    " foc"             0.78
    " Instead"         0.75
    " concentrating"   0.73
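These logit weights are consistent with projecting the feature's decoder direction through GPT-2's unembedding matrix; the page does not show the exact computation, so this direct projection is an assumed reconstruction. A minimal sketch with TransformerLens, reusing `sae` from the loading sketch above:

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT2-Small

# Direct logit attribution for feature 20745: project its decoder direction
# through the unembedding. Values may differ slightly from the dashboard's.
direction = sae.W_dec[20745]           # shape (d_model,)
logit_effects = direction @ model.W_U  # shape (d_vocab,)

top = torch.topk(logit_effects, 10).indices
bottom = torch.topk(-logit_effects, 10).indices
print([model.to_single_str_token(i.item()) for i in top])     # "focus"-family tokens
print([model.to_single_str_token(i.item()) for i in bottom])  # negation tokens
```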
Activation Density: 0.156%

No known activation examples are stored for this feature.
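Activation density is the fraction of dataset tokens on which the feature fires (activation greater than zero). A minimal sketch of estimating it, reusing `model` and `sae` from the sketches above:

```python
import torch

@torch.no_grad()
def activation_density(texts, feature_idx=20745,
                       hook="blocks.10.hook_resid_pre"):
    """Fraction of tokens whose SAE feature activation is positive."""
    fired, total = 0, 0
    for text in texts:
        # Cache the residual stream at the SAE's hook point, then encode it.
        _, cache = model.run_with_cache(text, names_filter=hook)
        acts = sae.encode(cache[hook])  # (batch, seq, d_sae)
        fired += (acts[..., feature_idx] > 0).sum().item()
        total += acts[..., feature_idx].numel()
    return fired / total

# Over the full dashboard corpus (24,576 prompts of 128 tokens each) this
# should land near the 0.156% reported above.
```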