gpt2-small · 0-att_128k-oai
OpenAI Sparse Autoencoder for GPT2-Small (v5, 2024) · Source set gpt2sm-oai-2024 · Attention Out - 128k · Layer 0

    Configuration

jbloom/GPT2-Small-OAI-v5-128k-attn-out-SAEs/v5_128k_layer_0

- Features: 131,072
- Data Type: torch.float32
- Hook Name: blocks.0.hook_attn_out
- Hook Layer: 0
- Architecture: standard
- Context Size: 64
- Dataset: Skylion007/openwebtext
- Activation Function: topk
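To make the "standard" architecture and "topk" activation concrete, here is a minimal NumPy sketch of the usual SAE encoder formulation: a linear encoder followed by keeping only the k largest latents per example and zeroing the rest. Names, dimensions, and the value of k below are illustrative assumptions, not Neuronpedia's or OpenAI's actual code; the real SAE maps GPT2-small's 768-dimensional attention output into the 131,072 latents listed above.

```python
import numpy as np

def topk_sae_encode(x, W_enc, b_enc, k):
    """Encode activations with a linear ('standard') SAE encoder, then
    apply a topk activation: keep the k largest pre-activations per row
    and zero out everything else."""
    pre = x @ W_enc + b_enc                              # linear encoder
    idx = np.argpartition(pre, -k, axis=-1)[..., -k:]    # indices of the k largest latents
    z = np.zeros_like(pre)
    np.put_along_axis(z, idx, np.take_along_axis(pre, idx, axis=-1), axis=-1)
    return z

# Toy dimensions (8 -> 32) stand in for the real 768 -> 131,072 mapping.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))      # batch of hook_attn_out activations
W = rng.normal(size=(8, 32))     # hypothetical encoder weights
b = np.zeros(32)
z = topk_sae_encode(x, W, b, k=4)
print((z != 0).sum(axis=-1))     # each row has exactly k=4 active latents
```

The topk activation enforces a fixed per-token sparsity budget directly, which is why no separate sparsity penalty appears in this configuration.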

