    Neuronpedia · GPT2-Small · Attention Out · 2-ATT-KK · Feature 21704
    Under Peer Review · Attention SAE Research Paper
    Explanations

    references to Prime Minister Narendra Modi and his government

    oai_token-act-pair · gpt-4o-mini · Triggered by @bot
    Configuration
    Checkpoint: ckkissane/attn-saes-gpt2-small-all-layers/gpt2-small_L2_Hcat_z_lr1.20e-03_l11.00e+00_ds24576_bs4096_dc1.00e-06_rsanthropic_rie25000_nr4_v4.pt
    Prompts (Dashboard): 36,864 prompts, 128 tokens each
    Dataset (Dashboard): Skylion007/openwebtext
    Features: 24,576
    Data Type: float32
    Hook Name: blocks.2.attn.hook_z
    Hook Layer: 2
    Architecture: standard
    Context Size: 128
    Dataset: Skylion007/openwebtext
    Activation Function: relu
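
    The configuration above (hook blocks.2.attn.hook_z, 24,576 features, ReLU, context size 128) is enough to sketch how this feature's activations could be recomputed. The snippet below is a minimal illustration, not the Neuronpedia pipeline: it assumes a standard SAE with parameters named W_enc, b_enc, b_dec (the actual names inside the checkpoint may differ), and the prompt is just an example tied to this feature's topic.

        # Minimal sketch: recompute feature 21704's activations on one prompt,
        # assuming a standard ReLU SAE over the concatenated attention output z.
        import torch
        from transformer_lens import HookedTransformer

        model = HookedTransformer.from_pretrained("gpt2")  # GPT2-Small
        tokens = model.to_tokens("Prime Minister Narendra Modi addressed the Lok Sabha.")

        _, cache = model.run_with_cache(tokens)
        z = cache["blocks.2.attn.hook_z"]        # [batch, seq, n_heads=12, d_head=64]
        z_cat = z.reshape(*z.shape[:2], -1)      # concatenate heads -> d_in = 768

        # Hypothetical parameter shapes matching the config; in practice load the
        # real values from the checkpoint listed above.
        d_in, d_sae = 768, 24576
        W_enc = torch.zeros(d_in, d_sae)
        b_enc = torch.zeros(d_sae)
        b_dec = torch.zeros(d_in)

        feature_acts = torch.relu((z_cat - b_dec) @ W_enc + b_enc)
        print(feature_acts[0, :, 21704])         # per-token activation of feature 21704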


    Head Attribution Weights
    Head:    0     1     2     3     4     5     6     7     8     9     10    11
    Weight:  0.05  0.07  0.19  0.06  0.04  0.07  0.05  0.06  0.04  0.18  0.09  0.04
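
    These per-head weights sum to roughly 1 and indicate how strongly each of layer 2's twelve attention heads drives this feature (heads 2 and 9 dominate). The page does not state how they are computed; one plausible reading, sketched below, is each head's share of the feature's decoder norm within its 64-dimensional slice of the concatenated z space. Treat the function and its argument names as hypothetical.

        # Hypothetical sketch: per-head attribution as each head's share of the
        # feature's decoder-vector norm (an assumption, not taken from this page).
        import torch

        def head_attribution(W_dec: torch.Tensor, feature_idx: int = 21704,
                             n_heads: int = 12, d_head: int = 64) -> torch.Tensor:
            dec = W_dec[feature_idx].reshape(n_heads, d_head)  # [12, 64]
            norms = dec.norm(dim=-1)
            return norms / norms.sum()  # fractions over heads, summing to 1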
    Negative Logits
    Token            Logit
    " apiece"        -1.30
    " tips"          -1.19
    "etheless"       -1.12
    " Radar"         -1.04
    " Burnett"       -1.03
    " pend"          -1.03
    " Cutter"        -1.03
    " leftover"      -1.02
    " assignments"   -1.01
    " contrace"      -1.01
    Positive Logits
    Token            Logit
    " Himself"       1.61
    "�"              1.30
    "ć"              1.30
    " himself"       1.30
    "eem"            1.26
    "iband"          1.20
    "isan"           1.19
    " Sabha"         1.19
    "�"              1.17
    "ariat"          1.16
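
    The token lists above are consistent with a standard "logit lens" readout. The page does not document the computation, but such lists are typically obtained by mapping the feature's decoder direction back to the residual stream (through layer 2's W_O, since this SAE reads the concatenated hook_z) and projecting it through GPT-2's unembedding. The sketch below is an assumption about that procedure; all tensor names are hypothetical. The "�" entries are byte-level GPT-2 tokens that do not render as standalone characters.

        # Hypothetical "logit lens" sketch for the positive/negative logit lists
        # (an assumed procedure, not documented on this page).
        import torch

        def top_and_bottom_logits(W_dec, W_O_flat, W_U, feature_idx=21704, k=10):
            # W_dec: [d_sae, n_heads*d_head], W_O_flat: [n_heads*d_head, d_model],
            # W_U: [d_model, d_vocab]
            resid_dir = W_dec[feature_idx] @ W_O_flat   # direction in residual stream
            vocab_logits = resid_dir @ W_U              # effect on each vocab token
            top = torch.topk(vocab_logits, k)           # positive logits
            bottom = torch.topk(-vocab_logits, k)       # negative logits
            return top.indices, bottom.indices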
    Activation Density: 0.023%

    No Known Activations