    Neuronpedia

GPT2-Small · Attention Out · 10-ATT-KK · Feature 14245
Under Peer Review · Attention SAE Research Paper
    Explanations

    phrases that indicate contradictions or undermine previous statements

oai_token-act-pair · gpt-4o-mini · triggered by @bot
Top Features by Cosine Similarity (vs. GPT2-SMALL @ 10-att-kk)
Configuration
Source: ckkissane/attn-saes-gpt2-small-all-layers/gpt2-small_L10_Hcat_z_lr1.20e-03_l11.30e+00_ds24576_bs4096_dc1.00e-05_rsanthropic_rie25000_nr4_v9.pt
Prompts (Dashboard): 36,864 prompts, 128 tokens each
Dataset (Dashboard): Skylion007/openwebtext
Features: 24,576
Data Type: float32
Hook Name: blocks.10.attn.hook_z
Hook Layer: 10
Architecture: standard
Context Size: 128
Dataset: Skylion007/openwebtext
Activation Function: relu
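
The "standard" architecture named above is the usual single-hidden-layer sparse autoencoder: a ReLU encoder into 24,576 features and a linear decoder back to the 768-dimensional concatenated head output read from blocks.10.attn.hook_z. A minimal PyTorch sketch, assuming the Anthropic-style pre-encoder bias subtraction suggested by the rsanthropic tag in the checkpoint name (class and variable names here are illustrative, not the repository's API):

```python
import torch
import torch.nn as nn

class StandardSAE(nn.Module):
    """Sketch of a 'standard' SAE matching the configuration above.

    d_in = 768: GPT2-small's concatenated attention head outputs
    (12 heads x 64 dims each) at blocks.10.attn.hook_z.
    d_sae = 24,576 features, ReLU activation, float32 weights.
    """

    def __init__(self, d_in: int = 768, d_sae: int = 24_576):
        super().__init__()
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)

    def forward(self, x: torch.Tensor):
        # Anthropic-style formulation: subtract the decoder bias
        # before encoding, add it back after decoding.
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts
```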

Head Attribution Weights
Head 0: 0.16    Head 1: 0.22    Head 2: 0.08    Head 3: 0.09
Head 4: 0.04    Head 5: 0.02    Head 6: 0.06    Head 7: 0.09
Head 8: 0.02    Head 9: 0.04    Head 10: 0.09   Head 11: 0.03
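
A plausible reading of the weights above: split feature 14245's decoder vector into twelve 64-dimensional head slices and take each slice's share of the total decoder norm, so head 1 (0.22) contributes most to this feature. A hedged sketch; the norm-fraction definition is an assumption, and head_attribution is a hypothetical helper:

```python
import torch

def head_attribution(W_dec: torch.Tensor, feature: int,
                     n_heads: int = 12, d_head: int = 64) -> torch.Tensor:
    """Share of a feature's decoder norm carried by each attention head.

    W_dec has shape (d_sae, n_heads * d_head). Because the hook_z input
    is the concatenation of all head outputs, the slice
    [i * d_head : (i + 1) * d_head] belongs to head i. Normalizing the
    per-slice L2 norms to sum to 1 yields weights in the form shown
    above (Head 0: 0.16, Head 1: 0.22, ...).
    """
    dec = W_dec[feature].reshape(n_heads, d_head)  # (12, 64)
    norms = dec.norm(dim=-1)                       # per-head L2 norm
    return norms / norms.sum()

# Hypothetical usage, assuming a loaded checkpoint with a 'W_dec' key:
# weights = head_attribution(state_dict["W_dec"], feature=14245)
```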
Negative Logits (quotes preserve tokens' leading spaces)
' darts'    -2.74
'�'         -2.64
' Bom'      -2.62
' Afric'    -2.54
'ERSON'     -2.44
'hedon'     -2.41
' embell'   -2.41
' Tup'      -2.40
'COR'       -2.39
' greens'   -2.29
Positive Logits
' Holy'       8.04
'Holy'        7.37
'holy'        6.23
' holy'       5.96
' Sacred'     3.88
'Pope'        3.78
' Catholics'  3.66
' Vatican'    3.66
'Catholic'    3.62
' Catholic'   3.56
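
The two tables show the tokens this feature most promotes (' Holy', ' Catholic', ' Vatican') and most suppresses when it fires. For a hook_z SAE, a direct-logit-attribution reading would map the feature's decoder direction through each head's output matrix W_O into the residual stream and then through the unembedding W_U; a sketch assuming TransformerLens-style weight shapes (the function name is hypothetical):

```python
import torch

def feature_logit_effects(dec_vec: torch.Tensor, W_O: torch.Tensor,
                          W_U: torch.Tensor, k: int = 10):
    """Top promoted / suppressed logits for one hook_z SAE feature.

    dec_vec: (n_heads * d_head,) decoder row for the feature.
    W_O:     (n_heads, d_head, d_model) per-head output projection.
    W_U:     (d_model, d_vocab) unembedding matrix.
    """
    n_heads, d_head, _ = W_O.shape
    z_dir = dec_vec.reshape(n_heads, d_head)
    # Map the z-space direction into the residual stream, then to logits.
    resid_dir = torch.einsum("hd,hdm->m", z_dir, W_O)
    logits = resid_dir @ W_U
    top = logits.topk(k)                   # e.g. ' Holy', 'Holy', ...
    bottom = logits.topk(k, largest=False) # e.g. ' darts', ...
    return top, bottom
```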
Activation Density: 0.018%

    No Known Activations
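
Activation density here is presumably the fraction of tokens in the dashboard sample (36,864 prompts of 128 tokens each) on which the feature's activation is nonzero; a sketch under that assumption, with acts_iter standing in for whatever batched-activation pipeline produced the dashboard:

```python
def activation_density(acts_iter, feature: int) -> float:
    """Fraction of tokens on which the feature fires (activation > 0).

    acts_iter yields SAE activation tensors of shape (n_tokens, d_sae),
    e.g. batches covering the 36,864 x 128-token dashboard sample.
    """
    fired, total = 0, 0
    for acts in acts_iter:
        fired += (acts[:, feature] > 0).sum().item()
        total += acts.shape[0]
    return fired / total  # ~0.00018, i.e. 0.018%, for this feature
```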