
    Neuronpedia

    GPT2-Small · Attention Out · 9-ATT-KK · Feature 10452
    Under Peer Review · Attention SAE Research Paper
    Explanations

    references to power dynamics in fantasy settings

    oai_token-act-pair · gpt-4o-mini · triggered by @bot
    Top Features by Cosine Similarity (compared with GPT2-SMALL @ 9-att-kk)
    Configuration
    Source: ckkissane/attn-saes-gpt2-small-all-layers/gpt2-small_L9_Hcat_z_lr1.20e-03_l11.20e+00_ds24576_bs4096_dc1.00e-06_rsanthropic_rie25000_nr4_v9.pt
    Prompts (Dashboard): 36,864 prompts, 128 tokens each
    Dataset (Dashboard): Skylion007/openwebtext
    Features: 24,576
    Data Type: float32
    Hook Name: blocks.9.attn.hook_z
    Hook Layer: 9
    Architecture: standard
    Context Size: 128
    Dataset: Skylion007/openwebtext
    Activation Function: relu
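    The configuration above pins down the shape of this SAE: a "standard" encoder/decoder with a ReLU activation and 24,576 features, reading the layer-9 attention output hook. As a rough illustration only (the actual weights live in the checkpoint named above), here is a minimal PyTorch sketch of what that architecture typically looks like; the 768-dim input and the class name are assumptions, based on GPT-2 small's 12 heads × 64 head dims being concatenated at hook_z, as the checkpoint's "Hcat_z" naming suggests.

```python
import torch
import torch.nn as nn


class StandardSAE(nn.Module):
    """Minimal sketch of a 'standard' SAE with a ReLU activation, sized to
    this page's configuration (24,576 features). The 768-dim input assumes
    GPT-2 small's blocks.9.attn.hook_z with all 12 heads concatenated
    (12 heads x 64 dims). Weights would be loaded from the checkpoint above."""

    def __init__(self, d_in: int = 768, d_sae: int = 24576):
        super().__init__()
        self.W_enc = nn.Parameter(torch.zeros(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.zeros(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def encode(self, z: torch.Tensor) -> torch.Tensor:
        # Feature activations: ReLU over an affine map of the (centered) input.
        return torch.relu((z - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, acts: torch.Tensor) -> torch.Tensor:
        # Reconstruction: weighted sum of decoder directions plus a bias.
        return acts @ self.W_dec + self.b_dec

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(z))
```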
    Head Attr Weights
    Head 0: 0.07   Head 1: 0.02   Head 2: 0.07   Head 3: 0.13
    Head 4: 0.05   Head 5: 0.08   Head 6: 0.04   Head 7: 0.06
    Head 8: 0.32   Head 9: 0.03   Head 10: 0.03  Head 11: 0.03
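    Because this SAE reads the concatenation of all 12 heads' outputs, each feature's decoder vector splits into one 64-dim slice per head. One plausible way to produce per-head weights of roughly this form is to take each slice's share of the decoder vector's norm; this is a hedged sketch of that idea (not necessarily the exact computation behind the table above), reusing the StandardSAE sketch from earlier and this page's feature index 10452.

```python
def head_attr_weights(sae: StandardSAE, feature_idx: int = 10452,
                      n_heads: int = 12, d_head: int = 64) -> torch.Tensor:
    """Split the feature's decoder vector into one 64-dim slice per head and
    report each slice's share of the total L2 norm. Illustrative only; the
    dashboard's exact attribution method may differ."""
    d_vec = sae.W_dec[feature_idx]                    # shape: (n_heads * d_head,)
    per_head_norms = d_vec.view(n_heads, d_head).norm(dim=-1)
    return per_head_norms / per_head_norms.sum()      # here, head 8 dominates
```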
    Negative Logits
    " Zucker"      -3.21
    " Gutierrez"   -3.18
    " Toyota"      -3.14
    " Schumer"     -3.04
    "FBI"          -2.96
    " Einstein"    -2.94
    "Houston"      -2.93
    "Texas"        -2.93
    "AIDS"         -2.89
    "NBC"          -2.89
    Positive Logits
    " Elven"       6.06
    " Mages"       5.52
    " orcs"        5.38
    " dragons"     5.37
    " Dwar"        5.33
    " Wyr"         5.28
    " Forsaken"    5.18
    " mages"       5.16
    " Rune"        5.09
    " Druid"       5.08
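    The logit tables are the vocabulary-space view of the feature's output direction: for an attention-output SAE, the decoder vector lives in hook_z space, so it must pass through the heads' W_O matrices into the residual stream before being projected by the unembedding. The sketch below shows that path; it assumes a TransformerLens HookedTransformer for GPT-2 small, the function name is hypothetical, and final LayerNorm folding is ignored, so values will only approximate the table above.

```python
import torch
from transformer_lens import HookedTransformer


def feature_logit_weights(sae: StandardSAE, model: HookedTransformer,
                          layer: int = 9, feature_idx: int = 10452, k: int = 10):
    """Approximate the feature's top positive/negative logits: map the decoder
    direction from hook_z space into the residual stream via each head's W_O,
    then through the unembedding W_U. Final LayerNorm is ignored, so numbers
    will not exactly match the dashboard."""
    n_heads, d_head = model.cfg.n_heads, model.cfg.d_head
    d_vec = sae.W_dec[feature_idx].view(n_heads, d_head)         # per-head slices
    resid_dir = torch.einsum("hd,hdm->m", d_vec, model.W_O[layer])
    logit_weights = resid_dir @ model.W_U                         # (d_vocab,)
    top_pos = torch.topk(logit_weights, k)                        # e.g. " Elven", " Mages"
    top_neg = torch.topk(-logit_weights, k)                       # e.g. " Zucker", " Gutierrez"
    return top_pos, top_neg
```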
    Activation Density: 0.723%

    No Known Activations