
    Neuronpedia

    GPT2-Small · Attention Out · 5-ATT-KK · Feature 28672
    Under Peer Review · Attention SAE Research Paper
    Explanations

    punctuation marks and their frequency

    oai_token-act-pair · gpt-4o-mini · triggered by @bot
    Top Features by Cosine Similarity (comparing with GPT2-SMALL @ 5-att-kk)
    Configuration
    Source:               ckkissane/attn-saes-gpt2-small-all-layers/gpt2-small_L5_Hcat_z_lr1.20e-03_l11.00e+00_ds49152_bs4096_dc1.00e-06_rsanthropic_rie25000_nr4_v9.pt
    Prompts (Dashboard):  36,864 prompts, 128 tokens each
    Dataset (Dashboard):  Skylion007/openwebtext
    Features:             49,152
    Data Type:            float32
    Hook Name:            blocks.5.attn.hook_z
    Hook Layer:           5
    Architecture:         standard
    Context Size:         128
    Dataset:              Skylion007/openwebtext
    Activation Function:  relu
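
    The "standard" architecture with a ReLU activation function is the usual one-hidden-layer sparse autoencoder: an affine encoder followed by a ReLU produces 49,152 non-negative feature activations, and an affine decoder reconstructs the input. A minimal sketch in PyTorch, assuming the conventional parameterization (names such as W_enc and the pre-encoder bias subtraction are illustrative, not read from the checkpoint above):

        import torch
        import torch.nn as nn

        class StandardSAE(nn.Module):
            """Sketch of a 'standard' ReLU sparse autoencoder.

            Dimensions follow the configuration above: the input is the
            flattened hook_z of GPT2-small layer 5 (n_heads * d_head =
            12 * 64 = 768), expanded into 49,152 features.
            """

            def __init__(self, d_in: int = 768, d_sae: int = 49152):
                super().__init__()
                self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.01)
                self.b_enc = nn.Parameter(torch.zeros(d_sae))
                self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.01)
                self.b_dec = nn.Parameter(torch.zeros(d_in))

            def forward(self, x: torch.Tensor):
                # Encode: center on the decoder bias, project, shift, ReLU.
                acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
                # Decode: map the sparse activations back to the input space.
                recon = acts @ self.W_dec + self.b_dec
                return acts, recon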

    Head Attr Weights
    Head:    0     1     2     3     4     5     6     7     8     9     10    11
    Weight:  0.21  0.13  0.07  0.04  0.04  0.12  0.05  0.03  0.06  0.08  0.07  0.06
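
    Because this SAE reads the concatenated z vectors of all 12 heads (the "Hcat_z" in the source filename), each feature's weights split into one 64-dimensional slice per head, and the relative norm of each slice indicates how strongly the feature is tied to that head. The weights above sum to roughly 1, consistent with such a normalization. A hedged sketch of that computation (the exact formula Neuronpedia uses is an assumption):

        import torch

        def head_attr_weights(w_dec_feature: torch.Tensor,
                              n_heads: int = 12, d_head: int = 64) -> torch.Tensor:
            """Per-head attribution for one attention-SAE feature.

            w_dec_feature: the feature's (n_heads * d_head,) decoder row,
            living in the concatenated hook_z space.
            """
            slices = w_dec_feature.view(n_heads, d_head)  # one slice per head
            norms = slices.norm(dim=-1)                   # (12,)
            return norms / norms.sum()                    # normalized, sums to 1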
    Negative Logits
    Token         Logit
    mos           -1.64
    peria         -1.61
    alus          -1.60
    watching      -1.58
    maximum       -1.56
    module        -1.53
    registered    -1.53
    emp           -1.53
    did           -1.52
    party         -1.50
    Positive Logits
    Token         Logit
     rooft        1.60
     XVI          1.60
     ��           1.56
     XI           1.51
     lineup       1.50
     17           1.50
     fac          1.49
     11           1.49
     IX           1.49
     fray         1.48
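
    The logit tables show which vocabulary tokens this feature most promotes and suppresses when its decoder direction is projected onto the unembedding. For an attention-output SAE the decoder direction lives in hook_z space (before W_O), so it must first pass through each head's output matrix. A sketch with TransformerLens, ignoring LayerNorm (Neuronpedia's exact pipeline may differ):

        import torch
        from transformer_lens import HookedTransformer

        def feature_logit_lens(w_dec_feature: torch.Tensor,
                               layer: int = 5, k: int = 10):
            """Top positive/negative logits for one attention-SAE feature."""
            model = HookedTransformer.from_pretrained("gpt2")
            # One (d_head,) slice of the decoder direction per head.
            z_dir = w_dec_feature.view(model.cfg.n_heads, model.cfg.d_head)
            # Map hook_z space into the residual stream via each head's W_O.
            resid_dir = torch.einsum("hd,hdm->m", z_dir, model.W_O[layer])
            logits = resid_dir @ model.W_U  # (d_vocab,)
            pos = torch.topk(logits, k)
            neg = torch.topk(-logits, k)
            print("positive:", model.to_str_tokens(pos.indices))
            print("negative:", model.to_str_tokens(neg.indices))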
    Activations Density: 0.000%

    No Known Activations (consistent with the 0.000% density: this feature did not fire on any sampled dashboard prompt).
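
    Density here is the fraction of sampled token positions on which the feature's activation is positive, computed over the dashboard sample above (36,864 prompts × 128 tokens). A minimal sketch (the exact thresholding and rounding are assumptions):

        import torch

        def activation_density(acts: torch.Tensor) -> float:
            """Percentage of token positions where the feature fires (> 0).

            acts: a 1-D tensor of this feature's activations over all
            sampled dashboard tokens.
            """
            return (acts > 0).float().mean().item() * 100.0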