Neuronpedia
Joseph Bloom · Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
GPT2-Small · Residual Stream · 7-RES-JB · Feature 23392
    Explanations

    mentions of time, especially emphasizing the concept of not wasting it

    oai_token-act-pair · gpt-3.5-turbo

    references to the concept of wasting or investing time

oai_token-act-pair · gpt-4o-mini (triggered by @bot)
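Both explanations can also be pulled programmatically from Neuronpedia. A minimal sketch in Python; the endpoint path and the JSON field names here are assumptions about Neuronpedia's public API shape, not something this page confirms:

import requests

# Hypothetical sketch: fetch this feature's record from Neuronpedia.
# The endpoint path and response field names are assumptions.
MODEL_ID, SAE_ID, FEATURE = "gpt2-small", "7-res-jb", "23392"
url = f"https://www.neuronpedia.org/api/feature/{MODEL_ID}/{SAE_ID}/{FEATURE}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
feature = resp.json()

# Print each auto-interp explanation with the model that generated it
# (the "explanations" / "description" / "explanationModelName" keys are assumed).
for exp in feature.get("explanations", []):
    print(exp.get("explanationModelName"), "->", exp.get("description"))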
Top Features by Cosine Similarity (compared with GPT2-SMALL @ 7-res-jb)
Configuration
Source: jbloom/GPT2-Small-SAEs-Reformatted/blocks.7.hook_resid_pre
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): Skylion007/openwebtext
Features: 24,576
Data Type: torch.float32
Hook Point: blocks.7.hook_resid_pre
Architecture: standard
Context Size: 128
Dataset: Skylion007/openwebtext
Hook Point Layer: 7
Activation Function: relu
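This configuration corresponds to a publicly released SAE that can be loaded directly. A minimal sketch using the open-source sae_lens library; the release name "gpt2-small-res-jb", the three-tuple return value, and the cfg_dict key names are assumptions tied to the v4-era sae_lens API rather than anything stated on this page:

from sae_lens import SAE

# Load Joseph Bloom's layer-7 residual-stream SAE for GPT2-small.
# The release/sae_id names follow the sae_lens pretrained directory (assumed).
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",
    sae_id="blocks.7.hook_resid_pre",
    device="cpu",
)

# Cross-check against the configuration listed above
# (cfg_dict key names are assumptions from sae_lens's SAEConfig).
assert cfg_dict["d_sae"] == 24576                          # Features
assert cfg_dict["context_size"] == 128                     # Context Size
assert cfg_dict["hook_name"] == "blocks.7.hook_resid_pre"  # Hook Point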

Negative Logits
'ramid'      -0.93
'ards'       -0.77
'artisan'    -0.76
'LCS'        -0.74
'lar'        -0.70
'liga'       -0.69
'mson'       -0.69
'ortium'     -0.69
'riad'       -0.68
'assis'      -0.66

Positive Logits
'frames'      1.03
'zone'        0.91
'frame'       0.90
' elapsed'    0.81
' consuming'  0.81
' periods'    0.76
' Bucc'       0.74
' frame'      0.73
' continuum'  0.72
'consuming'   0.71
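The two lists above are the standard direct-logit-attribution readout: take the feature's decoder direction, project it through GPT2-small's unembedding matrix, and report the vocabulary tokens with the most positive and most negative weights. A minimal sketch with the TransformerLens library, reusing the sae object from the previous sketch; this is illustrative, not the dashboard's exact code:

import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT2-small
FEATURE = 23392

# W_dec[FEATURE] is the feature's decoder direction, shape [d_model];
# model.W_U is the unembedding matrix, shape [d_model, d_vocab].
logit_weights = sae.W_dec[FEATURE] @ model.W_U  # shape [d_vocab]

pos_vals, pos_ids = torch.topk(logit_weights, 10)
neg_vals, neg_ids = torch.topk(-logit_weights, 10)

print("Positive logits:")
for val, tok_id in zip(pos_vals.tolist(), pos_ids.tolist()):
    print(f"  {model.tokenizer.decode([tok_id])!r}  {val:+.2f}")

print("Negative logits:")
for val, tok_id in zip(neg_vals.tolist(), neg_ids.tolist()):
    print(f"  {model.tokenizer.decode([tok_id])!r}  {-val:+.2f}")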
Activation Density: 0.059%

    No Known Activations
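Activation density is the fraction of token positions on which the feature fires (activation strictly greater than zero), measured here over 24,576 openwebtext prompts of 128 tokens. A minimal sketch of how it could be estimated, continuing from the model and sae objects in the sketches above; the sampling and tokenization details are illustrative, not the dashboard's exact pipeline:

import torch
from datasets import load_dataset

FEATURE = 23392
HOOK = "blocks.7.hook_resid_pre"

# Stream a small sample of the same dataset (the dashboard uses 24,576 prompts).
ds = load_dataset("Skylion007/openwebtext", split="train", streaming=True)
texts = [row["text"] for _, row in zip(range(64), ds)]
tokens = model.to_tokens(texts)[:, :128]  # match the 128-token context size

with torch.no_grad():
    _, cache = model.run_with_cache(tokens, names_filter=HOOK)
    feature_acts = sae.encode(cache[HOOK])  # shape [batch, seq, d_sae]

# Fraction of token positions where this feature is active.
density = (feature_acts[..., FEATURE] > 0).float().mean().item()
print(f"estimated density: {density:.3%}")  # the page reports 0.059%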