Neuronpedia · OpenMOSS · Llama Scope: SAEs for Llama-3.1-8B
Llama3.1-8B (Base) · MLP · 22-LLAMASCOPE-MLP-32K · Feature 32745

Explanations

attributes related to size and strength in various contexts

oai_token-act-pair · gpt-4o-mini · triggered by @bot

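The explanation and the tables below can also be pulled programmatically. A minimal sketch using the Neuronpedia API follows; the endpoint pattern, the lowercase model/source identifiers, and the response field names are assumptions inferred from this page's path, so check the Neuronpedia API docs before relying on them.

```python
# Minimal sketch (not an official client): fetch this feature's JSON record
# from the Neuronpedia API. The endpoint pattern, the model/source IDs, and
# the field names read from the response are assumptions, not documented facts.
import requests

MODEL_ID = "llama3.1-8b"              # assumed lowercase form of "Llama3.1-8B (Base)"
SOURCE_ID = "22-llamascope-mlp-32k"   # assumed lowercase form of "22-LLAMASCOPE-MLP-32K"
FEATURE_INDEX = 32745

url = f"https://www.neuronpedia.org/api/feature/{MODEL_ID}/{SOURCE_ID}/{FEATURE_INDEX}"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
feature = resp.json()

# Field name is illustrative; inspect feature.keys() for the actual schema.
print(feature.get("explanations"))
```
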
Configuration
fnlp/Llama3_1-8B-Base-LXM-8x/Llama3_1-8B-Base-L22M-8x
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): cerebras/SlimPajama-627B
Features: 32,768
Data Type: bfloat16
Hook Name: blocks.22.hook_mlp_out
Hook Layer: 22
Architecture: jumprelu
Context Size: 1,024
Dataset: cerebras/SlimPajama-627B
Activation Function: relu

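The configuration above describes a 32,768-feature (8x expansion) JumpReLU SAE attached to blocks.22.hook_mlp_out of Llama-3.1-8B, whose residual width is 4096. Below is a minimal sketch of just the JumpReLU encode/decode computation, with randomly initialized placeholder weights; the parameter names and threshold value are illustrative and are not the ones stored in the fnlp/Llama3_1-8B-Base-LXM-8x checkpoint.

```python
# Sketch of a JumpReLU SAE forward pass matching the configuration above
# (d_model = 4096, d_sae = 32,768). Weights and threshold are placeholders.
import torch

d_model, d_sae = 4096, 32_768

W_enc = torch.randn(d_model, d_sae) * 0.01
b_enc = torch.zeros(d_sae)
theta = torch.full((d_sae,), 0.05)           # per-feature jump threshold (illustrative value)
W_dec = torch.randn(d_sae, d_model) * 0.01
b_dec = torch.zeros(d_model)

def jumprelu_sae(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Encode/decode a batch of MLP-output activations x of shape (..., d_model)."""
    pre = x @ W_enc + b_enc                  # pre-activations, shape (..., d_sae)
    acts = pre * (pre > theta)               # JumpReLU: keep values only above the threshold
    recon = acts @ W_dec + b_dec             # reconstruction, shape (..., d_model)
    return acts, recon

acts, recon = jumprelu_sae(torch.randn(2, d_model))
print(acts.shape, recon.shape)               # (2, 32768) and (2, 4096)
```
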
Negative Logits
" Petite"   -0.17
"cko"       -0.16
"ptune"     -0.15
"hazi"      -0.15
"anvas"     -0.15
" Little"   -0.15
"pane"      -0.14
" OG"       -0.14
"leri"      -0.14
"INTR"      -0.14

Positive Logits
" strong"   0.45
" big"      0.44
" large"    0.41
" tall"     0.35
" loud"     0.34
"big"       0.34
"large"     0.31
" wide"     0.30
" fast"     0.29
".big"      0.27

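Logit tables like these are commonly produced by projecting a feature's decoder direction through the model's unembedding matrix and taking the highest- and lowest-scoring vocabulary tokens. The sketch below does this with TransformerLens and a placeholder decoder row; loading the real row 32745 of the Llama Scope decoder from fnlp/Llama3_1-8B-Base-LXM-8x is left out, and the exact procedure Neuronpedia uses may differ.

```python
# Hedged sketch: decoder-direction "logit lens" for a single SAE feature.
# The decoder row is a random placeholder; substitute the real row from the
# Llama Scope checkpoint to approximate the tables above.
import torch
from transformer_lens import HookedTransformer

# Model name assumed to be supported by TransformerLens; requires HF access to Llama 3.1.
model = HookedTransformer.from_pretrained("meta-llama/Llama-3.1-8B")

w_dec_feature = torch.randn(model.cfg.d_model)   # placeholder for decoder row of feature 32745

logits = w_dec_feature @ model.W_U               # shape (d_vocab,)
top = torch.topk(logits, k=10)
bottom = torch.topk(logits, k=10, largest=False)

print("positive:", [model.tokenizer.decode([i]) for i in top.indices.tolist()])
print("negative:", [model.tokenizer.decode([i]) for i in bottom.indices.tolist()])
```
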
Activations Density: 0.863%

No Known Activations