
    Neuronpedia

    OpenMOSS · Llama Scope: SAEs for Llama-3.1-8B · Llama3.1-8B (Base) · Residual Stream · 25-LLAMASCOPE-RES-32K · Feature 10716
    Explanations

    references to historical figures or entities associated with nobility and regal elements

    oai_token-act-pair · gpt-4o-mini · Triggered by @bot
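    The explanation above was produced by token-activation-pair auto-interp: tokens on which the feature fires strongly are shown to gpt-4o-mini along with their activation values, and the model is asked to describe the common pattern. Below is a minimal sketch of that flow using the OpenAI Python client; the prompt wording and the example pairs are illustrative assumptions, not Neuronpedia's exact pipeline.

```python
# Sketch of token-activation-pair auto-interp (illustrative, not Neuronpedia's pipeline).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical (token, activation) pairs standing in for the feature's top activations.
pairs = [(" King", 8.1), (" monarch", 7.4), (" Queen", 6.9), (" emperor", 6.2)]
pair_text = "\n".join(f"{tok!r}: {act:.2f}" for tok, act in pairs)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You explain what concept a neural-network feature responds to."},
        {"role": "user", "content": f"Token / activation pairs:\n{pair_text}\n\nGive a one-sentence explanation."},
    ],
)
print(resp.choices[0].message.content)
```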
    Top Features by Cosine Similarity · Comparing with LLAMA3.1-8B @ 25-llamascope-res-32k
    Configuration
    fnlp/Llama3_1-8B-Base-LXR-8x/Llama3_1-8B-Base-L25R-8x
    Prompts (Dashboard): 24,576 prompts, 128 tokens each
    Dataset (Dashboard): cerebras/SlimPajama-627B
    Features: 32,768
    Data Type: bfloat16
    Hook Name: blocks.25.hook_resid_post
    Hook Layer: 25
    Architecture: jumprelu
    Context Size: 1,024
    Dataset: cerebras/SlimPajama-627B
    Activation Function: relu
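    The settings above are enough to sketch the SAE's forward pass: a 32,768-feature JumpReLU autoencoder reading the residual stream at blocks.25.hook_resid_post. The PyTorch sketch below is a minimal, self-contained illustration consistent with this configuration; the parameter names and the random weights are placeholders, not the released fnlp checkpoint, and d_model = 4096 is the Llama-3.1-8B residual width.

```python
import torch

D_MODEL, D_SAE = 4096, 32_768   # Llama-3.1-8B residual width; 8x expansion per the config
FEATURE_IDX = 10716

# Placeholder parameters (random, float32 for simplicity; the released weights are bfloat16).
# The names W_enc / b_enc / W_dec / b_dec / threshold are assumptions for illustration.
W_enc = torch.randn(D_MODEL, D_SAE) / D_MODEL**0.5
b_enc = torch.zeros(D_SAE)
W_dec = torch.randn(D_SAE, D_MODEL) / D_SAE**0.5
b_dec = torch.zeros(D_MODEL)
threshold = torch.full((D_SAE,), 0.05)   # learned per-feature JumpReLU threshold


def encode(resid: torch.Tensor) -> torch.Tensor:
    """Residual-stream activations (..., d_model) -> feature activations (..., d_sae)."""
    pre = (resid - b_dec) @ W_enc + b_enc
    # JumpReLU: keep the pre-activation where it exceeds the per-feature threshold, zero it elsewhere.
    return torch.where(pre > threshold, pre, torch.zeros_like(pre))


def decode(feats: torch.Tensor) -> torch.Tensor:
    """Feature activations -> reconstructed residual-stream activations."""
    return feats @ W_dec + b_dec


# One 128-token prompt's worth of residual activations at blocks.25.hook_resid_post.
resid = torch.randn(1, 128, D_MODEL)
feats = encode(resid)
print("feature 10716 activation per token:", feats[0, :, FEATURE_IDX])
print("reconstruction shape:", decode(feats).shape)
```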
    Not in Any Lists

    No Comments

    Negative Logits
    ilerden        -0.14
    GetString      -0.13
    )))),          -0.13
     záv           -0.13
    /repos         -0.13
     borr          -0.13
    .gridColumn    -0.13
    _Record        -0.12
    escription     -0.12
     Gaul          -0.12
    Positive Logits
     king          0.66
     King          0.65
     royalty       0.63
     royal         0.58
     roy           0.57
     kings         0.56
    çİĭ            0.55
    King           0.54
     monarch       0.54
     queen         0.53
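    These lists reflect the feature's direct effect on the output vocabulary: its decoder direction is projected through the model's unembedding matrix, and the most positive and most negative entries are reported. A rough sketch of that computation, using random placeholder weights rather than the real checkpoints (128,256 is the Llama-3.1 vocabulary size):

```python
import torch

D_MODEL, D_SAE, D_VOCAB = 4096, 32_768, 128_256
FEATURE_IDX = 10716

W_dec = torch.randn(D_SAE, D_MODEL)   # placeholder for the SAE decoder weights
W_U = torch.randn(D_MODEL, D_VOCAB)   # placeholder for the model's unembedding matrix

# Direct effect of the feature on each output token's logit.
logit_effect = W_dec[FEATURE_IDX] @ W_U                   # shape (d_vocab,)
pos_vals, pos_ids = logit_effect.topk(10)                 # tokens like " king", " royalty" above
neg_vals, neg_ids = logit_effect.topk(10, largest=False)  # the negative-logit tokens
print(pos_ids.tolist(), pos_vals.tolist())
```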
    Activation Density: 0.318%
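    Activation density is the fraction of dashboard tokens on which the feature fires at all. A rough sketch of the statistic, assuming placeholder activations in place of an encode pass over the 24,576 dashboard prompts of 128 tokens each:

```python
import torch

# Placeholder per-token activations of feature 10716; in practice these would come
# from the encode pass sketched above, run over the dashboard prompts.
acts = torch.relu(torch.randn(256, 128) - 2.7)   # (n_prompts, n_tokens), mostly zero
density = (acts > 0).float().mean().item()
print(f"activation density: {density:.3%}")      # the dashboard reports 0.318% here
```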

    No Known Activations