© Neuronpedia 2026

    Neuronpedia

Gemma-3-27B-IT · 31-GEMMASCOPE-2-RES-65K · Feature 2062
    Explanations

the distinction between pre-training and fine-tuning in the context of large language models.
    oai_token-act-pair · gpt-4o-mini · triggered by @gersonkroiz

    multi-word phrases

    np_acts-logits-general · gemini-2.5-flash-lite
    Configuration
    google/gemma-scope-2-27b-it/resid_post/layer_31_width_65k_l0_medium
Prompts: 238,145 prompts, 512 tokens each
    Dataset: lmsys + oasst1


Negative Logits

    token             logit
     شمالی            0.27
     વખત              0.27
     hexadecimal      0.26
     vaginale         0.26
     beurre           0.26
     Federación       0.25
     asymptotically   0.25
    ։                 0.25
     deuxième         0.25
    谛                0.25
Positive Logits

    token             logit
     wors             0.24
     payer            0.23
     এমন              0.23
     engagement       0.22
     publicity        0.22
     personalizar     0.22
     careg            0.21
    ጆ                 0.21
     yêu              0.21
     sakta            0.21
Activation Density: 1.566%

    No Known Activations
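Activation density is the fraction of sampled token positions on which the feature fires (activation greater than zero); the page's 1.566% was measured over the prompt set listed in the configuration. A minimal sketch with random toy activations, assuming a ReLU-style nonlinearity (the array `acts` and its size are illustrative, not the real data):

```python
import numpy as np

# Toy activations: a negative-mean normal passed through ReLU gives a
# sparse vector, loosely mimicking an SAE feature's activations.
rng = np.random.default_rng(0)
n_tokens = 100_000
acts = np.maximum(rng.normal(loc=-2.0, size=n_tokens), 0.0)

# Density = share of positions where the feature is active.
density = (acts > 0).mean()
print(f"activation density: {density:.3%}")
```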