Neuronpedia
© Neuronpedia 2025
OpenMOSS · Llama Scope: SAEs for Llama-3.1-8B
Model: Llama3.1-8B (Base)
Sublayer: MLP
Source/SAE: 22-llamascope-mlp-32k
Feature Index: 32562
Explanations
"instances of dialogue and punctuation in text"
(oai_token-act-pair · gpt-4o-mini; triggered by @bot; no scores)
Configuration
Weights: fnlp/Llama3_1-8B-Base-LXM-8x/Llama3_1-8B-Base-L22M-8x
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): cerebras/SlimPajama-627B
Features: 32,768
Data Type: bfloat16
Hook Name: blocks.22.hook_mlp_out
Hook Layer: 22
Architecture: jumprelu
Context Size: 1,024
Dataset: cerebras/SlimPajama-627B
Activation Function: relu
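The configuration pins down the interface: the SAE reads the model's activations at blocks.22.hook_mlp_out and expands them into 32,768 features through a JumpReLU encoder. The following is a minimal sketch of that encoder step with illustrative names; it is not the actual Llama Scope code, which ships in the fnlp repos listed above.

```python
# Sketch of a JumpReLU SAE encoder step (hypothetical helper names).

def jumprelu(pre: float, theta: float) -> float:
    """JumpReLU: identity above the learned threshold theta, zero otherwise.
    With theta = 0 this reduces to the plain ReLU also listed above."""
    return pre if pre > theta else 0.0

def encode(x, W_enc, b_enc, theta):
    """Feature activations f_i = jumprelu(W_enc[i] . x + b_enc[i], theta[i]).

    x     : model activations captured at blocks.22.hook_mlp_out
    W_enc : one encoder row per SAE feature (32,768 rows in this SAE)
    """
    feats = []
    for row, b, t in zip(W_enc, b_enc, theta):
        pre = sum(w * xi for w, xi in zip(row, x)) + b
        feats.append(jumprelu(pre, t))
    return feats
```

For example, `encode([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, -1.5], [0.5, 0.0])` returns `[1.0, 0.5]`: the first pre-activation (1.0) clears its threshold of 0.5, the second (0.5) clears its threshold of 0.0.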
Embeds
IFrame
<iframe src="https://www.neuronpedia.org/llama3.1-8b/22-llamascope-mlp-32k/32562?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link
https://www.neuronpedia.org/llama3.1-8b/22-llamascope-mlp-32k/32562?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true
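The embed URL above is just the plain feature URL (model / source / index) plus boolean embed* query flags, one per dashboard panel. A small helper, hypothetical and not part of any official Neuronpedia client, could assemble it:

```python
from urllib.parse import urlencode

BASE = "https://www.neuronpedia.org"

def embed_url(model: str, source: str, index: int, **toggles: bool) -> str:
    """Build a Neuronpedia embed URL.

    Keyword toggles become query flags, e.g. embedexplanation=True
    becomes "embedexplanation=true"; embed=true is always included.
    """
    params = {"embed": "true", **{k: str(v).lower() for k, v in toggles.items()}}
    return f"{BASE}/{model}/{source}/{index}?{urlencode(params)}"
```

Calling `embed_url("llama3.1-8b", "22-llamascope-mlp-32k", 32562, embedexplanation=True, embedplots=True)` reproduces a prefix of the URL above.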
Negative Logits
  vie         -0.14
  .builders   -0.14
  xef         -0.14
  mania       -0.14
  审          -0.14
  noch        -0.13
  oker        -0.13
  iggers      -0.13
  uzz         -0.13
  410         -0.13

Positive Logits
  "            0.26
  '            0.21
  «            0.20
  "B           0.17
  “            0.17
  "            0.17
  `            0.16
  "S           0.15
  imens        0.15
  Ang          0.15
Activations
Density: 0.087%
No Known Activations
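A density of 0.087% means this feature fires on fewer than nine tokens per ten thousand. Assuming (unconfirmed) that density is measured over the same corpus as the dashboard prompts, 24,576 prompts of 128 tokens each, the absolute counts work out as:

```python
# Back-of-envelope token counts implied by the dashboard figures.
prompts = 24_576
tokens_per_prompt = 128
total_tokens = prompts * tokens_per_prompt     # tokens scanned for the dashboard
density = 0.087 / 100                          # "Density: 0.087%"
active_tokens = round(total_tokens * density)  # tokens with nonzero activation
```

That is roughly 2,700 firing tokens out of about 3.1 million scanned.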