Neuronpedia
© Neuronpedia 2025
OpenMOSS · Llama Scope: SAEs for Llama-3.1-8B
Model: Llama3.1-8B (Base)
Source/SAE: 22-llamascope-mlp-32k (MLP)
Index: 32505
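The model/source/index triple above uniquely identifies this feature, and Neuronpedia exposes feature data over a public API. A minimal sketch of fetching this feature's JSON record; the endpoint path is an assumption based on Neuronpedia's documented API, so verify it against the live API docs:

```python
import json
import urllib.request

MODEL = "llama3.1-8b"
SOURCE = "22-llamascope-mlp-32k"
INDEX = 32505

# Assumed endpoint shape -- check Neuronpedia's API docs before relying on it.
url = f"https://www.neuronpedia.org/api/feature/{MODEL}/{SOURCE}/{INDEX}"

def fetch_feature(feature_url: str) -> dict:
    """Download and parse the feature's JSON record (makes a network call)."""
    with urllib.request.urlopen(feature_url) as resp:
        return json.loads(resp.read())

print(url)
```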
Explanations
"the occurrence of the verb 'make' and its variations in various contexts"
(oai_token-act-pair · gpt-4o-mini, triggered by @bot; no scores yet)
Configuration
SAE repo: fnlp/Llama3_1-8B-Base-LXM-8x/Llama3_1-8B-Base-L22M-8x
Prompts (dashboard): 24,576 prompts, 128 tokens each
Dataset (dashboard): cerebras/SlimPajama-627B
Features: 32,768
Data type: bfloat16
Hook name: blocks.22.hook_mlp_out
Hook layer: 22
Architecture: jumprelu
Context size: 1,024
Dataset: cerebras/SlimPajama-627B
Activation function: relu
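The "jumprelu" architecture listed in the configuration replaces a plain ReLU with a JumpReLU, which zeroes any pre-activation that does not clear a (learned, per-feature) threshold. A minimal numpy sketch; the threshold values here are illustrative, not the trained ones:

```python
import numpy as np

def jumprelu(pre_acts: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """JumpReLU: keep each value only where it exceeds its threshold theta."""
    return pre_acts * (pre_acts > theta)

pre = np.array([-0.3, 0.1, 0.6, 2.0])
theta = np.full(4, 0.5)          # illustrative threshold, not the trained one
print(jumprelu(pre, theta))      # only 0.6 and 2.0 survive
```

Unlike ReLU, values between 0 and the threshold are dropped entirely rather than passed through, which encourages sparser feature activations.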
Embeds
IFrame:
<iframe src="https://www.neuronpedia.org/llama3.1-8b/22-llamascope-mlp-32k/32505?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link:
https://www.neuronpedia.org/llama3.1-8b/22-llamascope-mlp-32k/32505?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true
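The embed URL toggles each panel with a boolean query parameter. A small sketch that rebuilds the link programmatically, assuming the parameter names visible in the URL above are the complete set; flip any value to "false" to hide that panel:

```python
from urllib.parse import urlencode

BASE = "https://www.neuronpedia.org/llama3.1-8b/22-llamascope-mlp-32k/32505"

# Panel toggles taken from the embed link above.
params = {
    "embed": "true",
    "embedexplanation": "true",
    "embedplots": "true",
    "embedsteer": "true",
    "embedactivations": "true",
    "embedlink": "true",
    "embedtest": "true",
}
embed_url = f"{BASE}?{urlencode(params)}"
print(embed_url)
```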
Negative Logits
up        -0.28
up        -0.17
seau      -0.16
)prepare  -0.15
733       -0.15
-up       -0.15
up        -0.15
_up       -0.14
ันย       -0.14
Up        -0.14

Positive Logits
ends      0.18
eway      0.17
Due       0.17
ends      0.17
hus       0.16
-do       0.16
do        0.15
way       0.15
due       0.15
due       0.15
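Positive and negative logit lists like the ones above are typically produced by projecting the feature's SAE decoder direction through the model's unembedding matrix and taking the largest and smallest entries. A toy numpy sketch with a random vocabulary; the shapes, matrices, and values are illustrative, not the actual Llama weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 16, 100

W_U = rng.normal(size=(d_model, vocab))    # toy unembedding matrix
d_feat = rng.normal(size=d_model)          # toy SAE decoder direction for one feature

logit_effects = d_feat @ W_U               # effect of the feature on each vocab logit
top_pos = np.argsort(logit_effects)[-10:][::-1]   # most-promoted token ids
top_neg = np.argsort(logit_effects)[:10]          # most-suppressed token ids
print(top_pos, top_neg)
```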
Activations
Density: 0.176%
No known activations.
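The 0.176% density above means the feature fires on roughly 2 in 1,000 of the dashboard's token positions (24,576 prompts × 128 tokens each). Density is just the fraction of positions with a nonzero activation; a minimal sketch over toy activations:

```python
import numpy as np

def activation_density(acts: np.ndarray) -> float:
    """Fraction of token positions where the feature activation is nonzero."""
    return float((acts > 0).mean())

# Toy activations: 2 active positions out of 1,000.
acts = np.zeros(1000)
acts[[3, 500]] = 1.5
print(f"{activation_density(acts):.3%}")   # prints 0.200%
```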