© Neuronpedia 2026
OpenMOSS · Llama Scope: SAEs for Llama-3.1-8B

Model: Llama3.1-8B (Base)
Hook point: Residual Stream
Source/SAE: 16-llamascope-res-32k
Feature index: 5970
Explanations

phrases related to personal transformation and growth
  Explainer: oai_token-act-pair · gpt-4o-mini (triggered by @bot)
  Scores: none
Configuration

SAE weights: fnlp/Llama3_1-8B-Base-LXR-8x/Llama3_1-8B-Base-L16R-8x
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): cerebras/SlimPajama-627B
Features: 32,768
Data type: bfloat16
Hook name: blocks.16.hook_resid_post
Hook layer: 16
Architecture: jumprelu
Context size: 1,024
Dataset: cerebras/SlimPajama-627B
Activation function: relu
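The jumprelu architecture above refers to the JumpReLU activation used in the SAE encoder: a pre-activation passes through unchanged only when it exceeds a learned per-feature threshold, and is zeroed otherwise. A minimal NumPy sketch of the encode/decode pass follows; the dimensions are toy values (the real SAE maps a 4096-dim residual vector to 32,768 features), and all weights and thresholds here are random stand-ins, not the released fnlp weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def jumprelu(pre_acts, theta):
    # JumpReLU: keep the pre-activation only where it clears the
    # learned threshold theta; everything below is zeroed out.
    return np.where(pre_acts > theta, pre_acts, 0.0)

# Toy dimensions; the real SAE uses d_model=4096, n_features=32768.
d_model, n_features = 8, 32
W_enc = rng.normal(size=(d_model, n_features))
b_enc = np.zeros(n_features)
W_dec = rng.normal(size=(n_features, d_model))
b_dec = np.zeros(d_model)
theta = np.full(n_features, 0.5)  # per-feature thresholds (learned in practice)

x = rng.normal(size=d_model)                # residual-stream vector at blocks.16.hook_resid_post
feats = jumprelu(x @ W_enc + b_enc, theta)  # sparse feature activations
x_hat = feats @ W_dec + b_dec               # reconstruction of the residual vector

print(np.count_nonzero(feats), "of", n_features, "features active")
```

Because of the threshold, every surviving activation is strictly greater than its theta, which is what makes JumpReLU SAEs sparser than plain-ReLU SAEs at the same reconstruction quality.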
Embed

IFrame:
<iframe src="https://www.neuronpedia.org/llama3.1-8b/16-llamascope-res-32k/5970?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>

Link:
https://www.neuronpedia.org/llama3.1-8b/16-llamascope-res-32k/5970?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true
Negative Logits

—↵↵       -0.18
--↵↵      -0.17
--)↵      -0.16
ãĢ        -0.14
--)       -0.13
âĸį       -0.13
EqualTo   -0.13
')."      -0.12
%).↵↵     -0.12
raph      -0.12

Positive Logits

;    0.19
:    0.19
.    0.18
;    0.17
:    0.15
),   0.15
=    0.15
|    0.14
?    0.14
!    0.14
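The logit tables show which output tokens this feature most promotes or suppresses. Figures like these are typically obtained by projecting the feature's decoder direction through the model's unembedding matrix; a toy sketch (random stand-in matrices, not the real model weights):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, vocab = 8, 20  # toy sizes; the real model has d_model=4096

W_dec_feature = rng.normal(size=d_model)   # decoder direction for one SAE feature
W_U = rng.normal(size=(d_model, vocab))    # unembedding matrix (toy stand-in)

logit_effects = W_dec_feature @ W_U        # per-token logit contribution
order = np.argsort(logit_effects)
print("most suppressed token ids:", order[:3])
print("most promoted token ids:", order[-3:])
```

Mapping the extreme entries back through the tokenizer yields token/score pairs like those in the tables above.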
Activations

Density: 1.573%
No Known Activations
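The density figure above is the fraction of dataset tokens on which the feature fires (nonzero activation). A quick sketch of how such a figure could be computed from recorded activations, using toy data (the real figure comes from the 24,576 dashboard prompts of 128 tokens each):

```python
import numpy as np

# Toy activations for one feature over a batch of tokens; in practice
# these come from running the SAE over all dashboard prompt tokens.
acts = np.array([0.0, 0.0, 1.2, 0.0, 0.7, 0.0, 0.0, 0.0])

density = np.count_nonzero(acts) / acts.size
print(f"Density {density:.3%}")  # -> Density 25.000%
```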