Neuronpedia
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Model: Gemma-2-2B · Source: Residual Stream - 16k (21-GEMMASCOPE-RES-16K) · Feature Index: 6961
Explanations
Explanation: the presence of indentation and formatting characters in code snippets
Method: oai_token-act-pair · gpt-4o-mini (triggered by @bot)
Scores: none
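The explanation and other feature data shown on this page can also be fetched programmatically. Below is a minimal sketch; the /api/feature/{model}/{source}/{index} path and the response field names are assumptions inferred from the site's URL scheme, not verified here.

import requests

# Identifiers copied from this page; the API path and response fields are assumptions.
MODEL = "gemma-2-2b"
SOURCE = "21-gemmascope-res-16k"
INDEX = 6961

resp = requests.get(
    f"https://www.neuronpedia.org/api/feature/{MODEL}/{SOURCE}/{INDEX}",
    timeout=30,
)
resp.raise_for_status()
feature = resp.json()

# Illustrative field names; inspect feature.keys() for the actual schema.
print(feature.get("explanations"))
print(feature.get("pos_str"), feature.get("neg_str"))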
Top Features by Cosine Similarity (compared with GEMMA-2-2B @ 21-gemmascope-res-16k): not shown here
Configuration
SAE: google/gemma-scope-2b-pt-res/layer_21/width_16k/average_l0_70
Prompts (Dashboard): 36,864 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.21.hook_resid_post
Hook Layer: 21
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
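The configuration above identifies the SAE weights on Hugging Face. A minimal loading sketch follows, assuming the weights are published as a params.npz file at that path containing W_enc, b_enc, and threshold arrays (the layout used by the Gemma Scope release); the encoder below matches the jumprelu architecture listed in the configuration.

import numpy as np
from huggingface_hub import hf_hub_download

# Repo and path taken from the Configuration block; the params.npz layout is an assumption.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_21/width_16k/average_l0_70/params.npz",
)
params = np.load(path)

W_enc = params["W_enc"]          # (d_model, 16384)
b_enc = params["b_enc"]          # (16384,)
threshold = params["threshold"]  # per-feature JumpReLU threshold

def jumprelu_encode(resid):
    # Pre-activations at or below the learned threshold are zeroed (JumpReLU).
    pre = resid @ W_enc + b_enc
    return pre * (pre > threshold)

Feature 6961's activation on a residual-stream vector is then jumprelu_encode(resid)[..., 6961].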
Embeds
IFrame
<iframe src="https://www.neuronpedia.org/gemma-2-2b/21-gemmascope-res-16k/6961?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link
https://www.neuronpedia.org/gemma-2-2b/21-gemmascope-res-16k/6961?embed=true&embedexplanation=true&embedplots=true&embedsteer=true&embedactivations=true&embedlink=true&embedtest=true
Negative Logits
  =[]      -0.60
  way      -0.58
  fine     -0.54
  ERTA     -0.53
  ”        -0.52
  ho       -0.52
  <>       -0.52
  ’        -0.52
  qu       -0.51
  παρά     -0.51

Positive Logits
  (token not rendered)   1.24
  (token not rendered)   1.12
  (token not rendered)   0.97
  (token not rendered)   0.96
  (token not rendered)   0.92
  (token not rendered)   0.91
  tvguidetime            0.86
  الحره                  0.82
  myſelf                 0.81
  (token not rendered)   0.81
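Logit lists like these are typically obtained by projecting the feature's decoder direction through the model's unembedding matrix. A minimal sketch with TransformerLens is below, reusing params from the loading snippet above; exact values may differ from the dashboard if it applies additional normalization.

import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")

# Decoder direction for feature 6961, taken from the params.npz loaded earlier.
feature_dir = torch.tensor(params["W_dec"][6961]).to(model.W_U)

# Project through the unembedding to get each vocabulary token's logit effect.
logit_effect = (feature_dir @ model.W_U).float()     # shape: (vocab_size,)
top_vals, top_ids = logit_effect.topk(10)
bot_vals, bot_ids = logit_effect.topk(10, largest=False)

print("positive:", model.tokenizer.convert_ids_to_tokens(top_ids.tolist()), top_vals)
print("negative:", model.tokenizer.convert_ids_to_tokens(bot_ids.tolist()), bot_vals)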
Activations
Density: 1.447%
No known activations
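The Test tab on the live page runs an arbitrary prompt through the model and reports this feature's per-token activation. A minimal sketch of the same check, reusing model and jumprelu_encode from the snippets above (the hook name comes from the Configuration block; the prompt is only an illustrative example):

prompt = 'def greet(name):\n    print(f"Hello, {name}!")\n'
_, cache = model.run_with_cache(prompt)

# Residual stream after block 21, matching Hook Name: blocks.21.hook_resid_post.
resid = cache["blocks.21.hook_resid_post"][0].float().cpu().numpy()  # (seq_len, d_model)

acts = jumprelu_encode(resid)        # (seq_len, 16384)
print(model.to_str_tokens(prompt))
print(acts[:, 6961])                 # this feature's activation on each token

If the explanation above is accurate, the indentation and formatting tokens in the snippet should show the largest activations.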