Neuronpedia
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Gemma-2-2B · Attention Out - 16k (0-gemmascope-att-16k) · Feature 301
Explanations
attends to the specific numerical tokens from the adjacent unit-of-measurement tokens
(oai_attention-head · gpt-4o-mini, triggered by @bot)
Top Features by Cosine Similarity
Comparing with: GEMMA-2-2B @ 0-gemmascope-att-16k
Configuration
SAE: google/gemma-scope-2b-pt-att/layer_0/width_16k/average_l0_104
How To Load: see the sketch below
Prompts (Dashboard): 36,864 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.0.attn.hook_z
Hook Layer: 0
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
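The Gemma Scope repos on Hugging Face ship each SAE as a params.npz archive of plain weight arrays. A minimal loading sketch, assuming the W_enc/b_enc/W_dec/b_dec/threshold layout those archives use and the jumprelu architecture listed above:

import numpy as np
import torch
from huggingface_hub import hf_hub_download

# Download the weights for this SAE (path taken from the Configuration above).
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-att",
    filename="layer_0/width_16k/average_l0_104/params.npz",
)
params = np.load(path)
W_enc = torch.tensor(params["W_enc"])          # [d_in, 16384]
b_enc = torch.tensor(params["b_enc"])          # [16384]
W_dec = torch.tensor(params["W_dec"])          # [16384, d_in]
b_dec = torch.tensor(params["b_dec"])          # [d_in]
threshold = torch.tensor(params["threshold"])  # [16384], JumpReLU gates

def encode(x: torch.Tensor) -> torch.Tensor:
    # JumpReLU: ReLU pre-activations, zeroed wherever they fall below
    # the per-latent learned threshold.
    pre = x @ W_enc + b_enc
    return torch.relu(pre) * (pre > threshold)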
Embeds
The embed URL's query parameters (embedexplanation, embedplots, embedtest) toggle the explanation, plots, and test panes.
IFrame:
<iframe src="https://www.neuronpedia.org/gemma-2-2b/0-gemmascope-att-16k/301?embed=true&embedexplanation=true&embedplots=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link:
https://www.neuronpedia.org/gemma-2-2b/0-gemmascope-att-16k/301?embed=true&embedexplanation=true&embedplots=true&embedtest=true
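The same data that renders this page can also be fetched programmatically. A hedged sketch, assuming a JSON endpoint mirroring the page URL scheme; the exact route and payload fields should be checked against Neuronpedia's API docs:

import requests

# Assumed endpoint shape: /api/feature/{model}/{source}/{index}.
resp = requests.get(
    "https://www.neuronpedia.org/api/feature/gemma-2-2b/0-gemmascope-att-16k/301"
)
resp.raise_for_status()
feature = resp.json()
print(feature.get("explanations"))  # field name is a guess; inspect the payload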
Head Attr Weights
Head:    0     1     2     3     4     5     6     7
Weight:  0.87  0.00  0.01  0.01  0.04  0.01  0.01  0.01
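Head 0 dominates at 0.87, so this latent reads almost entirely from attention head 0. Neuronpedia's exact attribution recipe isn't shown on the page; one plausible approximation, reusing W_enc from the loading sketch above, splits the latent's encoder column across the concatenated head slices of hook_z and takes each slice's squared-norm share:

FEATURE = 301
N_HEADS, D_HEAD = 8, 256  # Gemma-2-2B attention shape (assumed here)

# hook_z concatenates the per-head outputs, so the encoder column for
# this latent splits cleanly into one slice per head.
col = W_enc[:, FEATURE].reshape(N_HEADS, D_HEAD)
share = col.pow(2).sum(dim=-1)
share = share / share.sum()
print({head: round(float(s), 2) for head, s in enumerate(share)})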
Negative Logits
"0": -0.67   "5": -0.63   "1": -0.63   "2": -0.62   "6": -0.60
"7": -0.59   "4": -0.59   "3": -0.59   "8": -0.58   "-": -0.57
Positive Logits
"Efq": 1.00     "myſelf": 0.99   "^(@)": 0.95   "raiſ": 0.91    "purpoſe": 0.91
"itſelf": 0.90  "ſelf": 0.89     "faſt": 0.88   "ſtate": 0.87   "pleaſure": 0.87
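These logit weights show which vocabulary tokens the latent's decoder direction pushes toward or away from (archaic long-ſ tokens dominate the positive side, digit tokens the negative). For an attention-out SAE the decoder direction lives in hook_z space, so it has to pass through the layer-0 attention output projection before the unembedding. A hedged sketch with TransformerLens, reusing W_dec, FEATURE, N_HEADS, and D_HEAD from the sketches above; it ignores the final RMSNorm, so the values are approximate:

import einops
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")
dec = W_dec[FEATURE].reshape(N_HEADS, D_HEAD)           # direction in z-space
w_o = model.W_O[0]                                      # [n_heads, d_head, d_model]
resid_dir = einops.einsum(dec, w_o, "h d, h d m -> m")  # map into the residual stream
logit_weights = resid_dir @ model.W_U                   # [d_vocab]
top = logit_weights.topk(10)
print(model.to_str_tokens(top.indices))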
Activations
Density: 6.399%
No Known Activations
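No activating examples are recorded here, but the dashboard's test pane (the embedtest parameter above) runs arbitrary text through the model and shows per-token activations. A sketch reproducing that locally, reusing model, encode, and FEATURE from the earlier sketches; the prompt is an invented example pairing numbers with units of measurement:

# Grab layer-0 attention z, flatten the heads, and encode with the SAE.
prompt = "The package weighs 25 kg and measures 40 cm."
_, cache = model.run_with_cache(prompt, names_filter="blocks.0.attn.hook_z")
z = cache["blocks.0.attn.hook_z"]               # [batch, pos, n_heads, d_head]
acts = encode(z.reshape(z.shape[0], z.shape[1], -1))[0, :, FEATURE]
for tok, a in zip(model.to_str_tokens(prompt), acts.tolist()):
    print(f"{tok!r}\t{a:.3f}")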