Neuronpedia
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Model: Gemma-2-9B
Source/SAE: Attention Out - 16k (0-gemmascope-att-16k)
Feature Index: 1227
Explanations
attends to the first token in a sequence marked with "star" and the second token in a sequence marked with "bracket"
(auto-interp: oai_attention-head · gpt-4o-mini)
Configuration

SAE: google/gemma-scope-9b-pt-att/layer_0/width_16k/average_l0_61
Prompts (Dashboard): 16,384 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.0.attn.hook_z
Hook Layer: 0
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
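The configuration above lists a "jumprelu" architecture. A minimal sketch of a JumpReLU SAE forward pass is below, using NumPy. The parameter names (W_enc, b_enc, W_dec, b_dec, threshold) and shapes follow the published Gemma Scope convention but are assumptions here; the real weights live in the google/gemma-scope-9b-pt-att repository, and the toy dimensions stand in for the actual d_model-sized input and 16,384 features.

```python
# Sketch of a JumpReLU SAE forward pass with toy random weights.
import numpy as np

def jumprelu_sae(x, W_enc, b_enc, W_dec, b_dec, threshold):
    """Encode an activation vector x, then reconstruct it.

    x: (d_model,) input activation (here, the flattened attn hook_z output)
    W_enc: (d_model, n_features), W_dec: (n_features, d_model)
    threshold: (n_features,) learned per-feature JumpReLU thresholds
    """
    pre_acts = x @ W_enc + b_enc
    # JumpReLU: pass the pre-activation through unchanged only where it
    # clears the (positive) learned threshold; otherwise output zero.
    feature_acts = pre_acts * (pre_acts > threshold)
    recon = feature_acts @ W_dec + b_dec
    return feature_acts, recon

# Toy dimensions; the real SAE has 16,384 features.
rng = np.random.default_rng(0)
d, f = 8, 32
acts, recon = jumprelu_sae(
    rng.normal(size=d),
    rng.normal(size=(d, f)), np.zeros(f),
    rng.normal(size=(f, d)), np.zeros(d),
    np.full(f, 0.5),
)
```

Because the threshold is positive, every surviving feature activation is strictly greater than it, which is what makes JumpReLU features sparse without shrinking the values that do fire.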
Embeds

IFrame
<iframe src="https://www.neuronpedia.org/gemma-2-9b/0-gemmascope-att-16k/1227?embed=true&embedexplanation=true&embedplots=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link
https://www.neuronpedia.org/gemma-2-9b/0-gemmascope-att-16k/1227?embed=true&embedexplanation=true&embedplots=true&embedtest=true
Head Attr Weights

Head  Weight
0     0.02
1     0.01
2     0.01
3     0.01
4     0.01
5     0.01
6     0.07
7     0.06
8     0.03
9     0.02
10    0.01
11    0.01
12    0.61
13    0.02
14    0.01
15    0.02
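The table above attributes this feature mostly to head 12 (weight 0.61). Since the SAE reads the concatenated per-head outputs at blocks.0.attn.hook_z, one simple way such per-head weights could be derived is the fraction of the feature's encoder-row squared norm that falls in each head's slice. The sketch below is an illustrative assumption, not Neuronpedia's documented method, and uses random toy weights.

```python
# Hypothetical per-head attribution for an attention-output SAE feature:
# share of the encoder row's squared norm in each head's slice of hook_z.
import numpy as np

def head_attribution(w_enc_row, n_heads):
    """Split one feature's encoder weights into per-head slices and
    return each slice's share of the total squared norm."""
    slices = np.split(w_enc_row, n_heads)
    sq = np.array([np.sum(s ** 2) for s in slices])
    return sq / sq.sum()

rng = np.random.default_rng(0)
w = rng.normal(size=16 * 128)      # toy: 16 heads, d_head = 128
attr = head_attribution(w, n_heads=16)
```

The shares are non-negative and sum to 1, matching the shape of the table above.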
Negative Logits

Token   Logit
.)}     -0.82
")));   -0.80
}))     -0.79
}}$}    -0.78
]")]    -0.78
})*/    -0.78
"]);    -0.78
]));    -0.77
}));    -0.77
}}}     -0.75

Positive Logits

Token   Logit
,        0.97
(blank)  0.77
(        0.74
-        0.73
"        0.71
and      0.70
.        0.70
2        0.69
T        0.68
/        0.68
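Per-token logit weights like the two tables above are typically computed by projecting the feature's decoder direction through the model's unembedding matrix and ranking the vocabulary. The sketch below shows that construction; the variable names (W_dec row, W_U) are assumptions, and the data is random rather than Gemma-2-9B's actual weights.

```python
# Rank vocabulary tokens by a feature's decoder direction projected
# through the unembedding matrix (the standard "logit weights" view).
import numpy as np

def feature_logit_weights(w_dec_row, W_U, k=10):
    """Return the indices of the top-k positive and top-k negative
    logits for one feature's decoder direction.

    w_dec_row: (d_model,) decoder row for the feature
    W_U: (d_model, vocab) unembedding matrix
    """
    logits = w_dec_row @ W_U
    order = np.argsort(logits)                 # ascending
    return order[-k:][::-1], order[:k]         # top positive, top negative

rng = np.random.default_rng(0)
d_model, vocab = 64, 1000
top_pos, top_neg = feature_logit_weights(
    rng.normal(size=d_model), rng.normal(size=(d_model, vocab))
)
```

On the real model, decoding the returned indices with the tokenizer yields token strings like those in the tables.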
Activations

Density: 0.046%
No Known Activations