Neuronpedia
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Gemma-2-2B · Attention Out - 16k (0-gemmascope-att-16k) · Feature Index 252
Explanations
attends to the phrase "To" from various contexts to corresponding actions or inquiries represented by "see."
oai_attention-head · gpt-4o-mini
Configuration
Source: google/gemma-scope-2b-pt-att/layer_0/width_16k/average_l0_104
Prompts (Dashboard): 36,864 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.0.attn.hook_z
Hook Layer: 0
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
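The Source path above points into the google/gemma-scope-2b-pt-att repo on Hugging Face. Below is a minimal loading sketch, assuming the repo follows the published Gemma Scope layout (one params.npz per SAE with keys W_enc, W_dec, b_enc, b_dec, threshold); it also implements the jumprelu encoder the configuration declares.

```python
import numpy as np
import torch
from huggingface_hub import hf_hub_download

# Download this SAE's weights; the params.npz filename and key names are
# assumptions based on the published Gemma Scope repo layout.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-att",
    filename="layer_0/width_16k/average_l0_104/params.npz",
)
params = {k: torch.from_numpy(v) for k, v in np.load(path).items()}

def encode(acts: torch.Tensor) -> torch.Tensor:
    # JumpReLU: ReLU pre-activations, zeroed below a learned per-feature
    # threshold ("Architecture: jumprelu" above).
    pre = acts @ params["W_enc"] + params["b_enc"]
    return torch.relu(pre) * (pre > params["threshold"])

def decode(features: torch.Tensor) -> torch.Tensor:
    return features @ params["W_dec"] + params["b_dec"]
```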
Embeds
IFrame (plots, explanation, and test field enabled):
<iframe src="https://www.neuronpedia.org/gemma-2-2b/0-gemmascope-att-16k/252?embed=true&embedexplanation=true&embedplots=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link: https://www.neuronpedia.org/gemma-2-2b/0-gemmascope-att-16k/252?embed=true&embedexplanation=true&embedplots=true&embedtest=true
Head Attr Weights (attention head: weight)
0: 0.07   1: 0.02   2: 0.03   3: 0.01   4: 0.12   5: 0.66   6: 0.02   7: 0.03
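Head 5 carries most of the weight (0.66), i.e. this feature reads primarily from attention head 5 at layer 0. One plausible way to derive such weights, offered as an assumption rather than the documented method: since the SAE sits at blocks.0.attn.hook_z, each feature's decoder row spans the concatenated head outputs, so take each head's share of the decoder row's norm. The sketch reuses `params` from the loading sketch above.

```python
# Feature 252's decoder row spans the 8 concatenated head outputs of
# gemma-2-2b layer 0; split it per head and normalize the norms.
d_vec = params["W_dec"][252]        # [n_heads * d_head]
per_head = d_vec.reshape(8, -1)     # [n_heads, d_head]
norms = per_head.norm(dim=-1)
weights = norms / norms.sum()       # head 5 should dominate (~0.66)
```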
Negative Logits
  myſelf    -1.20
  itſelf    -1.15
  Efq       -1.03
  purpoſe   -1.02
  Anſ       -1.00
  ſelf      -1.00
  Reſ       -1.00
  Theſe     -0.99
  ―――――     -0.99
  uſed      -0.98
Positive Logits
  <eos>   0.62
  :       0.56
  (       0.49
  "       0.47
  [       0.47
  di      0.45
  /       0.45
  .       0.45
  -       0.44
  /       0.44
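The negative side is dominated by long-s (ſ) tokens from early-modern English text in the Pile; the positive side by punctuation and <eos>. Below is a hedged sketch of one standard way to approximate such tables: project the feature's decoder direction through layer 0's attention output matrix W_O and then the unembedding. This is a logit-lens style estimate that ignores final LayerNorm scaling, not necessarily the dashboard's exact computation; it reuses `params` from the loading sketch.

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b", device="cpu")
d_vec = params["W_dec"][252].reshape(8, -1)                  # [n_heads, d_head]
resid_dir = torch.einsum("hd,hdm->m", d_vec, model.W_O[0])   # into residual stream
token_logits = resid_dir @ model.W_U                         # [d_vocab]
pos = torch.topk(token_logits, 10)
neg = torch.topk(token_logits, 10, largest=False)
print([model.to_single_str_token(int(i)) for i in pos.indices])
print([model.to_single_str_token(int(i)) for i in neg.indices])
```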
Activations
Density: 0.088%
No Known Activations
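The dashboard's Test field runs custom text through gemma-2-2b and shows this feature's activation on each token. A minimal equivalent sketch, reusing `params` and `encode` from the loading sketch above; the hook name and head layout come from the configuration, and the prompt is just an illustration.

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b", device="cpu")
tokens = model.to_tokens("To see how this feature fires, try any text.")
_, cache = model.run_with_cache(tokens)
z = cache["blocks.0.attn.hook_z"]               # [batch, pos, n_heads, d_head]
z_flat = z.reshape(z.shape[0], z.shape[1], -1)  # concatenate heads -> SAE input
acts_252 = encode(z_flat)[..., 252]             # feature 252, per token
print(list(zip(model.to_str_tokens(tokens), acts_252[0].tolist())))
```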