Google DeepMind · Exploring Gemma 2 with Gemma Scope
gemma-2-2b · 11-gemmascope-mlp-16k
Source from gemma-scope · MLP Out - 16k · Layer 11
Configuration
HuggingFace Repo: google/gemma-scope-2b-pt-mlp/layer_11/width_16k/average_l0_98
Features: 16,384
Data Type: float32
Hook Name: blocks.11.hook_mlp_out
Hook Layer: 11
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
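The configuration above describes a JumpReLU sparse autoencoder with 16,384 features hooked at `blocks.11.hook_mlp_out`. A minimal sketch of the JumpReLU encode/decode step is below, using placeholder random parameters; the input width of 2,304 (gemma-2-2b's residual dimension) and the parameter names are assumptions for illustration, not the released checkpoint, whose learned weights, biases, and thresholds live in the HuggingFace repo listed above.

```python
import numpy as np

# Minimal JumpReLU SAE sketch matching the configuration above.
# D_MODEL = 2304 is an assumption for gemma-2-2b's hook_mlp_out width;
# N_FEATURES = 16,384 comes from the page.
D_MODEL, N_FEATURES = 2304, 16_384

rng = np.random.default_rng(0)
# Placeholder parameters standing in for the released checkpoint
# (encoder/decoder weights, biases, per-feature thresholds).
W_enc = rng.normal(0, 0.02, (D_MODEL, N_FEATURES)).astype(np.float32)
W_dec = rng.normal(0, 0.02, (N_FEATURES, D_MODEL)).astype(np.float32)
b_enc = np.zeros(N_FEATURES, dtype=np.float32)
b_dec = np.zeros(D_MODEL, dtype=np.float32)
threshold = np.full(N_FEATURES, 0.05, dtype=np.float32)

def encode(x: np.ndarray) -> np.ndarray:
    """JumpReLU: keep a pre-activation only where it exceeds its
    learned per-feature threshold, otherwise zero it out."""
    pre = x @ W_enc + b_enc
    return np.where(pre > threshold, pre, 0.0)

def decode(feats: np.ndarray) -> np.ndarray:
    """Reconstruct the hooked activation from sparse features."""
    return feats @ W_dec + b_dec

# Batch of 4 fake MLP-out activations -> sparse features -> reconstruction.
x = rng.normal(0, 1, (4, D_MODEL)).astype(np.float32)
feats = encode(x)
recon = decode(feats)
print(feats.shape, recon.shape)
```

With the real checkpoint, `feats` rows would be sparse (the `average_l0_98` in the repo path indicates roughly 98 active features per token); with these random placeholders the sparsity pattern is arbitrary.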