© Neuronpedia 2026
Google DeepMind · Exploring Gemma 2 with Gemma Scope

Gemma-2-9B · MLP Out - 131k · 26-gemmascope-mlp-131k
Source from gemma-scope · MLP Out - 131k · Layer 26
Configuration

Weights: google/gemma-scope-9b-pt-mlp/layer_26/width_131k/average_l0_110
Features: 131,072
Data Type: float32
Hook Name: blocks.26.hook_mlp_out
Hook Layer: 26
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu
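The configuration above describes a JumpReLU sparse autoencoder hooked on the layer-26 MLP output. The sketch below shows what that forward pass looks like, with toy dimensions and random weights rather than the published parameters (the real weights live in the Hugging Face repo listed above, and loaders such as SAELens can fetch Gemma Scope releases directly). The parameter names here (`W_enc`, `b_enc`, `threshold`, `W_dec`, `b_dec`) are illustrative, not a guaranteed match to the on-disk format.

```python
import numpy as np

# Toy JumpReLU SAE matching the architecture above, not the real weights.
# The published SAE has 131,072 features over the MLP-output width of
# gemma-2-9b; here both dimensions are shrunk so the sketch runs instantly.
rng = np.random.default_rng(0)
d_model, d_sae = 8, 32  # toy sizes; the real source uses d_sae = 131_072

W_enc = rng.normal(size=(d_model, d_sae)).astype(np.float32)
b_enc = np.zeros(d_sae, dtype=np.float32)
threshold = np.full(d_sae, 0.5, dtype=np.float32)  # per-feature JumpReLU threshold
W_dec = rng.normal(size=(d_sae, d_model)).astype(np.float32)
b_dec = np.zeros(d_model, dtype=np.float32)

def encode(x):
    """JumpReLU: keep a pre-activation only where it exceeds its threshold."""
    pre = x @ W_enc + b_enc
    return pre * (pre > threshold)

def decode(feats):
    """Reconstruct the hooked activation from the sparse feature vector."""
    return feats @ W_dec + b_dec

# Simulated batch of activations captured at blocks.26.hook_mlp_out.
x = rng.normal(size=(4, d_model)).astype(np.float32)
feats = encode(x)
recon = decode(feats)
print(feats.shape, recon.shape)
```

Every surviving feature activation is strictly above its threshold and everything else is exactly zero, which is what makes the feature vector sparse enough to browse one feature at a time.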