Neuronpedia
© Neuronpedia 2026
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Gemma-2-2B · MLP In - 16k · Layer 1
Source: 1-gemmascope-transcoder-16k (from the gemma-scope release)
Configuration
Weights: google/gemma-scope-2b-pt-transcoders/layer_1/width_16k/average_l0_65
Features: 16,384
Data Type: float32
Hook Name: blocks.1.ln2.hook_normalized
Architecture: jumprelu_transcoder
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
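The configuration above names a `jumprelu_transcoder`: an encoder/decoder pair that reads the normalized MLP input at `blocks.1.ln2.hook_normalized`, produces 16,384 sparse feature activations via a JumpReLU nonlinearity (a per-feature learned threshold below which activations are zeroed), and decodes them to predict the MLP's output. A minimal NumPy sketch of that forward pass follows; the weight shapes, initialization, and threshold values here are placeholders for illustration, not the released Gemma Scope parameters.

```python
import numpy as np

def jumprelu(pre_acts, threshold):
    """JumpReLU: keep each pre-activation only where it exceeds its
    (learned) threshold; zero it elsewhere."""
    return pre_acts * (pre_acts > threshold)

class JumpReLUTranscoder:
    """Sketch of a JumpReLU transcoder. Default dims match this page
    (d_model=2304 for Gemma-2-2B, d_sae=16384 features), but the
    random weights and fixed threshold are hypothetical."""

    def __init__(self, d_model=2304, d_sae=16384, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0, 0.02, (d_model, d_sae)).astype(np.float32)
        self.b_enc = np.zeros(d_sae, dtype=np.float32)
        self.W_dec = rng.normal(0, 0.02, (d_sae, d_model)).astype(np.float32)
        self.b_dec = np.zeros(d_model, dtype=np.float32)
        # One learned threshold per feature; constant here for the sketch.
        self.threshold = np.full(d_sae, 0.05, dtype=np.float32)

    def encode(self, x):
        # x: [batch, d_model] activations from the hook point.
        return jumprelu(x @ self.W_enc + self.b_enc, self.threshold)

    def __call__(self, x):
        # Sparse feature activations, then a reconstruction of the
        # MLP output from those features.
        acts = self.encode(x)
        return acts @ self.W_dec + self.b_dec, acts
```

Unlike a standard SAE (which reconstructs its own input), a transcoder maps the MLP's input to its output, so the decoder target lives in a different activation than the encoder input.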