OpenAI · Sparse Autoencoder for GPT2-Small - v5 2024

Model: gpt2-small
Source/SAE: 0-att_128k-oai (Attention Out - 128k, Layer 0)
Release: gpt2sm-oai-2024
Configuration

Weights: jbloom/GPT2-Small-OAI-v5-128k-attn-out-SAEs/v5_128k_layer_0
Features: 131,072
Data Type: torch.float32
Hook Name: blocks.0.hook_attn_out
Hook Layer: 0
Architecture: standard
Context Size: 64
Dataset: Skylion007/openwebtext
Activation Function: topk

How To Load
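A minimal loading sketch using the sae_lens library, which pages like this one conventionally pair with these weight repositories. The release and sae_id strings below are assumptions inferred from SAELens naming conventions and the hook name in the configuration above, not confirmed identifiers; check the sae_lens pretrained-SAE directory for the exact values.

    # Minimal sketch, assuming sae_lens 3.x-5.x, where SAE.from_pretrained
    # returns a (sae, cfg_dict, sparsity) tuple.
    import torch
    from sae_lens import SAE

    device = "cuda" if torch.cuda.is_available() else "cpu"

    sae, cfg_dict, sparsity = SAE.from_pretrained(
        release="gpt2-small-attn-out-v5-128k",  # ASSUMED release name
        sae_id="blocks.0.hook_attn_out",        # ASSUMED id; mirrors the hook name above
        device=device,
    )

    print(cfg_dict["hook_name"])  # expect: blocks.0.hook_attn_out
    print(sae.cfg.d_sae)          # expect: 131072, matching the feature count above

To get feature activations, run GPT2-Small with TransformerLens, cache the activation at the hook point, and encode it with the SAE:

    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained("gpt2")  # gpt2-small
    _, cache = model.run_with_cache("The quick brown fox")
    # Encode attention output into the 131,072-dim sparse feature basis.
    feature_acts = sae.encode(cache["blocks.0.hook_attn_out"])  # [batch, pos, 131072]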