Neuronpedia
© Neuronpedia 2025
OpenAI · Sparse Autoencoder for GPT2-Small - v5 2024
Model: GPT2-Small
SAE: Attention Out - 128k (0-ATT_128K-OAI)
Feature Index: 6474
Explanations

No explanations found for this feature. New auto-interp explanations can be generated (e.g. with claude-3-5-haiku-20241022).
Top Features by Cosine Similarity
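This section of the dashboard ranks other features by the cosine similarity of their decoder directions to this one. A minimal NumPy sketch of that ranking (the decoder matrix `W_dec` here is random stand-in data with toy dimensions, not the actual SAE weights):

```python
import numpy as np

# Stand-in decoder matrix: one row per feature.
# The real SAE has 131,072 rows of width d_model = 768.
rng = np.random.default_rng(0)
W_dec = rng.standard_normal((1000, 768))

def top_by_cosine(W, index, k=5):
    """Return the k feature indices whose decoder rows are most
    cosine-similar to the decoder row of the given feature."""
    unit = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-normalize rows
    sims = unit @ unit[index]                            # cosine with each row
    sims[index] = -np.inf                                # exclude the feature itself
    return np.argsort(-sims)[:k]

# Hypothetical query against the stand-in matrix (index 474, not the real 6474).
neighbors = top_by_cosine(W_dec, 474)
print(neighbors)
```

The self-similarity entry is masked out with `-inf` so a feature never lists itself as its own nearest neighbor.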
Configuration

SAE: jbloom/GPT2-Small-OAI-v5-128k-attn-out-SAEs/v5_128k_layer_0
Prompts (Dashboard): 24,576 prompts, 128 tokens each
Dataset (Dashboard): Skylion007/openwebtext
Features: 131,072
Data Type: torch.float32
Hook Name: blocks.0.hook_attn_out
Hook Layer: 0
Architecture: standard
Context Size: 64
Dataset: Skylion007/openwebtext
Activation Function: topk
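The configuration describes a standard-architecture SAE with a `topk` activation function: on each forward pass, only the k largest pre-activations are kept and every other feature is zeroed. A minimal NumPy sketch of that encode step, with toy dimensions and a hypothetical k (the real SAE has 131,072 features reading from `blocks.0.hook_attn_out`):

```python
import numpy as np

def topk_encode(x, W_enc, b_enc, k):
    """Standard SAE encoder with a top-k activation:
    keep the k largest pre-activations, zero the rest."""
    pre = x @ W_enc + b_enc                      # pre-activations, shape (n_features,)
    acts = np.zeros_like(pre)
    keep = np.argpartition(pre, -k)[-k:]         # indices of the k largest entries
    acts[keep] = np.maximum(pre[keep], 0.0)      # ReLU on the survivors
    return acts

# Toy sizes and a made-up k, purely illustrative.
rng = np.random.default_rng(0)
d_model, n_features, k = 768, 4096, 32
W_enc = rng.standard_normal((d_model, n_features)) / np.sqrt(d_model)
b_enc = np.zeros(n_features)
x = rng.standard_normal(d_model)                 # one activation vector from the hook point

acts = topk_encode(x, W_enc, b_enc, k)
print(int((acts != 0).sum()))                    # at most k nonzero features
```

The resulting feature vector is sparse by construction: at most k of the 4,096 toy features are active, which is what makes individual features like this one interpretable in isolation.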
IFrame
<iframe src="https://www.neuronpedia.org/gpt2-small/0-att_128k-oai/6474?embed=true&embedexplanation=true&embedplots=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link
https://www.neuronpedia.org/gpt2-small/0-att_128k-oai/6474?embed=true&embedexplanation=true&embedplots=true&embedtest=true
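The embed link above is the feature page URL (`/{model}/{source}/{index}`) plus query flags that toggle which panels appear. A small Python sketch that rebuilds it with `urllib.parse` (the helper function and its keyword names are my own; the path segments and flag names come from the URL above):

```python
from urllib.parse import urlencode

def embed_url(model, source, index, **flags):
    """Build a Neuronpedia embed URL from a feature's identifiers
    plus boolean panel flags (hypothetical helper, not an official API)."""
    base = f"https://www.neuronpedia.org/{model}/{source}/{index}"
    params = {"embed": "true", **{k: str(v).lower() for k, v in flags.items()}}
    return base + "?" + urlencode(params)

url = embed_url(
    "gpt2-small", "0-att_128k-oai", 6474,
    embedexplanation=True, embedplots=True, embedtest=True,
)
print(url)
```

Dropping a flag (or setting it to `False`) hides the corresponding panel in the embedded iframe.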
No Known Activations
This feature has no known activations.