OpenAI's Automated Interpretability, from the paper "Language models can explain neurons in language models". Modified by Johnny Lin to add new models/context windows.
LaTeX math formatting commands and spacing constructs, especially sequences indicating fine-grained kerning and symbol formatting around Greek-letter expressions.
structured alphanumeric identifiers and paths, especially those with punctuation separators (hyphens/slashes/underscores) and numeric codes in URLs, figure/reference labels, and code tokens.
This neuron activates on highly structured technical text, especially LaTeX-style math/diagram syntax and code-like test identifiers with dense brackets, braces, operators, and numeric parameters.
structural connectors and delimiters linking elements—prepositions and conjunctions plus syntax symbols (slashes, braces, assignment/statement endings)—especially in technical, math, or code contexts.
structural markers of formal, technical or expository prose—function words, punctuation and connective/notation cues that signal complex, multi-clause sentences and technical descriptions.
ic wiring is indispensable to a high density mounting and,