Explanation generated by OpenAI's automated interpretability method from the paper "Language models can explain neurons in language models". Modified by Johnny Lin to add new models and context windows.
The neuron fires strongly on mentions of specific software and model names, most notably "Llama" and "llama.cpp" (and similar identifiers), i.e. tokens that form part of those library or model names.
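A claim like this can be spot-checked by inspecting the neuron's MLP activations on text containing the trigger tokens. Below is a minimal sketch using TransformerLens; the model name, layer index, and neuron index are hypothetical placeholders, not the actual neuron this explanation describes.

```python
# Minimal sketch: inspecting one MLP neuron's per-token activations.
# Assumptions (hypothetical, not from the explanation above): the model is
# gpt2-small and the neuron of interest is neuron 1234 in layer 5's MLP.
from transformer_lens import HookedTransformer

LAYER = 5      # hypothetical layer index
NEURON = 1234  # hypothetical neuron index

model = HookedTransformer.from_pretrained("gpt2-small")

text = "I compiled llama.cpp to run the Llama model locally."
tokens = model.to_str_tokens(text)

# Run the model and cache all intermediate activations.
_, cache = model.run_with_cache(text)

# Post-nonlinearity MLP activations: shape [batch, seq_len, d_mlp].
acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]

# Print each token with its activation; the explanation predicts the
# largest values on tokens belonging to "Llama" / "llama.cpp".
for tok, act in zip(tokens, acts.tolist()):
    print(f"{tok!r:>15}  {act:+.3f}")
```

If the explanation holds for the neuron under study, the printout should show a clear activation spike on the "Llama"/"llama.cpp" subword tokens relative to the surrounding context.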