[Updated 7/2023] Submissions to arXiv in the Computation and Language section (cs.CL) continue to rise dramatically, with pronounced seasonal spikes around pre-conference "quiet periods". What are these papers about? I grabbed all the cs.CL abstracts from the arXiv API and plotted a time series for 100 topics. The units on the y-axis are estimated token counts. Topics are sorted by their average date, so the fastest-rising topics are prompting, pre-training, BERT, few-shot, and distillation. The "oldest" topics are classic NLP, but also major topics from the pre-transformer era such as LSTMs/RNNs and embeddings. Topic models are down there too, but as you can see, they still work 😜.
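For anyone who wants to reproduce the data-gathering step, here is a minimal sketch of pulling cs.CL abstracts from the public arXiv API (the `export.arxiv.org/api/query` Atom endpoint). The function names and pagination defaults are my own choices, not part of any official client; for a full harvest you would loop over `start` in pages and respect the API's rate limits.

```python
import urllib.parse
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM_NS = {"a": "http://www.w3.org/2005/Atom"}

def build_query(category="cs.CL", start=0, max_results=100):
    # Build an arXiv API query URL for one page of results in a category,
    # sorted by submission date (newest first).
    params = {
        "search_query": f"cat:{category}",
        "start": start,
        "max_results": max_results,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
    }
    return ARXIV_API + "?" + urllib.parse.urlencode(params)

def parse_abstracts(atom_xml):
    # Extract (published date, abstract) pairs from an Atom feed response.
    root = ET.fromstring(atom_xml)
    rows = []
    for entry in root.findall("a:entry", ATOM_NS):
        date = entry.findtext("a:published", namespaces=ATOM_NS)
        summary = (entry.findtext("a:summary", namespaces=ATOM_NS) or "").strip()
        rows.append((date, summary))
    return rows
```

To actually fetch a page you would pass `build_query(...)` to `urllib.request.urlopen` and hand the response body to `parse_abstracts`; the resulting dated abstracts are what a topic model's per-topic token counts can then be aggregated over by month.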