2025 Emergent Abilities in Large Language Models: A Survey (Berti, Giorgi & Kasneci, 2025). A comprehensive survey that reviews definitions of "emergent abilities," summarizes which capabilities have been reported (reasoning, in-context learning, code generation, problem solving), and critically examines the conditions under which they appear (scaling, pretraining loss, prompting, quantization, etc.).
2025 LLMs and Emergence: A Complex Systems Perspective (Krakauer, Krakauer & Mitchell, 2025). Frames emergence in LLMs through a complex-systems lens, examining whether emergent capabilities reflect genuine emergent intelligence (with internal coarse-grained representations) rather than mere scaling artifacts. Provides conceptual clarity about different kinds of "emergence."
2025 Emergent Abilities of Large Language Models under Continued Pre-training for Language Adaptation (Elhady, Agirre & Artetxe, ACL 2025). Empirical work showing that emergent abilities can arise (or shift) when LLMs undergo continued pre-training (CPT) for language adaptation, even when the original model was English-centric. This speaks to the dynamics of emergence under distribution shift.
2025 Emergent Response Planning in LLM (Dong et al., 2025; arXiv). Presents evidence that LLMs' hidden representations encode attributes of the full response beyond the next token, both structural and content-related, suggesting a form of emergent planning behavior. This challenges the simplistic view of LLMs as mere next-token predictors.
2024 Understanding Emergent Abilities of Language Models from the Loss Perspective (arXiv 2403.15796). Rather than tying emergent abilities strictly to model scale, this work studies them through the lens of pre-training loss: models with similar pre-training loss, even if they differ in size, show comparable downstream performance, indicating that emergent abilities may depend more on loss dynamics than on model size per se.
2024 Are Emergent Abilities in Large Language Models just In-Context Learning? (Lu et al., ACL 2024). A critical study challenging the emergent-abilities narrative: through extensive experiments, it argues that many purported "emergent abilities" may be explained by in-context learning, model memory, and linguistic knowledge rather than a scale-driven jump in capability. Raises caution about overinterpreting "emergence."