CVE-2026-21869

Name: CVE-2026-21869
Description: llama.cpp provides inference for several LLM models in C/C++. In commit 55d4206c8 and earlier, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation that it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/llama_memory_seq_add receive a reversed range and a negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). There is no fix at the time of publication.
Source: CVE (at NVD; CERT, ENISA, LWN, oss-sec, fulldisc, Debian ELTS, Red Hat, Ubuntu, Gentoo, SUSE bugzilla/CVE, GitHub advisories/code/issues, web search, more)
Debian Bugs: 1125060
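The flaw described above can be sketched with a simplified model of a context-shift step. This is a hypothetical illustration, not the actual llama.cpp code: the function names context_shift and validate_n_discard are invented for this sketch, which only models how an unvalidated negative n_discard reverses the discard range and pushes writes past the end of the token buffer.

```cpp
#include <vector>

// Sanity check that the vulnerable code path lacks: in the affected
// versions, n_discard comes straight from the request JSON and is never
// checked for sign before use.
bool validate_n_discard(int n_discard) {
    return n_discard >= 0;
}

// Simplified model of discarding the oldest n_discard tokens after the
// first n_keep and shifting the remainder left. With a negative
// n_discard the source range [n_keep + n_discard, n_past) starts before
// n_keep, and the destination index i - n_discard moves RIGHT, past the
// end of the buffer: an out-of-bounds write, analogous to the reversed
// range and negative offset handed to llama_memory_seq_rm/seq_add.
void context_shift(std::vector<int> &tokens, int n_keep, int n_discard) {
    const int n_past = (int) tokens.size();
    for (int i = n_keep + n_discard; i < n_past; ++i) {
        tokens[i - n_discard] = tokens[i]; // OOB write when n_discard < 0
    }
    tokens.resize(n_past - n_discard);
}
```

A server-side fix along these lines would reject (or clamp) negative values before the shift runs; at publication time no such fix has been released upstream.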

Vulnerable and fixed packages

The table below lists information on source packages.

Source Package    Release  Version      Status
llama.cpp (PTS)   sid      8064+dfsg-1  vulnerable

The information below is based on the following data on fixed versions.

Package    Type    Release     Fixed Version  Urgency  Origin  Debian Bugs
llama.cpp  source  (unstable)  (unfixed)                       1125060

Notes

https://github.com/ggml-org/llama.cpp/issues/18717
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8947-pfff-2f3c
