Name | CVE-2025-53630
Description | llama.cpp is a C/C++ inference implementation for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
Source | CVE (at NVD; CERT, LWN, oss-sec, fulldisc, Debian ELTS, Red Hat, Ubuntu, Gentoo, SUSE bugzilla/CVE, GitHub advisories/code/issues, web search, more) |
Debian Bugs | 1109124 |
Vulnerable and fixed packages
The table below lists information on source packages.
Source Package | Release | Version | Status
---|---|---|---
ggml (PTS) | forky, sid | 0.0~git20250712.d62df60-5 | fixed
llama.cpp (PTS) | forky | 5882+dfsg-3 | fixed
llama.cpp (PTS) | sid | 5882+dfsg-4 | fixed
The information below is based on the following data on fixed versions.
Package | Type | Release | Fixed Version | Urgency | Origin | Debian Bugs
---|---|---|---|---|---|---
ggml | source | (unstable) | 0.0~git20250711.b6d2ebd-1 | | | 1109124
llama.cpp | source | (unstable) | 5882+dfsg-1 | unimportant | | |
Notes
Upstream advisory: https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-vgg9-87g3-85w8
Fixed by: https://github.com/ggml-org/llama.cpp/commit/26a48ad699d50b6268900062661bd22f3e792579 (b5854)
llama.cpp builds its embedded copy of ggml but does not use it; Debian builds against the standalone src:ggml package instead.
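
The advisory does not spell out the exact overflowing expression, but the bug class is straightforward: a byte count derived from untrusted, file-supplied GGUF fields wraps around, the resulting undersized heap buffer is then read or written past its end. The following is a minimal sketch of the mitigation pattern (overflow-checked size computation before allocating); the tensor_nbytes helper and the dimension values are purely illustrative and are not the actual gguf.cpp code.

```cpp
#include <cstdint>
#include <cstdio>
#include <limits>
#include <vector>

// Hypothetical helper: compute the byte size of a tensor from
// file-supplied dimensions. Without the overflow check, attacker-chosen
// dimensions can wrap size_t and yield a tiny allocation that is later
// overrun when the file's tensor data is copied in.
static bool tensor_nbytes(const std::vector<uint64_t> &ne, size_t type_size,
                          size_t *out) {
    size_t total = type_size;
    for (uint64_t d : ne) {
        // Reject the file if total * d would overflow size_t.
        if (d != 0 && total > std::numeric_limits<size_t>::max() / d) {
            return false;
        }
        total *= static_cast<size_t>(d);
    }
    *out = total;
    return true;
}

int main() {
    // Dimensions as they might arrive from an untrusted GGUF file;
    // their product wraps a 64-bit size_t.
    std::vector<uint64_t> ne = {1ULL << 32, 1ULL << 32};
    size_t nbytes = 0;
    if (!tensor_nbytes(ne, sizeof(float), &nbytes)) {
        std::fprintf(stderr, "rejecting tensor: size overflows size_t\n");
        return 1;
    }
    // Only reached for sane sizes: the allocation now matches the amount
    // of data later copied out of the file, so no heap OOB occurs.
    std::vector<unsigned char> buf(nbytes);
    std::printf("allocated %zu bytes\n", buf.size());
    return 0;
}
```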