> "Dangerous" in the sense that if you give "od" a large task it needs a
> lot of RAM? If so, most nontrivial programs are "dangerous".

While it’s true that many programs allocate large amounts of memory for
large inputs, well-designed software validates user input to guard against
pathological or abusive cases. The issue here is not that "od" performs a
large task, but that a single user-supplied argument can trigger an
unchecked memory allocation with no upper bound or safety net, opening the
door to denial of service.

Since od is a trusted system utility that is included in virtually all
Linux distributions, it is held to a higher standard of robustness and
input validation. Allowing unbounded memory allocation based on user input
undermines the reliability expected of such core tools.

> No need for that. Just use 'ulimit -v' and set whatever limit you like.

While it is true that 'ulimit -v' can cap the virtual memory available to
a shell and the processes it spawns, it is not a sufficient or reliable
substitute for proper input validation within applications. Relying solely
on ulimit assumes that every environment has such limits correctly
configured, which is rarely the case, especially in containerized setups,
developer environments, or lightweight Linux distributions where the
limits may be unset or overly permissive.
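
For context, the suggested workaround looks roughly like the sketch below;
the 1 GiB cap is an arbitrary value chosen purely for illustration, and
src/od refers to the same coreutils build tree used in the PoC further
down.

```
# Cap the virtual memory of this shell and its children to ~1 GiB
# (ulimit -v takes kilobytes; the value is illustrative only).
ulimit -v 1048576

# The oversized request should now fail with an allocation error
# instead of eating into shared system memory.
src/od -w99999999998 /bin/ls
```

Note that the limit only affects the shell that set it and its children, so
it has to be configured in every context where untrusted invocations might
occur, which is exactly the configuration burden described above.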

More importantly, ulimit does not eliminate the developer's responsibility
to ensure that their programs behave safely when handling untrusted input.
Allowing unbounded memory allocation based on user-controlled parameters
breaks the principle of fail-safe defaults and shifts critical security
responsibility from the application to the system administrator, which is
both dangerous and unreasonable.

Furthermore, in multi-user systems, a single user’s abuse of such a flaw
can exhaust shared system memory and trigger the OOM killer, potentially
terminating unrelated services or user processes, leading to a
denial-of-service scenario. Secure applications are expected to enforce
internal constraints—especially for memory allocation—and most
well-maintained software includes such sanity checks.
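
If the OOM killer does fire in such a scenario, its activity can be
confirmed afterwards from the kernel log, e.g. with the command below (the
exact message wording varies between kernel versions, and reading the log
may require root).

```
# Look for OOM-killer activity in the kernel log
dmesg | grep -iE "out of memory|oom-killer"
```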

Additionally, we put together a PoC in which the bug is used to mount a
resource-exhaustion attack:
```
# Spawn 120 background od processes, each requesting an absurdly large
# number of bytes per output line via -w.
for i in $(seq 1 120); do src/od -w99999999998 /bin/ls & done
```

Running the above command exhausts the machine's memory, as the free -h
output below shows.

Before running
```
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           187Gi       4.5Gi       122Gi       2.0Mi        60Gi       181Gi
Swap:           15Gi          0B        15Gi
```

After running
```
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           187Gi       176Gi       8.9Gi       0.0Ki       2.1Gi       9.2Gi
Swap:           15Gi        16Mi        15Gi
```
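
To recover after the test, the background od processes spawned by the PoC
can simply be terminated (assuming no other od instance with that argument
is running), after which the memory is released:

```
# Kill the PoC's background od processes
pkill -f 'od -w99999999998'
```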

Based on the above, I personally believe it would be appropriate to include
input validation in this case.
However, if the maintainers consider that such checks fall outside the
intended design scope of coreutils, I understand and will respect that
perspective.
