"Derrick Stolee via GitGitGadget" <[email protected]> writes:
> + if (stat(chain_name, &st)) {
> ...
> + if (st.st_size <= the_hash_algo->hexsz) {
> ...
> + fp = fopen(chain_name, "r");
> + free(chain_name);
> +
> + if (!fp)
> + return NULL;
Checking the size before opening the file is an invitation for an
unnecessary race, isn't it?  Perhaps fopen() followed by fstat() on
the resulting stream would be a better alternative?
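
A minimal sketch of that ordering, reusing the variables already in
the patch (error handling kept deliberately simple):

    fp = fopen(chain_name, "r");
    free(chain_name);
    if (!fp)
        return NULL;
    if (fstat(fileno(fp), &st) || st.st_size <= the_hash_algo->hexsz) {
        fclose(fp);
        return NULL;
    }

That way the size we base the allocation on is guaranteed to describe
the file we are actually reading from.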
> + oids = xcalloc(st.st_size / (the_hash_algo->hexsz + 1), sizeof(struct object_id));
> +
> + while (strbuf_getline_lf(&line, fp) != EOF && valid) {
> + char *graph_name;
> + struct commit_graph *g;
I am imagining an evil tester growing the file after you called
xcalloc() above ;-)  Should we at least protect ourselves from
reading more than we originally planned to read?  I would imagine
that the ideal code organization would look more like this:
    valid = 1;
    have_read_all = 0;
    fp = fopen(chain_name, "r");
    fstat(fileno(fp), &st);
    count = st.st_size / (the_hash_algo->hexsz + 1);
    oids = xcalloc(count, sizeof(struct object_id));
    for (i = 0; i < count; i++) {
        if (strbuf_getline_lf(&line, fp) == EOF)
            break;
        /* add one graph based on the line */
        if (error) {
            valid = 0;
            break;
        }
    }
    if (strbuf_getline_lf(&line, fp) == EOF)
        have_read_all = 1;
    if (valid && i < count)
        die("file truncated while we were reading?");
    if (valid && !have_read_all)
        die("file grew while we were reading?");
That is, if we really care.  But even without going to that extreme,
we should at least refrain from reading more than we allocated.
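
The minimal version of that protection would be to bound the loop in
the posted patch by the number of entries we allocated, e.g. (`count'
and `i' are illustrative names, not taken from the patch):

    count = st.st_size / (the_hash_algo->hexsz + 1);
    oids = xcalloc(count, sizeof(struct object_id));

    while (i < count && strbuf_getline_lf(&line, fp) != EOF && valid) {
        /* add one graph based on the line, as before */
        i++;
    }

so that nothing past oids[count - 1] is ever written, no matter what
happens to the file on disk while we read it.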