Kris, Kris, Kris... So no one in the world ever reads files bigger than 2GB? That's a silly notion. You can't design an API around what you think a programmer is _most likely_ to need, without considering the other scenarios. At least not if you want it to scale well enough to still be relevant in a few years. The UNIX people understood that, and that's why UNIX-like operating systems are still in use after decades.
As for the OP: People have given a few good reasons why stderr is useful, and that's why it's around. Couldn't have said it better myself.

On 6/11/10, Kris Maglione <maglion...@gmail.com> wrote:
> On Fri, Jun 11, 2010 at 06:19:18PM +0200, pancake wrote:
>> On 06/11/10 15:21, Moritz Wilhelmy wrote:
>>
>>   unsigned int read(int fd, ref char *buf, unsigned int buf_len, GError **err);
>>
>> (Yeah, that's a silly example, but it allows you to make reads
>> bigger than 31 bits without having to check the return value.)
>> In other situations it is good to handle errors in this way,
>> but with some restrictions in mind you can mix error values
>> and data in the same pipe.
>
> ((1<<31)-1) / (1<<30) ≅ 2GB.
>
> I'm not seeing a major problem here. At any rate, the GError
> argument is more about a disdain for errno than anything else.
> It's the same reason that Go, Limbo, and Common Lisp support
> multiple return values.
>
> --
> Kris Maglione
>
> The first symptom of love in a young man is shyness; the first symptom
> in a woman, it's boldness.
>     --Victor Hugo
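
For what it's worth, here is a minimal sketch of the two styles being argued about, in plain C. The names (read_some, struct my_error) are made up for illustration and only stand in for a GError-like out parameter; the point is that keeping the error in a separate channel frees the whole unsigned range for the byte count, whereas the errno style folds "error" into a signed return value, which is where the roughly-2GB ceiling on a single read comes from.

    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical error record standing in for GError; not a real API. */
    struct my_error {
        int  code;
        char msg[128];
    };

    /* Out-parameter style: errors travel through *err, so the full size_t
     * range is available for the byte count.  Returns 0 on failure (which
     * collides with EOF unless the caller also checks err->code, one of the
     * trade-offs of this style).  The kernel still limits what a single
     * read(2) can transfer; this only changes the shape of the API, not
     * the underlying system call. */
    size_t read_some(int fd, void *buf, size_t len, struct my_error *err)
    {
        ssize_t n = read(fd, buf, len);
        if (n < 0) {
            if (err) {
                err->code = errno;
                strncpy(err->msg, strerror(errno), sizeof err->msg - 1);
                err->msg[sizeof err->msg - 1] = '\0';
            }
            return 0;
        }
        return (size_t)n;
    }

    /* For comparison, the errno style the quoted snippet is reacting to:
     *
     *     ssize_t n = read(fd, buf, sizeof buf);
     *     if (n < 0)
     *         perror("read");   // error folded into the return value
     */

Either way the caller ends up checking something after every call; the disagreement above is only about which channel the error comes back on.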