I mostly agree, but if you read one character at a time it's likely
you'll become quite slow, in general. An external process providing
`buffering' so you can seek back if you want seems to me a more
general solution that does not require a kernel change.
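
Something like this rough sketch, say: a spooler that copies its input
to a file, so the consumer can open the file and seek back whenever it
wants. This is untested Plan 9 C, and "spool" is just a made-up name:

	#include <u.h>
	#include <libc.h>

	/* spool file: copy fd 0 into file; readers of file can seek freely */
	void
	main(int argc, char *argv[])
	{
		char buf[8192];
		long n;
		int fd;

		if(argc != 2)
			sysfatal("usage: spool file");
		fd = create(argv[1], OWRITE, 0666);
		if(fd < 0)
			sysfatal("create: %r");
		while((n = read(0, buf, sizeof buf)) > 0)
			if(write(fd, buf, n) != n)
				sysfatal("write: %r");
		exits(nil);
	}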

In any case, if I gave the impression that it's not worth
experimenting, I apologize. That's not what I meant to say.


On Sat, Dec 5, 2009 at 5:32 PM, ron minnich <rminn...@gmail.com> wrote:
> On Sat, Dec 5, 2009 at 3:44 AM, Francisco J Ballesteros <n...@lsub.org> wrote:
>
>> If you insist on 'unreading', you could just put a front-end process that
>> keeps per-request data so that your external process can ask the
>> front-end for all the data again.
>
> The easiest way to implement unread is not to read in the first place.
>
> If you're only reading small amounts of data, say less than 1024
> bytes, and then forking a process to handle the rest, then by all
> means don't use IO that reads in lots of data you may not want.
> Instead:
>
> read(fd, &c, 1);
>
> and then there's no "overread" to deal with.
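
To spell that out, a sketch of such a loop, assuming a
newline-terminated header (untested; everything after the newline
stays unread in fd):

	char c, hdr[1024];
	int n;

	n = 0;
	while(n < sizeof hdr - 1 && read(fd, &c, 1) == 1){
		hdr[n++] = c;
		if(c == '\n')
			break;
	}
	hdr[n] = 0;	/* fd now sits exactly past the header */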
>
> That said, you can prototype unread(), so why not?
> unread(fd, data, size);
>
> Attach the "unread" data to the open file struct, modify read so that
> if it sees this data it reads it first, try it out. Why not? Plan 9 is
> there to be hacked on, so hack it.
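
For example, the read path might start out like this, where c is the
open file struct, buf and n are the caller's buffer and count, and
ubuf/ulen are hypothetical pushback fields you would add; a sketch
only, not a tested patch:

	long m;

	/* serve pushed-back bytes before doing a normal read */
	if(c->ulen > 0){
		m = n < c->ulen ? n : c->ulen;
		memmove(buf, c->ubuf, m);
		c->ulen -= m;
		memmove(c->ubuf, c->ubuf+m, c->ulen);
		return m;
	}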
>
> Sam, the rule is, just do it. This hackability is one thing that makes
> Plan 9 so attractive.
>
> ron
