On 14 Jun 2010, at 22:35, Ilya Ilembitov wrote:

So, here is my question: if we take only modern, actively maintained projects, how standard are they? Suppose we have a browser engine that implements only the current standards (OK, maybe some legacy standards, but no IE quirks or other tweaks): will we still be able to use 95% of the web?

Probably, but why? There's nothing suckless at all about the standards coming out of the w3c. I don't know much about rendering html, but I recently wrote a web server, and while I started out with the noble intent of supporting the standards, before I was done I had to declare http 1.1 schizophrenic and delusional!

Consider this: out of the web browser and the web server, which one has to examine the data in order to render it, and which one is just reading it off the disk and dumping it down a pipe? Whose resources are at a premium, and which one mostly idles between page fetches? With those two questions in mind, can someone please tell me what the spec's authors were collectively smoking when they made content-type mandatory in http 1.1? The header forces the one party that never needs to look at the data to classify it, for the benefit of the party that has to parse it anyway. If that isn't argument enough, it's actually impossible to set content-type correctly from the file extension. No one really tries, and I very much doubt anyone ever did, but that didn't stop them from making it mandatory. Idiots.
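For the record, "setting it from the extension" boils down to a table lookup and a shrug. Something like this sketch (my own illustration, not code from my server or anyone else's; the table and the guess_type name are invented):

    #include <string.h>

    /* Map a file extension to a content-type. The table is a token sample;
     * anything not listed, and anything with no extension, is a pure guess. */
    static const struct { const char *ext, *type; } mimes[] = {
        { "html", "text/html" },
        { "css",  "text/css" },
        { "png",  "image/png" },
        { "txt",  "text/plain" },
    };

    static const char *
    guess_type(const char *path)
    {
        const char *ext = strrchr(path, '.');
        size_t i;

        if (!ext)
            return "application/octet-stream"; /* no extension: shrug */
        ext++;
        for (i = 0; i < sizeof(mimes) / sizeof(mimes[0]); i++)
            if (!strcmp(ext, mimes[i].ext))
                return mimes[i].type;
        return "application/octet-stream"; /* unknown extension: shrug */
    }

Nothing in there ever looks at the file's contents, which is the point: the value is a guess dressed up as a required header.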

"Schizophrenic" actually refers to a less serious problem, but still a bizarre one. Dates are provided in headers to guide caching, very useful in itself but the date format is about as long-winded as it can get and it's US-localised too. With that in mind, why are chunk length values for chunked encoding given in hex? That's not even consistent with the length value of content-length, which is decimal. And what titan amongst geniuses decided it was appropriate to apply chunked encoding to the http headers?

--
Complexity is not a function of the number of features. Some features exist only because complexity was _removed_ from the underlying system.

