Re: NetSurf at Wakefield 2008
On 27 Apr 2008 Paul Vigay <[EMAIL PROTECTED]> wrote:

> In a dim and distant universe <[EMAIL PROTECTED]>, Michael Drake
> <[EMAIL PROTECTED]> enlightened us thusly:
>> The developers present will be at least:
>> John-Mark Bell
>> Michael Drake
>> Rob Kendrick
>> Daniel Silverstone
>> ( http://www.netsurf-browser.org/about/team )
> I think that web page should have mug shots added to it, so we can
> recognise them all. :-)
> I must confess that I didn't recognise everyone on the NetSurf stand at
> Wakefield, and didn't really want to embarrass myself totally by having
> to go and ask who was who!

I regret that I embarrassed myself by trying to thank the developers, only
to be met with blank stares. Am I that much of a pariah? :-)

With best wishes,

Peter.

--
Peter, Anne and family Young
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
[EMAIL PROTECTED]
Re: NetSurf at Wakefield 2008
On Mon, 2008-04-28 at 11:48 +0100, Dr Peter Young wrote:

> I regret that I embarrassed myself by trying to thank the developers,
> only to be met with blank stares. Am I that much of a pariah? :-)

Who specifically did you try to thank? :) I think we like thanks, but
sometimes it can be a bit over the top!

B.
Re: NetSurf at Wakefield 2008
On 28 Apr 2008 Dr Peter Young <[EMAIL PROTECTED]> wrote:

> I regret that I embarrassed myself by trying to thank the developers,
> only to be met with blank stares. Am I that much of a pariah? :-)

You'll have to forgive them; they don't get out much and struggle with
concepts such as meeting users in the physical world. I walked past, but
they were still all blinking because of the strange daylight stuff :-)

Cheers
---Dave

--
David J. Ruck
Phone: +44-(0)7974 108301
Email: [EMAIL PROTECTED]
Re: Linking to style sheets
In article <[EMAIL PROTECTED]>, Brian Howlett <[EMAIL PROTECTED]> wrote:

> On 27 Apr, [EMAIL PROTECTED] wrote:
>> On 27 Apr, [EMAIL PROTECTED] wrote:
> [snip]
>> Thanks, I'm using the XHTML 1.0 Transitional DOCTYPE.
>> Are you using a 4.01 DOCTYPE?
> Yes I am - the only other thing is to make sure the stylesheet is in
> the same level directory/folder as the HTML file, otherwise you will
> need to expand the "href" to point to the correct folder - mine is in
> a folder called "stuff", so the link I posted before actually reads

May I recommend, however, that you produce new documents using the full
xhtml/xml syntax - a little future-proofing is never a bad thing.

Any element which has no content needs what is formally called an 'empty
tag'. Thus, for example, the hr element should be written as <hr /> !
This is a sort of short-hand for "<hr></hr>" - which you could use - but
don't blame me if MSIE objects! NOTE that all element names are in "lower
case" - upper case names are not valid! The space preceding the solidus is
NOT part of the syntax, but some versions of MSIE will 'barf' if the space
isn't there.

Producing the above link element therefore should be something like:

<link rel="stylesheet" type="text/css" href="stuff/styles.css" />

Most valid elements in the header require this kind of empty tag -
elements such as img, hr and others in the body also, of course, have
empty tags.

You can go a step further in future-proofing by avoiding elements likely
to become obsolescent in full xhtml in future - br, b, i, font - and a few
others. There are several ways of achieving the same effects using other
elements and/or style sheets.

Keith
--
Inspired!
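As a worked illustration of the empty-tag syntax described above, a minimal XHTML 1.0 Transitional document might look like the following. The title and the stylesheet filename are placeholders, not anything from the original thread:

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
 <head>
  <title>Example page</title>
  <!-- empty elements take the trailing " />"; note lower-case names -->
  <link rel="stylesheet" type="text/css" href="stuff/styles.css" />
 </head>
 <body>
  <p>Some text above a rule.</p>
  <hr />
 </body>
</html>
```

The space before the solidus in `<hr />` and `<link ... />` is the MSIE-compatibility habit Keith mentions; XML itself accepts `<hr/>` equally well.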
GSoC participant reporting for duty
Hello,

I'm one of the Google Summer of Code participants this year. More
specifically, I'll be working on a new HTML parser for NetSurf with
John-Mark Bell.

What exactly an HTML parser does may not be immediately apparent, so
here's a rough explanation of what I'll be doing. :)

In one line: an HTML parser takes an HTML file and turns it into a
structure in memory that can be used by the display engine to render to
the screen.

In a little more detail: an HTML file usually looks something like:

<html>
 <head>
  <title>A page</title>
 </head>
 <body>
  <h1>A heading</h1>
  <p>A paragraph: some <b>bold text!</b> And this is normal.</p>
  <p>Another paragraph</p>
 </body>
</html>

The parser takes the HTML file and turns it into a tree-shaped structure
in RAM which the display engine can look around much more easily than just
trying to read the above text. For the above file, you might get something
looking like this (use your imaginations!):

html
+- head
|  +- title
|     +- "A page"
+- body
   +- h1
   |  +- "A heading"
   +- p
   |  +- "A paragraph: some "
   |  +- b
   |  |  +- "bold text!"
   |  +- " And this is normal."
   +- p
      +- "Another paragraph"

Each start tag (the bits in angle brackets, <>) gets its own branch of the
tree. This is useful for a whole variety of reasons: for example, now it's
really easy to see what should be highlighted if you wanted the display
engine to display in bright red all paragraphs (<p> tags). You just run
down the tree, and find all the branches that are marked as paragraphs.

In other words, the parser turns the document from what the writer wrote
it as (a kind of text document) into a logical structure in memory, far
more suitable for manipulation.

Now, there's already a parser being used in NetSurf, or it wouldn't be
displaying anything at all - so why am I interested in writing a new one?
Well, the current one isn't really very good at dealing with badly-written
documents. If you like, it's like a passable Microsoft Word file
converter: you get most of the meaning out of the document, but some bits
of it aren't *quite* right.
This is because most people who write HTML just test in one browser, and
since how to parse HTML has never been formally defined, all the browsers
do it slightly differently. Actually, HTML parsing in the major non-IE
browsers (Safari, Firefox, Opera) is mostly reverse-engineered from IE's
behaviour, but reverse-engineering is both a) not very easy and b) very
error-prone, so they're all quite inconsistent.

NetSurf's current parser just hasn't had the time spent on it that the big
web browsers' parsers have, which is over a decade of work. It would be
silly of me to suggest, then, that in one summer someone could
reverse-engineer all these browsers and write a brand new parser that
parsed every page like the other browsers do.

It's lucky, then, that most of the reverse-engineering work is already
done. :) There's a new version of HTML on the horizon, HTML5, and its
editor has spent many years looking at how the different browsers go about
parsing badly-written documents. The draft specification includes very
carefully-written rules that combine the best aspects of each browser's
parsers.

My job, then, is to implement the parsing bit of HTML5. In doing this,
NetSurf gets that little bit closer to the major browsers, and the world
gets a new HTML parser written in the C programming language, which can
hopefully be reused by many other projects over the course of time.

I hope I've explained myself well enough, and I look forward to helping
give NetSurf's users a better-parsed web. :)

Cheers,
--
Andrew Sidwell