On 29 October 2014 19:00, FRIGN <d...@frign.de> wrote:
> As nice as this all sounds, this is far away from reality, making you
> sound more or less like Richard Stallman philosophing about the free
> software revolution now having been waiting 25 years to happen.
Indeed! I feel like it too, unfortunately. But, in my alternate
universe, I can picture systems running it quite happily :)

> Working in an advertising agency myself, I know the underlying issue
> with the semantic web: It looks great on the paper but in reality,
> people really want to have full freedom in how their sites are
> structured.
> As a company, same as in software development, usually not the
> best solution but the fastest (= cheapest in the short-term) one
> is desired. Having to work on giant HTML-shipwrecks every day and
> how many hours I am paid to do that, I see the importance of clean
> semantics, however, companies don't learn easily, let alone customers
> who don't know the matter (all websites look the same to them in the
> end).

I recognise that problem too; my current thinking is that there is a
finite set of underlying website structures and layouts, and that a
common subset can represent most of them. I'm using the word
'semantics' for a limited but necessary subset that describes the
content, not the complex extent that has been proposed with the
semantic web ideas.

> To really introduce changes to the web, it would have to be carried
> out by a conglomerate of big corporations at best behind the major
> browsers pushing a common agenda forward (starting in the codebases).
> As soon as the major browsers support a new type of markup-language,
> which could be much easier to write and develop with, a real change
> could be made.
> However, philosophing about a new way won't help much as long as it
> hasn't hit the browsers themselves.
> Moving away from XML would be a blessing! It's what I'd consider
> the switch from Web 2.0 to 3.0.

For the same reason that most Wikipedia pages lead back to philosophy -
we need to start from somewhere! One potential method is some sort of
interface/translator sitting in front of existing websites (for
'read-only' websites, this would be the equivalent of a web scraper).
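To make the idea concrete, here's a rough sketch of what such a
per-site translator could look like - purely hypothetical, just a
stdlib Python scraper that boils arbitrary HTML down to a small fixed
semantic structure (title, headings, links); a real one would be
tailored to each site's markup:

```python
# Hypothetical per-site "translator": reads an existing HTML page and
# re-emits it as a small, fixed semantic structure. A sketch only.
from html.parser import HTMLParser

class SiteTranslator(HTMLParser):
    """Extracts a minimal semantic summary from arbitrary HTML."""
    def __init__(self):
        super().__init__()
        self._tag = None
        self.document = {"title": "", "headings": [], "links": []}

    def handle_starttag(self, tag, attrs):
        self._tag = tag
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.document["links"].append(href)

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._tag == "title":
            self.document["title"] = text
        elif self._tag in ("h1", "h2", "h3"):
            self.document["headings"].append(text)

    def handle_endtag(self, tag):
        self._tag = None

# Stand-in for a fetched page, to keep the sketch self-contained.
page = """<html><head><title>Example</title></head>
<body><h1>News</h1><p>Text.</p><a href="/archive">Archive</a></body></html>"""

t = SiteTranslator()
t.feed(page)
print(t.document)
# -> {'title': 'Example', 'headings': ['News'], 'links': ['/archive']}
```

The point is that the output side is a fixed, site-independent
vocabulary; only the extraction logic varies per site.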
Of course, the translators would need to be tailored per site;
although writing one is not necessarily difficult for a programmer,
the approach suffers from the sheer quantity of the web already out
there, and from its own general suckiness.

Cheers,
Ralph

--
Tai Chi Minh Ralph Eastwood
tcmreastw...@gmail.com