Hi, I have a bookmark file from Opera 6.11 that contains a lot of duplicate entries.
I would like to remove all duplicate entries without destroying the structure of the file. I have tried this with a set of scripts that converted the file into a format that could be sent through 'uniq', but somewhere along the way the structure got mangled and all my folder settings were lost.

An Opera bookmark file has the following syntax:

--begin-of-file--
Opera Hotlist version 2.0
Options: encoding = utf8, version=3

#FOLDER
NAME=software
CREATED=1025797561
ORDER=0

#URL
NAME=Arachnophilia Home Page
URL=http://www.arachnoid.com/arachnophilia/
CREATED=976878001
VISITED=1025962454
ORDER=0

-

#FOLDER
...

-
--end-of-file--

The lines at the top can easily be copied back in should they get lost, so they are not much of a concern to me. But all '#FOLDER' blocks, all empty lines, and all lines containing a single '-' must be preserved. The values of 'CREATED' and 'VISITED' can be ignored for the comparison, and the value of 'ORDER' should be reset to a bare 'ORDER=' (that way Opera will regenerate the ORDER values when the file is loaded).

An additional problem I discovered yesterday is that Murphy's law applies even to Opera's bookmark file... The bookmarks are sorted alphabetically, but only on name, so I found some runs of blocks like:

NAME=tripod
URL=http://www.tripod.com
...
NAME=tripod
URL=http://www.tripod.lycos.com
...
NAME=tripod
URL=http://www.tripod.com
...
NAME=tripod
URL=http://www.tripod.lycos.com
...

So the script would have to look back two blocks, not just one. In other words, it should read several lines into a couple of variables, then modify and compare those variables to decide which lines can be deleted.

Any suggestions?

TIA
--
# Mertens Bram "M8ram" <[EMAIL PROTECTED]>   Linux User #249103 #
# Red Hat Linux release 7.3 (Valhalla)   kernel 2.4.18-3   i686   128MB RAM #
# 11:24pm up 9 days, 3:38, 1 user, load average: 0.75, 0.89, 0.73 #
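
P.S. To make the question a bit more concrete, here is a rough, untested Python sketch of the kind of script I have in mind. It assumes blocks are separated by blank lines, compares #URL blocks only on their NAME= and URL= lines, and keeps a set of every NAME/URL pair seen so far rather than looking back exactly two blocks; the script name and file names are just placeholders.

#!/usr/bin/env python
# Rough sketch, untested: drop duplicate #URL blocks from an Opera Hotlist
# file while keeping #FOLDER blocks, blank lines and single '-' lines intact.
# Assumptions: blocks are separated by blank lines; two #URL blocks count as
# duplicates when their NAME= and URL= lines match (CREATED=/VISITED= are
# ignored, and every ORDER= value is reset to a bare 'ORDER=').
import sys

def read_blocks(lines):
    """Split the file into blocks; a blank line ends a block and is
    passed through as a one-line block of its own."""
    block = []
    for line in lines:
        if line.strip() == '':
            if block:
                yield block
                block = []
            yield [line]
        else:
            block.append(line)
    if block:
        yield block

def url_key(block):
    """Comparison key for a #URL block: only NAME= and URL= matter."""
    name = url = None
    for line in block:
        stripped = line.strip()
        if stripped.startswith('NAME='):
            name = stripped
        elif stripped.startswith('URL='):
            url = stripped
    return (name, url)

def reset_order(line):
    """Blank out the ORDER value but keep any leading indentation."""
    if line.strip().startswith('ORDER='):
        indent = line[:len(line) - len(line.lstrip())]
        return indent + 'ORDER='
    return line

def main():
    seen = set()
    output = []
    for block in read_blocks(open(sys.argv[1]).read().splitlines()):
        if block[0].strip().startswith('#URL'):
            key = url_key(block)
            if key in seen:
                # duplicate bookmark: skip the whole block (this may leave
                # a stray extra blank line where the duplicate used to be)
                continue
            seen.add(key)
        output.extend(reset_order(l) for l in block)
    sys.stdout.write('\n'.join(output) + '\n')

if __name__ == '__main__':
    main()

Something like 'python dedup-hotlist.py opera6.adr > opera6.clean' is how I would run it (I am guessing at the hotlist file name). Does this look like a sane approach?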