Re: [9fans] the fossil (tm) stolen
> I really use fossil and venti and have done so for the last 12 years,
> though perhaps I am nobody...

So we are 2 nobodies, at least... ;)

Pavel
Re: [9fans] the fossil (tm) stolen
3 nobodies; I switched to fossil+venti about 4 years ago. I also know
of at least 2 other nobodies that run it.

-Skip

On Mon, Apr 16, 2018, 12:10 AM Pavel Klinkovský wrote:

>> I really use fossil and venti and have done so for the last 12 years,
>> though perhaps I am nobody...
>
> So we are 2 nobodies, at least... ;)
>
> Pavel
Re: [9fans] the fossil (tm) stolen
Four. Long-standing nobody, too.

Lucio.
Re: [9fans] the fossil (tm) stolen
Five. I just haven't had the heart to decommission it yet.

On Mon, Apr 16, 2018 at 4:22 AM, Lucio De Re wrote:
> Four. Long-standing nobody, too.
>
> Lucio.
Re: [9fans] the fossil (tm) stolen
Six, since 2007 at home and on several industrial plants.

adriano

> Five. I just haven't had the heart to decommission it yet.
>
> On Mon, Apr 16, 2018 at 4:22 AM, Lucio De Re wrote:
>> Four. Long-standing nobody, too.
>>
>> Lucio.
Re: [9fans] the fossil (tm) stolen
Hello,

Recently, I tried fossil on 9front (libventi library) and came across
two issues:

1. flfmt -v fails with "no qidSpace".

2. fossil hangs while snap -a is working in the background. This
happens during the initial load, when there is more data to work
through; a few GB of un-venti'd data can trigger it.

These are probably issues that do not come up during routine usage.

Details of the issues:

1. flfmt -v

snap -a in the console: http://okturing.com/src/4288/body

    main: archive vac:5c1bb334c0e40b080c245409d0a9ea8b73e9d074

"no qidSpace" with the same vac (fossil/last reports the same vac):
http://okturing.com/src/4286/body

dumpvacroots and vacchain.rc: http://okturing.com/src/4287/body

I should have tried this without the 'vac:' prefix:

    fossil/flfmt -v vac:da39a3ee5e6b4b0d3255bfef95601890afd80709 /dev/bkp/fossil

(this is the 'short' vac reported by vacchain.rc). I no longer have
the fossil install to try it now, but hopefully this information
proves useful to the next guy coming along (a sketch of the commands
is below, after the quoted thread).

2. fossil hang

Stack traces when the hang happens:
http://okturing.com/src/4254/body
http://okturing.com/src/4267/body

fossil setup: http://okturing.com/src/4268/body

From fossil to venti, it took a day or more to push through 30GB of
uncompressed data, whereas vac did the same within an hour or so.
Though one does not get the snapshot and active partitions, vac
appears to be much better attuned to venti than fossil is.

Again, I am a newbie and might be missing something fundamental.
Hopefully this documents some hurdles for the next guy coming along.

Thanks
Joe

Adriano Verardo wrote:
> six, since 2007 at home and on several industrial plants
> adriano
>
> Five. I just haven't had the heart to decommission it yet.
>
> On Mon, Apr 16, 2018 at 4:22 AM, Lucio De Re wrote:
>> Four. Long-standing nobody, too.
>>
>> Lucio.
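For the record, a minimal sketch of the recovery path issue 1 is
aiming at, reusing the /dev/bkp/fossil path from the message above.
Whether flfmt -v wants the bare hex score or the vac:-prefixed form is
exactly the open question here, so this strips the prefix before
handing it over:

    #!/bin/rc
    # print the score of the last archival snapshot recorded in the
    # fossil super block (see fossil(4)), stripping any vac: prefix
    score=`{fossil/last /dev/bkp/fossil | sed 's/^vac://'}
    echo last archive: $score
    # reformat the fossil from that score, pulling the tree from venti
    fossil/flfmt -v $score /dev/bkp/fossil

Untested, for the same reason given above; treat it as a starting
point, not a recipe.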
[9fans] fossil+venti vs. cwfs - dealing with backups
What has kept me running fossil+venti is the ease of backing up the
file server. Copying the venti arenas offsite is trivial. And Geoff
put together glue to write sealed arenas to blu-ray as well.

I don't see any simple way to do that with cwfs*. Or hjfs. I am very
curious to know how the not-fossil/venti FS servers are being backed
up. Share?

--lyndon
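To make "copying the arenas offsite" concrete, a rough sketch of the
sort of loop meant above. The arena names, partition path, and
destination are all invented; venti-backup(8) describes the real
rdarena/wrarena procedure:

    #!/bin/rc
    # copy each sealed arena out of the arena partition; which arenas
    # are sealed has to be determined separately (venti's http status
    # pages show it)
    for(a in arenas0 arenas1 arenas2){
        venti/rdarena /dev/sdC0/arenas $a > /n/offsite/venti/$a
    }

Since a sealed arena never changes, each one only has to be copied
once.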
Re: [9fans] fossil+venti vs. cwfs - dealing with backups
The easiest method with cwfs or Ken's is to keep track of the size of
the WORM: since everything is appended, it's fairly simple to copy the
set of new blocks after each dump (a rough sketch follows below the
quote). It's been a few years since I've done this, but it is just as
reliable as venti, albeit less convenient.

On Mon, Apr 16, 2018 at 6:15 PM, Lyndon Nerenberg wrote:
> What has kept me running fossil+venti is the ease of backing up the
> file server. Copying the venti arenas offsite is trivial. And Geoff
> put together glue to write sealed arenas to blu-ray as well.
>
> I don't see any simple way to do that with cwfs*. Or hjfs. I am very
> curious to know how the not-fossil/venti FS servers are backing up.
> Share?
>
> --lyndon
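A sketch of that incremental copy, with invented paths and an assumed
block size; the current block count has to come from the file server's
own accounting (the console's statw output, for instance), so it is
passed in by hand here:

    #!/bin/rc
    # usage: wormcopy endcount
    # copy the worm blocks appended since the previous dump
    bs=16384                          # worm block size -- an assumption
    last=`{cat $home/lib/wormmark}    # block count recorded last time
    end=$1                            # current count, from the console
    n=`{echo $end - $last | hoc}
    dd -if /dev/sdC0/worm -of /n/offsite/worm.$last -bs $bs -iseek $last -count $n
    echo $end > $home/lib/wormmark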