Noam Preil:
> I have a
> branch at https://git.sr.ht/~pixelherodev/plan9 which has fossil
> integrated.
I took the liberty of having a look at the fossil source in that repo.
It seems to be missing the fossil-time-backward patch. That's on the
current 9legacy distribution ISO (built 14 April 2023)
Indeed, the coupling is moderately loose (I found one constant shared in
the code I compiled for 9front - Fossil from somewhere, probably p9p, but
maybe not - the 56000-byte Venti block size, I believe). But Fossil without
Venti is a much less valuable component, as I understand it. And Fossil
wit
Fossil will run without venti, but the moment you connect it to a venti, it
cannot be standalone again, as it stands.
On Sat, 18 May 2024 at 14:50, Lucio De Re wrote:
> Please include me as well. I have an unambitious plan I would like to
> experiment with. And the most advanced version of Fossi
Please include me as well. I have an unambitious plan I would like to
experiment with. And the most advanced version of Fossil would fit
nicely into that. Also, am I mistaken in believing that in all of 9legacy,
9front and p9p, Fossil and Venti need to be treated as a bundle, possibly
starting with
Just a simple note. When I compared the fossil version posted by Moody in the
original discussion thread to the one I am using (and IIRC it is the one in
the 9legacy git repository), I found that they differed in 2 points. One was
the increase of a msg buffer, which is probably no big issue, bu
> Responding off list shortly :)
I'd like to be included into the discussion as well.
Thanks.
--
David du Colombier
Responding off list shortly :)
Noam Preil:
> I demonstrated one
> of the problems with fossil by (attempting to) install Go, which crashes
> the file system _every single time_.
This is a useful bit of evidence that needs following up. The go test
suite (which begins by installing and completely rebuilding go) is
running 24/7
The easiest method with cwfs or Ken's is to keep track of the size of
the WORM - since everything is appended, it's fairly simple to copy
the set of blocks after each dump. It's been a few years since I've
done this, but it is just as reliable as venti, albeit less
convenient.
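A rough sketch of what that can look like, with every name and number here invented for illustration: suppose the WORM sits on /dev/sdC0/worm with 16384-byte blocks, the previous dump ended at block 100000, and the file server console now reports 120000 blocks in use; only the newly appended tail needs copying.

	dd -if /dev/sdC0/worm -of /n/backup/worm.100000-120000 -bs 16384 -iseek 100000 -count 20000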
On Mon, Apr 16, 2018
What has kept me running fossil+venti is the ease of backing up the file
server. Copying the venti arenas offsite is trivial. And Geoff put
together glue to write sealed arenas to blu-ray as well.
I don't see any simple way to do that with cwfs*. Or hjfs. I am very
curious to know how the
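For what it's worth, the stock tools already cover the mechanical part of this; a hedged sketch, with host, partition and arena names invented:

	venti/rdarena /dev/sdC0/arenas arenas0 > /n/offsite/arenas0
	venti/wrarena -h tcp!backup!venti /n/offsite/arenas0

Roughly: rdarena dumps one (ideally sealed) arena to a file you can ship offsite, and wrarena can replay its blocks into a second venti.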
Hi Jim,
It's important to point out that the arena size does not have to match
the size of an arenas file. In my case, I do something similar where I
use 2GB for an arena but keep my arenas files at 2GB (I don't have
much use for keeping multiple arena files).
More indexes help to an extent. My f
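For readers following the sizing discussion: the arena size is fixed when the arenas partition (or file) is formatted, independently of how large that partition is, so the two numbers need not match. A hedged example, with the device name and size purely illustrative:

	venti/fmtarenas -a 2g arenas0 /dev/sdC0/arenas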
On Wed, Oct 19, 2016 at 9:47 AM Steven Stallion wrote:
> In short, start small and grow as needed. For reference, when I ran
> Coraid's fs based on 64-bit Ken's (WORM only, no dedupe) in RWC
> (based on the main fs in Athens). Over the course of a few years
> the entire WORM grew to around 35GB. T
I was looking over the 9atom install script and I saw it appeared
to code in support for building filesystems based on kfs,
fossil, or fossil+venti, but it only surfaced kfs and fossil+venti.
I was wondering why that was. Does anyone know?
Jim
i agree absolutely with steve here, expanding venti arena by arena is easy,
the ventibackup scripts show you how. even easier is to add arenas on a
different disk partition to the same venti.
personally i wouldn't keep music or videos in venti. they don't compress well
using the arithmetic te
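A hedged sketch of the "add arenas on a different disk partition" case, with paths invented for illustration: after formatting the new partition with venti/fmtarenas, it becomes one more arenas line in the venti configuration; extending or rebuilding the index to cover the new arenas is the other half of the job, and the scripts mentioned above have the authoritative recipe.

	index main
	isect /dev/sdC0/isect
	arenas /dev/sdC0/arenas
	arenas /dev/sdD0/arenas
	bloom /dev/sdC0/bloom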
> On 20 Oct 2016, at 19:41, Steven Stallion wrote:
>
>> On Thu, Oct 20, 2016 at 1:15 PM, wrote:
>> Steven Stallion writes:
>>
>>> Sizing venti is also simple.
>>
>> I disagree with this. The best way to configure venti depends largely
>> on how you plan to use it. I have multiple venti s
On Thu, Oct 20, 2016 at 1:15 PM, wrote:
> Steven Stallion writes:
>
>> Sizing venti is also simple.
>
> I disagree with this. The best way to configure venti depends largely
> on how you plan to use it. I have multiple venti servers configured for
> different uses. For example, I keep my DVD
"James A. Robinson" writes:
> Anyone able to tell me whether or not there are
> disk size limits I should beware of given a limited
> amount of system memory in a file server?
Although there have been some replies on this thread, none of them have
really yet directly answered your question. Whe
Steven Stallion writes:
> Sizing venti is also simple.
I disagree with this. The best way to configure venti depends largely
on how you plan to use it. I have multiple venti servers configured for
different uses. For example, I keep my DVD images on a different venti
server than I do for smal
On Wed, Oct 19, 2016 at 10:13 AM Aram Hăvărneanu wrote:
> There are cheaper ways of disposing of 10TB of data.
>
If I decide the configuration is problematic
I'm sure I can repurpose the device.
Besides, the cost of spinning disk these
days is amazingly low. As, I think, the
developers for Pl
Hi Jim,
It probably helps to break apart fossil and venti for the sake of the
conversation. While you can use fossil as a standalone filesystem, it
is effectively your write cache in this scenario since it will be
backed by venti. Conventional wisdom is to size your main fossil fs
based on how muc
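To illustrate the split (device name and times are made up; the flags shown are, if memory serves, the common install defaults, and fossil(4) and fossilcons(8) have the real interface): a comparatively small fossil partition fronts a large venti, and the archival snapshot schedule determines how much unarchived data the fossil partition must hold.

	fsys main config /dev/sdC0/fossil
	fsys main open -AWP
	fsys main snaptime -a 0500 -s 60 -t 2880

Archival snapshots (the -a time) are what end up in venti, so the fossil partition only has to hold roughly what changes between archives.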
There are cheaper ways of disposing of 10TB of data.
--
Aram Hăvărneanu
Anyone able to tell me whether or not there are
disk size limits I should beware of given a limited
amount of system memory in a file server?
What I'm wanting to try and do is get a hardware
RAID1+0 enclosure and put in 20TB of disk (so
10TB of usable space).
The board I am looking at will take
On Sun May 10 14:36:15 PDT 2015, cinap_len...@felloff.net wrote:
> how is this the opposite? your patch shows the tcb->mss init being removed
> completely from tcpincoming().
>
> -	/* our sending max segment size cannot be bigger than what he asked for */
> -	if(lp->mss != 0 && lp->ms
how is this the opposite? your patch shows the tcb->mss init being removed
completely from tcpincoming().
-	/* our sending max segment size cannot be bigger than what he asked for */
-	if(lp->mss != 0 && lp->mss < tcb->mss) {
-		tcb->mss = lp->mss;
-		tpriv
> 2.a) tcpiput() gets a ACK packet for Listening connection, calls
> tcpincoming().
> 2.b) tcpincoming() looks in limbo, finds lp. and makes new connection.
> 3.c) initialize our connections tcb->mss.
>
> > * the setting of tcb->mss in tcpincoming is not correct, tcp->mss is
> > set by SYN, not b
On Sun May 10 10:58:55 PDT 2015, 0in...@gmail.com wrote:
> >> however, after fixing things so the initial cwind isn't hosed, i get a
> >> little better story:
> >
> > so, actually, i think this is the root cause. the initial cwind is misset
> > for loopback.
> > i bet that the symptom folks will
> * the SYN-ACK needs to send the local mss, not echo the remote mss.
> asymmetry is "fine" on the other side, even if ip/tcp.c isn't smart enough to
> keep tx and rx mss separate. (scare quotes = untested, there may be
> some performance niggles if the sender is sending legal packets larger than
>> however, after fixing things so the initial cwind isn't hosed, i get a
>> little better story:
>
> so, actually, i think this is the root cause. the initial cwind is misset for
> loopback.
> i bet that the symptom folks will see is that /net/tcp/stats shows
> fragmentation when
> performance
> however, after fixing things so the initial cwind isn't hosed, i get a little
> better story:
so, actually, i think this is the root cause. the initial cwind is misset for
loopback.
i bet that the symptom folks will see is that /net/tcp/stats shows
fragmentation when performance sucks. evide
for what it's worth, the original newreno work tcp does not have the mtu
bug. on a 8 processor system i have around here i get
bwc; while() nettest -a 127.1
tcp!127.0.0.1!40357 count 10; 81920 bytes in 1.505948 s @ 519 MB/s (0ms)
tcp!127.0.0.1!47983 count 10; 81920 bytes in 1.3779
2015-05-09 10:35 GMT-07:00 Lyndon Nerenberg :
>
> On May 9, 2015, at 10:30 AM, Devon H. O'Dell wrote:
>
>> Or when your client is on a cell phone. Cell networks are the worst.
>
> Really? Quite often I slave my laptop to my phone's LTE connection, and I
> never have problems with PMTU. Both her
> On May 9, 2015, at 10:25 AM, Lyndon Nerenberg wrote:
>
>
>> On May 9, 2015, at 7:43 AM, erik quanstrom wrote:
>>
>> easy enough until one encounters devices that don't send icmp
>> responses because it's not implemented, or somehow considered
>> "secure" that way.
>
> Oddly enough, I don'
On May 9, 2015, at 10:30 AM, Devon H. O'Dell wrote:
> Or when your client is on a cell phone. Cell networks are the worst.
Really? Quite often I slave my laptop to my phone's LTE connection, and I
never have problems with PMTU. Both here (across western Canada) and in the UK.
2015-05-09 10:25 GMT-07:00 Lyndon Nerenberg :
>
>
> On May 9, 2015, at 7:43 AM, erik quanstrom wrote:
>
> > easy enough until one encounters devices that don't send icmp
> > responses because it's not implemented, or somehow considered
> > "secure" that way.
>
> Oddly enough, I don't see this 'pro
On May 9, 2015, at 7:43 AM, erik quanstrom wrote:
> easy enough until one encounters devices that don't send icmp
> responses because it's not implemented, or somehow considered
> "secure" that way.
Oddly enough, I don't see this 'problem' in the real world. And FreeBSD is far
from being alon
On Fri May 8 20:12:57 PDT 2015, cinap_len...@felloff.net wrote:
> do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
> as i see it, procsyn() is called only when tcb->state is Syn_sent,
> which only should happen for client connections doing a connect, in
> which case tcpsndsyn() w
yes, but i was not referring to the adjusting, which isn't changed here, only
the tcpmtu() call that got added.
yes, it *should* not make any difference but maybe we're missing
something. at worst it makes the code more confusing and causes bugs in
the future because one of the initializations of mss i
> Looking at the first few bytes in each dir of the initial TCP
> handshake (with tcpdump) I see:
>
> 0x: 4500 0030 24da <= from plan9 to freebsd
>
> 0x: 4500 0030 d249 4000 <= from freebsd to plan9
>
> Looks like FreeBSD always sets the DF (don't fragment) bit
>
> do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
> as i see it, procsyn() is called only when tcb->state is Syn_sent,
> which only should happen for client connections doing a connect, in
> which case tcpsndsyn() would have initialized tcb->mss already no?
tcb->mss may still ne
do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
as i see it, procsyn() is called only when tcb->state is Syn_sent,
which only should happen for client connections doing a connect, in
which case tcpsndsyn() would have initialized tcb->mss already no?
--
cinap
On Fri, 08 May 2015 21:24:13 +0200 David du Colombier <0in...@gmail.com> wrote:
> On the loopback medium, I suppose this is the opposite issue.
> Since the TCP stack didn't fix the MSS in the incoming
> connection, the programs sent multiple small 1500 bytes
> IP packets instead of large 16384 IP p
I confirm - my old performance is back.
Thanks very much David.
-Steve
I've finally figured out the issue.
The slowness issue only appears on the loopback, because
it provides a 16384 MTU.
There is an old bug in the Plan 9 TCP stack, where the TCP
MSS doesn't take into account the MTU for incoming connections.
I originally fixed this issue in January 2015 for the Plan 9
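A sketch of the shape of such a fix, purely illustrative and not the committed patch (function names and signatures vary between kernel versions): the incoming connection's MSS should start from what the local interface MTU allows and only shrink if the peer advertised less.

	/* illustrative only: when accepting in tcpincoming() */
	tcb->mss = tcpmtu(tcp, source, version, &tcb->scale);
	if(lp->mss != 0 && lp->mss < tcb->mss)
		tcb->mss = lp->mss;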
> oh. possibly the queue isn't big enough, given the window size.
> it's using qpass on a Queue with Qmsg and if the queue is full,
> Blocks will be discarded.
I tried to increase the size of the queue, but no luck.
--
David du Colombier
On 8 May 2015 at 17:13, David du Colombier <0in...@gmail.com> wrote:
> Also, the issue is definitely related to the loopback.
> There is no problem when using an address on /dev/ether0.
>
oh. possibly the queue isn't big enough, given the window size. it's using
qpass on a Queue with Qmsg and if
I've enabled tcp, tcpwin and tcprxmt logs, but there isn't
anything very interesting.
tcpincoming s 127.0.0.1!53150/127.0.0.1!53150 d 127.0.0.1!17034/127.0.0.1!17034 v 4/4
Also, the issue is definitely related to the loopback.
There is no problem when using an address on /dev/ether0.
cpu% cat /n
> cpu% cat /net/tcp/3/local
> 127.0.0.1!57796
> cpu% cat /net/tcp/3/remote
> 127.0.0.1!17034
> cpu% cat /net/tcp/3/status
> Established qin 0 qout 0 rq 0.0 srtt 80 mdev 40 sst 1048560 cwin
> 258192 swin 1048560>>4 rwin 1048560>>4 qscale 4 timer.start 10
> timer.count 10 rerecv 0 katimer.start 2400
> NOW is defined as MACHP(0)->ticks, so this is a pretty coarse timer
> that can't go backwards on intel processors. this limits the timer's
> resolution to HZ, which on 9atom is 1000, and 100 on pretty much
> anything else. further limiting the resolution is the tcp retransmit
> timers which
On Tue May 5 15:54:45 PDT 2015, ara...@mgk.ro wrote:
> It's pretty interesting that at least three people all got exactly
> 150kB/s on vastly different machines, both real and virtual. Maybe the
> number comes from some tick frequency?
i might suggest altering HZ and seeing if there is a throughp
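One hedged back-of-the-envelope in support of the tick theory: if a stalled connection manages roughly one ~1500-byte segment per timer tick, then at HZ=100 (10 ms ticks) that is 1500 * 100 ≈ 150 kB/s, uncomfortably close to the number everyone reports, while at HZ=1000 the same stall would show up as roughly 1.5 MB/s.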
On Wed May 6 14:28:03 PDT 2015, 0in...@gmail.com wrote:
> I got it!
>
> The regression was caused by the NewReno TCP
> change on 2013-01-24.
>
> https://github.com/0intro/plan9/commit/e8406a2f44
if you have proof, i'd be interested in reproduction of the issue from the
original source, or
perh
On Wed May 6 15:30:24 PDT 2015, charles.fors...@gmail.com wrote:
> On 6 May 2015 at 22:28, David du Colombier <0in...@gmail.com> wrote:
>
> > Since the problem only happens when Fossil or vacfs are running
> > on the same machine as Venti, I suppose this is somewhat related
> > to how TCP behaves
On 6 May 2015 at 23:35, Steven Stallion wrote:
> Were these the changes that erik submitted?
I don't think so. Someone else submitted a different set of tcp changes
independently much earlier.
Definitely interesting, and explains why I've never seen the regression (I
switched to a dedicated venti server a couple of years ago). Were these the
changes that erik submitted? ISTR him working on reno bits somewhere around
there...
On Wed, May 6, 2015 at 4:28 PM, David du Colombier <0in...@gma
On 6 May 2015 at 22:28, David du Colombier <0in...@gmail.com> wrote:
> Since the problem only happens when Fossil or vacfs are running
> on the same machine as Venti, I suppose this is somewhat related
> to how TCP behaves with the loopback.
>
Interesting. That would explain the clock-like delays.
Since the problem only happens when Fossil or vacfs are running
on the same machine as Venti, I suppose this is somewhat related
to how TCP behaves with the loopback.
--
David du Colombier
I got it!
The regression was caused by the NewReno TCP
change on 2013-01-24.
https://github.com/0intro/plan9/commit/e8406a2f44
--
David du Colombier
On 6 May 2015 at 21:55, David du Colombier <0in...@gmail.com> wrote:
> However, now I'm sure the issue was caused by a kernel
> change in 2013.
>
> There is no problem when running a kernel from early 2013.
>
Welly, welly, welly, well. That is interesting.
Just to be sure, I tried again, and the issue is not related
to the lock change on 2013-09-19.
However, now I'm sure the issue was caused by a kernel
change in 2013.
There is no problem when running a kernel from early 2013.
--
David du Colombier
It's pretty interesting that at least three people all got exactly
150kB/s on vastly different machines, both real and virtual. Maybe the
number comes from some tick frequency?
--
Aram Hăvărneanu
Yes, I'm pretty sure it's not related to Fossil, since it happens with
vacfs as well.
Also, Venti was pretty much unchanged during the last few years.
I suspected it was related to the lock change on 2013-09-19.
https://github.com/0intro/plan9/commit/c4d045a91e
But I remember I tried to revert t
semlocks?
anyway, should not be too hard to figure out with /n/dump
--
cinap
On 5 May 2015 at 16:38, David du Colombier <0in...@gmail.com> wrote:
> > How many times do you time it on each machine?
>
> Maybe ten times. The results are always within ~5% of each other.
> Also, I restarted vacfs between each try.
It was the effect of the ram caches that prompted the question.
My experi
> I too see this, and feel, no proof, that things used to be better. I.e. the
> first time I read a file from venti it is very, very slow. subsequent reads
> from the ram cache are quick.
>
> I think venti used to be faster a few years ago. maybe another effect of this
> is the boot time seems s
I too see this, and feel, no proof, that things used to be better. I.e. the
first time I read a file from venti it is very, very slow. subsequent reads
from the ram cache are quick.
I think venti used to be faster a few years ago. maybe another effect of this
is the boot time seems slower than
>> I've just made some measurements when reading a file:
>>
>> Vacfs running on the same machine as Venti: 151 KB/s
>> Vacfs running on another machine: 5131 KB/s
>
>
> How many times do you time it on each machine?
Maybe ten times. The results are always within ~5% of each other.
Also, I restarted vacfs betw
Thanks Aram.
> I have spent some time
> debugging this, but unfortunately, I couldn't find the root cause, and
> I just stopped using fossil.
I tried to measure the performance effect of replacing individual components:
1) mbr or GRUB
2) pbs or pbslba
3) sdata or sdvirtio (sdvirtio is imported from 9legacy
On 4 May 2015 at 19:51, David du Colombier <0in...@gmail.com> wrote:
>
> I've just made some measurements when reading a file:
>
> Vacfs running on the same machine as Venti: 151 KB/s
> Vacfs running on another machine: 5131 KB/s
How many times do you time it on each machine?
Thanks Anthony.
> I bet if you re-run the same test twice in a
> row, you’re going to see dramatically improved
> performance.
I tried re-running ‘iostats md5sum /386/9pcf’.
The repeated read is very fast:
the first read was 152KB/s,
the second read 232MB/s.
> Your write performance in that test
Hello!
imho placing fossil, venti, isect, bloom and swap on a single drive is a bad
idea.
As written in http://plan9.bell-labs.com/sys/doc/venti/venti.html - "The
prototype Venti server is implemented for the Plan 9 operating system in
about 10,000 lines of C. The server runs on a dedicated dual 55
I'm experiencing the same issue as well.
When I launch vacfs on the same machine as Venti,
reading is very slow. When I launch vacfs on another
Plan 9 or Unix machine, reading is fast.
I've just made some measurements when reading a file:
Vacfs running on the same machine as Venti: 151 KB/s
Vacf
I have seen the same problem a few years back on about half of my
machines. The other half were fine. There was a 1000x difference in
performance between the good and bad machines. I have spent some time
debugging this, but unfortunately, I couldn't find the root cause, and
I just stopped using fos
The reason, in general:
In a fossil+venti setup, fossil runs (basically) as a
cache for venti. If your access just hits fossil, it’ll
be quick; if not, you hit the (significantly slower)
venti. I bet if you re-run the same test twice in a
row, you’re going to see dramatically improved
performance.
Hello, fans.
I’m running Plan 9 (labs) on a public QEMU/KVM service.
My Plan 9 system has a slow read performance problem.
I ran 'iostats md5sum /386/9pcf’; DMA is on, and the read result is 150KB/s,
but write performance is fast.
My Plan 9 system has a 200GB HDD, formatted with fossil+venti.
disk layout is
i should explain further, since this is sneaky. since we're calling
ARGBEGIN lots of times, we hit a special case. the defn is
#define ARGBEGIN	for((argv0||(argv0=*argv)),argv++,argc--;\
a subsequent call to ARGBEGIN will not reset argv0, and worse, argv0
can be pointing to bogus memory.
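A minimal sketch of the trap (this program is invented for illustration and is not from the fossil source): because ARGBEGIN only assigns argv0 when it is still nil, the second call below silently keeps whatever the first call captured.

	#include <u.h>
	#include <libc.h>

	static void
	parse(int argc, char **argv)
	{
		ARGBEGIN{
		case 'v':
			print("argv0 is %s\n", argv0);
			break;
		}ARGEND
	}

	void
	main(int argc, char **argv)
	{
		char *fake[] = {"fakeprog", "-v", nil};

		parse(2, fake);		/* argv0 becomes "fakeprog" */
		parse(argc, argv);	/* argv0 is NOT reset to the real argv[0] */
		exits(nil);
	}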
small but potentially deadly
diff -c /n/dump/2014/0402/sys/src/cmd/fossil/9fsys.c 9fsys.c
/n/dump/2014/0402/sys/src/cmd/fossil/9fsys.c:34,40 - 9fsys.c:34,40
char* curfsys;
} sbox;
- static char *_argv0;
+ char *_argv0;
#define argv0 _argv0
static char FsysAll[] = "all";
In article <20130603202129.ga84...@intma.in>, kh...@intma.in says...
>
> On Mon, Jun 03, 2013 at 03:41:39PM -0400, erik quanstrom wrote:
> > which is to say that the thesis that fossil sucks is refuted.
> >
> > - erik
>
> *now* I know what you guys meant by 'snarky comments.'
>
> "Just the plac
>> Richard mentioned fixing the snapshots bug in fossil. This
>> is about as close as we've come to examining the technical
>> issues.
>
> No: this *is* examining the technical issues. Richard has done
> actual engineering here; it's moderately depressing that many
> members of this list, and parti
Long-haul airlines can appear to have better safety statistics than
local services, because they spend proportionately more flying hours
in a straight-and-level steady state than in takeoff and landing where
most accidents occur. Similarly someone who has used fossil as a
production system over th
On Jun 3, 2013, at 15:50 , s...@9front.org wrote:
> Richard mentioned fixing the snapshots bug in fossil. This
> is about as close as we've come to examining the technical
> issues.
No: this *is* examining the technical issues. Richard has done
actual engineering here; it's moderately depressing
On Mon, Jun 3, 2013 at 3:17 PM, Steve Simon wrote:
> In the end we have to fall
> back on 'it works for me' done we?
>
I think there is a certain amount of wisdom in choosing and (more
importantly) accepting a tool. Provided you aren't attempting to hammer a
screw, there is a lot of variety out
On Mon, Jun 3, 2013 at 1:14 PM, Federico G. Benavento
wrote:
> Don't worry, I'm not going to bore you with my stories about how
> fossil/venti
> saved my life so many times and never lost a file, I'll just keep using it.
>
Now *that* sounds like a story worth listening to!
What I don't understand is how we do better
than anecdotal evidence; unless we write everything
in Z (heaven forbid).
I suppose we have some measures like 'XYZfs is simpler
so it's less likely to have bugs' or age 'ABCfs is so old
the bugs are more likely to have been found', but these
are sti
> Don't worry, I'm not going to bore you with my stories about how fossil/venti
> saved my life so many times and never lost a file, I'll just keep using it.
> Thanks for sharing your wisdom with the list.
I wasn't the one who complained about anecdotes. We just seem
to get lost in these words and
On Mon, Jun 03, 2013 at 03:41:39PM -0400, erik quanstrom wrote:
> which is to say that the thesis that fossil sucks is refuted.
>
> - erik
*now* I know what you guys meant by 'snarky comments.'
"Just the place for some Snark!" the 9fan cried,
As he landed his Apples with care;
Supporting each ma
On Jun 3, 2013, at 4:50 PM, s...@9front.org wrote:
>>> Certainly. And we're back at square one. Everyone has their own story
>>> about how they lost data.
>>
>> which is to say that the thesis that fossil sucks is refuted.
>
> I think it rather says that everyone has a story. Someone was
> comp
>> Certainly. And we're back at square one. Everyone has their own story
>> about how they lost data.
>
> which is to say that the thesis that fossil sucks is refuted.
I think it rather says that everyone has a story. Someone was
complaining about anecdotes, but that's what we've got.
Richard men
> No doubt, but you then do *exactly* the same thing with cwfs. To
> my certain knowledge, it is possible for the old file server to lose
> data and files, sometimes catastrophically so, forcing a recover main,
> and sometimes, a recover further back. That's unsurprising if you
> look at the
On Jun 3, 2013, at 8:45 AM, s...@9front.org wrote:
> I ran fossil on both hardware and under different virtual machines and
> eventually experienced file corruption on every single install.
This may have something to do with VM settings -- I vaguely recall some
buffering issues. Haven't had any f
> No doubt, but you then do *exactly* the same thing with cwfs.
Certainly. And we're back at square one. Everyone has their own story
about how they lost data.
-sl
On 3 June 2013 16:45, wrote:
> Saying "there is no problem" changes nothing. You can
> debate with the Grand Canyon for hours, but when you walk off the
> cliff you're still going to plummet to the ground.
>
No doubt, but you then do *exactly* the same thing with cwfs.
To my certain knowled
> what would be helpful, and move the discussion forward, is if someone
> could try to replicate this with unclean shutdowns after various file
> operations. i suspect that it won't repeat. but either way, it
> will move the discussion forward.
For what it's worth, unclean shutdowns resulted in
> The point I was making is that it's amusing how much effort goes into the
> annual "fossil does NOT suck!" parade on this mailing list. I'd be
i believe you may have misread the emails. iirc, the way this started was
a random jibe at fossil to the tune of "fossil is teh suck. data = lossage."
it
On 3 June 2013 12:49, Kurt H Maier wrote:
> I *know* fossil has had problems,
> because I've lost data to it. Once a bug kills my data, that software
> doesn't land on my computer again, full stop.
>
Sure. But I've lost nothing with fossil and I did indeed lose things with
the old file server.
> I see that
> in this thread we've made progress: someone has admitted that fossil
> _used_to_be_ unreliable. (I expect even this assault on the sanctity of
> fossil will now be repelled.)
I think not. The archive bug was well known, and you'll find several
conversations about it over the year
> The point I was making is that it's amusing how much effort goes into the
> annual "fossil does NOT suck!" parade on this mailing list. I'd be
> interested to know if anyone who has been burned by fossil has been
> convinced to give it another try.
I'd swap fossil for any number of Unix-ey file sy
On Sun, Jun 02, 2013 at 10:45:53PM -0400, erik quanstrom wrote:
>
> sorry, what point was he making? i saw a clearly false claim unsupported
> by evidence or anecdote that fossil is not stable. but that's not making
> a point.
>
It's been shown that this mailing list is unwilling to admit that
> if one dedicates a machine (or vm)
> to the file server, than one can be sure that punting the cpu server will
> leave one's files available and bugs in the cpu server won't leak over.
There's also a security advantage to reducing the amount of extra stuff
running on the same machine as the file
On Sun Jun 2 17:59:16 EDT 2013, 23h...@gmail.com wrote:
> > dedicate a machine to the file server.
>
> This must be the best way to keep the plebeian hands off the artwork:
> museums that are only open to curators.
> This certainly also provided for my technical contribution to this mailing
> li