The easiest method with cwfs or Ken's is to keep track of the size of
the WORM - since everything is appended, it's fairly simple to copy
the set of blocks after each dump. It's been a few years since I've
done this, but it is just as reliable as venti, albeit less
convenient.
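A rough sketch of such an incremental copy, assuming 8K blocks, made-up device names, and that you note the worm's block count after each dump (Plan 9 dd and hoc only; adjust to taste):

	#!/bin/rc
	# copy only the blocks appended since the previous dump
	# usage: wormcopy lastblocks nowblocks  (counts in 8K blocks)
	last=$1
	now=$2
	dd -if /dev/sdC0/worm -of /dev/sdD0/wormcopy -bs 8192 -iseek $last -oseek $last -count `{hoc -e $now-$last}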
On Mon, Apr 16, 2018
What has kept me running fossil+venti is the ease of backing up the file
server. Copying the venti arenas offsite is trivial. And Geoff put
together glue to write sealed arenas to blu-ray as well.
I don't see any simple way to do that with cwfs*. Or hjfs. I am very
curious to know how the
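A minimal sketch of that offsite copy using venti/rdarena, with made-up arena names and paths (the ventibackup scripts mentioned elsewhere in this thread do it properly, sealed arenas only):

	for(a in arenas0 arenas1 arenas2)
		venti/rdarena /dev/sdC0/arenas $a > /n/offsite/$a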
Hi Jim,
It's important to point out that the arena size does not have to match
the size of an arenas file. In my case, I do something similar where I
use 2GB for an arena but keep my arenas files at 2GB (I don't have
much use for keeping multiple arena files).
More indexes help to an extent. My f
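For what it's worth, the arena size is fixed when the arenas file is formatted, so it is chosen independently of the file's size; a sketch with illustrative numbers and paths (see venti-fmt(8)):

	# carve a 10GB arenas file into five 2GB arenas
	venti/fmtarenas -a 2g arenas0 /path/to/arenas0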
On Wed, Oct 19, 2016 at 9:47 AM Steven Stallion wrote:
> In short, start small and grow as needed. For reference, when I ran
> Coraid's fs based on 64-bit Ken's (WORM only, no dedupe) in RWC
> (based on the main fs in Athens), the entire WORM grew to around
> 35GB over the course of a few years. T
i agree absolutely with steve here, expanding venti arena by arena is easy,
the ventibackup scripts show you how. even easier is to add arenas on a
different disk partition to the same venti.
personally i wouldn't keep music or videos in venti. they don't compress well
using the arithmetic te
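To spell out the arena-by-arena expansion (paths are made up; the steps follow venti-fmt(8)):

	venti/fmtarenas arenas1 /dev/sdD0/arenas1	# format the new partition
	# add an 'arenas /dev/sdD0/arenas1' line to the venti config
	venti/fmtindex -a venti.conf			# fold the new arenas into the existing index
	# then restart venti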
> On 20 Oct 2016, at 19:41, Steven Stallion wrote:
>
>> On Thu, Oct 20, 2016 at 1:15 PM, wrote:
>> Steven Stallion writes:
>>
>>> Sizing venti is also simple.
>>
>> I disagree with this. The best way to configure venti depends largely
>> on how you plan to use it. I have multiple venti s
On Thu, Oct 20, 2016 at 1:15 PM, wrote:
> Steven Stallion writes:
>
>> Sizing venti is also simple.
>
> I disagree with this. The best way to configure venti depends largely
> on how you plan to use it. I have multiple venti servers configured for
> different uses. For example, I keep my DVD
"James A. Robinson" writes:
> Anyone able to tell me whether or not there are
> disk size limits I should beware of given a limited
> amount of system memory in a file server?
Although there have been some replies on this thread, none of them have
really yet directly answered your question. Whe
Steven Stallion writes:
> Sizing venti is also simple.
I disagree with this. The best way to configure venti depends largely
on how you plan to use it. I have multiple venti servers configured for
different uses. For example, I keep my DVD images on a different venti
server than I do for smal
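Running several ventis on one machine is just several configs listening on different ports; a sketch of the second one (names, paths, and the port are invented):

	index dvd
	isect /dev/sdD0/isect.dvd
	arenas /dev/sdD0/arenas.dvd
	addr tcp!*!17035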
On Wed, Oct 19, 2016 at 10:13 AM Aram Hăvărneanu wrote:
> There are cheaper ways of disposing of 10TB of data.
>
If I decide the configuration is problematic
I'm sure I can repurpose the device.
Besides, the cost of spinning disk these
days is amazingly low. As, I think, the
developers for Pl
Hi Jim,
It probably helps to break apart fossil and venti for the sake of the
conversation. While you can use fossil as a standalone filesystem, it
is effectively your write cache in this scenario since it will be
backed by venti. Conventional wisdom is to size your main fossil fs
based on how muc
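For reference, the snapshot and archive schedule that defines that write window is set from the fossil console; a sketch with purely illustrative times (see fossilcons(8)):

	fsys main snaptime -a 0500 -s 60 -t 2880	# archive at 05:00, snap hourly, expire temporary snaps after 48h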
There are cheaper ways of disposing of 10TB of data.
--
Aram Hăvărneanu
Anyone able to tell me whether or not there are
disk size limits I should beware of given a limited
amount of system memory in a file server?
What I'm wanting to try and do is get a hardware
RAID1+0 enclosure and put in 20TB of disk (so
10TB of usable space).
The board I am looking at will take
On Sun May 10 14:36:15 PDT 2015, cinap_len...@felloff.net wrote:
> how is this the opposite? your patch shows the tcb->mss init being removed
> completely from tcpincoming().
>
> - /* our sending max segment size cannot be bigger than what he asked for */
> - if(lp->mss != 0 && lp->ms
how is this the opposite? your patch shows the tcb->mss init being removed
completely from tcpincoming().
- /* our sending max segment size cannot be bigger than what he asked for */
- if(lp->mss != 0 && lp->mss < tcb->mss) {
- 	tcb->mss = lp->mss;
- 	tpriv
> 2.a) tcpiput() gets an ACK packet for a Listening connection, calls
> tcpincoming().
> 2.b) tcpincoming() looks in limbo, finds lp, and makes a new connection.
> 3.c) initialize our connection's tcb->mss.
>
> > * the setting of tcb->mss in tcpincoming is not correct, tcp->mss is
> > set by SYN, not b
On Sun May 10 10:58:55 PDT 2015, 0in...@gmail.com wrote:
> >> however, after fixing things so the initial cwind isn't hosed, i get a
> >> little better story:
> >
> > so, actually, i think this is the root cause. the initial cwind is misset
> > for loopback.
> > i bet that the symptom folks will
> * the SYN-ACK needs to send the local mss, not echo the remote mss.
> asymmetry is "fine" in the other side, even if ip/tcp.c isn't smart enough to
> keep tx and rx mss separate. (scare quotes = untested, there may be
> some performance niggles if the sender is sending legal packets larger than
>> however, after fixing things so the initial cwind isn't hosed, i get a
>> little better story:
>
> so, actually, i think this is the root cause. the initial cwind is misset for
> loopback.
> i bet that the symptom folks will see is that /net/tcp/stats shows
> fragmentation when
> performance
> however, after fixing things so the initial cwind isn't hosed, i get a little
> better story:
so, actually, i think this is the root cause. the initial cwind is misset for
loopback.
i bet that the symptom folks will see is that /net/tcp/stats shows
fragmentation when
performance sucks. evide
for what it's worth, the original newreno work tcp does not have the mtu
bug. on a 8 processor system i have around here i get
bwc; while() nettest -a 127.1
tcp!127.0.0.1!40357 count 10; 81920 bytes in 1.505948 s @ 519 MB/s (0ms)
tcp!127.0.0.1!47983 count 10; 81920 bytes in 1.3779
2015-05-09 10:35 GMT-07:00 Lyndon Nerenberg :
>
> On May 9, 2015, at 10:30 AM, Devon H. O'Dell wrote:
>
>> Or when your client is on a cell phone. Cell networks are the worst.
>
> Really? Quite often I slave my laptop to my phone's LTE connection, and I
> never have problems with PMTU. Both her
> On May 9, 2015, at 10:25 AM, Lyndon Nerenberg wrote:
>
>
>> On May 9, 2015, at 7:43 AM, erik quanstrom wrote:
>>
>> easy enough until one encounters devices that don't send icmp
>> responses because it's not implemented, or somehow considered
>> "secure" that way.
>
> Oddly enough, I don'
On May 9, 2015, at 10:30 AM, Devon H. O'Dell wrote:
> Or when your client is on a cell phone. Cell networks are the worst.
Really? Quite often I slave my laptop to my phone's LTE connection, and I
never have problems with PMTU. Both here (across western Canada) and in the UK.
2015-05-09 10:25 GMT-07:00 Lyndon Nerenberg :
>
>
> On May 9, 2015, at 7:43 AM, erik quanstrom wrote:
>
> > easy enough until one encounters devices that don't send icmp
> > responses because it's not implemented, or somehow considered
> > "secure" that way.
>
> Oddly enough, I don't see this 'pro
On May 9, 2015, at 7:43 AM, erik quanstrom wrote:
> easy enough until one encounters devices that don't send icmp
> responses because it's not implemented, or somehow considered
> "secure" that way.
Oddly enough, I don't see this 'problem' in the real world. And FreeBSD is far
from being alon
On Fri May 8 20:12:57 PDT 2015, cinap_len...@felloff.net wrote:
> do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
> as i see it, procsyn() is called only when tcb->state is Syn_sent,
> which only should happen for client connections doing a connect, in
> which case tcpsndsyn() w
yes, but i was not referring to the adjusting, which isn't changed here, only
the tcpmtu() call that got added.
yes, it *should* not make any difference, but maybe we're missing
something. at worst it makes the code more confusing and causes bugs in
the future because one of the initializations of mss i
> Looking at the first few bytes in each direction of the initial TCP
> handshake (with tcpdump) I see:
>
> 0x: 4500 0030 24da <= from plan9 to freebsd
>
> 0x: 4500 0030 d249 4000 <= from freebsd to plan9
>
> Looks like FreeBSD always sets the DF (don't fragment) bit
>
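For anyone following along, the DF bit can be read straight off those dumps: the fourth 16-bit word of the IP header is the flags/fragment-offset field, so in

	4500 0030 d249 4000

45 is version 4 with a 5-word header, 0030 is a total length of 48 bytes, d249 is the ID, and 4000 is the flags/fragment-offset word with the DF bit (0x4000) set. The plan9-to-freebsd line above is cut off before that word, so only FreeBSD's DF is visible here.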
> do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
> as i see it, procsyn() is called only when tcb->state is Syn_sent,
> which only should happen for client connections doing a connect, in
> which case tcpsndsyn() would have initialized tcb->mss already no?
tcb->mss may still ne
do we really need to initialize tcb->mss to tcpmtu() in procsyn()?
as i see it, procsyn() is called only when tcb->state is Syn_sent,
which only should happen for client connections doing a connect, in
which case tcpsndsyn() would have initialized tcb->mss already no?
--
cinap
On Fri, 08 May 2015 21:24:13 +0200 David du Colombier <0in...@gmail.com> wrote:
> On the loopback medium, I suppose this is the opposite issue.
> Since the TCP stack didn't fix the MSS in the incoming
> connection, the programs sent multiple small 1500-byte
> IP packets instead of large 16384 IP p
I confirm - my old performance is back.
Thanks very much David.
-Steve
I've finally figured out the issue.
The slowness issue only appears on the loopback, because
it provides a 16384 MTU.
There is an old bug in the Plan 9 TCP stack, where the TCP
MSS doesn't take the MTU into account for incoming connections.
I originally fixed this issue in January 2015 for the Plan 9
> oh. possibly the queue isn't big enough, given the window size.
> it's using qpass on a Queue with Qmsg and if the queue is full,
> Blocks will be discarded.
I tried to increase the size of the queue, but no luck.
--
David du Colombier
On 8 May 2015 at 17:13, David du Colombier <0in...@gmail.com> wrote:
> Also, the issue is definitely related to the loopback.
> There is no problem when using an address on /dev/ether0.
>
oh. possibly the queue isn't big enough, given the window size. it's using
qpass on a Queue with Qmsg
and if
I've enabled tcp, tcpwin and tcprxmt logs, but there isn't
anything very interesting.
tcpincoming s 127.0.0.1!53150/127.0.0.1!53150 d 127.0.0.1!17034/127.0.0.1!17034 v 4/4
Also, the issue is definitely related to the loopback.
There is no problem when using an address on /dev/ether0.
cpu% cat /n
> cpu% cat /net/tcp/3/local
> 127.0.0.1!57796
> cpu% cat /net/tcp/3/remote
> 127.0.0.1!17034
> cpu% cat /net/tcp/3/status
> Established qin 0 qout 0 rq 0.0 srtt 80 mdev 40 sst 1048560 cwin
> 258192 swin 1048560>>4 rwin 1048560>>4 qscale 4 timer.start 10
> timer.count 10 rerecv 0 katimer.start 2400
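(The shifted numbers in that status line are just the 16-bit window times the scale factor: rwin 1048560>>4 is the advertised window 65535 scaled by qscale 4, since 65535 << 4 = 1048560.)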
> NOW is defined as MACHP(0)->ticks, so this is a pretty coarse timer
> that can't go backwards on intel processors. this limits the timer's
> resolution to HZ, which on 9atom is 1000, and 100 on pretty much
> anything else. further limiting the resolution are the tcp retransmit
> timers, which
On Tue May 5 15:54:45 PDT 2015, ara...@mgk.ro wrote:
> It's pretty interesting that at least three people all got exactly
> 150kB/s on vastly different machines, both real and virtual. Maybe the
> number comes from some tick frequency?
i might suggest altering HZ and seeing if there is a throughp
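A back-of-the-envelope reading of the 150kB/s figure, combining two observations from this thread (small ~1500-byte segments on the loopback, and a 10ms tick at HZ=100): if each segment ends up waiting on one timer tick,

	1500 bytes / 0.010 s = 150,000 bytes/s ≈ 150 kB/s

which is suggestive, though not proof of the mechanism.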
On Wed May 6 14:28:03 PDT 2015, 0in...@gmail.com wrote:
> I got it!
>
> The regression was caused by the NewReno TCP
> change on 2013-01-24.
>
> https://github.com/0intro/plan9/commit/e8406a2f44
if you have proof, i'd be interested in reproduction of the issue from the
original source, or
perh
On Wed May 6 15:30:24 PDT 2015, charles.fors...@gmail.com wrote:
> On 6 May 2015 at 22:28, David du Colombier <0in...@gmail.com> wrote:
>
> > Since the problem only happens when Fossil or vacfs are running
> > on the same machine as Venti, I suppose this is somewhat related
> > to how TCP behaves
On 6 May 2015 at 23:35, Steven Stallion wrote:
> Were these the changes that erik submitted?
I don't think so. Someone else submitted a different set of tcp changes
independently much earlier.
Definitely interesting, and explains why I've never seen the regression (I
switched to a dedicated venti server a couple of years ago). Were these the
changes that erik submitted? ISTR him working on reno bits somewhere around
there...
On Wed, May 6, 2015 at 4:28 PM, David du Colombier <0in...@gma
On 6 May 2015 at 22:28, David du Colombier <0in...@gmail.com> wrote:
> Since the problem only happens when Fossil or vacfs are running
> on the same machine as Venti, I suppose this is somewhat related
> to how TCP behaves with the loopback.
>
Interesting. That would explain the clock-like delays.
Since the problem only happens when Fossil or vacfs are running
on the same machine as Venti, I suppose this is somewhat related
to how TCP behaves with the loopback.
--
David du Colombier
I got it!
The regression was caused by the NewReno TCP
change on 2013-01-24.
https://github.com/0intro/plan9/commit/e8406a2f44
--
David du Colombier
On 6 May 2015 at 21:55, David du Colombier <0in...@gmail.com> wrote:
> However, now I'm sure the issue was caused by a kernel
> change in 2013.
>
> There is no problem when running a kernel from early 2013.
>
Welly, welly, welly, well. That is interesting.
Just to be sure, I tried again, and the issue is not related
to the lock change on 2013-09-19.
However, now I'm sure the issue was caused by a kernel
change in 2013.
There is no problem when running a kernel from early 2013.
--
David du Colombier
It's pretty interesting that at least three people all got exactly
150kB/s on vastly different machines, both real and virtual. Maybe the
number comes from some tick frequency?
--
Aram Hăvărneanu
Yes, I'm pretty sure it's not related to Fossil, since it happens with
vacfs as well.
Also, Venti was pretty much unchanged during the last few years.
I suspected it was related to the lock change on 2013-09-19.
https://github.com/0intro/plan9/commit/c4d045a91e
But I remember I tried to revert t
semlocks?
anyway, should not be too hard to figure out with /n/dump
--
cinap
On 5 May 2015 at 16:38, David du Colombier <0in...@gmail.com> wrote:
> > How many times do you time it on each machine?
>
> Maybe ten times. The results are always the same, within ~5%.
> Also, I restarted vacfs between each try.
It was the effect of the ram caches that prompted the question.
My experi
> I too see this, and feel, no proof, that things used to be better. I.e. the
> first time I read a file from venti it is very, very slow. subsequent reads
> from the ram cache are quick.
>
> I think venti used to be faster a few years ago. maybe another effect of this
> is the boot time seems s
I too see this, and feel, no proof, that things used to be better. I.e. the
first time I read a file from venti it is very, very slow. subsequent reads
from the ram cache are quick.
I think venti used to be faster a few years ago. maybe another effect of this
is the boot time seems slower than
>> I've just made some measurements when reading a file:
>>
>> Vacfs running on the same machine as Venti: 151 KB/s
>> Vacfs running on another machine: 5131 KB/s
>
>
> How many times do you time it on each machine?
Maybe ten times. The results are always the same, within ~5%.
Also, I restarted vacfs betw
Thanks Aram.
> I have spent some time
> debugging this, but unfortunately, I couldn't find the root cause, and
> I just stopped using fossil.
I tried to measure the performance effect of replacing components:
1) mbr or GRUB
2) pbs or pbslba
3) sdata or sdvirtio (sdvirtio is imported from 9legacy
On 4 May 2015 at 19:51, David du Colombier <0in...@gmail.com> wrote:
>
> I've just made some measurements when reading a file:
>
> Vacfs running on the same machine as Venti: 151 KB/s
> Vacfs running on another machine: 5131 KB/s
How many times do you time it on each machine?
Thanks Anthony.
> I bet if you re-run the same test twice in a
> row, you’re going to see dramatically improved
> performance.
I tried re-running ‘iostats md5sum /386/9pcf’.
The repeated read is very fast:
the first read was 152KB/s,
the second read was 232MB/s.
> Your write performance in that test
Hello!
imho placing fossil, venti, isect, bloom and swap on a single drive is a bad
idea.
As written in http://plan9.bell-labs.com/sys/doc/venti/venti.html - "The
prototype Venti server is implemented for the Plan 9 operating system in
about 10,000 lines of C. The server runs on a dedicated dual 55
I'm experiencing the same issue as well.
When I launch vacfs on the same machine as Venti,
reading is very slow. When I launch vacfs on another
Plan 9 or Unix machine, reading is fast.
I've just made some measurements when reading a file:
Vacfs running on the same machine as Venti: 151 KB/s
Vacf
I have seen the same problem a few years back on about half of my
machines. The other half were fine. There was a 1000x difference in
performance between the good and bad machines. I have spent some time
debugging this, but unfortunately, I couldn't find the root cause, and
I just stopped using fos
The reason, in general:
In a fossil+venti setup, fossil runs (basically) as a
cache for venti. If your access just hits fossil, it’ll
be quick; if not, you hit the (significantly slower)
venti. I bet if you re-run the same test twice in a
row, you’re going to see dramatically improved
performance.
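That is easy to see from the shell with the same command quoted elsewhere in the thread:

	iostats md5sum /386/9pcf	# cold: most reads go through fossil to venti
	iostats md5sum /386/9pcf	# warm: served from the caches, much faster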
Hello, fans.
I’m running Plan 9 (labs) on a public QEMU/KVM service.
My Plan 9 system has a slow read performance problem.
I ran 'iostats md5sum /386/9pcf’; DMA is on, and the read result is 150KB/s,
but write performance is fast.
My Plan 9 system has a 200GB HDD, formatted with fossil+venti.
disk layout is
Should I worry that this is what is delivered by consecutive
invocations of df within fossilcons, some overnight, while catching up
with a long overdue snap (don't ask!)?
main: df
main: 2,873,221,120 used + 3,547,136 free = 2,876,768,256 (99% used)
main: df
> there's no question that a better strategy is to
> use a 100% reliable underlying storage device.
let me know when you find one.
- erik
On Thu, Jun 25, 2009 at 9:24 AM, erik quanstrom wrote:
>> > does venti even keep scores on the bloom filter blocks and the icache?
>>
>> no, but those are soft data and can be reconstructed.
>
> being the paranoid type, i worry about this. does the
> rebuild rate on a large (say, 1tb) venti make t
> > does venti even keep scores on the bloom filter blocks and the icache?
>
> no, but those are soft data and can be reconstructed.
being the paranoid type, i worry about this. does the
rebuild rate on a large (say, 1tb) venti make this a
practical strategy?
- erik
> it's even neater to use a raid level that doesn't require venti
> intervention.
agreed.
> does venti even keep scores on the bloom filter blocks and the icache?
no, but those are soft data and can be reconstructed.
russ
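For the record, 'reconstructed' means the usual venti-fmt(8) rebuild sequence; a sketch with made-up paths (check the man page for exact flags):

	venti/fmtisect isect0 /dev/sdC0/isect0	# re-create an empty index section
	venti/fmtindex venti.conf		# re-create the index layout
	venti/buildindex -b venti.conf		# repopulate the index (and bloom filter) by scanning the arenas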
On Wed, Jun 24, 2009 at 5:59 PM, erik quanstrom wrote:
>> /boot/fossil: cacheLocalData: addr=155039 type got 0 exp 0: tag got
>> 19383bf exp 11383bf
>> /boot/fossil: cacheLocalData: addr=155167 type got 0 exp 0: tag got
>> 19383bf exp 11383bf
>
> am i wrong in thinking that it would be an error to
> /boot/fossil: cacheLocalData: addr=155039 type got 0 exp 0: tag got
> 19383bf exp 11383bf
> /boot/fossil: cacheLocalData: addr=155167 type got 0 exp 0: tag got
> 19383bf exp 11383bf
am i wrong in thinking that it would be an error to have the same tag at
two different addresses?
- erik
> Not directly related to the topic here, but this has always bugged me
> about running Venti on mirrored or raided disks.
>
> When a block on a mirrored pair doesn't match the block on its
> partner, the mirroring layer has no idea which one is right, but Venti
> does. Some way to export this rea
On Wed, Jun 24, 2009 at 7:39 PM, erik quanstrom wrote:
>> So I went ahead and reinstalled fossil and venti--this time I went
>> with a RAID-10 configuration on the Coraid.
>
> for data integrity, raid 5 is a better solution because
> on a raid 10, if one block is wrong, it's a coin flip as
> to whi
> So I went ahead and reinstalled fossil and venti--this time I went
> with a RAID-10 configuration on the Coraid.
for data integrity, raid 5 is a better solution because
on a raid 10, if one block is wrong, it's a coin flip as
to which one is correct (if any). with raid 5, it's possible
to deter
On Wed, Jun 24, 2009 at 12:09 PM, wrote:
> /boot/fossil: cacheLocalData: addr=78989 type got 0 exp 0: tag got
> e63eb942 exp 663eb942
> /boot/fossil: cacheLocalData: addr=99457 type got 0 exp 0: tag got
> 150daf85 exp 150daf05
> /boot/fossil: cacheLocalData: addr=68651 type got 0 exp 0: tag got
>
/boot/fossil: cacheLocalData: addr=78989 type got 0 exp 0: tag got
e63eb942 exp 663eb942
/boot/fossil: cacheLocalData: addr=99457 type got 0 exp 0: tag got
150daf85 exp 150daf05
/boot/fossil: cacheLocalData: addr=68651 type got 0 exp 0: tag got
66be7fe5 exp 663e7fe5
/boot/fossil: cacheLocalData: a
On Thu, Jun 18, 2009 at 9:01 AM, John Floren wrote:
>
> Our Coraid device recently lost two disks from the RAID5
> configuration; while we were able to rebuild from instructions given
> by support, I suspect some small amount of data was corrupted.
>
> Since rebuilding the device a few days ago, e
On Thu, Jun 18, 2009 at 10:10 AM, John Floren wrote:
> On Thu, Jun 18, 2009 at 9:45 AM, erik quanstrom wrote:
>>
>> > It seems to only happen once per boot, but not necessarily when fossil
>> > starts responding--I've seen it a couple hours after booting, which
>> > the filesystem tends to go away
On Jun 21, 2009, at 7:11 AM, erik quanstrom wrote:
On Sun Jun 21 07:59:52 EDT 2009, 9f...@hamnavoe.com wrote:
Forgot to add that I've only seen one error on the console during
all of this:
/boot/fossil: could not write super block; waiting 10 seconds
/boot/fossil: blistAlloc: called on clean
On Sun Jun 21 07:59:52 EDT 2009, 9f...@hamnavoe.com wrote:
> > Forgot to add that I've only seen one error on the console during all of
> > this:
> > /boot/fossil: could not write super block; waiting 10 seconds
> > /boot/fossil: blistAlloc: called on clean block.
>
> I get a few of these nearly
> /boot/fossil: could not write super block; waiting 10 seconds
> /boot/fossil: blistAlloc: called on clean block.
I have had a few a day for the last 5 years on my home server, and one a week
on the work machine... I always ignored them.
-Steve
> Forgot to add that I've only seen one error on the console during all of this:
> /boot/fossil: could not write super block; waiting 10 seconds
> /boot/fossil: blistAlloc: called on clean block.
I get a few of these nearly every day. I've been assuming they are benign.
On Thu, Jun 18, 2009 at 9:45 AM, erik quanstrom wrote:
>
> > It seems to only happen once per boot, but not necessarily when fossil
> > starts responding--I've seen it a couple hours after booting, which
> > the filesystem tends to go away at night.
>
> the failure is somewhere in blockWrite. sin
> It seems to only happen once per boot, but not necessarily when fossil
> starts responding--I've seen it a couple hours after booting, which
> the filesystem tends to go away at night.
the failure is somewhere in blockWrite. since blockWrite
calls diskWrite and diskWrite just queues up i/o to s
On Thu, Jun 18, 2009 at 9:25 AM, erik quanstrom wrote:
>
> > Forgot to add that I've only seen one error on the console during all of
> > this:
> > /boot/fossil: could not write super block; waiting 10 seconds
> > /boot/fossil: blistAlloc: called on clean block.
>
> is that once, or every time?
>
> Forgot to add that I've only seen one error on the console during all of this:
> /boot/fossil: could not write super block; waiting 10 seconds
> /boot/fossil: blistAlloc: called on clean block.
is that once, or every time?
- erik
Our Coraid device recently lost two disks from the RAID5
configuration; while we were able to rebuild from instructions given
by support, I suspect some small amount of data was corrupted.
Since rebuilding the device a few days ago, every morning I have
returned to work to find my CPU/auth/file se
> I can't easily check before fossil is active, but venti takes a long
> time to start and by the time the machine is "ready", memory is full
> and half of swap is in use :-( During "snap -a" load, context
> switching and interrupts tend to swing wildly and swap is often being
> accessed (it's on I
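For what it's worth, venti's memory appetite is governed by three lines in its config; the numbers here are only illustrative:

	mem 32m		# lump cache
	bcmem 48m	# block cache
	icmem 64m	# index entry cache

Shrinking them reduces the memory pressure at the cost of read performance.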
> i would imagine that cpu has nothing to do with it and encryption
> would add no overhead at all. i would image that seeks dominate
> your performance numbers.
Well, there are numerous issues. The machine is a CPU server booting
off another fossil/venti host; it has its own rather pristine, mo
>> http://project-iris.net/isw-2003/papers/sit.pdf
>
> Sounds very interesting.
> Is there any source code available?
Most of what is described in that paper is now
libventi, vbackup, and vnfs. There was some
notion that it would be interesting to try storing
data in a peer-to-peer storage syste
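A sketch of how those pieces fit together on a Unix host (the device name is invented; see the vbackup(8) and vnfs(8) pages for the real flags):

	vbackup /dev/da0s1a	# store the file system in venti, printing a vac score
	vnfs config		# serve stored trees back over NFS, per a config of name/score pairs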
erik quanstrom wrote:
with a 1 machine solution, i don't need any more disks to have a full
mirror and i have the option of raid5 which will reduce the number
of disks i need to 10TB + 1 disk. since your model is that the
storage is a significant expense, a single raid5 machine would make
more
>> since storage is very cheap, i think this is a good tradeoff.
>
> I'm thinking of a scale where storage isn't that cheap ...
what scale is that?
>> what problem are you trying to solve? if you are trying to go for
>> reliability, i would think it would be easier to use raid+backups
>> for d
* Christian Kellermann <[EMAIL PROTECTED]> wrote:
> IIRC Russ et al. have written a paper on connecting a venti server
> to a distributed hash table (like chord) I think the word to google
> for would be venti and dhash.
>
> http://project-iris.net/isw-2003/papers/sit.pdf
Sounds very interesting.
IIRC Russ et al. have written a paper on connecting a venti server
to a distributed hash table (like chord) I think the word to google
for would be venti and dhash.
http://project-iris.net/isw-2003/papers/sit.pdf
HTH
Christian
--
You may use my gpg key for replies:
pub 1024D/47F79788 2005/02/
* erik quanstrom <[EMAIL PROTECTED]> wrote:
> > As a more sophisticated approach, I'm planning a *real* clustered
> > venti, which also keeps track of block atimes and copy-counters.
> > This way, seldom-used blocks can be removed from one node as long
> > as there are still enough copies in t
> You could adapt Plan B's bns to fail over between different FSs. But...
> We learned that although you can let the FS fail over nicely, many other
> things stand in the way making it unnecessary to fail over. For example,
> on Plan 9, cs and dns have problems after a fail over, your IP address
> m
> I believe the Plan B folks did some work with fail-over (amongst other
> things) that might be applicable. Beyond that, if you want to get what you
You could adapt Plan B's bns to fail over between different FSs. But...
We learned that although you can let the FS fail over nicely, many other thin
thin
I'm not running Linux, but I've run venti+fossil on Mac OS X for testing. I
intend to use venti there regularly once I figure out how to get OS X to let
me get at a raw partition that isn't mounted (anyone?).
I don't think venti+fossil will do what you're looking for, however, at least
not without