Hi,
Following up on my last post on this topic (since there seemed to be at least
some interest): after evaluating the setup for a few weeks, I have decided that
this is not as good a solution as I had hoped.
In particular, the expected benefits of splitting data between a special vdev backed
That looks like a missing piece that would have saved us a lot of headaches.
At the time we were on Linux 4.4 and 4.8. Thanks for sharing that.
Without diving deep into the implementation, I wonder if that flag would also
protect against network buffer allocations in the kernel caused by network
reads/writes.
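The flag isn't named in this excerpt; assuming it refers to the PR_SET_IO_FLUSHER
prctl added in Linux 5.6 (which marks the calling process as part of the storage
I/O path so that kernel allocations made in its context avoid recursing into
filesystem/block I/O), a minimal sketch of a userland block driver opting in
would look roughly like this (needs CAP_SYS_RESOURCE; the choice of flag here is
my assumption, not something stated above):

  /* Sketch only: mark this process as an "I/O flusher" so that kernel
   * allocations made on its behalf avoid triggering filesystem/block I/O
   * and re-entering the device it serves.  Assumes the flag under
   * discussion is PR_SET_IO_FLUSHER (Linux >= 5.6, CAP_SYS_RESOURCE). */
  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_IO_FLUSHER
  #define PR_SET_IO_FLUSHER 57      /* for older libc headers */
  #endif

  int main(void)
  {
      if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) == -1) {
          perror("prctl(PR_SET_IO_FLUSHER)");
          return 1;
      }
      /* ... connect to the kernel (NBD/ublk) and serve requests ... */
      return 0;
  }

Whether that also covers socket buffer allocations made while the same process is
doing network reads/writes is exactly the open question above.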
On Sep 14 2022, Shaun McDowell wrote:
> It has been a few years since I've worked on it, but there were a number of
> gotchas we had to overcome when running block devices in userspace and
> mounted on the same system. The largest of these is that the userland
> process needs to be extremely careful
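(The quoted message is cut off here. One standard precaution for a userland
driver backing a locally mounted block device is to lock its pages and
pre-allocate its I/O buffers up front, so that servicing a request never has to
allocate under memory pressure; the sketch below is my illustration of that
idea, not code from the project being described.)

  /* Illustrative sketch: pin the process's memory and pre-allocate
   * request buffers so the userland block driver never has to allocate
   * while the kernel is waiting on it to make progress. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  #define NBUFS 64
  #define BUFSZ (128 * 1024)

  static void *bufs[NBUFS];

  int main(void)
  {
      /* Lock current and future pages into RAM. */
      if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
          perror("mlockall");
          return 1;
      }

      /* Allocate and touch every I/O buffer before serving requests. */
      for (int i = 0; i < NBUFS; i++) {
          if (posix_memalign(&bufs[i], 4096, BUFSZ) != 0) {
              fprintf(stderr, "posix_memalign failed\n");
              return 1;
          }
          memset(bufs[i], 0, BUFSZ);
      }

      /* ... serve block requests using only bufs[] from here on ... */
      return 0;
  }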
Wanted to share a personal journey, since a partner and I went down a
similar route on a past project of using NBD + userland loopback + S3 for
block devices.
Five years ago we actually forked nbdkit and nbd-client to create async,
optimized versions for use privately as a userland loopback block device.
As an aside, we'll soon be adding the feature to use nbdkit plugins as
Linux ublk (userspace block) devices. The API is nearly the same so
there's just a bit of code needed to let nbdkit plugins be loaded by
ubdsrv. Watch this space.
Of course it may not (probably will not) fix other problems.
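For readers who haven't seen the plugin API being referred to: an nbdkit plugin
is essentially a table of callbacks (open, get_size, pread, pwrite, ...), which
is why the same code could plausibly sit behind a ublk backend with only a thin
adapter. A stripped-down sketch, written from memory of nbdkit-plugin(3) (exact
signatures vary by plugin API version, so treat this as approximate rather than
authoritative):

  /* Approximate shape of a minimal nbdkit C plugin: a 1 MiB RAM disk.
   * Field names and callback signatures should be checked against the
   * nbdkit-plugin(3) man page for the nbdkit version actually in use. */
  #include <stdint.h>
  #include <string.h>
  #include <nbdkit-plugin.h>

  #define THREAD_MODEL NBDKIT_THREAD_MODEL_SERIALIZE_ALL_REQUESTS

  #define DISK_SIZE (1024 * 1024)
  static char disk[DISK_SIZE];

  static void *
  ramdisk_open (int readonly)
  {
    return disk;                  /* handle: just the backing array */
  }

  static int64_t
  ramdisk_get_size (void *handle)
  {
    return DISK_SIZE;
  }

  static int
  ramdisk_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
  {
    memcpy (buf, disk + offset, count);
    return 0;
  }

  static int
  ramdisk_pwrite (void *handle, const void *buf, uint32_t count, uint64_t offset)
  {
    memcpy (disk + offset, buf, count);
    return 0;
  }

  static struct nbdkit_plugin plugin = {
    .name     = "ramdisk-sketch",
    .open     = ramdisk_open,
    .get_size = ramdisk_get_size,
    .pread    = ramdisk_pread,
    .pwrite   = ramdisk_pwrite,
  };

  NBDKIT_REGISTER_PLUGIN (plugin)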
Hi all,
In case people have been wondering about the background of the various
questions that I asked on these lists in the last few months:
I've been experimenting with running ZFS-on-NBD as a cloud backup solution (and
potential alternative to S3QL, which I am using for this purpose at the moment).