Well, I haven't solved everything yet, but I do feel better now that I realize
that it was setting mountpoint=none that caused the zfs send/recv to hang.
Allowing the default mountpoint setting fixed that problem. I'm now trying
with mountpoint=legacy, because I'd really rather leave it unmounted.
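For reference, a minimal sketch of the legacy-mountpoint approach described above (the dataset and snapshot names are assumptions, not taken from the thread):

```shell
# Hypothetical dataset/snapshot names for illustration only.
# After the initial full receive, give the backup copy a legacy
# mountpoint so ZFS never tries to (re)mount it on later receives.
zfs send datapool/shares@snap1 | zfs receive backup/shares
zfs set mountpoint=legacy backup/shares
```

With a legacy mountpoint the filesystem is only mounted when you mount it by hand (or via /etc/vfstab), which is usually what you want for a backup target.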
Ok so I left the thumb drive to try to back up all weekend. It got *most* of
the first snapshot copied over, about 50MB, and that's it. So I tried an
external USB hard drive today, and it actually bothered to copy over the
snapshots, but it does so very slowly. It copied over the first snapshot
readonly=on worked (at least with -F), but then it got the error creating a
mountpoint I mentioned above. So I took away readonly=on, and it got past that
part; however, the snapshots past the first one take an eternity. I left it
overnight and it managed to get from 21MB copied for the second
I found setting atime=off was enough to get zfs receive working for me, but the
readonly property should work as well.
I chose not to set the pool readonly, as I want to be able to use my backup
pool as a replacement easily, without changing any settings. Not using -F
means that as soon as the
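The atime workaround mentioned above can be sketched like this (dataset and snapshot names are assumptions):

```shell
# Hypothetical names. With atime on, merely reading the backup
# filesystem updates access times, so zfs considers it "modified
# since most recent snapshot" and incremental receives fail
# without -F. Turning atime off on the target avoids that.
zfs set atime=off backup/shares
zfs send -i datapool/shares@snap1 datapool/shares@snap2 | \
    zfs receive backup/shares
```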
You've seen -F be necessary on some systems and not on others?
Also, was the mountpoint=legacy suggestion for my problem with not wanting to use -F
or for my "cannot create mountpoint" problem? Or both?
If you use legacy mountpoints, does that mean that mounting the parent
filesystem doesn't actual
On Thu, Oct 9, 2008 at 6:56 PM, BJ Quinn <[EMAIL PROTECTED]> wrote:
> So, here's what I tried - first of all, I set the backup FS to readonly.
> That resulted in the same error message. Strange, how could something have
> changed since the last snapshot if I CONSCIOUSLY didn't change anything or
Ok, in addition to my "why do I have to use -F" post above, now I've tried it
with -F but after the first in the series of snapshots gets sent, it gives me a
"cannot mount '/backup/shares': failed to create mountpoint".
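One possible workaround for the mountpoint error above, assuming your build's zfs receive supports the -u flag and using the thread's dataset names with hypothetical snapshot names:

```shell
# Hypothetical snapshot names. -u tells receive not to mount the
# received filesystem at all, sidestepping "failed to create
# mountpoint"; -F first rolls the target back to its most recent
# snapshot so the incremental stream applies cleanly.
zfs send -i datapool/shares@snap1 datapool/shares@snap2 | \
    zfs receive -u -F backup/shares
```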
--
This message posted from opensolaris.org
Yeah -F should probably work fine (I'm trying it as we speak, but it takes a
little while), but it makes me a bit nervous. I mean, it should only be
necessary if (as the error message suggests) something HAS actually changed,
right?
So, here's what I tried - first of all, I set the backup FS to readonly.
On 09.10.2008, at 09:17, Brent Jones wrote:
> Correct, the other side should be set Read Only, that way nothing at
> all is modified when the other host tries to zfs send.
Since I use the receiving side for backup purposes only, which means
that any change would be accidental - shouldn't a rec
On Wed, Oct 8, 2008 at 10:49 PM, BJ Quinn <[EMAIL PROTECTED]> wrote:
> Oh and I had been doing this remotely, so I didn't notice the following error
> before -
>
> receiving incremental stream of datapool/[EMAIL PROTECTED] into backup/[EMAIL
> PROTECTED]
> cannot receive incremental stream: destination backup/shares has been modified
> since most recent snapshot
Oh and I had been doing this remotely, so I didn't notice the following error
before -
receiving incremental stream of datapool/[EMAIL PROTECTED] into backup/[EMAIL
PROTECTED]
cannot receive incremental stream: destination backup/shares has been modified
since most recent snapshot
This is repo
Ok I'm taking a step back here. Forgetting the incremental for a minute (which
is the part causing the segmentation fault), I'm simply trying to use zfs send
-R to get a whole filesystem and all of its snapshots. I ran the following,
after creating a compressed pool called backup :
zfs send -
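The command above was cut off in the archive; a plausible sketch of a full replication with zfs send -R, using assumed device, dataset, and snapshot names, might look like:

```shell
# Assumed names throughout; this is a reconstruction, not a quote
# from the thread. Create the compressed destination pool, then
# replicate the filesystem with all of its snapshots and
# properties via -R.
zpool create -O compression=on backup c1t1d0
zfs send -R datapool/shares@latest | zfs receive -F -d backup
```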
The problem could be in the zfs command or in the kernel. Run "pstack" on the
core dump and search the bug database for the functions it lists. If you can't
find a bug that matches your situation and your stack, file a new bug and
attach the core. If the engineers find a duplicate bug, they'll j
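Running pstack on the core, as suggested, looks roughly like this (the core file path is an example):

```shell
# Example core path. pstack prints a stack trace for each thread
# in the core; the function names near the top of the stack are
# what to search for in the bug database.
pstack /var/cores/core.zfs.1234
```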
Next "stable" (as in fedora or ubuntu releases) opensolaris version
will be 2008.11.
In my case I found 2008.05 is simply unusable (my
main interest is xen/xvm), but upgrading to the latest available build
with OS's pkg (similar to apt-get) fixed the problem.
If you
installed the original OS 200
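Upgrading an OpenSolaris 2008.05 image to the latest available build with IPS, as described above, is roughly:

```shell
# Refresh the package catalog, then bring the whole image up to
# the latest build available in the configured repository.
pkg refresh
pkg image-update
```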
Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Tue, 30 Sep 2008, BJ Quinn wrote:
>
> > True, but a search for zfs "segmentation fault" returns 500 bugs.
> > It's possible one of those is related to my issue, but it would take
> > all day to find out. If it's not "flaky" or "unstable", I'd like
On Tue, 30 Sep 2008, BJ Quinn wrote:
> True, but a search for zfs "segmentation fault" returns 500 bugs.
> It's possible one of those is related to my issue, but it would take
> all day to find out. If it's not "flaky" or "unstable", I'd like to
> try upgrading to the newest kernel first, unle
BJ Quinn wrote:
> True, but a search for zfs "segmentation fault" returns 500 bugs. It's
> possible one of those is related to my issue, but it would take all day to
> find out. If it's not "flaky" or "unstable", I'd like to try upgrading to
> the newest kernel first, unless my Linux mindset i
True, but a search for zfs "segmentation fault" returns 500 bugs. It's
possible one of those is related to my issue, but it would take all day to find
out. If it's not "flaky" or "unstable", I'd like to try upgrading to the
newest kernel first, unless my Linux mindset is truly out of place here.
BJ Quinn wrote:
> Please forgive my ignorance. I'm fairly new to Solaris (Linux convert), and
> although I recognize that Linux has the same concept of Segmentation faults /
> core dumps, I believe my typical response to a Segmentation Fault was to
> upgrade the kernel and that always fixed the
Please forgive my ignorance. I'm fairly new to Solaris (Linux convert), and
although I recognize that Linux has the same concept of Segmentation faults /
core dumps, I believe my typical response to a Segmentation Fault was to
upgrade the kernel and that always fixed the problem (i.e. somebody
BJ Quinn wrote:
> Is there more information that I need to post in order to help diagnose this
> problem?
>
Segmentation faults should be correctly handled by the software.
Please file a bug and attach the core.
http://bugs.opensolaris.org
-- richard
Is there more information that I need to post in order to help diagnose this
problem?
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss