I was going to go with the spring release myself, but I finally got tired of waiting. I've got to build some new servers.
I don't believe you've missed anything. As I'm sure you know, it was
originally officially 2010.02, then it was officially 2010.03, then it was
rumored to be .04, sort of leaked as .0
Actually my current servers are 2008.05, and I noticed the problems I was
having with 2009.06 BEFORE I put those up as the new servers, so my pools are
not too new to revert to 2008.11; I'd actually be upgrading from 2008.05.
I do not have paid support, but it's just not going to go over well
Yeah, it's just that I don't think I'll be allowed to put up a dev version, but
I would probably get away with putting up 2008.11 if it doesn't have the same
problems with zfs send/recv. Does anyone know?
I'm actually only running one at a time. It is recursive / incremental (and
hundreds of GB), but it's only one at a time. Were there still problems in
2009.06 in that scenario?
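For reference, the kind of pipeline I mean is roughly the following (the pool, dataset, and snapshot names here are just placeholders, not the actual ones):

    # one recursive, incremental send/recv at a time, covering everything
    # between two daily snapshots
    zfs send -R -I datapool/shares@2009-10-01 datapool/shares@2009-10-02 | \
        zfs recv -dvF backup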
Does 2008.11 have these problems? 2008.05 didn't, and I'm considering moving
back to that rather than using a development release.
I have a couple of systems running 2009.06 that hang on relatively large zfs
send/recv jobs. With the -v option, I see the snapshots coming across, and at
some point the process just pauses, IO and CPU usage go to zero, and it takes a
hard reboot to get back to normal. The same script running
In my case, snapshot creation time and atime don't matter. I think rsync can
preserve mtime and ctime, though. I'll have to double check that.
I'd love to enable dedup. Trying to stay on "stable" releases of OpenSolaris
for whatever that's worth, and I can't seem to find a link to download 20
Ugh, yeah, I've learned by now that you always want at least that one snapshot
in common to keep the continuity in the dataset. Wouldn't I be able to
recreate effectively the same thing by rsync'ing over each snapshot one by one?
It may take a while, and I'd have to use the --inplace and --no-whole-file switches.
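Roughly what I'm picturing is the loop below (just a sketch; the pool names and paths are made up, and I'd want to test it on a scratch dataset first):

    # walk the old snapshots in creation order, sync each one into the new
    # filesystem, then re-create a snapshot of the same name there
    # (--delete so files that disappeared between snapshots are removed too)
    for snap in $(zfs list -H -t snapshot -o name -s creation -r oldpool/shares); do
        name=${snap#*@}
        rsync -a --delete --inplace --no-whole-file \
            /oldpool/shares/.zfs/snapshot/$name/ /newpool/shares/
        zfs snapshot newpool/shares@$name
    done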
Not exactly sure how to do what you're recommending -- are you suggesting I go
ahead with using rsync to bring in each snapshot, but bring it into a
clone of the old set of snapshots? Is there another way to bring my recent
stuff into the clone?
If so, then as for the storage savings, I
Is there any way to merge them back together? I really need the history data
going back as far as possible, and I'd like to be able to access it from the
same place. I mean, worst case scenario, I could rsync the contents of each
snapshot to the new filesystem and take a snapshot for each one.
I have a series of daily snapshots against a set of data that go back several
months, but then the server crashed. In a hurry, we set up a new server and
just copied over the live data and didn't bother with the snapshots (since zfs
send/recv was too slow and would have taken hours and hours to
I believe it was physical corruption of the media. The strange thing is, the last
time it happened to me, the bad blocks also got carried over to my backup
server, which is replicated with SNDR...
And yes, it IS read only, and a scrub will NOT actively clean up corruption in
snapshots. It will DETECT it.
Say I end up with a handful of unrecoverable bad blocks that just so happen to
be referenced by ALL of my snapshots (in some file that's been around forever).
Say I don't care about the file or two in which the bad blocks exist. Is
there any way to purge those blocks from the pool (and all snapshots)?
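For context, the only approach I can think of is the heavy-handed one sketched below (pool, path, and snapshot names are made up), which is exactly what I'm hoping to avoid:

    # the scrub has already flagged the damage; this lists the affected files
    zpool status -v tank

    # deleting the file only fixes the live filesystem...
    rm /tank/shares/path/to/damaged-file

    # ...each snapshot still references the bad blocks, and since snapshots
    # are read only, the only way I can see to free them is to destroy every
    # snapshot that holds a reference
    zfs destroy tank/shares@2009-06-01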
So, I had a fun ZFS learning experience a few months ago. A server of mine
suddenly dropped off the network, or so it seemed. It was an OpenSolaris
2008.05 box serving up samba shares from a ZFS pool, but it noticed too many
checksum errors and so decided it was time to take the pool down so a
Does anyone know if this means it will actually show up in SNV soon, or
whether it will make 2010.02? (On-disk dedup specifically.)
Personally I don't care about SXCE EOL, but what about before 2010.02?
Great, thanks!
Then what if I ever need to export the pool on the primary server and import
it on the replicated server? Will ZFS know which drives should be part
of the stripe even though the device names across servers may not be the same?
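My understanding is that ZFS identifies pool members by the labels written on the disks themselves rather than by device path, so (the pool name here is just an example) the move should be nothing more than:

    # on the primary server
    zpool export datapool

    # on the replicated server: scan attached devices for importable pools,
    # then import by pool name
    zpool import
    zpool import datapool

But I'd like confirmation that the possibly different cXtYdZ names on the second server really don't matter.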
> The means to specify this is "sndradm -nE ...",
> when 'E' is equal enabled.
Got it. Nothing on the disk, nothing to replicate (yet).
>The manner in which SNDR can guarantee that
>two or more volumes are write-order consistent, as they are
>replicated, is to place them in the same I/O consistency
I have two servers set up, with two drives each. The OS is stored on one
drive, and the data on the second drive. I have SNDR replication set up
between the two servers for the data drive only.
I'm running out of space on my data drive, and I'd like to do a simple "zpool
attach" command to add another drive
>> What about when I pop in the drive to be resilvered, but right before I add
>> it back to the mirror, will Solaris get upset that I have two drives both
>> with the same pool name?
>No, you have to do a manual import.
What you mean is that if Solaris/ZFS detects a drive with an identical pool
That sounds like a great idea if I can get it to work--
I get how to add a drive to a zfs mirror, but for the life of me I can't find
out how to safely remove a drive from a mirror.
Also, if I do remove the drive from the mirror, then pop it back up in some
unsuspecting (and unrelated) Solaris
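To make the question concrete (pool and device names are made up): I know how to do the first of these, and I'm looking for the equivalent of the second:

    # adding a disk to an existing mirror (this part I know)
    zpool attach tank c1t0d0 c1t1d0

    # what I *think* is the safe way to pull a disk back out of the mirror
    zpool detach tank c1t1d0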
I'm using OpenSolaris with ZFS as a backup server. I copy all my data from
various sources onto the OpenSolaris server daily, and run a snapshot at the
end of each backup. Using gzip-1 compression, mount -F smbfs, and the
--inplace and --no-whole-file switches for rsync, I get efficient space usage.
Should I set that as rsync's block size?
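In other words (assuming "that" refers to the dataset's recordsize, which defaults to 128K, and with made-up paths), something like:

    # check the recordsize on the backup dataset
    zfs get recordsize backup/shares

    # and point rsync's block size at the same value, assuming this rsync
    # build accepts a -B value that large
    rsync -a --inplace --no-whole-file --block-size=131072 /mnt/source/ /backup/shares/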
Oh. Yup, I had figured this out on my own but forgot to post back. --inplace
accomplishes what we're talking about. --no-whole-file is also necessary if
copying files locally (not over the network), because rsync does default to
only copying changed blocks, but it overrides that default behavior for local copies.
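So for a local copy the combination ends up looking like this (the paths are just examples):

    # --inplace writes changed blocks into the existing file instead of
    # rebuilding it, and --no-whole-file forces the delta algorithm even
    # though both ends are local
    rsync -a --inplace --no-whole-file /mnt/winserver/share/ /tank/backup/share/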
Here's an idea - I understand that I need rsync on both sides if I want to
minimize network traffic. What if I don't care about that - the entire file
can come over the network, but I specifically only want rsync to write the
changed blocks to disk. Does rsync offer a mode like that?
Thank you both for your responses. Let me see if I understand correctly -
1. Dedup is what I really want, but it's not implemented yet.
2. The only other way to accomplish this sort of thing is rsync (in other
words, don't overwrite the block in the first place if it's not different), and
i
We're considering using an OpenSolaris server as a backup server. Some of the
servers to be backed up would be Linux and Windows servers, and potentially
Windows desktops as well. What I had imagined was that we could copy files
over to the ZFS-based server nightly, take a snapshot, and only t
Well, I haven't solved everything yet, but I do feel better now that I realize
that it was setting mountpoint=none that caused the zfs send/recv to hang.
Allowing the default mountpoint setting fixed that problem. I'm now trying
with mountpoint=legacy, because I'd really rather leave it unmounted
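What I'm experimenting with now is roughly the following (dataset names are examples, and I'm assuming the build I'm on has the receive -u flag for skipping the mount):

    # receive without mounting anything, then hand mounting over to legacy control
    zfs send -R datapool/shares@today | zfs recv -duF backup
    zfs set mountpoint=legacy backup/shares

    # mount by hand only when I actually need to look at the backup
    mount -F zfs backup/shares /mnt/backup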
OK, so I left the thumb drive trying to back up all weekend. It got *most* of
the first snapshot copied over, about 50MB, and that's it. So I tried an
external USB hard drive today, and it actually bothered to copy over the
snapshots, but it does so very slowly. It copied over the first snapshot
readonly=on worked (at least with -F), but then it got the error creating a
mountpoint I mentioned above. So I took away readonly=on, and it got past that
part; however, the snapshots after the first one take an eternity. I left it
overnight and it managed to get from 21MB copied for the second
You've seen -F be necessary on some systems and not on others?
Also, was the mountpoint=legacy suggestion for my problem with not wanting to use -F,
or for my "cannot create mountpoint" problem? Or both?
If you use legacy mountpoints, does that mean that mounting the parent
filesystem doesn't actually
OK, in addition to my "why do I have to use -F" post above, now I've tried it
with -F, but after the first in the series of snapshots gets sent, it gives me a
"cannot mount '/backup/shares': failed to create mountpoint".
Yeah -F should probably work fine (I'm trying it as we speak, but it takes a
little while), but it makes me a bit nervous. I mean, it should only be
necessary if (as the error message suggests) something HAS actually changed,
right?
So, here's what I tried - first of all, I set the backup FS t
Oh and I had been doing this remotely, so I didn't notice the following error
before -
receiving incremental stream of datapool/[EMAIL PROTECTED] into backup/[EMAIL
PROTECTED]
cannot receive incremental stream: destination backup/shares has been modified
since most recent snapshot
This is repo
OK, I'm taking a step back here. Forgetting the incremental for a minute (which
is the part causing the segmentation fault), I'm simply trying to use zfs send
-R to get a whole filesystem and all of its snapshots. I ran the following,
after creating a compressed pool called backup:
zfs send -
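Something along these lines (the dataset and snapshot names below are placeholders, not the exact ones I used):

    # replicate the whole filesystem, including every snapshot under it,
    # into the already-created compressed pool called backup
    zfs send -R datapool/shares@latest | zfs recv -vd backup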
True, but a search for zfs "segmentation fault" returns 500 bugs. It's
possible one of those is related to my issue, but it would take all day to find
out. If it's not "flaky" or "unstable", I'd like to try upgrading to the
newest kernel first, unless my Linux mindset is truly out of place here.
Please forgive my ignorance. I'm fairly new to Solaris (Linux convert), and
although I recognize that Linux has the same concept of Segmentation faults /
core dumps, I believe my typical response to a Segmentation Fault was to
upgrade the kernel and that always fixed the problem (i.e. somebody
Is there more information that I need to post in order to help diagnose this
problem?