Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-02-06 Thread Will Murnane
On Thu, Jan 29, 2009 at 23:00, Will Murnane wrote:
> *sigh* The 9010b is ordered. Ground shipping, unfortunately, but
> eventually I'll post my impressions of it.

Well, the drive arrived today. It's as nice-looking as it appears in the pictures, and building a zpool out of it alone makes for so

Re: [zfs-discuss] Migrate filesystem+all snapshots from one local disk to another

2009-02-06 Thread Ian Collins
Jonny Gerold wrote:
> Hello,
> I have two local disks mounted on my system:
>
> Both are broken mirrors (I only have 2 sata ports, and need to move the
> data off one drive to the new drive)
>
> Mountpoints:
> /rpool-old
> - Filesystems -
> backup1
> bigmac-RAID
> hq-pbx
> therm7
>
> -- I have
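
For reference, the usual approach to this kind of migration is a recursive snapshot followed by a replication stream; a minimal sketch, with "rpool-new" as a hypothetical name for the destination pool:

  # zfs snapshot -r rpool-old@migrate
  # zfs send -R rpool-old@migrate | zfs recv -Fdu rpool-new

The -R flag replicates every descendant filesystem along with all of its snapshots, and -d on the receive side recreates the same dataset layout under the new pool.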

[zfs-discuss] Migrate filesystem+all snapshots from one local disk to another

2009-02-06 Thread Jonny Gerold
Hello, I have two local disks mounted on my system:

Both are broken mirrors (I only have 2 sata ports, and need to move the data off one drive to the new drive)

Mountpoints:
/rpool-old

- Filesystems -
backup1
bigmac-RAID
hq-pbx
therm7

-- I have about 20 snapshots for each filesystem --

/rp

[zfs-discuss] Problem with zfs mount lu and solaris 8/9 containers

2009-02-06 Thread Peter Pickford
Hi,

If this is not a ZFS question, please direct me to the correct place for it. I have a server with a Solaris 10 u6 ZFS root file system and Solaris 9 zones along with Solaris 10 zones. What is the best way to configure the root file system of a Solaris 9 container WRT ZFS file system
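
One common layout, sketched here with hypothetical dataset and zone names, is to give each branded zone its own dataset under the root pool so it can be snapshotted and cloned independently of the global zone:

  # zfs create -o mountpoint=/zones rpool/zones
  # zfs create rpool/zones/s9zone
  # chmod 700 /zones/s9zone
  # zonecfg -z s9zone
  zonecfg:s9zone> create -t SUNWsolaris9
  zonecfg:s9zone> set zonepath=/zones/s9zone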

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross Smith
Something to do with cache was my first thought. ZFS seems able to read and write from the cache quite happily for some time, regardless of whether the pool is live. If you're reading or writing large amounts of data, ZFS starts experiencing I/O faults and offlines the pool pretty quickly. I

Re: [zfs-discuss] ZFS snapshot splitting & joining

2009-02-06 Thread Miles Nordin
> "re" == Richard Elling writes: >> well, I think most backups are archival. re> Disagree. Archives tend to not be overwritten, ever. Backups re> have all sorts of management schemes to allow the backup media re> to be reused. The problem with storing 'zfs send' arises w

Re: [zfs-discuss] ZFS snapshot splitting & joining

2009-02-06 Thread Miles Nordin
> "re" == Richard Elling writes: >> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-December/053894.html re> Bzzt. Thanks for playing. That is: CR 6764193 was fixed in re> b105 http://bugs.opensolaris.org/view_bug.do?bug_id=6764193 Is re> there another? I don't und

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Brent Jones
On Fri, Feb 6, 2009 at 10:50 AM, Ross Smith wrote:
> I can check on Monday, but the system will probably panic... which
> doesn't really help :-)
>
> Am I right in thinking failmode=wait is still the default? If so,
> that should be how it's set as this testing was done on a clean
> install of sn

Re: [zfs-discuss] ZFS snapshot splitting & joining

2009-02-06 Thread Richard Elling
my last contribution to this thread (and there was much rejoicing!)

Miles Nordin wrote:
>> "re" == Richard Elling writes:
>
> re> The reason is that zfs send/recv has very good application,
> re> even in the backup space. There are, in fact, many people
> re>

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross Smith
I can check on Monday, but the system will probably panic... which doesn't really help :-)

Am I right in thinking failmode=wait is still the default? If so, that should be how it's set, as this testing was done on a clean install of snv_106. From what I've seen, I don't think this is a problem wi
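
The setting is easy to confirm from the shell; using the pool from the earlier test, zpool get reports the shipped default of wait:

  # zpool get failmode usbtest
  NAME     PROPERTY  VALUE  SOURCE
  usbtest  failmode  wait   default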

Re: [zfs-discuss] ZFS root pool over more than one disks?

2009-02-06 Thread Cindy . Swearingen
Hi Sandro,

A ZFS root pool can only be created on a single disk or on mirrored disks. Consider the following choices for the root pool:

1 disk
2 or 3 mirrored disks
2-way mirror of 2 disks
2 mirrored disks with 2 spares

In ZFS land, /usr is not a separate file system, nor does it need a separate di
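
As an example of the mirrored-plus-spares choice (device names here are placeholders; root pools want SMI-labeled disks, hence the s0 slices):

  # zpool create rpool mirror c1t0d0s0 c1t1d0s0
  # zpool add rpool spare c1t2d0s0 c1t3d0s0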

Re: [zfs-discuss] ZFS snapshot splitting & joining

2009-02-06 Thread Miles Nordin
> "re" == Richard Elling writes: re> The reason is that zfs send/recv has very good application, re> even in the backup space. There are, in fact, many people re> using it. [...] re> ZFS send is not an archival solution. You should use an re> archival method which is a

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Richard Elling
Ross, this is a pretty good description of what I would expect when failmode=continue. What happens when failmode=panic?
-- richard

Ross wrote:
> Ok, it's still happening in snv_106:
>
> I plugged a USB drive into a freshly installed system, and created a single
> disk zpool on it:
> # zpool cre
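
For reference, the property Richard is asking about is set per pool; on Ross's test pool that would be:

  # zpool set failmode=panic usbtest

after which the same repro should drop the machine into a panic on pool failure instead of continuing.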

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross
Just another thought off the back of this: would it be possible to modify zpool status to also:

- Generate a warning if a pool has not been exported cleanly, stating that there's possible data loss.
- Check /var/adm/messages, or FMA, and warn if there have been any messages related to drives att
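
Until zpool status grows checks like these, they can be approximated by hand with the FMA tools (a sketch only; output formats vary by release):

  # fmdump -e          # recent error telemetry, including disk events
  # fmadm faulty       # resources FMA currently considers faulted
  # grep -i retired /var/adm/messages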

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross
Ok, it's still happening in snv_106:

I plugged a USB drive into a freshly installed system, and created a single disk zpool on it:

# zpool create usbtest c1t0d0

I opened the (nautilus?) file manager in gnome, and copied the /etc/X11 folder to it. I then copied the /etc/apache folder to it, and
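
The same sequence can be repeated from a shell without the file manager (the report is cut off above; presumably the drive is unplugged mid-test to trigger the fault):

  # zpool create usbtest c1t0d0
  # cp -r /etc/X11 /usbtest
  # cp -r /etc/apache /usbtest
  ... unplug the USB drive, keep copying, and watch zpool status ...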

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross
Ok, I noticed somebody's flagged the bug as 'retest'. I don't know whether that's aimed at Sun or myself, but either way I'm installing snv_106 on a test machine now and will check whether this is still an issue.

[zfs-discuss] ZFS root pool over more than one disks?

2009-02-06 Thread Sandro
Hi folks

I have a system currently running UFS with four disks: / is a mirror of two disks and /usr is a mirror of two disks. I was wondering if such a config is still possible with ZFS boot? The disks are only 18 GB; that's why I would like to spread Solaris over four disks - or two mirrored
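
Since a root pool cannot span four disks (see Cindy's reply above), the usual compromise is a mirrored root pool plus a second mirrored pool for data; with placeholder device names:

  # zpool create rpool mirror c0t0d0s0 c0t1d0s0
  # zpool create tank mirror c0t2d0 c0t3d0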