Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-25 Thread Ralf Ramge
will continue working), but you must not forget that your PCI bridges, fans, power supplies, etc. remain single points of failure which can take the entire service down, like your pulling of the non-hotpluggable drive did. c) If you want both, you should buy a second server and create an NFS cluster

Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-25 Thread Ralf Ramge
Ralf Ramge wrote: [...] Oh, and please excuse the grammar mistakes and typos. I'm in a hurry, not a retard ;-) At least I think so. -- Ralf Ramge

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Ralf Ramge
price. -- Ralf Ramge

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Ralf Ramge
That's not a limitation, it just looks like one. The cluster's resource type "SUNW.nfs" decides whether a file system is shared or not, and it does this with the usual "share" and "unshare" commands in a separate dfstab file. The ZFS sharenfs flag is set
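For illustration, a minimal sketch of such a cluster-managed dfstab (the path, resource name and share options are assumptions, not taken from the original post):

---
# /global/nfs/SUNW.nfs/dfstab.nfs-rs -- shares managed by the SUNW.nfs resource
share -F nfs -o rw,anon=0 /global/nfs/export/data
---

The SUNW.nfs agent then runs share/unshare against this file during a switchover, independently of the ZFS sharenfs property.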

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-05 Thread Ralf Ramge
don't use it to replicate single boxes with local drives. And, in case OpenSolaris is not an option for you due to your company policies or support contracts, building a real cluster is also A LOT cheaper. -- Ralf Ramge

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-07 Thread Ralf Ramge
scenarios with tens of thousands of servers. Jim, it's okay. I know that you're a project leader at Sun Microsystems and that AVS is your main concern. But if there's one thing I cannot stand, it's getting stroppy replies from someone who should know better and should

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-08 Thread Ralf Ramge
en. In any case and any disk size scenario, that's something you don't want to have on your network if there's a chance to avoid it. -- Ralf Ramge

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-10 Thread Ralf Ramge
the other "online" drives) and get back to > "full speed" quickly? Or will I always have to wait until one of the servers > resilvers itself (from scratch?), and re-replicates itself? I have not tested this scenario, so I can't say anything about it. -- Ralf

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-16 Thread Ralf Ramge
Jorgen Lundman wrote: > If we were interested in finding a method to replicate data to a 2nd > x4500, what other options are there for us? If you already have an X4500, I think the best option for you is a cron job with incremental 'zfs send'. Or rsync. -- Ralf Ramge
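A minimal sketch of such an incremental 'zfs send' job (hostname, pool and snapshot names are made up for illustration and not taken from the thread):

---
#!/bin/sh
# replicate.sh -- ship the delta since the last snapshot to a second X4500
LAST=`cat /var/run/zfs-last-snap`
NOW=repl`date +%Y%m%d%H%M`
zfs snapshot tank/data@$NOW
zfs send -i tank/data@$LAST tank/data@$NOW | \
    ssh thumper2 /usr/sbin/zfs receive -F tank/data
echo $NOW > /var/run/zfs-last-snap
---

Run it from cron, e.g. hourly: 0 * * * * /usr/local/bin/replicate.sh. The very first run needs a full (non-incremental) send to seed the receiving pool.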

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread Ralf Ramge
about 25% of the performance of the existing Linux ext2 boxes I had to compete with. But in the end, striping 13 RAIDZ sets of 3 drives each + 1 hot spare delivered acceptable results in both categories. But it took me a lot of benchmarks to get there. -- Ralf Ramge
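A hedged sketch of how such a layout is built (device names are invented for illustration; only the first three of the thirteen 3-disk RAIDZ vdevs are written out):

---
zpool create -f bigpool \
    raidz c0t0d0 c1t0d0 c4t0d0 \
    raidz c0t1d0 c1t1d0 c4t1d0 \
    raidz c0t2d0 c1t2d0 c4t2d0 \
    spare c5t7d0
# the remaining ten 3-disk raidz vdevs are appended the same way:
# zpool add bigpool raidz <disk> <disk> <disk>
---

ZFS stripes writes across all top-level raidz vdevs, which is what gives this layout its throughput.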

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-22 Thread Ralf Ramge
ge:cluster:avs " on all mounted file systems and save it locally for my "zpool import wrapper" script. This is a cheap workaround, but honestly: You can use something like this for your own datacenter, but I bet nobody wants to sell it to a customer as a supported solution ;-)
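A rough sketch of that kind of workaround (the user-property name is truncated in the quote above, so the one used here is only an assumption):

---
# dump the custom cluster/AVS property of every file system so a
# "zpool import" wrapper script can evaluate it after a takeover
zfs get -H -o name,value -s local storage:cluster:avs > /var/cluster/zfs-avs.state
---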

Re: [zfs-discuss] Inconsistent df and du output ?

2008-09-22 Thread Ralf Ramge
65G   4.8G   54G   9%   /export [...] --- Looks good to me. Or did I miss something and misunderstand you? -- Ralf Ramge

Re: [zfs-discuss] RAIDZ one of the disk showing unavail

2008-09-26 Thread Ralf Ramge
Srinivas Chadalavada wrote: > I see the first disk as unavailable, how do I make it online? By replacing it with a non-broken one. -- Ralf Ramge

Re: [zfs-discuss] RAIDZ one of the disk showing unavail

2008-09-29 Thread Ralf Ramge
My understanding has been that the drive was unavailable right after the *creation* of the zpool. And replacing a broken drive with itself doesn't make sense. And after replacing the drive with a working one, ZFS should recognize this automatically. -- Ralf Ramge
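For completeness, a hedged example of how such a replacement usually looks once a working drive sits in the slot (pool and device names are placeholders):

---
zpool replace tank c1t0d0    # replace the failed disk with the new one in the same slot
zpool status tank            # watch the resilver progress
zpool clear tank c1t0d0      # clear old error counters if the device stays flagged
---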

Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-10 Thread Ralf Ramge
1x J4500 to eliminate the storage as a SPoF, too. -- Ralf Ramge

[zfs-discuss] [AVS] Question concerning reverse synchronization of a zpool

2007-07-11 Thread Ralf Ramge
and there's a workaround in Nevada build 53 and higher. Has somebody done a comparison? Can you share some experiences? I only have a few days left and I don't want to waste time on installing Nevada for nothing ... Thanks, Ralf -- Ralf Ramge

Re: [zfs-discuss] [AVS] Question concerning reverse synchronization of a zpool

2007-07-12 Thread Ralf Ramge
Ralf Ramge wrote: > Questions: > > a) I don't understand why the kernel panics at the moment. The zpool > isn't mounted on both systems, the zpool itself seems to be fine after a > reboot ... and switching the primary and secondary hosts just for > resyncing seems

Re: [zfs-discuss] zfs root boot (installgrub fails)

2007-07-23 Thread Ralf Ramge
doesn't exist. Did you try installgrub with c1d0s0? -- Ralf Ramge
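A hedged example of what that invocation would look like (standard GRUB stage file locations assumed):

---
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
---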

Re: [zfs-discuss] ? ZFS dynamic striping over RAID-Z

2007-08-02 Thread Ralf Ramge
s0
/usr/sbin/zpool add -f big raidz c1t7d0s0 c4t7d0s0 c6t7d0s0
/usr/sbin/zpool status
---
-- Ralf Ramge

Re: [zfs-discuss] Is ZFS efficient for large collections of small files?

2007-08-21 Thread Ralf Ramge
the average I/O transaction size. There's a good chance that your I/O performance will be best if you set your recordsize to a smaller value. For instance, if your average file size is 12 KB, try using an 8K or even 4K recordsize, and stay away from 16K or higher. -- Ralf Ramge
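A minimal example of the tuning being described (dataset name assumed; recordsize only affects files written after the change):

---
zfs set recordsize=8K tank/smallfiles
zfs get recordsize tank/smallfiles
---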

Re: [zfs-discuss] Mirrored zpool across network

2007-08-21 Thread Ralf Ramge
haven't tried. It's because Sun Cluster 3.2 instantly crashes on Thumpers with SATA-related kernel panics, and the OpenHA Cluster isn't available yet. -- Ralf Ramge

Re: [zfs-discuss] Mirrored zpool across network

2007-08-22 Thread Ralf Ramge
ion "set shareiscsi=on", to > get end users in using iSCSI. > Too bad the X4500 has too few PCI slots to consider buying iSCSI cards. The two existing slots are already needed for the Sun Cluster interconnect. I think iSCSI won't be real option unless the servers are shi

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-12 Thread Ralf Ramge
> amazing), but to tell you the truth we are keeping 2 large zpools in sync on > each system because we fear another zpool corruption. > > May I ask how you accomplish that? And why are you doing this? You should replicate your zpool to another host, instead of mirroring locally

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-12 Thread Ralf Ramge
ld sleep better if I were responsible for an application under such a service level agreement without full high availability. If a system reboot can be a single point of failure, what about the network infrastructure? Hardware errors? Or power outages? I'm definitely NOT some kind of know-it-all

Re: [zfs-discuss] X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.

2007-12-04 Thread Ralf Ramge
the error count which iostat reports without a reboot, so this method is not suitable for monitoring purposes. -- Ralf Ramge
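For context, the counters in question are the per-device error counters that can be listed like this (device name is a placeholder); they only reset at boot time:

---
iostat -En c5t4d0
---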

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Ralf Ramge
system and applying individual quotas afterwards. -- Ralf Ramge

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Ralf Ramge
of 100G:
>
> shares                228G    28K   220G    1%   /shares
> shares/production     100G   8,4G    92G    9%   /shares/production
>
> This would suit me perfectly, as this would be exactly what I wanted to do ;)
>
Yep, you got it. -- Ralf Ramge
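A hedged sketch of the setup behind that output (dataset names taken from the quoted df listing, the quota value from the example):

---
zfs create shares/production
zfs set quota=100G shares/production
df -h /shares/production
---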

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-18 Thread Ralf Ramge
is, because I'm not just talking about a single database - I'd need a total number of 42 shelves, and I'm pretty sure SUN doesn't offer Try&Buy deals at such a scale. -- Ralf Ramge