will continue
working), but you must not forget that your PCI bridges, fans, power
supplies, etc. remain single points of failure which can take the entire
service down, just like your pulling of the non-hotpluggable drive did.
c) If you want both, you should buy a second server and create an NFS
cluster.
Ralf Ramge wrote:
[...]
Oh, and please excuse the grammar mistakes and typos. I'm in a hurry,
not a retard ;-) At least I think so.
--
Ralf Ramge
price.
--
Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA
Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/
1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe
Amtsgericht Montabaur HRB 6484
Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Thomas
Gottschlich, [...]
That's not a limitation, it just looks like one. The cluster's resource
type called "SUNW.nfs" decides if a file system is shared or not. And it
does this with the usual "share" and "unshare" commands in a separate
dfstab file. The ZFS sharenfs flag is set to "off" for such file systems.
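A stripped-down sketch of what that looks like in practice; the pool name,
the path prefix and the resource name below are placeholders, not the real
ones from my setup:

  # the cluster, not ZFS, does the sharing
  zfs set sharenfs=off tank/export
  # the SUNW.nfs resource runs share/unshare based on its own dfstab file,
  # which contains plain share(1M) lines, e.g.
  #   share -F nfs -o rw,anon=0 /tank/export
  cat /global/nfs/SUNW.nfs/dfstab.nfs-rs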
don't use it to
replicate single boxes with local drives. And, in case OpenSolaris is
not an option for you due to your company policies or support contracts,
building a real cluster is also A LOT cheaper.
--
Ralf Ramge
scenarios with
tens of thousands of servers.
Jim, it's okay. I know that you're a project leader at Sun Microsystems
and that AVS is your main concern. But if there's one thing I cannot
stand, it's getting stroppy replies from someone who should know
better and should
en.
In any case and any disk size scenario, that's something you don't want
to have on your network if there's a chance to avoid it.
--
Ralf Ramge
> the other "online" drives) and get back to
> "full speed" quickly? or will I always have to wait until one of the servers
> resilvers itself (from scratch?), and re-replicates itself??
I have not tested this scenario, so I can't say anything about this.
--
Ralf
Jorgen Lundman wrote:
> If we were interested in finding a method to replicate data to a 2nd
> x4500, what other options are there for us?
If you already have an X4500, I think the best option for you is a cron
job with incremental 'zfs send'. Or rsync.
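Roughly like this, run from cron; pool, file system, snapshot and host
names are only examples, and you'd want proper error handling and logging
around it:

  #!/bin/sh
  # minimal incremental replication of tank/data to the second X4500
  PREV=`cat /var/run/repl.last`               # snapshot sent in the last run
  NOW=repl-`date '+%Y%m%d%H%M'`
  /usr/sbin/zfs snapshot tank/data@$NOW
  /usr/sbin/zfs send -i tank/data@$PREV tank/data@$NOW | \
      ssh thumper2 /usr/sbin/zfs receive -F tank/data
  if [ $? -eq 0 ]; then
      /usr/sbin/zfs destroy tank/data@$PREV   # keep only the common snapshot
      echo $NOW > /var/run/repl.last
  fi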
--
Ralf Ramge
about 25%
of the performance of the existing Linux ext2 boxes I had to compete
with. But in the end, striping 13 RAIDZ sets of 3 drives each + 1 hot
spare delivered acceptable results in both categories. But it took me a
lot of benchmarks to get there.
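For illustration, such a layout is built like this; the disk names are
made up, not the actual X4500 controller/target numbers I used:

  # three disks per RAID-Z set; repeat the "raidz ..." group until there
  # are 13 of them, then add the hot spare
  zpool create tank \
      raidz c0t0d0 c1t0d0 c4t0d0 \
      raidz c0t1d0 c1t1d0 c4t1d0 \
      raidz c0t2d0 c1t2d0 c4t2d0 \
      spare c5t0d0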
--
Ralf Ramge
ge:cluster:avs " on all mounted file systems and save
it locally for my "zpool import wrapper" script. This is a cheap
workaround, but honestly: You can use something like this for your own
datacenter, but I bet nobody wants to sell it to a customer as a
supported solution ;-)
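The whole workaround boils down to something like this; the property name
and the state file are made up for this example:

  # remember which mounted file systems carry the AVS/cluster marker
  zfs get -H -o name,value myorg:cluster:avs > /etc/zfs-avs.state
  # the "zpool import wrapper" then checks /etc/zfs-avs.state before it
  # decides whether importing the pool on this node is safe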
65G   4.8G   54G    9%   /export
[...]
---
Looks good to me.
Or did I miss something and misunderstand you?
--
Ralf Ramge
Srinivas Chadalavada wrote:
> I see the first disk as unavailable. How do I make it online?
By replacing it with a non-broken one.
--
Ralf Ramge
g has been that the drive was unavailable
right after the *creation* of the zpool. And replacing a broken drive
with itself doesn't make sense. And after replacing the drive with a
working one, ZFS should recognize this automatically.
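For the record, the usual procedure after swapping in a working disk looks
like this (pool and device names are just examples):

  # same slot, same device name: a one-argument replace is enough
  zpool replace tank c1t0d0
  # if the new disk shows up under a different name, give both old and new
  zpool replace tank c1t0d0 c2t0d0
  zpool status tank        # watch the resilver run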
--
Ralf Ramge
1x J4500 to eliminate the storage as a SPoF, too.
--
Ralf Ramge
and there's a workaround in Nevada build 53 and higher.
Has somebody done a comparison? Can you share some experiences? I only
have a few days left, and I don't want to waste time installing Nevada
for nothing ...
Thanks,
Ralf
--
Ralf Ramge
Ralf Ramge wrote:
> Questions:
>
> a) I don't understand why the kernel panics at the moment. The zpool
> isn't mounted on both systems, the zpool itself seems to be fine after a
> reboot ... and switching the primary and secondary hosts just for
> resyncing seems
doesn't exist. Did you try
installgrub with c1d0s0?
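In case it helps, the usual invocation looks like this; adjust the slice
to whatever your boot disk really is:

  # install the GRUB stage1/stage2 loaders onto slice 0 of the disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0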
--
Ralf Ramge
s0
/usr/sbin/zpool add -f big raidz c1t7d0s0 c4t7d0s0 c6t7d0s0
/usr/sbin/zpool status
---
--
Ralf Ramge
the average I/O
transaction size. There's a good chance that your I/O performance will
be best if you set your recordsize to a smaller value. For instance, if
your average file size is 12 KB, try using an 8K or even 4K recordsize and
stay away from 16K or higher.
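Setting it is a one-liner, but keep in mind that it only affects blocks
written afterwards (the file system name is just an example):

  zfs set recordsize=8k tank/smallfiles
  zfs get recordsize tank/smallfiles      # verify the new value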
--
Ralf Ramge
haven't
tried. It's because Sun Cluster 3.2 instantly crashes on Thumpers with
SATA-related kernel panics, and the OpenHA Cluster isn't available yet.
--
Ralf Ramge
ion "set shareiscsi=on", to
> get end users in using iSCSI.
>
Too bad the X4500 has too few PCI slots to consider buying iSCSI cards.
The two existing slots are already needed for the Sun Cluster
interconnect. I think iSCSI won't be a real option unless the servers are
shi
> amazing), but to tell you the truth we are keeping 2 large zpools in sync on
> each system because we fear an other zpool corruption.
>
>
May I ask how you accomplish that?
And why are you doing this? You should replicate your zpool to another
host, instead of mirroring locally
ld sleep better if I were responsible for an application
under such a service level agreement without full high availability. If
a system reboot can be a single point of failure, what about the network
infrastructure? Hardware errors? Or power outages?
I'm definitely NOT some kind of know-it-all
the error count which iostat
reports without a reboot, so this method is not suitable for monitoring
purposes.
--
Ralf Ramge
ystem and applying individual quotas afterwards.
--
Ralf Ramge
> of 100G:
>
> shares                 228G    28K   220G    1%    /shares
> shares/production       100G   8,4G    92G    9%    /shares/production
>
> This would suit me perfectly, as this would be exactly what I wanted to do ;)
>
>
Yep, you got it.
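That output is the result of a plain quota on the child file system, along
the lines of (using the names from your listing):

  zfs create shares/production               # if it doesn't exist yet
  zfs set quota=100G shares/production
  df -h /shares /shares/production           # the child now reports 100G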
--
Ralf Ramge
is, because I'm not just talking about a
single database. I'd need a total number of 42 shelves, and I'm pretty
sure Sun doesn't offer Try&Buy deals at such a scale.
--
Ralf Ramge