Re: [zfs-discuss] Mirrored zpool across network

2007-08-24 Thread Mark
OK, I've had a bit of a look around.

What about this setup?

Two boxes with all the hard drives in them, and all drives exported as iSCSI
targets. A third box puts all of the drives into a mirrored RAID-Z setup (one
box mirroring the other, each box's drives forming a RAID-Z group). This setup
will then be shared out via Samba.
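
(For concreteness, a rough sketch of how the third box might assemble the pool
from the iSCSI LUNs - the device names below are purely hypothetical, and note
that ZFS builds redundancy from individual devices, so in practice the pool
would be a set of mirror pairs with one LUN from each box rather than a literal
mirror of two RAID-Z groups:)

# on the third box, once the iSCSI targets from box A and box B show up
# as local devices (hypothetical names):
zpool create tank \
    mirror c2t1d0 c3t1d0 \
    mirror c2t2d0 c3t2d0 \
    mirror c2t3d0 c3t3d0
# each mirror pairs one LUN from box A (c2...) with one from box B (c3...),
# so either box can drop out without losing the pool
zfs create tank/share      # the filesystem Samba would then export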

Does anybody see a problem with this?

Also, I know this isn't ZFS-specific, but is there any upper limit on file size with Samba?

Thanks for all your help.

Mark
 
 


[zfs-discuss] Re: zfs destroy takes long time

2007-08-24 Thread Łukasz K
On 23-08-2007 at 22:15, Igor Brezac wrote:
> We are on Solaris 10 U3 with relatively recent recommended patches
> applied.  zfs destroy of a filesystem takes a very long time; 20GB usage
> and about 5 million objects takes about 10 minutes to destroy.  zfs pool
> is a 2 drive stripe, nothing too fancy.  We do not have any snapshots.
> 
> Any ideas?

Maybe your pool is fragmented and the pool space map is very big.

Run this script:

#!/bin/sh

# For each active pool, sum the on-disk size (in bytes) of all
# metaslab space maps.
echo '::spa' | mdb -k | grep ACTIVE \
  | while read pool_ptr state pool_name
do
  echo "checking pool map size [B]: $pool_name"

  echo "${pool_ptr}::walk metaslab|::print -d struct metaslab ms_smo.smo_objsize" \
    | mdb -k \
    | nawk '{sub("^0t","",$3); sum+=$3} END {print sum}'
done

This will show the size of the pool space map on disk (in bytes).
When destroying a filesystem or snapshot on a fragmented pool, the kernel
will have to:
1. read the space map (in memory the space map takes about 4x more RAM)
2. apply the changes
3. write the space map back (the space map is kept on disk in 2 copies)

I don't know any workaround for this bug.


Lukas




Re: [zfs-discuss] Mirrored zpool across network

2007-08-24 Thread Darren J Moffat
Mark wrote:
> Ok, had a bit of a look around,
> 
> What about this setup.
> 
> Two boxes with all the hard drives in them. And all drives iSCSI Targets. A 
> third Box puts all of the Drives into a mirrored RAIDz setup (one box 
> mirroring the other, each has a RAIDz zfs zpool). This setup wil be shared 
> via samba out.
> 
> Does anybody see a problem with this?

Seems reasonable to me.  However, you haven't said anything about
how the "third box" is networked to the "first box" and "second box".

With iSCSI I HIGHLY recommend at least using IPsec AH so that you get
integrity protection of the packets - the TCP checksum is not enough.
If you care enough to use the sha256 checksum with ZFS, you should care
enough to ensure the data on the wire is strongly checksummed too.
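
(As an illustration only - a minimal sketch of what an AH policy between the
two hosts might look like in /etc/inet/ipsecinit.conf; the addresses are
hypothetical and ipsecconf(1M) has the exact syntax for your release:)

# integrity-protect all traffic between the initiator and this target
# (hypothetical addresses; could be narrowed to the iSCSI port, TCP 3260)
{laddr 192.168.10.1 raddr 192.168.10.2} ipsec {auth_algs sha1}

On the ZFS side, "zfs set checksum=sha256 tank" turns on the stronger
checksum per dataset.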

Also consider that if this were direct attach you would probably be using
two separate HBAs, so you may want to consider using different physical
NICs and/or IPMP or other network failover technologies (depending on
what network hardware you have).
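
(For example, a rough sketch of a minimal link-based IPMP group over two
hypothetical NICs, assuming they are already plumbed - the details vary with
your Solaris release and whether you want probe-based failure detection:)

# place both interfaces carrying the iSCSI traffic into one failover group
ifconfig bge0 group storage0
ifconfig bge1 group storage0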

I did a similar setup recently where I had a zpool on one machine and
created two iSCSI targets (using ZFS) and then created a mirror using
those two LUNs on another machine.  In the end I removed the ZFS pool on
the target side and shared out the raw disks with iSCSI and built the
pool on the initiator machine that way.  Why?  Because I couldn't
rationalise to myself what value ZFS was giving me in this particular
case since I was sharing the whole disk array.  In cases where you
aren't sharing the whole array to a single initiator, I can see
value in having the iSCSI targets be zvols.
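
(For reference, a minimal sketch of the zvol-as-target variant - pool and
volume names are hypothetical:)

# on the target box: carve a zvol out of the pool and export it over iSCSI
zfs create -V 500g tank/lun0
zfs set shareiscsi=on tank/lun0
# on the initiator the LUN then appears as an ordinary disk that can be
# put into a pool with zpool create / zpool add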


-- 
Darren J Moffat


Re: [zfs-discuss] Snapshots impact on performance

2007-08-24 Thread Victor Latushkin
Hi Lukasz and all,

I just returned from a month-long sick leave, so I need some time to sort
through a pile of emails, do a SPARC build and some testing, and then I'll
be able to provide you with my changes in some form. I hope this will
happen next week.

Cheers,
Victor

Łukasz K wrote:
> On 26-07-2007 at 13:31, Robert Milkowski wrote:
>> Hello Victor,
>>
>> Wednesday, June 27, 2007, 1:19:44 PM, you wrote:
>>
>> VL> Gino wrote:
 Same problem here (snv_60).
 Robert, did you find any solutions?
>> VL> Couple of week ago I put together an implementation of space maps
>> which
>> VL> completely eliminates loops and recursion from space map alloc
>> VL> operation, and allows to implement different allocation strategies
>> quite
>> VL> easily (of which I put together 3 more). It looks like it works for me
>> VL> on thumper and my notebook with ZFS Root though I have almost no
>> time to
>> VL> test it more these days due to year end. I haven't done SPARC build
>> yet
>> VL> and I do not have test case to test against.
>>
>> VL> Also, it comes at a price - I have to spend some more time
>> (logarithmic,
>> VL> though) during all other operations on space maps and is not
>> optimized now.
>>
>> Lukasz (cc) - maybe you can test it and even help on tuning it?
>>
> Yes, I can test it. I'm building environment to compile opensolaris
> and test zfs. I will be ready next week.
> 
> Victor, can you tell me where to look for your changes ?
> How to change allocation strategy ?
> I can see that changing space_map_ops_t
> I can declare diffrent callback functions.
> 
> Lukas


Re: [zfs-discuss] Snapshots impact on performance

2007-08-24 Thread Łukasz K
Great, I have the latest snv_70 sources and I'm working on it.

Lukas


On 24-08-2007 at 14:57, Victor Latushkin wrote:
> Hi Lukasz and all,
> 
> I just returned from month long sick-leave, so I need some time to sort
> pile of emails, do SPARC build and some testing and then I'll be able to
> provide you with my changes in some form. Hope this will happen next week.
> 
> Cheers,
> Victor
> 
> Łukasz K wrote:
> > On 26-07-2007 at 13:31, Robert Milkowski wrote:
> >> Hello Victor,
> >>
> >> Wednesday, June 27, 2007, 1:19:44 PM, you wrote:
> >>
> >> VL> Gino wrote:
>  Same problem here (snv_60).
>  Robert, did you find any solutions?
> >> VL> Couple of week ago I put together an implementation of space maps
> >> which
> >> VL> completely eliminates loops and recursion from space map alloc
> >> VL> operation, and allows to implement different allocation strategies
> >> quite
> >> VL> easily (of which I put together 3 more). It looks like it works for me
> >> VL> on thumper and my notebook with ZFS Root though I have almost no
> >> time to
> >> VL> test it more these days due to year end. I haven't done SPARC build
> >> yet
> >> VL> and I do not have test case to test against.
> >>
> >> VL> Also, it comes at a price - I have to spend some more time
> >> (logarithmic,
> >> VL> though) during all other operations on space maps and is not
> >> optimized now.
> >>
> >> Lukasz (cc) - maybe you can test it and even help on tuning it?
> >>
> > Yes, I can test it. I'm building environment to compile opensolaris
> > and test zfs. I will be ready next week.
> > 
> > Victor, can you tell me where to look for your changes ?
> > How to change allocation strategy ?
> > I can see that changing space_map_ops_t
> > I can declare diffrent callback functions.
> > 
> > Lukas


[zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
Is it a supported configuration to have a single LUN presented to 4 different
Sun servers over a Fibre Channel network and then to mount that LUN on each
host as the same ZFS filesystem?

We need any of the 4 servers to be able to write data to this shared FC disk. 
We are not using NFS as we do not want to go over the network, just direct to 
the FC disk from any of the hosts. 



Thanks
 
 


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Darren Dunham
> Is it a supported configuration to have a single LUN presented to 4
> different Sun servers over a fiber channel network and then mounting
> that LUN on each host as the same ZFS filesystem?

ZFS today does not support multi-host simultaneous mounts.  There's no
arbitration for the pool metadata, so you'll end up corrupting the
filesystem if you force it.
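
(The pool can still move between hosts, just not be imported by more than
one at a time - a hypothetical sketch:)

host1# zpool export tank
host2# zpool import tank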

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Ronald Kuehn
On Friday, August 24, 2007 at 20:14:05 CEST, Matt B wrote:

Hi,

> Is it a supported configuration to have a single LUN presented to 4 different 
> Sun servers over a fiber channel network and then mounting that LUN on each 
> host as the same ZFS filesystem?

No. You can neither access ZFS nor UFS in that way. Only one
host can mount the file system at the same time (read/write or
read-only doesn't matter here).

> We need any of the 4 servers to be able to write data to this shared FC disk. 
> We are not using NFS as we do not want to go over the network, just direct to 
> the FC disk from any of the hosts. 

If you don't want to use NFS, you can use QFS in such a configuration.
The shared writer approach of QFS allows mounting the same file system
on different hosts at the same time.

Ronald
-- 
Sun Microsystems GmbH Ronald Kühn, TSC - Solaris
Sonnenallee 1 [EMAIL PROTECTED]
D-85551 Kirchheim-Heimstetten Tel: +49-89-46008-2901
Amtsgericht München: HRB 161028   Fax: +49-89-46008-2954
Geschäftsführer: Wolfgang Engels, Dr. Roland Bömer
Vorsitzender des Aufsichtsrates: Martin Häring


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
That is what I was afraid of.

In regards to QFS and NFS, isn't QFS something that must be purchased? I looked
on the Sun website and it appears to be a little pricey.

NFS is free, but is there a way to use NFS without traversing the network? We
already have our SAN presenting this disk to each of the four hosts using Fibre
Channel HBAs, so the network is not part of the picture at this point.
Is there some way to utilize NFS with the SAN and the 4 hosts that are
fibre-attached?

Thanks
 
 


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
Can't use the network because these 4 hosts are database servers that will be
dumping close to a terabyte every night. If we put that over the network, all
the other servers would be starved.
 
 


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Ronald Kuehn
On Friday, August 24, 2007 at 20:41:04 CEST, Matt B wrote:
> That is what I was afraid of.
> 
> In regards to QFS and NFS, isnt QFS something that must be purchased? I 
> looked on the SUN website and it appears to be a little pricey.
> 
> NFS is free, but is there a way to use NFS without traversing the network? We 
> already have our SAN presenting this disk to each of the four hosts using 
> Fiber HBA's so the network is not part of the picture at this point. 
> Is there some way to utilize NFS with the SAN and the 4 hosts that are fiber 
> attached?

You cannot use NFS to talk directly to SAN devices.
What's wrong with using the network? Attach the SAN devices to one
host (or more hosts in a Sun Cluster configuration to get HA) and share
the file systems using NFS. That way you are able to enjoy the benefits
of ZFS.
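
(A minimal sketch of that approach, with hypothetical pool, device and
host names:)

# on the host that owns the SAN LUNs:
zpool create dbpool c4t0d0 c4t1d0
zfs create dbpool/dumps
zfs set sharenfs=rw dbpool/dumps
# on each database server:
mount -F nfs nfshost:/dbpool/dumps /mnt/dumps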

Ronald


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Ronald Kuehn
On Friday, August 24, 2007 at 21:06:28 CEST, Matt B wrote:
> Cant use the network because these 4 hosts are database servers that will be 
> dumping close to a Terabyte every night. If we put that over the network all 
> the other servers would be starved

I'm afraid there aren't many options other than:

- a shared file system like QFS to directly access the SAN devices
  from different hosts in parallel
- adding more network capacity (either additional network connections
  or faster links like 10 Gigabit Ethernet) to get the required
  performance with NFS

Ronald 


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Darren Dunham
> That is what I was afraid of.
> 
> In regards to QFS and NFS, isnt QFS something that must be purchased?
> I looked on the SUN website and it appears to be a little pricey.

That's correct.  Earlier this year Sun declared an intent to open-source
QFS/SAMFS, but that doesn't help you install it today.

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >


Re: [zfs-discuss] Is there _any_ suitable motherboard?

2007-08-24 Thread Ian Collins
[EMAIL PROTECTED] wrote:
>
> If power consumption and heat is a consideration, the newer Intel CPUs
> have an advantage in that Solaris supports native power management on
> those CPUs.
>
>   
Are P35 chipset boards supported?

Ian


Re: [zfs-discuss] Mirrored zpool across network

2007-08-24 Thread Mark
Hi,

A few questions: I seem to remember that in WAN environments IPsec can have a
reasonably large performance impact - how large is this impact, and is there
some way to mitigate it? The problem is we could need to use all of a gigabit
link's bandwidth (possibly more). Is IPsec AH different from the cryptographic
algorithms used to keep VPNs secure?

Also, I had a look at IPMP - it sounds really good. I was wondering yesterday
about the possibility of linking a few gigabit links together, as FB is very
expensive and 10GbE is almost the same. I read in the Wikipedia article that by
using IPMP the bandwidth is increased due to sharing across all the network
cards - is this true?
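
(For what it's worth, outbound throughput can also be spread with 802.3ad
link aggregation where the switch supports it - a hypothetical sketch using
dladm on a recent build; interface names are made up:)

# aggregate two gigabit NICs into one logical link with key 1
dladm create-aggr -d bge0 -d bge1 1
ifconfig aggr1 plumb 192.168.10.5 netmask 255.255.255.0 up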

Thanks again for all your help

Cheers
Mark
 
 


Re: [zfs-discuss] Is there _any_ suitable motherboard?

2007-08-24 Thread Neal Pollack
Ian Collins wrote:
> [EMAIL PROTECTED] wrote:
>   
>> If power consumption and heat is a consideration, the newer Intel CPUs
>> have an advantage in that Solaris supports native power management on
>> those CPUs.
>>
>>   
>> 
> Are P35 chipset boards supported?
>   

The P35 "chipset" works fine with Solaris.
Whether or not a given "motherboard" works with Solaris is decided by
the vendor's choice of additional chips/drivers for things like
SATA, network, and other ports.
The Intel network core in the P35 chipset (ICH9 southbridge) works with
Nevada.
The Intel SATA ports in the ICH9 southbridge work.
Some of the third-party boards add two additional SATA ports on an
unsupported third-party chip.  Beware.
Some boards add a second Ethernet port using a Marvell or other
unsupported controller.

Neal

> Ian


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread James C. McPherson
Ronald Kuehn wrote:
> On Friday, August 24, 2007 at 21:06:28 CEST, Matt B wrote:
>> Cant use the network because these 4 hosts are database servers
>> that will be dumping close to a Terabyte every night. If we put
>> that over the network all the other servers would be starved
> 
> I'm afraid there aren't many options other than:
> 
> - a shared file system like QFS to directly access the SAN devices
>   from different hosts in parallel
> - adding more network capacity (either additional network connections
>   or faster links like 10 Gigabit Ethernet) to get the required
>   performance with NFS

Background - I used to work for Sun's CPRE and PTS support
organisations, getting customers back on their feet after
large scale disasters in the storage area.


Here's where I start to take a hard line about configs.


If your 4 hosts are DB servers dumping ~1TB per night
and you cannot afford either:

- sufficient space for them to have their own large LUNs, or
- a dedicated GigE network for dumping that data,

then you need to make an investment in your business, and
do so very soon. That would mean either purchasing QFS and/or
purchasing extra space for your array.

You haven't - as far as I can see - explained why you must
have each of these hosts able to read and write that
data in a shared configuration.


Perhaps if you explain what you are really trying to
achieve we could help you more appropriately.



James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
   http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
