Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Marcus Sundman
Bart Smaalders <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > Bart Smaalders <[EMAIL PROTECTED]> wrote:
> >> UTF8 is the answer here.  If you care about anything more than
> >> simple ascii and you work in more than a single locale/encoding,
> >> use UTF8. You may not understand the meaning of a filename, but at
> >> least you'll see the same characters as the person who wrote it.
> > 
> > I think you are a bit confused.
> > 
> > A) If you meant that _I_ should use UTF-8 then that alone won't
> > help. Let's say the person who created the file used ISO-8859-1 and
> > named it 'häst', i.e., 0x68e47374. If I then use UTF-8 when
> > displaying the filename my program will be faced with the problem
> > of what to do with the second byte, 0xe4, which can't be decoded
> > using UTF-8. ("häst" is 0x68c3a47374 in UTF-8, in case someone
> > wonders.)
> 
> What I mean is very simple:
> 
> The OS has no way of merging your various encodings.  If I create a
> directory, and have people from around the world create a file
> in that directory named after themselves in their own character sets,
> what should I see when I invoke:
> 
> % ls -l | less
> 
> in that directory?

Either (1) programs can find out what the encoding is, or (2) programs
must assume the encoding is what some environment variable (or
somesuch) is set to.

(1) The OS doesn't have to "merge" anything, just let the programs
handle any conversions the programs see fit.

(2) The OS must transcode the filenames. If a filename is incompatible
with the target encoding then the offending characters must be escaped.
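The mismatch is easy to reproduce. Here is a minimal Python 3 sketch of the two byte strings involved, with `backslashreplace` standing in as one possible escaping scheme for option (2) (the choice of escape syntax is just an illustration, not a proposal):

```python
# 'häst' as written to disk by a program using ISO-8859-1:
raw = b"\x68\xe4\x73\x74"

# An ISO-8859-1 reader recovers the intended name:
assert raw.decode("iso-8859-1") == "häst"

# A strict UTF-8 reader cannot decode the second byte, 0xe4:
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    print("0xe4 is not valid UTF-8 here")

# Option (2): transcode, escaping the offending bytes instead of failing.
escaped = raw.decode("utf-8", errors="backslashreplace")
print(escaped)  # h\xe4st
```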


> If you wish to share filenames across locales, I suggest you and
> everyone else writing to that directory use an encoding that will work
> across all those locales.  The encoding that works well for this on
> Unix systems is UTF8, since it leaves '/' and NULL alone.

Again, that won't work. First of all, there is no way to force programs to
use UTF-8. I can't even force my own programs to do that. (E.g., unrar or
unzip or tar or 7z (can't remember which one(s)) just dump the filename data
to the fs in whatever encoding it was in inside the archive, and I have at
least one collaboration program that does likewise.) Now, if I force the fs
to only accept filenames compatible with UTF-8 (i.e., utf8only) then I risk
losing files. I'd rather have files with incomprehensible filenames than not
have them at all. OTOH, if I allow filenames incompatible with UTF-8 then my
programs can't necessarily access them if I use UTF-8. I could use some
8-bits/char encoding (e.g., ISO-8859-15), but I'd rather not, since the
world is moving to UTF-8 and I'd just be dragging behind. And then I would
also have problems with garbage filenames when they use UTF-8 or some other
encoding. Also, I'm quite sure I have files with names containing characters
not in ISO-8859-15.

So, you see, there is no way for me to use filenames intelligibly unless
their encodings are knowable. (In fact I'm quite surprised that zfs
doesn't (and even can't) know the encoding(s) of filenames. Usually Sun
seems to make relatively sane design decisions. This, however, is more
what I'd expect from linux with their overpragmatic "who cares if it's
sane, as long as it kinda works"-attitudes.)


Regards,

Marcus
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Marcus Sundman
Chris Gilligan <[EMAIL PROTECTED]> wrote:
> 2. in a raidz do all the disks have to be the same size?

Related question:
Does a raidz have to be either only full disks or only slices, or can
it be mixed? E.g., can you do a 3-way raidz with 2 complete disks and
one slice (of equal size as the disks) on a 3rd, larger, disk?


- Marcus


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Adam Leventhal
>> 2. in a raidz do all the disks have to be the same size?

Disks don't have to be the same size, but only as much space will be used
on the larger disks as is available on the smallest disk. In other words,
there's no benefit to be gained from this approach.

> Related question:
> Does a raidz have to be either only full disks or only slices, or can
> it be mixed? E.g., can you do a 3-way raidz with 2 complete disks and
> one slice (of equal size as the disks) on a 3rd, larger, disk?

Sure. One could do this, but it's kind of a hack. I imagine you'd like
to do something like match a disk of size N with another disk of size 2N
and use RAID-Z to turn them into a single vdev. At that point it's
probably a better idea to build a striped vdev and use ditto blocks to do
your data redundancy by setting copies=2.
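Roughly, with hypothetical device names (c1t0d0 the size-N disk, c1t1d0 the 2N disk, sliced as s0/s1):

```shell
# Option A (the hack): raidz across one whole disk and two slices of the larger disk
zpool create tank raidz c1t0d0 c1t1d0s0 c1t1d0s1

# Option B (better): plain stripe across both disks, ditto blocks for data redundancy
zpool create tank c1t0d0 c1t1d0
zfs set copies=2 tank    # each data block stored twice, on different disks when possible
```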

Adam

--
Adam Leventhal, Fishworkshttp://blogs.sun.com/ahl



Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Joerg Schilling
Marcus Sundman <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED] (Joerg Schilling) wrote:
> > Marcus Sundman <[EMAIL PROTECTED]> wrote:
> > > [EMAIL PROTECTED] (Joerg Schilling) wrote:
> > > > [...] ISO-8859-1 (the low 8 bits of UNICODE) [...]
> > >
> > > Unicode is not an encoding, but you probably mean "the low 8 bits of
> > > UCS-2" or "the first 256 codepoints in Unicode" or somesuch.
> > 
> > Unicode _is_ an encoding that uses 21 (IIRC) bits.
>
> AFAIK you are incorrect. Unicode is a standard that, among other
> things, defines a _number_ for each character. A number does not equal

And I tend to call the relation character <-> number an "encoding".

As the "number" may be outside the range of "classical characters" that
on most systems live inside octets, there is a need to use another encoding
on top of the Unicode encoding. This second encoding is typically UTF-8 on UNIX.
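Concretely (Python used here just to show the two layers):

```python
# Layer 1: the Unicode code point (character <-> number)
ch = "ä"
assert ord(ch) == 0xE4                       # code point U+00E4

# Layer 2: a byte encoding of that number
assert ch.encode("utf-8") == b"\xc3\xa4"     # UTF-8: two bytes
assert ch.encode("iso-8859-1") == b"\xe4"    # ISO-8859-1: the low 8 bits directly

# Code points above U+FFFF need up to four UTF-8 bytes:
assert "\U0001D11E".encode("utf-8") == b"\xf0\x9d\x84\x9e"   # U+1D11E
```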

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Joerg Schilling
Bart Smaalders <[EMAIL PROTECTED]> wrote:

> The OS has no way of merging your various encodings.  If I create a
> directory, and have people from around the world create a file
> in that directory named after themselves in their own character sets,
> what should I see when I invoke:
>
> % ls -l | less
>
> in that directory?
>
> If you wish to share filenames across locales, I suggest you and
> everyone else writing to that directory use an encoding that will work
> across all those locales.  The encoding that works well for this on
> Unix systems is UTF8, since it leaves '/' and NULL alone.

The problem with this approach is that all users need to change their locale
encoding. Some of them may not be able to do so because they need to log in to
older systems that do not support UTF-8.

We would have had fewer problems if Unicode had been introduced 10 years
earlier. Because of missing encoding support for their countries, people in
Russia, China, etc. created their own encoding schemes in the 1980s that are
still in use.

Jörg



[zfs-discuss] On Disk Encryption Projects: ZFS & lofi Crypto at LOSUG 19/03

2008-03-05 Thread Joy Marshall
We are lucky to have Darren Moffatt, Sun Senior Staff Engineer and Project Lead 
for the ZFS & lofi Crypto On Disk Encryption OpenSolaris project speaking at 
the next LOSUG meeting on 19th March 2008.

For full details, take a look at http://www.opensolaris.org/os/project/losug/ & 
don't forget to register in good time to book your seat!

Joy
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Anton B. Rang
> Do you happen to know where programs in (Open)Solaris look when they
> want to know how to encode text to be used in a filename? Is it
> LC_CTYPE?

In general, they don't.  Command-line utilities just use the sequence of
bytes entered by the user.  GUI-based software does as well, but the
encoding used for user input can sometimes be selected.

> > NFS doesn't provide a mechanism to send the encoding with the
> > filename; I don't believe that CIFS does, either.
> 
> Really?!? That's insane! How do programs know how to
> encode filenames to be sent over NFS or CIFS?

For NFSv3, you guess.  :-)  It's just stream-of-bytes.

For NFSv4, the encoding used to transmit data is supposed to be UTF-8,
but this isn't enforced by most clients.  What's more, since the encoding
isn't stored, the reverse translation (UTF-8 to local encoding) would have
to be done by the NFS client based on ... something.  Usually this is
"just return the raw bytes and let the application deal with the mess."

For CIFS, you can send either "ASCII" (which I believe really means
uninterpreted bytes) or UTF-16.  If you're working in UTF-16, and you're on
Windows, there are two sets of APIs.  The Unicode APIs will return the
proper Unicode names.  The non-Unicode (legacy) APIs will encode the
names according to your system's current "code page" setting.

-- Anton
 
 


Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Marcus Sundman
"Anton B. Rang" <[EMAIL PROTECTED]> wrote:
> > Do you happen to know where programs in (Open)Solaris look when they
> > want to know how to encode text to be used in a filename? Is it
> > LC_CTYPE?
> 
> In general, they don't.  Command-line utilities just use the sequence
> of bytes entered by the user.

Obviously that depends on the application. A command-line utility that
interprets a normal XML file containing filenames knows the characters
but not the bytes. The same goes for command-line utilities that
receive the filenames as text (e.g., some file transfer utility or
daemon).

> GUI-based software does as well, but the encoding used for user input
> can sometimes be selected

Hmm.. I'm usually programming at quite a high level, so I'm not very
familiar with how stuff works under the hood...
If I run xev on my linux box (I don't have X on any (Open)Solaris) and
press the Ä-key on my keyboard it says "keycode 48" and "keysym 0xe4",
and then "XLookupString gives 2 bytes: (c3 a4) "ä"". Thus at least
XLookupString seems to know that I'm using UTF-8. Where did it (or
whoever converted 0xe4 to 0xc3a4) get the needed info?


- Marcus


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Chris Gilligan
ok maybe i should rewrite my question in a better way.

My data is mostly made up of things I can afford to lose, but would very much
not like to lose if a disk dies, if at all possible. Due to this I have used a
raid5 array in the past. The issue I have had with this is the need to replace
all 10 disks at once to increase my storage in the raid, and to borrow another
raid card so I can connect all 20 disks at once while I move the data.

What I would like to be able to do is slowly grow my capacity by replacing
320GB disks with 1TB disks one at a time, and sometimes adding extra disks to
the system, as it can support up to 12 disks. Does anyone have any ideas on the
best way to do this with minimum space loss?

This is just home-based storage, so I am trying to do everything on the cheap.
I thought raidz may have been the answer, but that does not seem to be the case.

CHris
 
 


[zfs-discuss] Replacing failing drive

2008-03-05 Thread Matt Cohen
Hi.  We have a hard drive failing in one of our production servers.

The server has two drives, mirrored.  It is split between UFS with SVM, and ZFS.

Both drives are set up as follows. The drives are c0t0d0 and c0t1d0; c0t1d0 is
the failing drive.

slice 0 - 3.00GB UFS  (root partition)
slice 1 - 1.00GB swap
slice 3 - 4.00GB UFS  (var partition)
slice 4 - 60GB ZFS  (mirrored slice in our zfs pool)
slice 6 - 54MB metadb
slice 7 - 54MB metadb

I think I have the plan to replace the harddrive without interrupting either 
the SVM mirrors on slices 0,1,3 or the ZFS pool which is mirrored on slice 4.  
I am hoping someone can take a quick look and let me know if I missed anything:

1)  Detach the SVM mirrors on the failing drive
===
metadetach -f d0 d20
metaclear d20
metadetach -f d1 d21
metaclear d21
metadetach -f d3 d23
metaclear d23

2)  Remove the metadb's from the failing drive:
===
metadb -f -d c0t1d0s6
metadb -f -d c0t1d0s7

3)  Offline the ZFS mirror slice
===
zpool offline hrlpool c0t1d0s4

4)  At this point it should be safe to remove the drive.  All SVM mirrors are 
detached, the metadb's on the failed drive are deleted, and the ZFS slice is 
offline.

5)  Insert and partition the new drive so its partitions are the same as the
working drive's.

6)  Create the SVM partitions and attach them
===
metainit d20 1 1 c0t1d0s0
metattach d0 d20
metainit d21 1 1 c0t1d0s1
metattach d1 d21
metainit d23 1 1 c0t1d0s3
metattach d3 d23

7)  Add the metadb's back to the new drive
===
metadb -a -f -c2 c0t1d0s6 c0t1d0s7

8)  Add the ZFS slice back to the zfs pool as part of the mirrored pool
===
zpool replace hrlpool c0t1d0s4
zpool online hrlpool c0t1d0s4

DONE

The drive should be functioning at this point.

Does this look correct?  Have I missed anything obvious?

I know this isn't totally ZFS related, but I wasn't sure where to put it since 
it has both SVM and ZFS mirrored slices.

Thanks in advance for any input.
 
 


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Cindy . Swearingen
Chris,

You can replace the disks one at a time with larger disks. No problem.
You can also add another raidz vdev, but you can't add disks to an
existing raidz vdev.

See the sample output below. This might not solve all your problems,
but should give you some ideas...

Cindy

# zpool create rpool raidz disk01-320 disk02-320 disk03-320 disk04-320 \
disk05-320 disk06-320
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Mar  5 14:36:53 2008
config:

 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 0
   raidz1ONLINE   0 0 0
 disk01-320  ONLINE   0 0 0
 disk02-320  ONLINE   0 0 0
 disk03-320  ONLINE   0 0 0
 disk04-320  ONLINE   0 0 0
 disk05-320  ONLINE   0 0 0
 disk06-320  ONLINE   0 0 0
# zpool replace rpool disk01-320 disk01-1TB
and so on until each disk0*-320 is replaced with disk0*-1TB
# zpool add rpool raidz disk07-1TB disk08-1TB disk09-1TB disk10-1TB \
disk11-1TB disk12-1TB
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Mar  5 14:38:53 2008
config:

 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 0
   raidz1ONLINE   0 0 0
 disk01-1TB  ONLINE   0 0 0
 disk02-1TB  ONLINE   0 0 0
 disk03-1TB  ONLINE   0 0 0
 disk04-1TB  ONLINE   0 0 0
 disk05-1TB  ONLINE   0 0 0
 disk06-1TB  ONLINE   0 0 0
   raidz1ONLINE   0 0 0
 disk07-1TB  ONLINE   0 0 0
 disk08-1TB  ONLINE   0 0 0
 disk09-1TB  ONLINE   0 0 0
 disk10-1TB  ONLINE   0 0 0
 disk11-1TB  ONLINE   0 0 0
 disk12-1TB  ONLINE   0 0 0

Chris Gilligan wrote:
> ok maybe i should rewrite my question in a better way.
> 
> My data is mostly made up of things i can afford to lose but would very much 
> not like to lose if a disk dies if at all possible.  Due to this i have used 
> a raid5 array in the past. The issue i have had with this is a need to 
> replace all 10 disks at once to increase my storage in the raid and borrow 
> another raid card so i can connect all 20 disks at once while i move the data.
> 
> What i would like to be able to do is slowly grow my capacity by replacing 
> 320gb disks with 1tb disks one at a time and sometimes adding in extra disks 
> to the system as i can support up to 12 disks.  Does anyone have any ideas on 
> the best way to do this with minimum space loss? 
> 
> This just home based storage so i am trying to do everything on the cheap. I 
> thought raidz may have been the answer but that does not seem to be the case.
> 
> CHris
>  
>  


Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Boyd Adamson
Marcus Sundman <[EMAIL PROTECTED]> writes:
> So, you see, there is no way for me to use filenames intelligibly unless
> their encodings are knowable. (In fact I'm quite surprised that zfs
> doesn't (and even can't) know the encoding(s) of filenames. Usually Sun
> seems to make relatively sane design decisions. This, however, is more
> what I'd expect from linux with their overpragmatic "who cares if it's
> sane, as long as it kinda works"-attitudes.)

To be fair, ZFS is constrained by compatibility requirements with
existing systems. For the longest time the only interpretation that Unix
kernels put on the filenames passed by applications was to treat "/" and
"\000" specially. The interfaces provided to applications assume this is
the entire extent of the process. 
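A small illustration of that contract (Python here only as a convenient way to issue the system calls; the behavior being shown is the kernel's):

```python
import os
import tempfile

d = tempfile.mkdtemp()

# Bytes that are not valid UTF-8 are still a perfectly legal filename:
open(os.path.join(d.encode(), b"h\xe4st"), "wb").close()
assert b"h\xe4st" in os.listdir(d.encode())

# NUL, however, terminates the name and is rejected before it ever
# reaches the filesystem:
try:
    open(os.path.join(d.encode(), b"a\x00b"), "wb")
except ValueError:
    print("NUL not allowed in a filename")
```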

Changing this incompatibly is not an option, and adding new interfaces
to support this is meaningless unless there is a critical mass of
applications that use them. It's not reasonable to talk about "ZFS"
doing this, since it's just a part of the wider ecosystem.

To solve this problem at the moment takes one of two approaches.

1. A userland convention is adopted to decide on what meaning the byte
strings that the kernel provides have.

2. Some new interfaces are created to pass this information into the
kernel and get it back.

Leaving aside the merits of either approach, both of them require
significant agreement from applications to use a certain approach before
they reap any benefits. There's not much ZFS itself can do there.

Boyd


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Chris Gilligan
> Chris,
> 
> You can replace the disks one at a time with larger
> disks. No problem.
> You can also add another raidz vdev, but you can't
> add disks to an
> existing raidz vdev.
> 
> See the sample output below. This might not solve all
> your problems,
> but should give you some ideas...
> 
> Cindy

Cindy,
What you said would be perfect if I could just replace 1-2 disks at a time
(think per month). What I might need to do is split the disks into multiple
partitions so I can use all the space, but then again that won't work, as you
cannot add to a raidz. Also, are you sure the size of a raidz will increase if
you replace all the disks?

If raidz supported disks of different sizes in the raid, I would be set. I
noticed some discussion on this topic, but for now it seems like it is not
looking good.

I guess there just isn't really any file system designed for budget
file-storage expansion. I was really hoping there was a way, but even
replacing 5-6 disks at once is too costly for me.

Chris
 
 


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Richard Elling
Chris Gilligan wrote:
> ok maybe i should rewrite my question in a better way.
>   

No, the reason nobody answered is that this is a frequent FAQ,
second only to CR 4852783 (reduce pool capacity).

Cindy, can we update the opensolaris.org FAQ to include some words
about these two questions?
 -- richard


> My data is mostly made up of things i can afford to lose but would very much 
> not like to lose if a disk dies if at all possible.  Due to this i have used 
> a raid5 array in the past. The issue i have had with this is a need to 
> replace all 10 disks at once to increase my storage in the raid and borrow 
> another raid card so i can connect all 20 disks at once while i move the data.
>
> What i would like to be able to do is slowly grow my capacity by replacing 
> 320gb disks with 1tb disks one at a time and sometimes adding in extra disks 
> to the system as i can support up to 12 disks.  Does anyone have any ideas on 
> the best way to do this with minimum space loss? 
>
> This just home based storage so i am trying to do everything on the cheap. I 
> thought raidz may have been the answer but that does not seem to be the case.
>
> CHris
>  
>  


Re: [zfs-discuss] Replacing failing drive

2008-03-05 Thread Richard Elling
I won't comment on the SVM bits because I haven't used it in many years.
For the ZFS bits you just need to "detach" it from the zpool, then "attach"
after you replace the drive.
 -- richard

Matt Cohen wrote:
> Hi.  We have a hard drive failing in one of our production servers.
>
> The server has two drives, mirrored.  It is split between UFS with SVM, and 
> ZFS.
>
> Both drives are setup as follows.  The drives are c0t0d0 and c0t1d0.  c0t1d0 
> is the failing drive.
>
> slice 0 - 3.00GB UFS  (root partition)
> slice 1 - 1.00GB swap
> slice 3 - 4.00GB UFS  (var partition)
> slice 4 - 60GB ZFS  (mirrored slice in our zfs pool)
> slice 6 - 54MB metadb
> slice 7 - 54MB metadb
>
> I think I have the plan to replace the harddrive without interrupting either 
> the SVM mirrors on slices 0,1,3 or the ZFS pool which is mirrored on slice 4. 
>  I am hoping someone can take a quick look and let me know if I missed 
> anything:
>
> 1)  Detach the SVM mirrors on the failing drive
> ===
> metadetach -f d0 d20
> metaclear d20
> metadetach -f d1 d21
> metaclear d21
> metadetach -f d3 d23
> metaclear d23
>
> 2)  Remove the metadb's from the failing drive:
> ===
> metadb -f -d c0t1d0s6
> metadb -f -d c0t1d0s7
>
> 3)  Offline the ZFS mirror slice
> ===
> zpool offline  c0t1d0s0
>
> 4)  At this point it should be safe to remove the drive.  All SVM mirrors are 
> detached, the metadb's on the failed drive are deleted, and the ZFS slice is 
> offline.
>
> 5)  Insert and partition the new drive so it's partitions are the same as the 
> working drive.
>
> 6)  Create the SVM partitions and attach them
> ===
> metainit d20 1 1 c0t1d0s0
> metattach d0 d20
> metainit d21 1 1 c0t1d0s1
> metattach d1 d21
> metainit d23 1 1 c0t1d0s3
> metattach d3 d23
>
> 7)  Add the metadb's back to the new drive
> ===
> metadb -a -f -c2 c0t1d0s6 c0t1d0s7
>
> 8)  Add the ZFS slice back to the zfs pool as part of the mirrored pool
> ===
> zpool replace hrlpool c0t1d0s4
> zpool online c0t1d0s4
>
> DONE
>
> The drive should be functioning at this point.
>
> Does this look correct?  Have I missed anything obvious?
>
> I know this isn't totally ZFS related, but I wasn't sure where to put it 
> since it has both SVM and ZFS mirrored slices.
>
> Thanks in advance for any input.
>  
>  


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Chris Gilligan
> Chris Gilligan wrote:
> > ok maybe i should rewrite my question in a better
> way.
> >   
> 
> No, the reason nobody answered was that this a
> frequent FAQ,
> second only to CR 4852783 reduce pool capacity.
> 

Famous last words, but I thought I read everything in the FAQ. Maybe I missed
it, and I want to make sure I have this right.

A raidz is basically fixed: you can't add extra disks, you can't reduce the
number of disks, you can't increase its size by adding some larger disks, and
it does not take advantage of, say, 2x 250GB and 3x 1TB disks; it would treat
them all as 250GB.

Also, from what I have read in other posts, there are no plans to change this.

Lastly, am I also right in saying there is no easy way to replace one raidz
of, say, 6x 250GB disks with a new raidz of 4x 1TB disks? Like how you replace
disks in a raidz, but instead replacing a raidz within a pool?

thanks

Chris
 
 


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Richard Elling
Chris Gilligan wrote:
>> Chris Gilligan wrote:
>>> ok maybe i should rewrite my question in a better way.
>>
>> No, the reason nobody answered is that this is a frequent FAQ,
>> second only to CR 4852783 (reduce pool capacity).
>
> Famous last words but i thought i read everything in the FAQ but maybe i 
> missed it and i want to make sure i have this right.
>
> a raidz is basically constant.  you can't add extra disks, you can't reduce 
> the number of disks, you can't increase it size by adding some larger disks 
> and it does not take advantage of say 2x 250gb and 3x 1tb disks.  it would 
> treat them all as 250gb.
>
> Also from what i have read in other posts there are no plans to change this.  
>
> Lastly am i also right in saying there is no easy way to replace one raidz of 
> say 6 250gb's with a new raidz of 4x 1tb disks? liek you replace disks in a 
> raidz but instead replace raidz in a pool
>   

I guess it depends on your definition of "easy."  You could back
everything up to floppy and restore on a new pool.  IMHO it is
much easier to just use mirrors which aren't subject to the set
restrictions of raidz or raidz2.
 -- richard


> thanks
>
> Chris
>  
>  


Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Cindy . Swearingen
Chris,

You would need to replace all the disks to see the expanded space. 
Otherwise, space on the 1-2 larger disks would be wasted. If
you replace all the disks with larger disks, then yes, the
disk space in the raidz config would be expanded.

A ZFS mirrored config would be more flexible but it uses more space,
obviously. For example, you could start with a two-disk mirrored config, 
replace those disks with large disks, eventually add another two-disk
mirror, or even add disks to create two 3-way mirrors. Then, change your 
mind and detach one disk from each mirror.
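In commands, that sequence would look something like this (device names hypothetical):

```shell
zpool create tank mirror c0t0d0 c0t1d0    # start with a two-disk mirror
zpool replace tank c0t0d0 c1t0d0          # swap in a larger disk (repeat per disk)
zpool add tank mirror c2t0d0 c2t1d0       # later, add a second two-disk mirror
zpool attach tank c1t0d0 c3t0d0           # grow one mirror to a 3-way
zpool detach tank c3t0d0                  # ...and detach again if you change your mind
```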

I hope you've seen the best practices site. You might get more ideas
about whether you want to use slices or not. Eventually, you could
replace a sliced config with a whole disk config...

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Cindy

Chris Gilligan wrote:
>>Chris,
>>
>>You can replace the disks one at a time with larger
>>disks. No problem.
>>You can also add another raidz vdev, but you can't
>>add disks to an
>>existing raidz vdev.
>>
>>See the sample output below. This might not solve all
>>your problems,
>>but should give you some ideas...
>>
>>Cindy
> 
> 
> Cindy,
> What you said would be perfect if i could just replace 1-2 disks at a time 
> (think per month). What i might need to do is split the disks into multi 
> partitions so i can use all the space but then again that wont work as you 
> can not add to a raidz. Also are you sure the size of a raidz will increase 
> if you replace all the disks?
> 
> If the raidz supports disks of different sizes in the raid i would be set. I 
> noticed some discussion on this topic but for now it seems like it is not 
> looking good.
> 
> I guess there just is not really any file systems designed for buget file 
> storage expansion. I was really hoping there was a way. but even replacing 
> 5-6 disks at once is too costly for me
> 
> Chris
>  
>  


[zfs-discuss] zfs send/recv question

2008-03-05 Thread Bill Shannon
If I do something like this:

zfs snapshot [EMAIL PROTECTED]
zfs send [EMAIL PROTECTED] > tank.backup
sleep 86400
zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
zfs snapshot [EMAIL PROTECTED]
zfs send -I [EMAIL PROTECTED] [EMAIL PROTECTED] > tank.incr

Am I going to be able to restore the streams?
Or is it going to be confused because I renamed
the snapshot?

(In case you can't tell, I'm trying to come up
with a reasonable zfs backup strategy to replace
what I used to do with ufsdump.)

Thanks!


Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Anton B. Rang
> > In general, they don't.  Command-line utilities just use the sequence
> > of bytes entered by the user.
> 
> Obviously that depends on the application. A command-line utility that
> interprets an normal xml file containing filenames know the characters
> but not the bytes. The same goes for command-line utilities that
> receive the filenames as text (e.g., some file transfer utility or daemon).

It's true that they know the characters, and not necessarily the bytes -- but
all of the tools I'm aware of ignore the characters and simply treat these
as bytes when it comes to making calls into the file system.
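For example, Python's own helpers make that boundary explicit (this is one
concrete convention, not what every tool does):

```python
import os
import sys

# A filename received as *text* (e.g. parsed from an XML file) has to be
# turned into bytes before the filesystem call; the encoding choice is the
# program's, typically taken from the locale:
name = "häst"
as_bytes = os.fsencode(name)     # uses sys.getfilesystemencoding()
print(sys.getfilesystemencoding(), as_bytes)

# The reverse mapping round-trips, even for bytes the encoding can't decode:
assert os.fsdecode(as_bytes) == name
```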

> If I run xev on my linux box (I don't have X on any (Open)Solaris) and
> press the Ä-key on my keyboard it says "keycode 48" and "keysym 0xe4",
> and then "XLookupString gives 2 bytes: (c3 a4) "ä"". Thus at least
> XLookupString seems to know that I'm using UTF-8. Where did it (or
> whoever converted 0xe4 to 0xc3a4) get the needed info?

Depending on what version of xev you've got, there's a good chance it made a 
call to XmbLookupString (the "multibyte" version of XLookupString). This uses 
the current locale for the encoding; the locale is stored in an environment 
variable which can be queried by the application. (But this has wandered afield 
of file systems -- though it's true that the file system could potentially look 
at environment variables to make encoding choices!)
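A sketch of where that information comes from, in Python for convenience:

```python
import locale

# XmbLookupString-style code takes its encoding from the locale, which the
# process in turn reads from environment variables (LC_ALL, LC_CTYPE, LANG):
locale.setlocale(locale.LC_CTYPE, "")       # adopt the environment's locale
enc = locale.nl_langinfo(locale.CODESET)    # e.g. "UTF-8" under a UTF-8 locale
print(enc)
```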
 
 


[zfs-discuss] zfs 32bits

2008-03-05 Thread Ben
Hi,

I know it is not recommended by Sun
to use ZFS on 32-bit machines, but
what are the real consequences of doing so?

I have an old Bipro Xeon server (6 GB RAM, 6 disks),
and I would like to do a raidz with 4 disks on Solaris 10 update 4.

Thanks,
Ben
