Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-08 Thread Tiernan OToole
OK, so after reading a bit more of this discussion and playing around over
the weekend, I have a couple of questions to ask...

1: Do my pools need to be the same? For example, the pool in the datacenter
is 2 1TB drives in a mirror; in house I have 5 200GB virtual drives in
RAIDZ1, giving 800GB usable. If I am backing up stuff to the home server,
can I still do a zfs send, even though the underlying layout is different?
2: If I give out a partition as an iSCSI LUN, can this be sent with zfs send
as normal, or is there any difference?

Thanks.

--Tiernan

On Mon, Oct 8, 2012 at 3:51 AM, Richard Elling wrote:

> On Oct 7, 2012, at 3:50 PM, Johannes Totz  wrote:
>
> > On 05/10/2012 15:01, Edward Ned Harvey
> > (opensolarisisdeadlongliveopensolaris) wrote:
> >>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> >>> boun...@opensolaris.org] On Behalf Of Tiernan OToole
> >>>
> >>> I am in the process of planning a system which will have 2 ZFS
> >>> servers, one on site, one off site. The on site server will be
> >>> used by workstations and servers in house, and most of that will
> >>> stay in house. There will, however, be data I want backed up
> >>> somewhere else, which is where the offsite server comes in... This
> >>> server will be sitting in a Data Center and will have some storage
> >>> available to it (the whole server currently has 2 3TB drives,
> >>> though they are not dedicated to the ZFS box, they are on VMware
> >>> ESXi). There is then some storage (currently 100GB, but more can
> >>> be requested) of SFTP-enabled backup which I plan to use for some
> >>> snapshots, but more on that later.
> >>>
> >>> Anyway, I want to confirm my plan and make sure I am not missing
> >>> anything here...
> >>>
> >>> * build server in house with storage, pools, etc...
> >>> * have a server in data center with enough storage for its own needs,
> >>>   plus the extra for offsite backup
> >>> * have one pool set as my "offsite" pool... anything in here should be
> >>>   backed up off site also...
> >>> * possibly have another set as "very offsite" which will also be pushed
> >>>   to the SFTP server, but not sure...
> >>> * give these pools out via SMB/NFS/iSCSI
> >>> * every 6 or so hours take a snapshot of the 2 offsite pools.
> >>> * do a ZFS send to the data center box
> >>> * nightly, on the very offsite pool, do a ZFS send to the SFTP server
> >>> * if anything goes wrong (my server dies, DC server dies, etc), Panic,
> >>>   download, pray... the usual... :)
> >>>
> >>> Anyway, I want to make sure I am doing this correctly... Is there
> >>> anything on that list that sounds stupid or am I doing anything
> >>> wrong? Am I missing anything?
> >>>
> >>> Also, as a follow-up question, but slightly unrelated: when it
> >>> comes to the ZFS send, I could use SSH to do the send directly to
> >>> the machine... Or I could upload the compressed, and possibly
> >>> encrypted, dump to the server... Which, for resume-ability and
> >>> speed, would be suggested? And if I were to go with an upload
> >>> option, any suggestions on what I should use?
> >>
> >> It is recommended, whenever possible, to pipe the "zfs send"
> >> directly into a "zfs receive" on the receiving system, for two
> >> solid reasons:
> >>
> >> If a single bit is corrupted, the whole stream checksum is wrong and
> >> therefore the whole stream is rejected.  So if this occurs, you want
> >> to detect it (in the form of one incremental failing) and then
> >> correct it (in the form of the next incremental succeeding).
> >> Whereas, if you store your streams on storage, the corruption will go
> >> undetected, and everything after that point will be broken.
> >>
> >> If you need to do a restore from a stream stored on storage, then
> >> your only choice is to restore the whole stream.  You cannot look
> >> inside and just get one file.  But if you had been doing send |
> >> receive, then you can obviously look inside the receiving filesystem
> >> and extract individual files.
> >>
> >> If the recipient system doesn't support "zfs receive," [...]
> >
> > On that note, is there a minimal user-mode zfs thing that would allow
> > receiving a stream into an image file? No need for file/directory access
> > etc.
>
> cat :-)
>
> > I was thinking maybe the zfs-fuse-on-linux project may have suitable
> > bits?
>
> I'm sure most Linux distros have cat
>  -- richard
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
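To make the send-into-receive recommendation (and the "cat" suggestion) quoted
above concrete, a minimal sketch might look like the following; pool, dataset,
snapshot and host names are placeholders, not anything from the thread:

# Initial full send: snapshot locally, pipe straight into a receive on the
# backup box.
zfs snapshot tank/offsite@2012-10-08
zfs send tank/offsite@2012-10-08 | ssh backuphost zfs receive backup/offsite

# Later runs ship only the delta between the two most recent snapshots.
zfs snapshot tank/offsite@2012-10-09
zfs send -i @2012-10-08 tank/offsite@2012-10-09 | ssh backuphost zfs receive backup/offsite

# If the far end cannot run "zfs receive" (e.g. an SFTP-only space), the
# stream can simply be catted into a file, with the caveats about stored
# streams noted above.
zfs send tank/offsite@2012-10-09 | ssh backuphost "cat > /backup/offsite-2012-10-09.zfs"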



-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-08 Thread Ian Collins

On 10/08/12 20:08, Tiernan OToole wrote:
> OK, so after reading a bit more of this discussion and playing around over
> the weekend, I have a couple of questions to ask...
>
> 1: Do my pools need to be the same? For example, the pool in the
> datacenter is 2 1TB drives in a mirror; in house I have 5 200GB virtual
> drives in RAIDZ1, giving 800GB usable. If I am backing up stuff to the
> home server, can I still do a zfs send, even though the underlying layout
> is different?


Yes you can, just make sure you have enough space!
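
As a rough illustration (host and dataset names are made up), pulling from the
mirrored datacenter pool into the RAIDZ1 pool at home is just:

ssh dc-host zfs send dc/offsite@weekly | zfs receive home/backup/offsite

The stream describes datasets and snapshots, not vdevs, so the layout on
either end doesn't matter.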


> 2: If I give out a partition as an iSCSI LUN, can this be sent with zfs
> send as normal, or is there any difference?




It can be sent as normal.
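
Assuming the LUN is backed by a zvol, a minimal sketch (names are
placeholders) would be:

zfs snapshot tank/luns/vm0@backup
zfs send tank/luns/vm0@backup | ssh dc-host zfs receive backup/luns/vm0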

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-08 Thread Tiernan OToole
Cool beans lads. Thanks!

On Mon, Oct 8, 2012 at 8:17 AM, Ian Collins  wrote:

> On 10/08/12 20:08, Tiernan OToole wrote:
>
>> OK, so after reading a bit more of this discussion and playing around over
>> the weekend, I have a couple of questions to ask...
>>
>> 1: Do my pools need to be the same? For example, the pool in the
>> datacenter is 2 1TB drives in a mirror; in house I have 5 200GB virtual
>> drives in RAIDZ1, giving 800GB usable. If I am backing up stuff to the home
>> server, can I still do a zfs send, even though the underlying layout is
>> different?
>>
>
> Yes you can, just make sure you have enough space!
>
>
>> 2: If I give out a partition as an iSCSI LUN, can this be sent with zfs
>> send as normal, or is there any difference?
>>
>>
> It can be sent as normal.
>
> --
> Ian.
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How many disks in one pool

2012-10-08 Thread Brad Stone
Here's an example of a ZFS-based product you can buy with a large
number of disks in the volume:

http://www.aberdeeninc.com/abcatg/petarack.htm
360 3TB drives, giving a full petabyte of storage (1080TB) in a single
rack, under a single namespace or volume.


On Sat, Oct 6, 2012 at 11:48 AM, Richard Elling
 wrote:
> On Oct 5, 2012, at 1:57 PM, Albert Shih  wrote:
>
>> Hi all,
>>
>> I'm actually running ZFS under FreeBSD. I've a question about how many
>> disks I «can» have in one pool.
>>
>> At the moment I'm running one server (FreeBSD 9.0) with 4 MD1200
>> (Dell), meaning 48 disks. I've configured 4 raidz2 vdevs in the pool (one
>> on each MD1200).
>>
>> From what I understand I can add more MD1200s. But if I lose one MD1200
>> for any reason I lose the entire pool.
>>
>> In your experience, what's the «limit»? 100 disks?
>
> I can't speak for current FreeBSD, but I've seen more than 400
> disks (HDDs) in a single pool.
>
>  -- richard
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
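
For illustration, a pool laid out as the quoted poster describes (one raidz2
top-level vdev per 12-disk MD1200) is built and grown roughly as follows;
the pool and device names here are hypothetical:

zpool create bigtank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23
# each additional shelf becomes another top-level raidz2 vdev
zpool add bigtank raidz2 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35

As noted in the quote, losing a whole shelf takes out an entire top-level
vdev, and with it the pool.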
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Directory is not accessible

2012-10-08 Thread Sami Tuominen
Hi

I have a raidz pool with one directory which is not accessible. It only gives
"Input/output error" when trying to access it. Is there any way to fix that?

nas4free:/tankki/media# zpool get version tankki
NAMEPROPERTY  VALUESOURCE
tankki  version   15   local

nas4free:/tankki/media# zpool status -v
  pool: tankki
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Oct  7 21:18:19 2012
494G scanned out of 5.92T at 261M/s, 6h4m to go
0 repaired, 8.15% done
config:

NAMESTATE READ WRITE CKSUM
tankki  ONLINE   0 0 3.62K
  raidz1-0  ONLINE   0 0 14.5K
ada5p2  ONLINE   0 0 0
ada2p2  ONLINE   0 0 0
ada4p2  ONLINE   0 0 0
ada3p2  ONLINE   0 0 0
ada0p2  ONLINE   0 0 0
ada1p2  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

tankki/media:<0x0>

nas4free:/tankki/media# ls
.windows    Talo
001 Thumbs.db
ChromeStandaloneSetup.exe   Video
Compaq  Web Sites
Dokumentit  clonezilla
Kuvat   home
Lontoo  password-export-2012-06-28.xml
Software

nas4free:/tankki/media# cd Dokumentit
Dokumentit: Input/output error.
nas4free:/tankki/media#



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Directory is not accessible

2012-10-08 Thread Jan Owoc
On Sun, Oct 7, 2012 at 12:59 PM, Sami Tuominen  wrote:
> Hi
>
> I have a raidz pool with one directory which is not accessible. It only gives
> "Input/output error" when trying to access it. Is there any way to fix that?
[...]
> nas4free:/tankki/media# zpool status -v
>   pool: tankki
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://illumos.org/msg/ZFS-8000-8A
>   scan: scrub in progress since Sun Oct  7 21:18:19 2012
> 494G scanned out of 5.92T at 261M/s, 6h4m to go
> 0 repaired, 8.15% done
> config:
>
> NAMESTATE READ WRITE CKSUM
> tankki  ONLINE   0 0 3.62K
>   raidz1-0  ONLINE   0 0 14.5K
> ada5p2  ONLINE   0 0 0
> ada2p2  ONLINE   0 0 0
> ada4p2  ONLINE   0 0 0
> ada3p2  ONLINE   0 0 0
> ada0p2  ONLINE   0 0 0
> ada1p2  ONLINE   0 0 0
>
> errors: Permanent errors have been detected in the following files:
>
> tankki/media:<0x0>

It's as it says it is: the error is "permanent" in that ZFS has done
what it could to recover the data from parity information and ditto
blocks. Sometimes the error is only in the current version of a
file/directory, so you can recover the data from a snapshot.


> nas4free:/tankki/media# cd Dokumentit
> Dokumentit: Input/output error.
> nas4free:/tankki/media#

Do you have a snapshot that you can navigate to and determine if the
directory appears intact?
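
For example, something like this (the snapshot name is illustrative) would
show whether an older copy of the directory is still reachable:

zfs list -t snapshot -r tankki/media
ls /tankki/media/.zfs/snapshot/<some-snapshot>/Dokumentit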


Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Directory is not accessible

2012-10-08 Thread Sami Tuominen

>>
>> NAMESTATE READ WRITE CKSUM
>> tankki  ONLINE   0 0 3.62K
>>   raidz1-0  ONLINE   0 0 14.5K
>> ada5p2  ONLINE   0 0 0
>> ada2p2  ONLINE   0 0 0
>> ada4p2  ONLINE   0 0 0
>> ada3p2  ONLINE   0 0 0
>> ada0p2  ONLINE   0 0 0
>> ada1p2  ONLINE   0 0 0
>>
>> errors: Permanent errors have been detected in the following files:
>>
>> tankki/media:<0x0>
> 
> It's as it says it is: the error is "permanent" in that ZFS has done what it
> could to recover the data from parity information and ditto blocks.
> Sometimes the error is only in the current version of a file/directory, so
> you can recover the data from a snapshot.

Unfortunately there aren't any snapshots.
The pool version is 15. Is it safe to upgrade it?
Is "zpool clear -F" supported, or of any use here?
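(For what it's worth, the upgrade path can at least be inspected without
changing anything; these commands only report versions:)

zpool upgrade -v   # list the pool versions/features this ZFS build supports
zpool upgrade      # list imported pools still running an older version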

Sami

> Jan

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss