assed a certain block. Then *wham!* corruptions-galore.)
//Svein
--
    +-------------------+---------------------
/"\ |Svein Skogen       | sv...@d80.iso100.no
\ / |Solberg Østli 9    | PGP Key: 0xE5E76831
 X  |2020 Skedsmokorset | sv...@jernhuset.no
/ \ |Norway             | PGP Key: 0xCE96CE13
aggregation has to
>> work. See "Ordering of frames" at
>> http://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol#Link_Aggregation_Control_Protocol
> Arse, thanks for reminding me Richard! A single stream will only use one
> path in a LA
silly.
//Svein
rdard raid
> setups, they seem tricky.
raidz "eats" one disk. Like RAID5
raidz2 digests another one. Like RAID6
raidz3 yet another one. Like ... h...
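A quick illustration, assuming a hypothetical six-disk pool called "tank" with
made-up Solaris-style device names; the only difference at creation time is
which vdev type you pick, and usable space drops by one, two or three disks'
worth of parity:

zpool create tank raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # ~5 disks usable
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # ~4 disks usable
zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # ~3 disks usable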
//Svein
not stripe size) to 4K as well)
//Svein
If you can use FreeBSD but OpenSolaris doesn't have the
> driver for your hardware, go for it.
Finally something we agree on. ;)
FreeBSD also has a less restrictive license.
//Svein
l HP one if you want. I've done it both ways).
//Svein
to that. Something that is inexpensive and small (4GB?) and works in
> a PCI express slot.
Maybe someone should look at implementing the zfs code for the XScale
range of I/O processors (such as the IOP333)?
//Svein
On 24.03.2010 17:42, Richard Elling wrote:
Nonvolatile write caches are not a problem.
Which is why ZFS isn't a replacement for proper array controllers
(defining proper as those with sufficient battery to leave you with a
seemingly intact filesystem), but a very nice augmentation for them. ;
Forgot to cc the list, well here goes...
- Original Message
Subject: Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller
Date: Wed, 24 Mar 2010 17:10:58 +0100
From: Svein Skogen
To: Dusan Radovanovic
On 24.03.2010 17:01, Dusan
On 22.03.2010 18:10, Richard Elling wrote:
On Mar 22, 2010, at 7:30 AM, Svein Skogen wrote:
On 22.03.2010 13:54, Edward Ned Harvey wrote:
IIRC it's "zpool scrub", and last time I checked, the zpool command
exited (with status 0) as soon as it had started the scrub. Your
comm
On 22.03.2010 16:24, Cooper Hubbell wrote:
I've moved to 7200RPM 2.5" laptop drives over 3.5" drives, for a
combination of reasons: lower power, better performance than comparably
sized 3.5" drives, and generally lower capacities, meaning resilver times
are smaller. They're a bit more $/GB, but
On 22.03.2010 13:54, Edward Ned Harvey wrote:
IIRC it's "zpool scrub", and last time I checked, the zpool command
exited (with status 0) as soon as it had started the scrub. Your
command
would start _ALL_ scrubs in parallel as a result.
You're right. I did that wrong. Sorry 'bout that.
So ei
On 22.03.2010 13:35, Edward Ned Harvey wrote:
Does cron happen to know how many other scrubs are running, bogging down
your IO system? If the scrub scheduling was integrated into zfs itself,
It doesn't need to.
Crontab entry: /root/bin/scruball.sh
/root/bin/scruball.sh:
#!/usr/bin/bash
for f in `zpool list -H -o name`; do zpool scrub $f; done
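A minimal sketch of the sequential variant Gary describes further down,
assuming "zpool status" keeps printing "in progress" while a pool is
scrubbing:

#!/usr/bin/bash
# scrub pools one at a time instead of all at once
for f in `zpool list -H -o name`; do
  zpool scrub $f
  while zpool status $f | grep "in progress" > /dev/null; do
    sleep 300
  done
done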
On 21.03.2010 01:25, Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However what would be nice to have is an ability to freeze/resume a
scrub and also limit its rate of scrubbing.
One of the reasons is that when w
On 22.03.2010 02:13, Edward Ned Harvey wrote:
Actually ... Why should there be a ZFS property to share NFS, when you can
already do that with "share" and "dfstab?" And still the zfs property exists.
Probably because it is easy to create new filesystems and clone them; as
NFS only works per f
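For example, with a hypothetical pool "tank", the property travels with the
dataset, so every new filesystem or clone gets exported without touching dfstab:

zfs set sharenfs=on tank/home
zfs create tank/home/svein    # child inherits sharenfs=on and is shared right away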
On 21.03.2010 14:26, Edward Ned Harvey wrote:
Most software introduced in Linux clearly violates the "UNIX
philosophy".
Hehehe, don't get me started on OSX. ;-) And for the love of all things
sacred, never say OSX is not UNIX. I made that mistake once. Which is not
to say I was proven wrong
On 21.03.2010 00:14, Erik Trimble wrote:
Richard Elling wrote:
I see this on occasion. However, the cause is rarely attributed to a bad
batch of drives. More common is power supplies, HBA firmware, cables,
Pepsi syndrome, or similar.
-- richard
Mmmm. Pepsi Syndrome. I take it this is similar to
On 20.03.2010 23:00, Gary Gendel wrote:
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job run scrubs in sequence... Start one pool's scrub and then poll
until it's finished, start the next and wait, and so on so I don't create too
much load and bring a
On 20.03.2010 20:53, Giovanni Tirloni wrote:
On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen <sv...@stillbilde.net> wrote:
We all know that data corruption may happen, even on the most
reliable of hardware. That's why zfs has pool scrubbing.
Could we introduce a z
We all know that data corruption may happen, even on the most reliable
of hardware. That's why zfs has pool scrubbing.
Could we introduce a zpool option (as in zpool set )
for "scrub period", in "number of hours" (with 0 being no automatic
scrubbing).
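Made concrete, the idea would look something like this (purely hypothetical
syntax; no such property exists today):

zpool set scrubperiod=168 tank   # hypothetical: scrub every 168 hours
zpool set scrubperiod=0 tank     # hypothetical: never scrub automatically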
I see several modern raidcontrollers (s
On 20.03.2010 17:39, Henk Langeveld wrote:
On 2010-03-15 16:50, Khyron:
Yeah, this threw me. A 3 disk RAID-Z2 doesn't make sense, because at a
redundancy level, RAID-Z2 looks like RAID 6. That is, there are 2 levels of
parity for the data. Out of 3 disks, the equivalent of 2 disks will be used
t
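(In round numbers: three 1 TB drives in a raidz2 vdev leave roughly 1 TB
usable, the other 2 TB being parity; that is the same usable space, and the
same tolerance of two failed drives, as a three-way mirror of the same disks.)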
problem)
//Svein
On 18.03.2010 17:49, erik.ableson wrote:
> Conceptually, think of a ZFS system as a SAN Box with built-in asynchronous
> replication (free!) with block-level granularity. Then look at your other
> backup requirements and attach whatever is required
On 18.03.2010 18:37, Darren J Moffat wrote:
> On 18/03/2010 17:34, Svein Skogen wrote:
>> How would NDMP help with this any more than running a local pipe
>> splitting the stream (and handling the robotics for feeding in t
On 18.03.2010 18:28, Darren J Moffat wrote:
> On 18/03/2010 17:26, Svein Skogen wrote:
>>>> The utility: Can't handle streams being split (in case of streams being
>>>> larger that a single backup media).
>>>
> NDMP that are in Solaris. I don't know enough about NDMP to be sure but
> I think that should be possible.
>
And here I was thinking that the NDMP stack basically was tapedev and/or
autoloader device via network? (i.e. not a backup utility at all but a
method for the software managi
people,
I'm not, really. But I've had my share of helpful people who have been
anything but helpful. Most of them taking care not to put their answers
on the lists, and quite a lot of them wanting to sell me services or
software (or rainwear such as macintoshes)
//Svein
hat this cannot be fixed without
> introducing a different archive format.
>
> Star implements incremental backups and restores based on POSIX compliant
> archives.
And how does your favourite tool handle zvols?
//Svein
On 18.03.2010 10:31, Joerg Schilling wrote:
> Svein Skogen wrote:
>
>> Please, don't compare proper backup drives to that rotating head
>> non-standard catastrophe... DDS was (in)famous for being a delayed-fuse
>> tape
significant, the
> drive cost becomes less of an issue, and so forth.
Please, don't compare proper backup drives to that rotating head
non-standard catastrophe... DDS was (in)famous for being a delayed-fuse
tape-shredder.
//Svein
s to the ground. A proper disaster
recovery plan would mean that anyone competent can read the hardcopy of
the plan, and restore the system.
And so on.
Maybe someone should add some "plan for disaster" to the "best current
practices" zfs-page? ;)
//Svein
LTO-3 tapes. ;)
Hence my earlier questions about the possibility of simply adding a
parameter to the send/receive commands to use the ALREADY BUILT IN code
for providing some sort of FEC to the stream. It _REALLY_ would solve
the "store" problem.
//Svein
On 17.03.2010 13:31, Svein Skogen wrote:
> On 17.03.2010 12:28, Khyron wrote:
>> Note to readers: There are multiple topics discussed herein. Please
>> identify which
*SNIP*
>
> How does backing up the NFSv4 acls help you bac
d over and over, with good information being spread over multiple
> threads?
> I personally find it interesting that so few people read first before
> posting. Few even seem to bother to do so much (little?) as a Google
> search which would yield several previous discussions on
On 16.03.2010 22:31, erik.ableson wrote:
>
> On 16 mars 2010, at 21:00, Marc Nicholas wrote:
>> On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen <sv...@stillbilde.net> wrote:
>> > I
On 16.03.2010 19:57, Marc Nicholas wrote:
> On Tue, Mar 16, 2010 at 2:46 PM, Svein Skogen <sv...@stillbilde.net> wrote:
> > Not quite a one liner. After you create the target once (step 3),
or non-paid-by-the-hour san
admins) is gone.
Thanks a lot. ;)
//Svein
Things used to be simple.
"zfs create -V xxg -o shareiscsi=on pool/iSCSI/mynewvolume"
It worked.
Now we've got a new, feature-rich baby in town, called comstar, and so far all
attempts at grokking the excuse of a manpage have simply left me with a nasty
headache.
_WHERE_ is the replacement one
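For reference, a rough sketch of the COMSTAR equivalent, assuming the stmf
and iscsi/target services are available, a made-up 100 GB size, and the
volume name from the old one-liner; the GUID for add-view is whatever
sbdadm prints:

zfs create -V 100g pool/iSCSI/mynewvolume
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/pool/iSCSI/mynewvolume
stmfadm add-view <lu-guid>        # GUID printed by sbdadm create-lu
itadm create-target

Not exactly a one-liner, which is rather the point of the complaint above.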
med power (on the 220v side) drops.
//Svein
How does it fare, with regard to BUG ID 6894775?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775
//Svein
didn't know, maybe I should have RTFMmed more).
I'll look into it, as soon as I figure WTF?!? this tapeloader is up to.
//Svein
Wouldn't his scripts in the other thread (zfs send plugin for Bacula) work with
zvol as well?
//Svein
I can't help but keep wondering if not some sort of FEC wrapper (optional of
course) might solve both the "backup" and some of the long-distance-transfer
(where retransmissions really aren't wanted) issues.
The reason I'm saying long-distance is that this is where latency on the link starts
rearing its
Drop both backups onto the ground.
-Try to restore both backups...
See any differences in reliability for disasters here?
;)
//Svein
piping the data through, feel
free to point me in the right direction, because my google-searches
didn't give me any real clues. Maybe my google-fu isn't up to scratch.
//Svein
t replace proofreading the first one.
//Svein
On 08.03.2010 13:55, Erik Trimble wrote:
> Svein Skogen wrote:
>> Let's say for a moment I should go for this solution, with the rpool
>> tucked away on an usb-stick in the same case as the LTO-3 tapes it
>> "matches&quo
Let's say for a moment I should go for this solution, with the rpool tucked
away on an usb-stick in the same case as the LTO-3 tapes it "matches"
timelinewise (I'm using HP C8017A kits) as a zfs send -R to a file on the USB
stick. (If, and that's a big if, I get amanda or bacula to do a job I'm
On 04.03.2010 13:18, Erik Trimble wrote:
> Svein Skogen wrote:
>> And again ...
>>
>> Is there any work on an upgrade of zfs send/receive to handle resuming
>> on next media?
>>
>> I was thinking something along
Just disregard this thread. I'm resolving the issue using other methods (not
including Solaris).
//Svein
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming on next
media?
I was thinking something along the lines of zfs send (when device goes full)
returning
"send suspended. To resume insert new media and issue zfs resume "
and receive handling:
"zfs receive stre
The problem is that all disks I've seen so far, has been more fragile than
tapes (given a real disaster, such as a clumsy sysadmin, or a burning home
falling on top of them)... Trust me to knock over a disk.
//Svein
Until now, I've run Windows Storage Server 2008, but the ... lack of iSCSI
performance has gotten me so fed up that I've now moved all the files off the
server, to install opensolaris with two zpools, and nfs+smb+iSCSI sharing
towards my windows clients, and vmware ESXi boxes (two of them). So f