There are some disk arrays that only support mirroring or RAID-5.
RAID-5 stores the parity on a different disk, whereas StripeZ would store parity
in the standard record-size block, i.e. as long as you can read the block, there is
a chance of recovering it. The array H/W is responsible for ensuring the
On Tue, Aug 18, 2009 at 7:54 PM, Mattias Pantzare wrote:
> On Tue, Aug 18, 2009 at 22:22, Paul Kraus wrote:
>> Posted from the wrong address the first time, sorry.
>>
>> Is the speed of a 'zfs send' dependent on file size / number of files ?
>>
>> We have a system with some large datasets (3
On Tue, Aug 18, 2009 at 22:22, Paul Kraus wrote:
> Posted from the wrong address the first time, sorry.
>
> Is the speed of a 'zfs send' dependent on file size / number of files ?
>
> We have a system with some large datasets (3.3 TB and about 35
> million files) and conventional backups tak
Is the speed of a 'zfs send' dependent on file size / number of files ?
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5 a FULL takes between two and three days, differential
incrementals, even with
Garrett D'Amore wrote:
[replying off ARC]
Hmm, that's too bad. I think it would still be preferable for
changes of this nature to get at least some minimal review at ARC.
Although as we don't have any ZFS representation at ARC, maybe review of
such changes wouldn't add anything useful.
I
Darren J Moffat wrote:
Garrett D'Amore wrote:
Darren J Moffat wrote:
Dataset rename restrictions
---
On rename a dataset cannot be moved out of its wrapping key hierarchy,
i.e. where it inherits the keysource property from. This is best
explained by example:
# zfs get
On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
> We have a system with some large datasets (3.3 TB and about 35
> million files) and conventional backups take a long time (using
> Netbackup 6.5 a FULL takes between two and three days, differential
> incrementals, even with very
I am dealing with a zpool that's refusing to import, and reporting
invalid vdev configuration.
How can I learn more about what exactly this means? Can I isolate
which disk(s) are missing or corrupted/failing?
zpool import provides some information, but not enough. Confusingly,
it lists al
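Two commands that usually help narrow this kind of thing down (the device path
below is only a placeholder, substitute your own disks):
# zpool import -d /dev/dsk
(rescan a device directory for pool labels)
# zdb -l /dev/rdsk/c0t0d0s0
(dump the vdev labels on one disk; comparing the pool and vdev guids across
disks shows which device the pool thinks is missing or stale)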
Is there perhaps a workaround for this? A way to condense the free blocks
information?
If not, any idea when an improvement might be implemented?
We are currently suffering from incremental snapshots that refer to zero new
blocks, but where the incremental send streams still required over a gigabyte even
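For reference, one way to measure how large a given incremental stream really
is, without writing it anywhere (dataset and snapshot names are placeholders):
# zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c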
>Is the speed of a 'zfs send' dependent on file size / number of files ?
I am going to say no. I have *far* inferior iron that I am running a backup
rig on, and doing a send/recv over ssh through GigE; last night's replication
gave the following: "received 40.2GB stream in 3498 seconds (11.8MB/
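For anyone who hasn't set this up, the whole rig is just a pipeline along these
lines (pool, dataset and host names are placeholders):
# zfs snapshot tank/data@2009-08-18
# zfs send tank/data@2009-08-18 | ssh backuphost zfs receive -d backuppool
and subsequent runs send an incremental against the previous snapshot:
# zfs send -i tank/data@2009-08-17 tank/data@2009-08-18 | ssh backuphost zfs receive -d backuppool
Since send walks blocks rather than files, the file count matters far less than
it does for a file-walking backup like NetBackup.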
Posted from the wrong address the first time, sorry.
Is the speed of a 'zfs send' dependent on file size / number of files ?
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5 a FULL takes between two
Perhaps an open 14GB, zero-link file?
Bingo!
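For the archives, one way to track down which process is holding an unlinked
file open (the mount point here is only a placeholder):
# fuser -cu /export/home
(lists the PIDs with files open on that file system)
# pfiles <pid>
(shows that process's open files; an unlinked one typically shows a size but no
path name)
Restarting or killing that process releases the space.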
After several updates I have many boot environments.
Thanks.
I don't have quotas set, so I think I'll have to put this down to some sort of
bug. I'm on SXCE 105 at the minute, ZFS version is 3, but zpool is version 13
(could be 14 if I upgrade). I don't have everything backed up so won't do a
"zpool upgrade" just at the minute. I think when SXCE 120 is re
If you understand how copy-on-write works and how snapshots work then the
concept of the extra space should make perfect sense. If you want a
mathematical formula for how to figure it out, I would have to say that it would
be based on how DIFFERENT the data is between snapshots AND how MUCH data it
is
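A rough worked example with made-up numbers:
- Write 5 GB of data, then take snapshot A.
- Overwrite 2 GB of it, then take snapshot B. Those 2 GB of old blocks are now
  held only by A, so A's USED shows roughly 2 GB.
- Overwrite another 1 GB that existed before A and was still live when B was
  taken. Those blocks are held by both A and B, so destroying either one alone
  frees nothing, they appear in neither snapshot's USED, and yet they count
  towards USEDSNAP.
- Net result: A USED ~2 GB, B USED ~0, the per-snapshot sum is 2 GB, but
  USEDSNAP is ~3 GB. That gap is space shared between snapshots.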
Ha ha, I know! Like I say, I do get COW principles!
I guess what I'm after is for someone to look at my specific example (in txt
file attached to first post) and tell me specifically how to find out where the
13.8GB number is coming from.
I feel like a total numpty for going on about this, I re
Hello,
Is it possible to replicate an entire zpool with AVS? From what I see,
you can replicate a zvol, because AVS is filesystem agnostic. I can
create zvols within a pool, and AVS can replicate those, but
that's not really what I want.
If I create a zpool called "disk1",
paulc..
dude, I just explained it =)
ok... let me see if I can do better...
If you have a file that's 1 GB, in ZFS you have those blocks added. On a
normal filesystem, when you edit the file or add to it, it will erase the old
file and add a new one over it (more or less).
On ZFS, you have the blocks ad
On Aug 18, 2009, at 9:04 AM, Matthew Stevenson wrote:
I do understand these concepts, but to me that still doesn't explain
why adding the size of each snapshot together doesn't equal the size
reported by zfs list in USEDSNAP.
Here is the pertinent text from the ZFS Admin Guide.
usedbysnap
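To see that breakdown directly for a dataset (rpool/export/home here is just an
example name, substitute your own):
# zfs list -o space rpool/export/home
# zfs list -r -t snapshot -o name,used,referenced rpool/export/home
The first command shows USEDSNAP next to USEDDS, USEDCHILD and USEDREFRESERV;
the second lists each snapshot's own USED, which only counts space unique to
that snapshot.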
I do understand these concepts, but to me that still doesn't explain why adding
the size of each snapshot together doesn't equal the size reported by zfs list
in USEDSNAP.
I'm clearly missing something. Hmmm...
Hi Ashley;
A RAID-Z group is OK for throughput, but by design the whole RAID-Z group
behaves like a single disk for random I/O, so your max IOPS is around 100. I'd
personally use RAID-10 (mirror pairs) instead. Also, you seem to have no write
cache, which can affect performance. Try using a log device.
Best regards
Mertol
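A sketch of the kind of layout being suggested, with placeholder disk names:
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 log c0t4d0
or, to add a separate log device to an existing pool:
# zpool add tank log c0t4d0
Each mirror pair serves random reads independently, unlike one wide RAID-Z
group, and the separate log device takes the synchronous writes.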
There is work underway to make NDMP more efficient on highly fragmented file
systems with a lot of small files.
I am not a development engineer so I don't know much, and I do not think that
there is any committed work. However, ZFS engineers on the forum may comment
more
Mertol
Mertol Ozyoney
Storage
It's pretty simple, if I understand it correctly. When you add some blocks
to ZFS...
xxx
then take a snapshot
(snapshot of x)
the disk has the space of the x's and the snapshot doesn't take up any space
yet.
Then you add more to the drive
and maybe take another snapshot
xx
Brian Hechinger wrote:
On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:
Hi Darren,
Thank you for the update.
Have you got any ETA (build number) for the crypto project?
Also, is there any word on whether this will support the hardware crypto stuff
That has always been the plan a
Is anybody aware of whether this bug is going to be fixed in the near future?
IBM just started to sell the new X25 model for half the price.
--
Roman
Hi,
Brian Hechinger wrote:
On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:
Hi Darren,
Thank you for the update.
Have you got any ETA (build number) for the crypto project?
Also, is there any word on whether this will support the hardware crypto stuff
in the VIA CPUs natively? T
On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:
> Hi Darren,
>
> Thank you for the update.
> Have you got any ETA (build number) for the crypto project?
Also, is there any word on whether this will support the hardware crypto stuff
in the VIA CPUs natively? That would be nice. :)
-
> Well, I see USEDSNAP 13.8 GB for the dataset, so if you delete ALL
> snapshots you'd probably be able to get that much.
I agree, it's just hard to see how...
> As for "which snapshot to delete to get most space",
> that's a little
> bit tricky. See
> rpool/export/home/m...@zfs-auto-snap:monthly
Garrett D'Amore wrote:
Darren J Moffat wrote:
Dataset rename restrictions
---
On rename a dataset cannot be moved out of its wrapping key hierarchy,
i.e. where it inherits the keysource property from. This is best
explained by example:
# zfs get -r keysource tankNA
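The rest of that example is cut off here, but the shape of the restriction,
with purely hypothetical dataset names, is roughly:
# zfs get -r keysource tank
(suppose tank/cr1 and tank/cr2 each set keysource locally, and tank/cr1/data
inherits its keysource from tank/cr1)
# zfs rename tank/cr1/data tank/cr1/data2
(fine: the dataset stays under the same wrapping key hierarchy)
# zfs rename tank/cr1/data tank/cr2/data
(refused: the dataset would move out of the hierarchy it inherits keysource from)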
On Tue, Aug 18, 2009 at 4:09 PM, Matthew Stevenson wrote:
> Hi, thanks for the info.
>
> Can you have a look at the attachment on the original post for me?
>
> Everything you said is what I expected to see in the output there, but a lot
> of the values are blank where I hoped they would at least b
Hi, thanks for the info.
Can you have a look at the attachment on the original post for me?
Everything you said is what I expected to see in the output there, but a lot of
the values are blank where I hoped they would at least be able to tell me a
breakdown of the USEDSNAP figure
As far as I k
On Tue, Aug 18, 2009 at 2:37 PM, Matthew Stevenson wrote:
> So there must be basically lots of references to data that hide themselves
> from the surface and can't really be found using zfs list.
"zfs list -t all" usually works for me. Look at "USED" and "REFER"
My understanding is like this:
RE
I'm no expert, but it sounds like this:
http://opensolaris.org/jive/thread.jspa?threadID=80232
Can you remove the faulted disk?
I found this as well, but I don't think I'd be too comfortable using "zpool
destroy" as a recovery tool...
http://forums.sun.com/thread.jspa?threadID=5259623
It also a
The guide is good, but didn't tell me anything I didn't already know about this
area unfortunately.
Anyway, I freed up a big chunk of space by first deleting the snapshot which
was reported by zfs list as being the largest (2GB). Doing zfs list after this
deletion revealed that several of the n