Dick Davies wrote:
On 13/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> For the sake of argument, let's assume:
>
> 1. disk is expensive
> 2. someone is keeping valuable files on a non-redundant zpool
> 3. they can't scrape enough vdevs to make a redundant zpool
>(reme
Torrey McMahon wrote:
eric kustarz wrote:
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to a
David Dyer-Bennet wrote:
On 9/12/06, eric kustarz <[EMAIL PROTECTED]> wrote:
So it seems to me that having this feature per-file is really useful.
Say i have a presentation to give in Pleasanton, and the presentation
lives on my single-disk laptop - I want all the meta-data and the actual
pres
Torrey McMahon wrote:
Matthew Ahrens wrote:
The problem that this feature attempts to address is when you have
some data that is more important (and thus needs a higher level of
redundancy) than other data. Of course in some situations you can use
multiple pools, but that is antithetical to Z
On 13/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> For the sake of argument, let's assume:
>
> 1. disk is expensive
> 2. someone is keeping valuable files on a non-redundant zpool
> 3. they can't scrape enough vdevs to make a redundant zpool
>(remembering you can buil
Celso wrote:
a couple of points
One could make the argument that the feature could
cause enough
confusion to not warrant its inclusion. If I'm a
typical user and I
write a file to the filesystem where the admin set
three copies but
didn't tell me, it might throw me into a tizzy trying
to f
On Tue, Sep 12, 2006 at 03:56:00PM -0700, Matthew Ahrens wrote:
> The problem that this feature attempts to address is when you have some
> data that is more important (and thus needs a higher level of
> redundancy) than other data. Of course in some situations you can use
> multiple pools, but
Richard Elling wrote:
Frank Cusack wrote:
It would be interesting to have a zfs enabled HBA to offload the checksum
and parity calculations. How much of zfs would such an HBA have to
understand?
[warning: chum]
Disagree. HBAs are pretty wimpy. It is much less expensive and more
efficient to
a couple of points
> One could make the argument that the feature could
> cause enough
> confusion to not warrant its inclusion. If I'm a
> typical user and I
> write a file to the filesystem where the admin set
> three copies but
> didn't tell me, it might throw me into a tizzy trying
> to figu
Frank Cusack wrote:
It would be interesting to have a zfs enabled HBA to offload the checksum
and parity calculations. How much of zfs would such an HBA have to
understand?
[warning: chum]
Disagree. HBAs are pretty wimpy. It is much less expensive and more
efficient to move that (flexible!) f
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
some data that is more im
[dang, this thread started on the one week this quarter that I don't have
any spare time... please accept this one comment, more later...]
Mike Gerdts wrote:
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many co
David Dyer-Bennet wrote:
While I'm not a big fan of this feature, if the work is that well
understood and that small, I have no objection to it. (Boy that
sounds snotty; apologies, not what I intend here. Those of you
reading this know how much you care about my opinion, that's up to
you.)
David Dyer-Bennet wrote:
The more I look at it the more I think that a second copy on the same
disk doesn't protect against very much real-world risk. Am I wrong
here? Are partial(small) disk corruptions more common than I think?
I don't have a good statistical view of disk failures.
I don't
On 9/12/06, Celso <[EMAIL PROTECTED]> wrote:
> Whether it's hard to understand is debatable, but
> this feature
> integrates very smoothly with the existing
> infrastructure and wouldn't
> cause any trouble when extending or porting ZFS.
>
OK, given this statement...
>
> Just for the record, t
On 9/12/06, eric kustarz <[EMAIL PROTECTED]> wrote:
So it seems to me that having this feature per-file is really useful.
Say i have a presentation to give in Pleasanton, and the presentation
lives on my single-disk laptop - I want all the meta-data and the actual
presentation to be replicated.
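For illustration, a minimal sketch of what this would look like in practice, assuming the 'copies' syntax from the proposal and a made-up pool/dataset name; only data written after the property is set would gain the extra copies:
zfs create laptop/docs
zfs set copies=2 laptop/docs        # new blocks in this filesystem are stored twice
zfs get copies laptop/docs          # confirm the setting
cp presentation.odp /laptop/docs/   # the presentation now gets ditto copies of its data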
eric kustarz wrote:
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
s
Anantha,
How's the output of:
dtrace -F -n 'fbt:zfs::/pid==/{trace(timestamp)}'
--
Just me,
Wire ...
On 9/13/06, Anantha N. Srirama <[EMAIL PROTECTED]> wrote:
Here's the information you requested.
Here's the information you requested.
Script started on Tue Sep 12 16:46:46 2006
# uname -a
SunOS umt1a-bio-srv2 5.10 Generic_118833-18 sun4u sparc SUNW,Netra-T12
# prtdiag
System Configuration: Sun Microsystems sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 96GB
==
Chad Lewis wrote:
On Sep 12, 2006, at 4:39 PM, Celso wrote:
the proposed solution differs in one important aspect: it automatically
detects data corruption.
Detecting data corruption is a function of the ZFS checksumming feature. The
proposed solution has _nothing_ to do with detecting corru
>
> It seems to me that asking the user to solve this
> problem by manually
> making copies of all his files puts all the burden on
> the
> user/administrator and is a poor solution.
I completely agree
> For one, they have to remember to do it pretty often.
> For two, when
> they do experie
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
some data that is more
On Sep 12, 2006, at 4:39 PM, Celso wrote:
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
I think it has already been said that in many
people's experience, when a disk fails, it completely
fails. Especially on laptops. Of course ditto blocks
wouldn't help you in this situation either!
Exactly
> On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
>
> > I think it has already been said that in many
> people's experience, when a disk fails, it completely
> fails. Especially on laptops. Of course ditto blocks
> wouldn't help you in this situation either!
>
> Exactly.
>
> > I still think that si
Dick Davies wrote:
For the sake of argument, let's assume:
1. disk is expensive
2. someone is keeping valuable files on a non-redundant zpool
3. they can't scrape enough vdevs to make a redundant zpool
(remembering you can build vdevs out of *flat files*)
Given those assumptions, I think th
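As a hedged sketch of assumption 3's escape hatch, a mirrored pool built from flat files; the paths and pool name are made up for illustration:
mkfile 512m /var/tmp/vdev0 /var/tmp/vdev1
zpool create filepool mirror /var/tmp/vdev0 /var/tmp/vdev1   # plain files are valid vdevs
zpool status filepool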
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more im
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
I think it has already been said that in many people's experience, when a disk
fails, it completely fails. Especially on laptops. Of course ditto blocks
wouldn't help you in this situation either!
Exactly.
I still think that silent data corrupti
On 9/12/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Matthew Ahrens wrote:
> Here is a proposal for a new 'copies' property which would allow
> different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when y
> Matthew Ahrens wrote:
> > Here is a proposal for a new 'copies' property
> which would allow
> > different levels of replication for different
> filesystems.
>
> Thanks everyone for your input.
>
> The problem that this feature attempts to address is
> when you have some
> data that is more i
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more important (and thus needs
Celso wrote:
Hopefully we can agree that you lose nothing by adding this feature, even if
you personally don't see a need for it.
If I read correctly user tools will show more space in use when adding
copies, quotas are impacted, etc. One could argue the added confusion
outweighs the addit
> On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
>
> > ...you split one disk in two. you then have
> effectively two partitions which you can then create
> a new mirrored zpool with. Then everything is
> mirrored. Correct?
>
> Everything in the filesystems in the pool, yes.
>
> > With ditto block
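A minimal sketch of the split-one-disk approach discussed above, assuming two slices have been carved out of the single drive (device names are hypothetical):
zpool create tank mirror c0t0d0s3 c0t0d0s4   # both slices live on the same physical disk
zfs create tank/home                         # data in the pool is now mirrored across the slices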
Also, where do I set arc.c_max? In /etc/system? Out of
curiosity, why isn't
limiting arc.c_max considered best practice (I just want to make
sure I am
not missing something about the effect limiting it will have)?
My guess is
that in our case (lots of small groups -- 50 people or less --
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
...you split one disk in two. you then have effectively two partitions which
you can then create a new mirrored zpool with. Then everything is mirrored.
Correct?
Everything in the filesystems in the pool, yes.
With ditto blocks, you can selecti
> 1) You should be able to limit your cache max size by
> setting arc.c_max. It's currently initialized to be
> phys-mem-size - 1GB.
Mark's assertion that this is not a best practice is something of an
understatement. ZFS was designed so that users/administrators wouldn't have to
configure tuna
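For completeness only (the point above is that this is not best practice): a hedged /etc/system sketch, assuming a build that exposes the zfs_arc_max tunable; the 2 GB value is purely an example and a reboot is required:
* example only: cap the ARC at 2 GB
set zfs:zfs_arc_max = 0x80000000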
Thomas Burns wrote:
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote:
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall, really
like it. However, we have run into a major problem -- zfs's memory
requirements
crowd out our primary application. Ultimat
zpool export
On September 12, 2006 2:41:27 PM -0700 David Smith <[EMAIL PROTECTED]> wrote:
I currently have a system which has two ZFS storage pools. One of the pools is
coming from a
faulty piece of hardware. I would like to bring up our server mounting the
storage pool which is
okay and NOT
I currently have a system which has two ZFS storage pools. One of the pools is
coming from a faulty piece of hardware. I would like to bring up our server
mounting the storage pool which is okay and NOT mounting the one from the
hardware with problems. Is there a simple way to NOT mount
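A minimal sketch of one answer, using hypothetical pool names; an exported pool is not opened at boot, so the healthy pool can come up on its own:
zpool export badpool    # this pool will not be touched at boot
zpool import            # later, list pools available for import
zpool import badpool    # re-import once the hardware is repaired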
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote:
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall,
really
like it. However, we have run into a major problem -- zfs's
memory requirements
crowd out our primary application. Ultimately, we have to reboo
Joe Little wrote:
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip
On Tue, 12 Sep 2006, Mark Maybee wrote:
> Thomas Burns wrote:
> > Hi,
> >
> > We have been using zfs for a couple of months now, and, overall, really
> > like it. However, we have run into a major problem -- zfs's memory
> > requirements
> > crowd out our primary application. Ultimately, we have
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall, really
like it. However, we have run into a major problem -- zfs's memory
requirements
crowd out our primary application. Ultimately, we have to reboot the
machine
so there is enough free memory to st
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip stepping or unable
> On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
>
> > One of the great things about zfs, is that it
> protects not just against mechanical failure, but
> against silent data corruption. Having this available
> to laptop owners seems to me to be important to
> making zfs even more attractive.
>
>
UNIX admin wrote:
This is simply not true. ZFS would protect against
the same type of
errors seen on an individual drive as it would on a
pool made of HW raid
LUN(s). It might be overkill to layer ZFS on top of a
LUN that is
already protected in some way by the devices internal
RAID code but i
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
One of the great things about zfs, is that it protects not just against
mechanical failure, but against silent data corruption. Having this available
to laptop owners seems to me to be important to making zfs even more attractive.
I'm not arguing
Take this for what it is: the opinion on someone who knows less about zfs than
probably anyone else on this thread ,but...
I would like to add my support for this proposal.
As I understand it, the reason for using ditto blocks on metadata, is that
maintaining their integrity is vital for the he
Vladimir Kotal wrote:
Hello,
I'm trying to set ZFS to work with RBAC so that I could manage all ZFS
stuff w/out root. However, in my setup there is sys_mount privilege
needed:
- without sys_mount:
Currently, anything in zfs that changes dataset configurations, such as
file systems and prope
Darren said:
> Right, that is a very important issue. Would a
> ZFS "scrub" framework do copy on write ?
> As you point out if it doesn't then we still need
> to do something about the old clear text blocks
> because strings(1) over the raw disk will show them.
>
> I see the desire to have a knob
On September 12, 2006 11:35:54 AM -0700 UNIX admin <[EMAIL PROTECTED]>
wrote:
There are also the speed enhancement provided by a HW
raid array, and
usually RAS too, compared to a native disk drive but
the numbers on
that are still coming in and being analyzed. (See
previous threads.)
It would
> There are also the speed enhancement provided by a HW
> raid array, and
> usually RAS too, compared to a native disk drive but
> the numbers on
> that are still coming in and being analyzed. (See
> previous threads.)
Speed enhancements? What is the baseline of comparison?
Hardware RAIDs can
> This is simply not true. ZFS would protect against
> the same type of
> errors seen on an individual drive as it would on a
> pool made of HW raid
> LUN(s). It might be overkill to layer ZFS on top of a
> LUN that is
> already protected in some way by the devices internal
> RAID code but it
>
Anantha N. Srirama wrote:
I'm experiencing a bizarre write performance problem while using a
ZFS filesystem. Here are the relevant facts:
[b]No error messages listed by zpool or /var/adm/messages.[/b] When I
try to save a file the operation takes an inordinate amount of time,
in the 30+ second r
Ben Miller wrote:
I had a strange ZFS problem this morning. The entire system would
hang when mounting the ZFS filesystems. After trial and error I
determined that the problem was with one of the 2500 ZFS filesystems.
When mounting that users' home the system would hang and need to be
rebooted.
Robert Milkowski wrote:
Hello Mark,
Monday, September 11, 2006, 4:25:40 PM, you wrote:
MM> Jeremy Teo wrote:
Hello,
how are writes distributed as the free space within a pool reaches a
very small percentage?
I understand that when free space is available, ZFS will batch writes
and then issue
Hi,
We have been using zfs for a couple of months now, and, overall, really
like it. However, we have run into a major problem -- zfs's memory
requirements
crowd out our primary application. Ultimately, we have to reboot the
machine
so there is enough free memory to start the application.
Hello,
I'm trying to set ZFS to work with RBAC so that I could manage all ZFS
stuff w/out root. However, in my setup there is sys_mount privilege
needed:
- without sys_mount:
vk199839:tessier:~$ zpool list
NAME      SIZE    USED    AVAIL    CAP  HEALTH  ALTROOT
local
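A hedged sketch of one way to grant the missing privilege, with a hypothetical account name; run as root, and the user must log in again before it takes effect:
usermod -K defaultpriv=basic,sys_mount vkotal
ppriv $$ | grep sys_mount            # from the user's new shell: confirm the privilege
zfs set compression=on local/test    # dataset changes that needed sys_mount should now work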
On Tue, 12 Sep 2006, Anton B. Rang wrote:
reformatted
> >True - I'm a laptop user myself. But as I said, I'd assume the whole disk
> >would fail (it does in my experience).
Usually a laptop disk suffers a mechanical failure - and the failure rate
is a lot higher than disks in a fixed lo
On Tue, Sep 12, 2006 at 05:17:16PM +0100, Darren J Moffat wrote:
> I see the desire to have a knob that says "make this encrypted now" but
> I personally believe that it is actually better if you can make this
> choice at the time you create the ZFS data set.
Including when creating the dataset
On Tue, Sep 12, 2006 at 10:36:30AM +0100, Darren J Moffat wrote:
> Mike Gerdts wrote:
> >Is there anything in the works to compress (or encrypt) existing data
> >after the fact? For example, a special option to scrub that causes
> >the data to be re-written with the new properties could potentiall
On Tue, Sep 12, 2006 at 07:23:00AM -0400, Jeff A. Earickson wrote:
>
> Modify the dovecot IMAP server so that it can get zfs quota information
> to be able to implement the QUOTA feature of the IMAP protocol (RFC 2087).
> In this case pull the zfs quota numbers for quoted home directory/zfs
> file
Neil A. Wilson wrote:
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS "framework" becomes available.
The biggest problem I see with this is one of observa
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS "framework" becomes available.
The biggest problem I see with this is one of observability, if not all
of
I had a strange ZFS problem this morning. The entire system would hang when
mounting the ZFS filesystems. After trial and error I determined that the
problem was with one of the 2500 ZFS filesystems. When mounting that users'
home the system would hang and need to be rebooted. After I remove
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
I've read the proposal, and followed the discussion so far. I have to
say that I don
>And if we are still writing to the file systems at that time ?
New writes should be done according to the new state (if encryption is being
enabled, all new writes are encrypted), since the goal is that eventually the
whole disk will be in the new state.
The completion percentage should probab
>True - I'm a laptop user myself. But as I said, I'd assume the whole disk
>would fail (it does in my experience).
That's usually the case, but single-block failures can occur as well. They're
rare (check the "uncorrectable bit error rate" specifications) but if they
happen to hit a critical fil
Anton B. Rang wrote:
The biggest problem I see with this is one of observability, if not all
of the data is encrypted yet what should the encryption property say ?
If it says encryption is on then the admin might think the data is
"safe", but if it says it is off that isn't the truth either bec
>The biggest problem I see with this is one of observability, if not all
>of the data is encrypted yet what should the encryption property say ?
>If it says encryption is on then the admin might think the data is
>"safe", but if it says it is off that isn't the truth either because
>some of it
On Tue, Sep 12, 2006 at 05:57:33PM +1000, Boyd Adamson wrote:
> On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:
> >Now you have a persistent SSH connection to remote-host that forwards
> >connections to localhost:12345 to port 56789 on remote-host.
> >
> >So now you can use your Perl scripts mor
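For reference, a minimal sketch of the tunnel being described, reusing the port numbers quoted above:
ssh -f -N -L 12345:localhost:56789 remote-host   # background, no remote command, forwarding only
# connections to localhost:12345 now reach port 56789 on remote-host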
This proposal would benefit greatly by a "problem statement." As it stands, it
feels like a solution looking for a problem.
The Introduction mentions a different problem and solution, but then pretends that
there is value to this solution. The Description section mentions some benefits
of 'c
I'm experiencing a bizarre write performance problem while using a ZFS
filesystem. Here are the relevant facts:
[b]# zpool list[/b]
NAME       SIZE    USED    AVAIL    CAP  HEALTH  ALTROOT
mtdc      3.27T    502G    2.78T    14%  ONLINE  -
zfspool    68.5
Anton B. Rang writes:
> The bigger problem with system utilization for software
RAID is the cache, not the CPU cycles proper. Simply
preparing to write 1 MB of data will flush half of a 2 MB L2
cache. This hurts overall system performance far more than
the few microseconds
The multiple copies needs to be thought out carefully for interactions
with ZFS crypto since. I'm not sure what the impact is yet, it would
help to know at what layer in the ZIO pipeline this is done - eg today
before or after compression.
--
Darren J Moffat
Dick Davies wrote:
On 12/09/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> The only real use I'd see would be for redundant copies
> on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
On 12/09/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> The only real use I'd see would be for redundant copies
> on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
True - I'm a laptop
Hello Matthew,
Saturday, September 9, 2006, 9:09:07 PM, you wrote:
MA> Robert Milkowski wrote:
>> Hi.
>>
>> bash-3.00# zfs get quota f3-1/d611
>> NAME       PROPERTY  VALUE  SOURCE
>> f3-1/d611  quota     400G   local
>> bash-3.00#
>>
Hello Mark,
Monday, September 11, 2006, 4:25:40 PM, you wrote:
MM> Jeremy Teo wrote:
>> Hello,
>>
>> how are writes distributed as the free space within a pool reaches a
>> very small percentage?
>>
>> I understand that when free space is available, ZFS will batch writes
>> and then issue them
On Tue, 12 Sep 2006, Darren J Moffat wrote:
Date: Tue, 12 Sep 2006 10:30:33 +0100
From: Darren J Moffat <[EMAIL PROTECTED]>
To: Jeff A. Earickson <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS API (again!), need quotactl(7I)
Jeff A. Earickson wrote:
Hi,
I w
On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:
On Mon, Sep 11, 2006 at 06:39:28AM -0700, Bui Minh Truong wrote:
Does "ssh -v" tell you any more ?
I don't think problem is ZFS send/recv. I think it's take a lot of
time to connect over SSH.
I tried to access SSH by typing: ssh remote_machine
Dick Davies wrote:
The only real use I'd see would be for redundant copies
on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
--
Darren J Moffat
Thank you all for your advice.
Finally, I chose to write 2 scripts (client & server) using port
forwarding via SSH, for security reasons.
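A minimal sketch of the transfer such client/server scripts typically wrap, with hypothetical dataset, snapshot and host names:
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs recv backuppool/data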
> Hi Matt,
> Interesting proposal. Has there been any
> consideration if free space being reported for a ZFS
> filesystem would take into account the copies
> setting?
>
> Example:
> zfs create mypool/nonredundant_data
> zfs create mypool/redundant_data
> df -h /mypool/nonredundant_data
> /
On 12/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
Flexibility is always nice, but this seems to greatly complicate things,
both techni
Mike Gerdts wrote:
On 9/11/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
of the given filesystem will be stored. Its value must be 1, 2, or 3.
Like other properties (eg. checksum, compression), it only affe
> I have a Sun x4200 with 4x gigabit ethernet NICs. I
> have several of
> them configured with distinct IP addresses on an
> internal (10.0.0.0)
> network.
[off topic]
Why are you using distinct IP addresses instead of IPMP ?
[/off]
Jeff A. Earickson wrote:
Hi,
I was looking for the zfs system calls to check zfs quotas from
within C code, analogous to the quotactl(7I) interface for UFS,
and realized that there was nothing similar. Is anything like this
planned? Why no public API for ZFS?
Do I start making calls to zfs_pr
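In the absence of a public API, a hedged workaround is to parse zfs(1M) output from a script or via popen(); the dataset name is hypothetical:
zfs get -H -o value quota,used pool/home/jane   # no headers, one value per line, easy to parse for an IMAP QUOTA backend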