Colin Raven wrote:
Folks,
I've been reading Jeff Bonwick's fascinating dedup post. This is going
to sound like either the dumbest or the most obvious question ever
asked, but, if you don't know and can't produce meaningful RTFM
results, ask... so here goes:
Assuming you have a dataset in a
On Mon, Nov 23, 2009 at 8:24 PM, Trevor Pretty
wrote:
>
> I'm persuading a customer that when he goes to S10 he should use ZFS for
> everything. We only have one M3000 and a J4200 connected to it. We are not
> talking about a massive site here with a SAN etc. The M3000 is their
> "mainframe". His
Folks,
I've been reading Jeff Bonwick's fascinating dedup post. This is going to
sound like either the dumbest or the most obvious question ever asked, but,
if you don't know and can't produce meaningful RTFM results, ask... so here
goes:
Assuming you have a dataset in a zfs pool that's been dedu
Hi All,
I'd like to announce the immediate availability of NexentaStor
Developer Edition v2.2.0.
Since the previous announcement, many exciting additions have gone
into NexentaStor Developer edition.
* This is a major stable release.
* Storage limit increased to 4TB.
* Built-in antivirus capabil
Finally, just to be clear, one last point: the two fixes integrated
today only affect you if you've explicitly set dedup=fletcher4,verify.
To quote Matt:
> This is not the default dedup setting; pools that only used "zfs set
> dedup=on" (or =sha256, or =verify, or =sha256,verify) are unaffected.
On Nov 23, 2009, at 7:28 PM, Travis Tabbal wrote:
I have a possible workaround. Mark Johnson
has been emailing me today about this issue and he proposed the
following:
You can try adding the following to /etc/system, then rebooting...
set xpv_psm:xen_support_msi = -1
would this change
And, for the record, this is my fault. There is an aspect of endianness
that I simply hadn't thought of. When I have a little more time I will
blog about the whole thing, because there are many useful lessons here.
Thank you, Matt, for all your help with this. And my apologies to
everyone else
We discovered another, more fundamental problem with dedup=fletcher4,verify.
I've just putback the fix for:
6904243 zpool scrub/resilver doesn't work with cross-endian
dedup=fletcher4,verify blocks
The same instructions as below apply, but in addition, the
dedup=fletcher4,verify functio
On Nov 23, 2009, at 8:24 PM, Trevor Pretty wrote:
I'm persuading a customer that when he goes to S10 he should use ZFS
for everything. We only have one M3000 and a J4200 connected to it.
We are not talking about a massive site here with a SAN etc. The
M3000 is their "mainframe". His RTO a
I'm persuading a customer that when he goes to S10 he should use ZFS
for everything. We only have one M3000 and a J4200 connected
to it. We are not
talking about a massive site here with a SAN etc. The M3000 is their
"mainframe". His RTO and RPO are both about 12 hours, his business gets
diffi
Travis Tabbal wrote:
I have a possible workaround. Mark Johnson has
been emailing me today about this issue and he proposed the
following:
You can try adding the following to /etc/system, then rebooting...
set xpv_psm:xen_support_msi = -1
I am also running XVM, and after modifying /etc/syste
I have a possible workaround. Mark Johnson has been
emailing me today about this issue and he proposed the following:
> You can try adding the following to /etc/system, then rebooting...
> set xpv_psm:xen_support_msi = -1
I have been able to format a ZVOL container from a VM 3 times while oth
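For anyone else wanting to try this, one minimal way to apply the workaround
(back up /etc/system first; the tunable is exactly as quoted above, the backup
filename is just a local choice):

  cp /etc/system /etc/system.pre-msi
  echo 'set xpv_psm:xen_support_msi = -1' >> /etc/system
  init 6          # reboot so the setting takes effect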
Most ECC setups are as you describe. The memory hardware detects and corrects
all 1-bit errors, and detects all two-bit errors on its own. What ... should
... happen is that the OS should get an interrupt when this happens so it has
the opportunity to note the error in logs and to higher level s
Andrew Gabriel wrote:
Kjetil Torgrim Homme wrote:
Daniel Carosone writes:
Would there be a way to avoid taking snapshots if they're going to be
zero-sized?
I don't think it is easy to do, the txg counter is on a pool level,
AFAIK:
# zdb -u spool
Uberblock
magic = 00bab
Most ECC setups are as you describe. The memory hardware detects and corrects
all 1-bit errors, and detects all two-bit errors on its own. What ... should
... happen is that the OS should get an interrupt when this happens so it has
the opportunity to note the error in logs and to higher level s
Kjetil Torgrim Homme wrote:
Daniel Carosone writes:
Would there be a way to avoid taking snapshots if they're going to be
zero-sized?
I don't think it is easy to do, the txg counter is on a pool level,
AFAIK:
# zdb -u spool
Uberblock
magic = 00bab10c
version = 1
Travis Tabbal wrote:
I will give you all of this information on monday.
This is great news :)
Indeed. I will also be posting this information when I get to the server
tonight. Perhaps it will help. I don't think I want to try using that old
driver though, it seems too risky for my taste.
D
> I will give you all of this information on monday.
> This is great news :)
Indeed. I will also be posting this information when I get to the server
tonight. Perhaps it will help. I don't think I want to try using that old
driver though, it seems too risky for my taste.
Is there a command
> Daniel Carosone writes:
>
> > Would there be a way to avoid taking snapshots if
> > they're going to be zero-sized?
>
> I don't think it is easy to do, the txg counter is on
> a pool level,
> [..]
> it would help when the entire pool is idle, though.
.. which is exactly the scenario in questi
> "lz" == Len Zaifman writes:
lz> So I now have 2 disk paths and two network paths as opposed to
lz> only one in the 7310 cluster.
You're configuring all your failover on the client, so the HA stuff is
stateless wrt the server? sounds like the smart way since you control
both ends
Get the 7310 setup. Vs. the X4540 it is:
(1) less configuration on your clients
(2) instant failover with no intervention on your part
(3) less expensive
(4) expandable to 3x your current disk space
(5) lower power draw & less rack space
(6) So Simple, A Caveman Could Do It (tm)
-Erik
On Mon,
On Nov 23, 2009, at 14:46, Len Zaifman wrote:
Under these circumstances what advantage would a 7310 cluster over 2
X4540s backing each other up and splitting the load?
Do you want to worry about your storage system at 3 AM?
That's what all these appliances (regardless of vendor) get you for
Len Zaifman wrote:
Under these circumstances what advantage would a 7310 cluster over 2 X4540s backing each other up and splitting the load?
FISH! My wife could drive a
7310 :-)
On Nov 23, 2009, at 12:48 PM, Miles Nordin wrote:
"tc" == Tim Cook writes:
tc> I believe that opensolaris can do the ECC scrubbing in
tc> software even if the motherboard BIOS doesn't support it.
yeah, I don't really understand how the solaris idle page scrubbing
interacts with whatev
Erik and Richard: thanks for the information -- this is all very good stuff.
Erik Trimble wrote:
Something occurs to me: how full is your current 4 vdev pool? I'm
assuming it's not over 70% or so.
yes, by adding another 3 vdevs, any writes will be biased towards the
"empty" vdevs, but that
> "tc" == Tim Cook writes:
tc> I believe that opensolaris can do the ECC scrubbing in
tc> software even if the motherboard BIOS doesn't support it.
yeah, I don't really understand how the solaris idle page scrubbing
interacts with whatever. scrubbing's a hardware feature for AMD. I
On Nov 4, 2009, at 6:02 PM, Jim Klimov wrote:
> Thanks for the link, but the main concern in spinning down drives of a ZFS
> pool
> is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a
> transaction
> group (TXG) which requires a synchronous write of metadata to disk.
I'm r
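For what it's worth, the upper bound of that interval is governed by the
zfs_txg_timeout kernel tunable (assuming your build still uses that name); the
/etc/system entry would look like the line below. Stretching it only spaces out
the TXG writes, it doesn't remove them.

  set zfs:zfs_txg_timeout = 30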
On Mon, November 23, 2009 12:42, Eric D. Mudama wrote:
> On Mon, Nov 23 at 9:44, sundeep dhall wrote:
>>All,
>>
>>I have a test environment with 4 internal disks and RAIDZ option.
>>
>>Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz
>> handles things well without data er
If the 7310s can meet your performance expectations, they sound much better
than a pair of x4540s. Auto-fail over, SSD performance (although these can be
added to the 4540s), ease of management, and a great front end.
I haven't seen if you can use your backup software with the 7310s, but from
I asked this question a week ago but now I have what I feel are reasonable
pricing numbers :
For 2 X4540s (24 TB each) I pay 6% more than for one 7310 redundant cluster
(2 7310s in a cluster configuration) with 22 TB of disk and 2 x 18 GB SSDs.
I lose live redundancy, but can switch the filer
On Nov 23, 2009, at 9:44 AM, sundeep dhall wrote:
All,
I have a test environment with 4 internal disks and RAIDZ option.
Q) How do I simulate a sudden 1-disk failure to validate that zfs /
raidz handles things well without data errors
First, list the failure modes you expect to see.
Second,
If you did not do "zfs set dedup=fletcher4,verify " (which is available
in build 128 and nightly bits since then), you can ignore this message.
We have changed the on-disk format of the pool when using
dedup=fletcher4,verify with the integration of:
6903705 dedup=fletcher4,verify doesn't byt
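If you're not sure whether any of your datasets use that combination, a quick
check ('tank' is a placeholder pool name):

  zfs get -r dedup tank | grep fletcher4,verify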
Hi everyone,
We've been reasonably happy with ZFS running on Dell 2970/MD1000 hardware. We
are running pools of about 50TB usable (60 drives) made up of many RaidZ2
groups.
Anyhow, we now have the desire to build pools even larger - in the 300TB
range. Having that much disk behind a single
#1. It may help to use 15k disks as the ZIL. When I tested using three 15k
disks striped as my ZIL, it made my workload go slower, even though it seems
like it should have been faster. My suggestion is to test it out, and see if it
helps.
#3. You may get good performance with an inexpensive SS
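If you do test a separate log device, adding it and pulling it back out is
cheap enough to experiment with (device name is a placeholder; removing a log
vdev is only supported in recent builds/pool versions):

  zpool add tank log c2t0d0
  zpool remove tank c2t0d0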
sundeep dhall writes:
> Q) How do I simulate a sudden 1-disk failure to validate that zfs /
> raidz handles things well without data errors
>
> Options considered
> 1. suddenly pulling a disk out
> 2. using zpool offline
>
> I think both these have issues in simulating a sudden failure
why not
I would try using hdadm or cfgadm to specifically offline devices out from
under ZFS.
I have done that previously with cfgadm for systems I cannot physically access.
You can also use file backed storage to create your raidz and move, delete,
overwrite the files to simulate issues.
Shawn
On No
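To expand on the file-backed idea, a throwaway setup like this lets you
simulate a dying disk without touching real hardware (paths and sizes are
arbitrary):

  # create four 256 MB backing files and build a raidz pool on them
  mkfile 256m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
  zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
  # corrupt one backing file to simulate a sudden failure, then scrub
  dd if=/dev/urandom of=/var/tmp/d3 bs=1024k count=256 conv=notrunc
  zpool scrub testpool
  zpool status -v testpool
  # clean up when done
  zpool destroy testpool
  rm /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4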
On Mon, Nov 23 at 9:44, sundeep dhall wrote:
All,
I have a test environment with 4 internal disks and RAIDZ option.
Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz
handles things well without data errors
Options considered
1. suddenly pulling a disk out
2. using zpo
On Mon, November 23, 2009 11:44, sundeep dhall wrote:
> All,
>
> I have a test environment with 4 internal disks and RAIDZ option.
>
> Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz
> handles things well without data errors
>
> Options considered
> 1. suddenly pulling a
Your system must be at Solaris 10 10/08 (Update 6) or later, which provides ZFS
boot support, before going from UFS to ZFS.
First update to Update 6,
then move from UFS to ZFS.
F.
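Once you're on Update 6 or later, the usual sequence is to create a root pool
on the spare disk and let Live Upgrade copy the UFS BE into it; a sketch using
the disk mentioned in this thread, with placeholder pool/BE names ('rpool',
'zfsBE'):

  # the slice must carry an SMI (VTOC) label to be a bootable root pool
  zpool create rpool c1t1d0s0
  lucreate -n zfsBE -p rpool
  luactivate zfsBE
  init 6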
On 11/23/09 18:07, Arnold Bob wrote:
Sorry, I forgot to put this in the post:
I did "zpool create boot c1t1d0s0" after the form
All,
I have a test environment with 4 internal disks and RAIDZ option.
Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz
handles things well without data errors
Options considered
1. suddenly pulling a disk out
2. using zpool offline
I think both these have issues in s
> > On 11/23/09 10:10 AM, David Dyer-Bennet wrote:
> Lots of storage servers, outside the big corporate environment, can't
> afford full-blown redundancy. For many of us, we're just taking the first
> steps into using any kind of redundancy at all in disks for our file
> servers. Full enter
Your point is well taken, Frank, and I agree - there has to be some serious
design work for reliability. My background includes both hardware design for
reliability and field service engineering support, so the issues are not at all
foreign to me. Nor are the limits of something like a volunteer
Sorry, I forgot to put this in the post:
I did "zpool create boot c1t1d0s0" after the format command and before the
lucreate command and got that error once I ran lucreate.
Thanks!
> Thanks old friend
>
> I was surprised to read in the S10 zfs man page that there was the
> option sharesmb=on.
> I thought I had missed the CIFS server making S10 whilst I was not
> looking, but I was quickly coming to the conclusion that the CIFS stuff
> was just not there, despi
Hey everyone -
I'm trying to live upgrade a Solaris 10 5/08 system on UFS to Solaris 10 10/09
on ZFS.
/ is mounted on c1t0d0s0 (UFS). I have a 2nd disk, c1t1d0, that is not being
used and is available for the ZFS migration.
What is the correct procedure for making a bootable ZFS slice? This i
On Nov 23, 2009, at 1:43 AM, Nishchaya Bahuguna wrote:
Hi experts,
I have a scenario where I need to use a zpool version which is a
part of Nevada currently (snv 125 onwards), but not yet a part of S10.
What is the best way to do it?
XVM or VirtualBox.
What all packages do I need to instal
Daniel Carosone writes:
> Would there be a way to avoid taking snapshots if they're going to be
> zero-sized?
I don't think it is easy to do, the txg counter is on a pool level,
AFAIK:
# zdb -u spool
Uberblock
magic = 00bab10c
version = 13
txg = 1773324
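As a rough sketch of the idea anyway (and of its limitation, since it only
helps when the entire pool has been idle): record the pool-wide txg after each
snapshot run and skip the next run if it hasn't moved. Pool and dataset names
below ('spool', 'data') are placeholders.

  #!/bin/sh
  POOL=spool
  STATE=/var/tmp/last_txg.$POOL
  # current pool-wide txg, as reported by zdb -u
  cur=`zdb -u $POOL | awk '$1 == "txg" { print $3 }'`
  if [ "$cur" != "`cat $STATE 2>/dev/null`" ]; then
          zfs snapshot $POOL/data@auto-`date +%Y%m%d-%H%M`
          echo "$cur" > $STATE
  fi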
On Mon, November 23, 2009 09:53, Frank Middleton wrote:
> On 11/23/09 10:10 AM, David Dyer-Bennet wrote:
>
>> Is there enough information available from system configuration
>> utilities
>> to make an automatic HCL (or unofficial HCL competitor) feasible?
>> Someone
>> could write an application p
On 11/23/09 10:10 AM, David Dyer-Bennet wrote:
Is there enough information available from system configuration utilities
to make an automatic HCL (or unofficial HCL competitor) feasible? Someone
could write an application people could run which would report their
opinion on how well it works, p
Kjetil Torgrim Homme writes:
> Cindy Swearingen writes:
>> You might check the slides on this page:
>>
>> http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
>>
>> Particularly, slides 14-18.
>>
>> In this case, graphic illustrations are probably the best way
>> to answer your questions.
On Sat, November 21, 2009 20:25, Al Hopper wrote:
>> And the last silly question. It seems to me that you'd have many, many
>> adopters if there was a real answer to what the HCL tries to be and
>> isn't - an answer to "if I buy this stuff, do I have a prayer of making
>> it work, or is there a s
On Sun, Nov 22, 2009 at 12:49 PM, Tim Cook wrote:
>
snip
> Someone can correct me if I'm wrong... but I believe that opensolaris can do
> the ECC scrubbing in software even if the motherboard BIOS doesn't support
> it.
The OS is not involved with the ECC functionality of the hardware
AF
On 11/22/09 16:48, Tim Cook wrote:
On Sun, Nov 22, 2009 at 4:18 PM, Trevor Pretty
<trevor_pre...@eagle.co.nz> wrote:
Team
I'm missing something? First off I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.
I'm doing all thi
Hi experts,
I have a scenario where I need to use a zpool version which is a part of
Nevada currently (snv 125 onwards), but not yet a part of S10.
What is the best way to do it? What all packages do I need to install
from snv in order to upgrade my zpool version?
Thanks in advance,
Nishchaya
I have noticed for a while that "zfs promote " doesn't seem to
complete properly, even though no error is returned.
I'm still running the nevada builds and use lucreate/luupgrade/ludelete
to manage my BEs. Due to issues with ludelete failing to clean up old
BEs properly, I've made it a habit
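If it helps anyone debug this: after a successful promote, the clone's origin
property should read '-' and the old parent should now list one of the clone's
snapshots as its origin, so the whole chain can be inspected with something
like (the dataset name is a placeholder):

  zfs get -r origin rpool/ROOT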