Thanks, that fixes the following two issues, and I can now get the right value: (1) Divide the offset 0x657800 (6649856) by 512 and use that as the iseek value. (2) Run the dd command on the device c2t0d0s0, not c2t0d0.
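For anyone following along, the resulting dd invocation would look roughly like the sketch below (the output file and count are assumptions; 0x657800 = 6649856 bytes, and 6649856 / 512 = 12988 sectors):
# dd if=/dev/rdsk/c2t0d0s0 of=/tmp/block.out bs=512 iseek=12988 count=256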
Zhihui
2009/6/26 m...@bruningsystems.com
> Hi Zhihui Chen,
>
> zhihui Chen wrote:
>
>> Find that zio-
I find that zio->io_offset is the absolute offset on the device, not in sector units. And if we need to use zdb -R to dump the block, we should use the offset (zio->io_offset - 0x40).
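If it helps, a hypothetical zdb -R invocation for the offsets discussed above might look like the sketch below (the pool name, vdev index, and size are assumptions; the offset is 0x657800 with the -0x40 adjustment applied, and the values are in hex):
# zdb -R tank 0:6577c0:20000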
2009/6/25 zhihui Chen
> I use the following dtrace script to trace the position of one file on zfs:
>
> #!/usr/sbin/dtrace -qs
Simon Breden wrote:
Miles, thanks for helping clear up the confusion surrounding this subject!
My decision is now as above: for my existing NAS, to leave the pool as-is and seek a card with 2+ SATA ports for a 2-drive mirror of the 2 x 30GB SATA boot SSDs that I want to add.
For the next NAS build l
Simon Breden wrote:
I think the confusion is because the 1068 can do "hardware" RAID, it
can and does write its own labels, as well as reserve space for replacements
of disks with slightly different sizes. But that is only one mode of
operation.
So, it sounds like if I use a 1068-based d
OK, thanks James.
On Thu, 25 Jun 2009 16:11:04 -0700 (PDT)
Simon Breden wrote:
> That sounds even better :)
>
> So what's the procedure to create a zpool using the 1068?
same as any other device:
# zpool create poolname vdev vdev vdev
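For example (the pool name and device names below are just placeholders):
# zpool create tank mirror c1t0d0 c1t1d0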
> Also, any special 'tricks /tips' / commands required for using a 1068-ba
That sounds even better :)
So what's the procedure to create a zpool using the 1068?
Also, any special 'tricks/tips' / commands required for using a 1068-based
SAS/SATA device?
Simon
> Isn't that section of the evil tuning guide you're quoting actually about
> checking if the NVRAM/driver connection is working right or not?
Miles, yes, you are correct. I just thought it was interesting reading about
how syncs and such work within ZFS.
Regarding my NFS test, you remind me tha
On Fri, Jun 26 at 8:55, James C. McPherson wrote:
On Thu, 25 Jun 2009 15:43:17 -0700 (PDT)
Simon Breden wrote:
> I think the confusion is because the 1068 can do "hardware" RAID,
> it can and does write its own labels, as well as reserve space
> for replacements of disks with slightly differe
On Thu, 25 Jun 2009 15:43:17 -0700 (PDT)
Simon Breden wrote:
> > I think the confusion is because the 1068 can do "hardware" RAID, it
> can and does write its own labels, as well as reserve space for replacements
> of disks with slightly different sizes. But that is only one mode of
> operation.
> I think the confusion is because the 1068 can do "hardware" RAID, it
can and does write its own labels, as well as reserve space for replacements
of disks with slightly different sizes. But that is only one mode of
operation.
So, it sounds like if I use a 1068-based device, and I *don't* want i
Miles, thanks for helping clear up the confusion surrounding this subject!
My decision is now as above: for my existing NAS, to leave the pool as-is and seek a card with 2+ SATA ports for a 2-drive mirror of the 2 x 30GB SATA boot SSDs that I want to add.
For the next NAS build later on this summer, I
Thanks to all for the efforts, but I was able to import the zpool after disabling the first HBA card. I don't know the reason for this, but now the pool is imported and no disk was lost :-)
Miles Nordin wrote:
"sb" == Simon Breden writes:
sb> The situation regarding lack of open source drivers for these
sb> LSI 1068/1078-based cards is quite scary.
meh I dunno. The amount of confusion is a little scary, I guess.
sb> And did I understand you correctly w
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama
wrote:
> True. In $ per sequential GB/s, rotating rust still wins by far.
> However, your comment about all flash being slower than rotating at
> sequential writes was mistaken. Even at 10x the price, if you're
> working with a dataset that needs r
> "sb" == Simon Breden writes:
sb> The situation regarding lack of open source drivers for these
sb> LSI 1068/1078-based cards is quite scary.
meh I dunno. The amount of confusion is a little scary, I guess.
sb> And did I understand you correctly when you say that these LSI
Thanks Tim!
-- richard
The situation regarding lack of open source drivers for these LSI
1068/1078-based cards is quite scary.
And did I understand you correctly when you say that these LSI 1068/1078
drivers write labels to drives, meaning you can't move drives from an LSI
controlled array to another arbitrary array
And regarding the path: my other system has the same one and it's working fine; see the output below.
# zpool status
  pool: emcpool1
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool
The zpool cache is in /etc/zfs/zpool.cache, or it can be viewed with zdb -C, but in my case it's blank :-(
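For reference, checking the cached configuration looks something like this; the first form dumps the cached config for all pools, the second just the named pool (the pool name here is taken from the status output above):
# zdb -C
# zdb -C emcpool1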
> "sm" == Scott Meilicke writes:
sm> Some storage will flush their caches despite the fact that the
sm> NVRAM protection makes those caches as good as stable
sm> storage. [...] ZFS also issues a flush every time an
sm> application requests a synchronous write (O_DSYNC, fsync,
> "jl" == James Lever writes:
jl> I thought they were both closed source
yes, both are closed source / proprietary. If you are really confused
and not just trying to pick a dictionary fight, I can start saying
``closed source / proprietary'' on Solaris lists from now on.
On Linux list
Ketan writes:
> thats the problem this system has just 2 LUNs assigned and both are present
> as you can see from format output
>
> 10. emcpower0a
> /pseudo/e...@0
> 11. emcpower1a
> /pseudo/e...@1
Ahhh, so the path has changed.
Your old path was emcpower0c; now you have emcpower0a and emcpow
That's the problem: this system has just 2 LUNs assigned and both are present, as you can see from the format output:
10. emcpower0a
/pseudo/e...@0
11. emcpower1a
/pseudo/e...@1
Ketan writes:
> no idea path changed or not .. but following is output from my format .. and
> nothing has changed
>
> AVAILABLE DISK SELECTIONS:
>0. c1t0d0
> /p...@0/p...@0/p...@2/s...@0/s...@0,0
>1. c1t1d0
> /p...@0/p...@0/p...@2/s...@0/s...@1,0
>
No idea whether the path changed or not, but the following is the output from my format, and nothing has changed:
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/p...@0/p...@0/p...@2/s...@0/s...@0,0
1. c1t1d0
/p...@0/p...@0/p...@2/s...@0/s...@1,0
2. c3t5006016841E0A08Dd0
Could it be possible that your path changed?
Just run "format" (exit with CTRL+D) and look whether emcpower0c is now located somewhere else.
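A quick non-interactive way to check is something like the following (the grep pattern is just an example):
# echo | format | grep emcpower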
regards
daniel
Ketan writes:
> Hi , I had a zfs pool which i exported before our SAN maintenance
> and powerpath upgrade but now after the powerpath upgrade and
> mainten
Hi, I had a zfs pool which I exported before our SAN maintenance and powerpath upgrade, but now, after the powerpath upgrade and maintenance, I'm unable to import the pool. It gives the following errors:
# zpool import
  pool: emcpool1
    id: 5596268873059055768
 state: UNAVAIL
status: One or more d
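For what it's worth, in this situation it can be worth re-listing importable pools against the device directory and, if the pool shows up, importing it by its numeric id (the directory below is just the default shown explicitly; the id comes from the output above):
# zpool import -d /dev/dsk
# zpool import 5596268873059055768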
Chookiex wrote:
thank you ;)
I mean that it would be faster in reading compressed data IF the write
with compression is faster than non-compressed? Just like lzjb.
Do you mean that it would be faster to read compressed data than
uncompressed data, or it would be faster to read compressed dat
On Wed, Jun 24 at 18:43, Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Eric D. Mudama wrote:
The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.
In the "available 5 months ago" category, the Intel X25-E will wr
On Thu, 25 Jun 2009, Ross wrote:
But the unintended side effect of this is that ZFS's attempt to
optimize writes will cause jerky read and write behaviour any time
you have a large amount of writes going on, and when you should be
pushing the disks to 100% usage you're never going to reach th
I think I am getting closer to ideas as to how to back this up. I will do as you said to back up the OS, take an image or something of that nature. I will take a full backup of the virtual machines every one to three months; however, the data that the VM is working with will be mounted separately
> if those servers are on physical boxes right now i'd do some perfmon
> caps and add up the iops.
Using perfmon to get a sense of what is required is a good idea. Use the 95th percentile to be conservative. The counters I have used are in the Physical Disk object. Don't ignore the latency counter
I'm having the same problems.
Approximately every 1-9 hours it crashes and the backtrace is exactly the same as posted here.
The machine ran b98 rock-solid for a long time...
Anyone have a clue where to start?
On Wed, 24 Jun 2009, Lejun Zhu wrote:
There is a bug in the database about reads blocked by writes which may be
related:
http://bugs.opensolaris.org/view_bug.do?bug_id=6471212
The symptom is sometimes reducing queue depth makes read perform better.
This one certainly sounds promising. Sinc
Hi Ross,
On Thu, 2009-06-25 at 04:24 -0700, Ross wrote:
> Thanks Tim, do you know which build this is going to appear in?
I've actually no idea - SUNWzfs-auto-snapshot gets delivered by the
Desktop consolidation, not me. I'm checking in with them to see what the
story is.
That said, it probably
Thank you ;)
I mean, would it be faster to read compressed data if the write with compression is faster than non-compressed, just like lzjb?
But I can't understand why the read performance is generally unaffected by compression. Is it because the decompression (lzjb, gzip) is faster than compr
Thanks Tim, do you know which build this is going to appear in?
> I am not sure how zfs would know the rate of the
> underlying disk storage
Easy: Is the buffer growing? :-)
If the amount of data in the buffer is growing, you need to throttle back a bit
until the disks catch up. Don't stop writes until the buffer is empty, just
slow them down to match t
I use the following dtrace script to trace the position of one file on zfs:

#!/usr/sbin/dtrace -qs

zio_done:entry
/((zio_t *)(arg0))->io_vd/
{
        /* clause-local copy of the completed zio */
        this->zio = (zio_t *)arg0;
        printf("Offset:%x and Size:%x\n", this->zio->io_offset, this->zio->io_size);
        printf("vd:%x\n", (unsigned long)(this->zio->io_vd));
}
Hi all,
Just a quick plug: the latest version of ZFS Automatic Snapshots SMF
service hit the hg repository yesterday.
If you're using 0.11 or older, it's well worth upgrading to get the few
bugfixes (especially if you're using CIFs - we use '_' instead of ':' in
snapshot names now)
More at:
http
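For anyone trying the service for the first time, enabling one of the schedules looks roughly like the sketch below (the service FMRI is an assumption and the instance names may differ between versions):
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
# svcs -a | grep auto-snapshot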
> It might be easier to look for the pool status thusly
> zpool get health poolname
Correct me if I'm wrong, but "zpool get" is available only in some of the latest versions of OS and Solaris 10 (on some boxes we are using older versions of Solaris 10). Nevertheless, IMO "zpool status -x" should
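For comparison, the two health checks mentioned above, side by side (the pool name is a placeholder):
# zpool get health tank
# zpool status -x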
Thanks very much everyone.
Victor, I did think about using VirtualBox, but I have a real machine and a supply of hard drives for a short time, so I'll test it out using that if I can.
Scott, of course, at work we use three mirrors and it works very well; it has saved us on occasion where we have
Miles Nordin wrote:
There's also been talk of two tools, MegaCli and lsiutil, which are
both binary only and exist for both Linux and Solaris, and I think are
used only with the 1078 cards but maybe not.
lsiutil works with LSI chips that use the Fusion-MPT interface (SCSI,
SAS, and FC), inclu
> Yep, it also suffers from the bug that restarts
> resilvers when you take a
> snapshot. This was fixed in b94.
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6343667
> -- richard
Hats off to Richard for saving the day. This was exactly the issue. I shut
off my automatic snap
On 25/06/2009, at 5:16 AM, Miles Nordin wrote:
and mpt is the 1068 driver, proprietary, works on x86 and SPARC.
then there is also itmpt, the third-party-downloadable closed-source
driver from LSI Logic, dunno much about it but someone here used it.
I'm confused. Why do you say the mpt dr