report exactly the same usage of
> 3.7TByte.
>
Please check the ZFS FAQ:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
There is a question regarding the difference between du, df and zfs list.
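For a side-by-side comparison on your own system (the dataset name below is
just an example):

# zfs list -o name,used,avail,refer tank/fs
# df -h /tank/fs
# du -sh /tank/fs

Snapshots, clones, compression and raidz/mirror overhead are the usual
reasons these three numbers disagree.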
--
Giovanni Tirloni
sysdroid.com
on then.
Every other vendor out there is releasing products with deduplication.
Personally, I would just wait 2-3 releases before using it in a black box
like the 7000s.
The hardware, on the other hand, is incredible in terms of resilience and
performance, no doubt. Which makes me think t
ribes while destroying datasets
recursively (>600GB and with 7 snapshots). It seems that close to the end
the server stalls for 10-15 minutes and NFS activity stops. For small
datasets/snapshots that doesn't happen or is harder to notice.
Does ZFS h
Hello Abdullah,
I don't think I understand. How are you seeing files being created on the
SSD disk?
You can check device usage with `zpool iostat -v hdd`. Please also send the
output of `zpool status hdd`.
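For the iostat check, adding an interval gives a live view while your
workload runs (5-second samples here):

# zpool iostat -v hdd 5

That shows per-vdev reads and writes, so it should be clear whether the SSD
is seeing any write activity.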
Thank you,
--
Giovanni Tirloni
sysdroid.com
rite
data there; otherwise there is nothing to read back later when a read()
misses the ARC cache and checks L2ARC.
I don't know what your OLTP benchmark does, but my advice is to check whether
it's really writing files in the 'hdd' zpool mount point.
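A quick sanity check (the path is whatever directory your benchmark writes
to):

# zfs get mountpoint hdd
# df -h /path/to/benchmark/files

The df output should name an 'hdd' dataset; if it shows some other
filesystem, the benchmark isn't touching the pool at all.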
--
Giovanni Tirloni
sy
# zpool get autoreplace tank
NAME  PROPERTY     VALUE  SOURCE
tank  autoreplace  on     local
# fmdump -e -t 08Mar2010
TIME                 CLASS
As you can see, no error report was posted. You can try to import the pool
again and see if `fmdump -e` lists any errors afterwards.
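In other words, roughly (using the pool name from the output above):

# zpool import tank
# fmdump -e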
You use the
t the same time we built the server) and plan to replace that
> tonight. Does that seem like the correct course of action? Are there any
> steps I can take beforehand to zero in on the problem? Any words of
> encouragement or wisdom?
>
What does `iostat -En` say?
My s
On Mon, Mar 15, 2010 at 5:39 PM, Abdullah Al-Dahlawi wrote:
> Greeting ALL
>
>
> I understand that L2ARC is still under enhancement. Does anyone know if
> ZFS can be upgraded to include "Persistent L2ARC", i.e. L2ARC will not lose
> its contents after a system reboot?
>
There is a bug opened for
On Wed, Mar 17, 2010 at 9:34 AM, David Dyer-Bennet wrote:
> On 3/16/2010 23:21, Erik Trimble wrote:
>
>> On 3/16/2010 8:29 PM, David Dyer-Bennet wrote:
>>
>>> On 3/16/2010 17:45, Erik Trimble wrote:
>>>
David Dyer-Bennet wrote:
> On Tue, March 16, 2010 14:59, Erik Trimble wrote:
>>>
On Wed, Mar 17, 2010 at 6:43 AM, wensheng liu wrote:
> Hi all,
>
> How to reserve space on a ZFS filesystem? mkfile or dd will actually write
> data to the blocks, which is time consuming, while "mkfile -n" will not
> really hold the space.
> And zfs's set reservation only works on filesystems, not on fil
On Wed, Mar 17, 2010 at 11:23 AM, wrote:
>
>
> >IMHO, what matters is that pretty much everything from the disk controller
> >to the CPU and network interface is advertised in power-of-2 terms and disks
> >sit alone using power-of-10. And students are taught that computers work
> >with bits and
On Wed, Mar 17, 2010 at 7:09 PM, Bill Sommerfeld wrote:
> On 03/17/10 14:03, Ian Collins wrote:
>
>> I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100%
>> done, but not complete:
>>
>> scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go
>>
>
> Don't panic. If "zpo
On Thu, Mar 18, 2010 at 1:19 AM, Chris Paul wrote:
> OK I have a very large zfs snapshot I want to destroy. When I do this, the
> system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with
> 128GB of memory. Now this may be more of a function of the IO device, but
> let's say I do
On Fri, Mar 19, 2010 at 1:26 PM, Grant Lowe wrote:
> Hi all,
>
> I'm trying to delete a zpool and when I do, I get this error:
>
> # zpool destroy oradata_fs1
> cannot open 'oradata_fs1': I/O error
> #
>
> The pools I have on this box look like this:
>
> #zpool list
> NAME SIZE USED A
On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen wrote:
> We all know that data corruption may happen, even on the most reliable of
> hardware. That's why ZFS has pool scrubbing.
>
> Could we introduce a zpool option (as in zpool set <property>) for
> "scrub period", in "number of hours" (with 0 being no aut
On Tue, Mar 23, 2010 at 2:00 PM, Ray Van Dolson wrote:
> Kind of a newbie question here -- or I haven't been able to find great
> search terms for this...
>
> Does ZFS recognize zpool members based on drive serial number or some
> other unique, drive-associated ID? Or is it based off the drive's
On Sat, Mar 27, 2010 at 6:02 PM, Harry Putnam wrote:
> Bob Friesenhahn writes:
>
> > On Sat, 27 Mar 2010, Harry Putnam wrote:
> >
> >> What to do with a status report like the one included below?
> >>
> >> What does it mean to have an unrecoverable error but no data errors?
> >
> > I think that
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey wrote:
> > From the information I've been reading about the loss of a ZIL device,
> What the heck? Didn't I just answer that question?
> I know I said this is answered in ZFS Best Practices Guide.
>
> http://www.solarisinternals.com/wiki/index.php
On Fri, May 7, 2010 at 8:07 AM, Emily Grettel wrote:
> Hi,
>
> I've had my RAIDz volume working well on SNV_131 but it has come to my
> attention that there has been some read issues with the drives. Previously I
> thought this was a CIFS problem but I'm noticing that when transferring files
> or
On Thu, May 20, 2010 at 2:19 AM, Marc Bevand wrote:
> Deon Cui gmail.com> writes:
>>
>> So I had a bunch of them lying around. We've bought a 16x SAS hotswap
>> case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as
>> the mobo.
>>
>> In the two 16x PCI-E slots I've put in the 1068E c
On Wed, May 26, 2010 at 9:22 PM, Brandon High wrote:
> On Wed, May 26, 2010 at 4:27 PM, Giovanni Tirloni
> wrote:
>> SuperMicro X8DTi motherboard
>> SuperMicro SC846E1 chassis (3Gb/s backplane)
>> LSI 9211-4i (PCIex x4) connected to backplane with a SFF-8087 cable (4-la
On Thu, May 27, 2010 at 2:39 AM, Marc Bevand wrote:
> Hi,
>
> Brandon High freaks.com> writes:
>>
>> I only looked at the Megaraid that he mentioned, which has a PCIe
>> 1.0 4x interface, or 1000MB/s.
>
> You mean x8 interface (theoretically plugged into that x4 slot below...)
>
>> The board
On Tue, Jun 15, 2010 at 1:56 PM, Scott Squires wrote:
> Is ZFS dependent on the order of the drives? Will this cause any issue down
> the road? Thank you all;
No. In your case the logical device names changed, but ZFS still identified
the disks and ordered them correctly, as before.
--
Giovanni
I/O will hang for over 1 minute
at random under heavy load.
Swapping the 9211-4i for a MegaRAID ELP (mega_sas) improves
performance by 30-40% instantly and there are no hangs anymore so I'm
guessing it's something related to the mpt_sas driver.
I submitted bug #6963321 a few minute
g without any issues in this
particular setup.
This system will be available to me for quite some time, so if anyone
wants any kind of test run to understand what's happening, I would be
happy to provide the results.
--
Giovanni Tirloni
gtirl...@sysdroid.com
e mpt_sas driver.
>
> Wait. The mpt_sas driver by default uses scsi_vhci, and scsi_vhci by
> default does load-balance round-robin. Have you tried setting
> load-balance="none" in scsi_vhci.conf?
That didn't help.
--
Giovanni Tirloni
gtirl...@sysdroid.com
w.nexenta.com/corp/documentation/nexentastor-changelog
Is there a bug tracker where one can see all the bugs
(with details) that went into a release?
"Many bug fixes" is a bit too general.
--
Giovanni Tirloni
gtirl...@sysdroid.com
we've a new release.
--
Giovanni Tirloni
gtirl...@sysdroid.com
ng I/O down with it.
Try to check for high asvc_t with `iostat -XCn 1` and for errors in `iostat -En`.
Are there any timeouts or retries in /var/adm/messages?
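Something simple like this is usually enough to spot them:

# egrep -i 'timeout|retry|reset' /var/adm/messages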
--
Giovanni Tirloni
gtirl...@sysdroid.com
lege level) people available
> to support them.
Generic != black boxes. Quite the opposite.
Some companies are successfully doing the opposite of what you describe: they
are using standard parts and a competent staff that knows how to create
solutions out of them, without having to pay for GUI-powered systems
an
porting a fault.
Speaking of that, is there a place where one can see/change these thresholds?
--
Giovanni Tirloni
gtirl...@sysdroid.com
can't
> get zpool status to show my pool.
> vdev_path = /dev/dsk/c9t0d0s0
> vdev_devid = id1,s...@ahitachi_hds7225scsun250g_0719bn9e3k=vfa100r1dn9e3k/a
> parent_guid = 0xb89f3c5a72a22939
Does format(1M) show the devices where they once were?
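If format does show them, you could also check whether the ZFS labels on the
device are still readable (device path taken from your output above):

# zdb -l /dev/dsk/c9t0d0s0

It should print the vdev labels, including the pool and parent GUIDs.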
--
Giovanni
;t have any experience with Windows 7 to guarantee that
it hasn't messed with the disk contents.
--
Giovanni Tirloni
gtirl...@sysdroid.com
ased those bits to further foster external
collaboration. But now that's all history and discussing how
things could have been done won't change anything.
I hope that if we want to be able to move OpenSolaris to the next
level, we ca
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling
wrote:
> Giovanni Tirloni wrote:
>
>> On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin wrote:
>> > IMHO it's important we don't get stuck running Nexenta in the same
>> > spot we're now stuck wit
ersion 134.
Have you enabled compression or deduplication?
Check the disks with `iostat -XCn 1` (look for high asvc_t times) and
`iostat -En` (hard and soft errors).
--
Giovanni Tirloni
gtirl...@sysdroid.com
share which implementations (OS, switch) you have tested and
how the testing was done? I would like to try to simulate these issues.
--
Giovanni Tirloni
gtirl...@sysdroid.com
ndow (where a vdev is degraded).
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Fri, Jul 23, 2010 at 11:59 AM, Richard Elling wrote:
> On Jul 23, 2010, at 2:31 AM, Giovanni Tirloni wrote:
>
>> Hello,
>>
>> We've seen some resilvers on idle servers that are taking ages. Is it
>> possible to speed up resilver operations somehow?
>>
On Fri, Jul 23, 2010 at 12:50 PM, Bill Sommerfeld
wrote:
> On 07/23/10 02:31, Giovanni Tirloni wrote:
>>
>> We've seen some resilvers on idle servers that are taking ages. Is it
>> possible to speed up resilver operations somehow?
>>
>> Eg. iostat sh
e
of the development builds.
If you run a `pkg image-update` right away, the latest bits you'll get
are from build 134 which people have reported works OK.
If you want to try something in between b111 and b134, see the
following instructions:
http://blogs.sun.com/observatory/entry/updating_t
s. Was the autoreplace code
supposed to replace the faulty disk and release the spare when the
resilver is done?
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
e doesn't detach after the resilver is complete, then just
> detach it manually.
Yes, that's working as expected (spare detaches after resilver).
--
Giovanni Tirloni
gtirl...@sysdroid.com
try the same thing with c1t3d0 and c1t3d0/o
> swapped around.
Recently fixed in b147:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=67825
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Sat, Jan 2, 2010 at 4:07 PM, R.G. Keen wrote:
> OK. From the above suppositions, if we had a desktop (infinitely
> long retry on fail) disk and a soft-fail error in a sector, then the
> disk would effectively hang each time the sector was accessed.
> This would lead to
> (1) ZFS->SD-> disk read
On Mon, Jan 4, 2010 at 3:51 PM, Joerg Schilling
wrote:
> Giovanni Tirloni wrote:
>
>> We use Seagate Barracuda ES.2 1TB disks and every time the OS starts
>> to bang on a region of the disk with bad blocks (which essentially
>> degrades the performance of the whole pool
On Tue, Feb 2, 2010 at 1:58 PM, Tim Cook wrote:
>
> It's called spreading the costs around. Would you really rather pay 10x
> the price on everything else besides the drives? This is essentially Sun's
> way of tiered pricing. Rather than charge you a software fee based on how
> much storage yo
On Tue, Feb 2, 2010 at 9:07 PM, Marc Nicholas wrote:
> I believe magical unicorn controllers and drives are both bug-free and
> 100% spec compliant. The leprechauns sell them if you're trying to
> find them ;)
>
Well, "perfect" and "bug free" sure don't exist in our industry.
The problem is tha
On Tue, Feb 9, 2010 at 2:04 AM, Thomas Burgess wrote:
>
> On Mon, Feb 08, 2010 at 09:33:12PM -0500, Thomas Burgess wrote:
>> > This is a far cry from an apples to apples comparison though.
>>
>> As much as I'm no fan of Apple, it's a pity they dropped ZFS because
>> that would have brought consid
backing up
> the same amount of data, but it now occupies so much more on disk.
> That of course means we can't keep nearly as many snapshots, and that
> makes us all very nervous.
>
> Any ideas?
>
Is it possible that your users are now deleting everything before s
cle and then decide what your
company is going to do. You can always install Solaris if that makes sense
for you.
--
Giovanni Tirloni
sysdroid.com
noted, it doesn't seem possible.
You could create a new zpool with this larger LUN and use zfs send/receive
to migrate your data.
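A rough sketch of that migration, with pool and snapshot names made up for
the example:

# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F -d newpool

After verifying the data on the new pool, the old one can be destroyed and
the new pool renamed via export/import if needed.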
--
Giovanni Tirloni
sysdroid.com
        logs         ONLINE       0     0     0
          c7t1d0     ONLINE       0     0     0
          c7t2d0     ONLINE       0     0     0
        cache
          c7t22d0    ONLINE       0     0     0
        spares
          c7t3d0     AVAIL
Any ideas?
Thank you,
--
Giov
harm the
> reliability of the drive and should I just use copies=2?
>
ZFS will honor copies=2 and keep two physical copies, even with
deduplication enabled.
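For example (the dataset name is just illustrative); note that copies only
applies to data written after the property is set:

# zfs set copies=2 tank/data
# zfs get copies,dedup tank/data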
--
Giovanni Tirloni
sysdroid.com
stablish that device and import the pool again.
Right now ZFS cannot import a pool in that state but it's being worked on,
according to Eric Schrock on Feb 6th.
--
Giovanni Tirloni
sysdroid.com
em striping the mirrors
together.
AFAIK, RAID0+1 is not supported since a vdev can only be of type disk,
mirror, or raidz, and all vdevs are striped together. Someone more
experienced in ZFS can probably confirm or deny this.
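For reference, the supported RAID1+0-style layout is built by listing
several mirror vdevs, which ZFS then stripes across (device names invented
for the example):

# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

There is no syntax to express the reverse (a mirror of two stripes).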
--
Giovanni Tirloni
sysdroid.com
little over 1 hour to resilver a 32GB SSD in a
mirror. I've always wondered what exactly it was doing since it was supposed
to be 30 seconds worth of data. It also generates lots of checksum errors.
--
Giovanni Tirloni
gtirl...@sysdroid.com
errors. If not, you
may have to detach `c4t0d0s0/o`.
I believe it's a bug that was fixed in recent builds.
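The detach itself would look something like this (the pool name is a
placeholder; the /o device name comes from your status output):

# zpool detach rpool c4t0d0s0/o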
--
Giovanni Tirloni
gtirl...@sysdroid.com
on ECC systems on a monthly basis. It's
not a question of if they'll happen but of how often.
--
Giovanni Tirloni
gtirl...@sysdroid.com
re not
supported under Solaris and refuses to investigate it.
--
Giovanni Tirloni
gtirl...@sysdroid.com
to be implicating the card only when it was connected to the
> backplane.
>
>
I only tested the LSI 2004/2008 HBAs connected to the backplane (both 3Gb/s
and 6Gb/s).
The MegaRAID 8888ELP, when connected to the same backplane, doesn't exhibit
that behavior.
--
Giovanni Tirloni
gtirl...
0 ONLINE 0 0 0
> c8t5000C50019C1D460d0  ONLINE       0     0     0
> 4.06G resilvered
>
>
> Any idea for this type of situation?
>
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970
--
Giovanni Tirloni
gtirl...@sysdroid.co
0.0 4426.8 0.0 0.0 0.7 0.1 4.0 2 30 c4t8d0
195.0 0.0 4430.3 0.0 0.0 0.7 0.1 3.7 2 32 c4t10d0
^C
Anyone else with over 600 hours of resilver time? :-)
Thank you,
Giovanni Tirloni (gtirl...@sysdroid.com)
oned VMs? Sparse files maybe?
Thanks,
--
Giovanni Tirloni
On Wed, May 4, 2011 at 9:04 PM, Brandon High wrote:
> On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni
> wrote:
> > The problem we've started seeing is that a zfs send -i is taking hours
> to
> > send a very small amount of data (eg. 20GB in 6 hours) while a zf
hanges to pointing out weaknesses in ZFS,
we start seeing "that is not a problem" comments. With the 7000s appliance
I've heard that the 900hr estimated resilver time was "normal" and
"everything is working as expected". Can't help but think there is some
walled ga
performance
requirements being different), ZFS will keep them separated. And
again, you will create filesystems/datasets from each one
independently.
http://download.oracle.com/docs/cd/E19963-01/html/821-1448/index.html
http://download.oracle.com/docs/cd/E18752_01/html/819-5461/index.html
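A minimal sketch of that layout, with pool, device and dataset names
invented for the example:

# zpool create fastpool mirror c1t0d0 c1t1d0
# zpool create bulkpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
# zfs create fastpool/db
# zfs create bulkpool/archive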
--
Gi
The system shouldn't panic
just because it can't import a pool.
Try booting with the kernel debugger on (add "-kv" to the grub kernel
line). Take a look at dumpadm.
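For reference, the menu.lst kernel line typically looks roughly like the one
below; appending -kv loads kmdb and makes the boot verbose (the -B arguments
may differ on your setup):

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -kv

# dumpadm

dumpadm should show a configured dump device and savecore directory, so a
crash dump is captured if it panics again.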
--
Giovanni Tirloni
ctices Guide (
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide).
You're probably looking for maximum performance with availability, so that
narrows it down to a mirrored pool, unless your PostgreSQL workload is so
specific that raidz would be a good fit, but beware of the perfor