Hi guys
I am new to the OpenSolaris and ZFS world. I have 6x 2TB SATA HDDs on my system. I
picked a single 2TB disk and installed OpenSolaris (so the root pool was created
by the installer).
I then went ahead and created a new pool "gpool" with raidz (the kind of redundancy
I want). Here's the output:
@se
Thank you. I was not aware that root pools could not be moved.
But here's the kicker: what if I have a single drive for the root pool and it's
failing... If I connect a new HDD to replace the boot drive that's dying, does ZFS
have no way of migrating to the new drive?
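(For what it's worth, the usual way to handle this is to attach the new disk to
the root pool as a mirror, let it resilver, install the boot blocks, and then
detach the dying disk. A rough, untested sketch -- the device names c0t0d0s0
(old) and c0t1d0s0 (new) are made up:

  # attach the new disk as a mirror of the failing root disk
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # wait for the resilver to complete
  zpool status rpool
  # install the boot loader on the new disk (x86)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  # once resilvered, drop the old disk from the mirror
  zpool detach rpool c0t0d0s0
)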
Thanks
Hi guys,
I have a quick question. I am playing around with ZFS and here's what I did:
I created a storage pool with several drives and then unplugged 3 out of the 5
drives from the array. Currently:
  NAME    STATE     READ WRITE CKSUM
  gpool   UNAVAIL      0     0     0  insufficient replicas
Hi,
Were you ever able to solve this problem on your AOC-SAT2-MV8 card? I am in
need of purchasing it to add more drives to my server.
Thanks
Giovanni
Hi Guys
I am having trouble installing Opensolaris 2009.06 into my Biostar Tpower I45
motherboard, approved on BigAdmin HCL here:
http://www.sun.com/bigadmin/hcl/data/systems/details/26409.html -- why is it
not working?
My setup:
3x 1TB hard-drives SATA
1x 500GB hard-drive (I have only left
Hi guys
I wanted to ask how I could set up an iSCSI device to be shared by 2 computers
concurrently. By that I mean sharing files as if it were an NFS share, but using
iSCSI instead.
I tried it and set up iSCSI on both computers and was able to see my files (I had
formatted it as NTFS before) from my laptop
Thanks guys - I will take a look at those clustered file systems.
My goal is not to stick with Windows. I would like to have a storage pool for
XenServer (free) so that I can have guests, but using a storage server
(OpenSolaris + ZFS) as the iSCSI storage pool, as sketched below.
Any suggestions for the added re
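(A minimal COMSTAR sketch for that XenServer storage pool could look roughly like
the following. The pool, zvol name and size are made up, and the earlier caveat
still applies: a plain iSCSI LUN must only be written by one host at a time
unless a cluster-aware filesystem sits on top of it.

  # create a zvol to back the iSCSI LUN
  zfs create -V 200G gpool/xen-sr01
  # register the zvol as a SCSI logical unit
  sbdadm create-lu /dev/zvol/rdsk/gpool/xen-sr01
  # expose the LU to initiators (use host/target groups to restrict access)
  stmfadm add-view <GUID printed by sbdadm>
  # enable the iSCSI target service and create a target
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target
)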
report exactly the same usage of
> 3.7TByte.
>
Please check the ZFS FAQ:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
There is a question regarding the difference between du, df and zfs list.
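(A quick way to see the numbers that FAQ entry talks about, assuming a dataset
named tank/data mounted at /tank/data:

  zfs list -o name,used,referenced,usedbysnapshots,usedbychildren tank/data
  du -sh /tank/data
  df -h /tank/data
)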
--
Giovanni Tirloni
sysdroid.com
_
on then.
Every other vendor out there is releasing products with deduplication.
Personally, I would just wait 2-3 releases before using it in a black box
like the 7000s.
The hardware, on the other hand, is incredible in terms of resilience and
performance, no doubt. Which makes me think t
ribes while destroying datasets
recursively (>600GB and with 7 snapshots). It seems that close to the end
the server stalls for 10-15 minutes and NFS activity stops. For small
datasets/snapshots that doesn't happen or is harder to notice.
Does ZFS h
o Abdullah,
I don't think I understand. How are you seeing files being created on the
SSD disk ?
You can check device usage with `zpool iostat -v hdd`. Please also send the
output of `zpool status hdd`.
Thank you,
--
Giovanni Tirloni
sysdroid.com
rite
data there; otherwise there is nothing to read back later when a read()
misses the ARC cache and checks the L2ARC.
I don't know what your OLTP benchmark does but my advice is to check if it's
really writing files in the 'hdd' zpool mount point.
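(A quick sanity check, assuming the pool is mounted at /hdd; adjust the paths to
your setup:

  # confirm where the dataset is mounted
  zfs get mountpoint hdd
  # watch per-device activity while the benchmark runs
  zpool iostat -v hdd 5
  # confirm the benchmark's files actually live in the pool
  df -h /path/to/benchmark/files
)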
--
Giovanni Tirloni
sy
toreplace tank
NAME  PROPERTY     VALUE  SOURCE
tank  autoreplace  on     local

# fmdump -e -t 08Mar2010
TIME                 CLASS
As you can see, no error report was posted. You can try to import the pool
again and see if `fmdump -e` lists any errors afterwards.
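(Something along these lines, with the pool name adjusted:

  zpool import tank
  fmdump -e -t 08Mar2010
)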
You use the
t the same time we built the server) and plan to replace that
> tonight. Does that seem like the correct course of action? Are there any
> steps I can take beforehand to zero in on the problem? Any words of
> encouragement or wisdom?
>
What does `iostat -En` say ?
My s
>
There is a bug opened for that but it doesn't seem to be implemented yet.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6662467
--
Giovanni
e 1401, had "8K" of core, and that was 8,000 locations, not
> 8,192. This was right on 40 years ago (fall of 1969 when I started working
> on the 1401). Yes, neither was brand new, but IBM was still leasing them to
> customers (it came in configurations of 4k, 8k, 12k, and I th
file and read its metadata before enforcing the filesystem
reservation. I'm not sure it's doable.
--
Giovanni
lash/..: power of 10 (bytes)
>Bus speed: power of 10
>
> Main memory is the odd one out.
>
My bad for generalizing that information.
Perhaps the software stack dealing with disks should be changed to use
powers of 10. Unlikely, though.
--
Giovanni
if resilvers were mentioned ?
BTW, since this bug only exists in the bug database, does it mean it was
filed by a Sun engineer or a customer ? What's the relationship between
that and the defect database ? I'm still trying to understand the flow of
information here, since both databases seem
s to sleep if the cpu
> idle time falls under a certain percentage.
>
What build of OpenSolaris are you using ?
Is it nearly freezing during the whole process or just at the end ?
There was another thread where a similar issue was discussed a week ago.
--
Giovanni
end the result of zpool status.
Your devices are probably all offline but that shouldn't stop you from
removing it, at least not on OpenSolaris.
--
Giovanni
>
That would add unnecessary code to the ZFS layer for something that cron can
handle in one line.
Someone could hack zfs.c to automatically handle editing the crontab but I
don't know if it's worth the effort.
Are you worried that cron will fail or is it just an aesthetic requir
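(As an example of the cron one-liner; the schedule and dataset name are made up,
and since cron treats % specially it has to be escaped:

  # crontab entry: recursive daily snapshot of tank at 03:15
  15 3 * * * /usr/sbin/zfs snapshot -r tank@daily-`date +\%Y\%m\%d`
)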
of time? What about for the root pool?
>
No need. The same goes for the rpool; you only need to make sure your system
will boot from the correct disk.
--
Giovanni
ntation somewhere that tells how to read these
> status reports?
>
Your pool is not degraded, so I don't think anything will show up in fmdump.
But check 'fmdump -eV' to see the actual error reports that were created. You
might find something there.
--
Giovanni
've to import your pool later
(server rebooted, etc.)... then you lost your pool (prior to version 19).
Right ?
This happened on OpenSolaris 2009.06.
--
Giovanni
ng high asvc_t times but it turned out to be a
firmware issue in the disk controller. It was very erratic (1-2 drives out
of 24 would show that).
If you look in the archives, people have sent a few averaged I/O performance
numbers that you could compare to your workload.
--
Giovanni
them back as "configured". Any help is appreciated. Thanks
On Fri, May 7, 2010 at 9:45 PM, Ian Collins wrote:
> On 05/ 8/10 04:38 PM, Giovanni wrote:
>
>> Hi guys,
>>
>> I have a quick question, I am playing around with ZFS and here's what I
>> did
at 800MB/s with the 18 disks in
RAID-0. Same performance was achieved swapping the 9211-4i for a
MegaRAID ELP.
I'm guessing the backplane and cable are the bottleneck here.
Any comments ?
--
Giovanni
On Wed, May 26, 2010 at 9:22 PM, Brandon High wrote:
> On Wed, May 26, 2010 at 4:27 PM, Giovanni Tirloni
> wrote:
>> SuperMicro X8DTi motherboard
>> SuperMicro SC846E1 chassis (3Gb/s backplane)
>> LSI 9211-4i (PCIex x4) connected to backplane with a SFF-8087 cable (4-la
irectly to the Intel 5520 chipset.
I totally ignored the differences between PCIe 1.0 and 2.0. My fault.
>
> If Giovanni had put the Megaraid in this slot, he would have seen
> an even lower throughput, around 600MB/s:
>
> This slot is provided by the ICH10R which as you can s
On Tue, Jun 15, 2010 at 1:56 PM, Scott Squires wrote:
> Is ZFS dependent on the order of the drives? Will this cause any issue down
> the road? Thank you all;
No. In your case the logical device names changed, but ZFS identified the
disks correctly and kept them in the same order as before.
--
Gi
I/O will hang for over 1 minute
at random under heavy load.
Swapping the 9211-4i for a MegaRAID ELP (mega_sas) improves
performance by 30-40% instantly and there are no hangs anymore so I'm
guessing it's something related to the mpt_sas driver.
I submitted bug #6963321 a few minute
g without any issues in this
particular setup.
This system will be available to me for quite some time, so if anyone
wants any kind of test run to understand what's happening, I would be
happy to provide the results.
--
Giovanni Tirloni
gtirl...@sysdroid.com
e mpt_sas driver.
>
> Wait. The mpt_sas driver by default uses scsi_vhci, and scsi_vhci by
> default does load-balance round-robin. Have you tried setting
> load-balance="none" in scsi_vhci.conf?
That didn't help.
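(For reference, the setting tested was along these lines in
/kernel/drv/scsi_vhci.conf, followed by a reboot:

  # /kernel/drv/scsi_vhci.conf
  load-balance="none";
)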
--
Giovanni Tirloni
gtirl...@sysdroid.com
w.nexenta.com/corp/documentation/nexentastor-changelog
Is there a bug tracker where one can objectively see all the bugs
(with details) that went into a release ?
"Many bug fixes" is a bit too general.
--
Giovanni Tirloni
gtirl...@sysdroid.com
we've a new release.
--
Giovanni Tirloni
gtirl...@sysdroid.com
ng I/O down with it.
Try checking for high asvc_t with `iostat -XCn 1` and for errors with `iostat -En`.
Any timeouts or retries in /var/adm/messages ?
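(Something like this is usually enough to spot them:

  iostat -XCn 1
  iostat -En | grep -i error
  egrep -i 'timeout|retry|reset' /var/adm/messages
)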
--
Giovanni Tirloni
gtirl...@sysdroid.com
lege level) people available
> to support them.
Generic != black boxes. Quite the opposite.
Some companies are successfully doing the opposite of what you describe: they
are using standard parts and competent staff who know how to create
solutions out of them, without having to pay for GUI-powered systems
an
porting a fault.
Speaking of that, is there a place where one can see/change these thresholds ?
--
Giovanni Tirloni
gtirl...@sysdroid.com
can't
> get zpool status to show my pool.
> vdev_path = /dev/dsk/c9t0d0s0
> vdev_devid = id1,s...@ahitachi_hds7225scsun250g_0719bn9e3k=vfa100r1dn9e3k/a
> parent_guid = 0xb89f3c5a72a22939
Does format(1M) show the devices where they once were ?
--
Giovanni
;t have any experience with Windows 7 to guarantee that
it hasn't messed with the disk contents.
--
Giovanni Tirloni
gtirl...@sysdroid.com
ased those bits to further foster external
collaboration. But now that's all history, and discussing how
things could have been done won't change anything.
I hope that if we want to be able to move OpenSolaris to the next
level, we ca
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling
wrote:
> Giovanni Tirloni wrote:
>
>> On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin wrote:
>> > IMHO it's important we don't get stuck running Nexenta in the same
>> > spot we're now stuck wit
ersion 134.
Have you enabled compression or deduplication ?
Check the disks with `iostat -XCn 1` (look for high asvc_t times) and
`iostat -En` (hard and soft errors).
--
Giovanni Tirloni
gtirl...@sysdroid.com
share what implementations (OS, switch) you have tested and
how it was done ? I would like to try to reproduce these issues.
--
Giovanni Tirloni
gtirl...@sysdroid.com
ndow (where a vdev is degraded).
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Fri, Jul 23, 2010 at 11:59 AM, Richard Elling wrote:
> On Jul 23, 2010, at 2:31 AM, Giovanni Tirloni wrote:
>
>> Hello,
>>
>> We've seen some resilvers on idle servers that are taking ages. Is it
>> possible to speed up resilver operations somehow?
>>
On Fri, Jul 23, 2010 at 12:50 PM, Bill Sommerfeld
wrote:
> On 07/23/10 02:31, Giovanni Tirloni wrote:
>>
>> We've seen some resilvers on idle servers that are taking ages. Is it
>> possible to speed up resilver operations somehow?
>>
>> Eg. iostat sh
e
of the development builds.
If you run a `pkg image-update` right away, the latest bits you'll get
are from build 134 which people have reported works OK.
If you want to try something in between b111 and b134, see the
following instructions:
http://blogs.sun.com/observatory/entry/updating_t
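(Roughly, switching to the dev repository and updating looks like this; the blog
post above has the full steps for picking a specific build in between:

  pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  pkg image-update -v
)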
s. Was the autoreplace code
supposed to replace the faulty disk and release the spare when
resilver is done ?
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Wed, Aug 11, 2010 at 4:06 PM, Cindy Swearingen
wrote:
> Hi Giovanni,
>
> The spare behavior and the autoreplace property behavior are separate
> but they should work pretty well in recent builds.
>
> You should not need to perform a zpool replace operation if the
> autore
try the same thing with c1t3d0 and c1t3d0/o
> swapped around.
Recently fixed in b147:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=67825
--
Giovanni Tirloni
gtirl...@sysdroid.com
've seen it last for a whole hour while the drive is
slowly dying. Off-lining the faulty disk fixes it.
I'm trying to find out how the disks' firmware is programmed
(timeouts, retries, etc) but so far nothing in the official docs. In
this case the disk
On Mon, Jan 4, 2010 at 3:51 PM, Joerg Schilling
wrote:
> Giovanni Tirloni wrote:
>
>> We use Seagate Barracuda ES.2 1TB disks and every time the OS starts
>> to bang on a region of the disk with bad blocks (which essentially
>> degrades the performance of the whole pool
provide a new firmware for tests ? Do those
bugs get fixed in other drives that Seagate/WD sells ?
For me it's just hard to objectively point out the differences between
Seagate's enterprise drives and the ones provided by Sun, except that the
latter were tested more.
1
nvenient how the industry is organized. That is, for disk and
storage vendors. Not customers.
--
Giovanni
was dead on. You don't have to agree with a vendor's
> practices to understand them. If you have a more fitting analogy, then by
> all means let's hear it.
>
Dell joins the party:
http://lists.us.dell.com/pipermail/linux-poweredge/2010-February/041335.html
--
Giovanni
backing up
> the same amount of data, but it now occupies so much more on disk.
> That of course means we can't keep nearly as many snapshots, and that
> makes us all very nervous.
>
> Any ideas?
>
Is it possible that your users are now deleting everything before s
cle and then decide what your
company is going to do. You can always install Solaris if that makes sense
for you.
--
Giovanni Tirloni
sysdroid.com
noted, it doesn't seem possible.
You could create a new zpool with this larger LUN and use zfs send/receive
to migrate your data.
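(A rough sketch of that migration, assuming the new pool is called newpool, the
old one oldpool, and the new LUN is c5t0d0:

  zpool create newpool c5t0d0
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -Fd newpool
)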
--
Giovanni Tirloni
sysdroid.com
        logs        ONLINE       0     0     0
          c7t1d0    ONLINE       0     0     0
          c7t2d0    ONLINE       0     0     0
        cache
          c7t22d0   ONLINE       0     0     0
        spares
          c7t3d0    AVAIL
Any ideas?
Thank you,
--
Giov
harm the
> reliability of the drive and should I just use copies=2?
>
ZFS will honor copies=2 and keep two physical copies, even with
deduplication enabled.
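(For example, with a hypothetical dataset:

  zfs set copies=2 tank/important
  zfs get copies,dedup tank/important
)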
--
Giovanni Tirloni
sysdroid.com
stablish that device and import the pool again.
Right now ZFS cannot import a pool in that state but it's being worked on,
according to Eric Schrock on Feb 6th.
--
Giovanni Tirloni
sysdroid.com
em striping the mirrors
together.
AFAIK, RAID0+1 is not supported since a vdev can only be of type disk,
mirror or raidz, and all vdevs are striped together. Someone more
experienced in ZFS can probably confirm or deny this.
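(What ZFS does give you is a stripe of mirrors, i.e. RAID1+0. For example, with
made-up device names:

  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  # adding another mirrored pair later creates a new top-level vdev in the stripe
  zpool add tank mirror c1t4d0 c1t5d0
)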
--
Giovanni Tirloni
sysdroid.com
little over 1 hour to resilver a 32GB SSD in a
mirror. I've always wondered what exactly it was doing since it was supposed
to be 30 seconds worth of data. It also generates lots of checksum errors.
--
Giovanni Tirloni
gtirl...@sysdroid.com
errors. If not, you
may have to detach `c4t0d0s0/o`.
I believe it's a bug that was fixed in recent builds.
--
Giovanni Tirloni
gtirl...@sysdroid.com
on ECC systems on a monthly basis. It's
not a question of whether they'll happen but how often.
--
Giovanni Tirloni
gtirl...@sysdroid.com
re not
supported under Solaris and refuses to investigate it.
--
Giovanni Tirloni
gtirl...@sysdroid.com
to be implicating the card only when it was connected to the
> backplane.
>
>
I only tested the LSI 2004/2008 HBAs connected to the backplane (both 3Gb/s
and 6Gb/s).
The MegaRAID 8888ELP, when connected to the same backplane, doesn't exhibit
that behavior.
--
Giovanni Tirloni
gtirl...
0 ONLINE 0 0 0
> c8t5000C50019C1D460d0  ONLINE       0     0     0
>     4.06G resilvered
>
>
> Any idea for this type of situation?
>
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970
--
Giovanni Tirloni
gtirl...@sysdroid.co
0.0 4426.8 0.0 0.0 0.7 0.1 4.0 2 30 c4t8d0
195.0 0.0 4430.3 0.0 0.0 0.7 0.1 3.7 2 32 c4t10d0
^C
Anyone else with over 600 hours of resilver time? :-)
Thank you,
Giovanni Tirloni (gtirl...@sysdroid.com)
oned VMs? Sparse files maybe?
Thanks,
--
Giovanni Tirloni
On Wed, May 4, 2011 at 9:04 PM, Brandon High wrote:
> On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni
> wrote:
> > The problem we've started seeing is that a zfs send -i is taking hours
> to
> > send a very small amount of data (eg. 20GB in 6 hours) while a zf
hanges to point out weaknesses in ZFS,
we start seeing "that is not a problem" comments. With the 7000s appliance
I've heard that the 900hr estimated resilver time was "normal" and
"everything is working as expected". I can't help but think there is some
walled ga
performance
requirements being different), ZFS will keep them separated. And
again, you will create filesystems/datasets from each one
independently.
http://download.oracle.com/docs/cd/E19963-01/html/821-1448/index.html
http://download.oracle.com/docs/cd/E18752_01/html/819-5461/index.html
--
Gi
The system shouldn't panic
just because it can't import a pool.
Try booting with the kernel debugger on (add "-kv" to the grub kernel
line). Take a look at dumpadm.
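(Roughly: in the GRUB menu, edit the kernel line to add the flags, then check
the dump configuration once the system is up. The kernel line below is the usual
OpenSolaris 2009.06 default:

  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -kv

  # verify a dump device is configured so a panic produces a crash dump
  dumpadm
)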
--
Giovanni Tirloni
ctices Guide (
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide).
You're probably looking for maximum performance with availability, so that
narrows it down to a mirrored pool, unless your PostgreSQL workload is so
specific that raidz would fit; but beware of the perfor