About two years ago I was running snv_55b with a raidz on top of 5 500GB SATA
drives. After 10 months I ran out of space and added a mirror of 2 250GB
drives to my pool with "zpool add". No problem. I scrubbed it weekly and only
ever saw 1 CKSUM error (which ZFS self-healed automatically, of course).
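For anyone finding this in the archives: adding a mirrored pair to an existing
pool like that is a one-liner (the pool and device names below are placeholders):

  # zpool add tank mirror c2t0d0 c2t1d0    # append a new mirror vdev to pool 'tank'
  # zpool status tank                      # confirm the new top-level vdev is online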
Well, my colleague and I recently got a basic VMware ESX cluster working
with the Solaris iSCSI target in the lab at work, so I know it does work.
We used ESX 3.5i on two Dell Precision 390 workstations,
booted from USB memory sticks.
We used snv_97, and no special tweaks were required.
We used
I recovered the pool by doing an export, an import and a scrub.
Apparently you can export a pool with a FAILED device, and the import will restore
the labels from their backup copies. Data errors are still there after the import,
so you need to scrub the pool. After all that the filesystem is back with no
errors or problems.
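Assuming the pool is called 'tank' (just a placeholder), that recovery sequence
would look roughly like this:

  # zpool export tank       # release the pool cleanly
  # zpool import tank       # per the report above, import restores the labels from backup copies
  # zpool scrub tank        # re-read all data and repair/clear the remaining errors
  # zpool status -v tank    # check that the error counts are back to zero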
Hello Roch!
> Leave the default recordsize. With a 128K recordsize, files smaller than
> 128K are stored as a single record tightly fitted to the smallest possible
> number of disk sectors. Reads and writes are then managed with fewer ops.
For writes ZFS is dynamic in this way, but what about reads?
If i
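To see or change the property Roch is describing (the dataset name is only an
example), keep in mind that recordsize applies to files written after the change:

  # zfs get recordsize tank/data       # 128K by default
  # zfs set recordsize=8k tank/data    # only newly written files use the new value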
Hello all,
Did you do an install onto the USB stick, or did you use the Distribution
Constructor (DC)?
Leal.
We're using a rather large (3.8TB) ZFS volume for our mailstores on a
JMS setup. Does anybody have any tips for tuning ZFS for JMS? I'm
looking for even the most obvious tips, as I am a bit of a novice. Thanks,
Adam
On 10/21/08 04:52, Paul B. Henson wrote:
On Mon, 20 Oct 2008, Pramod Batni wrote:
Yes, the implementation of the above ioctl walks the list of mounted
filesystems, 'vfslist' [in this case it walks 5000 nodes of a linked list
before the ioctl returns]. This in-kernel traversal of the filesyst
Hi, my present RaidZ pool is now almost full, so I've recently bought an Adaptec
3805 to start building a second one. Since these are so expensive I don't have
enough money left over to buy 8x 1TB disks, but I can still buy 3, and I have 5x
0.5TB disks lying around.
Is it possible to build a Raid
On Tue, 21 Oct 2008, Håvard Krüger wrote:
Is it possible to build a RaidZ with 3x 1TB disks and 5x 0.5TB
disks, and then swap out the 0.5TB disks as time goes by? Is there
documentation or a wiki on doing this?
Yes, you can build a raidz vdev with all of these drives, but only
0.5TB will be use
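A sketch of what that would look like, with invented device names; every member
of the raidz counts as 0.5TB until all the small disks have been swapped out
with 'zpool replace':

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
  # zpool replace tank c1t3d0 c2t0d0    # replace a 0.5TB member with a 1TB disk, let it resilver

Depending on the build, an export/import of the pool may still be needed before
the extra capacity shows up once the last small disk has been replaced.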
On Mon, Oct 20, 2008 at 04:57:22PM -0500, Nicolas Williams wrote:
> I've a report that the mismatch between SQLite3's default block size and
> ZFS' causes some performance problems for Thunderbird users.
>
> It'd be great if there was an API by which SQLite3 could set its block
> size to match the
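Until such an API exists, the sizes can be lined up by hand. A rough sketch
(file and dataset names are hypothetical); note that SQLite only applies a new
page size when the database is created or rebuilt with VACUUM:

  $ sqlite3 places.sqlite 'PRAGMA page_size;'               # show the database's current page size
  $ sqlite3 places.sqlite 'PRAGMA page_size=8192; VACUUM;'  # rebuild the database with 8K pages
  # zfs set recordsize=8k tank/home                         # match the dataset recordsize (new writes only)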
I've confirmed the problem with automatic resilvers as well. I will see about
submitting a bug.
Looks like there is a closed bug for this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
It's been closed as 'not reproducible', but I can reproduce it consistently on
Solaris 10 5/08. How can I re-open this bug?
I'm using a pair of Supermicro AOC-SAT2-MV8 on a fully patched install of
Sola
I have a very similar problem with snv_99 and Virtual Iron
(http://www.opensolaris.org/jive/thread.jspa?threadID=79831&tstart=0).
I am using an IBM x3650 server with 6 SAS drives. What we have in common is the
Broadcom network cards (bnx driver). From previous experience I know these
cards had a dr
Blake Irvin wrote:
> Looks like there is a closed bug for this:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
>
> It's been closed as 'not reproducible', but I can reproduce consistently on
> Sol 10 5/08. How can I re-open this bug?
Have you tried to reproduce it with Nevada build
Please pardon the off-topic question, but is there a Solaris backport of the fix?
On Tue, Oct 21, 2008 at 2:15 PM, Victor Latushkin
<[EMAIL PROTECTED]> wrote:
> Blake Irvin wrote:
>> Looks like there is a closed bug for this:
>>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
>>
>> It's be
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
> I've a report that the mismatch between SQLite3's default block size and
> ZFS' causes some performance problems for Thunderbird users.
I was seeing a severe performance problem with sqlite3 databases as used
by evolution (not thunderbir
On Tue, Oct 21, 2008 at 03:43:08PM -0400, Bill Sommerfeld wrote:
> On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
> > I've a report that the mismatch between SQLite3's default block size and
> > ZFS' causes some performance problems for Thunderbird users.
>
> I was seeing a severe perf
The PowerEdge 1850 has Intel EtherExpress Pro/1000 internal cards in it.
However, after some new updates, even the Microsoft initiator hung writing a 1.5
gigabyte file to the iSCSI target on the OpenSolaris box.
I've installed the Linux iscsitarget on the same box and will reattempt the iSCSI
targets
One more update:
The common hardware between all my machines so far has been the PERC (PowerEdge
RAID Controller), also known as the LSI MegaRAID controller.
The 1850 has a PERC 4d/i; the 1900 has a PERC 5/i.
I'll be testing the iSCSI target with a SATA controller to test my hypothesis.
Hi tano,
I hope you can try the 'iscsisnoop.d' script, so
we can see if your problem is the same as what Eugene is seeing.
Can you please also check the contents of the file:
/var/svc/log/system-iscsitgt\:default.log
... just to make sure that the iSCSI target is not core dumping and restarting.
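Something along these lines should show whether the target is restarting; run
the DTrace script while reproducing the hang (adjust the path to wherever
iscsisnoop.d is saved):

  # svcs -p svc:/system/iscsitgt:default               # watch whether the daemon's PID keeps changing
  # tail -30 /var/svc/log/system-iscsitgt:default.log  # look for core dump / restart messages
  # dtrace -s ./iscsisnoop.d                           # trace iSCSI activity during the hang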
Hello Adam,
Tuesday, October 21, 2008, 2:00:46 PM, you wrote:
ANC> We're using a rather large (3.8TB) ZFS volume for our mailstores on a
ANC> JMS setup. Does anybody have any tips for tuning ZFS for JMS? I'm
ANC> looking for even the most obvious tips, as I am a bit of a novice. Thanks,
Well, it
Hello Nicolas,
Monday, October 20, 2008, 10:57:22 PM, you wrote:
NW> I've a report that the mismatch between SQLite3's default block size and
NW> ZFS' causes some performance problems for Thunderbird users.
NW> It'd be great if there was an API by which SQLite3 could set its block
NW> size to ma
Hello Marc,
Tuesday, October 21, 2008, 8:14:17 AM, you wrote:
MB> About 2 years ago I used to run snv_55b with a raidz on top of 5 500GB SATA
MB> drives. After 10 months I ran out of space and added a mirror of 2 250GB
MB> drives to my pool with "zpool add". No problem. I scrubbed it weekly. I only sa
> Yes, we've been pleasantly surprised by the demand.
> But, that doesn't mean we're not anxious to expand
> our ability to address such an important market as
> OpenSolaris and ZFS.
>
> We're actively working on OpenSolaris drivers. We
> don't expect it to take long - I'll keep you posted.
>
>
Hi,
I have a customer with the following question...
She's trying to combine two 460GB ZFS disks into one 900GB ZFS disk. If
this is possible, how is it done? Is there any documentation on this
that I can provide to them?
--
Regards,
Dave
--
My normal working hours are Sunday through Wednesday
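If striping with no redundancy is acceptable, the two disks can simply be put
into one pool; a minimal sketch with placeholder names:

  # zpool create bigpool c1t0d0 c1t1d0    # both disks striped: roughly 900GB usable, no redundancy
  # zfs create bigpool/data               # any filesystem in the pool sees the combined space

There is no single 900GB "disk" as such afterwards; the pool presents the
combined capacity to its filesystems, and losing either disk loses the whole pool.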
> When will L2ARC be available in Solaris 10?
My Sun SE said L2ARC should be in S10U7. It was scheduled for S10U6 (shipping
in a few weeks), but didn't make it in time. At least S10U6 will have ZIL
offload and ZFS boot.
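For reference, on builds that already support them, cache (L2ARC) and separate
log devices are added with ordinary zpool syntax (device names invented):

  # zpool add tank cache c3t0d0    # L2ARC read cache device
  # zpool add tank log c3t1d0      # separate intent log (slog) for the ZIL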
On Tue, 21 Oct 2008, MC wrote:
>
> There is a fusion-io user here claiming that the performance drops
> 90% after the device has written to capacity once. Does fusion-io
Isn't this what is expected from FLASH-based SSD devices? They have
to erase before they can write. The vendor is kind enou
Oracle on ZFS best practices? Docs? Blogs? Any recent/new info related
to running Oracle 10g and/or 11g on ZFS on Solaris 10?
dave
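Not a definitive answer, but the tuning most often cited in the ZFS Best
Practices guide is matching the datafile dataset's recordsize to Oracle's
db_block_size and keeping redo logs on their own dataset; a sketch with
placeholder names:

  # zfs create -o recordsize=8k tank/oradata    # match an 8K db_block_size for datafiles
  # zfs create tank/oralogs                     # redo logs: leave the default 128K recordsize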