I have to admit that fmthard does appear to be a bit of a sledgehammer in this
case. I suspected I was doing something wrong there, and you've confirmed it now.
Many thanks, I'll try that tonight.
On Thu, Jan 28, 2010 at 09:33:19PM -0800, Ed Fang wrote:
> We considered an SSD ZIL as well, but from my understanding it won't
> help much on sequential bulk writes but really helps on random
> writes (by sequencing what goes to disk better).
A slog will only help if your write load involves lots of sync writes.
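(For reference, a dedicated log device is added with something along these lines; this is only a sketch, and both the pool and the device name are placeholders, not anything from the original post.)

# zpool add tank log c4t0d0    # "tank" and "c4t0d0" are hypothetical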
Thanks for the responses, guys. It looks like I'll probably use RaidZ2 with 8
drives. The write bandwidth requirement isn't that great, as it'll be a hundred gigs
every couple of weeks in a bulk-load type of environment. So, not a major issue.
Testing with 8 drives in a raidz2 easily saturated a GigE connection.
Attached the zpool history.
Things to note: raidz2-0 was created on FreeBSD 8
2010-01-16.16:30:05 zpool create rzpool2 raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9
2010-01-18.17:04:17 zpool export rzpool2
2010-01-18.21:00:35 zpool import rzpool2
2010-01-23.22:11:03 zpool export rzpool2
2010-01-24.
> Also, I noticed you're using 'EARS' series drives.
> Again, I'm not sure if the WD10EARS drives suffer
> from a problem mentioned in these posts, but it might
> be worth looking into -- especially the last link:
Aren't the EARS drives the first ones using 4k sectors? Does OpenSolaris
support them?
Getting the following error when trying to do a ZFS Flash install via
jumpstart.
error: field 1 - keyword "pool"
Do I have to have Solaris 10 u8 installed as the mini-root, or will previous
versions of Solaris 10 work?
Jumpstart profile below:
install_type flash_install
archive_location nfs://19
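(For comparison, a ZFS flash profile is normally expected to carry a pool line as well; this is only a sketch, and the NFS path and the disk name are placeholders rather than anything from the original jumpstart setup.)

install_type     flash_install
archive_location nfs://server/export/flash/archive.flar
pool rpool auto auto auto c0t0d0s0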
On Thu, Jan 28, 2010 at 7:58 PM, Tiernan OToole wrote:
> Good morning. This is more than likely a stupid question on this alias
> but I will ask anyway. I am building a media server in the house and
> am trying to figure out what OS to install. I know it must have ZFS
> support but can't figure i
Hmm, no... that's the item I linked to in my first post.
I don't have a lot of time to help here, but this post of mine might possibly
help with ACLs:
http://breden.org.uk/2009/05/10/home-fileserver-zfs-file-systems/
Cheers,
Simon
On Jan 28, 2010, at 4:58 PM, Tiernan OToole wrote:
> Good morning. This is more than likely a stupid question on this alias
> but I will ask anyway. I am building a media server in the house and
> am trying to figure out what OS to install. I know it must have ZFS
> support but can't figure if I s
Good morning. This is more than likely a stupid question on this alias
but I will ask anyway. I am building a media server in the house and
am trying to figure out what OS to install. I know it must have ZFS
support but can't figure if I should use FreeNAS or OpenSolaris.
FreeNAS has the advanta
On Thu, Jan 28, 2010 at 07:26:42AM -0800, Ed Fang wrote:
> 4 x 6-disk vdevs in RaidZ1 configuration
> 3 x 8-disk vdevs in RaidZ2 configuration
Another choice might be
2 x 12-disk vdevs in raidz2 configuration
This gets you the space of the first, with the recovery properties of
the second - at a cost in pot
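(As a rough sketch of that 2 x 12 layout; the device names below are entirely hypothetical.)

# placeholder device names throughout
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0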
Are you using the latest IT mode firmware? (1.26.00 I think, as listed above; I
haven't checked mine, an AOC-USAS-L8i, which uses the same controller.)
Also, I noticed you're using 'EARS' series drives.
Again, I'm not sure if the WD10EARS drives suffer from a problem mentioned in
these posts, but it
> Replacing my current media server with another larger capacity media
> server. Also switching over to solaris/zfs.
>
> Anyhow we have 24 drive capacity. These are for large sequential
> access (large media files) used by no more than 3 or 5 users at a time.
What type of disks are you using,
I think the SATA(2)-->SATA(1) connection will negotiate correctly,
but maybe some hardware expert will confirm.
cs
On 01/28/10 15:27, dick hoogendijk wrote:
On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
Or, if possible, connect another larger disk and attach it to the original root disk
My experience was different again.
I have the same timeout issues with both the LSI and Supermicro cards in IT
mode.
IR mode on the Supermicro card didn't solve the problem, but seems to have
reduced it.
Server has 1 x 16 bay chassis and 1 x 24 bay chassis (both use expander)
test pool has 24 x
Hi James
> I do not think that you are reading the data
> correctly.
>
> The issues that we have seen via this list and
> storage-discuss
> have implicated downrev firmware on cards, and the
> various different
> disk drives that people choose to attach to those
> cards.
Thanks for pointing that
On Jan 28, 2010, at 2:23 PM, Michelle Knight wrote:
> Hi Folks,
>
> As usual, trust me to come up with the unusual. I'm planning ahead for
> future expansion and running tests.
>
> Unfortunately until 2010-2 comes out I'm stuck with 111b (no way to upgrade
> to anything other than 130, which gives
Hi Michelle,
Your previous mail about the disk label reverting to EFI makes me wonder
whether you used the format -e option to relabel the disk, but your disk
label below looks fine.
This also might be a known bug (6419310), whose workaround is to use the
-f option to zpool attach.
An interim
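(If that bug is what is biting you, the workaround amounts to forcing the attach; the pool and device names below are placeholders only.)

# zpool attach -f rpool c0t0d0s0 c1t1d0s0    # placeholder pool/slice names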
Personally, I'd go with 4x raidz2 vdevs, each with 6 drives. You may not get
as much raw storage space, but you can lose up to 2 drives per vdev, and you'll
get more IOPS than with a 3x vdev setup.
Our current 24-drive storage servers use the 3x raidz2 vdevs with 8 drives in
each. Performance
Hi--
I need to collect some more info:
1. What Solaris release is this?
2. Send me the output of this command on the file system below:
# zfs get aclmode,aclinherit pool/dataset
3. What copy command are you using to copy testfile?
In addition, are you using any options?
Thanks,
Cindy
On 01
Also, write performance may drop because of write cache disable:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools
Just a hint, have not tested this.
Robert
A bit more information... this is what I've used the "all free hog" option to
generate:

Part        Tag     Flag    Cylinders       Size        Blocks
  0   unassigned     wm     3 - 9725       74.48GB     (9723/0/0) 15615
  1   unassigned     wm     0                  0       (0/0/0)        0
On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
> Or, if possible, connect another larger disk and attach it to the original
> root
> disk or even replace the smaller root pool disk with the larger disk.
I go for that one. But since it's a somewhat older system I only have
IDE and SATA
Hi Folks,
As usual, trust me to come up with the unusual. I'm planning ahead for future
expansion and running tests.
Unfortunately until 2010-2 comes out I'm stuck with 111b (no way to upgrade to
anything other than 130, which gives me problems)
Anyway, here is the situation.
Initial installation
While thinking about ZFS as the next generation filesystem without limits I am
wondering if the real world is ready for this kind of incredible technology ...
I'm actually speaking of hardware :)
ZFS can handle a lot of devices. Once in the import bug
(http://bugs.opensolaris.org/bugdatabase/v
On 01/28/10 14:19, Lori Alt wrote:
On 01/28/10 14:08, dick hoogendijk wrote:
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
But those could be copied by send/recv from the larger disk (current
root pool) to the smaller disk (intended new root pool). You won't be
attaching anything until you can boot off the smaller disk
No picture, but something like this:
http://www.provantage.com/supermicro-aoc-smp-lsiss9252~7SUP91MC.htm ?
I managed to get a picture of the interposer card:
http://i46.tinypic.com/wspoxu.jpg
So from that, you can see that it uses LSI's LSISS1320 AAMUX, but specifically,
it looks like they use a custom produced version of the LSISS9132, like this:
http://www.lsi.com/DistributionSystem/AssetDocument/
Hey, thanks for replying!
I've been accessing my server with Samba, but now that I'm switching
over to NFS, I can't seem to get the ACLs right...
Basically, moving and overwriting files seems to work fine. But if I
copy a file - either from an external source or internally on the server
- the
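(One thing worth checking in a case like this is how the ACL properties are set on the file system in question; a sketch, with a placeholder dataset name.)

# zfs get aclmode,aclinherit tank/media      # "tank/media" is a placeholder
# zfs set aclinherit=passthrough tank/media
# zfs set aclmode=passthrough tank/media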
Hello,
Can anybody tell me if EMC's Replication software is supported using
ZFS, and if so is there any particular version
of Solaris that it is supported with?
thanks,
-mark
--
Mark Woelfel
Storage TSC Backline Volume Products
Sun Microsystems
Work: 781-442-
On 01/28/10 14:08, dick hoogendijk wrote:
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
But those could be copied by send/recv from the larger disk (current
root pool) to the smaller disk (intended new root pool). You won't be
attaching anything until you can boot off the smaller disk
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
> But those could be copied by send/recv from the larger disk (current
> root pool) to the smaller disk (intended new root pool). You won't be
> attaching anything until you can boot off the smaller disk and then it
> won't matter what's on the
On 28/01/10 09:36 PM, Tonmaus wrote:
Thanks for your answer.
I asked primarily because of the mpt timeout issues I
saw on the list.
Hi Arnaud,
I am looking into the LSI SAS 3081 as well. My current understanding
with mpt issues is that the "sticky" part of these problems is rather
related to multipath features
Going by the parts lists here:
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/J4400/components&source=
All the SATA drive kits come with the hard drive, mounting bracket, and a SATA
interposer card.
We want to use 2TB drives, but Sun wants $1645 (!) for a single kit. That's
On 01/28/10 12:05, Dick Hoogendijk wrote:
On 28-1-2010 17:35, Cindy Swearingen wrote:
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.
D
Yes, here it is (performance is VMware on a laptop, so sorry for that).
How did I test?
1) My disks:

LUN ID    Device   Type    Size       Volume   Mounted   Remov   Attach
c0t0d0    sd4      cdrom   No Media            no        yes     ata
c1t0d0    sd0      disk    8G
It looks like there is not a free slot for a hot spare? If that is the case,
then it is one more factor to push towards raidz2, as you will need time to
remove the failed disk and insert a new one. During that time you don't want to
be left unprotected.
On 28-1-2010 17:35, Cindy Swearingen wrote:
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.
Dick, you will need to update the BIOS to boot from the smaller disk.
On Jan 28, 2010, at 10:54 AM, Lutz Schumann wrote:
> Actually I tested this.
>
> If I add an l2arc device to the syspool it is not used when issuing I/O to
> the data pool (note: on a root pool it must not be a whole disk, but only a
> slice of it, otherwise ZFS complains that root disks may not co
Actually I tested this.
If I add an l2arc device to the syspool it is not used when issuing I/O to the
data pool (note: on a root pool it must not be a whole disk, but only a slice of
it, otherwise ZFS complains that root disks may not contain an EFI label).
So this does not work - unfortunately.
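(For anyone following along: to have the data pool benefit, the cache device has to be added to the data pool itself; a sketch with placeholder pool and slice names.)

# zpool add datapool cache c2t0d0s0    # placeholder names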
First, you might want to send this out to caiman-disc...@opensolaris.org
as well in order to find the experts in the OpenSolaris install
process. (I am familiar with zfs booting in general and the legacy
installer, but not so much about the OpenSolaris installer).
Second, including the text
Some very interesting insights on the availability calculations:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
For streaming also look at:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6732803
Regards,
Robert
On Wed, 27 Jan 2010, RayLicon wrote:
If no one has any data on this issue then fine, but I didn't waste
my time posting to this site to get responses that simply say "don't swap".
Perhaps you can set up a test environment, measure this in a
scientific way, and provide a formal summary for our
That must be a combination of many things to make it happen.
i.e. expander revision, SAS HBA revision, firmware, disk model, firmware, etc.
I didn't see the problem on my system but I haven't used SATA disks with it so
I can't say.
Thanks for the info, Dan.
I will test it out, but won't be anytime soon. Waiting for that SSD.
On 28-1-2010 17:35, Cindy Swearingen wrote:
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.
Dick, you will need to update the BIOS to boot from the smaller disk.
On 28-1-2010 16:52, Thomas Maier-Komor wrote:
have you considered creating an alternate boot environment on the
smaller disk, rebooting into this new boot environment, and then
attaching the larger disk after destroying the old boot environment?
beadm might do this job for you...
What a gre
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using the -p option.
The beadm operation will set the bootfs pool property and update the
GRUB entry.
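(Concretely, that would be something like the sketch below; the new pool and BE names are placeholders, and the target root pool has to exist before the beadm run.)

# beadm create -p rpool2 newBE    # placeholder pool/BE names
# beadm activate newBE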
Dick, you will need to update the BIOS to boot from the smaller disk.
Thanks,
Cindy
On
...how this can happen is not the topic of this message.
Now there is a problem and I need to solve it, if that is possible.
I have one 80 GB HDD; the entire disk is used for the rpool, with the system and
home folders on it.
Reinstalling the system is no problem, but I need to save some files from the
user dirs.
an, o
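(Without knowing the exact dataset layout, saving the home file systems usually comes down to something like this sketch, in which both the dataset name and the second pool are placeholders.)

# zfs snapshot -r rpool/export/home@rescue
# zfs send -R rpool/export/home@rescue | zfs recv -d backuppool    # placeholder names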
On 28.01.2010 15:55, dick hoogendijk wrote:
>
> Cindy Swearingen wrote:
>
>> On some disks, the default partitioning is not optimal and you have to
>> modify it so that the bulk of the disk space is in slice 0.
>
> Yes, I know, but in this case the second disk indeed is smaller ;-(
> So I wonder
If a vdev fails you lose the pool.
If you go with raidz1 and 2 of the RIGHT drives fail (2 in the same vdev)
your pool is lost.
I was faced with a similar situation recently and decided that raidz2 was
the better option.
It comes down to resilver times; if you look at how long it will take
Hi Dick,
Yes, my assessment is that you can use zfs send|recv to recreate the root pool
snapshots on the other disk, in addition to the other steps that are needed for
full root pool recovery. See the link below, following the
steps for storing the root pool snapshots as snapshots rather than
files. I
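(In outline, and with pool and snapshot names that are only placeholders, the send/recv part looks like this.)

# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup | zfs recv -Fd rpool2    # placeholder pool/snapshot names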
Replacing my current media server with another larger capacity media server.
Also switching over to solaris/zfs.
Anyhow we have 24 drive capacity. These are for large sequential access (large
media files) used by no more than 3 or 5 users at a time. I'm inquiring as to
what the best configuration
Cindy Swearingen wrote:
> On some disks, the default partitioning is not optimal and you have to
> modify it so that the bulk of the disk space is in slice 0.
Yes, I know, but in this case the second disk indeed is smaller ;-(
So I wonder, should I reinstall the whole thing on this smaller disk
On Wed, 27 Jan 2010, TheJay wrote:
Guys,
Need your help. My DEV131 OSOL build with my 21TB disk system somehow got
really screwed:
This is what my zpool status looks like:
        NAME        STATE     READ WRITE CKSUM
        rzpool2     DEGRADED     0     0     0
          raidz2
Hello.
Thanks for the config, but Chenbro is not widely distributed here in Russia.
As for 3Ware RAID cards, I think it's better to get "dumb" HBA cards and let ZFS
do all the work.
On Wed, Jan 27, 2010 at 10:11 PM, Mark Bennett
wrote:
> Hi Giovanni,
>
> I have seen these while testing the mpt timeout issue, and on other systems
> during resilvering of failed disks and while running a scrub.
>
> Once so far on this test scrub, and several on yesterday's.
>
> I checked the ios
> Thanks for your answer.
>
> I asked primarily because of the mpt timeout issues I
> saw on the list.
Hi Arnaud,
I am looking into the LSI SAS 3081 as well. My current understanding with mpt
issues is that the "sticky" part of these problems is rather related to
multipath features, that is us
Freddie Cash writes:
> We use the following for our storage servers:
> [...]
> 3Ware 9650SE PCIe RAID controller (12-port, multi-lane)
> [...]
> Fully supported by FreeBSD, so everything should work with
> OpenSolaris.
FWIW, I've used the 9650SE with 16 ports in OpenSolaris 2008.11 and
2009.06, a
Albert,
On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote:
> hi there,
>
> maybe this is a stupid question, yet I haven't found an answer anywhere ;)
> let's say I've got 3x 1.5 TB HDDs - can I create equal partitions out of each and
> make a raid5 out of it? Sure, the safety would drop, but
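(For what it's worth, a raidz built from one slice per disk would be created roughly like the sketch below; the slice names are purely hypothetical.)

# zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0    # placeholder names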
SAS disks are more expensive. Besides, there are no 2 TB 7200 RPM SAS drives on the
market yet.
Seagate released a 2 TB SAS drive last year.
http://www.seagate.com/ww/v/index.jsp?locale=en-US&vgnextoid=c7712f655373f110VgnVCM10f5ee0a0aRCRD
Yes, it was announced. But it is not available in Russia yet.