Ok, so someone is doing IT and has questions.
Thank you!
[I did not post this using another name, because I am too honorable to do
that.]
This is a list discussion; it should not be paused for one voice.
best,
z
[If Orvar has other questions that I have not addressed, please ask me
off-list. It's
Charles Wright wrote:
> Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card
> I got errors on all drives that result from SCSI timeout errors.
[snip litany of errors]
I had similar problems on a 1120 card with 2008.05
I upgraded to 2008.11 and the something*.16 sun areca d
Still not happy?
I guess I will have to do more spamming myself --
So, I have to explain why I didn't like Linux but I like MS and OpenSolaris?
I don't have any religious love for MS or Sun.
It's just that I believe talents are best utilized in an organized and systematic fashion, to benefit the whole.
Lea
no one is working tonight?
where are the discussions?
ok, I will not be picking on Orvar all the time, if that's why...
the Windows statements were heavy, but hey, I am at home, not at work; it was only because Orvar was suffering.
folks, are we not going to do IT just because I played wi
Ok, Orvar, that's why I always liked you. You really want to get to the point; otherwise you won't give up.
So this is all about the Windows thing, huh?
Yes, I love MS Storage because we shared/share a common dream.
Yes, I love King and High because they are not arrogant if you respect them.
A
The way I do IT is very much in line with Ian's approach.
I wouldn't care too much about SW RAID vs. HW RAID, but only because I use a real "application aware" storage tier to interface with the specific applications. [they pay other folks to do the RAID layer]
But this approach is clearly o
OMG, we are still doing this Orvar thing?
I am even sick of seafood now and had a 2X-BLT NYC style for lunch today, so nice that I didn't even bother checking on beloved Orvar...
You open folks may not need the 7000 because you dunno why it's cool.
If you pay for those, you will never have to be at this l
Vincent Fox wrote:
> I dunno man, just tell you what works well for me with the hardware I have
> here.
>
> If you can go out and buy all new equipment like 7000-series storage then you
> don't need HW RAID.
>
IMHO a 7000 system is "hardware" RAID: a controller providing data
services.
-- ri
2C from Oz:
Windows (at least XP - I have thus far been lucky enough to avoid running Vista on metal) has packet schedulers, quality-of-service settings, and other crap that can severely impact Windows performance on the network.
I have found that setting the following made a difference for me:
> "ok" == Orvar Korvar writes:
ok> Does HW raid + ZFS give any gains, compared to only
ok> ZFS?
yes. listed in my post.
ok> The question is, is ZFS that good, that HW raid can be
ok> omitted?
No, that is not The Question! You assume there are no downsides to
using ZFS wi
I'm currently experiencing exactly the same problem and it's been driving me nuts. I tried OpenSolaris and am currently running the latest version of SXCE, both with exactly the same results.
This issue occurs with both CIFS, which shows the speed degrade, and iSCSI, which just starts off at the low
Don't try to mount the same zvol on two different machines at the same time. You will end up with a corrupted pool. EXPORT the zpool from your Mac first.
If you run 'zpool import -d /dev/zvol/dsk/' on the Solaris box, your zpool from the Mac iSCSI volume should show up. If it shows up then you
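A rough sketch of the sequence, with placeholder names ("macpool" and the "tank" zvol directory are assumptions, not from the original setup):

On the Mac, release the pool cleanly first:
# zpool export macpool

Then, on the Solaris box, point the import scan at the zvol device nodes:
# zpool import -d /dev/zvol/dsk/tank
# zpool import -d /dev/zvol/dsk/tank macpool

The first import command only lists importable pools; the second actually imports the named one.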
M,
Just taking a stab at it.
Yes. This should work - well, mounting it locally through iSCSI - there may be a smarter way ... ??
Install the iSCSI client (if you don't already have it installed):
pfexec pkg install SUNWiscs
Then follow the documentation on mounting iSCSI LUNs on OpenSolaris si
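A minimal sketch of the initiator-side setup, assuming a target at 192.168.1.10 (placeholder address) and sendtargets discovery:

# iscsiadm add discovery-address 192.168.1.10:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
# format

The LUN should then appear in format like a local disk, ready for zpool create or zpool import.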
I am currently sharing out ZFS iSCSI volumes from a Solaris server to a Mac. I installed ZFS locally on the Mac, created a local ZFS pool, and put a ZFS filesystem on the local volume. Can I now mount the volume on the Solaris server and see the data?
I dunno man, just tell you what works well for me with the hardware I have here.
If you can go out and buy all new equipment like 7000-series storage then you
don't need HW RAID.
If you don't need HA & clustering & all that jazz then just get a bunch of big
drives and ZFS RAID-10 them.
As a pr
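For what it's worth, "ZFS RAID-10" here just means a pool of mirror pairs; a sketch with placeholder device names:

# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

ZFS stripes across the mirrors automatically, and more pairs can be added later with 'zpool add tank mirror ...'.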
Thanks for the reply; I've also had issues with consumer-class drives and other RAID cards.
The drives I have here (all 16 drives) are Seagate® Barracuda® ES enterprise hard drives, Model Number ST3500630NS.
If the problem was with the drive I would expect the same behavior in both Solaris and o
On Tue, Jan 13, 2009 at 9:16 AM, Jon Tsu wrote:
> Hi
>
> I have a Solaris-based home NAS and wish to add the Supermicro AOC-SAT2-MV8
> 8-port SATA card to add more disks to the existing zpool. Is it a case of
> physically installing the card and using cfgadm -al ?
>
> Thanks.
> --
>
Since you'
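In broad strokes, the procedure usually looks something like the sketch below (the pool name "tank" and the raidz layout are assumptions for illustration):

Physically install the card and disks, boot, then:
# devfsadm
# cfgadm -al
# format

Once the new disks show up in format, grow the existing pool, e.g.:
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

Note that zpool add grows the pool permanently; the new vdev cannot be removed later, so double-check the layout first.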
Just a hunch... but what kind of drives are you using? Many of the RAID card vendors report that "consumer class" drives are incompatible with their cards because the drives will spend much longer trying to recover from failure than the "enterprise class" drives will. This causes the card to thin
On Jan 12, 2009, at 17:39, Carson Gaspar wrote:
> Joerg Schilling wrote:
>> Fabian Wörner wrote:
>>
>>> my post was not meant to start a GPL<>CDDL discussion.
>>> It was just an idea to promote ZFS and OpenSolaris.
>>> If it was against anything, it was against exFAT, nothing else!!!
>>
>> If you like to pro
Orvar Korvar wrote:
> Oh, thanks for your very informative answer. I've added a link to your
> information in this thread:
>
> But... Sorry, but I wrote it wrong. I meant "I will not recommend against HW
> raid + ZFS anymore" instead of "... recommend against HW raid".
>
> The Windows people's questio
On Jan 13, 2009, at 21:49, Orvar Korvar wrote:
> Oh, thanks for your very informative answer. I've added a link to your
> information in this thread:
>
> But... Sorry, but I wrote it wrong. I meant "I will not recommend
> against HW raid + ZFS anymore" instead of "... recommend against HW
> raid
So you recommend ZFS + HW RAID instead of only ZFS? Is it preferable to add HW RAID to ZFS?
Oh, thanks for your very informative answer. I've added a link to your information in this thread:
But... Sorry, but I wrote it wrong. I meant "I will not recommend against HW raid + ZFS anymore" instead of "... recommend against HW raid".
The Windows people's question is:
which is better?
1. HW rai
Got some more information about HW raid vs ZFS:
http://www.opensolaris.org/jive/thread.jspa?messageID=326654#326654
The iperf test is coming out fine, actually...
iperf -s -w 64k
iperf -c -w 64k -t 900 -i 5
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
Totally steady. I could probably implement some tweaks to improve it, but if I were getting a steady 77% of gigabi
Wanna give
# zpool attach -f rpool c3d0s0 c3d1p0
...a try ?
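If this is a root pool, the new half of the mirror usually also needs boot blocks before it is bootable; a sketch for x86, assuming the mirror was attached on a slice such as c3d1s0:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0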
I happen to have some 3510, 3511 on SAN, and older 3310 direct-attach arrays
around here.
Also some newer 2540 arrays.
Our preferred setup for the past year or so is 2 arrays available to the server.
From each array, make 2 LUNs available. Take these LUNs on the server and ZFS them as RAID-10
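A rough sketch of that layout (device names are placeholders; the idea is that each mirror pairs one LUN from each array, so the pool survives losing an entire array):

# zpool create tank mirror c4t0d0 c5t0d0 mirror c4t1d0 c5t1d0

Here the c4* devices would be the LUNs from one array and the c5* devices the LUNs from the other.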
The client isn't losing the network. After the issue I am still able to ping other systems and get to the web from the Mac. On the other hand, I can't ping any hosts or get to the web from the server.
On Tue, January 13, 2009 09:51, Neil Perrin wrote:
> I'm sorry about the problems. We try to be responsive to fixing bugs and
> implementing new features that people are requesting for ZFS.
> It's not always possible to get it right. In this instance I don't think
> the
> bug was reproducible, and
> "ok" == Orvar Korvar writes:
ok> Nobody really knows for sure.
ok> I will tell people that ZFS + HW raid is good enough, and I
ok> will not recommend against HW raid anymore.
jesus, ok fine if you threaten to let ignorant Windows morons strut
around like arrogant experts, voic
I'm sorry about the problems. We try to be responsive to fixing bugs and
implementing new features that people are requesting for ZFS.
It's not always possible to get it right. In this instance I don't think the
bug was reproducible, and perhaps that's why it hasn't received the attention
it deserv
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card I got errors on all drives that result from SCSI timeout errors.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block:
Hi
I have a Solaris-based home NAS and wish to add the Supermicro AOC-SAT2-MV8 8-port SATA card to add more disks to the existing zpool. Is it a case of physically installing the card and using cfgadm -al?
Thanks.
To be honest I am quite surprised, as the bug you're referring to was submitted early in 2008 and last updated over the summer. Quite surprised that Sun has not come up with a fix for it so far. ZFS is certainly gaining some popularity at my workplace, and we were thinking of using it instead of v
Hi
Host: VirtualBox 2.1.0 (WinXP SP3)
Guest: OSol 5.11snv_101b
IDE Primary Master: 10 GB, rpool
IDE Primary Slave: 10 GB, empty
format output:
AVAILABLE DISK SELECTIONS:
0. c3d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c3d1
/pci@0,0/pci-ide@1,1/i...
On Tue, Jan 13, 2009 at 2:42 PM, Johan Hartzenberg wrote:
>
>
> On Fri, Jan 9, 2009 at 11:51 AM, Johan Hartzenberg
> wrote:
>>
>>
>> I have this situation working and use my "shared" pool between Linux and
>> Solaris. Note: The shared pool needs to reside on a whole physical disk or
>> on a pri
On Fri, Jan 9, 2009 at 11:51 AM, Johan Hartzenberg wrote:
>
>
> I have this situation working and use my "shared" pool between Linux and
> Solaris. Note: The shared pool needs to reside on a whole physical disk or
> on a primary fdisk partition, Unless something changed since I last checked,
> S
Does creating ZFS pools on multiple partitions on the same physical drive still
run into the performance and other issues that putting pools in slices does?
OK, I draw the conclusion that there is no consensus on this. Nobody really knows for sure.
I am in the process of converting some Windows guys to ZFS, and they think that HW raid + ZFS should be better than ZFS alone. I tell them they should ditch their HW raid, but cannot really justify why
The output of zpool import is now
j...@opensolaris:~# zpool import
pool: tank
id: 12465835398523411309
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may be co
This is a known bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6749498
- being a duplicate of an existing bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6462803
cheers,
tim
On Mon, 2009-01-12 at 14:07 -0800, Robert Bauer wrote:
> Time slider is not wo
Robert Bauer wrote:
> I cced my question to the ZFS discuss group.
>
Which was?
--
Ian.
I CCed my question to the zfs-discuss group.