Hello all,
I second Al's motion. Even a little script, à la the CoolTools, for
tuning Solaris for the T2000 would be great.
-J
On 1/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:
On Wed, 10 Jan 2007, Mark Maybee wrote:
> Jason J. W. Williams wrote:
> > Hi Robert,
> >
> > Thank you! Holy mackerel!
I am running a home fileserver with a pair of 4-port cheapo Silicon Image 3114
based cards. I had to down-rev the firmware on the cards to make them dumb
SATA controllers vs. RAID cards. I bought them at Fry's for about
$70/ea; they're "SIIG SATA 4-channel RAID", and the part number appears to
Hi,
Why would I ever need to specify ZFS mount(s) in /etc/vfstab at all? I see it
in some documents that zfs can be defined in /etc/vfstab with fstype zfs.
Thanks.
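Background, as I understand it: ZFS normally mounts datasets automatically
from the mountpoint property, so a vfstab entry is only needed when a dataset
is deliberately switched to a legacy mountpoint and managed like a UFS
filesystem. A minimal sketch, with tank/data as an example dataset name:

# zfs set mountpoint=legacy tank/data

and then a line in /etc/vfstab (device to mount, device to fsck, mount point,
FS type, fsck pass, mount at boot, mount options):

tank/data  -  /data  zfs  -  yes  -

After that, the dataset is mounted via mount/umount and vfstab rather than by
the zfs command.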
On Wed, 10 Jan 2007, Mark Maybee wrote:
> Jason J. W. Williams wrote:
> > Hi Robert,
> >
> > Thank you! Holy mackerel! That's a lot of memory. With that type of a
> > calculation my 4GB arc_max setting is still in the danger zone on a
> > Thumper. I wonder if any of the ZFS developers could shed s
Hi Mark,
Thank you. That makes a lot of sense. In our case we're talking around
10 multi-gigabyte files. The arc_max+3*arc_max+fragmentation was a bit
worrisome. It sounds, then, like this is mostly an issue on something
like an NFS server with a ton of small files, where the
minimum_file_node
> Hello Kyle,
>
> Wednesday, January 10, 2007, 5:33:12 PM, you wrote:
>
> KM> Remember though that it's been mathematically
> figured that the
> KM> disadvantages to RaidZ start to show up after 9
> or 10 drives. (That's
>
> Well, nothing like this was proved and definitely not
> mathematicall
Hey guys,
Due to long URL lookups, the DNLC was pushed to variable-sized
entries. The hit rate was dropping because of "name too long"
misses. This was done long ago, while I was at Sun, under a bug
I reported.
I don't know your usage, but you should at
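If it helps, the DNLC hit rate can be checked from the dnlcstats kstat, and
ncsize is tunable in /etc/system; both the grep pattern and the value below
are only examples, not recommendations:

# kstat -n dnlcstats | egrep 'hits|misses'

/etc/system entry (takes effect after a reboot):

set ncsize = 262144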
Jason J. W. Williams wrote:
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could shed some light
on the calculation?
In a worst-case scenario, Ro
Hello Jason,
Thursday, January 11, 2007, 1:10:10 AM, you wrote:
JJWW> Hi Robert,
JJWW> We've got the default ncsize. I didn't see any advantage to increasing
JJWW> it outside of NFS serving...which this server is not. For speed the
JJWW> X4500 is showing to be a killer MySQL platform. Between th
Hello Peter,
Thursday, January 11, 2007, 1:08:38 AM, you wrote:
>> It's just common sense advice - for many users, keeping raidz groups
>> below 9 disks should give good enough performance. However, if someone
>> creates a raidz group of 48 disks, he/she probably also expects
>> performance and in g
On 10-Jan-07, at 5:29 PM, roland wrote:
# zpool create 500megpool /home/roland/tmp/500meg.dat
cannot create '500megpool': name must begin with a letter
pool name may have been omitted
huh?
ok - no problem if special characters aren't allowed, but why
_this_ weird-looking limitation?
Pote
Hi Robert,
We've got the default ncsize. I didn't see any advantage to increasing
it outside of NFS serving...which this server is not. For speed the
X4500 is proving to be a killer MySQL platform. Between the blazing
fast procs and the sheer number of spindles, its performance is
tremendous. If
> It's just common sense advice - for many users, keeping raidz groups
> below 9 disks should give good enough performance. However, if someone
> creates a raidz group of 48 disks, he/she probably also expects
> performance, and in general raid-z wouldn't offer it.
There is at least one reason for wa
Hello Jason,
Thursday, January 11, 2007, 12:36:46 AM, you wrote:
JJWW> Hi Robert,
JJWW> Thank you! Holy mackerel! That's a lot of memory. With that type of a
JJWW> calculation my 4GB arc_max setting is still in the danger zone on a
JJWW> Thumper. I wonder if any of the ZFS developers could shed
Hello Wade,
Thursday, January 11, 2007, 12:30:40 AM, you wrote:
WSfc> Long story short, I wiped and reinstalled with U3 and raidz2 with
WSfc> hotspares like it should have had in the first place.
The same here.
Besides, I always install "my own" system rather than using the preinstalled
ones - except
Hello Jason,
Thursday, January 11, 2007, 12:46:32 AM, you wrote:
JJWW> Hi Robert,
JJWW> I read the following section from
JJWW> http://blogs.sun.com/roch/entry/when_to_and_not_to as indicating
JJWW> random writes to a RAID-Z had the performance of a single disk
JJWW> regardless of the group size
Hi Robert,
I read the following section from
http://blogs.sun.com/roch/entry/when_to_and_not_to as indicating that
random writes to a RAID-Z have the performance of a single disk
regardless of the group size:
Effectively, as a first approximation, an N-disk RAID-Z group will
behave as a single
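A rough worked example of that approximation (the IOPS figures are
illustrative, not measurements): if each disk delivers on the order of 150
random IOPS, a single 6-disk RAID-Z group still delivers roughly 150 random
IOPS, because every block is spread across all the disks in the group, while
the same 6 disks split into two 3-disk RAID-Z groups would deliver roughly
2 x 150 = 300.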
Robert:
> Better yet would be if the memory consumed by ZFS for caching (dnodes,
> vnodes, data, ...) behaved similarly to the page cache with UFS, so
> applications would be able to get back almost all memory used for ZFS
> caches if needed.
I believe that a better response to memory pressure is a
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could shed some light
on the calculation?
That kind of memory loss makes ZFS almost unusable for a d
[EMAIL PROTECTED] wrote on 01/10/2007 05:16:33 PM:
> Hello Jason,
>
> Wednesday, January 10, 2007, 10:54:29 PM, you wrote:
>
> JJWW> Hi Kyle,
>
> JJWW> I think there was a lot of talk about this behavior on the RAIDZ2 vs.
> JJWW> RAID-10 thread. My understanding from that discussion was that
Hello Jason,
Wednesday, January 10, 2007, 9:45:05 PM, you wrote:
JJWW> Sanjeev & Robert,
JJWW> Thanks guys. We put that in place last night and it seems to be doing
JJWW> a lot better job of consuming less RAM. We set it to 4GB and each of
JJWW> our 2 MySQL instances on the box to a max of 4GB.
Hello Jason,
Wednesday, January 10, 2007, 10:54:29 PM, you wrote:
JJWW> Hi Kyle,
JJWW> I think there was a lot of talk about this behavior on the RAIDZ2 vs.
JJWW> RAID-10 thread. My understanding from that discussion was that every
JJWW> write stripes the block across all disks on a RAIDZ/Z2 gro
# zpool create 500megpool /home/roland/tmp/500meg.dat
cannot create '500megpool': name must begin with a letter
pool name may have been omitted
huh?
ok - no problem if special characters aren't allowed, but why _this_
weird-looking limitation?
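For what it's worth, the same command should go through once the name starts
with a letter; the pool name below is just an example:

# zpool create megpool /home/roland/tmp/500meg.dat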
Hi Kyle,
I think there was a lot of talk about this behavior on the RAIDZ2 vs.
RAID-10 thread. My understanding from that discussion was that every
write stripes the block across all disks in a RAIDZ/Z2 group, thereby
making writes to the group no faster than writes to a single disk.
However reads
Robert Milkowski wrote:
Hello Kyle,
Wednesday, January 10, 2007, 5:33:12 PM, you wrote:
KM> Remember though that it's been mathematically figured that the
KM> disadvantages to RaidZ start to show up after 9 or 10 drives. (That's
Well, nothing like this was proved and definitely not mathemat
Hi Guys,
After reading through the discussion on this regarding ZFS memory
fragmentation on snv_53 (and forward) and going through our ::kmastat
output, it looks like ZFS is sucking down about 544 MB of RAM in the
various caches. About 360MB of that is in the zio_buf_65536 cache.
Next most notable is 55MB
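For anyone wanting to reproduce that breakdown, the per-cache numbers come
from the kernel memory allocator statistics; the grep pattern below is just
an example:

# echo ::kmastat | mdb -k | egrep 'zio_buf|arc|dnode'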
Sanjeev & Robert,
Thanks guys. We put that in place last night, and it seems to be doing
a much better job of keeping RAM consumption down. We set it to 4GB and
each of our 2 MySQL instances on the box to a max of 4GB. So hopefully a slush
of 4GB on the Thumper is enough. I would be interested in what the
othe
Hello Kyle,
Wednesday, January 10, 2007, 5:33:12 PM, you wrote:
KM> Remember though that it's been mathematically figured that the
KM> disadvantages to RaidZ start to show up after 9 or 10 drives. (That's
Well, nothing like this has been proven, and definitely not mathematically.
It's just common
[i]I think the original poster was thinking that non-enterprise users
would be most interested in only having to *purchase* one drive at a time.
Enterprise users aren't likely to balk at purchasing 6-10 drives at a
time, so for them adding an additional *new* RaidZ to stripe across is
easier.
[/i
[i]Enterprise feature questions), but it's possible now to expand a pool
containing raidz devs-- and this is the more likely case with
enterprise users:
# ls -lh /var/tmp/fakedisk/
total 1229568
-rw--T 1 root root 100M Jan 9 20:22 disk1
-rw--T 1 root root 100M Jan 9 20:22 disk2
-rw--T
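A sketch of what that expansion looks like with file-backed disks like the
ones listed above (assuming six 100 MB files disk1 through disk6, and
"testpool" as an example pool name):

# zpool create testpool raidz /var/tmp/fakedisk/disk1 /var/tmp/fakedisk/disk2 /var/tmp/fakedisk/disk3
# zpool add testpool raidz /var/tmp/fakedisk/disk4 /var/tmp/fakedisk/disk5 /var/tmp/fakedisk/disk6
# zpool status testpool

Note this grows the pool by striping across a second raidz vdev; it does not
add disks to the existing raidz group.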
Martin wrote:
I agree that for non-enterprise users the expansion of
raidz vdevs is a critical missing feature.
Now you've got me curious. I'm not trying to be inflammatory here, but how is
online expansion a non-enterprise feature? From my perspective, enterprise
users are the ones most li
"Dick Davies" <[EMAIL PROTECTED]> wrote on 01/10/2007 05:26:45 AM:
> On 08/01/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > I think that in addition to lzjb compression, squishing blocks that contain
> > the same data would buy a lot of space for administrators working in many
> > c
Jason,
Robert is right...
The point is that the ARC is the caching module of ZFS, and the majority of
the memory is consumed through the ARC.
Hence, by limiting the c_max of the ARC we limit the amount the ARC consumes.
However, other modules of ZFS will still consume memory, but that may not be
as significant as the ARC.
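The tunable being discussed is zfs_arc_max, set via /etc/system and picked up
at boot; the value shown is the 4 GB example from this thread:

set zfs:zfs_arc_max = 0x100000000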
On 08/01/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I think that in addition to lzjb compression, squishing blocks that contain
the same data would buy a lot of space for administrators working in many
common workflows.
This idea has occurred to me too - I think there are definite
advant
Hello Jason,
Tuesday, January 9, 2007, 10:28:12 PM, you wrote:
JJWW> Hi Sanjeev,
JJWW> Thank you! I was not able to find anything as useful on the subject as
JJWW> that! We are running build 54 on an X4500, would I be correct in my
JJWW> reading of that article that if I put "set zfs:zfs_arc_ma