No snapshots are running, and I have only 21 filesystems mounted. The
blocksize is the default. I don't think the disks are slow, because I get
read and write rates of about 350 MB/s. The BIOS is the latest, and I also
tried splitting the pool across two controllers; none of this helped.
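A hedged way to check whether the two controllers are actually sharing the
load, assuming a pool named "tank" (the name is my placeholder, not from the
thread):

    # per-vdev read/write bandwidth, sampled every 5 seconds; uneven
    # numbers across the two controllers would point at a layout problem
    zpool iostat -v tank 5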
On Mon, 22 Jun 2009, Thomas wrote:
I have a raidz1 consisting of 6 5400rpm drives in this zpool. I have
stored some media in one FS and 200k files in another. Neither FS is
written to much. The pool is 85% full.
Could this issue also be the reason that the playback is lagging when I
am playing (reading) some media?
Hi,
I have a raidz1 consisting of 6 5400rpm drives in this zpool. I have stored
some media in one FS and 200k files in another. Neither FS is written to
much. The pool is 85% full.
Could this issue also be the reason that the playback is lagging when I am
playing (reading) some media?
OSOL ips_111
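A quick sketch of how to see that 85% figure and where the space goes, with
"tank" again as a placeholder pool name; pools this full are commonly
reported to allocate much more slowly:

    zpool list tank                             # CAP column: percent of pool in use
    zfs list -o name,used,avail,refer -r tank   # space consumed per filesystem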
On 18 Jun 2009, at 20:23, Richard Elling wrote:
Cor Beumer - Storage Solution Architect wrote:
Hi Jose,
Well, it depends on the total size of your zpool and how often these
files are changed.
...and the average size of the files. For small files, it is likely
that the default recordsize will not be optimal, for several reasons. …
Richard Elling writes:
> George would probably have the latest info, but there were a number of
> things which circled around the notorious "Stop looking and start ganging"
> bug report:
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6596237
Indeed: we were seriously bitten by this.
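For anyone trying to tell whether a pool has started ganging, a hypothetical
DTrace probe count; this assumes the kernel function zio_write_gang_block()
is visible to the fbt provider, which is my assumption rather than anything
from the bug report:

    # count gang-block writes for 10 seconds; a steadily rising count
    # suggests the allocator can no longer find contiguous free space
    dtrace -n 'fbt::zio_write_gang_block:entry { @ = count(); }' -c 'sleep 10'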
Gary Mills wrote:
On Thu, Jun 18, 2009 at 12:12:16PM +0200, Cor Beumer - Storage Solution
Architect wrote:
What they noticed on the X4500 systems was that when the zpool became
about 50-60% full, the performance of the system dropped enormously.
They do claim this has to do with the fragmentation …
On Thu, Jun 18, 2009 at 12:12:16PM +0200, Cor Beumer - Storage Solution
Architect wrote:
>
> What they noticed on the X4500 systems was that when the zpool became
> about 50-60% full, the performance of the system dropped enormously.
> They do claim this has to do with the fragmentation …
Cor Beumer - Storage Solution Architect wrote:
Hi Jose,
Well, it depends on the total size of your zpool and how often these
files are changed.
...and the average size of the files. For small files, it is likely
that the default recordsize will not be optimal, for several reasons.
Are these …
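For concreteness: recordsize is a per-dataset property and only applies to
blocks written after it is changed. A small sketch, with "tank/small" as a
made-up dataset name:

    zfs get recordsize tank/small       # the default is 128K
    zfs set recordsize=16K tank/small   # a closer match for small files
    # existing files keep their old block size; only new writes use 16K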
Hi Jose,
Well, it depends on the total size of your zpool and how often these
files are changed.
I was at a customer, a huge internet provider, who had 40 X4500s running
standard Solaris with ZFS. All the machines were equipped with 48x 1TB
disks. The machines were used to provide the email …
Hi Dirk,
How might we explain a find run on a Linux client against an NFS-mounted
filesystem on the 7000 taking significantly longer (i.e. performance
behaving as though the command was run from Solaris)? I am not sure
whether find would have the intelligence to differentiate between
filesystem types.
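One quick experiment, hedged: GNU find has a -noleaf option that disables
the link-count shortcut described in Dirk's message below, so comparing the
two runs on the NFS mount (the path here is a placeholder) would show
whether that optimization is the variable:

    time find /mnt/7000fs >/dev/null            # leaf optimization enabled (default)
    time find /mnt/7000fs -noleaf >/dev/null    # same walk without the shortcut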
Dirk Nitschke wrote:
> Solaris /usr/bin/find and Linux (GNU) find work differently! I have
> experienced dramatic runtime differences some time ago. The reason is
> that Solaris find and GNU find use different algorithms.
Correct: Solaris find honors the POSIX standard, GNU find does not :-)
>Hi Louis!
>
>Solaris /usr/bin/find and Linux (GNU) find work differently! I have
>experienced dramatic runtime differences some time ago. The reason is
>that Solaris find and GNU find use different algorithms.
>
>GNU find uses the st_nlink ("number of links") field of the stat
>structure to …
Hi Louis!
Solaris /usr/bin/find and Linux (GNU) find work differently! I have
experienced dramatic runtime differences some time ago. The reason is
that Solaris find and GNU find use different algorithms.
GNU find uses the st_nlink ("number of links") field of the stat
structure to optimize …
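A tiny illustration of the optimization Dirk describes, assuming a
filesystem that maintains the conventional "." and ".." hard links, so a
directory's link count is 2 plus its number of subdirectories:

    nlinks=`stat -c %h /etc`    # GNU stat; on Solaris, ls -ld /etc shows the count
    echo "subdirectories under /etc: `expr $nlinks - 2`"
    # once find has seen (nlinks - 2) subdirectories in a directory, it can
    # skip stat()ing the remaining entries, saving one syscall per file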
Jose,
I believe the problem is endemic to Solaris. I have run into similar
problems doing a simple find(1) in /etc. On Linux, a find operation in
/etc is almost instantaneous. On Solaris, it has a tendency to spin
for a long time. I don't know what their use of find might be, but
running …
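One way to make "spins for a long time" concrete is to count the system
calls each find makes; truss and strace are the standard tools on their
respective platforms:

    truss -c find /etc >/dev/null     # Solaris: per-syscall counts and totals
    strace -c find /etc >/dev/null    # Linux equivalent for comparison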
On Wed, Jun 17 at 13:49, Alan Hargreaves wrote:
Another question worth asking here is: is a find over the entire
filesystem something that they would expect to be executed with
sufficient regularity that the execution time would have a business
impact?
Exactly. That's such an odd business …
Jose,
I hope our OpenStorage experts weigh in on "is this a good idea"; it
sounds scary to me, but I'm overly cautious anyway. I did want to raise
the question of other client expectations for this opportunity: what
are the intended data protection requirements, and how will they back
up and recover …
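On the backup and recovery question, ZFS's native path is snapshot plus
send/receive; a sketch with made-up pool and host names:

    zfs snapshot tank/bigfs@nightly
    zfs send tank/bigfs@nightly | ssh backuphost zfs receive backup/bigfs
    # later nights can send only the delta:
    # zfs send -i @nightly tank/bigfs@nightly2 | ssh backuphost zfs receive backup/bigfs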
On 16 Jun 2009, at 19:55, Jose Martins wrote:
Hello experts,
IHAC that wants to put more than 250 million files under a single
mountpoint (in a directory tree with no more than 100 files in each
directory).
He wants to share such a filesystem over NFS and mount it on many
Linux Debian clients …
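At no more than 100 files per directory, 250 million files implies at least
2.5 million directories, so the layout has to be generated. A hypothetical
three-level hashed scheme (names and paths are mine, not the customer's):

    # place each file under /export/bigfs/aa/bb/cc based on a hash of its name
    f="somefile.dat"
    h=`printf '%s' "$f" | md5sum | cut -c1-6`    # assumes GNU md5sum on the clients
    d="/export/bigfs/`echo $h | cut -c1-2`/`echo $h | cut -c3-4`/`echo $h | cut -c5-6`"
    mkdir -p "$d" && mv "$f" "$d/"
    # 16^6 (about 16.7M) leaf directories keeps each one far below 100 files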
Another question worth asking here is: is a find over the entire
filesystem something that they would expect to be executed with
sufficient regularity that the execution time would have a business
impact? Part of the problem that I come across with people
"benchmarking" is that they don't be…
Hi Jose,
Would enabling the SSD (cache device usage) only for metadata help?
This assumes you have a read-optimized SSD in place.
I have never tried it out, but it seems worth trying just by turning it on.
Regards,
Paisit W.
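What Paisit suggests maps onto two existing knobs: an L2ARC cache device on
the pool, and the per-dataset secondarycache property; device and pool names
below are placeholders:

    zpool add tank cache c1t5d0             # attach the read-optimized SSD as L2ARC
    zfs set secondarycache=metadata tank    # cache only metadata, not file data
    zfs get secondarycache tank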
Jose Martins wrote:
Hello experts,
IHAC that wants to put more than 250 million files on a …