On 29/04/2015 09:31 AM, Götz Reinicke - IT Koordinator wrote:
Hi,
maybe someone has a working solution and information on this:
I installed the most recent MySQL Community release on a server and get a
lot of "errno: 24 - Too many open files" errors.
There are suggestions to increase the open_files_lim
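A quick way to see which limit is actually in force is to read the kernel's per-process view. This is a minimal sketch, assuming the server binary is named mysqld; it falls back to the current shell purely for illustration:

```shell
# Every process's kernel-enforced limits are visible in /proc/<pid>/limits.
# If mysqld is not running, fall back to this shell's own PID ($$).
pid=$(pidof mysqld 2>/dev/null || echo $$)
grep 'Max open files' /proc/"$pid"/limits
```

If the number shown there is lower than what you configured, something between boot and process start (PAM, systemd) is clamping it.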
> -Original Message-
> From: Jim Perrin
> Sent: Tuesday, April 28, 2015 20:45
>
> On 04/28/2015 06:05 PM, Akemi Yagi wrote:
> > On Tue, Apr 28, 2015 at 3:10 PM, Johnny Hughes wrote:
> >
> >> CentOS is not approved for DOD use. In fact, CentOS is not now, nor
> >> has it ever been
Gotz,
This is due to systemd; it overrules your settings. Adding a file to the
systemd config fixes it:
[root@mysql2 ~]# cat /etc/systemd/system/mariadb.service.d/limits.conf
[Service]
LimitNOFILE=infinity
LimitMEMLOCK=infinity
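A sketch of putting such a drop-in in place (assuming the service is named mariadb; the limit values are illustrative, and `unit_dir` defaults to a scratch path under /tmp so the commands can be dry-run; on a real host use /etc/systemd/system/mariadb.service.d):

```shell
# Create the drop-in directory and the override file, then reload systemd.
# unit_dir defaults to /tmp for a safe dry-run; on a real host set
# unit_dir=/etc/systemd/system/mariadb.service.d before running.
unit_dir=${unit_dir:-/tmp/mariadb.service.d}
mkdir -p "$unit_dir"
cat > "$unit_dir/limits.conf" <<'EOF'
[Service]
LimitNOFILE=infinity
LimitMEMLOCK=infinity
EOF
# On the real host, systemd only reads drop-ins after a reload, and the
# service must be restarted to pick up the new limits:
# systemctl daemon-reload && systemctl restart mariadb
cat "$unit_dir/limits.conf"
```

Note that editing the unit file itself is overwritten on package updates; a drop-in under `<unit>.service.d/` survives them.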
On Wed, Apr 29, 2015 at 8:31 AM, Götz Reinicke - IT Koordinator <
goetz.reini...
Hi Johan,
Does systemd also overrule /etc/my.cnf?
Thx!
Carl
On Wed, 29 Apr 2015 14:58:52 +0200
Johan Kooijman wrote:
> Gotz,
>
> This is due to systemd; it overrules your settings. Adding a file to the
> systemd config fixes it:
>
> [root@mysql2 ~]# cat /etc/systemd/system/mariadb.service.d/limits
We have a "compute cluster" of about 100 machines that do a read-only
NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
these boxes are analysis/simulation jobs that constantly read data off
the NAS.
We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5.
We did
Matt Garman wrote:
> We have a "compute cluster" of about 100 machines that do a read-only
> NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
> these boxes are analysis/simulation jobs that constantly read data off
> the NAS.
>
> We recently upgraded all these machines from Cen
--On Wednesday, April 29, 2015 08:35:29 AM -0500 Matt Garman wrote:
All indications are that CentOS 6 seems to be much more "aggressive"
in how it does NFS reads. And likewise, CentOS 5 was very "polite",
to the point that it basically got starved out by the introduction of
the 6.5 boxes.
S
m.r...@5-cent.us wrote:
Matt Garman wrote:
We have a "compute cluster" of about 100 machines that do a read-only
NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
these boxes are analysis/simulation jobs that constantly read data off
the NAS.
*IF* I understand you, I've g
James Pearson wrote:
> m.r...@5-cent.us wrote:
>> Matt Garman wrote:
>>
>>> We have a "compute cluster" of about 100 machines that do a read-only
>>> NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
>>> these boxes are analysis/simulation jobs that constantly read data off
>>> the
On Tue, Apr 28, 2015 at 4:05 PM, Akemi Yagi wrote:
> Incidentally, someone has just started a thread related to DoD in the
> RH community discussion session entitled, "A DoD version of RHEL - A
> money maker for RH? Maybe!" :
>
> https://access.redhat.com/comment/913243
A new comment has been po
On Wed, Apr 29, 2015 at 10:36 AM, Devin Reade wrote:
> Have you looked at the client-side NFS cache? Perhaps the C6 cache
> is either disabled, has fewer resources, or is invalidating faster?
> (I don't think that would explain the C5 starvation, though, unless
> it's a secondary effect from retr
On Wed, Apr 29, 2015 at 10:51 AM, wrote:
>> The server in this case isn't a Linux box with an ext4 file system - so
>> that won't help ...
>>
> What kind of filesystem is it? I note that xfs also has barrier as a mount
> option.
The server is a NetApp FAS6280. It's using NetApp's filesystem. I
Carl,
By default my.cnf has to obey the OS limits, so in this case the order of
precedence is: systemd > /etc/security/limits* > /etc/my*.
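The reason for that ordering is that a process simply inherits whatever limits are in force when it starts: a systemd-started daemon gets its unit's LimitNOFILE, a login shell gets the pam_limits values from /etc/security/limits*, and my.cnf's open_files_limit can only raise mysqld up to that inherited ceiling. A quick illustration from any shell:

```shell
# The soft limit is what applies now; the hard limit is the ceiling an
# unprivileged process may raise itself to (e.g. via open_files_limit).
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```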
On Wed, Apr 29, 2015 at 3:22 PM, Carl E. Hartung wrote:
> Hi Johan,
>
> Does systemd also overrule /etc/my.cnf?
>
> Thx!
>
> Carl
>
> On Wed, 29 Apr 2015 14:58:52 +0200
I have cronie-noanacron installed on a fresh CentOS 7 install.
I added these settings:
nano /etc/cron.d/0hourly
*/5 * * * * root run-parts /etc/cron.fiveminutes
*/1 * * * * root run-parts /etc/cron.minute
0,30 * * * * root run-parts /etc/cron.halfhour
and then created the directories for it. Now I
Check selinux context for directory?
On 30.4.2015 12:19 AM, "Matt" wrote:
> I have cronie-noanacron installed on a fresh CentOS 7 install.
>
> I added these settings:
>
> nano /etc/cron.d/0hourly
>
> */5 * * * * root run-parts /etc/cron.fiveminutes
> */1 * * * * root run-parts /etc/cron.minute
> 0,3
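Checking the SELinux context is straightforward. A sketch, assuming the new directories were created under /etc (a scratch directory stands in here so the commands can be dry-run):

```shell
# Show the security context of a directory; on the real host point this
# at /etc/cron.fiveminutes, /etc/cron.minute, /etc/cron.halfhour.
dir=$(mktemp -d)
ls -Zd "$dir"
# If the context is wrong (e.g. dirs created via mv or from $HOME),
# reset to the policy default on the real host:
# restorecon -Rv /etc/cron.fiveminutes /etc/cron.minute /etc/cron.halfhour
```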
Thank you for clarifying this, Johan. Very much appreciated!
On Wed, 29 Apr 2015 22:28:00 +0200
Johan Kooijman wrote:
> Carl,
>
> By default my.cnf has to obey the OS limits, so in this case the order
> of precedence is: systemd > /etc/security/limits* > /etc/my*.
>
> On Wed, Apr 29, 2015 at 3:22 PM, Carl E.
>> You may want to look at NFSometer and see if it can help.
>
> Haven't seen that, will definitely give it a try!
Try "nfsstat -cn" on the clients to see if any particular NFS operations
occur more or less frequently on the C6 systems.
Also look at the "lookupcache" option found in "man nfs":
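A hedged sketch of that comparison (nfsstat ships in the nfs-utils package; comparing its per-operation counters on a C5 and a C6 box shows which operations, e.g. GETATTR or LOOKUP, the C6 clients issue more aggressively):

```shell
# nfsstat -c prints client-side RPC counters; -n restricts to NFS ops.
# Guarded so the snippet degrades gracefully where nfs-utils is absent.
if command -v nfsstat >/dev/null 2>&1; then
    stats=$(nfsstat -cn)
else
    stats="nfsstat not installed (yum install nfs-utils)"
fi
printf '%s\n' "$stats"
# lookupcache is a mount option (see "man nfs"), set at mount time, e.g.:
# nas:/export  /mnt/data  nfs  ro,lookupcache=all  0 0
```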