On 11/12/2012 4:33 AM, Markus Falb wrote:
> I had a look at your screenshot. Output stops at the moment init is
> taking over. I suspect that console output is going elsewhere, maybe to
> a serial console. That way it could well be that the machine is doing
> something but you just can not see it.
On 12/12/2012 7:37 πμ, Gordon Messmer wrote:
> On 12/10/2012 05:01 PM, Nikolaos Milas wrote:
>
>> I still wonder what caused that delay.
> What does "getenforce" output? It sort of looks like you went from an
> SELinux-disabled configuration to an enforcing or permissive
> configuration and requi
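(Side note, a minimal sanity check for that theory on a stock CentOS 5/6 box, assuming nothing exotic:

  getenforce                             # Enforcing, Permissive, or Disabled right now
  grep ^SELINUX= /etc/selinux/config     # mode configured for the next boot
  ls -l /.autorelabel 2>/dev/null        # if this file exists, a full filesystem
                                         # relabel will run on the next boot

If the mode changed from disabled to enforcing/permissive, the full relabel can keep the machine busy for a long time with nothing visible on the console.)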
Hi there,
I've discovered that most of the hard drives used in our cluster have
misaligned partitions, which cripples performance. Is there any way to fix
that without having to delete/recreate properly aligned partitions, then
format them and refill the disks?
I'd be glad not to have to toy with moving sever
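(Not an answer to the data-moving question, but a quick way to confirm the misalignment first; /dev/sda and partition number 1 are placeholders, and align-check needs parted >= 2.1, so on CentOS 5 the fdisk fallback is the one to use:

  parted /dev/sda align-check optimal 1    # reports whether partition 1 is aligned
  fdisk -lu /dev/sda                       # start sectors divisible by 2048 (1 MiB)
                                           # are safe for 4K-sector disks and RAID stripes

)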
On 05/17/2012 03:13 PM, Akemi Yagi wrote:
> On Thu, May 17, 2012 at 11:33 AM, Steve Clark wrote:
>> On 05/05/2012 04:45 PM, Akemi Yagi wrote:
>>
>> On Sat, May 5, 2012 at 12:40 PM, Steve Clark wrote:
>>
>> http://bugs.centos.org/view.php?id=5709
>>
>> I actually took the latest centosplus kernel
On Wed, Dec 12, 2012 at 7:41 AM, Steve Clark wrote:
> On 05/17/2012 03:13 PM, Akemi Yagi wrote:
>
> On Thu, May 17, 2012 at 11:33 AM, Steve Clark wrote:
>
> On 05/05/2012 04:45 PM, Akemi Yagi wrote:
>
> On Sat, May 5, 2012 at 12:40 PM, Steve Clark wrote:
>
> http://bugs.centos.org/view.php?id=57
On 12/12/2012 11:02 AM, Akemi Yagi wrote:
> On Wed, Dec 12, 2012 at 7:41 AM, Steve Clark wrote:
>> On 05/17/2012 03:13 PM, Akemi Yagi wrote:
>>
>> On Thu, May 17, 2012 at 11:33 AM, Steve Clark wrote:
>>
>> On 05/05/2012 04:45 PM, Akemi Yagi wrote:
>>
>> On Sat, May 5, 2012 at 12:40 PM, Steve Clar
On 12.12.2012 11:51, Nikolaos Milas wrote:
> On 11/12/2012 4:33 AM, Markus Falb wrote:
>
>> I suspect that console output is going elsewhere, maybe to
>> a serial console. That way it could well be that the machine is doing
>> something but you just can not see it.
>>
>> My first bet would have be
On 12/12/2012 7:35 PM, Markus Falb wrote:
> Sadly, boot.log on my CentOS 5 machines is empty and so will be yours.
Yes, I had already checked; it's always 0 bytes...
Thanks for your info.
Nick
On Tue, Dec 11, 2012 at 1:58 AM, Nicolas KOWALSKI
wrote:
> On Mon, Dec 10, 2012 at 11:37:50AM -0600, Matt Garman wrote:
>> OS is CentOS 5.6, home directory partition is ext3, with options
>> “rw,data=journal,usrquota”.
>
> Is the data=journal option really wanted here? Did you try with the
> other
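(For anyone comparing, the difference is just the data= mount option in /etc/fstab; the device and mount point below are made-up placeholders, and ext3 will not switch journaling modes on a live remount, so it takes an unmount/mount or a reboot:

  /dev/vg0/home  /home  ext3  rw,data=ordered,usrquota  0 2   # the default mode
  /dev/vg0/home  /home  ext3  rw,data=journal,usrquota  0 2   # what is in use now

)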
On Tue, Dec 11, 2012 at 2:24 PM, Dan Young wrote:
> Just going to throw this out there. What is RPCNFSDCOUNT in
> /etc/sysconfig/nfs?
It was 64 (upped from the default of... 8 I think).
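(For reference, on CentOS 5/6 that knob lives in /etc/sysconfig/nfs and only takes effect when nfsd is restarted; a minimal sketch:

  # /etc/sysconfig/nfs
  RPCNFSDCOUNT=64

  service nfs restart
  ps ax | grep -c '\[nfsd\]'     # confirm how many nfsd threads are actually running

)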
On Tue, Dec 11, 2012 at 4:01 PM, Steve Thompson wrote:
> This is in fact a very interesting question. The default value of
> RPCNFSDCOUNT (8) is in my opinion way too low for many kinds of NFS
> servers. My own setup has 7 NFS servers ranging from small ones (7 TB disk
> served) to larger ones (25
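(One rough way to tell whether the current thread count is a bottleneck, assuming the stats format of the EL5/EL6 kernels:

  grep ^th /proc/net/rpc/nfsd
  # "th 64 0 ..." -- the first field after "th" is the thread count, the second is
  # how many times every thread was busy at once; a steadily climbing second number
  # suggests more threads (or faster disks) are needed

)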
On Wed, Dec 12, 2012 at 12:29 AM, Gordon Messmer wrote:
> That may be difficult at this point, because you really want to start by
> measuring the number of IOPS. That's difficult to do if your
> applications demand more than your hardware currently provides.
Since my original posting, we tempor
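(If it helps anyone else sizing this: a crude way to measure the current load on the existing server, assuming the sysstat package is installed:

  iostat -x 5      # r/s + w/s per device is roughly the IOPS being served;
                   # %util pinned near 100 means that spindle set is saturated

)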
On Wed, Dec 12, 2012 at 1:52 PM, Matt Garman wrote:
>>
> I agree with all that. Problem is, there is a higher risk of storage
> failure with RAID-10 compared to RAID-6.
Does someone have the real odds here? I think the big risks are
always that you have unnoticed bad sectors on the remaining
mi
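(Back-of-the-envelope only, and assuming independent failures, which rebuild stress and latent bad sectors violate: with one disk already dead in a 12-disk RAID-10, a second concurrent failure only kills the array if it hits the dead disk's mirror partner, roughly 1 chance in 11, about 9%. RAID-6 survives any two failures, but a rebuild has to read every sector of every surviving disk, which is exactly where unnoticed bad sectors show up.)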
On 12/12/2012 12:16 PM, Les Mikesell wrote:
> On Wed, Dec 12, 2012 at 1:52 PM, Matt Garman wrote:
>> I agree with all that. Problem is, there is a higher risk of storage
>> failure with RAID-10 compared to RAID-6.
> Does someone have the real odds here? I think the big risks are
> alway
On Wed, 12 Dec 2012, Matt Garman wrote:
> Could you perhaps elaborate a bit on your scenario? In particular,
> how much memory and how many CPU cores do the servers have with the really
> high NFSD counts? Is there a rule of thumb for nfsd counts relative to the
> system specs? Or, like so many IO tunin
Matt Garman wrote:
> On Wed, Dec 12, 2012 at 12:29 AM, Gordon Messmer
> wrote:
> As I typed that, I realized we technically do have a hardware
> backup---the other server I mentioned. But even the time to restore
> from backup would make a lot of people extremely unhappy.
>
> How do most people
On Wed, Dec 12, 2012 at 2:24 PM, John R Pierce wrote:
>
>>> I agree with all that. Problem is, there is a higher risk of storage
>>> failure with RAID-10 compared to RAID-6.
>> Does someone have the real odds here? I think the big risks are
>> always that you have unnoticed bad sectors on the
On 12/12/2012 09:36 AM, Laurent Wandrebeck wrote:
> I've discovered that most of the hard drives used in our cluster have
> misaligned partitions, which cripples performance. Is there any way to fix
> that without having to delete/recreate properly aligned partitions, then
> format them and refill the disks?
>
On Wed, Dec 12, 2012 at 9:36 AM, Laurent Wandrebeck
wrote:
>
> I've discovered that most of the hard drives used in our cluster have
> misaligned partitions, which cripples performance. Is there any way to fix
> that without having to delete/recreate properly aligned partitions, then
> format them and refi
On 11.12.2012 10:15, Leon Fauster wrote:
> On 11.12.2012 at 03:24, Zippy Zeppoli wrote:
>> I am trying to get the debug version of httpd so I can use it in
>> conjunction with gdb. I am having a hard time getting them, and they don't
>> seem to be in the standard epel-debuginfo repository. What sh
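(A hedged pointer, since the exact repo layout varies by release: httpd is a base/updates package, so its debuginfo comes from the CentOS debuginfo repository at debuginfo.centos.org, not from EPEL; with yum-utils installed, something like this usually does it:

  yum install yum-utils gdb
  debuginfo-install httpd      # resolves and pulls httpd-debuginfo plus the
                               # debuginfo for its dependencies, provided the
                               # CentOS debuginfo repo is configured and enabled

)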