[CentOS] CentOS 5.5/5.6 on Sandy Bridge

2011-01-14 Thread Kenni Lund
Hi list

Has anyone tried to install CentOS 5.5 on a system with one of the new
Sandy Bridge processors with integrated GPU? I can live with bad X11
performance - I'm happy as long as I get an X11 desktop (with VESA or
whatever) with no crashes :) I'll mostly use this system as a KVM host
with a VNC server, but I'd like to be able to hook a screen up to it.

CentOS 5.6/6.0 will undoubtedly improve the situation if there are
issues with 5.5; I'm just curious whether it works at the moment with
the current CentOS 5.5 + a default Gnome environment.

Thanks,
Kenni
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Network bandwidth tools. (long)

2011-01-16 Thread Kenni Lund
2011/1/16  :
> Barry Brimer wrote:
> At the risk of pissing off the list for such a long
> post

Personally, I never get pissed off by long mails, but I do get
pissed off when people keep changing the subject (and/or use broken
mail clients)...like:
"Network bandwidth tools"
"Network bandwidth tools || RPM Builder"
"Network bandwidth tools (long)"


Re: [CentOS] Intel DH67BL + CentOS 5.5 IRQ #177 nobody cared

2011-01-20 Thread Kenni Lund
2011/1/18 Drew Weaver :
> Because the installer doesn't have drivers for the onboard and all of our
> installs are PXE and in general it removes a lot of confusion by just
> disabling the onboard NIC and having one single NIC for everything.

Drew, out of curiosity (I have a similar motherboard on backorder),
does the latest CentOS kernel have drivers for the onboard NIC after
the installation? If not, I'll probably cancel my order and try a
Gigabyte board instead. Thanks in advance!

Best regards
Kenni


Re: [CentOS] Let's talk about compression rates

2011-01-22 Thread Kenni Lund
2011/1/22 Kai Schaetzl :
> Again, please go somewhere else if you want to discuss general topics. I
> subscribed to this list because of Centos-related issues, not because I
> want to discuss pro's and con's of several compression algorithms. It is
> ok to post the occasional off-topic question, but you are posting *mostly*
> off-topic questions. Please stop this. Thanks.

+1

I'm also starting to get annoyed by this.


Re: [CentOS] RAID support in kernel?

2011-01-31 Thread Kenni Lund
2011/1/30 Michael Klinosky :
> Robert wrote:
>> You are generally *better off* to *disable* the motherboard RAID
>> controller and use native Linux software RAID.
>
> After my research, I'm realizing that linux doesn't quite support it.
> So, I'll probably do as you suggested.

I don't know if "linux doesn't quite support it" is true, but
nevertheless, even if Linux/CentOS had PERFECT support for it, you
still shouldn't use it IMHO.

The whole point of RAID is to give some sort of protection against
hardware (HDD) failures. Fakeraid is a proprietary software RAID
solution, so if your motherboard suddenly decides to die, how will
you then get access to your data? You'll need another
motherboard/system with a fakeraid-compatible controller, but how will
you know whether the new fakeraid controller is compatible with the
RAID set created with the old controller? How will you know whether
the RAID controller has the correct firmware? Your best bet is to buy
exactly the same motherboard (if it's still available at that time)
and put the same BIOS version on it as your old board had.

Using Linux software RAID, you'll get the same performance as fakeraid,
and you can plug your HDDs into any motherboard running Linux to
access your data. Linux's own implementation of software RAID was
introduced in kernel 2.1 (around 1997), so you can be
fairly sure that the solution is well tested - something which is most
likely not the case for a fakeraid controller with limited or partly
missing Linux support.
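As a sketch of that portability (the partition names /dev/sdb1 and /dev/sdc1 and the array node /dev/md0 are assumptions for illustration, and the commands need root on a system with mdadm installed), creating a RAID1 array and later reassembling it on a completely different motherboard looks like this:

```shell
# Create a two-disk RAID1 array (this destroys existing data on the partitions!)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the initial sync progress
cat /proc/mdstat

# Record the array in the config file so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf

# On a *different* machine, the same disks can simply be scanned
# and reassembled - no vendor-specific controller required:
mdadm --assemble --scan
```

The array metadata lives on the disks themselves, which is exactly why the disks can move between machines.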

The only valid reason to run fakeraid that I can think of is if you're
going to run Windows on it.

Best regards
Kenni


Re: [CentOS] RAID support in kernel?

2011-01-31 Thread Kenni Lund
2011/1/31 Steve Brooks :
> On Mon, 31 Jan 2011, Les Bell wrote:
>
>>
>> Kenni Lund  wrote:
>>
>>>>
>> Fakeraid is a proprietary software RAID
>> solution, so if your motherboard suddently decides to die, how will
>> you then get access to your data?
>> <<
>>
>> Obviously, you restore it from a backup. RAID is not a substitute for
>> backups.
>>
>> Best,
>>
>> --- Les Bell
>
> Hmm... What percentage of home users keep backups of their systems and
> data .. not enough me thinks?

Ditto...I have backups of all of my important data at home, but not of
the operating systems or of the less important data. When something
breaks, I'll have a backup of all the important stuff, but I'll still
need to spend time reinstalling the operating system, configuring
it, etc. I think this is true for most home users.

Anyway, the point is not to use RAID as a backup system, since it
obviously isn't one, but simply not to lock yourself into a doubtful,
vendor-specific software RAID solution when there's a much more
portable solution integrated in the kernel, which at the same time is
probably better tested and more free of bugs.

Best regards
Kenni


Re: [CentOS] We haven't had a lot of demand for Fedora...people seem okay with CentOS!

2011-02-20 Thread Kenni Lund
2011/2/18 Larry Vaden :
> That just in from chunkhost.com, where you help them beta test Xen for $FREE 
> :)

Wow, I'm really impressed with the professionalism of that site :-P

Quotes from their FAQ:
"Currently, our physical servers are ... ... with RAID 1 (mirroring)
10K SATA drives. The mirroring means double read performance and
complete redundancy."
"Do you have an SLA (Service Level Agreement)? - Yes! We offer an
unconditional 100% network and server uptime guarantee."

Heh...double read performance on RAID1 and 100% guaranteed uptime
(hey! that's more than Amazon EC2!!!). Oh well, I'm not putting my
stuff on their servers anyway.

Best regards
Kenni


Re: [CentOS] how to optimize CentOS XEN dom0?

2011-02-22 Thread Kenni Lund
2011/2/23 Rudi Ahlers :
> Hi,
>
> I have a problematic CentOS XEN server and hope someone could point me
> in the right direction to optimize it a bit.

(SNIP)

> the server itself seems to eat up a lot of resources:
>
>
> root@zaxen01:[~]$ free -m
>             total       used       free     shared    buffers     cached
> Mem:           512        472         39          0         13        215
> -/+ buffers/cache:        244        268
> Swap:         4095          0       4095

244MB RAM in use and 0MB swap...looks good to me.

> Is there anything I can optimize on such a server?

It's hard to give any advice without further information about what
the problem actually is.

Best regards
Kenni


Re: [CentOS] task md1_resync:9770 blocked for more than 120 seconds and OOM errors

2011-03-20 Thread Kenni Lund
2011/3/20 Alexander Farber 

> Hello,
>
> yesterday night I had a problem with
> my server located at a hoster (strato.de).
> I couldn't ssh to it and over the remote serial console
> I saw "out of memory" errors (sorry, don't have the text).
>
> Then I had reinstall CentOS 5.5/64 bit + all my setup (2h work),
> because I have a contract with a social network and
> they will shut down my little card game if it is not reponding.
>
> Now the server seems to work ok,
> but I'm worried about those /var/log/message:
>
>  kernel: INFO: task md1_resync:9770 blocked for more than 120 seconds.
>  kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
>

My guess is that you only saw these messages while the RAID sync was still
going on? You got those messages because the system I/O was being stressed,
which periodically hung the system.

I wouldn't worry about it if your RAID is now in sync and you don't see the
error messages anymore. You can lower the I/O stress of the system during a
RAID resync by setting a lower maximum speed (in KB/s) in
/proc/sys/dev/raid/speed_limit_max (the default is 200,000 KB/s, ~200 MB/s).
This will of course also extend the time needed to complete the sync (which
can also be bad, as you want it back in sync as fast as possible).
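As a quick sketch (run as root; the 50000 KB/s ceiling below is just an example value, not a recommendation), lowering the cap and watching the effect looks like this:

```shell
# Check the current resync ceiling (in KB/s; the upstream default is 200000)
cat /proc/sys/dev/raid/speed_limit_max

# Lower the ceiling so a resync leaves I/O headroom for normal work
echo 50000 > /proc/sys/dev/raid/speed_limit_max

# Watch the resync progress and the current sync speed
cat /proc/mdstat
```

Note the setting is not persistent across reboots; on CentOS you would put it in /etc/sysctl.conf (as dev.raid.speed_limit_max) to make it stick.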

Best regards
Kenni


Re: [CentOS] task md1_resync:9770 blocked for more than 120 seconds and OOM errors

2011-03-20 Thread Kenni Lund
2011/3/20 Alexander Farber 
>
> Thank you, I've decreased
> /proc/sys/dev/raid/speed_limit_max
> from 200000 to 100000.

200000 is just the theoretical maximum. If your disks max out at
80000 KB/s, you'll need to set it lower than that. While syncing, you
can check the current sync speed with:
cat /proc/mdstat
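As a sketch of what to look for (the resync line below is hypothetical sample output, formatted like a real /proc/mdstat progress line; on a live system you would read the file itself), the current speed is the speed= field:

```shell
# Hypothetical resync progress line, standing in for real /proc/mdstat output
sample='      [=>...................]  resync =  9.2% (44620800/484090560) finish=71.5min speed=102400K/sec'

# Extract the current sync speed field with standard tools
speed=$(printf '%s\n' "$sample" | grep -o 'speed=[0-9]*K/sec')
echo "$speed"
```

Here 102400K/sec means the array is syncing at roughly 100 MB/s, i.e. still well below the default 200000 KB/s ceiling.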

> I think I don't care about the sync speed,
> but I'd like to avoid the OOM errors and
> server lockup like I had yesterday

AFAIK, the errors are harmless; it's a locking bug in the kernel
which just hasn't been fixed in CentOS 5 yet. It's not related to
any out-of-memory errors, and hence most likely not related to the
lockup you experienced.

2011/3/20 Markus Falb :
> https://bugzilla.redhat.com/show_bug.cgi?id=573106#c31

Ah, yes, I forgot about that bug report. According to it, the
issue has been fixed in the upstream 5.6 kernel...so it will get
fixed in CentOS 5.6.

> I do not see how decreasing the speed_limit_max should avoid the
> mdX_resync warnings. I would expect more of these warnings now, because
> sync takes longer?

Hmm, I received the same error messages on a Core i7 system I
installed recently. While syncing, the system was close to being
completely unresponsive (it took ages just to get an SSH connection).
After limiting the I/O by setting a lower maximum sync speed, the
system became responsive and the messages disappeared. Comment #36 in
the bug report actually suggests the same workaround.

Best regards
Kenni


Re: [CentOS] Virtualization platform choice

2011-03-29 Thread Kenni Lund
Den 29/03/2011 15.41 skrev "David Sommerseth" :
> This makes me wondering how well it would go to migrate from SL6 to CentOS
> 6, if all KVM guests are on dedicated/separate LVM volumes and that you
> take a backup of /etc/libvirt.  So when CentOS6 is released, scratch SL6
> and install CentOS6, put back the SL6 libvirt configs ... would there be
> any issues in such an approach?

I would not expect any issues at all, I would expect it to "just
work". As long as you use CentOS6+/SL6+ (or Fedora 12+) *with* the
libvirtd/virsh/virt-manager management tools, you shouldn't run into
any major problems. This is because RH has implemented a stable guest
ABI and stable guest PCI addresses, so the virtual hardware will
remain the same on different KVM/libvirt hosts.

> And what about other KVM based host OSes?

That depends on a lot of things...in general, if you're not using one
of the RH-based distributions mentioned above, use the latest version
of the distribution in question, to hopefully pick up some of the bits
the RH distributions have pushed upstream. Luckily, things are slowly
stabilizing, so it should only be a question of time before any
distributions with a recent kernel, a recent qemu-kvm executable and a
recent libvirt version are compatible with each other in terms of
moving KVM guests around.

The main problem is Windows guests, which easily choke on hardware
changes (forced reactivation of Windows, or an unbootable system with
a BSOD). Each qemu-kvm version behaves differently, so moving from one
major qemu-kvm version to another (0.1x -> 0.1y) will most likely
change the virtual hardware seen by the guest, unless you have libvirt
etc. configured to keep track of the guest hardware.

If it's only Linux guests, it should work fine to move the guests
between any recent Linux distributions with KVM. Of course, if you
don't use libvirt or a similar management solution, the hardware in
the guest will likely change when moving to a new KVM host, for
example causing the MAC addresses of your NICs to change.

Best regards
Kenni


Re: [CentOS] Virtualization platform choice

2011-03-31 Thread Kenni Lund
2011/3/31 David Sommerseth :
> On 29/03/11 21:13, Kenni Lund wrote:
>> The main problem is Windows guests, which easily chokes on hardware
>> changes (forced reactivation of Windows or unbootable with BSOD). Each
>> qemu-kvm version will behave differently, so moving from one major
>> qemu-kvm version to another (0.1x -> 0.1y), will most likely change
>> the virtual hardware seen by the guest, unless you have libvirt etc.
>> configured to keep track of the guest hardware.
>
> Do you know how to set up this?  Or where to look for more details about
> this?  I do have one Windows guest, and I can't break this one.

AFAIR, the BSODs I've seen while moving Windows 2003 Server guests to
new hosts (Fedora 7->8->9->10->11->CentOS 5) were caused by old VirtIO
block drivers in the guest. If you've installed recent VirtIO drivers
in the guest (like virtio-win from Fedora) and are using a recent
kernel/qemu-kvm on the host, then I don't think you'll see any BSOD
or breakage of the guest. You'll need to reactivate once, but that
should be it.

If it were me moving to CentOS/SL 6 from a non-RH distribution with a
different libvirt/qemu-kvm version, I would not use the old
configuration file directly. Instead, I would create a similar guest
from scratch with virt-manager/virt-install, shut down the guest
before installing anything, overwrite the new (empty) image with the
old backup image, and then compare the old XML configuration with the
new one and manually carry over specific settings, if needed. On first
boot you'll probably have to reactivate Windows, but at least you now
know that the libvirt XML configuration for the guest is compatible
with CentOS/SL 6+, and hence that the guest hardware will stay stable
across future host upgrades.

You can read some more about it here:
http://fedoraproject.org/wiki/Features/KVM_Stable_PCI_Addresses
http://fedoraproject.org/wiki/Features/KVM_Stable_Guest_ABI

Best regards
Kenni


Re: [CentOS] Convert Filesystem to Ext4

2011-04-19 Thread Kenni Lund
Den 19/04/2011 19.42 skrev "Matt" :
>
> On a running 64 bit CentOS 5.6 box is it possible to convert from Ext3
> to Ext4 to improve performance?

This is entirely from memory, so it might be incorrect or no longer
relevant: when ext4 was released, it was possible to upgrade ext3 to
ext4, but while you would gain some ext4 features and minor performance
improvements, the only way to get native ext4 performance was to delete
and recreate the filesystem.
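From (equally fuzzy) memory, the conversion itself was done with tune2fs followed by a forced fsck. A sketch on a throwaway file-backed image rather than a real disk (the /tmp path and 16 MB size are arbitrary; on a real system you would point tune2fs at an unmounted partition and take a backup first):

```shell
# Create a small file-backed ext3 filesystem to demonstrate on (no root needed)
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mkfs.ext3 -Fq /tmp/demo.img

# Turn on the main ext4 on-disk features on the existing filesystem
tune2fs -O extents,uninit_bg,dir_index /tmp/demo.img

# A forced fsck is mandatory after changing these features
# (it exits non-zero while fixing up the group checksums, hence || true)
e2fsck -fy /tmp/demo.img >/dev/null || true

# The filesystem now mounts as ext4, but note: files created while it
# was ext3 keep the old indirect block mapping - only new files get
# extents, which is why full native performance needs a recreated fs.
tune2fs -l /tmp/demo.img | grep 'Filesystem features'
```

That last caveat is exactly the "minor improvements only" effect: the feature flags change, but the existing data layout does not.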

Best regards
Kenni


Re: [CentOS] Still a kvm problem after 5.6 upgrade

2011-04-21 Thread Kenni Lund
2011/4/21 Johnny Hughes :
> On 04/21/2011 06:11 AM, David McGuffey wrote:
>> redlibvirtError: internal error Process exited while reading console log
>> output: qemu: could not open disk image /dev/hda
>
> You should not need to do anything in virsh to dump a file ... there
> should be an xml file in /etc/libvirt/qemu/ for every VM already.

The XML files in /etc/libvirt/qemu represent libvirt-defined VMs; you
should never edit these files directly while the libvirtd service is
running. Use either 'virsh edit [vm_name]' or, alternatively,
'virsh dumpxml' followed by 'virsh define'. If you edit the file
directly while some manager is running (like virt-manager in CentOS),
your changes will most likely conflict with, or get overwritten by,
virt-manager. Nothing critical should happen, but I don't see any
reason to encourage doing it The Wrong Way(TM).
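As a sketch of the two supported routes (the guest name "myguest" and the /tmp path are assumptions; both need a running libvirtd):

```shell
# 1) Edit in place - libvirt opens the XML in $EDITOR, validates it,
#    and re-reads the result itself
virsh edit myguest

# 2) Or dump the definition, modify the copy, and re-define it
virsh dumpxml myguest > /tmp/myguest.xml
#    ...edit /tmp/myguest.xml with your editor of choice...
virsh define /tmp/myguest.xml
```

Either way libvirtd stays in charge of the file under /etc/libvirt/qemu, so nothing gets silently overwritten.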

Best regards
Kenni


Re: [CentOS] CentOS 5.6 and KVM failure

2011-04-21 Thread Kenni Lund
2011/4/21 Ian Forde :
> Turns out that wasn't the only problem I faced in my migration.  With 2
> KVM servers, both sharing a volume mounted via NFS for VMs, I migrated
> all VMs to the second node, upgraded the first, them moved them all back
> to KVM1.  Instant disk corruption on all VMs.  Boom.

Are you sure it was the migration and not the raw/qcow2 error which
caused the disk corruption?

I just had two Windows Servers with image corruption after upgrading
from 5.5 to 5.6 and booting the first time with the raw setting,
before changing it to qcow2 :-/

These two images were both on the same host, which is plain CentOS 5
*BUT* with a 2.6.37 kernel (and therefore 2.6.37 KVM module) from
elrepo...

It could be that my special case of running a vanilla KVM module with
the CentOS KVM userspace is what allows the corruption to happen, but
if other people are seeing disk corruption with the regular
kernel/kmod-kvm, then this "known issue" should probably get a big
fat red warning in the release notes..

Best regards
Kenni


Re: [CentOS] Attaching LinkSys WRT54GL to CentOS machine

2011-04-24 Thread Kenni Lund
2011/4/24 Timothy Murphy :
> I have a LinkSys WRT54GL router,
> which I would like to attach to my CentOS-5.6 server,
> to set up a LAN 192.168.2.* .
> The server is attached to the internet
> through a Billion modem/router which has a single ethernet outlet.
>
> The instructions for the LinkSys router
> assume that it is being attached directly to an ADSL modem.
> But for various reasons I want everything to go through my server.

Without any information on what the purpose of such a setup would be,
it's close to impossible to give you any recommendations. Is it
because you want to use your CentOS system as a firewall? a router? a
HTTP proxy? a network sniffer?

Or is it because you only have one external ethernet outlet and you
want to access the internet from your other systems, while the
services on your server can still be accessed from the outside? In the
latter case, you would normally just put your server on the LAN and do
port forwarding on your router. If it's because you want your server
to be "outside" of your LAN, a more correct approach would be to set
up a DMZ zone on your router, dedicate one of the LAN ports as the DMZ
port and connect your server there.
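If the server itself ends up doing the forwarding, a minimal sketch with iptables looks like this (run as root; the interface name eth0, the LAN host 192.168.2.10 and the port numbers are all assumptions for illustration):

```shell
# Forward incoming TCP port 8080 on the external interface
# to a web server on a LAN host behind this box
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
         -j DNAT --to-destination 192.168.2.10:80

# Let the forwarded traffic pass the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.2.10 --dport 80 -j ACCEPT

# And make sure IP forwarding is enabled at all
echo 1 > /proc/sys/net/ipv4/ip_forward
```

But again: whether this makes sense depends entirely on what the setup is supposed to achieve.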

> I wonder if anyone has set up a system like this?

Perhaps, perhaps not, depends on what the purpose of the system is.

Best regards
Kenni