[CentOS] Unable to install 64 bit on x3650

2009-07-08 Thread Rajagopal Swaminathan
Greetings,

I am trying to install CentOS 5.3 on an IBM x3650 with an Adaptec-based
ServeRAID 8k.

Just two disks in RAID 1.

When trying to boot with CD1, the boot process aborts with a kernel
panic about mounting root on an unknown-block device, plus a message
saying to use the correct 'root=' boot parameter.

Kindly help

Thanks and Regards

Rajagopal
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Unable to install 64 bit on x3650

2009-07-08 Thread Rajagopal Swaminathan
Greetings

On Wed, Jul 8, 2009 at 1:01 PM, Rajagopal
Swaminathan wrote:
> Greetings,
>
> I am trying to install Centos 5.3 on IBM x3650 with Adaptec based ServeRAID 8k

Is this stuff real RAID or fakeraid?

> When trying boot with CD1 the booting process aborts with a kernel
> panic saying that some unknown-de and some message saying to use the
> correct 'root=' boot parameter.
>

Have I hit this mkinitrd bug?

http://www.linuxquestions.org/questions/syndicated-linux-news-67/lxer-setup-xen-3.4-dom0-on-centos-5.3-64-bit-719969/
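
If it is, and if the panic actually happens on the first boot of the
installed system rather than when booting the install media itself, one
workaround (untested here, and assuming the ServeRAID 8k really is handled
by the aacraid module) would be to boot the CD with "linux rescue", then:

  chroot /mnt/sysimage
  mkinitrd -f --with=aacraid /boot/initrd-2.6.18-128.el5.img 2.6.18-128.el5

adjusting the initrd name and kernel version to whatever is actually
installed.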

Thanks and Regards

Rajagopal
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Is there an openssh security problem?

2009-07-08 Thread Peter Kjellstrom
On Tuesday 07 July 2009, Ray Van Dolson wrote:
> On Tue, Jul 07, 2009 at 10:31:36PM +0200, Geoff Galitz wrote:
> > > is there a security issue on CentOS 5.3 with openssh 4.3?
> >
> > If this is a real zero-day exploit.. then yes, there is an issue.  The
> > following link may be the best source of information at the moment:
> >
> > http://isc.sans.org/diary.html?storyid=6742
> >
> >
> > FWIW, I think the second comment about RHEL/Centos in the referenced post
> > is a little off-base.  After all, you have to know that a bug exists
> > before you can fix it.
>
> This link[1] seems to show a RHEL 5.3 machine being exploited (could be
> wrong though).

The only thing indicating that this is RHEL-5.3 is, afaict, the title. The 
kernel version is not an EL kernel, the mysql version is not either, etc.

Worth keeping an eye on though.

/Peter


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Problems with rpmforge repo?

2009-07-08 Thread Phil Savoie
Hello,

Tried 'yum update' yesterday and today, and it seems there is a perl
dependency missing.  Does anyone know if it is a real problem or just a sync
issue where I should be more patient?  The error is below:

--> Processing Dependency: perl(Compress::Raw::Zlib) >= 2.020 for
package: perl-IO-Compress
--> Finished Dependency Resolution
perl-IO-Compress-2.020-1.el5.rf.noarch from rpmforge has depsolving problems
  --> Missing Dependency: perl(Compress::Raw::Zlib) >= 2.020 is needed
by package perl-IO-Compress-2.020-1.el5.rf.noarch (rpmforge)
Error: Missing Dependency: perl(Compress::Raw::Zlib) >= 2.020 is needed
by package perl-IO-Compress-2.020-1.el5.rf.noarch (rpmforge)

Thanks,

Phil
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Karanbir Singh
On 07/08/2009 10:26 AM, Phil Savoie wrote:
> Hello,
>
> Tried yum update all yesterday and today and seems there is a perl
> dependency missing.  Does anyone know if it is a problem or just a sync
> thing and that I should be more patient.  Error is below:
>

did you go tell them about it ? ( http://lists.rpmforge.net )

-- 
Karanbir Singh : http://www.karan.org/  : 2522...@icq
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Nicolas Thierry-Mieg
Phil Savoie wrote:
> Hello,
> 
> Tried yum update all yesterday and today and seems there is a perl
> dependency missing.  Does anyone know if it is a problem or just a sync
> thing and that I should be more patient.  Error is below:
> 
> --> Processing Dependency: perl(Compress::Raw::Zlib) >= 2.020 for
> package: perl-IO-Compress
> --> Finished Dependency Resolution
> perl-IO-Compress-2.020-1.el5.rf.noarch from rpmforge has depsolving problems
>   --> Missing Dependency: perl(Compress::Raw::Zlib) >= 2.020 is needed
> by package perl-IO-Compress-2.020-1.el5.rf.noarch (rpmforge)
> Error: Missing Dependency: perl(Compress::Raw::Zlib) >= 2.020 is needed
> by package perl-IO-Compress-2.020-1.el5.rf.noarch (rpmforge)


please report these perl issues to the rpmforge list, see:
http://lists.rpmforge.net/pipermail/users/2009-July/002520.html

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Unable to install 64 bit on x3650

2009-07-08 Thread Rajagopal Swaminathan
On Wed, Jul 8, 2009 at 2:09 PM, Rajagopal
Swaminathan wrote:
> Greetings
>
> On Wed, Jul 8, 2009 at 1:01 PM, Rajagopal
> Swaminathan wrote:
>> Greetings,
>>
>> I am trying to install Centos 5.3 on IBM x3650 with Adaptec based ServeRAID 
>> 8k
>
Seems somebody has faced a similar error.
The only difference is the numbers in "unknown-block (8,3)".

http://www.centos.org/modules/newbb/viewtopic.php?viewmode=thread&topic_id=4382&forum=27

any solutions? pointers?

Thanks and Regards

Rajagopal
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Olaf Mueller
Phil Savoie wrote:

Hello.

> Tried yum update all yesterday and today and seems there is a perl
> dependency missing.  Does anyone know if it is a problem or just a
> sync
It is a problem with the rpmforge repo. They rebuilt a lot of the perl
packages, and that will break dependencies for a short (I hope) time.


regards
Olaf

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Les Bell

Karanbir Singh wrote:

>> did you go tell them about it ? ( http://lists.rpmforge.net ) <<

I've contacted Christoph Maser directly about that specific problem, but
have not heard back from him so far (no rush, from my perspective).

I also tried grabbing the SPEC file from rpmforge along with the module
source from CPAN and building, then installing, the missing dependency, but
that just led to a flood of complaints from yum update, so in the end I
backed that out and resolved to wait for a fix.
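
For anyone wanting to try the same route, the rough recipe on a stock
CentOS 5 box (the spec and tarball names below are assumptions, not taken
from rpmforge's actual tree) looks like:

  yum install rpm-build perl gcc
  cp perl-Compress-Raw-Zlib.spec /usr/src/redhat/SPECS/
  cp Compress-Raw-Zlib-2.020.tar.gz /usr/src/redhat/SOURCES/
  rpmbuild -bb /usr/src/redhat/SPECS/perl-Compress-Raw-Zlib.spec
  rpm -Uvh /usr/src/redhat/RPMS/*/perl-Compress-Raw-Zlib-2.020*.rpm

but as I found, mixing a self-built module into the rpmforge dependency
chain tends to upset yum, so waiting may be the simpler option.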

Best,

--- Les Bell
[http://www.lesbell.com.au]
Tel: +61 2 9451 1144


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Getting started with NFS

2009-07-08 Thread Niki Kovacs
Frank Cox a écrit :
> 
> There isn't much to setting up a simple NFS fileserver and client mount.  Set
> up /etc/exports on the server (this assumes your client is 192.168.0.3)
> 
> /whatever/where-ever/ 192.168.0.3(rw)
> 
> Start the nfs service.  Create a mount point on the client
> 
> "mkdir /mnt/fileserver"
> 
>  then mount the fileserver there. 
> 
> "mount fileserver:/whatever/where-ever/ /mnt/fileserver"
> 

OK, I made a fresh start on this and installed two vanilla CentOS 5.3 
systems (GNOME desktops, no tweaks whatsoever) on two sandbox 
machines in my LAN. Everything works all right, out of the box, like a 
charm.

Now I'd like to explore NFS a little further, and the next 
question is: starting from a bare-bones minimal system, what packages do 
I need to make NFS work a) on the server side, and b) on the client 
side? For example, in order to use DHCP on my network, I installed the 
dhcp package for the DHCP server, and on the clients I'm using 
dhclient (already included in the minimal base install).

I have quite a lot of documentation here for CentOS / RHEL, but curiously 
enough, none of it seems to mention the packages needed to make NFS work.

The reason I'm asking: usually I like to install only what's needed.

Any suggestions ?

Niki
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Phil Savoie


Les Bell wrote:
> Karanbir Singh  wrote:
> 
> did you go tell them about it ? ( http://lists.rpmforge.net )
> <<
> 
> I've contacted Christoph Maser directly about that specific problem, but
> have not heard back from him so far (no rush, from my perspective).
> 
> I also tried grabbing the SPEC file from rpmforge along with the module
> source from CPAN and building, then installing, the missing dependency, but
> that just led to a flood of complaints from yum update, so in the end I
> backed that out and resolved to wait for a fix.
> 
> Best,
> 
> --- Les Bell
> [http://www.lesbell.com.au]
> Tel: +61 2 9451 1144
> 
> 
Thanks so much,

Phil
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Getting started with NFS

2009-07-08 Thread Sander Snel
The tool you need is nfs-utils, if you do a
# rpm -qi --provides nfs-utils
you will get output of which software it provides, and some info about 
the package.
Most of the nfs service is handled by the kernel:
"Summary : NFS utlilities and supporting clients and daemons for the 
kernel NFS server."

I hope this helps you one step further

Sander

Niki Kovacs wrote:
> Frank Cox a écrit :
>   
>> There isn't much to setting up a simple NFS fileserver and client mount.  Set
>> up /etc/exports on the server (this assumes your client is 192.168.0.3)
>>
>> /whatever/where-ever/ 192.168.0.3(rw)
>>
>> Start the nfs service.  Create a mount point on the client
>>
>> "mkdir /mnt/fileserver"
>>
>>  then mount the fileserver there. 
>>
>> "mount fileserver:/whatever/where-ever/ /mnt/fileserver"
>>
>> 
>
> OK, I made a fresh start on this and installed two vanilla CentOS 5.3 
> systems (GNOME desktops, no tweaks or whatsoever) on two sandbox 
> machines in my LAN. Everything works all right, out of the box, like a 
> charm.
>
> Now I'd like to explore things NFS a little further, and the next 
> question is: starting from a bare bones minimal system, what packages do 
> I need to make NFS work a) on the server side, and b) on the client 
> side? For example, in order to use DHCP on my network, I installed the 
> dhcp package for a DHCP server, and then on the clients I'm using 
> dhclient (already included in the minimum base install).
>
> I have quite some documentation here for CentOS / RHEL, but curiously 
> enough, none seems to mention the needed packages to make NFS work.
>
> The reason I'm asking: usually I like to install only what's needed.
>
> Any suggestions ?
>
> Niki
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>   

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Is there an openssh security problem?

2009-07-08 Thread Flaherty, Patrick
> is there a security issue on CentOS 5.3 with openssh 4.3? I 
> ask that cause of
> http://www.h-online.com/security/Rumours-of-critical-vulnerabi
> lity-in-OpenSSH-in-Red-Hat-Enterprise-Linux--/news/113712
> and http://secer.org/hacktools/0day-openssh-remote-exploit.html.
> 
> Should ssh login from internet on CentOS better be disabled?
You should always limit access to sensitive services on a machine.
Remote login should be included in that list. Either limit it with a
firewall or in the OpenSSH daemon configuration to certain IPs. Even if
you can only limit it to a class C or class A network, you've still
chopped out a large number of possibly malicious hosts.
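
For example, a minimal sketch (192.168.1.0/24 and 'admin' are just
placeholders, substitute your own trusted network and user):

  # firewall: only accept ssh from the trusted subnet
  iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP

or restrict logins in /etc/ssh/sshd_config and restart sshd:

  PermitRootLogin no
  AllowUsers admin@192.168.1.*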

Patrick 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Getting started with NFS

2009-07-08 Thread Niki Kovacs
Sander Snel a écrit :
> The tool you need is nfs-utils, if you do a
> # rpm -qi --provides nfs-utils
> you will get output of which software it provides, and some info about 
> the package.
> Most of the nfs service is handled by the kernel:
> "Summary : NFS utlilities and supporting clients and daemons for the 
> kernel NFS server."
> 
> I hope this helps you one step further

Yes! I just got it working on two minimal installs. It looks like both the 
server and the client need the nfs-utils package, as it contains the 
mount.nfs and umount.nfs commands.
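
For the record, starting from a minimal install, the whole thing boiled
down to roughly this (the export path and network below are just examples,
not my real setup):

server:
  yum install nfs-utils
  echo '/srv/export 192.168.0.0/24(rw,sync)' >> /etc/exports
  chkconfig portmap on ; chkconfig nfs on
  service portmap start ; service nfs start

client:
  yum install nfs-utils
  mkdir -p /mnt/fileserver
  mount -t nfs fileserver:/srv/export /mnt/fileserver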

Cheers,

Niki
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Les Mikesell
 o wrote:
> Hi,
> 
> I have a program that writes lots of files to a directory tree (around 15 
> million files), and a node can have up to 400,000 files (and I don't have 
> any way to split this amount into smaller ones). As the number of files grows, 
> my application gets slower and slower (the app works something like a 
> cache for another app and I can't redesign the way it distributes files onto 
> disk due to the other app's requirements).
> 
> The filesystem I use is ext3 with the following options enabled:
> 
> Filesystem features:  has_journal resize_inode dir_index filetype 
> needs_recovery sparse_super large_file
> 
> Is there any way to improve performance in ext3? Would you suggest another FS 
> for this situation (this is a production server, so I need a stable one)?
> 
> Thanks in advance (and please excuse my bad english).

I haven't done, or even seen, any recent benchmarks but I'd expect 
reiserfs to still be the best at that sort of thing.   However even if 
you can improve things slightly, do not let whoever is responsible for 
that application ignore the fact that it is a horrible design that 
ignores a very well known problem that has easy solutions.  And don't 
ever do business with someone who would write a program like that again. 
  Any way you approach it, when you want to write a file the system must 
check to see if the name already exists, and if not, create it in an 
empty space that it must also find - and this must be done atomically so 
the directory must be locked against other concurrent operations until 
the update is complete.  If you don't index the contents the lookup is a 
slow linear scan - if you do, you then have to rewrite the index on 
every change so you can't win.  Sensible programs that expect to access 
a lot of files will build a tree structure to break up the number that 
land in any single directory (see squid for an example).  Even more 
sensible programs would re-use some existing caching mechanism like 
squid or memcached instead of writing a new one badly.

-- 
   Les Mikesell
lesmikes...@gmail.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Kwan Lowe
On Wed, Jul 8, 2009 at 2:27 AM,  o <
hhh...@hotmail.com> wrote:

>
> Hi,
>
> I have a program that writes lots of files to a directory tree (around 15
> Million fo files), and a node can have up to 40 files (and I don't have
> any way to split this ammount in smaller ones). As the number of files
> grows, my application gets slower and slower (the app is works something
> like a cache for another app and I can't redesign the way it distributes
> files into disk due to the other app requirements).
>
> The filesystem I use is ext3 with teh following options enabled:
>
> Filesystem features:  has_journal resize_inode dir_index filetype
> needs_recovery sparse_super large_file
>
> Is there any way to improve performance in ext3? Would you suggest another
> FS for this situation (this is a prodution server, so I need a stable one) ?
>

I saw this article some time back.

http://www.linux.com/archive/feature/127055

I've not implemented it, but from past experience, you may lose some
performance initially, but the database fs performance might be more
consistent as the number of files grow.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] dhcp question

2009-07-08 Thread chloe K
Hi 
 
How can I get dhcpd to assign IPs for the eth2 network only?
 
eth1 and eth0 can be ignored
 
thank you


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread Tim Nelson
- "chloe K"  wrote: 
> 
Hi 

How can I do the dhcp to assign ip for eth2 network only? 

eth1 and eth0 can igorn 

thank you 


Edit your /etc/sysconfig/dhcpd file. Ensure the 'DHCPDARGS' line looks like 
this: 

DHCPDARGS=eth2 

Save the file then restart DHCP. 

--Tim 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread chloe K
thank you
 
How can I put the name server in dhcpd.conf for the clients, as I use my ISP's 
nameserver?

This name server has to change whenever I change to another ISP too.
 
eg:
 
dhcpd.conf
 
option domain-name-servers   x.x.x.x;
 
 
thank you

--- On Wed, 7/8/09, Tim Nelson  wrote:


From: Tim Nelson 
Subject: Re: [CentOS] dhcp question
To: "CentOS mailing list" 
Received: Wednesday, July 8, 2009, 12:27 PM




- "chloe K"  wrote: 
> 




Hi 
 
How can I do the dhcp to assign ip for eth2 network only?
 
eth1 and eth0 can igorn
 
thank you


Edit your /etc/sysconfig/dhcpd file. Ensure the 'DHCPDARGS' line looks like 
this:

DHCPDARGS=eth2

Save the file then restart DHCP.

--Tim



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread John R Pierce
chloe K wrote:
> thank you
>  
> how can I put the name server in dhcpd.conf  for the client as I use 
> ISP nameserver?
> this name server has to change when I change other ISP too
>  
>

run your own local DNS caching server, and give the DHCP clients 
192.168.0.1 or whatever your local network gateway IP is.
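
A minimal dhcpd.conf sketch of that idea (all addresses below are
placeholders for whatever the eth2 network really is):

  subnet 192.168.0.0 netmask 255.255.255.0 {
      range 192.168.0.100 192.168.0.200;
      option routers 192.168.0.1;
      option domain-name-servers 192.168.0.1;
  }

That way the clients only ever see the local caching resolver, and nothing
in dhcpd.conf has to change when the ISP does.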


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Mr. X



--- On Wed, 7/8/09, Phil Savoie  wrote:

> From: Phil Savoie 
> Subject: [CentOS] Problems with rpmforge repo?
> To: "CentOS mailing list" 
> Date: Wednesday, July 8, 2009, 2:26 AM
> Hello,
> 
> Tried yum update all yesterday and today and seems there is
> a perl
> dependency missing.  Does anyone know if it is a
> problem or just a sync
> thing and that I should be more patient.  Error is
> below:
> 
> --> Processing Dependency: perl(Compress::Raw::Zlib)
> >= 2.020 for
> package: perl-IO-Compress
> --> Finished Dependency Resolution
> perl-IO-Compress-2.020-1.el5.rf.noarch from rpmforge has
> depsolving problems
>   --> Missing Dependency: perl(Compress::Raw::Zlib)
> >= 2.020 is needed
> by package perl-IO-Compress-2.020-1.el5.rf.noarch
> (rpmforge)

if you are running x86_64 then exclude the i386 package. Try uninstalling 
perl-IO-Compress.i386 first.
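
Something along these lines should do it (the exclude is only needed to
keep the i386 copy from coming back in later):

  yum remove perl-IO-Compress.i386
  yum --exclude='perl-IO-Compress*.i386' update

or add a line like exclude=perl-IO-Compress*.i386 to the rpmforge section
of your yum configuration.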

-- 
Mark


  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread Mfawa Alfred Onen
On Wed, Jul 8, 2009 at 6:03 PM, chloe K wrote:
> thank you
>
> how can I put the name server in dhcpd.conf  for the client as I use ISP
> nameserver?
> this name server has to change when I change other ISP too
>
> eg:
>
> dhcpd.conf
>
> option domain-name-servers   x.x.x.x;
>
>
> thank you

Chloe K,
  I don't really understand your question about setting the name server
option in DHCP, but let's see:

Are you trying to set a domain name, like option domain-name
"chloe.k"; or set the domain-name-servers "x.x.x.x"; ?
You will still have to change the option domain-name-servers "x.x.x.x"
when you change ISPs anyway.

Please just be specific.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Gary Greene
On 7/8/09 8:56 AM, "Les Mikesell"  wrote:
>  o wrote:
>> Hi,
>> 
>> I have a program that writes lots of files to a directory tree (around 15
>> Million fo files), and a node can have up to 40 files (and I don't have
>> any way to split this ammount in smaller ones). As the number of files grows,
>> my application gets slower and slower (the app is works something like a
>> cache for another app and I can't redesign the way it distributes files into
>> disk due to the other app requirements).
>> 
>> The filesystem I use is ext3 with teh following options enabled:
>> 
>> Filesystem features:  has_journal resize_inode dir_index filetype
>> needs_recovery sparse_super large_file
>> 
>> Is there any way to improve performance in ext3? Would you suggest another FS
>> for this situation (this is a prodution server, so I need a stable one) ?
>> 
>> Thanks in advance (and please excuse my bad english).
> 
> I haven't done, or even seen, any recent benchmarks but I'd expect
> reiserfs to still be the best at that sort of thing.   However even if
> you can improve things slightly, do not let whoever is responsible for
> that application ignore the fact that it is a horrible design that
> ignores a very well known problem that has easy solutions.  And don't
> ever do business with someone who would write a program like that again.
>   Any way you approach it, when you want to write a file the system must
> check to see if the name already exists, and if not, create it in an
> empty space that it must also find - and this must be done atomically so
> the directory must be locked against other concurrent operations until
> the update is complete.  If you don't index the contents the lookup is a
> slow linear scan - if you do, you then have to rewrite the index on
> every change so you can't win.  Sensible programs that expect to access
> a lot of files will build a tree structure to break up the number that
> land in any single directory (see squid for an example).  Even more
> sensible programs would re-use some existing caching mechanism like
> squid or memcached instead of writing a new one badly.

In many ways this is similar to issues you'll see in a very active mail or
news server that uses maildir wherein the d-entries get too large to be
traversed quickly. The only way to deal with it (especially if the
application adds and removes these files regularly) is to every once in a
while copy the files to another directory, nuke the directory and restore
from the copy. This is why databases are better for this kind of intensive
data caching.

-- 
Gary L. Greene, Jr.
IT Operations
Minerva Networks, Inc.
Cell:  (650) 704-6633
Phone: (408) 240-1239

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Correct way to disable TCP Segmentation Offload (tso off) in CentOS 5

2009-07-08 Thread Santi Saez
Hi,

What's the correct way to disable TSO (TCP Segmentation Offload) in RHEL5?

I have tried adding those options in ifcfg-ethX configuration file:

# grep ETHTOOL /etc/sysconfig/network-scripts/ifcfg-eth0
ETHTOOL_OPTS="tso off"

And also with:

ETHTOOL_OPTS="-K eth0 tso off"

But after restarting the server, TSO is still enabled:

# ethtool -k eth0
tcp segmentation offload: on

As a temporary solution, I'm executing this command in a start script:

/sbin/ethtool -K eth0 tso off

But I think it should be configurable in the network configuration files. Any 
idea how to solve this? Thanks!
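
One workaround (untested, just a sketch) would be a small /sbin/ifup-local
script, which the stock network scripts run with the interface name after
bringing it up:

  #!/bin/bash
  # /sbin/ifup-local - called by ifup-post for every interface
  if [ "$1" = "eth0" ]; then
      /sbin/ethtool -K "$1" tso off
  fi

(made executable with chmod +x /sbin/ifup-local), which at least ties the
setting to the interface coming up instead of to a boot-time script.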

Regards,

-- 
Santi Saez
http://woop.es
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Flash Drive problem?

2009-07-08 Thread Ron Blizzard
On Wed, Jul 8, 2009 at 12:53 AM, Robert
Nichols wrote:
>
> If there is a significant amount of data that must be written to the
> device, then you get the pop ups.  If the device can immediately be
> made ready to remove, then there are no messages.  In either case,
> when the icon disappears from the desktop the device is safe to
> remove.  The situation gets really messy if the device has more than
> one partition mounted.  You can unmount one partition and get a
> "safe to remove" message while another partition is still mounted.
> I believe that's one reason MS Windows doesn't support multiple
> partitions on these devices.

That makes sense. It does seem like the notice comes up when I've
moved larger files. Thanks -- though I like using the 'sync' command
anyhow. It kind of puts my mind at ease.

-- 
RonB -- Using CentOS 5.3
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Neil Aggarwal
Hello:

According to the Red Hat Virtualization Guide,
Windows Server 2003 32-bit will only run as
a fully virtualized guest on an AMD64 system.

I thought I have seen a lot of discussion about
running paravirtualized Windows on CentOS.  Is
that a bad idea?

Neil

--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Will your e-commerce site go offline if you have
a DB server failure, fiber cut, flood, or other disaster?
If so, ask me about our geographically redundant database system.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problems with rpmforge repo?

2009-07-08 Thread Filipe Brandenburger
Hi,

On Wed, Jul 8, 2009 at 05:26, Phil Savoie wrote:
> Tried yum update all yesterday and today and seems there is a perl
> dependency missing.

As pointed out in the rpmforge mailing list, if you are complaining
about dependency issues not because you need the specific packages
with problems, but because you want to run "yum update" and be able to
get the security updates from the other repositories not affected by
the issue, and if you are on CentOS 5, you may use the "--skip-broken"
option of yum (not available in CentOS 4!) to skip updating packages
for which the dependencies are not met.

This is the command you should run in this case:
# yum --skip-broken update

HTH,
Filipe
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] yum update error

2009-07-08 Thread Filipe Brandenburger
Hi,

On Tue, Jul 7, 2009 at 15:07, Olaf Mueller wrote:
> since today I could not update my CentOS 5.3 system with yum cause of
> the following error message.

As pointed out in the rpmforge mailing list, if you are complaining
about dependency issues not because you need the specific packages
with problems, but because you want to run "yum update" and be able to
get the security updates from the other repositories not affected by
the issue, and if you are on CentOS 5, you may use the "--skip-broken"
option of yum (not available in CentOS 4!) to skip updating packages
for which the dependencies are not met.

This is the command you should run in this case:
# yum --skip-broken update

HTH,
Filipe
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Victor Padro
2009/7/8 Neil Aggarwal :
> Hello:
>
> According to the Red Hat Virtualization Guide,
> Windows Server 2003 32-bit will only run as
> a fully virtualized guest on an AMD64 system.
>
> I thought I have seen a lot of discussion about
> running paravirtualized Windows on CentOS.  Is
> that a bad idea?
>
>        Neil
>
> --
> Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
> Will your e-commerce site go offline if you have
> a DB server failure, fiber cut, flood, or other disaster?
> If so, ask me about our geographically redudant database system.
>
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

AFAIK you can only run Windows in paravirtualized mode
in VMware Server, not in Xen or KVM, which are the virtualization
technologies CentOS supports.

-- 
Usuario Linux Registrado #452368
Usuario Ubuntu Registrado #28025

"Doing a thing well is often a waste of time."
--
//Netbook - HP Mini 1035NR 2GB 60GB - Windows XP/Ubuntu 9.04
//Desktop - Core 2 Duo 1.86Ghz 8GB 320GB - Windows 7 - Ubuntu 9.04
//Server - Athlon 64 2.7Ghz 8GB 500GB - Debian Lenny
//Server - Pentium D 3.2Ghz 4GB 400GB - Debian Lenny
//Server - NSLU2 266Mhz 32MB 1TB - Debian Lenny
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Joseph L. Casale
>I thought I have seen a lot of discussion about
>running paravirtualized Windows on CentOS.  Is
>that a bad idea?

It's not a bad idea, it's just not a possible one :)
What you have seen is talk of the paravirt *drivers* that
you use in an HVM domain to improve the otherwise useless
performance. As Windows is not open source, the Xen guys have
not been able to write code on their side to paravirtualize
it...

However, there have been some agreements around Windows
Server 2008 and "Enlightenments", but I haven't followed it.

Set up an HVM, install Windows, and give James Harper's GPLPV
drivers a go.
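
If it helps, creating the HVM guest on a CentOS 5 dom0 looks roughly like
this with virt-install (names, sizes and paths are made up for the example):

  virt-install --name win2003 --hvm --vnc \
      --ram 1024 \
      --file /var/lib/xen/images/win2003.img --file-size 16 \
      --cdrom /path/to/win2003.iso

then install the GPLPV drivers inside the guest once Windows is running.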

jlc
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Joseph L. Casale
>AFAIK you can only run any windows version in paravirtualized mode
>only in vmware server, not in Xen or KVM which are the virtualization
>technologies CentOS supports.

No. VMware is no different from Xen or any other in this respect; they
also don't have access to the source and therefore cannot provide a modified
version of the OS.

There was a university project ages ago for Xen that was done under NDA, but
it was obviously only a proof of concept and never released.

http://www.vmware.com/files/pdf/VMware_paravirtualization.pdf

jlc
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread chloe K
Can I know how to set up a DNS caching server?
 
Do you have any useful website?
 
thank you

--- On Wed, 7/8/09, John R Pierce  wrote:


From: John R Pierce 
Subject: Re: [CentOS] dhcp question
To: "CentOS mailing list" 
Received: Wednesday, July 8, 2009, 1:08 PM


chloe K wrote:
> thank you
>  
> how can I put the name server in dhcpd.conf  for the client as I use 
> ISP nameserver?
> this name server has to change when I change other ISP too
>  
>

run your own local DNS caching server, and give the DHCP clients 
192.168.0.1 or whatever your local network gateway IP is.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread oooooooooooo ooooooooooooo


>Perhaps think about running tune2fs maybe also consider adding noatime 

Yes, I added it and I got a performance increase; anyway, as the number of files 
grows the speed keeps dropping below an acceptable level.

>I saw this article some time back.
>
>http://www.linux.com/archive/feature/127055

Good idea. I already use mysql for indexing the files, so every time I need to 
make a lookup I don't need to scan the entire dir to get the file; anyway my 
requirements are to keep the files on disk.

>The only way to deal with it (especially if the
>application adds and removes these files regularly) is to every once in a
>while copy the files to another directory, nuke the directory and restore
>from the copy.

Thanks, but there will not be too many file updates once the 
cache is done, so recreating directories would not be very helpful here. The 
issue is that as the number of files grows, both reads from existing files and 
new insertions get slower and slower.

>I haven't done, or even seen, any recent benchmarks but I'd expect
>reiserfs to still be the best at that sort of thing.

I've been looking at some 
benchmarks and reiser seems a bit faster in my scenario; however my problem 
happens when I have a large number of files, so from what I have seen I'm not 
sure if reiser would be a fix.

>However even if 
>you can improve things slightly, do not let whoever is responsible for 
>that application ignore the fact that it is a horrible design that 
>ignores a very well known problem that has easy solutions.

My original idea was storing the file with a hash of its name, and then storing 
a hash->real filename mapping in mysql. This way I have direct access to the 
file and I can make a directory hierarchy with the first characters of the 
hash, /c/0/2/a, so I would have 16^4 = 65536 leaves in the directory tree, and 
the files would be identically distributed, with around 200 files per dir 
(which should not give any performance issues). But the requirements are to 
use the real file name for the directory tree, which causes the issue.


>Did that program also write your address header ?
:)

Thanks for the help.



> From: hhh...@hotmail.com
> To: centos@centos.org
> Date: Wed, 8 Jul 2009 06:27:40 +
> Subject: [CentOS] Question about optimal filesystem with many small files.
>
>
> Hi,
>
> I have a program that writes lots of files to a directory tree (around 15 
> Million fo files), and a node can have up to 40 files (and I don't have 
> any way to split this ammount in smaller ones). As the number of files grows, 
> my application gets slower and slower (the app is works something like a 
> cache for another app and I can't redesign the way it distributes files into 
> disk due to the other app requirements).
>
> The filesystem I use is ext3 with teh following options enabled:
>
> Filesystem features: has_journal resize_inode dir_index filetype 
> needs_recovery sparse_super large_file
>
> Is there any way to improve performance in ext3? Would you suggest another FS 
> for this situation (this is a prodution server, so I need a stable one) ?
>
> Thanks in advance (and please excuse my bad english).
>
>
> _
> Connect to the next generation of MSN Messenger
> http://imagine-msn.com/messenger/launch80/default.aspx?locale=en-us&source=wlmailtagline
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Victor Padro
2009/7/8 Joseph L. Casale :
>>AFAIK you can only run any windows version in paravirtualized mode
>>only in vmware server, not in Xen or KVM which are the virtualization
>>technologies CentOS supports.
>
> No. Vmware is no different than Xen or any other in this respect, they
> also don't have access to the source and therefore cannot provide a modified
> version of the OS.
>
> There was a University project ages ago for Xen that was done under NDA but
> obviously was only POC and never released.
>
> http://www.vmware.com/files/pdf/VMware_paravirtualization.pdf
>
> jlc
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

good to know that.

thanks

-- 
Usuario Linux Registrado #452368
Usuario Ubuntu Registrado #28025

"Doing a thing well is often a waste of time."
--
//Netbook - HP Mini 1035NR 2GB 60GB - Windows XP/Ubuntu 9.04
//Desktop - Core 2 Duo 1.86Ghz 8GB 320GB - Windows 7 - Ubuntu 9.04
//Server - Athlon 64 2.7Ghz 8GB 500GB - Debian Lenny
//Server - Pentium D 3.2Ghz 4GB 400GB - Debian Lenny
//Server - NSLU2 266Mhz 32MB 1TB - Debian Lenny
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread oooooooooooo ooooooooooooo

(I resent this message as the previous one seems badly formatted, sorry for the mess.)


>Perhaps think about running tune2fs maybe also consider adding noatime 
 
Yes, I added it and I got a performance increase; anyway, as the number of files 
grows the speed keeps dropping below an acceptable level.


>I saw this article some time back.
>
>http://www.linux.com/archive/feature/127055

Good idea. I already use mysql for indexing the files, so every time I need to 
make a lookup I don't need to scan the entire dir to get the file; anyway my 
requirements are to keep the files on disk.


>The only way to deal with it (especially if the
>application adds and removes these files regularly) is to every once in a
>while copy the files to another directory, nuke the directory and restore
>from the copy.

Thanks, but there will not be too many file updates once the cache is done, so 
recreating directories would not be very helpful here. The issue is that as the 
number of files grows, both reads from existing files and new insertions get 
slower and slower.


>I haven't done, or even seen, any recent benchmarks but I'd expect
>reiserfs to still be the best at that sort of thing.

I've been looking at some 
benchmarks and reiser seems a bit faster in my scenario; however my problem 
happens when I have a large number of files, so from what I have seen I'm not 
sure if reiser would be a fix.

>However even if 
>you can improve things slightly, do not let whoever is responsible for 
>that application ignore the fact that it is a horrible design that 
>ignores a very well known problem that has easy solutions.

My original idea was storing the file with a hash of its name, and then storing 
a hash->real filename mapping in mysql. This way I have direct access to the 
file and I can make a directory hierarchy with the first characters of the 
hash, /c/0/2/a, so I would have 16^4 = 65536 leaves in the directory tree, and 
the files would be identically distributed, with around 200 files per dir 
(which should not give any performance issues). But the requirements are to 
use the real file name for the directory tree, which causes the issue.

 
 
>Did that program also write your address header ?
:)


 
Thanks for the help.
 
 

> From: hhh...@hotmail.com
> To: centos@centos.org
> Date: Wed, 8 Jul 2009 06:27:40 +
> Subject: [CentOS] Question about optimal filesystem with many small files.
>
>
> Hi,
>
> I have a program that writes lots of files to a directory tree (around 15 
> Million fo files), and a node can have up to 40 files (and I don't have 
> any way to split this ammount in smaller ones). As the number of files grows, 
> my application gets slower and slower (the app is works something like a 
> cache for another app and I can't redesign the way it distributes files into 
> disk due to the other app requirements).
>
> The filesystem I use is ext3 with teh following options enabled:
>
> Filesystem features: has_journal resize_inode dir_index filetype 
> needs_recovery sparse_super large_file
>
> Is there any way to improve performance in ext3? Would you suggest another FS 
> for this situation (this is a prodution server, so I need a stable one) ?
>
> Thanks in advance (and please excuse my bad english).
>
>
> _
> Connect to the next generation of MSN Messenger
> http://imagine-msn.com/messenger/launch80/default.aspx?locale=en-us&source=wlmailtagline
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Filipe Brandenburger
Hi,

On Wed, Jul 8, 2009 at 17:59, 
o wrote:
> My original idea was storing the file with a hash of it name, and then store 
> a  hash->real filename in mysql. By this way I have direct access to the file 
> and I can make a directory hierachy with the first characters of teh hash 
> /c/0/2/a, so i would have 16*4 =65536 leaves in the directoy tree, and the 
> files would be identically distributed, with around 200 files per dir (waht 
> should not give any perfomance issues). But the requiremenst are to use the 
> real file name for the directory tree, what gives the issue.

You can hash it and still keep the original filename, and you don't
even need a MySQL database to do lookups.

For instance, let's take "example.txt" as the file name.

Then let's hash it, say using MD5 (just for the sake of example, a
simpler hash could give you good enough results and be quicker to
calculate):
$ echo -n example.txt | md5sum
e76faa0543e007be095bb52982802abe  -

Then say you take the first 4 digits of it to build the hash: e/7/6/f

Then you store file example.txt at: e/7/6/f/example.txt

The file still has its original name (example.txt), and if you want to
find it, you can just calculate the hash for the name again, in which
case you will find the e/7/6/f, and prepend that to the original name.

I would also suggest that you keep fewer directory levels with more
branches in them; optimal performance will be achieved by finding
a balance between them. For example, in this case (4 hex digits) you would
have 4 levels with 16 entries each. If you group the hex digits two by
two, you would have (up to) 256 entries on each level, but only two
levels of subdirectories. For instance: example.txt ->
e7/6f/example.txt. That might (or might not) give you better
performance. A benchmark should tell you which one is better, but in
any case, both of these setups will be many times faster than the one
where you have 400,000 files in a single directory.
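
As a quick shell sketch of the scheme (only the scheme itself is from this
thread, the commands are just an illustration):

  name=example.txt
  hash=$(echo -n "$name" | md5sum | cut -c1-4)    # e76f
  dir=$(echo "$hash" | sed 's|.|&/|g')            # e/7/6/f/
  mkdir -p "$dir" && cp "$name" "$dir$name"       # e/7/6/f/example.txt

and recomputing $dir from the name is all a later lookup needs.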

Would that help solve your issue?

HTH,
Filipe
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Frank Cox
On Wed, 08 Jul 2009 18:09:28 -0400
Filipe Brandenburger wrote:

> You can hash it and still keep the original filename, and you don't
> even need a MySQL database to do lookups.

Now that is slick as all get-out.  I'm really impressed by your scheme, though I
don't actually have any use for it right at this moment.

It's really clever.

-- 
MELVILLE THEATRE ~ Melville Sask ~ http://www.melvilletheatre.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread oooooooooooo ooooooooooooo

> You can hash it and still keep the original filename, and you don't
> even need a MySQL database to do lookups.

There is an issue I forgot to mention: the original file name can be up to 
1023 characters long. As Linux only allows around 255 characters in a file name, 
I could have a (very small) number of collisions; that's why my original idea 
was using a hash->filename table. So I'm not sure if I could implement that 
idea in my scenario.

>For instance: example.txt ->
> e7/6f/example.txt. That might (or might not) give you a better
> performance.

After a quick calculation, that could put around 3200 files per directory (I 
have around 15 million files). I think that above 1000 files the performance 
will start to degrade significantly; anyway, it would be a matter of doing some 
benchmarks.

Thanks for the advice.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread Les Mikesell
chloe K wrote:
> can i know how to set up dns cache server?
>  
> do you have any useful website?
>  

  yum install caching-nameserver
  chkconfig named on
  service named start

will work if that's all you want.  But don't install that package if you 
also want it to act as a primary server for your own local names (and if 
you have more than a few machines you probably do want that).

-- 
   Les Mikesell
lesmikes...@gmail.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread John R Pierce
chloe K wrote:
> can i know how to set up dns cache server?
>  
> do you have any useful website?
>  
>

for your use, dnsmasq would do nicely.   with the rpmforge repo 
configured...

# yum install dnsmasq
# chkconfig dnsmasq on
# service dnsmasq start

*done*

dnsmasq has some configuration options in /etc/dnsmasq.conf  as well as 
man pages explaining more.  there's some notes on it here, 
http://www.thekelleys.org.uk/dnsmasq/doc.html




___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Les Mikesell
 o wrote:
>> You can hash it and still keep the original filename, and you don't
>> even need a MySQL database to do lookups.
> 
> There are an issue I forgot to mention: the original file name can be up top 
> 1023 characters long. As linux only allows 256 characters in the file path, I 
> could have a (very small) number of collisions, that's why my original idea 
> was using a hash->filename table. So I'm not sure if I could implement that 
> idea in my scenario.
> 
>> For instance: example.txt ->
>> e7/6f/example.txt. That might (or might not) give you a better
>> performance.
> 
> After a quick calculation, that could put around 3200 files per directory (I 
> have around 15 million of files), I think that above 1000 files the 
> performance will start to degrade significantly, anyway it would be a mater 
> of doing some benchmarks.

There's C code to do this in squid, and backuppc does it in perl (for a 
pool directory where all identical files are hardlinked).  Source for 
both is available and might be worth a look at their choices for the 
depth of the trees and collision handling (backuppc actually hashes the 
file content, not the name, though).

-- 
   Les Mikesell
lesmikes...@gmail.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread Karanbir Singh
On 07/08/2009 11:46 PM, John R Pierce wrote:
> for your use, dnsmasq would do nicely.   with the rpmforge repo
> configured...

whats wrong with the dnsmasq already included in C5 ? ( I am guessing 
the target is c5 )

>  # yum install dnsmasq
>  # chkconfig dnsmasq on
>  # service dnsmasq start

Why not just use the caching-nameserver ?

-- 
Karanbir Singh : http://www.karan.org/  : 2522...@icq
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Les Mikesell
Joseph L. Casale wrote:
>> AFAIK you can only run any windows version in paravirtualized mode
>> only in vmware server, not in Xen or KVM which are the virtualization
>> technologies CentOS supports.
> 
> No. Vmware is no different than Xen or any other in this respect, they
> also don't have access to the source and therefore cannot provide a modified
> version of the OS.

But Vmware and I think Virtualbox are capable of running unmodified 
Windows guests even on CPU's lacking the vt capability.   I don't think 
Xen can do that.

-- 
   Les Mikesell
lesmikes...@gmail.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 2 servers cluster

2009-07-08 Thread sheraz naz
http://www.howtoforge.com/high_availability_heartbeat_centos

?

--- On Mon, 6/29/09, Linux Advocate  wrote:

From: Linux Advocate 
Subject: Re: [CentOS] 2 servers cluster
To: "CentOS mailing list" 
Date: Monday, June 29, 2009, 12:12 AM

thanx bro. yes i have been looking as well. have looked at drbd...

From: Neil Aggarwal 
To: CentOS mailing list 
Sent: Monday, June 29, 2009 12:32:46 PM
Subject: Re: [CentOS] 2 servers cluster



 
We tried Sequoia:
http://www.continuent.com/community/lab-projects/sequoia

We wanted automatic failover and geographical distribution
of the database nodes.  Sequoia only supports master-master
operation if the database nodes are on the same subnet.

We did not find anything else out there, so we wrote our
own geographically distributed database system.
We can adapt that to your project if you are interested.

    Neil

--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Your e-commerce site can be geographically redundant and available
even if failure occurs. Ask me about the GRed database system.
 


  
  
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of Linux Advocate
Sent: Sunday, June 28, 2009 11:18 PM
To: CentOS mailing list
Subject: Re: [CentOS] 2 servers cluster

talking abt piranha... i understand that its LVS + webfrontend and is
suitable for webpages and so on. What do we need to make it a LAMP cluster,
i.e. with a mysql HA backend as well?

So -> HA of [ LoadBalancer + Apache + MySQL ]

Any ideas guys?


From: fmb fmb
To: CentOS mailing list
Sent: Saturday, June 27, 2009 11:14:33 PM
Subject: Re: [CentOS] 2 servers cluster

Thnx Brian. This is the first thing that I will do...


On Sat, Jun 27, 2009 at 5:29 PM, Brian Mathis wrote:

CentOS has the redhat piranha packages available for install.  Piranha
is a repackaging of the linux virtual server software, along with a
web-based front-end.  You can find information about that in the
CentOS docs and also by googling for "redhat piranha".


On Fri, Jun 26, 2009 at 11:57 PM, fmb fmb wrote:
> Hi,
>
> I am thinking of setting up two servers in load balance mode. I would really
> appreciate your suggestions and hints...
>
>
> thnx,
>

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos





  

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos



  ___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Joseph L. Casale
>> No. Vmware is no different than Xen or any other in this respect, they
>> also don't have access to the source and therefore cannot provide a modified
>> version of the OS.
>
>But Vmware and I think Virtualbox are capable of running unmodified
>Windows guests even on CPU's lacking the vt capability.   I don't think
>Xen can do that.

Right, but that was not in debate or what I was referring to.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 2 servers cluster

2009-07-08 Thread Neil Aggarwal
That is only suitable for apache, not for database nodes.
 


--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Will your e-commerce site go offline if you have
a DB server failure, fiber cut, flood, fire, or other disaster?
If so, ask me about our geographically redundant database system. 

 


  _  

From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of sheraz naz
Sent: Wednesday, July 08, 2009 6:34 PM
To: CentOS mailing list
Subject: Re: [CentOS] 2 servers cluster


http://www.howtoforge.com/high_availability_heartbeat_centos

?

--- On Mon, 6/29/09, Linux Advocate  wrote:




From: Linux Advocate 
Subject: Re: [CentOS] 2 servers cluster
To: "CentOS mailing list" 
Date: Monday, June 29, 2009, 12:12 AM


thanx bro. yes i have been looking as well. have looked at drbd...



  _  

From: Neil Aggarwal 
To: CentOS mailing list 
Sent: Monday, June 29, 2009 12:32:46 PM
Subject: Re: [CentOS] 2 servers cluster


We tried Sequoia:
http://www.continuent.com/community/lab-projects/sequoia
 
We wanted automatic failover and geographical distribution
of the database nodes.  Sequoia only supports master-master
operation if the database nodes are on the same subnet.
 
We did not find anything else out there, so we wrote our
own geographically distributed database system.
We can adapt that to your project if you are interested.
 
Neil


--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Your e-commerce site can be geographically redundant and available
even if failure occurs. Ask me about the GRed database system. 

 


  _  

From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Linux Advocate
Sent: Sunday, June 28, 2009 11:18 PM
To: CentOS mailing list
Subject: Re: [CentOS] 2 servers cluster


taling abt piranha... i understand that its LVS + webfrontend and is
suitable fro webpages and so on. What do we need to make it as a LAMP
cluster, i.e with a mysql HA backend as well.

So-> HA of [ LoadBalancer + Apache + MySQL}

Any ideas guys?



  _  

From: fmb fmb 
To: CentOS mailing list 
Sent: Saturday, June 27, 2009 11:14:33 PM
Subject: Re: [CentOS] 2 servers cluster

Thnx Brian. This is the first thing that I will do...


On Sat, Jun 27, 2009 at 5:29 PM, Brian Mathis 
wrote:


CentOS has the redhat piranha packages available for install.  Piranha
is a repackaging of the linux virtual server software, along with a
web-based front-end.  You can find information about that in the
CentOS docs and also by googling for "redhat piranha".



On Fri, Jun 26, 2009 at 11:57 PM, fmb fmb wrote:
> Hi,
>
> I am thinking of setting up two servers in load balance mode. I would
really
> appreciate your suggestions and hints...
>
>
> thnx,
>

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos








___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Neil Aggarwal
Joseph:

OK, so the drivers are paravirtualized, not the
entire OS.  I think I get it.

Thanks,
  Neil


--
Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
Will your e-commerce site go offline if you have
a DB server failure, fiber cut, flood, fire, or other disaster?
If so, ask me about our geographically redundant database system. 

> -Original Message-
> From: centos-boun...@centos.org 
> [mailto:centos-boun...@centos.org] On Behalf Of Joseph L. Casale
> Sent: Wednesday, July 08, 2009 4:28 PM
> To: 'CentOS mailing list'
> Subject: Re: [CentOS] Window Server 2003 will not run as 
> paravirtualized?
> 
> >I thought I have seen a lot of discussion about
> >running paravirtualized Windows on CentOS.  Is
> >that a bad idea?
> 
> It's not a bad idea, it's just not a possible one :)
> What you have seen is talk of the paravirt *drivers* that
> you use in an HVM domain to improve the otherwise useless
> performance. As windows is not opensource, the Xen guys have
> not been able to write code on their side to paravirtualize
> it...
> 
> However, there has been some agreements done with Windows
> Server 2008 and "Enlightenments" but I haven't followed it.
> 
> Setup an HVM, install windows, and give James Harpers GPLPV
> Drivers a go.
> 
> jlc
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Window Server 2003 will not run as paravirtualized?

2009-07-08 Thread Joseph L. Casale
>OK, so the drivers are paravirtualized, not the
>entire OS.  I think I get it.

Yeah, instead of me paraphrasing and probably butchering
what is well stated, have a quick read of this article.

Lots of good info...

http://searchservervirtualization.techtarget.com/tip/0,289483,sid94_gci1281856_mem1,00.html

hth,
jlc
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread James A. Peltier
On Wed, 8 Jul 2009,  o wrote:

>
> Hi,
>
> I have a program that writes lots of files to a directory tree (around 15
> million files), and a node can have up to 40 files (and I don't have
> any way to split this amount into smaller ones). As the number of files grows,
> my application gets slower and slower (the app works something like a
> cache for another app and I can't redesign the way it distributes files onto
> disk due to the other app's requirements).
>
> The filesystem I use is ext3 with the following options enabled:
>
> Filesystem features:  has_journal resize_inode dir_index filetype
> needs_recovery sparse_super large_file
>
> Is there any way to improve performance in ext3? Would you suggest another FS
> for this situation (this is a production server, so I need a stable one)?
>
> Thanks in advance (and please excuse my bad english).

There isn't a good file system for this type of thing.  Filesystems with 
many very small files are always slow.  Ext3, XFS, and JFS are all terrible 
for this type of thing.

Rethink how you're writing files or you'll be in a world of hurt.

-- 
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
HPC Coordinator
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca
   http://blogs.sfu.ca/people/jpeltier
MSN : subatomic_s...@hotmail.com

The point of the HPC scheduler is to
keep everyone equally unhappy.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread James A. Peltier
On Wed, 8 Jul 2009,  o wrote:

>
> Hi,
>
> I have a program that writes lots of files to a directory tree (around 15
> million files), and a node can have up to 40 files (and I don't have
> any way to split this amount into smaller ones). As the number of files grows,
> my application gets slower and slower (the app works something like a
> cache for another app and I can't redesign the way it distributes files onto
> disk due to the other app's requirements).
>
> The filesystem I use is ext3 with the following options enabled:
>
> Filesystem features:  has_journal resize_inode dir_index filetype
> needs_recovery sparse_super large_file
>
> Is there any way to improve performance in ext3? Would you suggest another FS
> for this situation (this is a production server, so I need a stable one)?
>
> Thanks in advance (and please excuse my bad english).


BTW, you can pretty much say goodbye to any backup solution for this type 
of project as well.  They'll all die dealing with a file system structure 
like this.

-- 
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
HPC Coordinator
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca
   http://blogs.sfu.ca/people/jpeltier
MSN : subatomic_s...@hotmail.com

The point of the HPC scheduler is to
keep everyone equally unhappy.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread John R Pierce
Karanbir Singh wrote:
> On 07/08/2009 11:46 PM, John R Pierce wrote:
>   
>> for your use, dnsmasq would do nicely.   with the rpmforge repo
>> configured...
>> 
>
> whats wrong with the dnsmasq already included in C5 ? ( I am guessing 
> the target is c5 )
>   

Oh, is it?  I did an rpm -qi and saw your name and assumed it was from 
rpmforge.


> Why not just use the caching-nameserver ?
>   

Isn't that a canned bind configuration?  Ah, yeah, that's what the 
package info file says it is.

bind is a lot more complex than dnsmasq.  dnsmasq uses /etc/resolv.conf 
for forwarded lookups, while a caching bind server uses either
a statically configured forwarder or a root cache zone, and running a 
root cache zone on an intermittently connected system ('different ISPs') 
isn't a good idea.

dnsmasq will also serve local clients with DNS using entries in your 
/etc/hosts file, which can be handy when you have a few static hosts on 
a small masqueraded network.
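
As a rough sketch of that kind of setup (the interface name, domain and
address range here are just assumptions, not taken from the thread):

  # /etc/dnsmasq.conf -- answer DNS for names in /etc/hosts, forward the
  # rest to whatever /etc/resolv.conf currently points at, and hand out
  # DHCP leases on the LAN side
  interface=eth1
  domain=lan.example
  expand-hosts
  dhcp-range=192.168.1.50,192.168.1.150,12h

Then "service dnsmasq restart" picks up the changes, and
"chkconfig dnsmasq on" keeps it enabled across reboots.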



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread nate
James A. Peltier wrote:

> There isn't a good file system for this type of thing.  filesystems with
> many very small files are always slow.  Ext3, XFS, JFS are all terrible
> for this type of thing.

I can think of one... though you'll pay out the ass for it: the
Silicon file system from BlueArc (NFS); the file system runs on
FPGAs. Our BlueArcs never had more than 50-100,000 files in any
particular directory (millions in any particular tree), though
they are supposed to be able to handle this sort of thing quite
well.

I think entry-level list pricing starts at about $80-100k for
1 NAS gateway (no disks).

Our BlueArcs went end-of-life earlier this year and we migrated
to an Exanet cluster (runs on top of CentOS 4.4 though it uses its
own file system, clustering and NFS services), which is still
very fast though not as fast as BlueArc.

And with block-based replication it doesn't matter how many
files there are; performance is excellent for backup, sending
data to another rack in your data center or to another
continent over the WAN. In BlueArc's case it can transparently
send data to a dedupe device or tape drive based on
dynamic access patterns (and move it back automatically
when needed).

http://www.bluearc.com/html/products/file_system.shtml
http://www.exanet.com/default.asp?contentID=231

Both systems scale linearly to gigabytes/second of throughput
and petabytes of storage without downtime. The only downside
to BlueArc is their back-end storage: they only offer tier
2 storage and only have HDS for tier 1. You can make an HDS
perform, but it'll cost you even more... The tier 2 stuff is
too unreliable (LSI Logic). Exanet at least supports
almost any storage out there (we went with 3PAR).

Don't even try to get a netapp to do such a thing.

nate

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] OT:CMS

2009-07-08 Thread madunix
Dear all,

What experiences have you had with the various open-source, PHP-based CMS
products such as Drupal, Joomla, OpenCMS, TYPO3, eZ Publish, etc.?

Security, bugs, performance, support, developer community, learning
curve, appearance, etc.

Thanks
-mu
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] OT:CMS

2009-07-08 Thread Greg Bailey
madunix wrote:
> Dear ALL,
> 
> What  are the experiences you have with various open source CMS
> products (Comparison of PHP-based CMS) such as  (Drupal, Joomla,
> OpenCMS, Typo3, eZ publish ..etc.)
> 
> Security, Bugs, Performance, Support, Developer Community, learning
> curve, appearance..etc
> 
> Thanks
> -mu

I have recently been asked by a few different people to "help them set 
up a web site".  My challenge was finding a CMS that a non-technical 
person can maintain.  After digging through lots of trial installations, 
I settled on "CMS Made Simple" (http://www.cmsmadesimple.org) and it's 
worked great so far.  I like that it generates menus, etc. 
automatically, and can be used to build and maintain a small-business 
web site without it having to look so much like a blog...  All the others 
you mention have great features, etc., but would be overwhelming for a 
non-technical person, in my opinion.

-Greg
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread Alexander Georgiev
2009/7/9,  o :
>
> After a quick calculation, that could put around 3200 files per directory (I
> have around 15 million files). I think that above 1000 files the
> performance will start to degrade significantly; anyway, it would be a matter
> of doing some benchmarks.

Depending on the total size of these cache files, as was suggested
by nate - throw some hardware at it.

Perhaps a hardware RAM device will provide adequate performance:

http://www.tomshardware.com/reviews/hyperos-dram-hard-drive-block,1186.html
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Question about optimal filesystem with many small files.

2009-07-08 Thread oooooooooooo ooooooooooooo

>There's C code to do this in squid, and backuppc does it in perl (for a 
pool directory where all identical files are hardlinked).

Unfortunately I have to write the files with a predefined format, so these 
would not provide the flexibility I need.

>Rethink how you're writing files or you'll be in a world of hurt.

It's possible that I will be able to name the directory tree based on the hash 
of the file, so I would get the structure described in one of my previous posts 
(4 directory levels, each directory name being a single character from 0-9 
and A-F, giving 65536 (16^4) leaves, each leaf containing around 200 files). Do 
you think this would really improve performance? Could this structure be 
improved?
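
As a quick sanity check of the numbers: 16^4 = 65536 leaf directories, and
15,000,000 / 65536 is roughly 229 files per leaf, so the "around 200 files"
figure holds. A minimal shell sketch of the mapping (the /cache root and the
choice of md5 are just assumptions for illustration):

  # take the first 4 hex characters of the key's md5 as 4 directory levels,
  # e.g. a digest starting with "d41d" maps to /cache/d/4/1/d/
  h=$(echo -n "$key" | md5sum | cut -c1-4)
  dir="/cache/${h:0:1}/${h:1:1}/${h:2:1}/${h:3:1}"
  mkdir -p "$dir"
  cp "$srcfile" "$dir/"

With dir_index enabled, keeping each leaf at a few hundred entries like this
should avoid the very large directories that hurt ext3 lookups.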

>BTW, you can pretty much say goodbye to any backup solution for this type 
of project as well.  They'll all die dealing with a file system structure 
like this.

We don't plan to use backups (if the data gets corrupted, we can retrieve it 
again), but thanks for the advice.

>I think entry level list pricing starts at about $80-100k for
1 NAS gateway (no disks).

That's far above the budget... 

>depending on the total size of this cache files, as it was suggested
by nate - throw some hardware at it.

Same as above; it seems they don't want to spend more on hardware (so I have to 
deal with all the performance issues...). Anyway, if I can get all the directories 
to have around 200 files, I think I will be able to make this work with the 
current hardware.

Thanks for the advice.

_
Invite your mail contacts to join your friends list with Windows Live Spaces. 
It's easy!
http://spaces.live.com/spacesapi.aspx?wx_action=create&wx_url=/friends.aspx&mkt=en-us
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dhcp question

2009-07-08 Thread Rob Townley
On Wed, Jul 8, 2009 at 5:55 PM, Karanbir Singh wrote:
> On 07/08/2009 11:46 PM, John R Pierce wrote:
>> for your use, dnsmasq would do nicely.   with the rpmforge repo
>> configured...
>
> whats wrong with the dnsmasq already included in C5 ? ( I am guessing
> the target is c5 )
>
>>      # yum install dnsmasq
>>      # chkconfig dnsmasq on
>>      # service dnsmasq start
>
> Why not just use the caching-nameserver ?
>
> --
> Karanbir Singh : http://www.karan.org/  : 2522...@icq
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

There are DB-based nameservers such as MyDNS, djbdns, or pdns.
MySQL replication can replicate zones to other machines, and it has
a web interface option.

pdns is authoritative only, not caching.  pdns-recursor is caching.

'yum search pdns' turns up LDAP, DB, and geo backends, and I thought a web
interface as well.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Add instantly active local user accounts *with* password using useradd -p option ?

2009-07-08 Thread Niki Kovacs
Hi,

I need to set up a load of user accounts on a series of machines, for 
testing purposes. I'm using a script to do this, but the only problem I 
have so far is that I have to activate them all manually by doing passwd user1, 
passwd user2, passwd user3, etcetera. The useradd man page mentions a -p 
option to define a password, but I can't seem to get this to work. 
Here's what I'd like to be able to do:

# useradd -c "Gaston Lagaffe" -p abc123 -m glagaffe

And put that line in a script, so the account is *instantly* activated. 
I tried it, but to no avail. Looks like there's some burning hoop I have 
to jump through first :o)
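
(One hedged guess at the catch: the useradd man page says -p expects the
already-encrypted password as returned by crypt(3), not the clear text, so
something along these lines may behave better; the openssl call is just one
way to generate the hash:)

  # generate an MD5 crypt hash of the clear-text password, then pass it to -p
  pass=$(openssl passwd -1 abc123)
  useradd -c "Gaston Lagaffe" -p "$pass" -m glagaffe

  # alternative on CentOS: create the user first, then set the password
  # non-interactively with passwd --stdin
  echo abc123 | passwd --stdin glagaffe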

No security considerations here for the moment, since it's for testing.

Any idea how this works?

Niki
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] ldap authentication

2009-07-08 Thread hqm8512
Hello,
we're using LDAP for user authentication.

I'm looking for a mechanism to automatically create a user's home directory
when he logs in for the first time.
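
(A hedged sketch of the usual approach -- pam_mkhomedir in the PAM session
stack; the exact file to edit and the umask value are assumptions that depend
on the setup:)

  # add to the session section of /etc/pam.d/system-auth (or the per-service
  # PAM file for the login method in use)
  session     optional      pam_mkhomedir.so skel=/etc/skel umask=0077

The module copies /etc/skel into a new home directory the first time the
user authenticates.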
Thanks,

--
Best Regards
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos