Re: Kernels 2.4.x for ISP ?

2001-05-18 Thread Jason Lim

Hi,

I'm interested in knowing as well.

We run a bunch of webhosting and dedicated servers using kernel 2.2.19.
Most boxen are Intel Pentium IIIs or AMD Athlons/Durons.

Will there be much of a performance boost from using the new 2.4 kernels,
or are there many compatibility issues?

Anyway, I know the ext2 2Gb file limit has been fixed in the 2.4 kernels,
so that's a bonus, but are there any other "great" improvements like that?
And don't tell me about the new USB, webcam, etc. functions and other
"desktop" stuff because these are servers ;-)

Thanks in advance.

Sincerely,
Jason Lim

- Original Message -
From: "Przemyslaw Wegrzyn" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 18, 2001 7:01 PM
Subject: Kernels 2.4.x for ISP ?


>
> I'm just setting up a new box - a large mail server,
> dual Pentium, STL2 motherboard, Mylex AcceleRAID.
>
> I'd like to try kernel 2.4.x + ReiserFS, but I worry a little about
> its maturity. Is it really stable now ?
>
> Can anyone here write about his/her experiences with 2.4.x ?
>
> Do the 2.4.x kernels support the AcceleRAID 352 well ?
>
>
> Greetings
> -=Czaj-nik=-
>
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]






Re: Apache suEXEC problem

2001-05-19 Thread Jason Lim

Hi,

Well... you could either simply get the source directly from Apache's
website, www.apache.org, or, if you have deb-src entries in your
sources.list, you could pull the source from there and then simply compile
suexec with your own settings.

It's a REAL pain in the ass, but so far I haven't found any runtime
setting that would let you change the directory. It would be GREAT
if suexec read the allowed directories from
/etc/apache/suexec.conf, for example, but oh well.
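
For reference, a rebuild normally goes something like the sketch below. The package name and the exact configure flag are assumptions based on apache 1.3-era packaging, so check debian/rules on your own box before trusting any of it:

```shell
# Sketch only: rebuild apache-ssl with a different suEXEC document root.
# Needs a deb-src line in /etc/apt/sources.list and the build-deps installed.
apt-get source apache-ssl
cd apache-ssl-*

# The allowed docroot is compiled into suexec; find the configure call in
# debian/rules and change (or add) something along the lines of:
#   --suexec-docroot=/home/webftp
editor debian/rules

dpkg-buildpackage -rfakeroot -us -uc
dpkg -i ../apache-ssl_*.deb
```

Once the rebuilt package is installed, suexec should accept scripts under the new docroot without any runtime configuration.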

Jason.


- Original Message -
From: "Hector Castillo" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, May 20, 2001 4:01 AM
Subject: Apache suEXEC problem


We have recently installed Debian into one of our machines. The purpose
was to serve part of our client pages to the Net using apache-ssl
server, but we have discovered now one problem.
apache-ssl has been configured in Debian with suEXEC, but it assumes
that pages are in "/var/www", and we have them in "/home/webftp". So
now no script can be executed, because the Apache wrapper (suexec)
blocks its execution.
It's impossible to move the pages to the original directory (there are
a lot of users and a lot of pages), and suEXEC is necessary.
My question is: does anybody know how to recompile the Debian source
of apache-ssl, and which files would be necessary to change?


--
   Héctor Castillo Andreu
   [EMAIL PROTECTED]










Re:

2001-05-22 Thread Jason Lim

I must say... that's pretty stupid.

I mean... okay... you spam some newbie list or some "get-rich-quick"
mailing list, and maybe no one would notice, or they wouldn't care... (hehe,
maybe they don't know what to do and how to complain).

However... an ISP list... debian-isp? That's got to be a REALLY stupid move.
I think virtually everyone here knows how to look up IP address contact
info, abuse.net records, and such.

How stupid. Oh well.


- Original Message -
From: "Tech Support" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 22, 2001 7:22 PM
Subject: Re:


> John wrote:
> >
> > Hey there, I found a great retail site with all kinds of products. Home
> > decor, office decor, travel, outdoors, kitchen, etc... Take a look around
> > at http://www.merchandisewholesale.com  just click on the images of the
> > product to enlarge it for a better view.
> >
> > Sincerely,
> >   John
>
>   If everyone would complain it would be appreciated. Adding the
> merchandisewholesale.com to your blacklist and letting them know your
> users will no longer be able to access it would be nice too!
>   Lets let them know that posting SPAM to an ISP list is a BAD idea.
> Here is the contact info for this SPAM's upstream:
>
> Re:151.202.29.28 (Administrator of network where email originates)
>To: [EMAIL PROTECTED] (Notes)
>To: [EMAIL PROTECTED] (Notes)
>
> Re:151.202.29.28 (Administrator of IP block - statistics only)
>To: [EMAIL PROTECTED] (Notes)
>To: [EMAIL PROTECTED] (Notes)
>
> Re:http://www.merchandisewholesale.com (Administrator of network hosting
> website referenced in spam)
>To: [EMAIL PROTECTED] (Notes)
>
> Pete
> --
> http://www.elbnet.com
> ELB Internet Service, Inc.
> Web Design, Computer Consulting, Internet Hosting
>
>
>
>






Re: kernel 2.2.19 limitations.

2001-05-27 Thread Jason Lim

Hi,

Yes, the limit is still the usual 2Gb. The limit is actually in ext2, I
believe, although I'm not sure.

Sincerely,
Jason

- Original Message -
From: "Przemyslaw Wegrzyn" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, May 27, 2001 5:58 PM
Subject: kernel 2.2.19 limitations.


>
> What is the maximum file size with the 2.2.19 kernel ?
> Still 2GB ?
>
> Is it an ext2 limitation, or something else ?
> Will ReiserFS have the same limit under the 2.2.19 kernel ?
>
> -=Czaj-nick=-
>
>
>
>
>






Re: kernel 2.2.19 limitations.

2001-05-27 Thread Jason Lim

Hi,

I stand corrected (told you I wasn't sure ;-)  ).

There are tons of reasons people in production environments wouldn't go to
2.4. We use 2.2.19 right now, and that's because if we upgraded to 2.4,
there are a bunch of packages that would need upgrading. And it isn't
guaranteed that all packages will work with 2.4. 2.2 is tried and tested,
and until Debian makes an official move to 2.4 and people report near-100%
compatibility with the 2.4 kernel, we won't be moving. I might test it out
on my own personal box, tho ;-)

Btw, does the RH patch work reliably? Any compatibility issues? Does
everything run as normal?

Sincerely,
Jason

- Original Message -
From: "Peter Billson" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Sent: Monday, May 28, 2001 12:28 AM
Subject: Re: kernel 2.2.19 limitations.


> > Yes the limit is still the usual 2Gb. The limit is actually with ext2, I
> > believe, although I'm not sure.
>
>   The limit is in the kernel, not the ext2 file system; otherwise 2.4.x
> wouldn't be able to support >2Gb files either. There are patches about
> for adding LFS (large file support). I had compiled a 2.2.18
> kernel after patching with the LFS patch borrowed from RedHat's 6.2ee
> (Enterprise Edition) source.
>
>   But why not just run 2.4.x?
> Pete
> --
> http://www.elbnet.com
> ELB Internet Services, Inc.
> Web Design, Computer Consulting, Internet Hosting






Re: speed up modem connection

2001-05-31 Thread Jason Lim

Um...

Is this a question related to ISPs?

- Original Message -
From: "John Joe" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, June 01, 2001 5:45 AM
Subject: speed up modem connection


> I surf with Netscape 4.0 for Linux and find it much
> slower than IE 5.0 on MS Windows. I've changed the MTU to
> 576 (MTU is an argument to pppd) and it didn't help.
>
> I connect by 33.6k internal modem. I use Debian 2.1.
>
>
> __
> Do You Yahoo!?
> Get personalized email addresses from Yahoo! Mail - only $35
> a year!  http://personal.mail.yahoo.com/
>
>
>






Multiple Uplinks

2001-06-01 Thread Jason Lim

Hi,

I was wondering a few days ago about this... tell me if this is possible
or not.

We have servers with 2 NICs each. Right now, we usually plug in only one
of the NICs to the switch.

However, some people want to have the other NIC connected as well for
redundancy and additional bandwidth (100Mb per link).

I was thinking that we could do something with routed and /etc/gateways,
but I'm not sure how this would work. We try to run as little "extra"
software on each server as possible (for a whole number of reasons,
including stability). Do you guys have any suggestions? How do you get
this kind of thing set up with as little fuss as possible?
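
The closest thing I've found so far is the in-kernel bonding driver, which avoids running an extra daemon entirely — something like the sketch below. The interface names and address are placeholders, and the module options differ between kernel versions (older kernels take numeric modes, e.g. mode=0 for round-robin), so treat this as a starting point only:

```shell
# Sketch: aggregate eth0 + eth1 into one logical interface with the Linux
# bonding driver. Round-robin gives extra throughput; active-backup gives
# pure failover. The switch may also need matching trunk/channel config.
modprobe bonding mode=balance-rr
ifconfig bond0 192.0.2.10 netmask 255.255.255.0 up   # placeholder address
ifenslave bond0 eth0 eth1
```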

Sincerely,
Jason







Re: Network Design

2001-06-03 Thread Jason Lim

Hi,

I don't get this.

If you can run DNS servers (which require static IPs), then why on earth
would you want to run the webserver on a dynamic IP?
You then go on to talk about "resilience and redundancy" for your
webservers. On a dynamic IP? What's up with that?

You're contradicting yourself. You want to run a full-scale,
load-balancing/server-takeover setup, yet you want to do this all on a
dynamic IP?

I can't see how you want to do all this properly, or what your real goals
are.

Jason


- Original Message -
From: A. Benjamin
To: Homestead ; CrackStore
Cc: mervin whealy ; Lex Berrios ; Karl Winkler ; James J. Stewart ;
[EMAIL PROTECTED]
Sent: Sunday, June 04, 2000 2:55 AM
Subject: Network Design


Hello,

I have a network layout that I intend to put into operation.
I am trying to make this thing work before I start configuring
this monster. Please offer your comments.

Here are a few hurdles I would have to overcome:
1. I do not have a static IP address from my ISP. It's dynamic.
2. Computer number 1 is on the 1st floor and the rest are
all in the basement.
3. I have no bridges, routers or switches.
4. There is one twisted-pair cable running from the basement
to computer 1, and I wish not to run another.
5. I will attempt to use a redirection service, such
as DHS, to direct viewers to my web server.
6. I will run my own DNS servers.
7. I want to add some resilience and redundancy for
my webservers. I mentioned a primary and a secondary
web server. The primary would be my main domain and
the other a subdomain. As I understand it, a Class C
IP address is not routable through the internet, but can I use
it as a secondary web server if it has a Class C IP?

A few temporary remedies:
1. I could use a program such as DHSup to have my host name
point to the same IP address, to compensate
for the dynamic IP.
2. When I use DHS services and create a host, for example
myserver.dhs.org, and my computer's (local) host name is
Phoenix, I can configure my DNS server to reflect
phoenix.myserver.dhs.org.
3. If it is possible, I could "sub, sub, subnet" a network to
give more than one workable IP. For instance, I have
configured the following:

Network         Hosts (from - to)                  Broadcast Address
212.185.0.0     212.185.0.1 - 212.185.63.254       212.185.63.255
212.185.64.0    212.185.64.1 - 212.185.127.254     212.185.127.255
212.185.128.0   212.185.128.1 - 212.185.191.254    212.185.191.255
212.185.192.0   212.185.192.1 - 212.185.255.254    212.185.255.255

Is this conceivable?
Please reply with any comments that I could use to better my
situation. Thanks for your help.






Re: What Happened to ORBS?

2001-06-05 Thread Jason Lim

Hum...

I read the attachment, but it only says that they were served with an
injunction in NZ.

It doesn't say whether they will be back up, what is happening now, or
what the future holds.

Anyone know?

Sincerely,
Jason

- Original Message -
From: "Robert Waldner" <[EMAIL PROTECTED]>
To: "s u r f l o r i d a" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, June 05, 2001 6:55 PM
Subject: Re: What Happened to ORBS?


>
> (please don't send html-mails, thankyouverymuch)
>
> On Tue, 05 Jun 2001 06:46:31 CDT, "s u r f l o r i d a" writes:
> >Does anyone know what happened to http://www.orbs.org/ and the
> >mail servers they had on their blacklist?  Is someone taking it over?
> >
> >I have searched the news sites and have come up with nothing.
>
> see attached mail.
>
> cheers,
> &rw
>


> -- 
> / Ing. Robert Waldner |  <[EMAIL PROTECTED]>  \
> \ Xsoft GmbH  | T: +43 1 796 36 36 692 /






Re: Backup-request

2001-06-05 Thread Jason Lim

Hi,

I work at a Web/Dedicated Server hosting company in Hong Kong.

A few companies in the USA have sent us servers for colocation in Hong
Kong for a whole number of reasons (true redundancy/backup, fast
connections to Asia, etc.), but I suppose that avoiding PSI's possible
network meltdown would be as good a reason as any. We connect to a number
of backbone providers in Asia, and PSI is not on our list.

Anyway, if you're interested, let me know and I'm sure we can arrange
something.

Sincerely,
Jason

- Original Message -
From: "Peter Billson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, June 05, 2001 11:23 PM
Subject: Backup-request


> Hello All,
>I am using PSI as my backbone provider and, in case you haven't
> heard, they have filed Chapter 11. Service is fine and I don't believe
> they are going to do a Northpoint because they have been very up-front
> with everything (including an advance e-mail and phone call to warn
> about the bankruptcy filing) but better safe than sorry.
>   I can't just use another provider because I have a contract until
> November with PSI and it takes about 90 days to set up a new line
> anyway.
>   I'm wondering if any of you fellow ISPs would be willing to co-locate
> (for a fee of course) a Web server temporarily should the worst happen.
> My thought is to FedEx a complete machine that just needs to be plugged
> into your Network which means that I need someone in the states.
>   If you are interested, please email me (off the list is probably best
> since everyone probably doesn't want to read this thread) to work out
> the details.
>
> Pete
> --
> http://www.elbnet.com
> ELB Internet Services, Inc.
> Web Design, Computer Consulting, Internet Hosting
>
>
>
>






Finding the Bottleneck

2001-06-05 Thread Jason Lim

Hi all,

I was wondering if there is a way to find out what/where the bottleneck of
a large mail server is.

A client is running a huge mail server that we set up for them (running
qmail), but performance seems to be limited somewhere. Qmail has already
been optimized as far as it can go (big-todo patch, large concurrency
patch, etc.).

We're thinking one of the hard disks may be the main bottleneck (the mail
queue is already on a separate disk, on a separate IDE channel from the
other disks). Is there any way to find out how "utilized" the IDE
channel/hard disk is, or how hard it is working? Seems that right now the
only way we really know is by looking at the light on the server case (how
technical ;-) ). Must be a better way...

The bottleneck wouldn't be bandwidth... it is definitely with the server.
Perhaps the CPU or kernel is the bottleneck (load average: 4.84, 3.94,
3.88, going up to 5 or 6 during heavy mailing)? Is that normal for a large
mail server? We haven't run such a large mail server before (anywhere
between 500k to 1M messages per day so far, increasing each day), so ANY
tips and pointers would be greatly appreciated. We've already been playing
around with hdparm to see if we can tweak the disks, but it doesn't seem
to help much. Maybe some cache settings we can fiddle with? Maybe the mail
queue disk could use a different file cache setting (each email being from
1K to 10K on average)?
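
For scale, the raw message rate is lower than the daily total suggests; rough arithmetic (the real cost per message is the several synchronous queue-file writes qmail does, not the count itself):

```shell
# 1M messages/day works out to a modest average delivery rate.
awk 'BEGIN { printf "%.1f msgs/sec average\n", 1000000/86400 }'
```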

Thanks in advance!

Sincerely,
Jason







Re: Finding the Bottleneck

2001-06-06 Thread Jason Lim

Hi Bertrand,

Thanks for your insightful email!

Just so you know, this server is an:
AMD K6-2 500Mhz, 128M-133Mhz, 2 UDMA100 drives (IBM), 10M bandwidth.
The server runs Apache, Qmail, vpopmail (for pop3). The webserver is not
primary (doesn't have to have the fastest response time), as this is
mainly for the mailing lists. The 2 hard disks are on 2 different IDE
channels, as putting both disks on the same cable would drastically reduce
performance of both disks.

The way it is organized is that the mail spool/queue is on the 2nd disk,
while the OS and programs are on disk 1. Logging is also performed on disk
1, so that writing to the mail log won't interfere with the mail queue (as
they commonly both occur simultaneously).

From MY understanding, the "load average" shows how many programs are
running, and not really how "stressed" the CPU is. I'm not exactly sure
how this works (please correct me if I'm wrong), but 1 program taking
80% CPU might have a load average of 2, while 100 programs taking 0.5% each
would take 50% CPU and have a load average of 8. Is that correct thinking?

The reason I'm saying that is because qmail spawns a program called
"qmail-remote" for EACH email to be sent. So if 200 concurrent emails are
being sent at one time, then 200 qmail-remotes are created. Granted...
each of these takes up a tiny amount of RAM and CPU time, but I suspect
(if the above statements are correct) that the load average would be
artificially inflated because of it.

We don't use NFS on this server. NFS on Linux, as you said, is pretty
crummy and should be avoided if possible. We simply put the mail queue on
a separate hard disk.

pop3 load is extremely minimal. It's mainly an outgoing mail server (mailing
list server). People essentially use the website to send mail to the
mailing list, so load from the pop3 server won't be an issue.

About the HK job market: do you have any official qualifications (e.g. a
degree, diploma, cert., etc.)? In HK bosses like to see that... even more
than in some other countries like Australia (not sure about the US).

BTW. was your mother headmistress of St. Paul before?

Sincerely,
Jason

----- Original Message -
From: "schemerz" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Sent: Wednesday, June 06, 2001 3:57 PM
Subject: Re: Finding the Bottleneck


On Wed, Jun 06, 2001 at 11:53:22AM +0800, Jason Lim wrote:
> Hi all,
>
> I was wondering if there is a way to find out what/where the bottleneck
of
> a large mail server is.
>
> A client is running a huge mail server that we set up for them (running
> qmail), but performance seems to be limited somewhere. Qmail has already
> been optimized as far as it can go (big-todo patch, large concurrency
> patch, etc.).
>
> We're thinking one of the Hard Disks may be the main bottleneck (the
mail
> queue is already on a seperate disk on a seperate IDE channel from other
> disks). Is there any way to find out how "utilized" the IDE channel/hard
> disk is, or how hard it is going? Seems that right now the only way we
> really know is by looking at the light on the server case (how technical
> ;-) ). Must be a better way...
>
> The bottleneck wouldn't be bandwidth... it is definately with the
server.
> Perhaps the CPU or kernel is the bottleneck (load average: 4.84, 3.94,
> 3.88, going up to 5 or 6 during heavy mailing)? Is that normal for a
large
> mail server? We haven't run such a large mail server before (anywhere
> between 500k to 1M per day so far, increasing each day), so ANY tips and
> pointers would be greatly appreciated. We've already been playing around
> with hdparm to see if we can tweak the disks, but doesn't seem to help
> much. Maybe some cache settings we can fiddle with? Maybe the mail queue
> disk could use a different file cache setting (each email being from 1K
to
> 10K on average)?
>
> Thanks in advance!
>
> Sincerely,
> Jason
>
Jason,

I am a lurker on the list.  I don't run linux anymore, but recently a
friend of mine encountered a similar load problem.  Granted, he was
running sendmail, but his main bottleneck wasn't the MTA at all.  I will
explain his situation and see if you find any similarities with
yours...

The discussion pertains to the mailserver at sunflower.com.  When my
friend took over, the previous admin had left the place in a mess.  One of
the machines he inherited was a dual 400 MHz Pentium II with about 256 megs
of RAM.  It would be quite adequate for serving about 5000 accounts,
except...

1)  The server was running on a box using the Promise IDE RAID
controllers.  IDE for a small server would work fine, but this box was
getting hit a lot.  2)  This server was also the pop server.  One of the
reasons of design to run pop and sendmail on the same box was becau

Re: Finding the Bottleneck

2001-06-06 Thread Jason Lim

Hi Russell,

Here is the result of "top":

 05:51:18 up 5 days, 22:38,  1 user,  load average: 6.60, 7.40, 6.51
119 processes: 106 sleeping, 11 running, 2 zombie, 0 stopped
CPU states:  16.4% user,  18.3% system,   0.0% nice,  65.3% idle
Mem:128236K total,   124348K used, 3888K free,72392K buffers
Swap:   289160K total,0K used,   289160K free, 9356K cached

And of "qmail-qstat":
sh-2.05# qmail-qstat
messages in queue: 108903
messages in queue but not yet preprocessed: 19537

Swap is on Disk 1, because mail queue/spool is on Disk 2.

I also already added the "-" in front of most entries except the emergency
or critical ones (if I hadn't done that, the load was way higher just from
writing the log files).

Concerning the mail queue and spool being on the same disk, the reason is
that there is virtually no emails incoming, 99.999% outgoing.

About running software RAID... I've heard that CPU usage increases
dramatically if you use any form of software RAID. Is that true?
Actually... I doubt the customer would be willing to pay us to implement
this for him at the hardware level. Good RAID cards with good amounts of
RAM don't come cheap, last time I checked... :-/

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, June 06, 2001 8:05 PM
Subject: Re: Finding the Bottleneck


On Wednesday 06 June 2001 10:51, Jason Lim wrote:
> Just so you know, this server is an:
> AMD K6-2 500Mhz, 128M-133Mhz, 2 UDMA100 drives (IBM), 10M bandwidth.

How much swap is being used?  If you have any serious amount of mail being
delivered then having a mere 128M of RAM will seriously hurt performance!
RAM is also cheap and easy to upgrade...

> mainly for the mailing lists. The 2 hard disks are on 2 different IDE
> channels, as putting both disks on the same cable would drastically
reduce
> performance of both disks.

In my tests so far I have not been able to show a drastic performance
difference.  I have shown about a 20% performance benefit for using
separate cables...

> The way it is organized is that the mail spool/queue is on the 2nd disk,
> while the OS and programs are on disk 1. Logging is also performed on
disk
> 1, so that writing to the mail log won't interfere with the mail queue
(as
> they commonly both occur simultaneously).

Where is swap?


Take note of the following paragraph from syslog.conf(5):
   You  may  prefix  each  entry with the minus ``-'' sign to
   omit syncing the file after every logging.  Note that  you
   might  lose information if the system crashes right behind
   a write attempt.  Nevertheless this might  give  you  back
   some  performance, especially if you run programs that use
   logging in a very verbose manner.

Do that for all logs apart from kern.log!  Then syslogd will hardly use
any
disk access.
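
As an illustrative fragment (file names are the Debian defaults; adjust to taste):

```
# /etc/syslog.conf fragment -- a leading "-" means don't fsync after
# every logged line; kern.log stays synchronous
mail.*                          -/var/log/mail.log
*.*;mail.none;kern.none         -/var/log/syslog
kern.*                          /var/log/kern.log
```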

> From MY understanding, the "load average" shows how many programs are
> running, and not really how "stressed" the CPU is. I'm not sure exactly
> sure how this works (please correct me if i'm wrong) but 1 program
taking
> 80% CPU might have load average of 2, while 100 programs taking 0.5%
each
> would take 50% CPU and have load average of 8. Is that correct thinking?

Not.

1 program taking up all CPU time will give a load average of 1.00.  1 program
being blocked on disk IO (EG reading from a floppy disk) will give a load
average of 1.00.  Two programs blocked on disk IO to different disks and a
third program that's doing a lot of CPU usage will result in a load average
of 3.00 while the machine is running as efficiently as it can.

Load average isn't a very good way of measuring system use!
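
One way to see the IO contribution directly is to count processes in uninterruptible sleep (state "D"), each of which adds 1.00 to the load average without using any CPU. A rough sketch (ps options are procps syntax):

```shell
# Processes whose state starts with "D" are blocked on disk IO; each one
# contributes a full 1.00 to the load average while consuming no CPU time.
ps -eo stat | grep -c '^D'
```

If that count stays high while CPU sits largely idle (as in your top output), the load average is telling you about disk contention, not CPU.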

> We don't use NFS on this server. NFS on linux, as you said, is pretty
> crummy and should be avoided if possible. We simply put the mail queue
on
> a seperate hard disk.

Actually if you have the latest patches then NFS should be quite solid.


Now firstly, the OS and the syslog will not use the disk much at all if you
have enough RAM that the machine doesn't swap and has some spare memory for
caching.  Boost the machine to 256M.  Don't bother with DDR RAM as it won't
gain you anything; get 384 or 512M if you can afford it.

Next, the most important thing for local mail delivery is to have the queue
on a separate disk to the spool.  Queue and spool writes are independent and
the data is immediately sync'd.  Having them on separate disks can provide
serious performance benefits.

Also if your data is at all important to you then you should be using RAID.
Software RAID-1 in the 2.4.x kernels, and with the patch for 2.2.x kernels,
is very solid.  I suggest getting 4 drives and running two RAID-1 sets, one
for the OS and queue, the other

Re: Finding the Bottleneck

2001-06-07 Thread Jason Lim

Hi,

I found vmstat on the server, but could not find your other tools, "systat"
or "fstat". I think this is exactly what I need... especially fstat.

Does anyone know of similar programs for Linux?

Sincerely,
Jason

- Original Message -
From: "Jeremy C. Reed" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, June 07, 2001 6:43 AM
Subject: Re: Finding the Bottleneck


> On Wed, 6 Jun 2001, Jason Lim wrote:
>
> > I was wondering if there is a way to find out what/where the
bottleneck of
> > a large mail server is.
>
> Look at vmstat.
>
> vmstat can tell you about the number of processes waiting for run time,
> the amount of memory swapped to disk, blocks per second sent (and
> received) from disks, the number of interrupts and context switches per
> second, CPU usage, and more.
>
> The difficult part of using this is to have a baseline to compare it
> with. For example, this is from a lightly-loaded system:
>
>    procs                 memory      swap          io     system      cpu
>  r  b  w   swpd   free   buff  cache  si  so   bi   bo   in   cs  us sy id
>  0  0  0   3528   1804   1508  13868   0   0    0    1  104    7   1  1 98
>  0  0  0   3528   1804   1508  13868   0   0    0    0  104    7   1  1 98
>
> BSD systems have detailed systat, vmstat, and fstat utilities that are
> useful for tracking down bottlenecks. (It sure seems like Linux would have
> similar tools, but I don't know where.)
>
>   Jeremy C. Reed
> ...
>  ISP-FAQ.com -- find answers to your questions
>  http://www.isp-faq.com/
>
>
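
As a sketch of how those bi/bo columns can be scripted from a captured sample (the column positions assume the 2.2-era vmstat layout, and the sample data here is made up for illustration):

```shell
# Extract the io columns (bi = blocks in, bo = blocks out) from a saved
# vmstat capture; sustained high bo on the queue disk points at disk IO
# as the bottleneck.
cat > /tmp/vmstat.sample <<'EOF'
   procs                 memory      swap          io     system      cpu
 r  b  w   swpd   free   buff  cache  si  so   bi   bo   in   cs  us sy id
 0  0  0   3528   1804   1508  13868   0   0    0    1  104    7   1  1 98
EOF
awk 'NR > 2 { print "bi=" $10, "bo=" $11 }' /tmp/vmstat.sample
```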






Re: Finding the Bottleneck

2001-06-07 Thread Jason Lim

Hi,

I agree with you... it seems more and more likely that the disks are the
limiting factor here.

I guess the next big thing to do would be to run some form of Raid
(software or hardware) for the mail queue.

Does anyone know of a cheap but adequate RAID hardware solution? The ones
I've seen seem to cost quite a bit. I know that the common cheap ATA100
RAID cards available now (using the Highpoint HPT370) don't work properly
on Linux because of bad driver support. Anyone know of an alternative?
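
If software RAID turns out to be the answer, a raidtools-era /etc/raidtab for mirroring the queue disk would look roughly like this (device names are placeholders; newer 2.4 setups may use different md tooling):

```
# /etc/raidtab sketch: two-disk RAID-1 mirror for the mail queue volume
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/hdc1
    raid-disk             0
    device                /dev/hde1
    raid-disk             1
```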

Actually... do you think setting up a separate box (connected via NFS)
PURELY for mail queue processing would help at all? Or would the
bottleneck then be shifted to NFS?

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, June 07, 2001 3:21 PM
Subject: Re: Finding the Bottleneck


On Thursday 07 June 2001 00:11, Jason Lim wrote:
>  05:51:18 up 5 days, 22:38,  1 user,  load average: 6.60, 7.40, 6.51
> 119 processes: 106 sleeping, 11 running, 2 zombie, 0 stopped
> CPU states:  16.4% user,  18.3% system,   0.0% nice,  65.3% idle
> Mem:128236K total,   124348K used, 3888K free,72392K buffers
> Swap:   289160K total,0K used,   289160K free, 9356K cached
>
> And of "qmail-qstat":
> sh-2.05# qmail-qstat
> messages in queue: 108903
> messages in queue but not yet preprocessed: 19537
>
> Swap is on Disk 1, because mail queue/spool is on Disk 2.
>
> I also already added the "-" in front of most entries except the
emergency
> or critical ones (if I didn't do it, the load was way higher just
writing
> the log files).
>
> Concerning the mail queue and spool being on the same disk, the reason
is
> that there is virtually no emails incoming, 99.999% outgoing.
>
> About running software raid... I've heard that the CPU usage is
increased
> dramatically if you use any form of software raid. Is that true?
> Actually... i doubt the customer would be willing to pay us to implement
> this for him on a hardware level. Good raid cards with good amounts of
ram
> don't come cheap last time I checked... :-/

CPU usage isn't increased that much, and as you're only using 30% of the
CPU
time it shouldn't be problem if you use more anyway...

Disk access is the main bottleneck, anything that alleviates it is
something
you want to do.

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page











Re: Finding the Bottleneck

2001-06-07 Thread Jason Lim

Hi,

Thanks for your detailed reply.

As reliability is not of great importance (only the mail queue will be
there, and no critical system files), I'd go for speed and a cheap price.
The client doesn't have huge wads of cash for the optimal system with SCSI
drives and a 64M-cache RAID card :-/  So I guess if it comes down to the
crunch, speed and a cheap price it is.
I'll also scratch the NFS idea, since qmail wouldn't work well on it and,
as you said, there wouldn't be much benefit if the other server has the
same problems as well.

Okay... the server right now is an AMD K6-2 500MHz, 128M, with 2 15G IBM
drives. At this moment, it is really having trouble processing the mail
queue.
sh-2.05# qmail-qstat
messages in queue: 297121
messages in queue but not yet preprocessed: 72333

I checked the log, and yesterday it sent about 1 million emails. Do you
think that the above hardware configuration is performing at its
realistic limit?

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 3:42 AM
Subject: Re: Finding the Bottleneck


On Thursday 07 June 2001 20:14, Jason Lim wrote:
> I agree with you... it seems more and more likely that the Disks are
> the limiting factor here.
>
> I guess the next big thing to do would be to run some form of Raid
> (software or hardware) for the mail queue.
>
> Does anyone know of a cheap but adequate raid hardware solution? The
> one's I've seen seem to cost quite a bit. I know that the common cheap
> ATA100 Raid cards available now (using the Highpoint HPT370) don't work
> properly on Linux beacuse of the bad driver support. Anyone know of an
> alternative?

For RAID hardware there are three criteria you desire, reliability, low
price, and speed.  You can have at most two of them.

There are a number of cheap hardware RAID solutions out there which would
be quite OK for home use, but not for a server of the type you are
considering.  If you had several redundant servers with the same data
then one of the cheap hardware RAID solutions might do well though.

> Actually... do you think setting up a separate box (connected via NFS)
> PURELY for mail queue processing would help at all? Or would the
> bottleneck then be shifted to NFS?

If the NFS server has the same disk system then you will only make things
worse.  Anything you could do to give the NFS server better IO
performance could more productively be done to the main server.
Also many of the common Unix mail server programs are specifically
designed to have the queue on a local file system with standard Unix
semantics (Inode numbers etc).  Qmail is one mail server that I have
found to not work with it's queue on NFS, I haven't seriously tried any
others.  I've CC'd Brian May because he does a lot more NFS stuff with
Debian than most people and he may be able to advise you.

I suggest just getting 4 IDE disks and putting everything on a software
RAID-10 apart from /boot (which must be RAID-1 for the boot loader to
work).  It'll get the most bang for the buck!
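
With the raidtools in use at the time, RAID-10 meant nesting: two RAID-1
pairs with a RAID-0 striped across them. A rough /etc/raidtab sketch
(partition names are illustrative, not from the original post):

```
# Two mirrored pairs...
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    device                /dev/hda2
    raid-disk             0
    device                /dev/hdc2
    raid-disk             1

raiddev /dev/md1
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    device                /dev/hde2
    raid-disk             0
    device                /dev/hdg2
    raid-disk             1

# ...striped together into RAID-10
raiddev /dev/md2
    raid-level            0
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            32
    device                /dev/md0
    raid-disk             0
    device                /dev/md1
    raid-disk             1
```

/boot would stay on a plain RAID-1 (e.g. /dev/md0's first partition pair)
so the boot loader can read it.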

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page







Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

Hi,

Yes that is correct. The design of qmail forces a connection for each
email message. Changing that behaviour would require massive patching.

Sincerely,
Jason

- Original Message -
From: "Tomasz Papszun" <[EMAIL PROTECTED]>
To: "Rich Puhek" <[EMAIL PROTECTED]>
Cc: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 6:04 PM
Subject: Re: Finding the Bottleneck


> On Thu, 07 Jun 2001 at 22:47:09 -0500, Rich Puhek wrote:
> [...]
> > Also, there are probably some optimizations you can do for queue sort
> > order. I'm most familiar with Sendmail, not qmail, so I don't know the
> > exact settings, but try to process the queue according to recipient
> > domain. That way, you gain some advantages with holding SMTP connections
> > open to a server, rather than closing and reopening a session, etc.
> >
> > --Rich
>
> If my memory serves me well, qmail opens a new session for each message,
> even if this message is to be delivered to the same server.
> I may be wrong though.
>
> --
>  Tomasz Papszun   SysAdm @ TP S.A. Lodz, Poland  | And it's only
>  [EMAIL PROTECTED]   http://www.lodz.tpsa.pl/   | ones and zeros.
>
>






Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

Hi,

The network is connected via 100Mb to a switch, so server to server
connections would be at that limit. Even 10Mb wouldn't be a problem as I
don't think that much data would be crossing the cable... would it?

As for the "single machine" issue, that would depend. If you're talking
about either getting a couple of SCSI disks, putting them on a hardware
raid, or getting an additional small server just for the queue, then I
think the cost would end up approximately the same. This client doesn't
have the cash for a huge upgrade, but something moderate would be okay.

BTW, just to clarify for people who are not familiar with qmail, qmail
stores outgoing email in a special queue, not in Maildir. Only incoming
mail is stored in Maildir. The Maildirs are actually stored on Disk 1
(along with the operating system and everything else except the queue). I
know Maildir can be put in a NFS disk... BUT i've never heard of anyone
putting the mail queue on NFS, so I'm not sure if the file locking issues
you mention would pertain to that as well.

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Brian May" <[EMAIL PROTECTED]>
Cc: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 5:45 PM
Subject: Re: Finding the Bottleneck


On Friday 08 June 2001 10:49, Brian May wrote:
> Russell> If the NFS server has the same disk system then you will
> Russell> only make things worse.  Anything you could do to give
> Russell> the NFS server better IO performance could more
> Russell> productively be done to the main server.  Also many of
> Russell> the common Unix mail server programs are specifically
> Russell> designed to have the queue on a local file system with
> Russell> standard Unix semantics (Inode numbers etc).  Qmail is
> Russell> one mail server that I have found to not work with it's
> Russell> queue on NFS, I haven't seriously tried any others.  I've
> Russell> CC'd Brian May because he does a lot more NFS stuff with
> Russell> Debian than most people and he may be able to advise you.
>
> It depends on how much you want to share via NFS, what speed/type
> network you have, etc.

Jason has previously stated that he only wants a single machine and wants
things to be as cheap as possible.

> The idea of putting any queue on NFS is generally discouraged (due to
> file locking issues). Maildir is one exception to this rule.

Maildir != queue...

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page










Re: WAN Adapters...Wan in general

2001-06-08 Thread Jason Lim

We also use PR3000s with various WAN cards. Cyclades makes wonderful
products and provides great support. I recommend them. www.cyclades.com

Sincerely,
Jason

- Original Message -
From: "Teun Vink" <[EMAIL PROTECTED]>
To: "Nicolas Bougues" <[EMAIL PROTECTED]>
Cc: "Alex" <[EMAIL PROTECTED]>; "debian list" <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 6:24 PM
Subject: Re: WAN Adapters...Wan in general


> On Fri, 8 Jun 2001, Nicolas Bougues wrote:
>
> [snip]
> >
> > I believe you're talking about a T1/E1 link. Basically, the telco
> > brings you the T1/E1 trunk. Then, depending on the country/operator,
> > they provide you with a CSU/DSU, or not.
> >
> > If they do, the CSU/DSU will provide a sync serial port, either V35 or
> > X21. V35 should be avoided, connectors are ugly and expensive, X21 is
> > OK. Then you'll need a sync board with a matching serial interface
> > (see below).
> >
> > If they don't, they provide you a basic G703 T1 or E1 line. You have
> > either to buy a CSU/DSU, or to use a board that doesn't require
> > one. In this case, your board will connect directly to the 4 telco
> > wires, using (usually) an RJ45 plug.
>
> >
> > Such boards (with or without CSU/DSU) exist for Linux. Try:
> > www.sangoma.com, www.etinc.com, etc.
> >
> >
> >
>
> At my work we use Cyclades PC300 boards
> (http://www.cyclades.com/products/svrbas/pc300.htm), available with
> different types of connectors.
>
> They are quite easy to configure and offer open source drivers.
>
>
>
> Teun
>
> --
> Teun Vink - [EMAIL PROTECTED] - icq: 15001247 - http://teun.moonblade.net
>
>






Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

I agree.

I always thought that doing a local lookup would be far faster than doing
one on a remote dns cache. We use bind, and set the forwarders to 2 other
DNS servers that are only lightly loaded and on the same network.

Additionally, as far as I can see, most emails get sent to the same
moderately large list of domains (eg. aol), so the local DNS server
would've cached them already anyway.
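
A sketch of that forwarding setup in named.conf (the addresses are
placeholders, not our real servers):

```
options {
    // forward queries to two lightly loaded caches on the same network
    forwarders { 192.0.2.1; 192.0.2.2; };
    forward first;   // fall back to normal resolution if they don't answer
};
```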

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Rich Puhek" <[EMAIL PROTECTED]>; "Jason Lim"
<[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 6:18 PM
Subject: Re: Finding the Bottleneck


On Friday 08 June 2001 05:47, Rich Puhek wrote:
> In addition to checking the disk usage, memory, and the other
> suggestions that have come up on the list, have you looked at DNS?
> Quite often you'll find that DNS lookups are severely limiting the
> performance of something like a mailing list. Make sure that the mail
> server itself isn't running a DNS server. Make sure you've got one or

Why not?  When DNS speed is important I ALWAYS install a local DNS.
Requests to 127.0.0.1 have to be faster than any other requests...

> two DNS servers in close proximity to the mail server. Make sure that
> the DNS server process isn't swapping on the DNS servers (for the kind

The output of "top" that he recently posted suggests that nothing is
swapping.

> with 128 MB of RAM as your DNS server. Also, if possible, I like to
> have the DNS server I'm querying kept free from being the authoratative
> server for any domains (not always practical in a real life situation,
> I know).

How does that help?


If DNS caching is the issue then probably the only place to look for a
solution is djb-dnscache.

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page







Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

I have also thought about that... but if you have a look at Qmail's
website (http://www.qmail.org) then you'll see that a number of extremely
large mail companies (Hotmail, for one) use qmail for... get this...
outgoing mail. They could've easily chosen sendmail, postfix (maybe it
wasn't around when they designed their systems), zmailer, etc., but they
chose qmail.

Why would that be?

I'm not sure about exim (haven't used it before) but qmail can open (after
patching) over 500 or even 1000 concurrent outgoing connections. I've
patched and recompiled qmail to send 500 concurrent outgoing emails, which
I thought would be plenty. Unfortunately, I haven't had the opportunity to
ever see it go that high because (as Russell is suggesting) the hard disks
just cannot feed mail fast enough into the system. Most I've ever seen it
go was approximately 250 concurrent emails. And who said bandwidth is
always the limiting factor?
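
For anyone trying the same thing: the setting lives in qmail's
concurrencyremote control file (a single number). If I recall correctly,
stock qmail silently caps it at 120, which is why the patch and recompile
mentioned above are needed for values like 500:

```
# contents of /var/qmail/control/concurrencyremote
500
```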

If only there was some simple way to spread the mail queue over 2 disks
naturally (not by raid).
In the /var/qmail/queue directory,
sh-2.05# ls -al
total 44
drwxr-xr-x   11 qmailq   qmail4096 Jun  8 09:04 .
drwxr-xr-x   13 root root 4096 Jun  8 15:24 ..
drwx--2 qmails   qmail4096 Jun  8 21:54 bounce
drwx--   25 qmails   qmail4096 Jun  8 09:04 info
drwx--   25 qmailq   qmail4096 Jun  8 09:04 intd
drwx--   25 qmails   qmail4096 Jun  8 09:04 local
drwxr-x---2 qmailq   qmail4096 Jun  8 09:04 lock
drwxr-x---   25 qmailq   qmail4096 Jun  8 09:04 mess
drwx--2 qmailq   qmail4096 Jun  8 21:54 pid
drwx--   25 qmails   qmail4096 Jun  8 09:04 remote
drwxr-x---   25 qmailq   qmail4096 Jun  8 09:04 todo

However, as Russell also mentioned in a previous email, qmail uses inode
numbers extensively for the queue, so there would be a huge headache in
splitting them up as the system would be unable to cope with 2 different
inodes from 2 different hard disks. :-/

Sincerely,
Jason

- Original Message -
From: "Peter Billson" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 8:04 PM
Subject: Re: Finding the Bottleneck


> > Additionally, as far as I can see, most emails get sent to the same
> > moderately large list of domains (eg. aol), so the local DNS server
> > would've cache them already anyway.
>
> This has been a long thread so forgive me if this has already been
> discussed but...
>   If you are usually delivering multiple messages to a handful of
> domains wouldn't the performance be greatly improved by using Exim or
> Sendmail that is capable of delivering multiple messages per connection?
> Particularly if AOL is involved since I have seen it take four or five
> attempts to get a connection to their SPAM clogged mail servers.
>   If I understand q-mail correctly it does one message per connection
> and opening/closing the connection often takes longer then transmitting
> the data.
>
> DISCLAIMER: This post is not intended to start a religious war over
> which mail transport program is best! :-)
>
> Pete
> --
> http://www.elbnet.com
> ELB Internet Services, Inc.
> Web Design, Computer Consulting, Internet Hosting
>
>






Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim
circumstances,  such failures can result in massive
  filesystem corruption.
(I set it to 16... do you think 32 would make more sense?)

   -u Get/set interrupt-unmask flag for the drive.  A
      setting of 1 permits the driver to unmask other
      interrupts during processing of a disk interrupt,
      which greatly improves Linux's responsiveness and
      eliminates "serial port overrun" errors.  Use this
      feature with caution: some drive/controller
      combinations do not tolerate the increased I/O
      latencies possible when this feature is enabled,
      resulting in massive filesystem corruption.  In
      particular, CMD-640B and RZ1000 (E)IDE interfaces
      can be unreliable (due to a hardware flaw) when
      this option is used with kernel versions earlier
      than 2.0.13.  Disabling the IDE prefetch feature of
      these interfaces (usually a BIOS/CMOS setting)
      provides a safe fix for the problem for use with
      earlier kernels.
(this seems to have the largest performance boost)

Anyway... there it is. Maybe someone else could use these results to get a
free 10% increase as well. I stupidly set write_cache on as well, which
ended up trashing a bunch of stuff. Thank goodness at that time the server
was not being used, and I immediately rebuilt the mail queue.

Does anyone have any better configs than above, or some utility that could
further boost performance?

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; "Brian May"
<[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 7:17 PM
Subject: Re: Finding the Bottleneck


On Friday 08 June 2001 12:25, Jason Lim wrote:
> The network is connected via 100Mb to a switch, so server to server
> connections would be at that limit. Even 10Mb wouldn't be a problem as
> I don't think that much data would be crossing the cable.. would it?

10Mb shouldn't be a problem for DNS.  Of course there's the issue of what
else is on the same cable.

There will of course be a few extra milli-seconds latency, but you are
correct that it shouldn't make a difference.

> As for the "single machine" issue, that would depend. If you're talking
> about either getting a couple of SCSI disks, putting them on a hardware
> raid, or getting an additional small server just for the queue, then I
> think the cost would end up approximately the same. This client doesn't
> have the cash for a huge upgrade, but something moderate would be okay.

However getting an extra server will not make things faster, in fact it
will probably make things slower (maybe a lot slower).  Faster hard
drives is what you need!

> BTW, just to clarify for people who are not familar with qmail, qmail
> stores outgoing email in a special queue, not in Maildir. Only incoming
> mail is stored in Maildir. The Maildirs are actually stored on Disk 1
> (along with the operating system and everything else except the queue).
> I know Maildir can be put in a NFS disk... BUT i've never heard of
> anyone putting the mail queue on NFS, so I'm not sure if the file
> locking issues you mention would pertain to that as well.

For the queue, Qmail creates file names that match Inode numbers (NFS
doesn't have Inodes).  Qmail also relies on certain link operations being
atomic and reliable, while on NFS they aren't guaranteed to be atomic,
and packet loss can cause big reliability problems.

Consider "ln file file2", when an NFS packet is sent to the server the
server will create the link and return success, if the return packet is
lost due to packet corruption then the client will re-send the request.
The server will notice that file2 exists and return an error message.
The result is that the operation succeeded but the client thinks it
failed!
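
The "ln" race above can be sketched as a toy model (purely illustrative;
this is not real NFS client code, just the retry semantics):

```python
# Toy model of the retry race described above: the server performs the
# link, the success reply is lost, and the client's retransmit gets
# EEXIST -- so the client reports failure for an operation that in fact
# succeeded on the server.

class FakeNfsServer:
    def __init__(self):
        self.names = {"file"}          # "file" already exists on the server

    def link(self, src, dst):
        if dst in self.names:
            return "EEXIST"            # the link request is not idempotent
        self.names.add(dst)
        return "OK"

def client_link(server, reply_lost=True):
    reply = server.link("file", "file2")
    if reply_lost:                     # success reply dropped on the wire...
        reply = server.link("file", "file2")   # ...so the client retransmits
    return reply

server = FakeNfsServer()
result = client_link(server)
print(result)                   # EEXIST: the client thinks the link failed
print("file2" in server.names)  # True: but the link actually exists
```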

There are many other issues with NFS for this type of thing.  NFS is only
good for data that has simple access patterns (read-only files and simple
operations like mounting a home directory and editing a file with "vi"),
and for applications which have carefully been written to work with NFS
(Maildir programs).

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page










Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

This statement makes me wonder:
"Also even the slowest part of a 45G drive will be twice as fast as the
fastest part of a 15G drive."

Are you sure? I never heard this before... might be a 1% difference there,
but twice as fast? The 15G runs at 7200rpms, and I assume the 45G also
does unless it's a 10,000rpm unit (but I don't think there are any
10,000rpm units for ATA interfaces, only SCSI, right?) The cache on the
15G is 2Mb,
which I assume is the same on the 45G disk.

"That an average of over 11 messages per second.  Not bad with such tiny
hardware."

Well, it doesn't seem to come down to raw hardware performance here. From "top"
output, the 500mhz CPU isn't loaded and swap is not touched at all
(totally in ram). Many small-mid range servers don't run Raid
configurations. However, it does appear that a Raid setup will help
greatly in this case, even with the lower end hardware specs, don't you
think?

As soon as you clarify the first point, I think we'll be looking far more
closely at RAID solutions (either software or cheap hardware... which
would be better? If we go with software, might the bottleneck then be
shifted to the CPU?!)

Sincerely,
Jason

- Original Message -----
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 10:43 AM
Subject: Re: Finding the Bottleneck


On Friday 08 June 2001 00:05, Jason Lim wrote:
> Thanks for your detailed reply.
>
> As reliability is not of great importance (only the mail queue will be
> there and no critical system files), I'd go for speed and cheap price.
> The client doesn't have the huge wads of cash for the optimal system
> with scsi drives and 64M cache raid card :-/  So I guess if it comes
> down to the crunch, speed and cheap price is it.

OK.  Software RAID on ATA drives is what you will be using then.

> I'll also scratch the NFS idea since qmail wouldn't work well on it,

Qmail won't work at all on NFS...

> Okay... the server right now is an AMD k6-2 500mhz 128M with 2 15G IBM
> drives. At this moment, it is really having trouble processing the mail
> queue.
> sh-2.05# qmail-qstat
> messages in queue: 297121
> messages in queue but not yet preprocessed: 72333
>
> I checked the log and yesterday it sent about 1 million emails. Do you
> think that the above hardware configuration is performing at it's
> realistic limit?

That's an average of over 11 messages per second.  Not bad with such tiny
hardware.
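
(A quick check of that arithmetic: a million messages in a 24-hour day
averages out to just over 11 per second.)

```python
# Sanity check of the throughput figure quoted above.
messages_per_day = 1_000_000
seconds_per_day = 24 * 60 * 60          # 86400
rate = messages_per_day / seconds_per_day
print(round(rate, 1))                   # 11.6
```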

The first thing to do is to buy some decent hard drives.  Get new IBM
drives of 40G or more capacity and use only the first 15G of them (it reduces
seek times and also the first part has a higher transfer rate).  Also
even the slowest part of a 45G drive will be twice as fast as the fastest
part of a 15G drive.

Put the spool and queue on a software RAID-1 over two 45G drives and put
the rest on a software RAID-1 over the two old 15G drives.  Then the
system should be considerably more than twice as fast.

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page







Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

BTW, I think I noticed something as well (before and after the below
optimization).

sh-2.05# qmail-qstat
messages in queue: 17957
messages in queue but not yet preprocessed: 1229

With the entire queue running off one hard disk (disk 1), I never noticed
even a small backlog of unpreprocessed emails like this. It seems the
system's ability to preprocess the messages has declined since putting the
queue on disk 2.

I don't see any reason why... but anyway, facts are facts :-/

Sincerely,
Jason

- Original Message -----
From: "Jason Lim" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Russell Coker" <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 10:14 PM
Subject: Re: Finding the Bottleneck


> I agree with you that splitting the mail queue to another server wouldn't
> help, especially since you've seen the top results, and know that it isn't
> very heavily loaded with other jobs in the first place. So I think you are
> very correct in saying that the hard disk is the limit here.
>
> Today I played around with hdparm to see if I could tweak some additional
> performance out of the existing drives, and it helped by about 10% (not a
> huge jump, but anything helps!).
>
> Specifically, I set /sbin/hdparm -a4 -c3 -d1 -m16 -u1 /dev/hdc:
>
>-a Get/set sector  count  for  filesystem  read-ahead.
>   This  is  used to improve performance in sequential
>   reads of large  files,  by  prefetching  additional
>   blocks  in anticipation of them being needed by the
>   running  task.   In  the  current  kernel   version
>   (2.0.10)  this  has  a default setting of 8 sectors
>   (4KB).  This value seems good  for  most  purposes,
>   but in a system where most file accesses are random
>   seeks, a smaller setting might provide better
>   performance.  Also, many IDE drives also have a
>   separate built-in read-ahead function, which
>   alleviates the need for a filesystem read-ahead in
>   many situations.
> (Since most the emails are small and randomly placed around, I thought
> maybe 2KB read-ahead might make more sense. Tell me if I'm wrong...
> because the performance jump may not be due to this setting)
>
>-c Query/enable  (E)IDE 32-bit I/O support.  A numeric
>   parameter can be used to enable/disable 32-bit  I/O
>   support:  Currently  supported  values include 0 to
>   disable 32-bit I/O support, 1 to enable 32-bit data
>   transfers,  and  3  to enable 32-bit data transfers
>   with a  special  sync  sequence  required  by  many
>   chipsets.  The value 3 works with nearly all 32-bit
>   IDE chipsets, but incurs  slightly  more  overhead.
>   Note  that "32-bit" refers to data transfers across
>   a PCI or VLB bus to the interface  card  only;  all
>   (E)IDE  drives  still have only a 16-bit connection
>   over the ribbon cable from the interface card.
>
> (Couldn't hurt to have it going 32 bit rather than 16 bit)
>
>-d Disable/enable the "using_dma" flag for this drive.
>   This  option  only works with a few combinations of
>   drives and interfaces which support DMA  and  which
>   are known to the IDE driver (and with all supported
>   XT interfaces).  In particular,  the  Intel  Triton
>   chipset is supported for bus-mastered DMA operation
>   with many drives (experimental).  It is also a good
>   idea to use the -X34 option in combination with -d1
>   to ensure that the drive itself is  programmed  for
>   multiword DMA mode2.  Using DMA does not necessarily
>   provide any improvement in throughput or system
>   performance,  but  many  folks  swear  by it.  Your
>   mileage may vary.
> (this is a dma100 7200 drive so setting this couldn't hurt either.
> Didn't see much performance increase with this though)
>
>-m Get/set sector count for multiple sector I/O on the
>   drive.  A setting of 0 disables this feature.
>   Multiple sector mode (aka IDE Block Mode) is a
>   feature of most modern IDE hard drives, permitting
>   the transfer of multiple sectors per I/O interrupt,
>   rather than the usual one sector per interrupt.
>   When this feature is enabled, it typically reduces
>   operating system overhead for disk I/O by 30-50%.

Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

It is strange that we each get such different results.

I haven't run these in totally sanitized environments (haven't had the
luxury or time to do so), so I can't say mine were really scientific, but I
just looked at the real results of each command.

I switched back to using 4KB read-ahead as you suggested.

I couldn't set the -m setting any higher:

/dev/hdc:

 Model=IBM-DTLA-307015, FwRev=TX2OA50C, SerialNo=YF0YFT68281
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=40
 BuffType=DualPortCache, BuffSize=1916kB, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=30003120
 IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 udma5

The maxMultSect is 16 :-/

We haven't run nor tested 2.4 kernels yet... but sooner or later we'll get
there. We run a business and need things to be near 100% stable (or at
least try). Do you see any other way to tweak these disks?

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 10:49 PM
Subject: Re: Finding the Bottleneck


On Friday 08 June 2001 16:14, Jason Lim wrote:
> Today I played around with hdparm to see if I could tweak some
>
> Specifically, I set /sbin/hdparm -a4 -c3 -d1 -m16 -u1 /dev/hdc:
>
>-a Get/set sector  count  for  filesystem  read-ahead.
>   This  is  used to improve performance in sequential
>   reads of large  files,  by  prefetching  additional
>   blocks  in anticipation of them being needed by the
>   running  task.   In  the  current  kernel   version
>   (2.0.10)  this  has  a default setting of 8 sectors
>   (4KB).  This value seems good  for  most  purposes,
>   but in a system where most file accesses are random
>   seeks, a smaller setting might provide better
>   performance.  Also, many IDE drives also have a
>   separate built-in read-ahead function, which
>   alleviates the need for a filesystem read-ahead in
>   many situations.
> (Since most the emails are small and randomly placed around, I thought
> maybe 2KB read-ahead might make more sense. Tell me if I'm wrong...
> because the performance jump may not be due to this setting)

What is the block size on the file system?  What file system are you
using?

If you use Ext2 then I suggest using a 4K block size.  It will make FSCK
much faster, and it will reduce fragmentation and file system overhead
and generally make things faster.

If you use a file system with a 4K block size then it probably makes
sense to have a 4K read-ahead (but I have never tested this option).

>-c Query/enable  (E)IDE 32-bit I/O support.  A numeric
>
> (Couldn't hurt to have it going 32 bit rather than 16 bit)

If you have DMA on then the 32bit IO flag is ignored...

>-d Disable/enable the "using_dma" flag for this drive.

> (this is a dma100 7200 drive so setting this couldn't hurt either.
> Didn't see much performance increase with this though)

This is where you expect to see some performance increase, particularly
when combined with the "-m" option.

>-m Get/set sector count for multiple sector I/O on the

> (I set it to 16... do you think 32 would make more sense?)

I suggest setting it to the maximum.  Also why not use kernel 2.4.4 and
compile your kernel to enable DMA and multi-mode by default?  The 2.4
series of kernels should increase performance for disk IO even if you use
the same settings, and it has better tuning options.

>-u Get/set interrupt-unmask flag  for  the  drive.   A

> (this seem to have the largest performance boost)

That's strange.  Last time I played with that one I couldn't find any
benefit to it.  I'll have to test again.


--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page







Re: Finding the Bottleneck

2001-06-08 Thread Jason Lim

Hi,

Well... I'm not sure if you saw the "top" output I sent to the list a
while back, but the swap isn't touched at all. The 128M ram seems to be
sufficient at this time. I'm not sure that throwing more memory at it
would help much, would it? I think even if more RAM is put in, it will
just be used as buffers... er, that MIGHT help, right? It would be an easy
solution if 256M would help get an extra 20% performance :-)

Concerning the dns server, if it was not run locally, then every single
dns lookup would have to go across the network. Now that would be a LOT of
requests that would otherwise have been answered nearly instantly if it
was local.

The mail queue is on its OWN disk (disk 2). Everything else is on disk 1.
The mail queue disk ONLY has the mail queue and absolutely nothing else.
It is there for that specific purpose (even though it hasn't helped that
much). Also, log files in syslog already have "-" in front of the file
name so they aren't synced every write.
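
For reference, the convention mentioned above looks like this in
/etc/syslog.conf (file paths are the usual Debian ones):

```
# A leading '-' tells syslogd not to fsync the file after every message
mail.*          -/var/log/mail.log
mail.info       -/var/log/mail.info
```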

We've tried just about everything (except the raid setup) to get every
last ounce of performance out of this... but it doesn't seem like it's made
a huge difference :-/

Sincerely,
Jason

- Original Message -
From: "Rich Puhek" <[EMAIL PROTECTED]>
To: "Russell Coker" <[EMAIL PROTECTED]>
Cc: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, June 09, 2001 7:11 AM
Subject: Re: Finding the Bottleneck


> Memory memory memory! True, memory is not currently a limiting factor,
> but it likely could be if he were running BIND locally. As for making
> sure that the server is not authoritative for other domains, that will
> help keep other DNS demands to a minimum.
>
> The mail server will chew up a load of memory (or can anyhow.. his
> doesn't seem too bad). A highly-utilized DNS server will also chew up a
> load of memory. You do not want the DNS server to swap, so you need to
> have enough memory to be sure it can cache enough information.
>
> As for speed, if you have a machine on a LAN set up as a caching-only
> DNS server (that's what I was trying to say before), I'm thinking I'll
> take the LAN latency hit over having the MTA competing for resources
> with the DNS server.
>
> Other than that, yea, some kind of RAID solution would be cool for him.
> I'd also look at making sure /var/log is on a separate drive from
> /var/spool/mail. I saw an email that indicated that /swap was separate
> from /var/spool, but nothing about where the log files were located. Not
> synching after every write will help obviously, but I recall seeing quite
> a benefit from separate drive for /var/log and /var/spool.
>
> --Rich
>
>
> Russell Coker wrote:
> >
> > On Friday 08 June 2001 05:47, Rich Puhek wrote:
> > > In addition to checking the disk usage, memory, and the other
> > > suggestions that have come up on the list, have you looked at DNS?
> > > Quite often you'll find that DNS lookups are severely limiting the
> > > performance of something like a mailing list. Make sure that the
mail
> > > server itself isn't running a DNS server. Make sure you've got one
or
> >
> > Why not?  When DNS speed is important I ALWAYS install a local DNS.
> > Requests to 127.0.0.1 have to be faster than any other requests...
> >
> > > two DNS servers in close proximity to the mail server. Make sure
that
> > > the DNS server process isn't swapping on the DNS servers (for the
kind
> >
> > The output of "top" that he recently posted suggests that nothing is
> > swapping.
> >
> > > with 128 MB of RAM as your DNS server. Also, if possible, I like to
> > > have the DNS server I'm querying kept free from being the
authoritative
> > > server for any domains (not always practical in a real life
situation,
> > > I know).
> >
> > How does that help?
> >
> > If DNS caching is the issue then probably the only place to look for a
> > solution is djb-dnscache.
> >
> > --
> > http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
> > http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
> > http://www.coker.com.au/projects.html Projects I am working on
> > http://www.coker.com.au/~russell/ My home page
> >
>
> --
>
> _
>
> Rich Puhek
> ETN Systems Inc.
> _
>
>
>
>






Re: Finding the Bottleneck

2001-06-09 Thread Jason Lim

I'm not exactly sure how the Linux kernel would handle this.

Right now, the swap is untouched. If the server needed more ram, wouldn't
it be swapping something... anything? I mean, it currently has 0kb in
swap, and still has free memory.

Here is a recent top:

101 processes: 97 sleeping, 3 running, 1 zombie, 0 stopped
CPU states:   9.4% user,  14.0% system,   0.5% nice,  76.1% idle
Mem:128236K total,   125492K used, 2744K free,69528K buffers
Swap:   289160K total,0K used,   289160K free,10320K cached
  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 5361 qmails 4   0  2728 2728   368 R 5.6  2.1  68:58 qmail-send
11911 root   4   0  1052 1052   800 R 1.7  0.8   0:00 top
  165 root   1   0  2640 2640   860 S 0.9  2.0  25:00 named
 5367 qmailr17   0   464  464   324 S 0.9  0.3   6:58 qmail-rspawn
 1178 root   0   0   832  832   708 S 0.3  0.6   4:30 syslogd
 5365 qmaill 0   0   476  476   404 S 0.1  0.3   6:12 splogger
 5368 qmailq 1   0   396  396   332 S 0.1  0.3   5:20 qmail-clean
11988 qmailr 1   0   512  512   432 S 0.1  0.3   0:00 qmail-remote
11993 qmailr 4   0   512  512   432 R 0.1  0.3   0:00 qmail-remote
11994 qmailr 4   0   512  512   432 S 0.1  0.3   0:00 qmail-remote
11996 qmailr 5   0   512  512   432 R 0.1  0.3   0:00 qmail-remote
11997 qmailr 8   0   512  512   432 S 0.1  0.3   0:00 qmail-remote
11998 qmailr 9   0   512  512   432 R 0.1  0.3   0:00 qmail-remote
11999 qmailr10   0   512  512   432 R 0.1  0.3   0:00 qmail-remote
12000 qmailr10   0   512  512   432 S 0.1  0.3   0:00 qmail-remote
1 root   0   0   532  532   472 S 0.0  0.4   0:07 init
2 root   0   0 00 0 SW0.0  0.0   0:07 kflushd

I hope you can read the above because it won't be formatted right when I
send it, but hopefully you get the idea. As far as I know, Linux will
allocate as much free RAM as possible to buffers, rather than just leave it empty.
So ~68M in buffers sort of tells me that it has plenty of memory. I mean,
if you think more would really help, we could try more ram, but I doubt
the bottleneck really is with the memory limit...?
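
To make the reasoning above concrete, the memory that is effectively
reclaimable is free + buffers + cached from the top output (a rough sketch;
the kernel can reuse buffer and cache pages on demand):

```shell
# kB figures taken from the top output above
free_kb=2744
buffers_kb=69528
cached_kb=10320
echo $(( free_kb + buffers_kb + cached_kb ))   # effectively available kB
```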

Anyway... as for the raid solution, is there anything I should look out
for BEFORE i start implementing it? Like any particular disk or ext2
settings that would benefit the mail queue in any way? Don't want to get
everything set up, only to find I missed something critical that you
already thought of!

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; "Rich Puhek"
<[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, June 10, 2001 1:07 AM
Subject: Re: Finding the Bottleneck


On Saturday 09 June 2001 08:23, Jason Lim wrote:
> Well... I'm not sure if you saw the "top" output I sent to the list a
> while back, but the swap isn't touched at all. The 128M ram seems to be
> sufficient at this time. I'm not sure that throwing more memory at it
> would help much, would it? I think even if more RAM is put in, it will
> just use it as buffers... er, that MIGHT help, right? Would be an
> easy solution if 256M would help get an extra 20% performance :-)

More cache is very likely to help, and it requires little expense and
little work to add another 128M of RAM to the machine.  I'm not sure that
you'll get 20% more performance, I'd expect maybe 10% - but it depends on
the load patterns.

For a cheap and easy way to add performance adding RAM is the best thing
you can do IMHO.

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page










Re: Finding the Bottleneck

2001-06-09 Thread Jason Lim

Hi,

Actually, I thought they increased performance mainly if you were doing
large file transfers and such, and that small random file transfers were
not helped (or even hindered) by ReiserFS. Don't flame me if I'm wrong, as I
haven't done huge amounts of research into this, but this is just what
I've heard.

Sincerely,
Jason

- Original Message -
From: "Alson van der Meulen" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, June 10, 2001 2:32 AM
Subject: Re: Finding the Bottleneck


> On Sun, Jun 10, 2001 at 02:04:36AM +0800, Jason Lim wrote:
> > I'm not exactly sure how the Linux kernel would handle this.
> >
> [...]
> >
> > Anyway... as for the raid solution, is there anything I should look
out
> > for BEFORE i start implementing it? Like any particular disk or ext2
> > settings that would benefit the mail queue in any way? Don't want to
get
> > everything set up, only to find I missed something critical that you
> > already thought of!
> Maybe reiserfs (or xfs) might help? I know they increase
> filesystem/metadata performance in some cases...
>
>
>
>






Re: Finding the Bottleneck

2001-06-11 Thread Jason Lim

Hi,

AFAIK, even if there was a gig of ram in there, it would not allocate any
(or maybe just a little) to free memory, and would throw any free memory
into buffers anyway.

So 68M of buffers tells me it has ample free memory, or it wouldn't
allocate so much there anyway, right?

Sincerely,
Jason

- Original Message -
From: "Marcin Owsiany" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, June 11, 2001 7:10 AM
Subject: Re: Finding the Bottleneck


> On Sun, Jun 10, 2001 at 02:04:36AM +0800, Jason Lim wrote:
> > I'm not exactly sure how the Linux kernel would handle this.
> >
> > Right now, the swap is untouched. If the server needed more ram,
wouldn't
> > it be swapping something... anything? I mean, it currently has 0kb in
> > swap, and still has free memory.
> >
> > Here is a recent top:
> >
> > 101 processes: 97 sleeping, 3 running, 1 zombie, 0 stopped
> > CPU states:   9.4% user,  14.0% system,   0.5% nice,  76.1% idle
> > Mem:128236K total,   125492K used, 2744K free,69528K
buffers
> > Swap:   289160K total,0K used,   289160K free,10320K
cached
>
> Remember that adding RAM means larger buffers/cache, and so
> faster IO. Only 3 MB free memory means that Linux would really
> like more RAM for larger buffers.
>
> Marcin
> --
> Marcin Owsiany <[EMAIL PROTECTED]>
> http://student.uci.agh.edu.pl/~porridge/
> GnuPG: 1024D/60F41216  FE67 DA2D 0ACA FC5E 3F75  D6F6 3A0D 8AA0 60F4
1216
>
>
>
>






Re: Finding the Bottleneck

2001-06-11 Thread Jason Lim

Hi,

Too bad this is a production system or I would try it. I've never tried
reiserFS (neither has anyone else here) so we might test it along with a
2.4 kernel later. I hear 2.4 has integrated ReiserFS support?

Sincerely,
Jason

- Original Message -
From: "Alson van der Meulen" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, June 10, 2001 4:25 AM
Subject: Re: Finding the Bottleneck


> On Sun, Jun 10, 2001 at 04:14:10AM +0800, Jason Lim wrote:
> > Hi,
> >
> > Actually, I thought they increased performance mainly if you were
doing
> > large file transfers and such, and that small random file transfers
were
> > not help (even hindered) by reiserFS. Don't flame me if I'm wrong as I
> > haven't done huge amounts of research into this, but this is just what
> > I've heard.
> I know at least for news servers, reiserfs can be quite faster,
> so I guess it might be faster for mail too...
>
> There's really one way to be sure: test it ;)
>
>
>
>






Re: Finding the Bottleneck

2001-06-11 Thread Jason Lim

Hi,

These 15G drives are new, or they wouldn't be ata100 ;-)

Only reason we don't get 45 or 50G drives all the time is that not all
customers need that amount of space. They pay for what they get.

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Rich Puhek" <[EMAIL PROTECTED]>
Cc: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, June 10, 2001 1:04 AM
Subject: Re: Finding the Bottleneck


On Saturday 09 June 2001 01:11, Rich Puhek wrote:
> Memory memory memory! True, memory is not currently a limiting factor,
> but it likely could be if he were running BIND locally. As for making
> sure that the server is not authoritative for other domains, that will
> help keep other DNS demands to a minimum.

From memory (sic) a caching name server for an ISP with 500,000 customers
that has typically >10,000 customers online at busy times will grow to
about 200M of RAM.  Extrapolating from that I expect that 20M of RAM
should be adequate for a caching name server for the type of load we are
discussing.

If the machine is upgraded to a decent amount of RAM (128M is nothing by
today's standards and upgrading RAM is the cheapest upgrade possible)
then the amount of RAM for a caching name server should not be an issue.

> Other than that, yea, some kind of RAID solution would be cool for him.
> I'd also look at making sure /var/log is on a separate drive from
> /var/spool/mail. I saw an email that indicated that /swap was separate
> from /var/spool, but nothing about where the log files were located.
> Not synching after every write will help obviously, but I recall seeing
> quite a benefit from separate drive for /var/log and /var/spool.

My understanding of the discussion was that there was one drive for
/var/spool (which is for the queue and /var/spool/mail) and another drive
for everything else.

That should be fine, but getting some drives that are less than 3 years
old would be a good idea...

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page










Re: Finding the Bottleneck

2001-06-11 Thread Jason Lim

Hi,

More buffers makes sense... but I wonder what KIND of buffers those are.
Only if they are disk buffers would the performance be increased.

Sincerely,
Jason

- Original Message -
From: "Marcin Owsiany" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, June 11, 2001 5:37 PM
Subject: Re: Finding the Bottleneck


> On Mon, Jun 11, 2001 at 04:49:21PM +0800, Jason Lim wrote:
> > Hi,
> >
> > AFAIK, even if there was a gig of ram in there, it would not allocate
any
> > (or maybe just a little) to free memory, and would throw any free
memory
> > into buffers anyway.
> >
> > So 68M of buffers tells me it has ample free memory, it or wouldn't
> > allocate so much there anyway, right?
>
> Right, it probably would not allocate any more memory for the
> processes themselves, but my point is that "the bigger buffers,
> the better performance". I guess that 68 MB buffers isn't that
> much for such a heavily loaded machine.
>
> Marcin
>
> PS: No need to CC to me.
> --
> Marcin Owsiany <[EMAIL PROTECTED]>
> http://student.uci.agh.edu.pl/~porridge/
> GnuPG: 1024D/60F41216  FE67 DA2D 0ACA FC5E 3F75  D6F6 3A0D 8AA0 60F4
1216
>
>
>
>






Re: Finding the Bottleneck

2001-06-11 Thread Jason Lim

Hi,

I in no way pretend to know a lot about the kernel and the specific ways
it handles free memory and caches, but I just look at it from a "logical"
point of view.

Hopefully I'm not too far off course in the assumptions I make! Hope Rik
can clarify this not just for me but for everyone that's been following
this thread. Since he is the expert on this, he's the authority!

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, June 11, 2001 6:43 PM
Subject: Re: Finding the Bottleneck


On Saturday 09 June 2001 20:04, Jason Lim wrote:
> I'm not exactly sure how the Linux kernel would handle this.
>
> Right now, the swap is untouched. If the server needed more ram,
> wouldn't it be swapping something... anything? I mean, it currently has
> 0kb in swap, and still has free memory.

That is a really good point.  What you say makes a lot of sense.  However
the Linux kernel policies on when to free cache and when to swap are
always being tweaked and are very complex.

I have CC'd this message to Rik.  Rik wrote most of the code in question
and is the expert in this area.

Rik, as a general rule if a machine has 0 swap in use then can it be
assumed that the gain from adding more RAM will be minimal or
non-existant?  Or is my previous assumption correct in that it could
still be able to productively use more RAM for cache?

As a more specific issue, assuming that there is memory which is not
being touched (there always is some) will that memory ever be paged out
to allow for more caching?

I believe that Jason is using kernel 2.2.19, but I would like to have
this issue clarified for both 2.2.19 and 2.4.5 kernels if possible.

> Anyway... as for the raid solution, is there anything I should look out
> for BEFORE i start implementing it? Like any particular disk or ext2
> settings that would benefit the mail queue in any way? Don't want to
> get everything set up, only to find I missed something critical that
> you already thought of!

There is an option to mke2fs to tune it for RAID-0 or RAID-5.  I'm not
sure if it provides much benefit though, and it does not help RAID-1
(which is the RAID level you are most likely to use).

I suggest that you firstly run zcav from my bonnie++ suite on your new
hard drives.  Then allocate partitions for the most speed critical data
in the fastest parts of the drives.  Then use RAID-1 on those partitions.
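
The mke2fs RAID option referred to above is its stride setting; a hedged
sketch of the invocation (device name, chunk size, and stride value are all
assumptions, and as noted it only benefits striped levels like RAID-0/5):

```shell
# For a 64k RAID chunk size and 4k filesystem blocks, stride = 64/4 = 16.
# Older mke2fs used -R stride=N; newer versions spell it -E stride=N.
mke2fs -b 4096 -R stride=16 /dev/md0
```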

Good luck!

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page








Re: Finding the Bottleneck

2001-06-11 Thread Jason Lim

Hi,

Yeah.. they still make 15Gb drives. I think they still make 10Gbs but I'm
not sure.

Concerning the drives, since most of these are dedicated servers, the
main way we can differentiate is to provide
different amounts of RAM, CPU MHz, disk space, bandwidth, etc. Actually
15Gb is the smallest drive we offer. Most are in the range of 30-40Gbs. We
will probably be increasing these numbers soon to reflect the market
prices.

BTW we also thought about buying 30Gb drives and partitioning them so that
only 15Gb is usable, but with the advent of partition resizing proggies
(and they are pretty stable now too), we can't do that.

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Sent: Monday, June 11, 2001 7:03 PM
Subject: Re: Finding the Bottleneck


On Monday 11 June 2001 10:52, you wrote:
> Hi,
>
> These 15G drives are new, or they wouldn't be ata100 ;-)

I didn't know that they still made such small drives.  The major stores
in Amsterdam haven't sold such drives for over a year.

> Only reason we don't get 45 or 50G drives all the time is that not all
> customers need that amount of space. They pay for what they get.

When you say "they pay for what they get" are you saying that they are
trying to save money by buying tiny drives, or is this some sort of
encouragement for the customers to pay you more management fees?

If the former then you should compare the prices and see how little the
difference is.  If the latter then try and sell them on the
backup/management angle...

>
> Sincerely,
> Jason
>
> - Original Message -----
> From: "Russell Coker" <[EMAIL PROTECTED]>
> To: "Rich Puhek" <[EMAIL PROTECTED]>
> Cc: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> Sent: Sunday, June 10, 2001 1:04 AM
> Subject: Re: Finding the Bottleneck
>
> On Saturday 09 June 2001 01:11, Rich Puhek wrote:
> > Memory memory memory! True, memory is not currently a limiting
> > factor, but it likely could be if he were running BIND locally. As
> > for making sure that the server is not authoritative for other
> > domains, that will help keep other DNS demands to a minimum.
>
> From memory (sic) a caching name server for an ISP with 500,000
> customers that has typically >10,000 customers online at busy times
> will grow to about 200M of RAM.  Extrapolating from that I expect that
> 20M of RAM should be adequate for a caching name server for the type of
> load we are discussing.
>
> If the machine is upgraded to a decent amount of RAM (128M is nothing
> by today's standards and upgrading RAM is the cheapest upgrade
> possible) then the amount of RAM for a caching name server should not
> be an issue.
>
> > Other than that, yea, some kind of RAID solution would be cool for
> > him. I'd also look at making sure /var/log is on a separate drive
> > from /var/spool/mail. I saw an email that indicated that /swap was
> > separate from /var/spool, but nothing about where the log files were
> > located. Not synching after every write will help obviously, but I
> > recall seeing quite a benefit from separate drive for /var/log and
> > /var/spool.
>
> My understanding of the discussion was that there was one drive for
> /var/spool (which is for the queue and /var/spool/mail) and another
> drive for everything else.
>
> That should be fine, but getting some drives that are less than 3 years
> old would be a good idea...
>
> --
> http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
> http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
> http://www.coker.com.au/projects.html Projects I am working on
> http://www.coker.com.au/~russell/ My home page
>
>

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page







Re: user privileges with php (like with suexec)

2001-06-11 Thread Jason Lim

Hi,

This is also something that I've been looking into too, with no success
yet.

If you find something, let me know and I'll do the same! :-)

Sincerely,
Jason

- Original Message -
From: "Jeremy Lunn" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, June 11, 2001 11:21 PM
Subject: user privileges with php (like with suexec)


> I am wondering what is the best way to get similar results to suexec
> with php?
>
> I've heard of people running separate instances of apache for each
> client.  Is that likely to be a messy solution?  how much overhead would
> each instance be?
>
> The other solution is to use the CGI version of php and use suexec.  I
> still don't think this is as nice as just using the php module.
>
> Have I missed any solutions?
>
> Although I'm reluctant to hack any code that would be running as root,
> would it be possible to hack php to pass pages to child processes owned
> by the same user as the php file and if none exist create one?
>
> Thanks,
>
> --
> Jeremy Lunn
> Melbourne, Australia
>
>
>
>






Re: Finding the Bottleneck (nearly there!)

2001-06-11 Thread Jason Lim

Hi,

Something VERY interesting has occurred.

I kept playing around with the /var/qmail/queue directory, to see how I
could optimize it.

I also saw in some qmail-* manpage that mess & pid directories, and todo &
intd directories have to be on the same drive (or was that partition?
nevermind)

So since mess has the content of the emails in it, and in theory would be
the hardest on the disk (larger files than in any other directory), I
left mess and pid on disk 2, and kept todo and intd on disk 1.

sh-2.05# ls -al
total 36
drwxr-x---9 qmailq   qmail4096 Jun 10 21:12 .
drwxr-xr-x7 root root 4096 Jun 10 21:11 ..
drwx--2 qmails   qmail4096 Jun 11 23:46 bounce
drwx--   25 qmails   qmail4096 Jun 10 21:11 info
drwx--   25 qmailq   qmail4096 Jun 10 21:11 intd
drwx--   25 qmails   qmail4096 Jun 10 21:11 local
drwxr-x---2 qmailq   qmail4096 Jun 10 21:11 lock
lrwxrwxrwx1 qmailq   qmail  15 Jun 10 21:12 mess ->
/mnt/disk2/mess
lrwxrwxrwx1 qmailq   qmail  14 Jun 10 21:12 pid ->
/mnt/disk2/pid
drwx--   25 qmails   qmail4096 Jun 10 21:11 remote
drwxr-x---   25 qmailq   qmail4096 Jun 10 21:11 todo
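
The symlink layout above could be reproduced with a stop/move/link sequence
along these lines (paths and init script name are assumptions; qmail must
not be running, and ownership/permissions on the moved directories must be
preserved exactly):

```shell
/etc/init.d/qmail stop                       # assumed init script name
mv /var/qmail/queue/mess /mnt/disk2/mess
ln -s /mnt/disk2/mess /var/qmail/queue/mess
mv /var/qmail/queue/pid  /mnt/disk2/pid
ln -s /mnt/disk2/pid  /var/qmail/queue/pid
/etc/init.d/qmail start
```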


Surprise surprise... HUGE PERFORMANCE INCREASE!!!

I mean double or even triple the throughput!

sh-2.05# qmail-qstat
messages in queue: 28617
messages in queue but not yet preprocessed: 0

NO unprocessed messages (compared to having 20-50K) and only 28K messages
in queue (compared to 500K).

The mail volume has not changed since before, so besides playing with
hdparm a bit previously, nothing else has been changed.

I have NO idea why putting just mess and pid on disk 2, compared to
putting the entire queue on disk 2, would make SUCH a huge difference. It
baffles me.

Anyway... I have only observed this huge performance increase for a day,
so I will monitor this for another day and see if it keeps this up. I'll
post the findings tomorrow.

Who could've guessed? It makes SOME sense... but double to triple the
performance?

Sincerely,
Jason






Re: Finding the Bottleneck (nearly there!)

2001-06-11 Thread Jason Lim

Thought I'd mention one more thing that would be pretty important to know.

qmail is now sending up to 400-450 concurrent outgoing emails (not all the
time obviously, but easily goes up to that). Previously the maximum it
would go to would be around 50... max 100, especially when it had 20K odd
messages not preprocessed yet.
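
For anyone wanting to raise the same limit, qmail takes its outgoing
concurrency from a control file, read by qmail-send at startup (the value
shown is an assumption; stock qmail caps this setting, so very high values
like 400 need the big-concurrency patch):

```shell
echo 400 > /var/qmail/control/concurrencyremote
# qmail-send only reads control files at startup, so restart qmail after.
```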

Sincerely,
Jason

- Original Message -
From: "Jason Lim" <[EMAIL PROTECTED]>
To: "Russell Coker" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, June 11, 2001 11:59 PM
Subject: Re: Finding the Bottleneck (nearly there!)


> Hi,
>
> Something VERY interesting has occurred.
>
> I kept playing around with the /var/qmail/queue directory, to see how I
> could optimize it.
>
> I also saw in some qmail-* manpage that mess & pid directories, and todo
&
> intd directories have to be on the same drive (or was that partition?
> nevermind)
>
> So since mess has the content of the emails on it, then in theory would
be
> the most "hard" on the disk (larger files than any other directories), I
> left mess and pid on disk 2, and kept todo and intd onto disk 1.
>
> sh-2.05# ls -al
> total 36
> drwxr-x---9 qmailq   qmail4096 Jun 10 21:12 .
> drwxr-xr-x7 root root 4096 Jun 10 21:11 ..
> drwx--2 qmails   qmail4096 Jun 11 23:46 bounce
> drwx--   25 qmails   qmail4096 Jun 10 21:11 info
> drwx--   25 qmailq   qmail4096 Jun 10 21:11 intd
> drwx--   25 qmails   qmail4096 Jun 10 21:11 local
> drwxr-x---2 qmailq   qmail4096 Jun 10 21:11 lock
> lrwxrwxrwx1 qmailq   qmail  15 Jun 10 21:12 mess ->
> /mnt/disk2/mess
> lrwxrwxrwx1 qmailq   qmail  14 Jun 10 21:12 pid ->
> /mnt/disk2/pid
> drwx--   25 qmails   qmail4096 Jun 10 21:11 remote
> drwxr-x---   25 qmailq   qmail4096 Jun 10 21:11 todo
>
>
> Surprise surprise... HUGE PERFORMANCE INCREASE!!!
>
> I mean double or even triple the throughput!
>
> sh-2.05# qmail-qstat
> messages in queue: 28617
> messages in queue but not yet preprocessed: 0
>
> NO unprocessed messages (compared to having 20-50K) and only 28K
messages
> in queue (compared to 500K).
>
> The mail volume has not changed since before, so besides playing with
> hdparm a bit previously, nothing else has been changed.
>
> I have NO idea why putting the entire queue on disk 2, compared to just
> putting mess and pid on disk 2, would have SUCH a huge difference. It
> baffles me.
>
> Anyway... I have only observed this huge performance increase for a day,
> so I will monitor this for another day and see if it keeps this up. I'll
> post the findings tomorrow.
>
> Who could've guessed? It makes SOME sense... but double to triple the
> performance?
>
> Sincerely,
> Jason
>
>
>
>






Remote Rescue Disk

2001-06-16 Thread Jason Lim

Hi all,

I was about to develop my own "Remote Rescue Disk"... but thought maybe
you had a better idea or had already done this...

Regularly if the hard disk fails or needs a manual fsck (usually just
pressing y throughout), then it means a trip to the datacenter at whatever
ungodly hour it may be for this relatively simple task.

If it was possible to create a boot disk with a simple telnetd (and
minimum network support) and static e2fsck utilities, then, in theory, all
that needs to be done is to insert the disk, reboot the server, and the
telnetd binds to a special, pre-defined IP just for this emergency
purpose. Then I can telnet in from home or wherever, run e2fsck, mount the
drives, see /var/log/syslog, etc. to see what went wrong. After the
repairs, the disk can be removed, and server rebooted.
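
A minimal sketch of the idea, assuming a busybox-style static userland
(every path and name below is an assumption for illustration, not an
existing product):

```shell
# Build a one-floppy rescue tree: static shell, fsck, and a telnetd
mkdir -p rescue/bin rescue/sbin rescue/etc rescue/dev rescue/proc
cp /bin/busybox rescue/bin/           # busybox provides sh, telnetd, ifconfig
ln -s busybox rescue/bin/sh
cp /sbin/e2fsck.static rescue/sbin/   # statically linked e2fsck

# At boot, an init script would bring up the emergency IP and listen:
#   ifconfig eth0 192.0.2.10 netmask 255.255.255.0
#   telnetd -l /bin/sh
```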

Does this sound realistic? Even if 2 disks or even 3 were required, if it
means I can save a trip to the datacenter it would be worthwhile to do.

Perhaps you guys have thought of something similar, or maybe there already
IS something like this out there? Any ideas/suggestions would be greatly
appreciated.

Sincerely,
Jason







Re: Remote Rescue Disk

2001-06-16 Thread Jason Lim

Really?

Strange... last time I looked around there (a couple of months ago) I
couldn't find what I wanted. I haven't looked there again since, but I'll
have another quick check.

AFAIK there are "rescue disks" but none that also include a telnetd and
remote connection capabilities as well. I don't just want it to be able to
repair the disks... whats the point if I need to be right there at the
server? I could run the debian rescue disk as it also has e2fsck if i only
needed a regular rescue disk :-/

I'll have another quick look.

Sincerely,
Jason

- Original Message -
From: "Allen Ahoffman" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, June 16, 2001 5:47 PM
Subject: Re: Remote Rescue Disk


> there are several of these out there, look for rootboot disk on
> freshmeat.net
>
> [Charset iso-8859-1 unsupported, filtering to ASCII...]
> > Hi all,
> >
> > I was about to develop my own "Remote Rescue Disk"... but thought
maybe
> > you had a better idea or had already done this...
> >
> > Regularly if the hard disk fails or needs a manual fsck (usually just
> > pressing y throughout), then it means a trip to the datacenter at
whatever
> > ungodly hour it may be for this relatively simple task.
> >
> > If it was possible to create a boot disk with a simple telnetd (and
> > minimum network support) and static e2fsck utilities, then, in theory,
all
> > that needs to be done is to insert the disk, reboot the server, and
the
> > telnetd binds to a special, pre-defined IP just for this emergency
> > purpose. Then I can telnet in from home or wherever, run e2fsck, mount
the
> > drives, see /var/log/syslog, etc. to see what went wrong. After the
> > repairs, the disk can be removed, and server rebooted.
> >
> > Does this sound realistic? Even if 2 disks or even 3 were required, if
it
> > means I can save a trip to the datacenter it would be worthwhile to
do.
> >
> > Perhaps you guys have thought of something similar, or maybe there
already
> > IS something like this out there? Any ideas/suggestions would be
greatly
> > appreciated.
> >
> > Sincerely,
> > Jason
> >
> >
> >
> > --
> > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> >
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Remote Rescue Disk

2001-06-16 Thread Jason Lim

CCLinux, eh? Haven't heard of it... I'll scratch around Google and
Freshmeat to see if I can find it.

Sounds like it might do just what is required :-)

Sincerely,
Jason

- Original Message -
From: "Martin WHEELER" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: "Allen Ahoffman" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 6:17 AM
Subject: Re: Remote Rescue Disk


On Sun, 17 Jun 2001, Jason Lim wrote:

> AFAIK there are "rescue disks" but none that also include a telnetd and
> remote connection capabilities as well.

I seem to remember doing just this with a one-disk rescue distro called
CCLinux (Cosmic Chaos Linux?) that someone brought into a class I was
running in Dublin. One of the Eircom engineers used it to get into
one of their servers from within the classroom; no problem.
IIRC it's one of those single boot/rescue disks you can very easily add
your own mods to (e.g. telnetd, if it isn't on there already).

HTH
--
Martin Wheeler   -StarTEXT - Glastonbury - BA6 9PH - England
[1] [EMAIL PROTECTED]   http://www.startext.co.uk/

 www.gateway.gov.uk  --  the UK government's £18M Microsoft-only website
  -- "all your government's databases are now belong to us" --


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]




--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Remote Rescue Disk

2001-06-16 Thread Jason Lim

Looks like it might be the one :-)

Thanks.

Sincerely,
Jason

- Original Message -
From: "Martin WHEELER" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: "Allen Ahoffman" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 6:37 AM
Subject: Re: Remote Rescue Disk


Can't find CCLinux; but 'Nuclinux' at:

 http://tuma.stc.cx/nuclinux.php

should *definitely* sort you.
--
Martin Wheeler   -StarTEXT - Glastonbury - BA6 9PH - England
[1] [EMAIL PROTECTED]   http://www.startext.co.uk/

 www.gateway.gov.uk  --  the UK government's £18M Microsoft-only website
  -- "all your government's databases are now belong to us" --




--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Remote Rescue Disk

2001-06-16 Thread Jason Lim

www.toms.net has a floppy distro, plus links to tons of other floppy
distros, so even if it isn't the right one, I'm bound to find something
there that fits.

Thanks :-)

Sincerely,
Jason

- Original Message -
From: "Martin WHEELER" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: "Allen Ahoffman" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 6:32 AM
Subject: Re: Remote Rescue Disk


On Sun, 17 Jun 2001, Jason Lim wrote:

> CCLinux, eh? Haven't heard of it... I'll scratch around google and
> freshmeat to see if I can find it.
>
> Sounds like it might do just what is required :-)

Try : http://www.toms.net/

- there's a fair old selection to be found there.
--
Martin Wheeler   -StarTEXT - Glastonbury - BA6 9PH - England
[1] [EMAIL PROTECTED]   http://www.startext.co.uk/

 www.gateway.gov.uk  --  the UK government's £18M Microsoft-only website
  -- "all your government's databases are now belong to us" --




--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: per host bandwidth limit

2001-06-17 Thread Jason Lim

Do they all have their own IPs within your LAN? You could limit bandwidth
at a per-IP level if you want. That way, if they decide to play with
Napster and the like, they will have to put up with slow web page loading,
slow email, etc. That might encourage them NOT to use those kinds of
programs anymore.

That would not help if you really want to just slow down and limit their
use of audiogalaxy and stuff like that though.
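For reference, the per-IP limiting mentioned above can be sketched with the kernel's traffic control tools; the interface name, rates, and the 10.0.0.5 address are assumptions for illustration, and the CBQ parameters are trimmed for clarity (the same shape works with other classful qdiscs):

```shell
# Root CBQ qdisc on the LAN-facing interface (eth0 is an assumption)
tc qdisc add dev eth0 root handle 1: cbq bandwidth 100Mbit avpkt 1000

# A bounded 64kbit class for one offending host
tc class add dev eth0 parent 1: classid 1:10 cbq bandwidth 100Mbit \
    rate 64kbit allot 1514 prio 5 avpkt 1000 bounded isolated

# Steer traffic for 10.0.0.5 into that class
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dst 10.0.0.5/32 flowid 1:10
```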

Sincerely,
Jason

- Original Message -
From: "PiotR" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 1:25 PM
Subject: per host bandwidth limit


> Hi, is there a way to limit the bandwidth in a "per hosts" basis?
>
> I'm actually using CBQ / SFQ to limit bandwith for two networks in an
internet
> link.
>
> It's possible with the linux kernel+iptables+tc to make a packet queue
with
> TOS based priority?
>
> My coleagues are eating a lot of bw with the use of audiogalaxy and that
kind
> of peer2peer downloaders. I would like the packets with No delay TOS
like
> telnet and stuff, to get a high priority and not have to wait for their
place
> on the queue.
>
> Excuse me for my bad english, but I'm not native speaker.
>
> Regards.
> --
>  ... ___ ...
> |   /| |\   |
> |  /-| Pedro Larroy Tovar. PiotR | http://omega.resa.es/piotr/ |-\  |
> o-|--| e-mail: [EMAIL PROTECTED]   |--|-o
> |  \-| finger [EMAIL PROTECTED] for public gnupg key |-/  |
> |...\|_|/...|
>
> :wq
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: redundancy via DNS

2001-06-17 Thread Jason Lim

It would depend on how popular the sites hosted on the servers are. If
you set the TTLs too low, say 1 minute, then every time someone looks up
the DNS records... BLAM, your DNS servers are hit, because nothing is
cached anywhere.

So I would use something like an hour (we use this). An hour is reasonable
unless you need total 100% uptime. If you needed 100% uptime, you wouldn't
rely on DNS alone for this anyway; you'd need something more reliable like
IP takeover, dedicated hardware solutions, etc. It depends greatly on what
your budget is. The DNS servers are queried randomly, so if you have 4
DNS servers listed, then each of the 4, in theory, should get
approximately the same amount of traffic. If one of them goes down, the
client SHOULD try the next available DNS server.
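As a concrete illustration of the one-hour figure, the records you expect to repoint might carry an explicit 3600-second TTL (example.com and the addresses below are placeholders):

```
$TTL 3600
www.example.com.  3600  IN  A  192.0.2.10   ; primary server
www.example.com.  3600  IN  A  192.0.2.20   ; round-robin partner
```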

You'd also want to colocate somewhere WAY outside the same network
neighbourhood. Interestingly, a few of our clients from the USA do this.
Since we are located in Hong Kong, our networks are totally separate from
anything you use in the USA. So when the California blackouts (is that
the right term?) hit them, they were fine. If you really want to keep
everything in the USA, try to find totally separate networks... and I
mean totally (if you want to be really safe). UUNET and the big players
in the USA tend to have a few core NOCs (even if they tell you everything
is distributed and safe, blah blah blah), and if any one of them is hit
by a blackout, earthquake, etc., then the whole network is affected. This
happened to UUNET in one Asian country (won't mention which, just in case
UUNET is watching this) once... something happened to one of their core
international-link routers, and many countries were affected, including
the one our client was in. UUNET may deny it, but we... the people who
actually use them... know the true story ;-)

Anyway, if you're really into reliability, you might want to colocate in
Hong Kong. Can't get much more diversified network-wise than that. Email
me back if you're interested in working something out. Otherwise, consider
the above carefully regarding the US networks.

Sincerely,
Jason

- Original Message -
From: ":yegon" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 8:50 PM
Subject: redundancy via DNS


> we have several servers colocated with several ISP's
> i am trying to sort out some configuration that would ensure good uptime
for
> customers
>
> i want to place the html documents of every customer on two separate
servers
> connected to separate ISP's
> the dns servers will point to one server and the second one will be just
a
> backup, in case the main server goes down we just change the DNS and
point
> the affected domains to the backup server. when the main server is back
up
> the dns changes back to normal
>
> and now my questions:
> 1. what should the times in zone files be set to to enable the dns
change to
> be propagated very quickly, say 5 minutes max.
>is it possible/wise to use TTL=0
>
> 2. if a domain has 2 name servers set during registration, are both of
these
> servers used for lookups? Or is it so that just the primary is querried
if
> it works, and the secondary is querried only if the primary is not
> responding?
>
> 3. is this whole idea worth consideration anyway or should I forget it?
>
>
> thanks for answers
>
> Martin Dragun
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: redundancy via DNS

2001-06-17 Thread Jason Lim

I mentioned "hardware solutions" in my email...

however, the cost of these hardware appliances is pretty high. In theory,
you can do the same thing with a properly configured linux server at less
than half the price. Of course... the money is in the configuration ;-)

Sincerely,
Jason

- Original Message -
From: "Ken Seefried" <[EMAIL PROTECTED]>
To: ":yegon" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, June 17, 2001 10:33 PM
Subject: Re: redundancy via DNS


>
> There are a number of very effective "appliance" style solutions to doing
this.
> Please have a look at RadWare (WSD) and F5 Networks (3DNS); I have had
great
> success with both companies.  The bonus is that these solutions can
> automatically determine if a server is up.
>
> Ken Seefried, CISSP
>
> :yegon writes:
>
> > we have several servers colocated with several ISP's
> > i am trying to sort out some configuration that would ensure good
uptime for
> > customers
> >
> > i want to place the html documents of every customer on two separate
servers
> > connected to separate ISP's
> > the dns servers will point to one server and the second one will be
just a
> > backup, in case the main server goes down we just change the DNS and
point
> > the affected domains to the backup server. when the main server is
back up
> > the dns changes back to normal
> >
> > and now my questions:
> > 1. what should the times in zone files be set to to enable the dns
change to
> > be propagated very quickly, say 5 minutes max.
> >is it possible/wise to use TTL=0
> >
> > 2. if a domain has 2 name servers set during registration, are both of
these
> > servers used for lookups? Or is it so that just the primary is
querried if
> > it works, and the secondary is querried only if the primary is not
> > responding?
> >
> > 3. is this whole idea worth consideration anyway or should I forget
it?
> >
> >
> > thanks for answers
> >
> > Martin Dragun
> >
> >
> > --
> > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> >
>
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Remote Rescue Disk

2001-06-17 Thread Jason Lim

Hi Michael,

Supposing Linux does NOT boot up properly (e.g. the automatic e2fsck does
not fix the disk and it needs to be run manually), is it possible, using
your serial getty solution, to SEE the screen and input anything at that
point? That sounds like it might help solve lots of problems... but not if
it only starts AFTER e2fsck is supposed to run.
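For reference, the serial-console setup being asked about is usually wired up along these lines (the baud rate and getty invocation are assumptions; exact syntax varies with the lilo and getty versions in use):

```
# /etc/lilo.conf -- send kernel and boot messages to the first serial port
serial=0,9600n8
append="console=ttyS0,9600"

# /etc/inittab -- a login prompt on the serial line once init starts
T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100
```

Because boot-time rc output and fsck prompts go to the kernel console, redirecting the console this way should make the manual e2fsck interaction visible, and answerable, over the serial line.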

Sincerely,
Jason

- Original Message -
From: "Michael R. Schwarzbach" <[EMAIL PROTECTED]>
To: "Florian Friesdorf" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, June 18, 2001 1:04 AM
Subject: RE: Remote Rescue Disk


>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> > -Original Message-
> > From: Florian Friesdorf [mailto:[EMAIL PROTECTED]]On Behalf Of
> > Florian Friesdorf
> > Sent: Sonntag, 17. Juni 2001 16:40
> > To: [EMAIL PROTECTED]
> > Subject: Re: Remote Rescue Disk
> >
> >
> > On Sat, Jun 16, 2001 at 05:02:55PM +0800, Jason Lim wrote:
> > > Hi all,
> > >
> > > I was about to develop my own "Remote Rescue Disk"... but thought
> > > maybe you had a better idea or had already done this...
> > >
> > > Regularly if the hard disk fails or needs a manual fsck (usually
> > > just pressing y throughout), then it means a trip to the
> > > datacenter
> > at whatever
> > > ungodly hour it may be for this relatively simple task.
> > >
> > > If it was possible to create a boot disk with a simple telnetd
> > > (and minimum network support) and static e2fsck utilities, then,
> > > in
> > theory, all
> > > that needs to be done is to insert the disk, reboot the server,
> > > and the telnetd binds to a special, pre-defined IP just for this
> > > emergency purpose. Then I can telnet in from home or wherever,
> > > run
> > e2fsck, mount the
> > > drives, see /var/log/syslog, etc. to see what went wrong. After
> > > the repairs, the disk can be removed, and server rebooted.
> > >
> > > Does this sound realistic? Even if 2 disks or even 3 were
> > required, if it
> > > means I can save a trip to the datacenter it would be worthwhile
> > > to do.
> > >
> > > Perhaps you guys have thought of something similar, or maybe
> > there already
> > > IS something like this out there? Any ideas/suggestions would be
> > > greatly appreciated.
> >
> > Another approach would be, (however you need at least 2 computers)
> > to connect the computers serial ports with null-modem cables and
> > tell lilo and the kernel to use the serial port as console.
> >
> > You then logon on the one computer to get the console of the other.
> >
> > Kind of a cheap console server.
> >
> > I have not tried it, but I think it should work.
> > Could someone comment on this?
> >
> >
> > florian
> >
>
> Hi Flo!
>
> I'm using this solution for my ISDN-Router. This is a small linux-box
> with no vga-card. You have to add the line "console=ttyS0" to your
> lilo config, and then you can use a terminal program (minicom, etc.)
> to control the box. If you add a serial getty in your /etc/inittab,
> you have a console too. (this is very usefull, if your nic isn't
> working:) )
>
> Michael Schwarzbach
>
> +--+
> |  /"\ |
> |  \ / |
> |   X  ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL   |
> |  / \ |
> `~~'
>
>
> -BEGIN PGP SIGNATURE-
> Version: PGPfreeware 7.0.3 for non-commercial use <http://www.pgp.com>
>
> iQA/AwUBOyzifgUqVktPGYHYEQLElACgldup8i5bFF5GmiyNyoRbN5esL8QAoN70
> pH6RkeqoKIbBtc+fKKYNjF/p
> =HsyH
> -END PGP SIGNATURE-
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Remote Rescue Disk

2001-06-18 Thread Jason Lim

Dang... these are off-the-shelf servers built from various components.
Most use AWARD BIOSes, AFAIK :-/

Sincerely,
Jason

- Original Message -
From: "Marcel Hicking" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, June 18, 2001 8:15 PM
Subject: Re: Remote Rescue Disk


> Slightly off-topic maybe, but Intel's isp1100 (and above)
> boxes have a special server-BIOS that allows remote
> control of the machine without OS help (text mode, that is)
> That includes the BIOS itself.
> As far as I hear, our server people are quite satisfied
> with the machines, and they used to be prefering Suns...
>
> "You can take control of the system remotely over the LAN or WAN, or
> access the front-panel serial port for BIOS setup/update or text-
> based applications."
> http://channel.intel.com/business/ibp/servers/isp1100/prodbrief.htm
>
> Cheers,
> Marcel
>
>
> Jason Lim <[EMAIL PROTECTED]> 16 Jun 2001, at 17:02:
>
> > Hi all,
> >
> > I was about to develop my own "Remote Rescue Disk"... but thought
> > maybe you had a better idea or had already done this...
> >
> > Regularly if the hard disk fails or needs a manual fsck (usually just
> > pressing y throughout), then it means a trip to the datacenter at
> > whatever ungodly hour it may be for this relatively simple task.
> >
> > If it was possible to create a boot disk with a simple telnetd (and
> > minimum network support) and static e2fsck utilities, then, in theory,
> > all that needs to be done is to insert the disk, reboot the server,
> > and the telnetd binds to a special, pre-defined IP just for this
> > emergency purpose. Then I can telnet in from home or wherever, run
> > e2fsck, mount the drives, see /var/log/syslog, etc. to see what went
> > wrong. After the repairs, the disk can be removed, and server
> > rebooted.
> >
> > Does this sound realistic? Even if 2 disks or even 3 were required, if
> > it means I can save a trip to the datacenter it would be worthwhile to
> > do.
> >
> > Perhaps you guys have thought of something similar, or maybe there
> > already IS something like this out there? Any ideas/suggestions would
> > be greatly appreciated.
> >
> > Sincerely,
> > Jason
> >
> >
> >
> > --
> > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > with a subject of "unsubscribe". Trouble? Contact
> > [EMAIL PROTECTED]
> >
>
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Check NIC for speed and promisc mode

2001-06-19 Thread Jason Lim

Hi,

Pretty much as title.

How can I check the "real" connection speed of a NIC? These are Realtek
8129/8139 network cards. The LEDs on the back of the NIC aren't exactly
informative. I was hoping there was some way to do this directly in Linux
through a hardware call. I checked /proc and I couldn't find anything
pertaining to this.

On a related note, how can I check if the card is in "promisc" mode? Some
software seems to leave it in promisc mode... so once I find out if it IS
in promisc mode, how can I switch it back?

Thanks in advance.

Sincerely,
Jason



--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Check NIC for speed and promisc mode

2001-06-19 Thread Jason Lim

Thanks a lot for the fast response Bart!

Man... just a few months away from playing with these tools and I forget
everything ;-)

Just for the curious:

sh-2.05# mii-tool
eth0: negotiated 100baseTx-FD, link ok

sh-2.05# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:00:21:D8:CA:34
  inet addr:xxx.xxx.xxx.xxx  Bcast:xxx.xxx.xxx.xxx
Mask:255.255.255.0
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:57608213 errors:0 dropped:0 overruns:0 frame:0
  TX packets:61894552 errors:0 dropped:0 overruns:5 carrier:0
  collisions:0 txqueuelen:100
  RX bytes:2864521106 (2731.8 Mb)  TX bytes:3091168733 (2947.9 Mb)
  Interrupt:5 Base address:0xe800

Again, thanks.

Sincerely,
Jason

- Original Message -
From: "Bart-Jan Vrielink" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, June 19, 2001 6:51 PM
Subject: Re: Check NIC for speed and promisc mode


> On Tue, 19 Jun 2001, Jason Lim wrote:
>
> > How can I check the "real" connection speed of a NIC? These are
realtek
> > 8129/8139 network cards. The leds behind the NIC aren't exactly
> > informative. I was hoping there was some way to do this directly in
linux
> > through a hardware call. I checked /proc and i couldn't find anything
> > pertaining to this.
>
> mii-tool (Package: net-tools) or mii-diag (Package mii-diag).
>
> > On a related note, how can I check if the card is in "promisc" mode?
Some
> > software seems to leave it in promisc mode... so once I find out if it
IS
> > in promisc mode, how can i switch it back?
>
> ifconfig eth0 | grep PROMISC
> ifconfig eth0 -promisc
>
> --
> Tot ziens,
>
> Bart-Jan
>



--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: redundancy via DNS

2001-06-19 Thread Jason Lim

Hi,

I don't quite understand one bit of your statement...

> This way, if one of the connections goes down, that DNS server becomes
> unavailable and those IPs stop being handed out ... effectively removing
> those IPs from your DNS rotation and automatically failing over to the
> remaining connections.  This also provides a load balancing effect.

I can understand how DNS rotation provides rudimentary load balancing, but
how does it "fail over"? The downed DNS server's IPs (because the ISP's
link to it has been severed) cannot be transferred over to the other
links. Fail over would mean that somehow the dead DNS server's job is
taken over.

To do that with your configuration, you'd need to change the domain name's
DNS entries to either remove the dead DNS server or change its IP. If
people do a DNS lookup and you have 4 connections, then there is a 1-in-4
chance the DNS lookup may fail. Not all clients will try all the other DNS
servers before declaring the domain name unresolvable.
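The 1-in-4 figure can be sanity-checked with a quick simulation of a resolver that picks one of four listed servers at random and gives up if it happens to pick the dead one (a deliberate simplification, since real resolvers retry and track server performance):

```shell
# awk draws the random server picks; srand(1) makes the run repeatable
awk 'BEGIN {
    srand(1); trials = 100000; fails = 0
    for (i = 0; i < trials; i++)
        if (int(rand() * 4) == 2)   # index 2 is the dead server
            fails++
    printf "%.2f\n", fails / trials  # prints a value close to 0.25
}'
```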

I'm not picking holes in the system, I'm also trying to come up with a
good solution for this. The solutions we use now are similar (main
difference is we have the servers physically located in different places,
and use some dedicated hardware solutions) so if there was some way to
overcome the above problems, that would be great.

Maybe someone else on the list has already found a way to solve them?

Sincerely,
Jason

- Original Message -
From: "Fraser Campbell" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, June 19, 2001 10:29 PM
Subject: Re: redundancy via DNS


> ":yegon" <[EMAIL PROTECTED]> writes:
>
> > we have several servers colocated with several ISP's
> > i am trying to sort out some configuration that would ensure good
uptime for
> > customers
>
> We're helping a customer with a similar situation.  They have multiple
> incoming Internet connections.  What we plan to do:
>
> - Have a DNS server for each Internet connection
> - Servers are replicated/available via every connection
> - Each DNS server gives out IPs only within it's subnet
>
> This way, if one of the connections goes down, that DNS server becomes
> unavailable and those IPs stop being handed out ... effectively removing
> those IPs from your DNS rotation and automatically failing over to the
> remaining connections.  This also provides a load balancing effect.
>
> Fraser
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Qmail - huge performance increase

2001-06-20 Thread Jason Lim

Hi,

Anyone that has followed this list knows I've been trying to boost Qmail's
outgoing mail performance greatly.

Just thought I'd let everyone know that increasing the number in the
"conf-split" file drastically improves performance.

One of the problems I was having earlier was that the customer's server
had a HUGE number of emails, and even just doing "ls" in each directory in
the qmail queue directories would take ages, as there were so many
individual emails queued.

SO... by increasing conf-split to 97 (from the default of 20 something
afaik), each directory ends up only having a hundred or so files. Doing
"ls" now is far speedier.

I couldn't find any documentation anywhere stating this, so I'll share it
with you all :-)
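For anyone wanting to try the same thing, note that conf-split is a compile-time setting, so the change involves a rebuild and a queue that matches the new layout; a rough sketch, assuming a stock source install under /var/qmail (queue-fix is a third-party utility commonly used for the last step):

```shell
# In the qmail source tree (conf-split is conventionally a prime,
# which spreads the hashed queue files evenly across subdirectories)
echo 97 > conf-split
make setup check            # rebuild and reinstall

# The on-disk queue layout no longer matches the binary, so stop qmail
# and rebuild /var/qmail/queue (e.g. with queue-fix) before restarting.
```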

Sincerely,
Jason


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




KVM via Internet?

2001-06-24 Thread Jason Lim

Hi,

I was wondering if you guys know of any cost-effective KVM (remote
access/control) solution that can be accessed over the internet?

Everyone knows about the cheapo products where you have to press a button
to switch between computers, but how about being able to access these
over the net (especially useful if you live far away from the
datacenter)?

Just in case you're not sure of what I'm talking about, I mean something
like the Rose Ultralink (http://www.rosel.com/htm/ultralink.htm). It is
nearly exactly what I need, BUT... the cost... is almost astronomical. I
don't need those 64 port things... this is just for about 4-5 servers.

I'm not sure if there is some way to hook up those cheap "push button
KVMs" to a server, and have the server pass the video feed over the net
somehow. Perhaps some video capture card in a server could be hooked up to
those cheap KVMs to pass the video feed that way? There seem to be lots of
POSSIBLE ways to do it, but I'm not exactly sure how.

The main reason for all this is to be able to see what I would normally
see sitting in front of the server during bootup. So, for example, if I
saw e2fsck fail during bootup (requiring the root password and a manual
e2fsck run), I would be able to do something about it rather than go all
the way to the datacenter just to press the Y key a few times and reboot.
(If you guys know a good way to get around that, that would be great too,
especially if I can't find any solution for the above.)
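On the narrower point of just pressing 'y' a few times: Debian's boot scripts can be told to attempt the repair automatically instead of stopping at a prompt, which may remove the need for the trip in the common case (whether this is safe enough depends on how badly the filesystem is damaged):

```shell
# /etc/default/rcS -- makes the boot-time checks pass -y to fsck
FSCKFIX=yes
```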

Thanks in advance!

Sincerely,
Jason



--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: KVM via Internet?

2001-06-24 Thread Jason Lim

Hi,

Looks good but unfortunately is ISA only :-/

Since many of our servers don't have legacy ISA support, it won't work :-/

Any other ideas? That one looked pretty good. I wish they had one that
translated the video directly to ASCII data that could be pumped over an
Ethernet connection ;-)

Sincerely,
Jason

- Original Message -
From: "Mark Janssen" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, June 25, 2001 4:59 AM
Subject: Re: KVM via Internet?


> On Mon, Jun 25, 2001 at 03:41:26AM +0800, Jason Lim wrote:
> > Hi,
> >
> > I was wondering if you guys know of any cost-effective KVM (remote
> > access/control) solution that can be accessed over the internet?
>
> I think you are looking for a RealWeasel 2000
>
> I think it's www.realweasel.com
>
> Try it is should do what you like (convert video to text and put it
on
> the serial/network... and put input from serial to keyboard in...
>
> It converts to serial... but you can connect the serial to another
server or
> whatever to make it networked...
>
> They even have a telnettable demo system so you can try for yerself...
>
> --
> Mark Janssen Unix Consultant @ SyConOS IT
> E-mail: [EMAIL PROTECTED]  GnuPG Key Id: 357D2178
> http: maniac.nl, unix-god.[net|org], markjanssen.[com|net|org|nl]
> Fax/VoiceMail: +31 84 8757555 Finger for GPG and GeekCode
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>



--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: MTA - MLM - DNS configuration question

2001-06-30 Thread Jason Lim

I've been optimizing a number of email servers for a client, and I can
tell you that ANY disk access apart from the mail system will severely
impact the speed of the server, unless you're talking really low volume.
As soon as you get to around 200-300K messages per day, you're going to
need to separate the www from the mail if possible. Of course, it also
depends on how heavy the web traffic is as well... but think about the
above first. If you give us more figures, we could give you a better idea.

Sincerely,
Jason

- Original Message -
From: "Eirik Dentz" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, June 30, 2001 10:43 AM
Subject: MTA - MLM - DNS configuration question


> I've been asked to set up a MLM along side a web server and I wanted to
ask
> a quick question to the experienced, before I put a lot of time into
setting
> this up.
>
> My situation: I'm responsible for an web server that has sendmail
installed
> and is configured to send email via Perl and PHP scripts, but doesn't
> receive any.  Recently my supervisor has asked me to set up mailing list
> capabilities on the web server, because the IS department doesn't have
the
> capacity to do so at present and they want tight integration between the
> mailing lists and the web server (web based subscribe/unsubscibe pages
for
> lists and archives).  Based upon various threads that I've followed on
this
> list and other research, I've decided to switch from sendmail to postfix
and
> to use the GNU Mailman MLM (I'm open to other suggestions...)
>
> My question is this: The DNS is under the jurisdiction of the IS
department
> and the MX record @mydomain.org is set up to point at their email
server.
> Does it make sense and is it possible to set up another MX record:
> @lists.mydomain.org which will point at the web server?
>
> I realize that it is generally a bad idea to set up your web server to
do
> double duty as an email server.  Any ideas regarding at what message
volume
> a mail server will have a serious negative impact on a web server
running on
> the same machine would be appreciated.
>
> Thanks in advance
>
> eirik
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: Salt for /etc/shadow and passwd?

2001-07-17 Thread Jason Lim

Okay... I wasn't thinking. The salt is stored within the generated
crypted password, which is why password crackers work.

Well... hopefully you can confirm this :-)

Sincerely,
Jason

- Original Message -
From: "Jason Lim" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, July 18, 2001 3:17 AM
Subject: Salt for /etc/shadow and passwd?


> Hi!
>
> I was wondering where the salt is stored on a debian linux system...?
>
> I want to cp /etc/shadow from one server to another for simplicity, and
I
> would rather not have to regenerate all the crypted passwords over
again.
>
> So... if I can make the salt for both servers the same then that SHOULD
> work, right?
>
> Well, thanks in advance!
>
> Sincerely,
> Jason
>


--  
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Salt for /etc/shadow and passwd?

2001-07-17 Thread Jason Lim

Hi!

I was wondering where the salt is stored on a Debian Linux system...?

I want to cp /etc/shadow from one server to another for simplicity, and I
would rather not have to regenerate all the crypted passwords over again.

So... if I can make the salt the same on both servers, then that SHOULD
work, right?

Well, thanks in advance!

Sincerely,
Jason







Re: Salt for /etc/shadow and passwd?

2001-07-17 Thread Jason Lim

Hi,

Thanks for the confirmation :-)

Sincerely,
Jason Lim

- Original Message -
From: "Thomas Morin" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, July 18, 2001 4:02 AM
Subject: Re: Salt for /etc/shadow and passwd?


> -. Jason Lim (2001-07-18) :
>  |
>  | Okay... I wasn't thinking. The salt is stored within the crypted
>  | password generated, which is why password crackers work.
>
> Yes, with crypt(3) the salt is precisely the first two characters.
>
> -tom
>
> --
> == Thomas.Morin @webmotion.comSysAdmin/R&D
> == Phone: +1 613 731 4046 ext113 \Fax: +1 613 260 9545
> == PGP/keyID: 8CEA233D
> == PGP/KeyFP: 503BF6CFD3AE8719377B832A02FB94E08CEA233D
> --
>
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> http://www.zentek-international.com/
>






Re: Ethernet Card recommendation

2001-09-02 Thread Jason Lim

Don't D-Links use the Realtek (RTL) chipset?

Sincerely,
Jason

- Original Message -
From: "Frank Louwers" <[EMAIL PROTECTED]>
To: "Christofer Algotsson" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, September 02, 2001 7:20 PM
Subject: Re: Ethernet Card recommendation


>
> > D-Link can't do constant load; they hang or just lose the ability to
> > talk to other hosts on the network.
>
> I have about 5 dlinks i can hang with just a pingflood! I'll never buy
> dlink nics again!
>
> Frank
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
> http://www.zentek-international.com/






Re: Data Center, enviromental recomendations

2001-09-13 Thread Jason Lim

Hi,

If you're interested in that, there is a good bit about datacenter
temperature and humidity, plus a few other environmental factors at:
http://www.merit.edu/mail.archives/nanog/1998-10/msg00276.html

Have a look. Makes good reading. We learnt a bit.

Sincerely,
Jason

- Original Message -
From: "Jeff S Wheeler" <[EMAIL PROTECTED]>
To: "z-deb-isp" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 14, 2001 1:16 AM
Subject: RE: Data Center, enviromental recomendations


> There was a lengthy thread on NANOG about this very question a couple
years
> ago.  Check out the archives at
http://www.merit.edu/mail.archives/nanog/
>
> - jsw
>
>
> -Original Message-
> From: Felipe Alvarez Harnecker [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, September 13, 2001 12:21 PM
> To: [EMAIL PROTECTED]
> Subject: Data Center, enviromental recomendations
>
>
>
> Hi,
>
> what are the standar temperature, humidity, etc range for a data
> center ?
>
> Is the any paper on the subject ?
>
> Thanx.
>
> --
> __
>
> Felipe Alvarez Harnecker.  QlSoftware.
>
> Tels. 665.99.41 - 09.874.60.17
> e-mail: [EMAIL PROTECTED]
>
> http://qlsoft.cl/
> http://ql.cl/
> __
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
> [EMAIL PROTECTED]
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> http://www.zentek-international.com/
>






Re: Roach Motel For Packets...

2001-09-30 Thread Jason Lim

Why not bridge eth0 and eth1?
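Bridging aside, the symptom (replies arriving on eth1 but never leaving) is the classic case for source-based policy routing, so traffic sourced from each uplink's addresses goes back out the interface it came in on. A hedged iproute2 sketch — the table number and the B.B.B.1 gateway are made up, and a 2.2 kernel would need the equivalent done with its own routing tools:

```shell
# Give replies sourced from B.B.B.B their own routing table (hypothetical
# gateway B.B.B.1) so they leave via eth1 instead of the default eth0.
ip route add default via B.B.B.1 dev eth1 table 100
ip rule add from B.B.B.B/32 table 100
ip route flush cache
```

With only the single default route via eth0, replies to packets that came in eth1 have nowhere sane to go, which matches the "roach motel" behaviour described below.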

- Original Message -
From: "Peter Billson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, September 30, 2001 9:25 PM
Subject: Re: Roach Motel For Packets...


> Let me see if bad drawings help any:
>
> eth0(to Internet IP "A.A.A.A")--|--|
> |Router|--eth2(192.168.1.1)
> eth1(to Internet IP "B.B.B.B")--|--|  eth2:0(10.0.0.1)
>
>
> and
>
> |---|
> <<--to router --eth0(192.168.1.2)---|PC #1 -localnet|
> eth0:0 (10.0.0.2)   |---|
>
> All traffic to and from 192.168.1.0/27 goes over A.A.A.A
> All traffic to and from 10.0.0.0/27 goes over B.B.B.B
> A.A.A.A is the default gateway for all other traffic
>
> If I log into the router I can ping any IP, on any interface including
> my telco's first hop out eth0 and eth1. Packets get routed as expected.
>
> If I log into PC#1 I can ping any interface on the router, anything on
> the localnet and anything on the Internet (through the router's eth0
> which is the default gateway) but I can not ping anything on the remote
> side of the router's eth1.
>
> If I log into a remote machine I can ping any IP serviced by eth0, can
> ping my telco's side of the eth1 connection but can not reach any IPs
> serviced by eth1, including eth1 itself.
>
> I'm using ipchains to log *all* packets on every interface and in all
> the above examples I can see the ping packets come in eth1 but that's
> it. They never attempt to leave through any interface.
>
> Note the IPs in the example are fake. The real IPs are in the public IP
> space so the problem isn't trying to route these private IPs over the
> internet. :-)
>
> The ipchains rules are:
> # Rules for eth0 these work!
> ipchains -A input   -i eth2 -s 192.168.1.0/27 -j ACCEPT
> ipchains -A output  -i eth2 -d 192.168.1.0/27 -j ACCEPT
> ipchains -A forward -i eth0 -s 192.168.1.0/27 -j ACCEPT
> ipchains -A forward -i eth2 -d 192.168.1.0/27 -j ACCEPT
>
> # Rules for eth1 these don't!
> ipchains -A input   -i eth2 -s 10.0.0.0/27 -j ACCEPT
> ipchains -A output  -i eth2 -d 10.0.0.0/27 -j ACCEPT
> ipchains -A forward -i eth1 -s 10.0.0.0/27 -j ACCEPT
> ipchains -A forward -i eth2 -d 10.0.0.0/27 -j ACCEPT
>
> # And of course there are other rules allowing traffic in and out eth0
> and eth1.
>
> I'm stumped! I'd be happy if it was a routing problem that I could see
> or  firewall rule screwing things up.
>
> Is there, maybe, something I need to do when I give the NIC an alias?
>
> Pete
>
>
> > I am not sure if I understand this exactly. It may help to have more
> > information.
> >
> > I have a feeling your replies are being sent out but are being
firewalled
> > by another router, since they appear to have a source address that
doesn't
> > belong to its network (i.e. address spoofing, SMURF attack).
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> http://www.zentek-international.com
>






Java?

2001-10-18 Thread Jason Lim

Hi,

I was wondering if anyone knew of a packaged jvm (java virtual machine)
and some associated libs.

Perhaps someone has packaged Sun's? Or IBM's?

Is Kaffe okay? Someone wants to run Java programs (e.g. java
programnamehere) and such, and we need to come up with something.
Unfortunately, due to license restrictions, Sun's implementation cannot be
put on the official Debian package list... but has anyone ELSE done it?

Just wondering... why duplicate work?

Sincerely,
Jason







Re: Java?

2001-10-18 Thread Jason Lim

Hi,

I'll try those and see how it goes. We're running unstable (yes, on a
production machine... amazing, eh?), so I'll let you know how it turns out.

Sincerely,
Jason

- Original Message -
From: "Martin Swany" <[EMAIL PROTECTED]>
To: "Rikki Hall" <[EMAIL PROTECTED]>
Cc: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, October 19, 2001 5:57 AM
Subject: Re: Java?


> Woody debs are here:
>
> deb ftp://metalab.unc.edu/pub/linux/devel/lang/java/blackdown.org/debian
> woody non-free
>
>
> > Blackdown does it.
> >
> > The relevant Debian packages are jdk1.1 (~= jre), jdk1.1-dev (full
jdk),
> > and the native threads versions of each.  These provide the 1.1.8
version
> > of Java.  If you want the most recent version (1.3.1), you need to get
> > it from www.blackdown.org.  No .debs that I know of, but it's a
trivial
> > install.  Many common browsers only support Java 1.1, but if you are
> > talking about running applications, 1.3 will work.
> >
> > Rikki Hall
> >
> > --> Tomorrow, Jason Lim said:
> >
> > > Hi,
> > >
> > > I was wondering if anyone knew of a packaged jvm (java virtual
machine)
> > > and some associated libs.
> > >
> > > Perhaps someone has packaged Sun's? Or IBM's?
> > >
> > > Is Kaffe okay? Someone wants to run java programs (eg. java
> > > programnamehere) and such, and we need to come up with something.
> > > Unfortunately, due to license restrictions, Sun's implementation
cannot by
> > > put on the official Debian pkg list... but has anyone ELSE done it?
> > >
> > > Just wondering... why duplicate work?
> > >
> > > Sincerely,
> > > Jason
> > >
> > >
> > >
> > > --
> > > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > > with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> > >
> >
> >
> > --
> > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Re: Survey .. how many domains do you host? (Now RAID)

2001-11-02 Thread Jason Lim

Hi Dave...

Hmm... if the Highpoint chipsets are merely IDE controllers... what's the
advantage to using them over regular plain-vanilla generic IDE
controller cards?

Don't they offload ANY work from the processor at ALL? They have to have
SOME sort of benefit... otherwise, why market them as RAID controllers?

Sincerely,
Jason
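For what it's worth, if the Highpoint ends up being used as a plain controller, the kernel's own software RAID can still mirror the two drives. A raidtools-era /etc/raidtab sketch — the device names /dev/hde1 and /dev/hdg1 are assumptions about where the card's channels show up:

```
# /etc/raidtab -- hypothetical RAID-1 pair across the add-on card's channels
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    device                /dev/hde1
    raid-disk             0
    device                /dev/hdg1
    raid-disk             1
```

mkraid /dev/md0 would then initialize the mirror; the CPU does the mirroring work either way, which is the point being made below.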

- Original Message -
From: "Dave Watkins" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, November 03, 2001 10:07 AM
Subject: Re: Survey .. how many domains do you host? (Now RAID)


>
>
> Contrary to popular belief the Highpoint chipsets are only software
RAID.
> The driver uses processor time to actually do the RAID work. The chip is
> just an IDE controller. Based on that even if it isn't supported at a
RAID
> level you can still use the software RAID avaliable in linux as the
kernel
> has had standard IDE drivers for the highpoint for a while now
>
> Hope this helps
>
> At 08:35 AM 11/3/01 +1100, you wrote:
> >On the topic of RAID...
> >
> >does anyone know if the HighPoint RAID chipsets are supported YET?
> >
> >BSD has had support for this for ages... linux in the game yet?
> >
> >Sincerely,
> >Jason
> >
> >- Original Message -
> >From: "James Beam" <[EMAIL PROTECTED]>
> >To: <[EMAIL PROTECTED]>
> >Sent: Saturday, November 03, 2001 6:07 AM
> >Subject: Re: Survey .. how many domains do you host?
> >
> >
> > > Wouldn't something like this totaly depend on the hardware resources
and
> > > general config/maintenance of the server?
> > >
> > > I can tell you that one of my servers running an older copy of
> >qmail/vchkpw
> > > is running over 800 domains with lots of steam to spare (each domain
is
> > > minimal traffic). Hardware is a PIII733 w256MB ram and 30GIG EIDE
drives
> > > (promise mirror)
> > >
> > > - Original Message -
> > > From: "alexus" <[EMAIL PROTECTED]>
> > > To: "Steve Fulton" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> > > Sent: Friday, November 02, 2001 11:49 AM
> > > Subject: Re: Survey .. how many domains do you host?
> > >
> > >
> > > > um.. m'key..
> > > >
> > > > you should've state that before so no one would get wrong thoughts
> >(like i
> > > > did)
> > > >
> > > > - Original Message -
> > > > From: "Steve Fulton" <[EMAIL PROTECTED]>
> > > > To: "alexus" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> > > > Sent: Friday, November 02, 2001 1:58 AM
> > > > Subject: Re: Survey .. how many domains do you host?
> > > >
> > > >
> > > > > > and who are you to do such a survey?
> > > > >
> > > > >   Down boy!  Down!  LOL!
> > > > >
> > > > >   No need to snap, I'm doing this because a PROGRAM I AM WRITING
has
> > > > > VARIABLES that need to be defined to a certain array size, as
they
> >will
> > > > hold
> > > > > FQDN's.  In order to make this program universally useful, I
would
> >like
> > > to
> > > > > know the maximum number of domains that has been (realistically)
> >hosted
> > > on
> > > > > one server.
> > > > >
> > > > >   K?
> > > > >
> > > > > -- Steve
> > > > >
> > > > > http://www.zentek-international.com/
> > > >
> > > >
> > >
> > >
> >
> >
> >--
> >To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> >with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Re: Mail server

2001-11-03 Thread Jason Lim

How often will these people be checking email? ONLY through the webmail
interface, or will they be checking by pop3, imap, etc.?

If they start playing around with imap and storing large files and
attachments on your server, the requirements will vary greatly.

If you're doing a Hotmail-style setup (2 MB per user), then you can get by
with virtually any kind of hardware above a Pentium 233 MMX ;-)

Sincerely,
Jason

- Original Message -
From: "James" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, November 04, 2001 11:55 AM
Subject: Mail server


> I'm going to be setting up a mail server (Exim + uwimapd + IMP webmail)
> that will serve about 300-500 users.
>
> There will not be a major amount of traffic being put through it and was
> wondering if anyone had any cost effective hardware recommendations for
> CPU/RAM/HD space?
>
> - James
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> http://www.zentek-international.com
>






Re: stable vs testing

2001-11-10 Thread Jason Lim

You just don't know what you're missing...

till you run unstable.


- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, November 11, 2001 1:09 PM
Subject: Re: stable vs testing


> On Sun, Nov 11, 2001 at 10:30:56AM +1100, Craig Sanders wrote:
> > On Fri, Nov 09, 2001 at 03:32:29AM +1100, Jason Lim wrote:
> > > We run unstable on our production servers. That means we must be
very
> > > vigilant in making sure no one else has had a problem. We download
> > > the updates, and install them a day or two later after other people
> > > have tested it and made sure it doesn't totally destroy the box. The
> > > reason we run unstable is because quite a few times we've needed new
> > > software, and it just wasn't in stable.
> >
> > another good idea is to install the same packages that your server
> > requires on another machine (e.g. a development box or your
> > workstation). then test every upgrade on that box before doing it on
> > your production server. if the upgrade works smoothly on the
workstation
> > then it's probably OK to run on the production server. if not, then
wait
> > a few days and run a test upgrade again.
> >
> > once you've done this a few times, you get a feel for what kinds of
> > problems to look out for, what to keep an eye on during & after the
> > upgrade.
>
> ...
>
> > in my experience, there is far less risk in upgrading regularly &
often
> > than there is in upgrading only when there is a new stable release.
you
> > get small incremental changes rather than one enormous change...one
> > advantage of this is that if something does go wrong, it's generally
> > only one or two problems at a time, which is much easier to deal with
> > than dozens or hundreds of simultaneous problems.
> ...
> > here's a good rule of thumb for deciding whether to run unstable:
> >
> > if you are highly skilled and you need the new versions in unstable
then
> > it's worth the risk to run unstable.
> >
> > if not, then stick to stable. most packages in unstable can easily be
> > recompiled for stable (depending on which dependancies you also have
to
> > recompile for stable...if there's too many, then it becomes more work
> > and more risk to recompile than it is to just upgrade to unstable)
>
> Yes, I can second that.  Excepting only that if you are skilled enough
to
> recompile unstable source on stable you are probably more than skilled
enough
> to run vanilla unstable.  :-)
>
> We typically upgrade all our development machines first.  For the most
part,
> that catches most of the issues.
>
> --
>
> Christopher F. Miller, Publisher
[EMAIL PROTECTED]
> MaineStreet Communications, Inc   208 Portland Road, Gray, ME
04039
> 1.207.657.5078
http://www.maine.com/
> Content/site management, online commerce, internet integration, Debian
linux
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Re: customizing debian apache

2001-11-08 Thread Jason Lim

Apparently the path is hardcoded into suexec for "security reasons", so...

Anyway, as Jeremy said (and much to our frustration), the suexec source
included is missing critical files required for compilation, so we grab
the Apache source code and just build suexec (I think I've mentioned this
before... anyway).

Sincerely,
Jason Lim
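The rebuild we do is roughly the following — a sketch only: the version number and docroot are placeholders for your own, and the flags are the stock Apache 1.3 APACI suexec options rather than anything Debian-specific:

```shell
# Rebuild only suexec from the Apache source, with our own hardcoded
# docroot (hypothetical paths; adjust for your layout).
apt-get source apache
cd apache-1.3.22
./configure --enable-suexec --suexec-caller=www-data \
            --suexec-docroot=/home/virtual --suexec-userdir=public_html
make -C src/support suexec
install -m 4754 -o root -g www-data src/support/suexec /usr/lib/apache/suexec
```

Keep the setuid bit and ownership exactly as the packaged binary had them, or Apache will refuse to use it.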

- Original Message -
From: "Jeremy C. Reed" <[EMAIL PROTECTED]>
To: "Matt Hope" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, November 09, 2001 11:39 AM
Subject: Re: customizing debian apache


> On Thu, 8 Nov 2001, Matt Hope wrote:
>
> > fairly tidy. An alternative would be patching suexec to accept a
run-time
> > path (from what I gather, this is non-trivial)
>
> The apache-common package comes with some suexec source. But I am not
sure
> why -- since it is unusable because it is missing several more headers.
>
> Anyways, I often use sed to patch suexec:
>
> $ cat -v sed.expression
> s/\/home\/httpd\/html/\/home^@^@^@^@^@^@^@^@^@^@^@/
>
>   Jeremy C. Reed
> ..
>  ISP-FAQ.com -- find answers to your questions
>  http://www.isp-faq.com/
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
> http://www.zentek-international.com
>






Re: Survey .. how many domains do you host? (Now RAID)

2001-11-02 Thread Jason Lim

On the topic of RAID...

does anyone know if the HighPoint RAID chipsets are supported YET?

BSD has had support for this for ages... linux in the game yet?

Sincerely,
Jason

- Original Message -
From: "James Beam" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, November 03, 2001 6:07 AM
Subject: Re: Survey .. how many domains do you host?


> Wouldn't something like this totally depend on the hardware resources and
> general config/maintenance of the server?
>
> I can tell you that one of my servers running an older copy of
qmail/vchkpw
> is running over 800 domains with lots of steam to spare (each domain is
> minimal traffic). Hardware is a PIII733 w256MB ram and 30GIG EIDE drives
> (promise mirror)
>
> - Original Message -
> From: "alexus" <[EMAIL PROTECTED]>
> To: "Steve Fulton" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> Sent: Friday, November 02, 2001 11:49 AM
> Subject: Re: Survey .. how many domains do you host?
>
>
> > um.. m'key..
> >
> > you should've state that before so no one would get wrong thoughts
(like i
> > did)
> >
> > - Original Message -
> > From: "Steve Fulton" <[EMAIL PROTECTED]>
> > To: "alexus" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> > Sent: Friday, November 02, 2001 1:58 AM
> > Subject: Re: Survey .. how many domains do you host?
> >
> >
> > > > and who are you to do such a survey?
> > >
> > >   Down boy!  Down!  LOL!
> > >
> > >   No need to snap, I'm doing this because a PROGRAM I AM WRITING has
> > > VARIABLES that need to be defined to a certain array size, as they
will
> > hold
> > > FQDN's.  In order to make this program universally useful, I would
like
> to
> > > know the maximum number of domains that has been (realistically)
hosted
> on
> > > one server.
> > >
> > >   K?
> > >
> > > -- Steve
> > >
> > > http://www.zentek-international.com/
> >
> >
>
>






kernel: eth0: Memory squeeze, deferring packet.

2001-09-27 Thread Jason Lim

Hi all,

On a 2.2.19 kernel box, this happened:

21:30:12 alpha -- MARK --
21:50:11 alpha -- MARK --
22:02:05 alpha kernel: eth0: Memory squeeze, deferring packet.
22:02:05 alpha last message repeated 46 times
22:02:17 alpha netsaint: Warning: fork() in my_system() failed for command
"/usr/lib/netsaint/plugins/check_ping 216.115.102.79 100 100 5000.0
5000.0 -p1"

and from then on in, all attempts by the server to ping out or do any
outbound transactions fail. Inbound (eg. pinging the server from another
box) also fail. The server however is operating, and one can login from
the console directly.

eth0 is a Realtek el-cheapo card (but it has run reliably for over a year).

After a soft reboot all is fine again.

I did a search on Google and didn't find much on this. Do you guys have
any experience with this? I think it was handling normal traffic
levels before that (no sudden spike apparent in the switch MRTG graphs).

So, anyone know what caused that, or how it could be prevented?

Sincerely,
Jason







eth0: Memory squeeze, deferring packet

2001-10-06 Thread Jason Lim

Hi!

Do y'all know what this means:

"eth0: Memory squeeze, deferring packet"?

We get that on one of our boxes every so often, and it is annoying
because it then loses all connectivity to the net and has to be
physically rebooted.

eth0: RealTek RTL8139 Fast Ethernet at 0xe800, IRQ 10, 00:50:fc:28:46:90.

So... does anyone know what's going on, or has anyone experienced this before?

Any help would be greatly appreciated.

Sincerely,
Jason







Re: eth0: Memory squeeze, deferring packet

2001-10-06 Thread Jason Lim

Hi,

Thanks for the reply.

Strange... but what could be causing it? The box has 512 MB of RAM and
still has almost 20 MB free.

Plus there is no huge amount of traffic going through it.

I'm just trying to figure out what triggered this :-/

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, October 07, 2001 2:06 AM
Subject: Re: eth0: Memory squeeze, deferring packet


> On Sat, 6 Oct 2001 19:12, Jason Lim wrote:
> > Memory squeeze, deferring packet
>
> The dev_alloc_skb() function in the kernel is failing due to lack of
memory.
>
> Presumably if you change /proc/sys/vm/freepages to have some larger
numbers
> it will stop this happening.
>
> --
> http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
> http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
> http://www.coker.com.au/projects.html Projects I am working on
> http://www.coker.com.au/~russell/ My home page
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>
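For reference, the tunable mentioned above is the 2.2.x three-value threshold file; raising the thresholds leaves the NIC driver more headroom for its atomic buffer allocations. The numbers below are illustrative only, not recommended values:

```shell
# 2.2.x kernels expose min, low and high free-page thresholds here
cat /proc/sys/vm/freepages            # e.g. "64 128 192" (assumed default)
# Raise all three so dev_alloc_skb() fails less often under bursts
echo "256 512 768" > /proc/sys/vm/freepages
```

Note this file disappeared in 2.4, which replaced it with different VM tunables.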






Re: eth0: Memory squeeze, deferring packet

2001-10-06 Thread Jason Lim

Well... I doubt it is (1), because the system was NOT under great load,
doesn't have software RAID, and only uses one 100baseT Realtek card.

So... I guess it is (2). The kernel is 2.2.19... I thought it would be
pretty much stable by now! Damn.

How is kernel 2.4.x now? Do you think it is safe enough for production
use? Any huge performance increase?

Sincerely,
Jason

- Original Message -
From: "Russell Coker" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, October 07, 2001 7:35 AM
Subject: Re: eth0: Memory squeeze, deferring packet


> On Sun, 7 Oct 2001 02:41, Jason Lim wrote:
> > Strange... but wha could be causing it? The box has 512M ram and has
> > almost 20M free... it is that empty and free.
> >
> > Plus there is no huge amount of traffic going through it.
> >
> > I'm just trying to figure out what triggered this :-/
>
> These types of things are caused by one of two situations:
> 1)  Some situation of extremely high system load (eg a combination of
intense
> file system access over software RAID and routing between several
100baseT
> interfaces).
> 2)  Kernel bug.
>
> If it's the latter than having 2G of RAM isn't necessarily going to save
> you...
>
> --
> http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
> http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
> http://www.coker.com.au/projects.html Projects I am working on
> http://www.coker.com.au/~russell/ My home page
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Help... SSH CRC-32 compensation attack detector vulnerability

2001-12-02 Thread Jason Lim

Hi,

sigh... yes... some of our servers have been hit with the "SSH CRC-32
compensation attack detector vulnerability" attack.

some servers have been compromised, and the usual rootkit stuff (install
root shells in /etc/inetd.conf, bogus syslogd, haxored ps, etc.).

What is an easy way to locate binaries that are different from the ones
provided in the original debs?

And is there any other relatively easier way of cleaning up a system that
has had a rootkit installed?

We've done a netstat -a and removed/killed all strange processes, and
cleaned inetd.conf as much as we can, but some of the programs in
inetd.conf have themselves also been tampered with (eg. in.telnetd).

Please help... I have a bad feeling the crackers are coming back real soon
to really finish off the job... so any help at this time in removing all
their crap would be greatly appreciated.

Sincerely,
Jason







Re: Help... SSH CRC-32 compensation attack detector vulnerability

2001-12-02 Thread Jason Lim

The patch is to use the "ssh" package in unstable... and I think in the
security-updates.

We were using ssh-nonfree, and that is vulnerable. I think they released a
patch and the debs have since been updated, but I'd be wary of staying
with ssh-nonfree now that a hole has been found in it.

Damn... now for the messy clean-up process left after numerous rootkits
have been installed. We're just trying to cp -a all the files from our
backups back into their right places. That should solve things.

If anyone has better ideas, please let me know.

Sincerely,
Jason

- Original Message -
From: "Keith Elder" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, December 03, 2001 1:11 PM
Subject: Re: Help... SSH CRC-32 compensation attack detector vulnerability


> What is the patch to plug this hole?
>
> K.
>
> * Jason Lim ([EMAIL PROTECTED]) wrote:
> > Reply-To: "Jason Lim" <[EMAIL PROTECTED]>
> > From: "Jason Lim" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Subject: Help... SSH CRC-32 compensation attack detector vulnerability
> > Date: Mon, 3 Dec 2001 09:33:07 +1100
> > X-Mailer: Microsoft Outlook Express 6.00.2600.
> >
> > Hi,
> >
> > sigh... yes... some of our servers have been hit with the "SSH CRC-32
> > compensation attack detector vulnerability" attack.
> >
> > some servers have been compromised, and the usual rootkit stuff
(install
> > root shells in /etc/inetd.conf, bogus syslogd, haxored ps, etc.).
> >
> > What is an easy way to locate binaries that are different from the
ones
> > provided in the original debs?
> >
> > And is there any other relatively easier way of cleaning up a system
that
> > has had a rootkit installed?
> >
> > We've done a netstat -a and removed/killed all strange processes, and
> > cleaned inetd.conf as much as we can, but some of the programs in
> > inetd.conf have themselves also been tampered with (eg. in.telnetd).
> >
> > Please help... I have a bad feeling the crackers are coming back real
soon
> > to really finish off the job... so any help at this time in removing
all
> > their crap would be greatly appreciated.
> >
> > Sincerely,
> > Jason
> >
> >
> >
> > --
> > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>
> ###
>   Keith Elder
>Email: [EMAIL PROTECTED]
> Phone: 1-734-507-1438
>  Text Messaging (145 characters): [EMAIL PROTECTED]
> Web: http://www.zorka.com (Howto's, News, and hosting!)
>
>  "With enough memory and hard drive space
>anything in life is psosible!"
> ###
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Re: isp

2001-12-06 Thread Jason Lim

Uh... this is interesting...

As far as I know, bulk-friendly hosting and the like goes for around
$300-400 per month at a minimum... with many charging a lot more.

So not only are you trying to scam people with pyramid schemes and such,
you're cheap too.

Oh well.

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, December 07, 2001 3:18 AM
Subject: isp


> i need a bulk friendli isp for about $30.00 a month
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Strange apache behaviour?

2001-12-06 Thread Jason Lim

Hi all,

Do you know how to change the permissions of the log files apache
generates?

-rw-r-1 www-data www-data  1372461 Dec  7 13:04 apache-access.log
-rw-r-1 www-data www-data   740269 Dec  2 06:21
apache-access.log.0
-rw-r-1 www-data www-data44414 Nov 25 05:52
apache-access.log.1.gz
-rw-rw-r--1 www-data www-data   167114 Sep 23 06:10
apache-access.log.10.gz
-rw-rw-r--1 www-data www-data13069 Sep 16 06:06
apache-access.log.11.gz
-rw-rw-r--1 www-data www-data14357 Sep  9 06:04
apache-access.log.12.gz
-rw-rw-r--1 www-data www-data21209 Sep  2 06:24
apache-access.log.13.gz
-rw-rw-r--1 www-data www-data 5979 Nov 19  2000
apache-access.log.14.gz
-rw-rw-r--1 www-data www-data36771 Nov 18 06:23
apache-access.log.2.gz

It USED to be readable by all, but now the permissions have changed (which
in my case breaks the webalizer processes run by users).

Having a look at the changelog...

apache (1.3.22-1) unstable; urgency=low
  * Default ownership of logfiles is root/adm, perms 640 (closes:
#112675).

That's all nice and good... but how do I get it to 644? I looked and can't
seem to find it. The closest thing I could find was
/etc/apache/cron.conf, but that only sets the uid/gid, not the file
permissions of the logfiles.

Any ideas?

TIA.

Sincerely,
Jason
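P.S. Absent a supported config knob, one workaround is to chmod the logs right after rotation from cron. A sketch only — the paths are assumptions about your layout, and note it deliberately reopens the world-readable hole that the Debian change in 1.3.22 was closing:

```shell
# Hypothetical /etc/cron.daily hook, ordered to run after apache's own
# log rotation, restoring world-readable logs for per-user webalizer runs.
chmod 644 /var/log/apache/*.log /var/log/apache/*.log.* 2>/dev/null
```

A tidier alternative would be adding the webalizer-running users to the group that owns the logs instead of loosening the mode.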







Re: Strange apache behaviour?

2001-12-07 Thread Jason Lim

Yes, for security, and it is easier to see what processes are taking too
long to run.

But now that apache saves the log files so that users can't read them...
it doesn't work anymore :-/

Any ideas on how to make the logfiles a+r?

Sincerely,
Jason

- Original Message -
From: "James" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, December 07, 2001 7:41 PM
Subject: Re: Strange apache behaviour?


>
> It is usual to run webalizer as a user? I have never even thought of
> doing that. Is there any particular reason? (security?)
>
> On Fri, 7 Dec 2001, Jason Lim wrote:
>
> > Hi all,
> >
> > Do you know how to change the permissions of the log files apache
> > generates?
> >
> > -rw-r-1 www-data www-data  1372461 Dec  7 13:04
apache-access.log
> > -rw-r-1 www-data www-data   740269 Dec  2 06:21
> > apache-access.log.0
> > -rw-r-1 www-data www-data44414 Nov 25 05:52
> > apache-access.log.1.gz
> > -rw-rw-r--1 www-data www-data   167114 Sep 23 06:10
> > apache-access.log.10.gz
> > -rw-rw-r--1 www-data www-data13069 Sep 16 06:06
> > apache-access.log.11.gz
> > -rw-rw-r--1 www-data www-data14357 Sep  9 06:04
> > apache-access.log.12.gz
> > -rw-rw-r--1 www-data www-data21209 Sep  2 06:24
> > apache-access.log.13.gz
> > -rw-rw-r--1 www-data www-data 5979 Nov 19  2000
> > apache-access.log.14.gz
> > -rw-rw-r--1 www-data www-data36771 Nov 18 06:23
> > apache-access.log.2.gz
> >
> > It USED to be readable by all, now the persmissions have changed
(which in
> > my case screws up the webalizer processes run by users).
> >
> > Having a look at the changelog...
> >
> > apache (1.3.22-1) unstable; urgency=low
> >   * Default ownership of logfiles is root/adm, perms 640 (closes:
> > #112675).
> >
> > Thats all nice a good... but how to I get it 644? I looked and can't
> > appear to find it. Closest thing I could find was in
> > /etc/apache/cron.conf, but that only sets the uid/gid, not the file
> > permissions of the logfiles.
> >
> > Any ideas?
> >
> > TIA.
> >
> > Sincerely,
> > Jason
> >
> >
> >
> >
> >
>
>






Re: Strange apache behaviour?

2001-12-07 Thread Jason Lim

That's not very good security-wise: if webalizer runs as www-data and a
user ever finds a way to poison the log files, webalizer will process them
as www-data, and could possibly fool around with apache too (because they
now run as the same user).

A far better way (and much more direct) would be to change apache's log
file permissions BACK to what they previously were.

I think if no one knows the answer I'll have to ask netgod himself... (I
think he is still the package maintainer?)

Sincerely,
Jason

- Original Message -
From: "Denis A. Kulgeyko" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Sent: Friday, December 07, 2001 9:10 PM
Subject: Re: Strange apache behaviour?


>  Hello !
>
> > Do you know how to change the permissions of the log files apache
> > generates?
> >
> > -rw-r-1 www-data www-data  1372461 Dec  7 13:04
apache-access.log
> > -rw-r-1 www-data www-data   740269 Dec  2 06:21
> > apache-access.log.0
> > -rw-r-1 www-data www-data44414 Nov 25 05:52
> > apache-access.log.1.gz
> > -rw-rw-r--1 www-data www-data   167114 Sep 23 06:10
> > apache-access.log.10.gz
> > -rw-rw-r--1 www-data www-data13069 Sep 16 06:06
> > apache-access.log.11.gz
> > -rw-rw-r--1 www-data www-data14357 Sep  9 06:04
> > apache-access.log.12.gz
> > -rw-rw-r--1 www-data www-data21209 Sep  2 06:24
> > apache-access.log.13.gz
> > -rw-rw-r--1 www-data www-data 5979 Nov 19  2000
> > apache-access.log.14.gz
> > -rw-rw-r--1 www-data www-data36771 Nov 18 06:23
> > apache-access.log.2.gz
> >
> > It USED to be readable by all, now the persmissions have changed
(which in
> > my case screws up the webalizer processes run by users).
> >
> > Having a look at the changelog...
> >
> > apache (1.3.22-1) unstable; urgency=low
> >   * Default ownership of logfiles is root/adm, perms 640 (closes:
> > #112675).
> >
> > Thats all nice a good... but how to I get it 644? I looked and can't
> > appear to find it. Closest thing I could find was in
> > /etc/apache/cron.conf, but that only sets the uid/gid, not the file
> > permissions of the logfiles.
> >
> > Any ideas?
>
> Run webalizer with permissions of group www-data and set appropriate
umask to
> user www-data (may be to loogrotate daemon too).
>
> --
> With Best Regards,
> Denis A. Kulgeyko
> DK666-UANIC
> e-mail: [EMAIL PROTECTED]
> ICQ: 81607525
> SMS: [EMAIL PROTECTED]
> --
> UNIXes ... they are VERY friendly.
> But .. they chooses their friends VERY carefully ... :)
> ^]:wq!
>






Re: Strange apache behaviour?

2001-12-08 Thread Jason Lim

Anyone figured out my apache problem (log file permissions)?

I still haven't figured this one out yet.

TIA,

Jas

- Original Message -
From: "Jason Lim" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, December 08, 2001 1:52 AM
Subject: Re: Strange apache behaviour?


> Thats not very good security-wise to run webalizer as www-data, because
if
> a user ever finds a way to poison the log files, then webalizer will run
> them as www-data, and possibly be able to fool around with apache too
> (because they now run as the same user).
>
> A far better way (and much more direct) would be to have a way to change
> apache's log files BACK to the previous permissions.
>
> I think if no one knows the answer i'll have to ask netgod himself... (i
> think he is still the package maintainer?)
>
> Sincerely,
> Jason
>
> ----- Original Message -
> From: "Denis A. Kulgeyko" <[EMAIL PROTECTED]>
> To: "Jason Lim" <[EMAIL PROTECTED]>
> Sent: Friday, December 07, 2001 9:10 PM
> Subject: Re: Strange apache behaviour?
>
>
> >  Hello !
> >
> > > Do you know how to change the permissions of the log files apache
> > > generates?
> > >
> > > -rw-r-1 www-data www-data  1372461 Dec  7 13:04
> apache-access.log
> > > -rw-r-1 www-data www-data   740269 Dec  2 06:21
> > > apache-access.log.0
> > > -rw-r-1 www-data www-data44414 Nov 25 05:52
> > > apache-access.log.1.gz
> > > -rw-rw-r--1 www-data www-data   167114 Sep 23 06:10
> > > apache-access.log.10.gz
> > > -rw-rw-r--1 www-data www-data13069 Sep 16 06:06
> > > apache-access.log.11.gz
> > > -rw-rw-r--1 www-data www-data14357 Sep  9 06:04
> > > apache-access.log.12.gz
> > > -rw-rw-r--1 www-data www-data21209 Sep  2 06:24
> > > apache-access.log.13.gz
> > > -rw-rw-r--1 www-data www-data 5979 Nov 19  2000
> > > apache-access.log.14.gz
> > > -rw-rw-r--1 www-data www-data36771 Nov 18 06:23
> > > apache-access.log.2.gz
> > >
> > > It USED to be readable by all, now the persmissions have changed
> (which in
> > > my case screws up the webalizer processes run by users).
> > >
> > > Having a look at the changelog...
> > >
> > > apache (1.3.22-1) unstable; urgency=low
> > >   * Default ownership of logfiles is root/adm, perms 640 (closes:
> > > #112675).
> > >
> > > Thats all nice a good... but how to I get it 644? I looked and can't
> > > appear to find it. Closest thing I could find was in
> > > /etc/apache/cron.conf, but that only sets the uid/gid, not the file
> > > permissions of the logfiles.
> > >
> > > Any ideas?
> >
> > Run webalizer with permissions of group www-data and set appropriate
> umask to
> > user www-data (may be to loogrotate daemon too).
> >
> > --
> > With Best Regards,
> > Denis A. Kulgeyko
> > DK666-UANIC
> > e-mail: [EMAIL PROTECTED]
> > ICQ: 81607525
> > SMS: [EMAIL PROTECTED]
> > --
> > UNIXes ... they are VERY friendly.
> > But .. they chooses their friends VERY carefully ... :)
> > ^]:wq!
> >
>
>
>
>






Re: Strange apache behaviour? (solved)

2001-12-08 Thread Jason Lim

Thanks...

The lines to change are:

do
    if [ -f $LOG ]
    then
        if [ "$APACHE_CHOWN_LOGFILES" = "1" ]
        then
            savelog -c $APACHE_OLD_LOGS -m 640 -u $USR -g $GRP \
                $LOG > /dev/null
        else
            savelog -c $APACHE_OLD_LOGS -m 640 -u root -g adm \
                $LOG > /dev/null
        fi
    fi
done

changing 640 to 644. This should work... will wait a few days to make sure
there are no side-effects to this.

Perhaps Johnie could make this an optional setting in
/etc/apache/cron.conf or something like that...?
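In the meantime, since /etc/cron.daily/apache is liable to be overwritten by package upgrades, another option is a small follow-up cron script that simply re-opens the rotated logs. This is only a sketch: the log directory and filename pattern are assumptions based on the listing above.

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/zz-apache-log-perms: the name sorts after
# the packaged "apache" cron script, so it runs once rotation is done
# and re-adds world read (640 -> 644) to the rotated logs.
fix_log_perms() {
    dir=$1
    for f in "$dir"/apache-access.log*; do
        [ -f "$f" ] || continue   # skip if the glob matched nothing
        chmod 644 "$f"
    done
}

fix_log_perms /var/log/apache     # assumed log directory
```

Keeping the workaround in its own script means an apache upgrade can replace /etc/cron.daily/apache without clobbering it.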

Sincerely,
Jas

- Original Message -
From: "Peter Billson" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, December 09, 2001 9:31 AM
Subject: Re: Strange apache behaviour?


> Jason,
>   Apaches log file ownership and permissions are set when they rotate in
> /etc/cron.daily/apache (about line 90 or so). As pointed out there are
> security issues to worry about so be careful.
>
> Pete
> --
> http://www.elbnet.com
> ELB Internet Services, Inc.
> Web Design, Computer Consulting, Internet Hosting
>
>
> Jason Lim wrote:
> >
> > Anyone figured out my apache problem (log file permissions)?
> >
> > I still haven't figured this one out yet.
> >
> > TIA,
> >
> > Jas
> >
> > - Original Message -
> > From: "Jason Lim" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Saturday, December 08, 2001 1:52 AM
> > Subject: Re: Strange apache behaviour?
> >
> > > Thats not very good security-wise to run webalizer as www-data,
because
> > if
> > > a user ever finds a way to poison the log files, then webalizer will
run
> > > them as www-data, and possibly be able to fool around with apache
too
> > > (because they now run as the same user).
> > >
> > > A far better way (and much more direct) would be to have a way to
change
> > > apache's log files BACK to the previous permissions.
> > >
> > > I think if no one knows the answer i'll have to ask netgod
himself... (i
> > > think he is still the package maintainer?)
> > >
> > > Sincerely,
> > > Jason
> > >
> > > - Original Message -
> > > From: "Denis A. Kulgeyko" <[EMAIL PROTECTED]>
> > > To: "Jason Lim" <[EMAIL PROTECTED]>
> > > Sent: Friday, December 07, 2001 9:10 PM
> > > Subject: Re: Strange apache behaviour?
> > >
> > >
> > > >  Hello !
> > > >
> > > > > Do you know how to change the permissions of the log files
apache
> > > > > generates?
> > > > >
> > > > > -rw-r-1 www-data www-data  1372461 Dec  7 13:04
> > > apache-access.log
> > > > > -rw-r-1 www-data www-data   740269 Dec  2 06:21
> > > > > apache-access.log.0
> > > > > -rw-r-1 www-data www-data44414 Nov 25 05:52
> > > > > apache-access.log.1.gz
> > > > > -rw-rw-r--1 www-data www-data   167114 Sep 23 06:10
> > > > > apache-access.log.10.gz
> > > > > -rw-rw-r--1 www-data www-data13069 Sep 16 06:06
> > > > > apache-access.log.11.gz
> > > > > -rw-rw-r--1 www-data www-data14357 Sep  9 06:04
> > > > > apache-access.log.12.gz
> > > > > -rw-rw-r--1 www-data www-data21209 Sep  2 06:24
> > > > > apache-access.log.13.gz
> > > > > -rw-rw-r--1 www-data www-data 5979 Nov 19  2000
> > > > > apache-access.log.14.gz
> > > > > -rw-rw-r--1 www-data www-data36771 Nov 18 06:23
> > > > > apache-access.log.2.gz
> > > > >
> > > > > It USED to be readable by all, now the persmissions have changed
> > > (which in
> > > > > my case screws up the webalizer processes run by users).
> > > > >
> > > > > Having a look at the changelog...
> > > > >
> > > > > apache (1.3.22-1) unstable; urgency=low
> > > > >   * Default ownership of logfiles is root/adm, perms 640
(closes:
> > > > > #112675).
> > > > >
> > > > > Thats all nice a good... but how to I get it 644? I looked and
can't
> > > > > appear to find it. Closest thing I could find was in
> > > > > /etc/apache/cron.conf, but that only sets the

Re: Strange apache behaviour? (solved)

2001-12-08 Thread Jason Lim

I know about that option...
but it doesn't CHMOD... it only chowns.

- Original Message -
From: "Bob Billson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, December 09, 2001 11:58 AM
Subject: Re: Strange apache behaviour? (solved)


> On Sun, Dec 09, 2001 at 08:05:17AM +1100, Jason Lim wrote:
> > Perhaps Johnie could make this an optional setting in
> > /etc/apache/cron.conf or something like that...?
>
> There is:
>
> # Whether to chown logfiles to the user/group Apache runs as.
> APACHE_CHOWN_LOGFILES=0
>   ^^ This should be 0, *not* 1, which I think is Debian's default.
>
> This is used by /etc/cron.daily/apache.  The server logs should be
> root.adm or root.root with 640 permissions.  Having the user the server
> runs as hold write permissions to the logs is asking for trouble.  Nor
> should the world normally be able to look at them.
>
> bob
> --
>   bob billsonemail: [EMAIL PROTECTED]  ham: kc2wz   /)
> [EMAIL PROTECTED] beekeeper -8|||}
>   "Níl aon tinteán mar do thinteán féin." --DorothyLinux geek   \)
>
>
>
>






Re: Strange apache behaviour? (solved)

2001-12-09 Thread Jason Lim

Yep, I already replied on that and posted the correct lines to change.

All I was saying was that since this was a recent change in the latest
apache builds, it could be made an optional setting in
/etc/apache/cron.conf, to allow people to stay with the previous way of
doing things. I don't think /etc/cron.daily/apache is marked as a
conffile, so the next time apache is updated/upgraded the changes will be
overwritten and will yet again need to be made manually. See the problem?


- Original Message -
From: "Bob Billson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, December 10, 2001 2:02 AM
Subject: Re: Strange apache behaviour? (solved)


> On Sun, Dec 09, 2001 at 01:16:03PM +1100, Jason Lim wrote:
> > I know about that option...
> > but it doesn't CHMOD... it only chowns.
>
> Right.  And look at /etc/cron.daily/apache.  You'll see where
> owner/permissions are set depending on its value, as Pete said.  Edit to
> suit your situation.
>
>   bob
> --
>   bob billsonemail: [EMAIL PROTECTED]  ham: kc2wz   /)
> [EMAIL PROTECTED] beekeeper -8|||}
>   "Níl aon tinteán mar do thinteán féin." --DorothyLinux geek   \)
>
>
>
>






Re: Problems with Duron Procesor

2001-12-15 Thread Jason Lim

Let me confirm to you that AMD Durons from 700-1.1G work perfectly with
Kernel 2.2.19 and 2.4.16.

We have over 10 such boxes running 2.4.16 and they are all rock solid.
Faster than the PIII and P4 boxes.

So whatever is going wrong, it is likely your motherboard, heatsink, or
something similar. Just a reminder... Durons run EXTREMELY hot compared to
the PIII... we've fried a few AMD CPUs because the CPU fan was either
running too slowly or had stopped. So use only high-quality ball-bearing
fans on them.

As for motherboards, we use only top quality ones from either ASUS or
Magic-Pro. Avoid using cheapo ones.

Try testing the RAM using "memtest" or something similar. That might help you.

Anyway, thats my 2 cents. Hope that helps you isolate the problem.

Sincerely,
Jason

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, December 16, 2001 10:37 AM
Subject: Problems with Duron Procesor


> Hello!
>
> We bought a Clone with a 950k, AMD-Duron Processor, Motherboard by
> Biostar to build an Intranet Server out of it.
>
> When installing a new Kernel (2.4.7), compiled for this processortype
> the machine stopped to work, because of severe Memory fault problems,
> reducing the access "speed" from 133 Mhz to 100 Mhz reduces the
> problem significatively
>
> Using a plain Pentium kernel we got no memory faults anymore.
>
> Is this a Motherboard/Memory problem, or is there any known problem
> with the AMD-Duron optimization?
>
> gcc-version: 2.95.4 20010902 (Debian prerelease)
>
> Thanks,
>
> Jorge-León
>
>






Ever used mod_throttle in Debian Apache?

2001-12-17 Thread Jason Lim

Did you get it working? I never could.

Always spits the dummy. I already filed a bug against it.

But if you got it working, let me know. I mean out of the package... not
recompiling it yourself.

TIA

Sincerely,
Jason







MicroATX Motherboard with 1.5-2GB Ram?

2001-12-17 Thread Jason Lim

Hi,

Does anyone know of a good MicroATX motherboard that supports 1.5-2GB of
RAM?

MicroATX motherboards make nice servers (small form factor, and they
support nearly everything conventional ATX motherboards do), but they SEEM
to usually have only 2 DIMM slots (512M x 2 = 1024M max).

Have you ever come across one that has 3 DIMM slots... or some other way
to get to 1.5-2GB of RAM?

Sincerely,
Jason







Re: MicroATX Motherboard with 1.5-2GB Ram?

2001-12-18 Thread Jason Lim

Hi,

Anyone found a MB like this?

TIA.

 Jason

- Original Message -
From: "Jason Lim" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, December 18, 2001 12:42 AM
Subject: MicroATX Motherboard with 1.5-2GB Ram?


> Hi,
>
> Does anyone know of a good MicroATX motherboard that supports 1.5-2GB of
> RAM?
>
> MicroATX motherboards make nice servers (small form factor, and support
> nearly everything conventional ATX motherboards have), but they SEEM to
> usually only have 2 DIMM slots (512Mx2=1024M max).
>
> Have you over come across one that has 3 DIMM slots... or some way to
get
> to 1.5-2Gb RAM?
>
> Sincerely,
> Jason
>
>






Re: Ever used mod_throttle in Debian Apache?

2001-12-18 Thread Jason Lim

I've confirmed that mod_throttle fails to work with 1.3.19 all the way to
1.3.22 (nearly the latest?).

Anyone EVER get this working, or am I the ONLY person around that actually
uses mod_throttle (or would LIKE to ;-)   ).

- Original Message -
From: "Jason Lim" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, December 18, 2001 12:40 AM
Subject: Ever used mod_throttle in Debian Aapache?


> Did you get it working? I never could.
>
> Always spits the dummy. I already filed a bug with it.
>
> But if you got it working, let me know. I mean out of the package... not
> recompiling it yourself.
>
> TIA
>
> Sincerely,
> Jason
>
>






Re: MicroATX Motherboard with 1.5-2GB Ram?

2001-12-19 Thread Jason Lim

Hi Nick,

Unfortunately the Tyan boards in MicroATX don't seem to be available with
current chipsets.

Not sure why... :-/

This is a disappointment... I'm sure there must be SOME demand for 1.5GB
of RAM in these. I know that many chipsets do support 1.5G and 2G, but the
motherboard manufacturers only put on 2 slots, so you get 512M x 2 only.

Anyone find anything?

Sincerely,
Jason

- Original Message -
From: "Nicolas Bouthors" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Sent: Wednesday, December 19, 2001 6:28 PM
Subject: Re: MicroATX Motherboard with 1.5-2GB Ram?


> > MicroATX motherboards make nice servers (small form factor, and
support
> > nearly everything conventional ATX motherboards have), but they SEEM
to
> > usually only have 2 DIMM slots (512Mx2=1024M max).
>
> Totaly agree !
>
> > Have you over come across one that has 3 DIMM slots... or some way to
get
> > to 1.5-2Gb RAM?
>
> Check out Tyan boards (www.tyan.com) we use S2518 here and it's great
(this
> one is not MicroATX, but I do think they have some Micro ATX with 1.5Gb
> capacity
>
> Yours,
> Nico
>
>






Re: MicroATX Motherboard with 1.5-2GB Ram?

2001-12-21 Thread Jason Lim


- Original Message -
From: "Ben Aitchison" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, December 21, 2001 7:40 AM
Subject: Re: MicroATX Motherboard with 1.5-2GB Ram?


> On Thu, Dec 20, 2001 at 06:27:11AM +0800, Jason Lim wrote:
> > This is a disappointment... i'm sure there must be SOME demand for
1.5G
> > Ram in these. I know that many chipsets do support 1.5G and 2G, but
the
> > motherboard manufacturers only put on 2 slots, so you have 512M * 2
only.
>
> You could always use 1 gig sticks.  (x2)
>
> Ben.
>


That would only work if the board supported 1G sticks. As far as I can
see, most only support 512M sticks, meaning 512M x 2 max. One of the few
integrated chipsets I know of that supports more RAM is the SiS 7xx
series, which supports 1.5G across 3 sticks. Obviously they intended (and
thus the limit) for you to use 512M x 3 to achieve that. Since the
manufacturers only include 2 slots... well, you get the picture :-/






Re: System locks up with RealTek 8139 and kernel 2.2.20

2001-12-27 Thread Jason Lim

Well, we've had RTL8139 cards in servers up for about 60-70 days with no
problem. Don't know about the outgoing/ping issue as the servers are
always in use, but they appear reliable.

Sincerely,
Jason

- Original Message -
From: "John Gonzalez, Tularosa Communications" <[EMAIL PROTECTED]>
To: "Olivier Poitrey" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "Antonio Rodriguez"
<[EMAIL PROTECTED]>
Sent: Friday, December 28, 2001 12:52 AM
Subject: Re: System locks up with RealTek 8139 and kernel 2.2.20


> What causes the lockups? How often? I have an RTL 8139 in use.
>
> However, I have noticed something strange. I must keep "outbound"
traffic
> flowing or they forget their ARP table for some strange reason. I keep
an
> outbound ping running... If i dont, and there is no network activity on
> the box, it is unresponsive via network, but hopping on the console and
> starting another ping session brings it back to life. (I also have to do
> this on my machines with older RTL cards, using the ne2k-pci driver)
>
> So far, uptime of box and kernel ver with RTL cards is:
>
> Linux x 2.2.19 #22 Wed Jun 20 18:12:16 PDT 2001 i686 unknown
>   9:42am  up 13 days,  6:57,  6 users,  load average: 0.00, 0.00, 0.00
>
>
>
>
> --
> John Gonzalez, Tularosa Communications | (505) 439-0200 work
> JG6416, ASN 11711, [EMAIL PROTECTED]  | (505) 443-1228 fax
>   http://www.tularosa.net
>
> On Thu, 27 Dec 2001, Olivier Poitrey wrote:
>
> > - Original Message -
> > From: "Antonio Rodriguez" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Friday, December 21, 2001 3:54 PM
> > Subject: System locks up with RealTek 8139 and kernel 2.2.20
> >
> > > 2. move to 2.4 and hope this solves the problem. I have already gone
> > > from 2.2.17 to 2.2.20 to try to fix it, but maybe the move to 2.4
will
> > > be more significant.
> >
> > You'll have the same problem with 2.4.x, I have tested it for you :/
>
>
>
>






Best way to duplicate HDs

2001-12-31 Thread Jason Lim

Hi all,

What do you think would be the best way to duplicate a HD to another
(similar sized) HD?

I'm thinking that a live RAID solution isn't the best option, as (for
example) if crackers got in and fiddled with the system, all the HDs would
end up having the same fiddled files.

If the HD is duplicated every 12 hours or 24 hours, then there would
always be a working copy, so if something is detected as being altered, we
could always swap the disks around and get a live working system up and
running almost instantly (unless we detect the problem more than 24 hours
later, and then it would be too late since the HDs already synced).

So... what do you think the best way would be to duplicate a HD on a live
working system (without bringing it down or anything like that).
Performance can drop for a while (maybe do this at 5am in the morning),
but the system must stay up and operational at all times.

Maybe dd... or cp -a /drive1/* /drive2/... or something?

Any suggestions would be greatly appreciated.

TIA.

Sincerely,
Jason







Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim



> On Tue, 1 Jan 2002 07:28, Jason Lim wrote:
> > What do you think would be the best way to duplicate a HD to another
> > (similar sized) HD?
> >
> > I'm thinking that a live RAID solution isn't the best option, as (for
> > example) if crackers got in and fiddled with the system, all the HDs
would
> > end up having the same fiddled files.
>
> If crackers get in then anything which involves online storage is
> (potentially) gone.

Right now one of the things we are testing is:
1) mount up the "backup" hard disk
2) cp -a /home/* /mnt/backup/home/
3) umount "backup" hard disk
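For what it's worth, those three steps might be scripted roughly like this; /dev/hdc1 and /mnt/backup are made-up names, and copying "/home/." rather than "/home/*" also picks up dot-files:

```shell
#!/bin/sh
# Sketch of the mount / copy / umount cycle above, e.g. run from cron
# at 5am. /dev/hdc1 and /mnt/backup are hypothetical names.
backup_home() {
    mount /dev/hdc1 /mnt/backup || return 1
    # -a preserves permissions, ownership, timestamps and symlinks;
    # "/home/." (not "/home/*") also copies dot-files.
    cp -a /home/. /mnt/backup/home/
    umount /mnt/backup
}
# backup_home    # uncomment in the real cron job
```

Keeping the backup disk unmounted between runs also makes it that much harder for an intruder to quietly alter both copies.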

The way we do it right now is:
1) a backup server with a few 60Gb HDs
2) use "dump" to cp the partitions over to the backup server
3) use "export" to restore stuff
(not very elegant... which is why we're trying to set up a better way)
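As an aside on step 2, dump can also stream straight to the backup server over a pipe rather than via intermediate files. "backuphost" and the paths below are made-up names, and this assumes an ext2 filesystem that dump understands:

```shell
#!/bin/sh
# Hypothetical one-liner: level-0 dump of /home streamed over ssh and
# unpacked on the backup server with restore. Requires root and ext2.
nightly_dump() {
    dump -0 -f - /home | ssh backuphost 'cd /backups/host1 && restore -r -f -'
}
```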

Unless a cracker spends quite a bit of time going through everything, they
would most probably miss this part. True... if they do spend enough time
going through everything, then as you said, it is potentially gone.

> > If the HD is duplicated every 12 hours or 24 hours, then there would
> > always be a working copy, so if something is detected as being
altered, we
> > could always swap the disks around and get a live working system up
and
> > running almost instantly (unless we detect the problem more than 24
hours
> > later, and then it would be too late since the HDs already synced).
>
> The most common problem in this regard I've encountered when running
ISPs
> (see at many sites with all distributions of Linux, Solaris, and AIX) is
when
> someone makes a change which results in a non-bootable system.  Then
several
> months later the machine is rebooted and no-one can remember what they
> changed.

Haven't had that yet... because every time we make a massive system change
that might upset the "rebootability" of the server (eg. fiddling with
lilo, partition settings, etc.) we do a real reboot. This might not be
practical on a system that needs 99.% uptime, but it ensures things will
work in future.

> Better off having an online RAID for protection against hardware
failures and
> secure everything as much as possible to alleviate the problem of the
> machines being cracked.
>
> > So... what do you think the best way would be to duplicate a HD on a
live
> > working system (without bringing it down or anything like that).
> > Performance can drop for a while (maybe do this at 5am in the
morning),
> > but the system must stay up and operational at all times.
>
> LVM.  Create a snapshot of the LV and then use dd to copy it.

Eep... setting up LVM for the SOLE purpose of doing this mirroring? Seems
a bit like overkill and would add an extra level of complexity :-/

> > Maybe dd... or cp -a /drive1/* /drive2/... or something?
>
> Doing that on the device for a mounted file system will at best add a
fsck to
> your recovery process, and at worst result in a file system so badly
> corrupted that an mkfs is needed.  LVM solves this, but adds it's own
set of
> problems.

> I think that probably your whole plan here is misguided.  Please tell us
> exactly what you are trying to protect against and we can probably give
> better advice.
>

I know of a few hardware solutions that do something like this, but I
would like to do it in software. They claim to perform a "mirror" of one
HD to another HD while the system is live and in use. I have no idea how
they do this without corruption of some kind (as you mentioned above,
doing dd on a live HD will probably cause errors, especially if the live
HD is in use). For example, http://www.arcoide.com/ . To quote the
function we're looking at: "the DupliDisk2 automatically switches to the
remaining drive and alerts the user that a drive has failed. Then,
depending on the model, the user can hot-swap out the failed drive and
re-mirror in the background." So it "re-mirrors" in the background... how
do they perform that reliably?

Sincerely,
Jason







Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > For example, http://www.arcoide.com/ . To quote the function we're
looking
> > at " the DupliDisk2 automatically switches to the remaining drive and
> > alerts the user that a drive has failed. Then, depending on the model,
the
> > user can hot-swap out the failed drive and re-mirror in the
background.".
> > So it "re-mirrors" in the background... how do they perform that
> > reliabily?
>
> That's just RAID 1, which has done it since the dawn of time [1]. You
can
> achieve the same thing with Linux software RAID; you just pull out one
of
> the drives and you have half a mirrored RAID set. It's pretty neat to
watch
> /proc/mdstat as your drives are resyncing, too. ;)
>
> The advantage you get with this hardware is the hot-swap rack... and
that's
> about it.
>

Except that I've pointed out already that we're specifically NOT looking
at a live RAID solution. This is a backup drive that is supposed to be
synced every 12 or 24 hours.

The idea being that if there is a virus, a cracker, or a hardware
malfunction, the backup drives can be immediately pulled out, inserted
into a backup computer, and switched on to provide immediate restoration
of services (with data up to 12 hours old, but better than up-to-date
information that may be corrupted or "cracked" versions of programs).

Sincerely,
Jason






Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > I know of a few hardware solutions that do something like this, but
would
> > like to do this in hardware. They claim to perform a "mirror" of one
HD to
> > another HD while the system is live and in use.
>
> It's called RAID-1.

I dunno... whenever I think of "RAID" I always think of live mirrors that
operate constantly and not a "once in a while" mirror operation just to
perform a backup (when talking about RAID-1). Am I mistaken in this
thinking?

> > I have no idea how it does
> > this without corruption of some type (as you mentioned above, doing dd on
> > a live HD will probably cause errors, especially if the live HD is in
> > use). For example, http://www.arcoide.com/ . To quote the function we're
> > looking at " the DupliDisk2 automatically switches to the remaining drive
>
> So setup three disks in a software RAID-1 configuration with one disk being
> marked as a "spare" disk.  Then have a script run from a cron job every day
> which marks the first disk as failed, this will cause the spare disk to be
> added to the RAID set and have the data copied to it.  After setting one disk
> as failed the script can then add it back to the RAID as a spare disk.
>
> This means that apart from the RAID copying time (at least 20 minutes on an
> idle array - longer on a busy array) you will always have two live active
> copies of your data.  Before your script runs you'll also have an old
> snapshot of your data which can be used to recover from operator error.
>
> This will do everything that the arcoide product appears to do.

From what you have said, basically the only advantage of the Arcoide
products is that they reduce load on the system, as they can perform the
RAID-1 mirror process in the background independent of the OS.

An alternative spin on what you have said (nearly identical) would be to
put double the hard disks in each server (e.g. if a server has 2 HDs, put
in 2 "backup" HDs). Configure them in RAID-1 mode, marking the 2 backups
as spares, and then "adding" them to the RAID array every day via cron.
This would cause the 2 live HDs to be mirrored to the backups, after which
the 2 "backup" HDs would be disengaged so they aren't constantly synced.

Would the above work? Sorry if I seem naive, but I haven't tried this
"once in a while" RAID method before.

Sincerely,
Jason









Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
> >
> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction
>
> And if you discover this within 12 hours...  Most times you won't.

We've got file integrity checkers running on all the servers, and they run
very often (mostly every hour or so) so unless the first thing the cracker
does is to "fix" or disengage the checkers then we SHOULD notice this
within the 12 hours... or make that 24 hours to give a bit more leeway.

> > then the backup drives can be immediately pulled out and
> > inserted into a backup computer, and switched on to provide immediate
> > restoration of services (with data up to 12 hours old, but better than
> > having up-to-date information that may be corrupted or "cracked" versions
> > of programs).
>
> If the drive is in the cracked machine then it should not be trusted.  If a
> drive is in a machine that has hardware damage then there's no guarantee
> it'll still have data on it...

Well, as said before, unless the cracker spends a considerable amount of
time learning the setup of the system, going through the cron files,
config files, etc. then hopefully there will be enough things set up that
the cracker will be unable to destroy or "fix" everything.

And regarding data loss, so far the most common thing for us is to have a
HD become the point of failure. We've had quite a few CPUs burn out too
(no bad ram yet, lucky...), but nothing except a bad HD has seemed to
cause severe data loss.






Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
>
> Sorry, but I don't see any benefit to having maximum 12 hour old data when
> you could have 0. The hardware solution you mentioned was RAID 1 anyway.
> Easiest thing to do is use it, and have both spare drives and spare machines
> ready to roll should you need to swap either.

> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction, then the backup drives can be immediately pulled out and
> > inserted into a backup computer, and switched on to provide immediate
> > restoration of services (with data up to 12 hours old, but better than
> > having up-to-date information that may be corrupted or "cracked" versions
> > of programs).
>
> Well, there's your benefit to having old data. Who's to say you're going to
> know within 12 hours? This is not a particularly interesting problem, mostly
> because you're not curing the disease, you're trying to clean up after
> infection.

Not really... I think of it as helping to cure the disease and helping to
clean up the problem, not eliminating both because it is impossible to
cure the disease completely. Unfortunately, if you work with a medium to
large amount of varied equipment (or even a small amount if you're
unlucky) you're bound to be cracked sooner or later. Even the strictest
security policy won't guarantee 100% protection.

Another way to do this would be to go Russell's way (the ideal way) and
run a RAID array with 3 drives, 2 live and 1 spare, and then sync the
spare up every 24 hours. However, this would require 3 drives instead of
2... $$$ and space. For the average server with between 2-4 drives, this
would mean a minimum of 6-12 drives compared to 4-8. The server cases
wouldn't even hold 12 drives; they could hold up to 8 or so. So money
isn't the only consideration. Then, even if we could somehow fit that many
drives in the average rackmount case, overheating, power supply issues,
etc. come into play.

You might say "tape backup"... but keep in mind that it doesn't offer a
"plug n play" solution if a server goes down. With the above method, a
dead server could be brought to life in a minute or so (literally) rather
than half an hour... an hour... or more.







Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > You might say "tape backup"... but keep in mind that it doesn't offer a
> > "plug n play" solution if a server goes down. With the above method, a
> > dead server could be brought to life in a minute or so (literally) rather
> > than half an hour... an hour... or more.
>
> It occurs to me that in most cases, recovery from a catastrophic
> failure is not going to be as easy as plug and play. Let's take some
> common situations where we need to recover a system.
>
> Virus -
> The way I traditionally deal with a virus is to never have it touch
> my system. As a system admin it is my job to keep viruses from hitting
> machines in the first place, not clean them up once they arrive.
> Cleaning up is the mentality of the Microsoft security world, and I
> refuse to accept such polluted doctrine. However, I do have a contingency
> plan should I miss a virus. I have a master OS image burnt onto a disk,
> and each of my systems makes a backup of its data nightly (simple tar).
> The backups rotate and are incremental, so I can restore data to the
> current date, masking out any infected paths. This, however, is not a
> plug and play solution; it requires manual control.
>
> Hardware failure -
> I run around and scream a lot. This kind of failure is mostly luck
> of the draw, but I try to follow the same strategy as above.
>
> Hacker -
> If they wipe the disk, then the OS image and data backup will work
> nicely. If they do something else, then I wipe the disk myself (no
> backdoors that way), and recover.
>
> In none of these situations do I see any value in making a replica of a
> tainted or damaged disk every 12 hours.

You are thinking of resource-intensive work, which would require more than
a basic or low-level sysadmin. I would not trust a low-level sysadmin to
start performing restoration work on a system. At least if we catch it
within 12 or 24 hours, the sysadmin could at least pull the backup hard
disks out of the drive caddies, plug them into the standby backup system
(which basically has everything except hard disks) and have a working
system up and running instantly. A high-level sysadmin can slowly sift
through the original information carefully once the system is up and
running.

Your assumption is that you can have a sysadmin onsite within a certain
amount of time to perform said restoration work on the filesystem, which
may not be possible, especially with cutbacks everywhere and everyone
tightening their belts. Calling in a high-level sysadmin at 3am to perform
such tasks is not always possible resource-wise.






Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> On Wed, 2 Jan 2002 00:44, Jason Lim wrote:
> > > > The idea being that if there is a virus, a cracker, or hardware
> > > > malfunction
> > >
> > > And if you discover this within 12 hours...  Most times you won't.
> >
> > We've got file integrity checkers running on all the servers, and they run
> > very often (mostly every hour or so) so unless the first thing the cracker
> > does is to "fix" or disengage the checkers then we SHOULD notice this
> > within the 12 hours... or make that 24 hours to give a bit more leeway.
>
> The first thing any serious cracker will do is to stop themselves being
> trivially discovered.  This means changing the database for the file
> integrity checkers etc.  Also if you're running a standard kernel from a
> major distribution then they can have pre-compiled kernel modules ready to
> be inserted into your kernel.  They can prevent the rootkit from being
> discovered (system calls such as readdir() will not show the file names,
> stat() won't stat them, only chdir() and exec() will see them).
>
> To alleviate this risk you need grsecurity or NSA SE-Linux.
>
> At the moment if you want to make your servers more secure in a way that is
> Debian supported then grsecurity is your best option.  It allows you to
> really control what the kernel allows.  Run something in a chroot() and
> it'll NEVER get out.  Run all the daemons that are likely to get attacked
> in chroot() environments and the rest of the system will be safe no matter
> what happens.
>
> Grsecurity allows you to prevent stack-smashing and prevent chroot()
> programs from calling chroot(), mknod(), ptrace(), and other dangerous
> system calls.  Also it does lots more.
>
> I used to maintain the package, but I have given it up as I'm more
> interested in SE Linux.  The URL is below.
> http://www.earth.li/~noodles/grsec/
>
> Any comparison of amount of time/effort/money vs result will show that
> grsecurity is much better than any of these ideas regarding swapping hard
> drives etc for improving security.
>
> > Well, as said before, unless the cracker spends a considerable amount of
> > time learning the setup of the system, going through the cron files,
> > config files, etc. then hopefully there will be enough things set up that
> > the cracker will be unable to destroy or "fix" everything.
>
> Heh.  When setting things up I presume that anyone who can crack my systems
> knows more about such things than I do.  Therefore once they get root I
> can't beat them or expect to detect them in any small amount of time.
>
> Once they get root it's game over - unless you run SE Linux.

Of course, hackers/crackers are only one thing that needs to be "solved".
Hardware failure, etc. also needs to be taken into consideration. But your
points are very valid, and I will investigate grsecurity as an additional
means to secure the systems. My hope is only that by increasing the number
of detection methods, backups, etc., it will become harder, though not
impossible (is it ever impossible?), for a cracker to cripple/destroy a
system.

> > And regarding data loss, so far the most common thing for us is to have a
> > HD become the point of failure. We've had quite a few CPUs burn out too
> > (no bad ram yet, lucky...), but nothing except a bad HD has seemed to
> > cause severe data loss.
>
> So RAID is what you need.

Actually your three-disk RAID solution sounded pretty good... I'll look
more carefully into the "standby" mode for a RAID disk and such, and see
how well it would work.

Sigh... and I was hoping for a simple solution like cp /mnt/disk1/*
/mnt/disk2/  :-/






Re: Apache cgi-bin for users

2002-01-03 Thread Jason Lim

While I've never run things from
/home/*/public_html/cgi-bin/somethinghere.cgi,
we've always had to recompile suexec to get things working.

suexec has the allowed directory hard-compiled in, so you'd need to
recompile it to make some other directory work.

I suggest you try that.
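In the meantime, the per-user CGI directives from the Apache docs
(separate from the suexec docroot issue) look roughly like this; the
/home/*/public_html/cgi-bin path is an assumption and must match your
UserDir layout:

```apache
# Sketch of an httpd.conf fragment for per-user CGI. The path below is
# an assumption - it must match your UserDir setting.
<Directory /home/*/public_html/cgi-bin>
    Options ExecCGI
    SetHandler cgi-script
</Directory>
```

Note that suexec will still refuse to execute anything outside its
compiled-in docroot, which is why the recompile is needed on top of this.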

Sincerely,
Jason

- Original Message -
From: "Keith Elder" <[EMAIL PROTECTED]>
To: "Marcel Hicking" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, January 04, 2002 5:36 AM
Subject: Re: Apache cgi-bin for users


> Thanks Marcel,
>
> Let me restate what it was I was asking just to clarify my situation.
> If anyone has any input, by all means ante up.
>
> What I am trying to do is setup the server so users in /home/*/ can
> execute CGI programs on their personal web pages on this particular
> machine.  I found a reference in the apache admin guide I have and the
> apache site which say to put the following in the httpd.conf:
>
> <Directory /home/*/public_html/cgi-bin>
> Options ExecCGI
> Addhandler cgi-script .cgi .pl
> </Directory>
>
> I have done that, but I still cannot make the following work:
>
> http://yourdomain.com/~username/cgi-bin/test.cgi
>
> When this page is run, I get "premature end of headers" in the error.log
> file.  I thought this would be fairly simple but it is turning out to be
> a headache.
>
> Anything else I can try?
>
> Keith
>
>
> * Marcel Hicking ([EMAIL PROTECTED]) wrote:
> > From: "Marcel Hicking" <[EMAIL PROTECTED]>
> > To: [EMAIL PROTECTED]
> > Date: Thu, 3 Jan 2002 19:08:32 +0100
> > Subject: Re: Apache cgi-bin for users
> > Reply-to: [EMAIL PROTECTED]
> > X-mailer: Pegasus Mail for Win32 (v3.12c)
> >
> > ScriptAlias /cgi-bin/ /path/to/customers/cgi-bin/
> >
> > See
> > http://httpd.apache.org/docs/mod/mod_alias.html#scriptalias
> >
> > Please make really(!) sure what security implications it
> > has to allow untrustworthy people (customers ;-) to run
> > programs on _your_ server. Hint: look for cgi-wrap and
> > changeroot.
> >
> > http://httpd.apache.org/docs-2.0/misc/security_tips.html
> > http://httpd.apache.org/docs-2.0/suexec.html
> >  or better
> > http://wwwcgi.umr.edu/~cgiwrap/
> >
> > Cheers,
> > Marcel
> >
> >
> > Keith Elder <[EMAIL PROTECTED]> 31 Dec 2001, at 17:31:
> >
> > > Greetings and Happy New Year!
> > >
> > > I am trying to enable cgi-bin on user directories.  I found
> > > the following lines on the apache.org site, put them in, but
> > > they didn't work:
> > >
> > > <Directory /home/*/public_html/cgi-bin>
> > > Options ExecCGI
> > >  SetHandler cgi-script
> > > </Directory>
> > >
> > >
> > > Any other suggestions as to how to setup cgi-bin directories
> > > for user accounts?
> > >
> > >
> > > Thanks,
> > >
> > > Keith
> > >
> > > ###
> > >   Keith Elder
> > >Email: [EMAIL PROTECTED]
> > > Phone: 1-734-507-1438
> > >  Text Messaging (145 characters): [EMAIL PROTECTED]
> > > Web: http://www.zorka.com (Howto's, News, and hosting!)
> > >
> > >  "With enough memory and hard drive space
> > >anything in life is possible!"
> > > ###
> > >
> > >
> > > --
> > > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > > with a subject of "unsubscribe". Trouble? Contact
> > > [EMAIL PROTECTED]
> > >
> >
> >
> > --
> >__
> >  .´  `.
> >  : :' !  Enjoy
> >  `. `´  Debian/GNU Linux
> >`-
> >
> >
> > --
> > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>
> ###
>   Keith Elder
>Email: [EMAIL PROTECTED]
> Phone: 1-734-507-1438
>  Text Messaging (145 characters): [EMAIL PROTECTED]
> Web: http://www.zorka.com (Howto's, News, and hosting!)
>
>  "With enough memory and hard drive space
>anything in life is possible!"
> ###
> http://www.zentek-international.com
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]
>
>






Re: BIND exploited ?

2002-01-03 Thread Jason Lim

I would also strongly suggest getting chkrootkit.

chkrootkit - Checks for signs of rootkits on the local system

chkrootkit identifies whether the target computer is infected with a
rootkit. It can currently identify the following root kits:
 1. lrk3, lrk4, lrk5, lrk6 (and some variants);
 2. Solaris rootkit;
 3. FreeBSD rootkit;
 4. t0rn (including latest variant);
 5. Ambient's Rootkit for Linux (ARK);
 6. Ramen Worm;
 7. rh[67]-shaper;
 8. RSHA;
 9. Romanian rootkit;
 10. RK17;
 11. Lion Worm;
 12. Adore Worm.

Please note that this is not a definitive test; it does not ensure that the
target has not been cracked. In addition to running chkrootkit, one should
perform more specific tests.

Hope that helps. What we did was install new hard disks, restore from
backups to the new hard disks, immediately find out how they got in by
analysing the old hard disks, patch/fix/whatever the new hard disks so the
kiddies can't get back in, and then slowly and carefully go through the
old hard disks to find out what they did (if you are interested). Good for
a learning experience. Trace their actions, what they
did/changed/installed/etc.
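The "keep the old disks for analysis" step above can be sketched like
this; the device and mount point names are assumptions, and DRY_RUN
defaults to printing the commands rather than running them:

```shell
#!/bin/sh
# Attach the suspect disk to a clean analysis box and mount it read-only,
# refusing device nodes, setuid bits and execution from the evidence disk.
# /dev/hdc1 and /mnt/evidence are hypothetical - adjust for the real box.
EVIDENCE=/dev/hdc1
MNT=/mnt/evidence
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run mkdir -p "$MNT"
run mount -o ro,noexec,nodev,nosuid "$EVIDENCE" "$MNT"
# Checksum everything first, so later analysis can show nothing changed.
run sh -c "find $MNT -xdev -type f -print0 | xargs -0 md5sum > /root/evidence.md5"
```

Mounting read-only with noexec/nodev/nosuid lets you trace the intruder's
actions without ever trusting or altering what is on the disk.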

- Original Message -
From: "Thedore Knab" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 04, 2002 10:16 AM
Subject: BIND exploited ?


> I recently inherited a machine that I think has been exploited.
>
> It seems to have a stupid root kit installed unless this is a decoy.
>
> What does it look like to you professionals?
>
> [root@moe ...]# uname -a
> Linux moe. 2.2.14-5.0 #1 Tue Mar 7 21:07:39 EST 2000 i686 unknown
>
> [root@moe ...]# ps auxww
> USER   PID %CPU %MEM   VSZ  RSS TTY  STAT START   TIME COMMAND
> root 1  0.0  0.3  1120  476 ?S 2001   0:06 init [3]
> root 2  0.0  0.0 00 ?SW2001   0:00 [kflushd]
> root 3  0.0  0.0 00 ?SW2001   0:27 [kupdate]
> root 4  0.0  0.0 00 ?SW2001   0:00 [kpiod]
> root 5  0.0  0.0 00 ?SW2001   0:01 [kswapd]
> root 6  0.0  0.0 00 ?SW<   2001   0:00 [mdrecoveryd]
> root   154  0.0  0.3  1104  392 ?S 2001   0:00 /usr/sbin/apmd -p 10 -w 5 -W -s /etc/sysconfig/apm-scripts/suspend -r /etc/sysconfig/apm-scripts/resume
> bin315  0.0  0.3  1216  404 ?S 2001   0:00 portmap
> root   330  0.0  0.0 00 ?SW2001   0:00 [lockd]
> root   331  0.0  0.0 00 ?SW2001   0:00 [rpciod]
> root   340  0.0  0.4  1164  516 ?S 2001   0:00 rpc.statd
> nobody 414  0.0  0.4  1308  544 ?S 2001   0:00 identd -e -o
> nobody 415  0.0  0.4  1308  544 ?S 2001   0:00 identd -e -o
> nobody 416  0.0  0.4  1308  544 ?S 2001   0:00 identd -e -o
> nobody 420  0.0  0.4  1308  544 ?S 2001   0:00 identd -e -o
> nobody 421  0.0  0.4  1308  544 ?S 2001   0:00 identd -e -o
> daemon 432  0.0  0.2  1144  296 ?S 2001   0:00 /usr/sbin/atd
> root   446  0.0  0.4  1328  572 ?S 2001   0:00 crond
> root   464  0.0  0.3  1168  468 ?S 2001   0:00 inetd
> root   478  0.0  1.6  3160 2120 ?S 2001  14:00 /usr/sbin/snmpd
> root   543  0.0  0.3  1156  400 ?S 2001   0:00 gpm -t imps2
> xfs604  0.0  0.6  1920  876 ?S 2001   0:00 xfs -droppriv -daemon -port -1
> root   645  0.0  0.0   852  100 ?S 2001   0:00 /etc/.../bindshell
> root   646  0.0  0.0   864  124 ?S 2001   0:00 /etc/.../bnc
> root   650  0.0  0.3  1092  408 tty2 S 2001   0:00 /sbin/mingetty tty2
> root   651  0.0  0.3  1092  408 tty3 S 2001   0:00 /sbin/mingetty tty3
> root   652  0.0  0.3  1092  408 tty4 S 2001   0:00 /sbin/mingetty tty4
> root   653  0.0  0.3  1092  408 tty5 S 2001   0:00 /sbin/mingetty tty5
> root   654  0.0  0.3  1092  408 tty6 S 2001   0:00 /sbin/mingetty tty6
> root   655  0.0  0.0   856  104 ?S 2001   0:00 /etc/.../lsh 31333 v0idzz
> named 9928  0.0  4.9  7268 6356 ?S 2001   6:48 named -u named
> root 11369  0.0  0.3  1092  408 tty1 S 2001   0:00 /sbin/mingetty tty1
> root  3574  0.0  0.5  1464  760 ?S20:28   0:00 in.telnetd: calendar-spaces.
> root  3575  0.0  0.9  2312 1196 pts/0S20:28   0:00 login -- ted
> ted   3576  0.0  0.7  1696  940 pts/0S20:28   0:00 -bash
> root  3599  0.0  0.7  2008  900 pts/0S20:28   0:00 su -
> root  3600  0.0  0.7  1748  996 pts/0S20:29   0:00 -bash
> root  3719  0.0  0.4  1172  540 ?S20:38   0:00 syslogd -m 0
> root  3728  0.0  0.6  1440  768 ?S20:38   0:00 klogd
> root  3817  0.0  0.5  2332  704 pts/0R20:43   0:00 ps auxww
>
> [root@moe .

Re: BIND exploited ?

2002-01-05 Thread Jason Lim

> > Is it really necessary to buy new hard drives?  Is there a reason why
> > he can't just reformat his current drives before reinstalling?
>
> Sure he can, if he wants to lose the evidence of what happened and lose the
> possibility to hand the drives over to law enforcement officials (which may
> be demanded of him even if he doesn't want it in the case that his machine
> was used to attack others).
>

I agree, which is exactly why I suggest he get new hard drives... to
preserve evidence, and to learn from the mistakes made. Otherwise, what's
going to stop it happening again?





