RE: Multiple DSLs, and switching incoming route upon failure?

2001-05-25 Thread Jeff S Wheeler

Are your DSL uplinks from different ISPs, or from the same IP provider?  If
they are from differing providers, there is no way you can feasibly implement
BGP.  If they are redundant paths to the same ISP, you could ask them to
issue you a reserved ASN (65512 - 65535) and announce your /28 into their
network via eBGP sessions.  That makes a lot of assumptions about software
support on your router(s), and about their willingness to accommodate you, of
course.
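For what it's worth, if the ISP did agree, the customer side might look
something like this Zebra bgpd-style sketch (the ASNs, prefix, and neighbor
addresses are all placeholders, not a working config):

```
! Hypothetical customer-side BGP config, Zebra bgpd syntax.
! 65512 is a reserved/private ASN; 192.0.2.0/28 and the
! neighbor addresses are made up.
router bgp 65512
 network 192.0.2.0/28
 ! one session per redundant path to the same upstream
 neighbor 10.0.1.1 remote-as 701
 neighbor 10.0.2.1 remote-as 701
```

The upstream would strip the private ASN and aggregate the /28 into its own
announcements, which is why this only works when both links go to one ISP.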

Realistically, you aren't going to make this happen.  Perhaps you could
participate in something like the 6BONE, or simply colocate your obviously
mission-critical services at your ISP.

- jsw


-Original Message-
From: Mike Fedyk [mailto:[EMAIL PROTECTED]]On Behalf Of Mike Fedyk
Sent: Friday, May 25, 2001 9:22 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Multiple DSLs, and switching incoming route upon failure?


Hi,

I don't believe I'm subscribed to this list, so please cc me also. (I'm on
so many debian lists, and I put all of the low traffic ones in one
folder...)

I already have multiple DSL links to the Internet, but I haven't done
anything more as far as incoming connections besides SMTP and a couple
others for remote workers.

The problem now is I want to put an FTP and DNS server up.  These by
themselves aren't a problem, but sometimes one of the DSLs will go down.

I'd only qualify for a /28 block of IPs; is there any way I can get BGP
routing at my shop?  I'm willing to read all the info I need, and have an
interest in this area anyway...

This message isn't meant to start a flame war about DSL reliability, as even
with fiber it is recommended to multi-home.

DNS round-robin will do 80% of the job, but there will be intermittent
access when one of the links goes down.
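(Round-robin here just means multiple A records for one name; a hypothetical
BIND zone fragment, with made-up names and addresses, would be:

```
; One A record per DSL link.  A short TTL limits how long resolvers
; keep handing out a dead link's address, but cannot eliminate the
; window entirely.
ftp   300   IN   A   192.0.2.10      ; reached via DSL 1
ftp   300   IN   A   198.51.100.10   ; reached via DSL 2
```

Resolvers rotate through the records, so roughly half of new connections
land on each link until a record's TTL expires.)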

I've considered getting an account on a remote server and just forwarding
the connections here, but that defeats the whole purpose of having the
server local.

Is there anything I'm missing?

TIA,

Mike


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]






RE: Multiple DSLs, and switching incoming route upon failure?

2001-05-26 Thread Jeff S Wheeler

Customers who purchase T1/T3 service generate more revenue for the ISP, and
although the difference may not justify the administrative overhead of
adding a BGP customer, most such customers never request BGP anyway.  Some
organizations (BEST Internet, before Verio gobbled them up, for example)
charge an additional fee for BGP; they charged $500/mo.

Address space is also an issue.  You cannot announce blocks smaller than /24
into global BGP and expect the results you want.  Some networks are still
filtering announcements smaller than /19 within some ranges, SprintLink for
example, as they took steps years ago to counteract routing table growth,
and this remains a problem even as routers become more powerful and memory
gets cheaper.

I do not know how the 6BONE scenario would work.  It was a shot from the
hip; I'm sure you could do some research in this area, or perhaps someone
else subscribed to the list can tell us how the 6BONE interoperates with the
current IPv4 Internet.

If you had a colocated server on a reliable IP connection you could VPN
yourself a subnet from it over either of your two DSL routes.  This might be
sane but would cause you to incur a lot of bandwidth bills. :-)

- jsw


-Original Message-
From: Mike Fedyk [mailto:[EMAIL PROTECTED]]On Behalf Of Mike Fedyk
Sent: Saturday, May 26, 2001 4:35 PM
To: Jeff S Wheeler
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Multiple DSLs, and switching incoming route upon failure?


On Fri, May 25, 2001 at 11:29:46PM -0400, Jeff S Wheeler wrote:
> Are your DSL uplinks from different ISPs, or from the same IP provider?  If

They are different providers.

DSL 1 is 384k/1.5M ADSL from PacBell.

DSL 2 is 768k SDSL from Landmark (LMKI).

> they are from differing providers, there is no way you can feasibly implement
> BGP.  If they are redundant paths to the same ISP, you could ask them to

What do t1 and t3 customers do?  Is the only criterion for "feasibility" a
need for more IPs?

> issue you a reserved ASN (65512 - 65535) and announce your /28 into their
> network via ebgp sessions.  That makes a lot of assumptions about software
> support on your router(s), and about their willingness to accommodate you, of
> course.

I could get a second link to pacbell, but sometimes their entire network
gets unstable, and I would still need a second provider.  Doing the same
with the other provider would require four links, and still wouldn't fix the
problem of one ISP crashing completely.

>
> Realistically, you aren't going to make this happen.  Perhaps you could
> participate in something like the 6BONE, or simply colocate your obviously
> mission-critical services at your ISP.
>

Hmm, I wonder how exactly this would work with the 6BONE.  Can you get
traffic from ipv4 into the 6BONE from the "normal" internet?  How would I be
addressed?

I probably wouldn't choose my ISP, then; I'd choose a company that connects
to several ISPs, and that'll be more expensive. :(

> - jsw
>
>

Mike






RE: kernel 2.2.19 limitations.

2001-05-27 Thread Jeff S Wheeler

I have a 2.4.4 machine with two Pentium III 833MHz CPUs and an AcceleRAID
170/64MB with a pair of IBM Ultra160 disks on it, all on a Tyan Thunder 2500
LE3(?) motherboard.  It has on-board SCSI as well, symbios 53c1010 chipset,
and everything works okay.  I can't get the controller's cache to read as
enabled, though; damned if I know why.

But as far as stability is concerned the machine works like a charm, and
mysql likes it well enough to do 7200 UPDATEs/second on it during my batch
jobs. :-)

- jsw


-Original Message-
From: Przemyslaw Wegrzyn [mailto:[EMAIL PROTECTED]]
Sent: Sunday, May 27, 2001 6:50 PM
To: Peter Billson
Cc: [EMAIL PROTECTED]; recipient list not shown:
Subject: Re: kernel 2.2.19 limitations.




On Sun, 27 May 2001, Peter Billson wrote:

> Whoops... helps if I post *to* the list too!
>
> > Yes the limit is still the usual 2Gb. The limit is actually with ext2, i
> > believe, although I'm not sure.
>
>   The limit is in the kernel, not the ext2 file system, otherwise 2.4.x
> wouldn't be able to support >2Gb files either.  There are patches about
> for adding LFS (large file support).  I had compiled a 2.2.18
> kernel after patching with the LFS patch borrowed from RedHat's 6.2ee
> (Enterprise Edition) source.
>
>   But why not just run 2.4.x?

Hmm, I'm building a so-called "very important server".
I'm not sure if 2.4.x is stable enough.
It will run Apache and a biiig PostgreSQL database, all on a big RAID.

Does anyone have experience with 2.4.x + SMP (2 x PIII) + Mylex AcceleRAID?

After reading some newsgroups, I believe 2.4.x kernels either work pretty
stably or do totally weird things, all depending on the hardware
configuration.  Well, it seems some drivers are not yet stable enough...

-=Czaj-nick=-








ccbill

2001-06-02 Thread Jeff S Wheeler

I hope this isn't considered off-topic.  :-)

Does anyone else on the list deal with, or have customers who use, ccbill?
Two of my customers have had negative experiences with them recently, one
related to their customer-side CGI script(s).  CCBill has not been
cooperative in providing me with any kind of documentation on their data
schema, but realistically both customers need to move away from CCBill's
script to something more robust.

Customer A has serious problems with people subscribing with "guessable"
passwords, or passwords that are frequently published to password-trading
websites.  They actually get visitors to their site who have found them by
typing "ccbill passwords" into search engines, and so forth.  They then
have the same 3 or 4 passwords being used from -hundreds- of different
domain names, most likely by hundreds or thousands of different people.  We
have started deleting the abused accounts, but the real solution is to stop
allowing customers to choose their own (initial?) passwords.

Customer B has a larger problem.  She now believes CCBill has caused her
account username and password (which she had to share with CCBill to have
them set up their service) to become compromised.  It is possible her
suspicion is correct.  Has anyone else had customer accounts become
compromised after turning passwords over to CCBill recently?  I would think
more than one password would be stolen from them, and thus this would not
remain an isolated incident.


Either way, CCBill has begun to genuinely scare me.  These folks deal with,
on a daily basis, thousands of peoples' credit card numbers and other
individualized non-public information, and from my dealings with them over
the past week and a half, they are grossly underqualified to do so.  Does
anyone else use CCBill, and if so have you had different experiences?  How
about with other companies that provide similar products?

---
Jeff S Wheeler [EMAIL PROTECTED]
Software Development Five Elements, Inc.
http://www.five-elements.com/~jsw/   502-339-3527 Office






colocation space/inexpensive bandwidth

2001-06-05 Thread Jeff S Wheeler

Since we're on the topic of colocation space this morning, I thought I would
post and ask if anyone has colocation cabinet space available at a Level(3)
or similar facility.  Currently we colocate with a small ISP and are very
happy with their service, but we would like to be able to offer better
pricing to our customers.  However, when last I spoke with L(3) sales folks,
their cabinets were $1000/mo, with an added $2000/mo minimum expenditure on
bandwidth/private line services per cabinet.  This is too much for us, but
if someone has a few U available someplace that prices bandwidth in the
$400/Mbit or less range, I would be very interested in consuming that extra
space (for a fee, of course) and taking advantage of your bandwidth pricing.
:-)

Anyone near Minneapolis/Chicago/Louisville/Cincinnati would be preferable as
well, as currently we maintain our own equipment, and have no need to trust
a pair of "remote hands" to know what they are doing.  In those areas we can
continue to have this benefit.

- jsw






What is the DUL?

2001-06-07 Thread Jeff S Wheeler

What is the DUL?

- jsw


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Doug Alcorn
Sent: Thursday, June 07, 2001 11:28 PM
To: Debian ISP List
Subject: Re: Have you been hacked by f*ck PoizonBOx?


Michelle Konzack <[EMAIL PROTECTED]> writes:

> America Online, Inc. (NETBLK-AOL-172BLK) AOL-172BLK   172.128.0.0 -
> 172.191.255.255

are you implying that this block is all dial-up addresses that aren't
in the DUL?
--
 (__) Doug Alcorn (mailto:[EMAIL PROTECTED] http://www.lathi.net)
 oo / PGP 02B3 1E26 BCF2 9AAF 93F1  61D7 450C B264 3E63 D543
 |_/  If you're a capitalist and you have the best goods and they're
  free, you don't have to proselytize, you just have to wait.






privileges problem

2001-06-24 Thread Jeff S Wheeler

Also, the stock 2.4.x series kernel limits supplementary groups to 32.  There
would be a per-process penalty for increasing that limit.  You could patch
Apache to include the supplemental groups when it forks children (if it does
not do this already..), but overall that is a bad solution.

See NGROUPS in include/linux/limits.h and other lines containing NGROUPS /
NGROUPS_MAX in the source if you want to go ahead with your idea.


If your users' data really can't be world-readable, your remaining option is
to run separate httpds for customers with these large privacy concerns.
Note that most of the time, though, your customers just don't want people
copying their whole directory structures and stealing content wholesale.
That can be discouraged by other means anyway, but you can give your
customers some comfort by simply instructing them to set all their
directories with permissions o-r.
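A minimal sketch of that o-r advice, using a hypothetical site path: dropping
the world-readable bit on a directory stops outsiders from listing it (so
the tree can't be mirrored wholesale) while the execute bit still lets the
web server traverse it.

```shell
# Drop the world-readable bit on a customer's directories.  The path
# is hypothetical; files inside can stay readable by the server's group.
mkdir -p /tmp/demo-site/images
chmod 755 /tmp/demo-site /tmp/demo-site/images   # normalize first
chmod o-r /tmp/demo-site /tmp/demo-site/images   # world can traverse (x) but not list (r)
stat -c '%A' /tmp/demo-site                      # prints drwxr-x--x
```

Known filenames are still fetchable over HTTP, of course; this only blocks
directory enumeration by other local users.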

Note that CGIs/SSIs will be a security concern for you.  You had better use
suEXEC or something similar so that customers cannot execute their CGI
programs as the user/group Apache's children run as, if you rely on that for
your privacy/security mechanism...

- jsw


-Original Message-
From: Russell Coker [mailto:[EMAIL PROTECTED]]
Sent: Sunday, June 24, 2001 5:02 AM
To: :yegon; [EMAIL PROTECTED]
Subject: Re: privileges problem


On Saturday 23 June 2001 14:40, :yegon wrote:
> while configuring dynamic virtual hosting (with mod_vhost_alias) on a
> new server i ran into this problem
>
> i create a new group named g(username) for each new virtual web, I set
> all user files to chmod 640 to avoid them to be read by another user
>
> my apache server runs as www-data so i need to add user www-data to
> each virtual web group to be able to serve its documents

Supplementary groups are only read by login, su, and other programs that
change UID etc.  They can only be changed by a root process so once the
program is running as UID != 0 it can't be changed.

> this all works fine but
> when I create a new virtual web, that means a new group, user and home
> directory and try to access its documents via http I get this error in
> the apache error.log
>
> is there a way to somehow refresh this info for the running process
> without restarting it?

No.

> do you have another suggestion?

Why do you need to have a separate GID for each web space?  Why not just
have the files owned by the GID for Apache and the UID for the user?

Another solution would be to make all the files owned by the UID of
Apache and the GID of the user and mode 660...

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page






RE: Multiple DSLs, and switching incoming route upon failure?

2001-06-26 Thread Jeff S Wheeler

Quite frankly, it's dumb as hell to try to half-ass a redundancy solution
when you evidently need as close to 100% uptime as you can get.  You need to
either spend the bucks on leased lines from tier-1 carriers and run BGP
(contracting with someone for assistance if you don't have the know-how
yet), or preferably you should colocate with a real datacenter and hope they
don't go out of business.

- jsw






RE: users bypassing shaper limitation

2001-07-01 Thread Jeff S Wheeler

I have been reading this thread and noticed no one has suggested the MAC
address filtering capabilities in Linux 2.4's new iptables subsystem.  I
hear there are serious problems with using 2.4.x series kernels as a
firewall, though; what are they?
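For the record, the 2.4 rule syntax I am thinking of is roughly this
(interface, IP, and MAC are placeholders; an untested sketch, and it would
need to run as root on the gateway):

```
# Accept only the registered MAC/IP pairing; drop anything else
# claiming that IP.  Uses the 'mac' match module in 2.4 iptables.
iptables -A FORWARD -i eth1 -s 10.1.2.3 \
         -m mac --mac-source 00:50:56:AB:CD:EF -j ACCEPT
iptables -A FORWARD -i eth1 -s 10.1.2.3 -j DROP
```

One pair of rules per registered host, generated from whatever database the
shaper configuration is already built from.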

- jsw


-Original Message-
From: Gerard MacNeil [mailto:[EMAIL PROTECTED]]
Sent: Sunday, July 01, 2001 7:46 AM
To: [EMAIL PROTECTED]
Subject: Re: users bypassing shaper limitation


On Sun, 1 Jul 2001 14:30:33 +0300, [EMAIL PROTECTED] (Sami Haahtinen)
wrote:

> On Sat, Jun 30, 2001 at 12:07:28PM +0100, Karl E. Jorgensen wrote:
> > Besides, the bad guys may choose not to use DHCP - this is
> > entirely up to the config on the client machines.
>
> but if you make dynamic firewall rules based on the leases file,
> blocking all outside traffic, it would be efficient enough.

Yes, do routing by host /32 rather than network /24.  Or you can subnet
depending on your hardware configuration.

Gerard MacNeil
System Administrator






RE: users bypassing shaper limitation

2001-07-02 Thread Jeff S Wheeler

You fail to understand.  Drop traffic from any MAC/IP pair that isn't
"registered" with you, thus in your traffic shaper configuration.  Keeping
track of MAC addresses and where they're supposed to be on your network in a
campus environment is pretty standard.  I work on a University campus and
must notify the IT department anytime I want to add a host or move network
cards around.  If I do not, they will grumble and/or disable the ethernet
ports that unknown MAC addresses appear on.  In some areas (e.g. student
labs) they do that automatically so kids can't just bring their laptop in
and hop on napster at 100Mbit.

- jsw


-Original Message-
From: Gerard MacNeil [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 02, 2001 5:39 AM
To: [EMAIL PROTECTED]
Subject: Re: users bypassing shaper limitation


On Sun, 1 Jul 2001 15:59:34 -0400, "Jeff S Wheeler" <[EMAIL PROTECTED]>
wrote:

> I have been reading this thread and noticed no one has suggested the MAC
> address filtering capabilities in Linux 2.4's new ip tables subsystem.

There is no requirement to run 2.4.x and iptables, nor iproute2, to
accomplish the policy implementation that was specified.  The administrative
policy is bandwidth control over a defined set of IP addresses.  That policy
is being circumvented with the current configuration by the whizkids.  It is
up to the tech to implement a solution.

Besides, I'm sure I have a MAC address changer utility (or is that a feature
of iproute2?) that I downloaded sometime in the past.  The same whizkids
could use it to circumvent a policy based on MAC addresses too ... although
it would be a trickier thing to accomplish.  I think I have read on some
mailing list that this is quite a security issue with PPPoE and some
wireless connections.

Gerard MacNeil
System Administrator






RE: users bypassing shaper limitation

2001-07-03 Thread Jeff S Wheeler

Your method would allow someone to attach their computer to the network,
certainly, but it would not allow them to bypass the traffic shaping
limitations configured for that host.  That is the goal of the original
poster, as I understand.

- jsw


-Original Message-
From: news [mailto:[EMAIL PROTECTED]]On Behalf Of Holger
Lubitz
Sent: Tuesday, July 03, 2001 9:08 AM
To: [EMAIL PROTECTED]
Subject: Re: users bypassing shaper limitation


Jeff S Wheeler proclaimed:
> cards around.  If I do not, they will grumble and/or disable the ethernet
> ports that unknown MAC addresses appear on.  In some areas (e.g. student
> labs) they do that automatically so kids can't just bring their laptop in
> and hop on napster at 100Mbit.

Easy. Disconnect any machine, set your MAC/IP-addresses to its
addresses, connect your laptop.
Don't know its addresses? Just sniff around on the port for a while, but
make sure you keep quiet.

Holger






RE: ATA Speed

2001-07-03 Thread Jeff S Wheeler

You can use the hdparm utility to discover what mode your disks are
operating in.  Notice the second-to-last line that begins with 'DMA modes:'.
The '*' next to udma4 indicates it is operating in that mode, which equates
to something commonly called ATA/66.  :-)

intrepid:/home/jsw# hdparm -i /dev/hdc

/dev/hdc:

 Model=Maxtor 96147U8, FwRev=BAC51KJ0, SerialNo=N8046RBC
 Config={ Fixed }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=57
 BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=120060864
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 *udma4
 Kernel Drive Geometry LogicalCHS=7473/255/63
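Since not everyone has the disk handy, here is a small sketch that pulls the
active (starred) mode out of an `hdparm -i` DMA-modes line; the sample line
is copied from the output above, and on a live system you would pipe
`hdparm -i /dev/hdc` in instead.

```shell
# Extract the starred entry from the 'DMA modes:' line.
line=' DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 *udma4'
active=$(printf '%s\n' "$line" | tr ' ' '\n' | grep '^\*' | tr -d '*')
echo "$active"   # prints udma4, i.e. ATA/66
```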

- jsw


-Original Message-
From: R K [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 03, 2001 6:49 PM
To: [EMAIL PROTECTED]
Subject: ATA Speed


Does the following mean that Linux is only using my ide bus at ata33 speeds?
Or more accurately not using the full ata100 mode?

ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx

I've seen nothing from dmesg to indicate that it's doing otherwise.  Does it
configure it as 33 and then still use it to its full potential, or does it
impose restrictions on itself?  Even if this doesn't have anything to do
with it, how would I verify that Linux is using the hardware to its full
potential?

Thanks in advance






RE: Virtual Hosting and the FHS

2001-07-13 Thread Jeff S Wheeler

Do you find it difficult to manage your text file database when you have
programs on different machines needing access to the data?  I use mysql
extensively in our shop because it makes it easy to access from any of our
servers, and it makes reporting easy.  I'd rather spend a few minutes
crafting an SQL query than half an hour writing code to perform the same
task over text files.

I am of the opinion that storing HTTP hit logs in a database is stupid.  I
doubt anyone on this list has such complex and dynamic log analysis needs
that they must keep traffic logs in a database, rather than analyze the text
logfile and store summaries.  Not only does it waste disk and make it
complicated to run common log analysis software on your logs, or to allow
customers to do so, it also reduces the overall throughput of your web
servers.  That's a no-no in my book.

I would be interested in the motivations and arguments anyone on the list
has to contradict my opinion.  I'm sure it looks like I'm trying to start a
flame war, but I just cannot understand why anyone would wish to log to a
database.  Perhaps someone can enlighten me.

As far as file descriptor limits are concerned, my understanding of Apache
2.0's architecture is that it will reduce the FD problem by sharing file
descriptors among kernel threads.  I don't know how that fits into the
mod_perl / php / etc. picture, though; I really have not investigated Apache
2.0 extensively.  To be honest, threading makes me afraid my good old tools
won't work anymore, or that they will work but I'll have to live without the
benefit of the new thread model.

So perhaps Apache 2.0's threading benefits will only shine in areas of
static content?  If that is the case, I'll be disappointed as products like
Zeus and thttpd seem to be superior to Apache in that arena, and probably
will continue to be.

- jsw


-Original Message-
From: Craig Sanders [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 12, 2001 5:05 PM
To: Haim Dimermanas
Cc: Russell Coker; [EMAIL PROTECTED]
Subject: Re: Virtual Hosting and the FHS


On Thu, Jul 12, 2001 at 10:00:57AM -0500, Haim Dimermanas wrote:
> > any script i need to write can just open the virtual-hosts.conf file
> > and parse it (it's a single line, colon-delimited format) to find
> > out everything it needs to know about every virtual host.
>
>  I used to do it that way and then I discovered something called a
>  database.

i've considered using postgres for this but am resisting it until the
advantages greatly outweigh the disadvantages.

why complicate a simple job with a database? plain text configuration is
perfect for a task of this size.

it takes a lot longer to edit a database entry than it does to edit a
text file with vi.

i'd lose the ability to check-in all changes to RCS if i used a database
instead of a text file.

to get these features, i'd have to write a wrapper script to dump the
config database to a text file, run vi, and then import the database
from the edited file. that still wouldn't get around the fact that you
can put comments in text files - you can't in databases.

in short: databases are appropriate for some tasks, but not all.

> It makes it a lot easier to delete an entry and prevent duplicates.

huh? it takes no time at all to run "vi virtual-hosts.conf" and comment
out or delete a line.


> > i need to split up the log files so that each virtual domain can
> > download their raw access logs at any time. having separate error
> > log files is necessary for debugging scripts too (and preserving
> > privacy - don't want user A having access to user B's error logs).
>
>  I strongly suggest you invest some time looking into a
> way to put the access log into a database. Something like
> http://freshmeat.net/projects/apachedb/.

i wrote my own code a year ago to store logs in postgres (mysql is a
toy). it had its uses but i decided it was a waste of disk space and
it made archiving old logs a pain. it greatly complicated the task of
allowing users to download their log files.

i went back to log files.

i'm a strong believer in the KISS principle, and see no need to add
unnecessary complication, especially for such little benefit.


> My research showed that web hosting customers don't look at their
> stats every day. Even if they did, your stats are generated
> daily. Having the logs in a database allows you to generate the stats
> on the fly. Now with a simple caching system that keeps the stats
> until midnight, you can save yourself a lot of machine power.

not relevant.

1. my customers want raw log files. the fact that i run webalizer
for them is a nice bonus, but what they insist on having is the raw
logs downloadable by ftp whenever they want (within a time limit -
we don't keep old logs forever). that's fine by me - stats are their
responsibility.

2. cpu usage is basically irrelevant on a machine which is I/O bound.

3. caching the stats pages defeats the purpose of generating them on the
fly.


RE: help with site+database

2001-07-17 Thread Jeff S Wheeler

On Tuesday, July 17, 2001 10:37 AM, Jeremy Lunn wrote:
>I do have a serious use for a database soon so I'll be sure to test both
>database packages out myself.  Replication will be important though and
>last time I checked mysql didn't have any which is pretty useless (but I
>think they might have been implementing it).

MySQL 3.23.x does have replication, however it is rudimentary.  You basically
have master/slave configurations.  Although this can work bidirectionally,
that can screw up AUTO_INCREMENT tables if you do an INSERT into such a
table on both your servers ``simultaneously''.  It also seems like no fun to
administer.

If you can live with doing all your UPDATEs on a single, master server, and
slaving your other server(s) to it, MySQL's replication is not impossible to
use or understand; but obviously that makes application development more
difficult and time-consuming.

What replication capabilities does pgsql have?  Here is a link to MySQL's
replication documentation:

http://www.mysql.com/doc/R/e/Replication.html
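For the single-master arrangement, the setup is roughly the following my.cnf
sketch (hostnames and credentials are placeholders; check the manual above
for the 3.23-specific details before trusting any of it):

```
# Master fragment: enable the binary log, give the server a unique id.
[mysqld]
log-bin
server-id = 1

# Slave fragment (3.23 style), after loading a snapshot of the
# master's data:
[mysqld]
server-id       = 2
master-host     = master.example.com
master-user     = repl
master-password = secret
```

The slave replays the master's binary log, which is why all UPDATEs must
happen on the master in this arrangement.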

- jsw







LDAP + quotas

2001-07-31 Thread Jeff S Wheeler

To compare to a database concept: if the LDAP daemon had `triggers' and
could execute code that made quotactl(2) calls on the relevant filesystems,
on the relevant machines, when the quota values in the LDAP database
changed, that would be effective.  To determine current usage, the LDAP
daemon would also have to use quotactl(2) to query the VFS, though, unless
current usage simply was not provided as part of your LDAP schema.
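A hypothetical sketch of that trigger idea: a helper that turns the values
an LDAP change listener might hand it (user, soft/hard block limits,
filesystem) into the setquota(8) call that would push the change into the
kernel via quotactl(2).  All names here are made up, and it is a dry run,
printing the command instead of executing it, since applying quotas
requires root.

```shell
# Dry-run quota "trigger": emit the setquota(8) invocation for a
# changed LDAP quota entry.  Inode limits are left at 0 (unlimited).
apply_quota() {
    user=$1; soft=$2; hard=$3; fs=$4
    echo "setquota -u $user $soft $hard 0 0 $fs"
}
apply_quota jsw 100000 120000 /home
# prints: setquota -u jsw 100000 120000 0 0 /home
```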

- jsw


-Original Message-
From: Sami Haahtinen [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 31, 2001 3:10 PM
To: [EMAIL PROTECTED]
Subject: Re: Re[2]: LDAP + quotas


On Tue, Jul 31, 2001 at 02:52:55PM +0200, Russell Coker wrote:
> > something like NSS for quota lookups would be nice, and to have a
> > caching daemon (like nscd) to store the data for later lookups.
>
> nscd is only ever called by user-land code such as login, su, ls, etc.
Quota
> is handled by the kernel.  Having the kernel call back to an application
for
> this isn't what you want.  What happens if/when that application needs to
> create a file?

what i meant was something similar: a daemon that would monitor the
activity in quota-related system calls and update the quota file by
itself..

i was not completely serious about the solution, but it would be a nice
idea. i know that quotas cannot rely on any daemon as such, but a
helper daemon would 'help' in many cases.

Sami

--
  -< Sami Haahtinen >-
  -[ Is it still a bug, if we have learned to live with it? ]-
-< 2209 3C53 D0FB 041C F7B1  F908 A9B6 F730 B83D 761C >-






RE: Data Center, enviromental recomendations

2001-09-13 Thread Jeff S Wheeler

There was a lengthy thread on NANOG about this very question a couple years
ago.  Check out the archives at http://www.merit.edu/mail.archives/nanog/

- jsw


-Original Message-
From: Felipe Alvarez Harnecker [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 13, 2001 12:21 PM
To: [EMAIL PROTECTED]
Subject: Data Center, enviromental recomendations



Hi,

what are the standard temperature, humidity, etc. ranges for a data
center?

Is there any paper on the subject?

Thanx.

--
__

Felipe Alvarez Harnecker.  QlSoftware.

Tels. 665.99.41 - 09.874.60.17
e-mail: [EMAIL PROTECTED]

http://qlsoft.cl/
http://ql.cl/
__






RE: NT SNMP and MRTG

2001-09-14 Thread Jeff S Wheeler

Uh, don't you have a manageable switch?  Seems like a pretty standard thing
to have at a colocation facility.

- jsw


-Original Message-
From: Gene Grimm [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 16, 2001 10:17 AM
To: [EMAIL PROTECTED]
Subject: NT SNMP and MRTG


Can anyone shed any light on how to configure NT SNMP to find out how much
bandwidth these servers (and web sites, if possible) are consuming?  We just
got stuck with three servers, two of them NT and one Mandrake, that we had
to colocate in our facility.  We would like to be able to monitor the
specific bandwidth these machines are using.






RE: Simple web log analysis for multiple sites?

2001-11-15 Thread Jeff S Wheeler

I'd also be interested to know what other folks are doing for this.  We use
webalizer, but we keep separate stats & reports for each web site.  I then
have a program that reads the webalizer.hist file for each site and updates
an SQL table with information for each site.  If someone needed more data
they could probably extract it from webalizer's HTML files, but relying on
webalizer.current is a bad idea, since webalizer destroys that file at the
end of each month and starts over fresh.  I wish it preserved that data,
because there is a lot of good stuff in there I might like to use.  Perhaps
I'll contribute a patch someday; but more likely we will just reinvent the
wheel so we can get past some other shortcomings of webalizer.

If this program would be useful to anyone else I could share it.  Below are
some records for one site we host.  The "ws" column is the web server on
which the site resides.  "ds" is the datestamp column; in mysql the easiest
way to do this was to use the first day of each month to represent data for
the whole month.  "complete" indicates whether the data for that month is
still partial, or whether data for the following month also exists.  While
that doesn't guarantee there are no holes in the data, it does at least give
you an indication of whether you should use it for billing/etc yet :-)
"ts" is just a timestamp column.
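A sketch of the summarization step described above, assuming webalizer.hist's usual whitespace-separated layout (month, year, hits, files, sites, kbytes, first day, last day, pages, visits); the parsing code and field order are my reconstruction, not the actual program:

```python
# Parse webalizer.hist lines into per-month summaries and mark a month
# "complete" once data for the following month also exists.
# Assumed field layout: month year hits files sites kbytes first last pages visits

def parse_hist(lines):
    months = {}
    for line in lines:
        f = line.split()
        month, year = int(f[0]), int(f[1])
        months[(year, month)] = {
            "hits": int(f[2]), "files": int(f[3]), "sites": int(f[4]),
            "kbytes": int(f[5]), "pages": int(f[8]), "visits": int(f[9]),
        }
    for (year, month), row in months.items():
        nxt = (year + 1, 1) if month == 12 else (year, month + 1)
        row["complete"] = 1 if nxt in months else 0
    return months

# Figures taken from the memepool.com rows shown below.
hist = [
    "9 2001 931873 823837 147449 29061296 1 30 817547 485257",
    "10 2001 1133903 1012502 163933 30512375 1 31 988517 577638",
]
data = parse_hist(hist)
```

September is flagged complete because October data exists; October is not, since no November row has been written yet.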

- jsw

mysql> SELECT * FROM WlfMSum WHERE ws="fire" AND sn="memepool.com" AND ds="2001-10-01"\G
*************************** 1. row ***************************
      ws: fire
      sn: memepool.com
      ds: 2001-10-01
    hits: 1133903
   files: 1012502
   sites: 163933
  kbytes: 30512375
   pages: 988517
  visits: 577638
complete: 1
      ts: 20011101113012
1 row in set (0.04 sec)

mysql> SELECT * FROM WlfMSum WHERE ws="fire" AND sn="memepool.com";
+------+--------------+------------+---------+---------+--------+----------+--------+--------+----------+----------------+
| ws   | sn           | ds         | hits    | files   | sites  | kbytes   | pages  | visits | complete | ts             |
+------+--------------+------------+---------+---------+--------+----------+--------+--------+----------+----------------+
| fire | memepool.com | 2001-08-01 |  233195 |  210099 |  47498 |  7255226 | 214404 | 122863 |        1 | 20011101113012 |
| fire | memepool.com | 2001-09-01 |  931873 |  823837 | 147449 | 29061296 | 817547 | 485257 |        1 | 20011101113012 |
| fire | memepool.com | 2001-10-01 | 1133903 | 1012502 | 163933 | 30512375 | 988517 | 577638 |        1 | 20011101113012 |
+------+--------------+------------+---------+---------+--------+----------+--------+--------+----------+----------------+
4 rows in set (0.07 sec)


-Original Message-
From: John Ackermann N8UR [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 15, 2001 8:19 AM
To: [EMAIL PROTECTED]
Subject: Simple web log analysis for multiple sites?


Hi --

I'm looking for a program that will analyze the logs across the multiple
virtual sites that I run and provide summary-level info (e.g., number of
hits/bytes per site per day, with monthly summaries, etc).

I'm currently using a slightly hacked version of webstat with some shell
scripts that cat the various logfiles together, add an identifying tag,
sort the result, and feed it into the analyzer, but that really generates
more info than I need for top-level summary purposes and doesn't provide
easy per-site statistics.

Thanks for any suggestions.

John
[EMAIL PROTECTED]



John AckermannN8UR [EMAIL PROTECTED] http://www.febo.com
President, TAPR[EMAIL PROTECTED]http://www.tapr.org






RE: Strange apache behaviour?

2001-12-07 Thread Jeff S Wheeler

We do all our log processing as a user called "stats" on one of our
machines.  The root accounts on all our web servers have their ssh public
keys in the stats user's authorized_keys file, and they run a nifty log
rotation program that uploads the log data to the box we do all our log
analysis/etc on.  It also inserts some metadata about the logfile into an
sql database.  All that happens as user root.

Then, everything else happens as user stats on the one machine.  It uses
webalizer to process the files (we may change to something else when we have
spare time, webalizer is not great but the customers like the graphs/etc)
and then extracts summary data from each site's webalizer.hist file and
populates yet another sql table.

We have other log-related tools which do things like gzip the log files
after a configurable number of hours, delete them from the web servers
themselves (they have to remain there for a little while so they're
available to customers), etc.

We also use the summary data collected from webalizer.hist to bill our
customers for their traffic.  It's an incredible pain in the ass to go
through all your customer sites and figure out how much traffic they did in
a month, then do your own calculations to determine how much $$$ they owe
you based on "included traffic" plans, cost per mega/gigabyte, etc.  After a
few months of doing that, it got automated. :-)
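The overage math that got automated is straightforward; a minimal sketch, with invented plan parameters (real plans would come from the billing database, not constants):

```python
# Compute a monthly overage charge from transferred kbytes, an
# "included traffic" allowance, and a per-gigabyte rate.  The plan
# numbers here are illustrative only.

def overage_charge(kbytes, included_gb, rate_per_gb):
    used_gb = kbytes / (1024.0 * 1024.0)      # kbytes -> gigabytes
    over = max(0.0, used_gb - included_gb)    # only bill beyond the allowance
    return round(over * rate_per_gb, 2)

# memepool.com's October kbytes figure from the summary table above,
# against a hypothetical 20GB-included, $5/GB plan:
charge = overage_charge(30512375, included_gb=20, rate_per_gb=5.00)
```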


Does anyone else on the list have similar billing / log processing systems
for their web hosting companies?  One of our vendors (!) asked us if we
would license our software to them, but we don't really have a refined
user-interface yet.  And as with any patchy, home-grown billing system it
often requires the care of a programmer to add features, etc.  We've thought
about a small monthly fee that would include any requested features/etc.
What do other folks do?

- jsw


-Original Message-
From: Jeremy Lunn [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 07, 2001 4:26 AM
To: James
Cc: Jason Lim; [EMAIL PROTECTED]
Subject: Re: Strange apache behaviour?


On Fri, Dec 07, 2001 at 09:41:00AM +0100, James wrote:
> It is usual to run webalizer as a user? I have never even thought of
> doing that. Is there any particular reason? (security?)

Generally it is a good idea to run everything you can as non-root.  Come
to think of it, I probably have webalizer running as root on a few
machines (whoops!).

--
Jeremy Lunn
Melbourne, Australia
Find me on Jabber today! Try my email address as my JID.






RE: System locks up with RealTek 8139 and kernel 2.2.20

2001-12-27 Thread Jeff S Wheeler

>>However, I have noticed something strange. I must keep "outbound" traffic
>>flowing or they forget their ARP table for some strange reason. I keep an

AFAIK that ethernet chipset is not particularly advanced.  ARP is not a
function of the card itself, nor the low-level driver.  ARP resolution works
via broadcasting queries for an IP address to all nodes on the network.
Then, the node that has that address is supposed to send back a unicast
response to the requesting node.  The requesting node then stores the MAC/IP
association in its ARP table.  Someone correct me if I missed anything
important or don't really understand this as well as I think.
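The cache-and-expire behavior described above can be modelled in a few lines; this is a toy illustration of the protocol logic (the real work happens in the kernel, not in the NIC or its driver):

```python
import time

# Toy model of a kernel ARP cache: entries learned from unicast replies
# expire after a timeout, after which the next lookup misses and a new
# broadcast query would be sent.

class ArpCache:
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.table = {}                  # ip -> (mac, learned_at)

    def learn(self, ip, mac, now=None):
        self.table[ip] = (mac, now if now is not None else time.time())

    def lookup(self, ip, now=None):
        now = now if now is not None else time.time()
        entry = self.table.get(ip)
        if entry and now - entry[1] < self.timeout:
            return entry[0]              # fresh entry: no broadcast needed
        return None                      # stale or missing: must re-query

cache = ArpCache(timeout=60.0)
cache.learn("192.168.0.1", "00:a0:c9:39:4c:2c", now=0.0)
```

A lookup within the timeout returns the cached MAC; after expiry it misses, which is normal behavior and not something outbound traffic should be needed to sustain.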

If you are having arp table problems there must be something else not
working properly.

- jsw


-Original Message-
From: John Gonzalez, Tularosa Communications
[mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 27, 2001 11:53 AM
To: Olivier Poitrey
Cc: [EMAIL PROTECTED]; Antonio Rodriguez
Subject: Re: System locks up with RealTek 8139 and kernel 2.2.20


What causes the lockups? How often? I have an RTL 8139 in use.

However, I have noticed something strange. I must keep "outbound" traffic
flowing or they forget their ARP table for some strange reason. I keep an
outbound ping running... If i dont, and there is no network activity on
the box, it is unresponsive via network, but hopping on the console and
starting another ping session brings it back to life. (I also have to do
this on my machines with older RTL cards, using the ne2k-pci driver)

So far, uptime of box and kernel ver with RTL cards is:

Linux x 2.2.19 #22 Wed Jun 20 18:12:16 PDT 2001 i686 unknown
  9:42am  up 13 days,  6:57,  6 users,  load average: 0.00, 0.00, 0.00




--
John Gonzalez, Tularosa Communications | (505) 439-0200 work
JG6416, ASN 11711, [EMAIL PROTECTED]  | (505) 443-1228 fax
  http://www.tularosa.net

On Thu, 27 Dec 2001, Olivier Poitrey wrote:

> - Original Message -
> From: "Antonio Rodriguez" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, December 21, 2001 3:54 PM
> Subject: System locks up with RealTek 8139 and kernel 2.2.20
>
> > 2. move to 2.4 and hope this solves the problem. I have already gone
> > from 2.2.17 to 2.2.20 to try to fix it, but maybe the move to 2.4 will
> > be more significant.
>
> You'll have the same problem with 2.4.x, I have tested it for you :/







unstable is "unstable"; stable is "outdated"

2002-02-01 Thread Jeff S Wheeler

On Fri, 2002-02-01 at 01:42, Jason Lim wrote:
> We have production boxes running unstable with no problem. Needed to run
> unstable because only unstable had some new software, unavailable in
> stable. Its a pity stable gets so outdated all the time as compared to
> other distros like Redhat and Caldera (stable still on 2.2 kernel), but
> thats a topic for a separate discussion.

This is really a shame.  It's my biggest complaint with Debian by far. 
The tools work very well, but the release cycle is such that you can't
use a "stable" revision of the distribution and have modern packages
available.

I can't imagine this issue is being ignored; presumably it is discussed
on a policy list?  It seems like FreeBSD's -RELEASE, -STABLE,
-CURRENT scheme works much better than what Debian has.  I've never seen
big political arguments on this mailing list, but I always hear that
Debian as an organization is often too burdened with internal bickering
and politics to move forward with big changes.  Is that the case here?

Just curious, not trying to start a flame war.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







opinions on swap size and usage?

2002-02-12 Thread Jeff S Wheeler

For years I've been configuring my machines with "small" swap spaces, no
larger than 128MB in most cases, even though most of my systems have
512MB - 1GB of memory.  My desktop computer has zero swap, although I
have more ram than even X + gnome + mozilla + xemacs can use. :-)

I do this because I think if they need to swap that much, there is
probably Something Wrong, and all that disk access is just going to make
the machine unusable.  Better to let it grind to a halt quickly than to
drag it out, I always said.

Alexis Bory's post earlier today made me think about swap a bit more
than I usually do.  What do other folks on this list do?  Zero swap?  As
much swap as physical memory?  More?  Why?  Can you change the swapper's
priority, and does this help when your machine starts swapping heavily?

Thanks for the opinions.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







true x86 PCI bus speeds/specs

2002-02-23 Thread Jeff S Wheeler

Often, folks post on topics such as maximum network performance or disk
performance they should expect to see from their x86-based server,
firewall, etc.  And almost as often, some uninformed person posts a
reply that says something to the effect of, "Your PCI bus is only 66MHz,
which limits you to 66Mbit/sec", or something similar.  This is wrong.

The most common PCI bus is 32 bits wide, and operates at 33MHz.  Its
maximum throughput is thus 32*33/8 million bytes/second.  That's about
132MBytes/sec.  Some PCI buses are 64 bits wide at 33MHz, such as on
several popular Tyan Thunder models.  Those have a maximum throughput of
264MBytes/sec.  Other boards are 64 bits wide at 66MHz, which is limited
to 528MBytes/sec.  And numerous motherboard implementations have more
than one PCI bus, so you could put high-bandwidth peripherals on each
of the two buses, and not substantially impact performance or cause them
to compete for resources.

Now, all card/driver combinations have some overhead associated with
them.  The bus isn't 100% efficient, but on many "consumer-grade"
mainboards the 32 bit / 33MHz bus will push 110MBytes/sec or more in
real-world use.  If you don't believe me, check the 3ware RAID card
reviews on storagereview.com (assuming SR is still up).

This means full 100Mbit/sec network throughput, which is 12.5MBytes/sec,
will easily fit within the maximum throughput of the PCI bus.  The real
issue is kernel efficiency.  Zero-copy TCP and things like that are
going to improve linux network performance by leaps and bounds.  Going
from a 132MByte/sec bus to a 528MByte/sec bus will disappoint you :-)
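The bus arithmetic above reduces to one formula, width in bits times clock in MHz divided by 8:

```python
# Theoretical peak PCI bus throughput in MBytes/sec:
# bus width (bits) * clock (MHz) / 8.

def pci_peak_mbytes(width_bits, clock_mhz):
    return width_bits * clock_mhz // 8

common = pci_peak_mbytes(32, 33)   # standard 32-bit/33MHz slot
wide   = pci_peak_mbytes(64, 33)   # 64-bit/33MHz (e.g. some Tyan boards)
fast   = pci_peak_mbytes(64, 66)   # 64-bit/66MHz
```

These give 132, 264, and 528 MBytes/sec respectively, matching the figures in the text, and all comfortably exceed the 12.5MBytes/sec of a saturated 100Mbit link.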


This is a popular form of confusion.  Mr Billson is not the first person
to give someone a misleading answer in this respect, nor will he be the
last.  I do not intend to put him down by correcting his answer, but I
hope my post serves to better inform the readership of this list.

On Sat, 2002-02-23 at 09:10, Peter Billson wrote:
>   There was some discussion last January (2001) about this type of
> thing. The problem you will run into if you are using POTS Intel
> hardware is the PCI bus speed, so you are going to have a tough time
> filling one 100Mbs connection with an old Pentium - assuming an old
> 66Mhz PCI bus. You can forget about filling two or more. Also, cheap
> NICs will do more to kill your max. throughput.
>   That being said, I run old Pentium 133s with 64Mb RAM in several
> applications as routers and can notice no network latency on a 100BaseT
> network, but I have never benchmarked the machines. Usually the
> bottlenecks are elsewhere - i.e. server hard drive throughput. Packet
> routing, filtering, masquerading really doesn't require much CPU
> horsepower.
-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: BGP4/OSPF routing daemon for Linux?

2002-03-01 Thread Jeff S Wheeler

IOS doesn't have protected memory, is that not correct?  It's like old
multitasking systems where you didn't have virtual memory. :/

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Inexpensive gigabit copper NICs?

2002-03-05 Thread Jeff S Wheeler

Can anyone recommend some inexpensive GIGE NICs that use CAT 5 instead
of fibre pairs?  I just want to run some back-to-back from a busy NFS
server to a couple of its clients for now.  I have not even looked into
GIGE copper switches, but I imagine the ROI would not be very high for
my shop just yet :-)

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: redundant office of redundancy

2002-03-06 Thread Jeff S Wheeler

Hi, yes, debian-isp gets posts like this with some regularity.  I firmly
believe that no one is ever happy with the half-assed solutions they
come up with, and it's certainly not something you should have hosting
customers rely upon.  Mail isn't so difficult, but web traffic is a
different animal.

Basically you need a dynamic dns service for all your web sites, and you
also need to just switch to your "backup" IP address if your primary one
fails.  You'll have to worry about TTLs and such.  Or, put one DNS box
on each IP link, give each its own zone files, and list both in the NS
records for all your domains.  That sounds sane, because if a DNS query
can't reach the server on DSL, the client will query the one on cable,
which will respond with a working IP.  Again you need a really small TTL
to make this work in practice.
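From a client's perspective the failover amounts to try-then-fall-back; a minimal sketch, where the loopback "primary" (a closed port) and "backup" (a live listener) stand in for the DSL and cable addresses:

```python
import socket

# Try each (host, port) in order; return the first connection that
# succeeds, or None if every link is dead.

def connect_with_fallback(addrs, timeout=3.0):
    for host, port in addrs:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue                     # dead link: try the next address
    return None

backup = socket.socket()
backup.bind(("127.0.0.1", 0))            # stand-in for the cable-side server
backup.listen(1)
backup_port = backup.getsockname()[1]

# Port 1 on loopback refuses the connection, simulating the downed DSL link.
conn = connect_with_fallback([("127.0.0.1", 1), ("127.0.0.1", backup_port)])
```

Real resolvers retry the second NS entry much like this, which is why the scheme mostly works but still leaves the intermittent failures the thread describes during TTL windows.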

Shops that want to do this "the right way" buy a couple of circuits from
service providers offering BGP, apply with ARIN for an ASN, and announce
their own space.  Sounds like that is well beyond your financial ability
since you are on DSL and looking at $50/mo for cable, though these days
it is probably within the technical grasp of many folks.


If anyone on this list has done this and is satisfied with their
solution I would like to hear about your experiences, however, I think
you will find that the general opinion is you need to colocate with some
bigger shop, or be satisfied with what you have, or resell.

On Wed, 2002-03-06 at 16:40, David Bishop wrote:
> Howdy!  As you can tell from my subject line, I am interested today in making 
> sure that I can always surf por^W^Wserve webpages.  My business (consulting & 
> small-time webhosting) is dependent on my always having an internet 
> connection.  Currently, I have a fairly stable dsl line that serves my needs, 
> but some stupid redback issues on my isp's side have made me wary.  I figure 
> the chances of both a dsl line and cable going out at the same time are 
> fairly small, and throwing $50 a month at the problem is acceptable.  Now, as 
> I'm planning on doing this, it begs the question(s): 1) how to aggregate the 
> bandwidth of both pipes into one, transparently (I will be using two 
> computers as well, might as well do it right); 2) how do you go about setting 
> up "failover", such that if one of the machines drops out, the other takes 
> over dns/mail/web?
> 
> I know some of you out there are about to exclaim "Get your isp to do this, 
> idiot!"  Well, I'm large enough to seriously look at this, but small enough 
> (and geeky enough) that I'd really like to take care of it myself.  I have a 
> decent background in setting up linux as a firewall/proxy/nat box, and a 
> basic understanding of "real" routing.  Pointers, hints, tips, all are 
> welcomed gratefully.   
> 
> To sum up: currently, my setup is 2 machines hot to the 'net, the rest nat'ed 
> off, all using 1 dsl and a block of ips, all nat routing through 1 of the 
> machines.  I would like to end up with dual-connections, bandwidth aggregated 
> through both the machines, and failover for high-priority services.  
> 
> Thanks!
> 
> - -- 
> D.A.Bishop
-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: Debian testing suitable for productive?

2002-03-12 Thread Jeff S Wheeler

I've got two different groups of production machines.  One is customer
web servers, which runs on stable.  It's antiquated and we'll probably
just move them up to unstable soon.  The other group of machines are my
SQL database and billing systems, which is already on unstable.

I think the suggestion to stay 2 - 3 days behind is good.  What I do now
is just upgrade a non-critical box, and assuming everything works okay,
I upgrade the others.  Is there an easy way to just keep the packages a
few days behind with apt?

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: RAID 0 risky ?

2002-03-20 Thread Jeff S Wheeler

On Wed, 2002-03-20 at 01:03, Dave Watkins wrote:
> replicate the data somehow. RAID 5 obviously does the least replication 
> while still keeping fault tolerance, although it does cost a small amount 
> of computing power (not a problem if you have a RAID card)

Some RAID cards are substantially slower at RAID 5 than 0/1/0+1.  The
3ware ATA RAID boards are excellent, for example, except at RAID 5. 
They now produce a couple of boards with more CPU power or different
ASICs, or whatever, to make up for this shortfall.  But they are more
expensive.

IIRC the 7x10 series is quite slow at RAID 5, but the 7x50 series
improved on this greatly.  My source is www.storagereview.com, though;
I do not use any of their newer RAID 5 boards.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







apache BASIC authentication w/large userbase

2002-04-03 Thread Jeff S Wheeler

I have a customer who requires BASIC authentication for their site. 
They have a fair amount of traffic as well as a very quickly growing
userbase.  They were on mod_auth_mysql before, but with hundreds of
apache children that is not very practical.

I suggested a change to a signed-session-cookie type system, but they
would not go for that because apparently a disproportionate number of
their end-users disable cookies in their web browser.  Stupid media
privacy paranoia.

The userbase is presently around 100K and growing 5K/day or so.  They
were having things go so slowly that users could not login.  In the
short term we replaced mod_auth_mysql with an apache module I whipped up
to send requests out via UDP to a specified host/port, and wait for a
reply (with a 3 second timeout).  Then I hacked out a quick Perl program
to handle those requests, hit mysql for actual user/password info, and
to cache the user information in ram for the duration of the daemon's
lifetime.
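The request/reply shape of that scheme can be sketched in-process; the wire format ("user:password" in, "OK"/"FAIL" out) and the in-memory dict standing in for the MySQL-backed daemon are my illustration, not the actual module:

```python
import socket

# Minimal sketch of the UDP auth scheme: the apache module sends the
# credentials as a datagram, the daemon checks an in-memory cache and
# replies "OK" or "FAIL".

users = {"alice": "sesame"}              # stand-in for the MySQL user table

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(3.0)                   # the 3-second timeout from the text
client.sendto(b"alice:sesame", addr)

request, peer = server.recvfrom(512)
user, _, password = request.decode().partition(":")
server.sendto(b"OK" if users.get(user) == password else b"FAIL", peer)

reply, _ = client.recvfrom(512)
```

UDP keeps the per-request cost to one datagram each way and no connection setup, which is why it scales better than a MySQL handle per apache child; the trade-off is that the module must treat a timeout as a failed lookup.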

Obviously this won't work forever without a serious change to my caching
strategy, but before I put more work into this mechanism, what do other
folks on the list do for high-traffic, large-userbase BASIC authen?  I
know it's a poor limitation but *shrug* the customer knows their needs.

I figured DBM would be sluggish, and the customer already tried text
files, but moved to mod_auth_mysql when that ran out of steam.

Your Input Is Appreciated.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: apache BASIC authentication w/large userbase

2002-04-04 Thread Jeff S Wheeler

On Thu, 2002-04-04 at 03:06, Stephane Bortzmeyer wrote:
> On Wed, Apr 03, 2002 at 06:35:22PM -0500,
>  Jeff S Wheeler <[EMAIL PROTECTED]> wrote 
>  a message of 39 lines which said:
> 
> > would not go for that because apparently a disproportionate number of
> > their end-users disable cookies in their web browser.  Stupid media
> > privacy paranoia.
> 
> You are wrong.
>  

Well, we deal with a lot of adult webmasters, including a few large
ones.  I don't do a lot of CGI-ish stuff, or session tracking for those
sites; however, our in-house guy who does do that work claims nearly 30%
of the visitors to one high-profile site we work on have a browser with
cookies disabled.  I haven't generated the data myself, so I don't know
if I believe the 30% figure, but I believe "disproportionate" is pretty
safe given the users.

It's probably a stretch for you to state that I am wrong given who their
userbase is, however if you have information on similar sites to back up
your statement I certainly will be interested.  I'll see if we can track
that precisely on some of our customer sites.

> So you reinvented LDAP :-)

LDAP didn't occur to me at all; I'm glad you suggested it.  We have no
LDAP resources or experience in-house, but honestly would like to move
to it for a saner authentication/authorization system for our unix, ftp,
and mail accounts as well.  There seems to be a real lack of a good,
thorough HOWTO, though.  Have I not looked in the right place?

Is LDAP really the best tool here?  Keep in mind hundreds of authen
requests per second, although I don't doubt that large shops with a lot
of users probably have that kind of volume in regular unixy stuff.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: network cabling management

2002-04-18 Thread Jeff S Wheeler

I asked a BellSouth central office supervisor this question once.  They
didn't have any organized method of tracking much of their cable plant. 
Surprising in one respect, because you'd think some huge software
company would be very motivated to write and sell such software to Bell.

But on the other hand, it's not surprising that they weren't organized
enough to realize they spend a lot of time figuring out where things go.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Wed, 2002-04-17 at 11:01, Emile van Bergen wrote:
> On Wed, 17 Apr 2002, Tommy van Leeuwen wrote:
> 
> > What kind of tools are you using for network cabling and patches
> > management? We've tried txtfiles, acessdatabases and such but we're
> > looking at an easier to manage tool..
> 
> Perhaps the middle ground between those two: a spreadsheet?
> 
> Cheers,
> 
> 
> Emile.
> 
> --
> E-Advies / Emile van Bergen   |   [EMAIL PROTECTED]
> tel. +31 (0)70 3906153|   http://www.e-advies.info
> 
> 







Re: postfix and relayhost question

2002-05-06 Thread Jeff S Wheeler

On Mon, 2002-05-06 at 11:43, Patrick Hsieh wrote:
> I configure my relayhost in main.cf of postfix.
> When I send a mail with multiple recipients in an envelope, how to make
> it just one single smtp connection from the postfix server to the
> relayhost? I tried to use netstat -nt to view the smtp connection and
> found postfix use 5 smtp connections to my relayhost. It seems to divide
> the envelope into multiple single recipient and send them individually.

Probably, this is a product of *_destination_recipient_limit being set
too low.  I guess if you change default_ or smtp_ to something higher
than your current value, say 500, it will reduce the number of
connections used.
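A hedged main.cf fragment, untested; the parameter names are standard postconf(5) parameters, and 500 is just an example value:

```
# /etc/postfix/main.cf -- raise the per-delivery recipient ceiling so a
# multi-recipient envelope goes out over one SMTP connection to the
# relayhost instead of being split across several.
default_destination_recipient_limit = 500
smtp_destination_recipient_limit = 500
```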

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: how to design mysql clusters with 30,000 clients?

2002-05-24 Thread Jeff S Wheeler

I don't know if anyone else who followed-up on this thread has ever
implemented a high traffic web site of this calibre, but the original
poster is really just trying to band-aid a poor session management
mechanism into working for traffic levels it wasn't really intended for.

While he may still need a large amount of DB muscle for other things,
using PHP/MySQL sessions for a site that really expects to have 30,000
different HTTP clients at peak instants is not very bright.  We have
cookies for this.  Server-side sessions are a great fallback for
paranoid end-users who disable cookies in their browser, but it is my
understanding that PHP relies on a cookie-based session ID anyway?

I tried to follow up with the original poster directly but I can't
deliver mail to his MX for some reason.  *shrug*

Look into signed cookies for authen/authz/session, using a shared secret
known by all your web servers.  This is not a new concept, nor a
difficult one.  It can even be implemented using PHP, though a C apache
module is smarter.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: how to design mysql clusters with 30,000 clients?

2002-05-27 Thread Jeff S Wheeler

Everything I've heard about experiences with mysql on NFS has been
negative.  If you do want to try it, though, keep in mind that
100Mbit/sec ethernet gives you at most 12.5MBytes/sec of I/O
performance, and in practice less.  GIGE cards are cheap these days, as
are switches with a few GIGE ports.  1000baseT works, take advantage of it.

I hope you'll think about a solution other than mysql for this problem,
though.  It's not the right tool for session management on such a scale.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Mon, 2002-05-27 at 07:54, Patrick Hsieh wrote:
> Hello Nicolas Bougues <[EMAIL PROTECTED]>,
> 
> I'd like to discuss the NFS server in this network scenario.
> Say, if I put a linux-based NFS server as the central storage device and
> make all web servers as well as the single mysql write server attached
> over the 100Base ethernet. When encountering 30,000 concurrent clients, 
> will the NFS server be the bottleneck? 
> 
> I am thinking about to put a NetApp filer as the NFS server or build a
> linux-based one myself. Can anyone give me some advice?
> 
> If I put the raw data of MySQL write server in the NetApp filer, if the
> database crashes, I can hopefully recover the latest snapshot backup
> from the NetApp filer in a very short time. However, if I put on the
> local disk array(raid 5) or linux-based NFS server with raid 5 disk
> array attached, I wonder whether it will be my bottleneck or not.
> 
> How does mysql support the NFS server? Is it wise to put mysql raw data
> in the NFS?
> 
> 
> -- 
> Patrick Hsieh <[EMAIL PROTECTED]>
> GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg
> 
> 







Re: transfer rate

2002-07-04 Thread Jeff S Wheeler

Hi, I suggest you check the duplex mode on your ethernet interface and
your switch.  I had a problem similar to yours just a couple of months
ago, and tracked it down to the interface auto-negotiating into half
instead of full duplex.  On a busy ethernet interface that can cause
enough collisions to affect TCP throughput substantially, due to that
small amount of packet loss.

Unfortunately under Linux there is no "good way" to find out the link
speed and duplex condition portably among different ethernet adapters,
at least that I am aware of.  Here is what I do:

$ dmesg |egrep eth[0-2]
eth1: Intel Corp. 82557 [Ethernet Pro 100], 00:A0:C9:39:4C:2C, IRQ 19.
eth2: ns83820 v0.15: DP83820 v1.2: 00:40:f4:17:74:8a io=0xfebf9000 irq=16 f=h,sg
eth2: link now 1000 mbps, full duplex and up.

Also unfortunately, most ethernet drivers don't bother reporting this,
although you can hack it into your drivers if it is important to you.

But another good way to check is to examine your switch:

switch0#sh int fa0/2
  ...
  Auto-duplex (Full), Auto Speed (100), 100BaseTX/FX
  ...
This shows 100/full auto-negotiated.  Really, this is a bad thing to do,
as we should be setting all our ports to 100/full in the configuration,
but it probably won't be done until it bites us in the ass.  :-)

switch0#sh int fa0/24
  ...
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ...
This is a port which has been fixed to 100/full, because it *did* bite
us in the ass.  It's an uplink to a router and does several mbits/sec
24x7, and that packet loss affected all the TCP sessions going over it,
limiting them to around 400Kbits/sec throughput due to TCP backoff :-(
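The magnitude of that slowdown is consistent with the usual back-of-envelope TCP ceiling, rate <= MSS / (RTT * sqrt(loss)), the Mathis et al. rule of thumb. A quick sketch with made-up but plausible numbers, not measurements from this link:

```shell
# Estimate the TCP throughput ceiling imposed by steady packet loss:
#   rate <= MSS / (RTT * sqrt(loss))      (Mathis et al. approximation)
# The numbers are illustrative assumptions, not measured values.
mss=1460      # bytes per segment (typical ethernet MSS)
rtt=0.030     # round-trip time in seconds
loss=0.01     # 1% loss, roughly what a duplex mismatch can produce
awk -v mss="$mss" -v rtt="$rtt" -v p="$loss" \
    'BEGIN { printf "ceiling: %.1f kbit/s\n", mss * 8 / (rtt * sqrt(p)) / 1000 }'
# prints: ceiling: 3893.3 kbit/s
```

Even 1% sustained loss caps each connection at a few Mbit/s regardless of link speed, which is why a half/full mismatch hurts so badly.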


I hope this is helpful.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/


On Wed, 2002-07-03 at 22:20, Rajeev Sharma wrote:
> hi all
> 
>   I am stuck with  a problem of my network..
> 
>   my one debian box is very unstable... sometimes it transfers data
>   smoothly (997.0 kB/s) and sometimes it hangs at 123.0 kB/s..
>   and when i use
> 
>   ping -f 192.168.x.x
> 
>  it shows 1% data loss and some time 0% data loss..
> 
> 
>  i have checked my cables,switch ...but no use 
> 
>  pliz help me
> 
**snip**





Re: Fw: VIRUS IN YOUR MAIL (W32/BugBear.A (Clam))

2002-10-17 Thread Jeff S Wheeler
On Thu, 2002-10-17 at 04:51, Brian May wrote:
> AFAIK transparent proxying in Linux is limited to redirecting all ports
> to a given port another host. It is not possible for the proxy server to
> tell, for instance what the original destination IP address was.
Is this true, or will a getsockname() performed on a TCP socket which
was created as one endpoint of a connection which is being transparently
proxied give the client's intended destination address?  I do not know.
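For what it's worth, with Linux 2.4 netfilter REDIRECT the answer seems to be no: getsockname() on the accepted socket returns the proxy's own rewritten address, and the original destination has to be recovered from the connection-tracking table via the SO_ORIGINAL_DST getsockopt instead. A redirect rule sketch, with the interface and port numbers as placeholder assumptions:

```shell
# Redirect inbound HTTP to a local transparent proxy on port 3128.
# eth0 and the port numbers are illustrative, not from the thread above.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```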

> A transparent HTTP proxy relies on the server name HTTP1.1 request
> field to determine what host the client really wanted to connect to.
> (this has been tested with Pacific's transparent proxy).
I do know that all HTTP/1.1 requests must contain a Host: header to be
valid.  Even if you knew the destination IP address, if you did not have
a Host: header you couldn't successfully complete an HTTP/1.1 request.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





New BIND 4 & 8 Vulnerabilities

2002-11-12 Thread Jeff S Wheeler
See ISC.ORG for information on new BIND vulnerabilities.  Current bind
package in woody is 8.3.3, which is an affected version.  Patches are
not available yet, it seems.

http://www.isc.org/products/BIND/bind-security.html

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





Re: New BIND 4 & 8 Vulnerabilities

2002-11-12 Thread Jeff S Wheeler
I've taken Sonny's suggestion and upgraded to the bind9 package. 
Initially I thought I had a serious problem, as named was not answering
any queries, however it seems to have "fixed itself".  Ordinarily that
would spook me, but in this situation I think I'd rather have spooky
software than known-to-be-exploitable software :-)

Thanks for the suggestion, Sonny.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Tue, 2002-11-12 at 13:53, Sonny Kupka wrote:
> Why not use Bind 9.2.1..
> 
> It's in woody.. When I came over from Slackware to Debian I installed it 
> and haven't looked back..
> 
> The file format was the same from 8.3.* to 9.2.1 I didn't have to do anything..
> 
> ---
> Sonny
> 
> 
> At 01:08 PM 11/12/2002 -0500, Jeff S Wheeler wrote:
> >See ISC.ORG for information on new BIND vulnerabilities.  Current bind
> >package in woody is 8.3.3, which is an affected version.  Patches are
> >not available yet, it seems.
> >
> >http://www.isc.org/products/BIND/bind-security.html
> >
> >--
> >Jeff S Wheeler   [EMAIL PROTECTED]
> >Software DevelopmentFive Elements, Inc
> >http://www.five-elements.com/~jsw/
> 
> 






Re: New BIND 4 & 8 Vulnerabilities

2002-11-13 Thread Jeff S Wheeler
My BIND 8 zone files are working perfectly.  We do have TTL values on
every RR in every zone, though.  Perhaps that was your difficulty?  I
believe I made that change when we upgraded from 4.x to 8.x ages ago.

If there is no such script and you have difficulty with your zonefiles,
let me know the apparent differences and I'd be happy to whip up a Perl
script and post it to the debian-isp list.  We have hundreds of zones as
well, and if there had been a file format problem, I would have had to
do so in order to make the upgrade work.

--
Jeff S Wheeler <[EMAIL PROTECTED]>

On Tue, 2002-11-12 at 19:04, Craig Sanders wrote:
> On Tue, Nov 12, 2002 at 12:53:51PM -0600, Sonny Kupka wrote:
> > Why not use Bind 9.2.1..
> > 
> > It's in woody.. When I came over from Slackware to Debian I installed
> > it and haven't looked back..
> > 
> > The file format was the same from 8.3.* to 9.2.1 I didn't have to do
> > anything..
> 
> is this fully backwards-compatible?
> 
> last time i looked at bind9, the zonefile format had some slight
> incompatibilities - no problem if you only have a few zonefiles that
> need editing, but a major PITA if you have hundreds.
> 
> if there are zonefile incompatibilities, is there a script
> to assist in converting zonefiles?
> 
> craig
> 
> -- 
> craig sanders <[EMAIL PROTECTED]>
> 
> Fabricati Diem, PVNC.
>  -- motto of the Ankh-Morpork City Watch
> 






Re: DNS servers

2002-11-22 Thread Jeff S Wheeler
The draconian license you use to distribute tinydns and other software
is problematic for me.  I can accept different zone file syntax with
ease, and can even adapt myself to the notion that the filesystem is used
as a configuration database.  I can also understand that your resistance
to a license that would allow binary distribution, or distribution of
patched sources, is well-intentioned, but I cannot agree with it.

--
Jeff S Wheeler <[EMAIL PROTECTED]>









Re: Determinig configure options in .debs

2003-01-14 Thread Jeff S Wheeler
On Tue, 2003-01-14 at 17:15, Jan V wrote:
> If you want to know the compile-options for eg cowsay: 'apt-get source
> cowsay' then go to the debian dir that has been created
 _______________________________________
/ I enjoyed your cowsay reference. It is \
\ very popular on EFnet.                 /
 ---------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||








Neighbour table overflow problem

2003-03-07 Thread Jeff S Wheeler
Dear list,

I have a linux 2.4 box running zebra and acting as a default gateway for
a number of machines.  I am concerned about "Neighbour table overflow"
output in my dmesg.  From some articles I've read on usenet, this is
related to the arp table becoming full.  Most of the posters solved
their problems by configuring a previously unused loopback interface, or
realizing that they had a /8 configured on one IP interface and a router
on their subnet that was using proxy-arp to fulfill the arp requests.

Neither of those is my situation, though.  I simply have a lot of hosts
on the segment.  When the network is busy I've seen as many as 230+ arp
entries, but it never seems to break 256.  Is this an artificial limit
on the number of entries that can be present in my arp table?  If so, I
would like to increase the limit to 2048 or so and give myself some
headroom.  I am concerned that might slow down packet forwarding, but I
can probably live with that.
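For anyone hitting this later: on Linux 2.4 the neighbour cache is bounded by three sysctls, gc_thresh1/2/3 (defaults 128/512/1024 on most kernels), and the "Neighbour table overflow" message fires once the table cannot grow past gc_thresh3. Raising them looks like the sketch below; the values are illustrative headroom, not tuned recommendations:

```shell
# Raise the ARP/neighbour cache thresholds (Linux 2.4, run as root).
# gc_thresh1: below this, no garbage collection happens
# gc_thresh2: soft limit; entries above it are reclaimed aggressively
# gc_thresh3: hard limit; "Neighbour table overflow" beyond this
echo 512  > /proc/sys/net/ipv4/neigh/default/gc_thresh1
echo 2048 > /proc/sys/net/ipv4/neigh/default/gc_thresh2
echo 4096 > /proc/sys/net/ipv4/neigh/default/gc_thresh3
```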

Has anyone on the list encountered similar problems?  If so, is this the
approach you took to solve them or did you do something else?

Thanks,

--
Jeff S Wheeler <[EMAIL PROTECTED]>

[EMAIL PROTECTED] uname -a
Linux mr0 2.4.20 #1 Mon Dec 16 14:13:15 CST 2002 i686 unknown
[EMAIL PROTECTED] arp -an |wc -l
239





Re: BGP memory/cpu req

2003-03-11 Thread Jeff S Wheeler
On Tue, 2003-03-11 at 05:58, Teun Vink wrote:
> Check out the Zebra mailinglist, it has been discussed there over and
> over. Basically, a full routing table would require 512Mb at least. CPU
> isn't that much of an issue, any 'normal' CPU (P3) would do...
512MB is more than enough for zebra.  I would be comfortable running
zebra on as little as 256MB of memory.  If you want to use the box for
other tasks like squid, etc. you might become constrained.  We use ours
for some RRD polling.

I have a box with two full sessions of about 120k prefixes from transit
providers, both sessions with soft-reconfig enabled.  CPU is never an
issue, as far more CPU time is spent handling ethernet card Rx
interrupts than BGP or OSPF updates.  The box forwards an average of
11kpps and 60Mbit/sec, peaks around 16kpps and 90Mbit/sec.

[EMAIL PROTECTED]:~# ps u `cat /var/run/zebra.pid` `cat /var/run/bgpd.pid` `cat /var/run/ospfd.pid`
USER   PID %CPU %MEM   VSZ   RSS TTY  STAT START   TIME COMMAND
root   197  0.1  2.4 25916 24872 ?    S     2002 143:03 /usr/local/sbin/zebra -d -f/etc/zebra/zebra.conf
root   200  0.6  6.2 65756 64960 ?    S     2002 786:20 /usr/local/sbin/bgpd -d -f/etc/zebra/bgpd.conf
root   828  0.0  0.1  2292  1204 ?    S     2002  20:06 /usr/local/sbin/ospfd -d -f/etc/zebra/ospfd.conf
[EMAIL PROTECTED]:~# free
             total       used       free     shared    buffers     cached
Mem:       1033380     864148     169232          0      21348     622244
-/+ buffers/cache:      220556     812824
Swap:       497972          0     497972
[EMAIL PROTECTED]:~# 

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>




off-topic

2001-02-26 Thread Jeff S Wheeler
I'm guessing he would have to compile something in order to apply that
patch.

- jsw


-Original Message-
From: Robert Davidson [mailto:[EMAIL PROTECTED] Behalf Of
Robert Davidson
Sent: Monday, February 26, 2001 6:09 PM
To: Michelle Konzack
Cc: debian-isp
Subject: Re: isdn4linux


On Sat, Feb 24, 2001 at 11:17:00AM +0200, Michelle Konzack wrote:
> Hello,
>
> because my Linux-Workstation has broken memory I can not
> compile anything on it.
>
> Please does anyone have a compiled (on Debian 2.1, 2.0.36)
> isdn4linux and can send it to me ???

There's a kernel patch around that lets you use dodgy ram in linux
boxes somewhere.. if you could patch a kernel (on another pc) and then
transfer it to your linux box with dodgy ram and tell the kernel where
the dodgy ram is, you might be able to get all the functionality of
your linux box back..

Personally I think: bad ram = asking for trouble, but if you apply the
patch I'm talking about to a kernel, not sure if they have it for 2.0
or just 2.2, but it could be worth looking into.

The other thing you could do is, say you have a 64mb ram chip in your
pc, and the chip where say 35mb sits is stuffed, you could just tell
lilo (or whatever you use to boot with) to pass "mem=32M" to the kernel
so it won't use that dodgy bit of ram (just the first 32mb of good
ram).

Regards,
Robert Davidson.






RE: Sybase ASE 11.0.3.3

2001-03-13 Thread Jeff S Wheeler
I would guess that their intention is to discourage folks from running it on
big iron Sun / IBM boxes that have the ability to run linux or linux
applications on top of another OS.  I imagine they want you to pay them for
that. :)

- jsw


-Original Message-
From: Przemyslaw Wegrzyn [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 13, 2001 3:12 PM
To: debian-isp@lists.debian.org
Subject: Sybase ASE 11.0.3.3



Has anyone here experience with Sybase ASE 11.0.3.3 on Linux ?

I'm wondering if it is a good choice for a production environment... It has
a little strange license:

"You are allowed to install and use the Software for free as long as you
operate the Software at all times only with the Open Source - Linux and
various BSD - operating systems running natively on your hardware
system. "

What does it mean _operate_the_Software_ ? ;)
What about multi-tier applications, when connections to Sybase are made by
linux-based-middle-tier only ? Is it allowed ?

TIA

PS - I know I should post this question directly to Sybase, but I'd like
to know about your experiences with it...






RE: arpwatch and more

2001-03-19 Thread Jeff S Wheeler
Those quad ethernet cards support one MAC address per PHY, or they can
operate as a Cisco EtherChannel or probably other similar technologies used
to bond ethernet links, depending upon how you configure it and your switch.

I sent the below message to someone else on the list in private, thinking
that they might benefit from some further explaination, but also thinking
that most people subscribed to this list would have a solid understanding of
how modern ethernets work, and thus would not benefit from the post.
Obviously I was wrong; there appear to be lots of people on this list who
don't grok ethernet, so below is that message for the benefit of everyone.

-Original Message-
From: Jeff S Wheeler [mailto:[EMAIL PROTECTED]
Sent: Friday, March 16, 2001 11:44 PM
To: Mike Fedyk
Subject: RE: arpwatch and more


An ethernet switch won't send frames to "multiple ports".  Ethernet switches
can broadcast, they can unicast, and some new layer3 switches can multicast
IP "efficiently", but if your switch sees the same MAC address on several
interfaces, one of them is going to get blocked (if you have spantree), or
the switch will just learn the new interface, and frames would go to the
wrong interface, but not to both.

- jsw


-Original Message-
From: Tim Kent [mailto:[EMAIL PROTECTED]
Sent: Monday, March 19, 2001 12:50 AM
To: debian-isp@lists.debian.org
Subject: Re: arpwatch and more


I guess that means you have to keep those quad Ethernet Sun cards away.

Tim.

- Original Message -
From: "Marc Haber" <[EMAIL PROTECTED]>
To: 
Sent: Saturday, March 17, 2001 7:50 PM
Subject: Re: arpwatch and more


> On Fri, 16 Mar 2001 13:05:06 -0800, Mike Fedyk <[EMAIL PROTECTED]>
> wrote:
> >On Fri, Mar 16, 2001 at 09:24:56PM +0100, Marc Haber wrote:
> >> Please be aware, though, that the MAC address is trivial to forge
> >> nowadays.
> >Hmm, how does a switch deal with the same mac address coming from two
ports
> >at the same time?
>
> It will probably flap. MAC address forging will only work if the host
> that owns the forged MAC is switched off or disabled in some other
> way.







RE: Auto 10/100Mb card fallback from 100 to 10 on 100Mb network

2001-04-15 Thread Jeff S Wheeler
I suggest you spend 39$ on an Intel eepro100 (z-buy.com, think shipping on
those cards is free) and give it a try.  I'm betting it will also fail to
work in this machine, and you'll discover the problem is physical plant
related, but perhaps not.  Either way you've tried most other things, may as
well shell out a few dollars.  After all, it sounds like you've put at least
39$ worth of your time into this, and you haven't been able to eliminate the
problem yet.

- jsw


-Original Message-
From: Jason Lim [mailto:[EMAIL PROTECTED]
Sent: Sunday, April 15, 2001 4:25 PM
To: debian-isp@lists.debian.org
Subject: Re: Auto 10/100Mb card fallback from 100 to 10 on 100Mb network


Well, here is the stupid and interesting thing.

I've got more than 5 boxes here, all using the SAME realtek cards. Now if
it was some negotiation error between card and switch, then it, in theory,
should occur with all of them. Also, I mentioned that I had SWITCH cards
around, so a functioning card from a "stable debian" box was put into the
"unstable debian" box, and the same fallback to 10Mb occurred.

So I'm pretty sure that its a software issue. I was wondering if there
could be anything... ANYTHING software related that could force the card
into a fallback like that. Keep in mind that during bootup, and before
Debian loads up the network code, it is still in 100Mb mode. It ISN'T a
kernel problem either. The reason I say that is because if it was, then
when kernel loads the RT8139 (yes, i finally checked ;-)   ) the fallback
should occur. However, it is STILL in 100Mb mode when it detects the
Realtek cards (2 of them, and yes, i've tried booting with only 1, and
switch those two around, put them in different PCI slots, etc.).

Could it, by some chance, maybe be the cards going into promiscuous mode
causing the card to fall back to 10Mb? I've never seen it happen... but
maybe?

Heres an interesting thing I tried. The cards came with a software disk
(needed msdos to boot) that allowed me to switch between 10Mb, 100Mb, and
Auto-neg. The cards are set by default to Auto-neg, as they should be. I
FORCED it to 100Mb to see what would happen. Sure enough, I successfully
got it to stay at 100Mb, but then the switch automatically disabled the
port after around 5-10 minutes, and said it shut the port down due to
"conflict". Thats it. Thats all it said (oh how helpful). This is a Cisco
switch btw.

Please... ANY suggestions and help would be greatly appreciated.

Sincerely,
Jason Lim

- Original Message -
From: "Jeff Waugh" <[EMAIL PROTECTED]>
To: "Jason Lim" <[EMAIL PROTECTED]>
Cc: "Pierfrancesco Caci" <[EMAIL PROTECTED]>; 
Sent: Sunday, April 15, 2001 2:56 PM
Subject: Re: Auto 10/100Mb Negotiation falling back to 10 on 100 network


> 
>
> > These are cheap REALTEK 1039? 3039? Can't remember exactly. The ending
is
> > 39... i know that for sure (because i also know they have 19, 29, and
39
> > afaik).
> >
> > I still haven't been able to solve. I've upgraded to the latest of
every
> > package related to networking, to no avail.
>
> Cheap and dirty cards... Not that I don't use them. :)
>
> Sounds like autoconfiguration issues between the cards and switch.
>
> - Jeff
>
> --
>   You'll see what I mean.
>






RE: Auto 10/100Mb card fallback from 100 to 10 on 100Mb network

2001-04-16 Thread Jeff S Wheeler
Good point, seems like this might be worth a try:

switch0#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
switch0(config)#int fa0/5
switch0(config-if)#speed 100
switch0(config-if)#^Z

- jsw


-Original Message-
From: Nate Duehr [mailto:[EMAIL PROTECTED] Behalf Of Nate
Duehr
Sent: Tuesday, April 17, 2001 12:24 AM
To: Jeff S Wheeler
Cc: Jason Lim; debian-isp@lists.debian.org
Subject: Re: Auto 10/100Mb card fallback from 100 to 10 on 100Mb network


On Sun, Apr 15, 2001 at 05:09:55PM -0400, Jeff S Wheeler wrote:
> I suggest you spend 39$ on an Intel eepro100 (z-buy.com, think shipping on
> those cards is free) and give it a try.  I'm betting it will also fail to
> work in this machine, and you'll discover the problem is physical plant
> related, but perhaps not.  Either way you've tried most other things, may
as
> well shell out a few dollars.  After all, it sounds like you've put at
least
> 39$ worth of your time into this, and you haven't been able to eliminate
the
> problem yet.

In my experience, the Intel cards also have problems auto-negotiating
with some Cisco switches.  FYI.

At work, we simply set the Cisco's to whatever we want them to come up
at and the Intel's follow suit just fine.

--
Nate Duehr <[EMAIL PROTECTED]>

GPG Key fingerprint = DCAF 2B9D CC9B 96FA 7A6D AAF4 2D61 77C5 7ECE C1D2
Public Key available upon request, or at wwwkeys.pgp.net and others.






RE: Machine Registration

2001-04-22 Thread Jeff S Wheeler
I stayed in the Philadelphia Central City Marriott a little while ago, and
they had a great third-party provided product called STSN(?).  It was a
little box with an ethernet port that worked instantly with no difficulties.
It could assign settings to you based on DHCP if your laptop required that,
but if you used a static IP and already had a default gateway configured it
would simply operate promiscuously and, I assume, rewrite the destination MAC
address of all ports you transmitted on the segment to the MAC address of a
router someplace in the hotel, and forward the packets to the segment that
router lived on.

I was skeptical when I read the instructions that claimed no setup was
required, but it worked fine with both my laptop, setup for a static IP and
a default gateway IP that was not on the segment, as well as my girlfriend's
laptop, which was setup for DHCP.


The Cisco technology I think some posters are reaching for is called a
"Private LAN", by the way.  You can read about it on CCO.  I don't know if
you could accomplish the same thing the Marriott's black box did, but given
the layer3 features and private lan technology on the Catalyst 6000/6500
series routers I suspect you could come up with something both workable and
secure.

If you read up on Private LANs and don't grok it, I could provide an
explanation.  The clue level on this thread has been higher than most on
this list (thankfully, stuff like this is why I remain subscribed) but this
is an advanced networking topic most people have no experience with.

- jsw



-Original Message-
From: Mike Fedyk [mailto:[EMAIL PROTECTED] Behalf Of Mike Fedyk
Sent: Sunday, April 22, 2001 4:22 AM
To: debian-isp@lists.debian.org
Subject: Re: Machine Registration


On Fri, Apr 20, 2001 at 07:28:54PM -0700, Ted Deppner wrote:
> Hubs (shudder) and switches (even most Cisco stuff) would allow snooping,
> break two 10.0.0.1 customers from working, broadcast collisions, gateway
> and next hop collisions, etc...
>
> The concept is the customer is directly connected to an individual
port[1],
> capable of Gateway discovery, providing itself as the next hop gateway,
> local DHCP assignment (or relay to a DHCP server higher up), and 1:1
> NAT...  all on a per port basis.
>
Hmm, have you worked in this area, or are you just speculating?  I don't
know, but do many laptop users use static addresses?  I realize the want to
have a setup be all things for everyone, but do you really think you're
going to have one switch port for each and every room?

I guess that security could be one of the advertised features of your
rooms It really depends on what the hotel wants.

Mike






Machine Registration

2001-04-22 Thread Jeff S Wheeler
s/ports/packets/; or perhaps "ethernet frames" is the better term, given
the potential to confuse IP packets and ethernet packets, or frames.

- jsw


-Original Message-----
From: Jeff S Wheeler [mailto:[EMAIL PROTECTED]
Sent: Sunday, April 22, 2001 10:30 PM
To: debian-isp@lists.debian.org
Subject: RE: Machine Registration

<>
address of all ports you transmitted on the segment to the MAC address of a
router someplace in the hotel, and forwarded the packets to the segment that
router lived on.

<>




RE: forgot manufacturer name of serial ethernet devices

2001-04-25 Thread Jeff S Wheeler
Computone makes several products that might suit your need, and their boxes
range in configuration from a fixed 16 port configuration to their
PowerRack, which has been renamed to Something2000.  It'll support 64 ports
and has various marketspeak things.  You can also load a handy-dandy linux
kernel driver and access the ports on the box via /dev/ character device
entries, or you can connect to its serial ports via tcp/ip.  In addition it
supports PPP/SLIP, and you can buy RS232 or 432? cards, allowing you to use
them in applications where your serial cable runs are very long.  Wal-Mart
uses these to provide serial connectivity to their cash registers, UPC code
scanners, etc.

- jsw


-Original Message-
From: Robert L. Yelvington [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 25, 2001 5:59 PM
To: debian-isp
Cc: Debian Users
Subject: forgot manufacturer name of serial ethernet devices


a couple of months ago i read an article in some trade rag about a
serial device that was networkable via ethernet?  forgot the name of the
product and the company, imagine that!  basically, it's a serial hub
with an ethernet port.

if anyone knows what i am talking about would you mind passing along a
URL, a name, something?

AND

if anyone is using a device like this...whatcha think?

kind regards,
rob






RE: Webalizer and net-acct differences

2001-05-08 Thread Jeff S Wheeler
The header size is not so fixed, actually.  If you use cookies on your site
the client will send them to you upon each request.  You might have CGIs and
such that update cookies frequently as well, which would reduce your
efficiency yet more.  There are a lot of factors here, but the real issue is
that your customers are going to expect to be billed by what access.log
analysis tools compute, because that is all they can use to attempt to audit
your billing mechanism, and that is what other service providers will use.
From the customer perspective, if you want to bill based on IP traffic and
not what webalizer/etc reports, you and your customer should both understand
the differences.
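To see why a ~85% payload-to-wire ratio is plausible, compare the logged body bytes against a rough estimate of everything else on the wire. All sizes below are illustrative assumptions (header and cookie weight vary widely), not figures from this thread:

```shell
# Rough ratio of HTTP payload (what webalizer logs) to on-the-wire bytes.
awk 'BEGIN {
    payload = 10240              # response body bytes, as logged
    http    = 300 + 500          # response + request headers (cookies add up)
    pkts    = int(payload / 1460) + 4   # data packets plus handshake/teardown
    tcpip   = pkts * 40          # 20 B IP + 20 B TCP per packet
    wire    = payload + http + tcpip
    printf "payload/wire = %.0f%%\n", 100 * payload / wire
}'
# prints: payload/wire = 89%
```

Smaller responses push the ratio down quickly, which matches the ~60% figure mentioned in the quoted message for sites with lots of small replies.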

- jsw


-Original Message-
From: Nicolas Bougues [mailto:[EMAIL PROTECTED]
Behalf Of Nicolas Bougues
Sent: Tuesday, May 08, 2001 4:56 PM
To: Andreas Rabus
Cc: 'Russell Coker'; Debian ISP List (E-Mail)
Subject: Re: Webalizer and net-acct differences


> Back to questioning:
> recently i did some calculation and find out that webalizer results are
> about about 85% of the net-acct results.
> Ist that an realistic overhead form http-headers, ICMP (on or to port
80?),
> and TCP/IP frame info, etc.?

Yes. But it depends upon the kind of data served. The header size is
quite fixed, but the payload size may vary. A site with lots of small
replies will have a percentage more like 60%.

Furthermore, apache doesn't take into account *incoming* traffic,
whereas your hosting provider probably does (ie counts in both
directions). There can be great differences here if you do a lot of
"posting" (like posting big files, for instance).

>
> PS: we pay for the traffic "on the cable" and webalizer only gets the
> "pay-load" from http.
>

Then use net-acct to get the real values. Unfortunately, there's no way
to figure out the data for various virtual servers which share the
same IP.

--
Nicolas BOUGUES
Axialys Interactive






RE: Multiple DSLs, and switching incoming route upon failure?

2001-05-25 Thread Jeff S Wheeler
Are your DSL uplinks from different ISPs, or from the same IP provider?  If
they are differing providers, there is no way you can feasibly implement
BGP.  If they are redundant paths to the same ISP you could ask them to
issue you a reserved ASN (65512 - 65535) and announce your /28 into their
network via ebgp sessions.  That makes a lot of assumptions about software
support on your router(s), and of their willingness to accommodate you, of
course.
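Concretely, the private-ASN arrangement described above looks something like this in zebra's bgpd configuration; the ASN, prefix, and neighbor address are made-up placeholders:

```
router bgp 65512
 network 192.0.2.16/28
 neighbor 198.51.100.1 remote-as 701
 neighbor 198.51.100.1 description upstream ebgp session
```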

Realistically, you aren't going to make this happen.  Perhaps you could
participate in something like the 6BONE, or simply colocate your obviously
mission-critical services at your ISP.

- jsw


-Original Message-
From: Mike Fedyk [mailto:[EMAIL PROTECTED] Behalf Of Mike Fedyk
Sent: Friday, May 25, 2001 9:22 PM
To: debian-isp@lists.debian.org
Cc: debian-firewall@lists.debian.org
Subject: Multiple DSLs, and switching incoming route upon failure?


Hi,

I don't believe I'm subscribed to this list, so please cc me also. (I'm on
so many debian lists, and I put all of the low traffic ones in one
folder...)

I already have multiple DSL links to the Internet, but I haven't done
anything more as far as incoming connections besides SMTP and a couple
others for remote workers.

The problem now is I want to put a FTP and DNS server up.  These by them
selves aren't a problem, but sometimes one of the DSLs will go down.

I'd only qualify for a /28 block of IPs, is there any way I can get bgp
routing at my shop?  I'm willing to read all the info I need, and have an
interest in this area anyway...

This message isn't meant to start a flame war about DSL reliability, as even
with fiber it is recommended to multi-home.

DNS round-robin will do 80% of the job, but there will be intermittent
access
when one of the links goes down.

I've considered getting an account on a remote server, and just forward the
connections here, but that defeats the whole purpose of having the server
local.

Is there anything I'm missing?

TIA,

Mike






RE: Multiple DSLs, and switching incoming route upon failure?

2001-05-26 Thread Jeff S Wheeler
Customers who purchase T1/T3 service generate more revenue for the ISP, and
although the difference may not justify the administrative overhead of
adding a BGP customer, most do not request this.  Some organizations (BEST
Internet, before Verio gobbled them up, for example) charge an additional
fee for BGP.  They charged 500$/Mo.

Address space is also an issue.  You cannot announce blocks smaller than /24
into global BGP and expect the results you want.  Some networks are still
filtering announcements smaller than /19 within some ranges, SprintLink for
example, as they took steps years ago to counteract routing table growth,
and this remains a problem even as routers become more powerful and memory
gets cheaper.

I do not know how the 6BONE scenario would work.  It was a shot from the
hip, I'm sure you could do some research in this area, or perhaps someone
else subscribed to the list can tell us how the 6BONE interoperates with the
current IPv4.

If you had a colocated server on a reliable IP connection you could VPN
yourself a subnet from it over either of your two DSL routes.  This might be
sane but would cause you to incur a lot of bandwidth bills. :-)

- jsw


-Original Message-
From: Mike Fedyk [mailto:[EMAIL PROTECTED] Behalf Of Mike Fedyk
Sent: Saturday, May 26, 2001 4:35 PM
To: Jeff S Wheeler
Cc: debian-isp@lists.debian.org; debian-firewall@lists.debian.org
Subject: Re: Multiple DSLs, and switching incoming route upon failure?


On Fri, May 25, 2001 at 11:29:46PM -0400, Jeff S Wheeler wrote:
> Are your DSL uplinks from different ISPs, or from the same IP provider?
If

They are different providers.

DSL 1 is 384k/1.5m adsl at pacbell

dsl2 is 768k sdsl landmark (lmki)

> they are differing providers, there is no way you can feasably implement
> BGP.  If they are redundant paths to the same ISP you could ask them to

What do t1 and t3 customers do?  Is the only criteria for "feasibility" a
need for more IPs?

> issue you a reserved ASN (65512 - 65535) and announce your /28 into their
> network via ebgp sessions.  That makes a lot of assumptions about software
> support on your router(s), and of their willingness to accomodate you, of
> course.

I could get a second link to pacbell, but sometimes their entire network
gets unstable, and I would still need a second provider.  Doing the same
with the other provider would require four links, and still wouldn't fix the
problem if one ISP crashing completely.

>
> Realistically, you aren't going to make this happen.  Perhaps you could
> participate in something like the 6BONE, or simply colocate your obviously
> mission-critical services at your ISP.
>

Hmm, I wonder how exactly this would work with the 6BONE.  Can you get
traffic into the 6BONE from the "normal" IPv4 internet?  How would I be
addressed?

I probably wouldn't choose my ISP then; I'd choose a company that connects
to several ISPs, and that'll be more expensive. :(

> - jsw
>
>

Mike


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]




RE: kernel 2.2.19 limitations.

2001-05-27 Thread Jeff S Wheeler
I have a 2.4.4 machine with two Pentium III 833MHz CPUs and an AcceleRAID
170/64MB with a pair of IBM Ultra160 disks on it, all on a Tyan Thunder 2500
LE3(?) motherboard.  It has on-board SCSI as well, symbios 53c1010 chipset,
and everything works okay.  I can't get the controller's cache to read as
enabled though, damned if I know why.

But as far as stability is concerned the machine works like a charm, and
mysql likes it well enough to do 7200 UPDATEs/second on it during my batch
jobs. :-)

- jsw


-Original Message-
From: Przemyslaw Wegrzyn [mailto:[EMAIL PROTECTED]
Sent: Sunday, May 27, 2001 6:50 PM
To: Peter Billson
Cc: debian-isp@lists.debian.org; recipient list not shown:
Subject: Re: kernel 2.2.19 limitations.




On Sun, 27 May 2001, Peter Billson wrote:

> Whoops... helps if I post *to* the list too!
>
> > Yes the limit is still the usual 2Gb. The limit is actually with ext2, i
> > believe, although I'm not sure.
>
>   The limit is in the kernel, not the ext2 file system, otherwise 2.4.x
> wouldn't be able to support >2Gb file either. There are patches about
> for adding LFS (large file system) support. I had compiled a 2.2.18
> kernel after patching with the LFS patch borrowed from RedHat's 6.2ee
> (Enterprise Edition) source.
>
>   But why not just run 2.4.x?

Hmm, I'm building a so-called "very important server".
I'm not sure 2.4.x is stable enough.
It will run Apache and a biiig PostgreSQL database, all on a big RAID.

Anyone has experience with 2.4.x + SMP (2 x PIII) + Mylex AcceleRAID ?

After reading some newsgroups, I believe 2.4.x kernels either work pretty
stably or do totally weird things, all depending on the hardware
configuration.  Well, it seems some drivers are not yet stable enough...

-=Czaj-nick=-








ccbill

2001-06-02 Thread Jeff S Wheeler
I hope this isn't considered off-topic.  :-)

Does anyone else on the list deal with, or have customers who use, ccbill?
Two of my customers have had negative experiences with them recently, one
related to their customer-side CGI script(s).  CCBill has not been
cooperative in providing me with any kind of documentation on their data
schema, but realistically both customers need to move away from CCBill's
script to something more robust.

Customer A has serious problems with people subscribing with "guessable"
passwords, or passwords that are published to password-trading websites
frequently.  They actually get visitors to their site that have found them
by typing "ccbill passwords" into search engines, and so forth.  They then
have the same 3 or 4 passwords being used from -hundreds- of different
domain names, most likely by hundreds or thousands of different people.  We
have started deleting the abused accounts but the real solution is to stop
allowing customers to choose their own (initial?) passwords.

Customer B has a larger problem.  She now believes CCBill has caused her
account username and password (which she had to share with CCBill so they
could set up their service) to become compromised.  It is possible her
suspicion is correct.  Has anyone else recently had customer accounts
become compromised after turning passwords over to CCBill?  I would think
more than one password would have been stolen from them, and thus this
would not remain an isolated incident.


Either way, CCBill has begun to genuinely scare me.  These folks deal with,
on a daily basis, thousands of peoples' credit card numbers and other
individualized non-public information, and from my dealings with them over
the past week and a half, they are grossly underqualified to do so.  Does
anyone else use CCBill, and if so have you had differing experiences?  How
about with other companies that provide similar products?

---
Jeff S Wheeler [EMAIL PROTECTED]
Software Development Five Elements, Inc.
http://www.five-elements.com/~jsw/   502-339-3527 Office




colocation space/inexpensive bandwidth

2001-06-07 Thread Jeff S Wheeler
Since we're on the topic of colocation space this morning, I thought I would
post and ask if anyone has colocation cabinet space available at a Level(3)
or similar facility.  Currently we colocate with a small ISP and are very
happy with their service, but we would like to be able to offer better
pricing to our customers.  However, when last I spoke with L(3) sales folks,
their cabinets were 1000$/mo, with an added 2000$/mo minimum expenditure on
bandwidth/private line services per cabinet.  This is too much for us, but
if someone has a few U available someplace that prices bandwidth in the
400$/Mbit or less range, I would be very interested in consuming that extra
space (for a fee, of course) and taking advantage of your bandwidth pricing.
:-)

Anyone near Minneapolis/Chicago/Louisville/Cincinnati would be preferable as
well, as currently we maintain our own equipment, and have no need to trust
a pair of "remote hands" to know what they are doing.  In those areas we can
continue to have this benefit.

- jsw




What is the DUL?

2001-06-07 Thread Jeff S Wheeler
What is the DUL?

- jsw


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Doug Alcorn
Sent: Thursday, June 07, 2001 11:28 PM
To: Debian ISP List
Subject: Re: Have you been hacked by f*ck PoizonBOx?


Michelle Konzack <[EMAIL PROTECTED]> writes:

> America Online, Inc. (NETBLK-AOL-172BLK) AOL-172BLK   172.128.0.0 -
> 172.191.255.255

are you implying that this block is all dial-up addresses that aren't
in the DUL?
--
 (__) Doug Alcorn (mailto:[EMAIL PROTECTED] http://www.lathi.net)
 oo / PGP 02B3 1E26 BCF2 9AAF 93F1  61D7 450C B264 3E63 D543
 |_/  If you're a capitalist and you have the best goods and they're
  free, you don't have to proselytize, you just have to wait.






privileges problem

2001-06-24 Thread Jeff S Wheeler
Also, the stock 2.4.x kernel limits supplementary groups to 32.  There
would be a per-process penalty for increasing that limit.  You could patch
apache to include the supplemental groups when it forks children (if it does
not do this already..), but overall that is a bad solution.

See NGROUPS in include/linux/limits.h and other lines containing NGROUPS /
NGROUPS_MAX in the source if you want to go ahead with your idea.
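A quick way to check the effective limit on a running system (a sketch; the
reported value depends on how the kernel was built, so treat the numbers
below as examples):

```shell
# NGROUPS_MAX as seen through the C library / kernel headers:
getconf NGROUPS_MAX

# Supplementary group IDs the current process actually carries:
id -G
```

If `id -G` for the httpd user already prints a few dozen groups, you are
close to the stock limit and the per-group scheme will not scale.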


If your users' data really can't be world-readable, your remaining option is
to run separate httpd instances for customers with these large privacy
concerns.  Note that most of the time, though, your customers just don't
want people copying their whole directory structures and stealing content
wholesale.  This can be accomplished by other means anyway, but you can give
your customers some comfort by simply instructing them to set all their
directories with permissions o-r.

Note that CGIs/SSIs will be a security concern for you.  If you rely on
file permissions as your privacy/security mechanism, you had better use
suEXEC or something similar so that customers cannot execute their CGI
programs as the user/group Apache's children run as...

- jsw


-Original Message-
From: Russell Coker [mailto:[EMAIL PROTECTED]
Sent: Sunday, June 24, 2001 5:02 AM
To: :yegon; debian-isp@lists.debian.org
Subject: Re: privileges problem


On Saturday 23 June 2001 14:40, :yegon wrote:
> while configuring dynamic virtual hosting (with mod_vhost_alias) on a
> new server i ran into this problem
>
> i create a new group named g(username) for each new virtual web, I set
> all user files to chmod 640 to avoid them to be read by another user
>
> my apache server runs as www-data so i need to add user www-data to
> each virtual web group to be able to serve its documents

Supplementary groups are only read by login, su, and other programs that
change UID etc.  They can only be changed by a root process so once the
program is running as UID != 0 it can't be changed.

> this all works fine but
> when I create a new virtual web, that means a new group, user and home
> directory and try to access its documents via http I get this error in
> the apache error.log
>
> is there a way to somehow refresh this info for the running process
> without restarting it?

No.

> do you have another suggestion?

Why do you need to have a separate GID for each web space?  Why not just
have the files owned by the GID for Apache and the UID for the user?

Another solution would be to make all the files owned by the UID of
Apache and the GID of the user and mode 660...

--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page






RE: Multiple DSLs, and switching incoming route upon failure?

2001-06-26 Thread Jeff S Wheeler
Quite frankly, it's dumb as hell to try to half-ass a redundancy solution
when you evidently need as close to 100% uptime as you can get.  You need to
either spend the bucks on leased lines from tier-1 carriers and run BGP
(contracting with someone for assistance if you don't have the know-how
yet), or preferably you should colocate with a real datacenter and hope they
don't go out of business.

- jsw




RE: users bypassing shaper limitation

2001-07-01 Thread Jeff S Wheeler
I have been reading this thread and noticed no one has suggested the MAC
address filtering capabilities in Linux 2.4's new ip tables subsystem.  I
hear there are serious problems with using 2.4.x series kernels as a
firewall, though; what are they?
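For concreteness, the MAC match I have in mind would look roughly like this
(a hedged sketch, not a complete firewall; the interface name, MAC address,
and IP are invented examples, and it assumes the 2.4 mac match module is
available):

```shell
# Forward traffic only for a "registered" MAC/IP pair on the inside
# interface (eth1 here), and drop anything else arriving on it.
iptables -A FORWARD -i eth1 -s 10.0.0.5/32 \
         -m mac --mac-source 00:50:56:AB:CD:EF -j ACCEPT
iptables -A FORWARD -i eth1 -j DROP
```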

- jsw


-Original Message-
From: Gerard MacNeil [mailto:[EMAIL PROTECTED]
Sent: Sunday, July 01, 2001 7:46 AM
To: debian-isp@lists.debian.org
Subject: Re: users bypassing shaper limitation


On Sun, 1 Jul 2001 14:30:33 +0300, [EMAIL PROTECTED] (Sami Haahtinen)
wrote:

> On Sat, Jun 30, 2001 at 12:07:28PM +0100, Karl E. Jorgensen wrote:
> > Besides, the bad guys may choose not to use DHCP - this is
> > entirely up to the config on the client machines.
>
> but if you make dynamic firewall rules based on the leases file,
> blocking all outside traffic, it would be efficient enough.

Yes, do routing by host /32 rather than network /24.  Or you can subnet
depending on your hardware configuration.

Gerard MacNeil
System Administrator






RE: users bypassing shaper limitation

2001-07-02 Thread Jeff S Wheeler
You fail to understand.  Drop traffic from any MAC/IP pair that isn't
"registered" with you, thus in your traffic shaper configuration.  Keeping
track of MAC addresses and where they're supposed to be on your network in a
campus environment is pretty standard.  I work on a University campus and
must notify the IT department anytime I want to add a host or move network
cards around.  If I do not, they will grumble and/or disable the ethernet
ports that unknown MAC addresses appear on.  In some areas (e.g. student
labs) they do that automatically so kids can't just bring their laptop in
and hop on napster at 100Mbit.

- jsw


-Original Message-
From: Gerard MacNeil [mailto:[EMAIL PROTECTED]
Sent: Monday, July 02, 2001 5:39 AM
To: debian-isp@lists.debian.org
Subject: Re: users bypassing shaper limitation


On Sun, 1 Jul 2001 15:59:34 -0400, "Jeff S Wheeler" <[EMAIL PROTECTED]>
wrote:

> I have been reading this thread and noticed no one has suggested the MAC
> address filtering capabilities in Linux 2.4's new ip tables subsystem.

There is no requirement to run 2.4.x and iptables, nor iproute2, to
accomplish the policy implementation that was specified.  The administrative
policy is bandwidth control over a defined set of IP addresses.  That policy
is being circumvented with the current configuration by the whizkids.  It is
up to the tech to implement a solution.

Besides, I'm sure I have a MAC address changer utility (or is that a feature
of iproute2) that I downloaded sometime in the past.  The same whizkids
would use it and circumvent the policy based on MAC addresses with it ...
although it would be a trickier thing to accomplish.  I think I have read on
some mailing list that it is quite a security issue with PPPoE and some
wireless connections.
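For what it's worth, with iproute2 the change is just a short command
sequence (a hedged sketch; the interface name and address are made up, and
it needs root on a real box):

```shell
# Spoof a different source MAC on eth0 (example interface/address).
ip link set dev eth0 down
ip link set dev eth0 address 00:11:22:33:44:55
ip link set dev eth0 up
```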

Gerard MacNeil
System Administrator






RE: users bypassing shaper limitation

2001-07-03 Thread Jeff S Wheeler
Your method would allow someone to attach their computer to the network,
certainly, but it would not allow them to bypass the traffic shaping
limitations configured for that host.  That is the goal of the original
poster, as I understand.

- jsw


-Original Message-
From: news [mailto:[EMAIL PROTECTED]] On Behalf Of Holger
Lubitz
Sent: Tuesday, July 03, 2001 9:08 AM
To: debian-isp@lists.debian.org
Subject: Re: users bypassing shaper limitation


Jeff S Wheeler proclaimed:
> cards around.  If I do not, they will grumble and/or disable the ethernet
> ports that unknown MAC addresses appear on.  In some areas (e.g. student
> labs) they do that automatically so kids can't just bring their laptop in
> and hop on napster at 100Mbit.

Easy. Disconnect any machine, set your MAC/IP-addresses to its
addresses, connect your laptop.
Don't know its addresses? Just sniff around on the port for a while, but
make sure you keep quiet.

Holger






RE: ATA Speed

2001-07-03 Thread Jeff S Wheeler
You can use the hdparm utility to discover what mode your disks are
operating in.  Notice the second-to-last line that begins with 'DMA modes:'.
The '*' next to udma4 indicates it is operating in that mode, which equates
to something commonly called ATA/66.  :-)

intrepid:/home/jsw# hdparm -i /dev/hdc

/dev/hdc:

 Model=Maxtor 96147U8, FwRev=BAC51KJ0, SerialNo=N8046RBC
 Config={ Fixed }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=57
 BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=120060864
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 *udma4
 Kernel Drive Geometry LogicalCHS=7473/255/63

- jsw


-Original Message-
From: R K [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 03, 2001 6:49 PM
To: debian-isp@lists.debian.org
Subject: ATA Speed


Does the following mean that Linux is only using my ide bus at ata33 speeds?
Or more accurately not using the full ata100 mode?

ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx

I've seen nothing from dmesg to indicate that it's doing otherwise.  Does it
configure it as 33 and then still use it to its full potential, or does it
impose restrictions on itself?  Even if this doesn't have anything to do
with it, how would I verify that Linux is using the hardware to its full
potential?

Thanks in advance




RE: hardware raid

2001-11-05 Thread Jeff S Wheeler
The 3ware cards work really well.  www.3ware.com and check out the Escalade
6200/6400? or 7xxx series if you have 64-bit PCI slots.

- jsw


-Original Message-
From: Andrew Kaplan [mailto:[EMAIL PROTECTED]
Sent: Monday, November 05, 2001 5:20 PM
To: Debian-Isp
Subject: hardware raid


I'm looking for a good hardware raid 1 (mirroring) solution for Debian. Will
the promise cards work with Debian or is there a better solution thanks.

Andrew P. Kaplan
Network Administrator
CyberShore, Inc.
http://www.cshore.com

"I couldn't give him advice in business and he couldn't give me
advice in technology." --Linus Torvalds, about why he wouldn't
be interested in meeting Bill Gates.






> -Original Message-
> From: Craigsc [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 05, 2001 4:17 AM
> To: Debian-Isp
> Subject: VIM
>
>
> H
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
> [EMAIL PROTECTED]
>
>
> ---
> Incoming mail is certified Virus Free.
> Checked by AVG anti-virus system (http://www.grisoft.com).
> Version: 6.0.286 / Virus Database: 152 - Release Date: 10/9/01
>
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.286 / Virus Database: 152 - Release Date: 10/9/01






RE: "Transparent" IDE RAID controller

2001-11-05 Thread Jeff S Wheeler
Yes, please visit 3ware's web site.  Their Escalade controller takes ATA/66
and ATA/100 disks, and provides a SCSI interface to the OS.  Drivers for
linux and various Windows platforms are available.  I've had good
experiences with their controllers and use them in production.

- jsw


-Original Message-
From: Jason Lim [mailto:[EMAIL PROTECTED]
Sent: Monday, November 05, 2001 6:28 PM
To: Debian-Isp
Subject: "Transparent" IDE RAID controller


Actually... come to think of it... I wonder if ANY RAID controller does
the following...

- appears to be just ONE hard disk (eg. hda) to the server
- actually has 2 or more hard disks connected to the RAID controller (but
only shows up as one to the OS)
- if in RAID1 mode (mirroring), if one disk fails, the controller
AUTOMATICALLY uses the remaining hard disk(s), and perhaps a LED could
light up, indicating a problem with a disk. Once a new disk is connected,
the RAID controller automatically rebuilds
- if in other modes, does 99% of operations by itself with no intervention
required by the OS (auto rebuilds, etc.) except manual things like
replacing a dead drive

This would mean the RAID controller is, more or less, OS independent, and
requires no OS level software to make it run, thus making it a
"transparent" RAID controller.

I've pondered this for a while, and I'm certainly no hardware RAID expert,
but it appears to be a workable and doable solution.

So, for example, if I mounted hda, the controller would transparently
activate both the drives (if you are running RAID1 with 2 disks).  A cp to
hda would tell the controller to do a normal cp to hda on the OS level,
but the "transparent" hardware raid controller would know that it is
running in raid1 mode and automatically cp the file(s) to both hard disks.
After cping the file to both hard disks, it would tell the OS, like a
regular hd controller, that it had finished the operation, and thus the OS
would not need to know that the file(s) were actually copied to 2
different hard disks.

If there is such a solution on the market... I haven't seen it. But
perhaps you could tell me WHY there is no such product when it seems like
it would solve many problems with software/hardware incompatibilities, and
would solve many admins' troubles?

Failing that... is there ANY product on the market that does plain
hardware level mirroring (for IDE)? What we do now is (essentially) cp hda
to hdb every 24 hours, so in the case of a major hd failure on hda, we
simply swap hdb over to hda and continue running (but with stuff that
could be up to 24 hours old). What would a solution be to make it so hdb
is never so out of date with hda, or perhaps even a LIVE copy (considering
the above proposed transparent hardware raid, and without causing massive
load during the day)?

I think this is something many admins have to consider... what is YOUR
solution to this?

Sincerely,
Jason

- Original Message -
From: "Jesse Molina" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "Debian-Isp" 
Sent: Tuesday, November 06, 2001 9:41 AM
Subject: RE: hardware raid


>
> If you are looking for Ultra 160 SCSI, the Mylex AcceleRAID 170 may be
> something that you want.  I recently purchased about 30 of these cards for
> a RAID 1 solution for some rack servers.  They work pretty well.  RAID0,
> RAID1, Spanning (JBOD), RAID5.  You can back up and restore the controller
> configuration to a floppy disk; the BIOS interface is fairly nice and
> simple.  Rebuilding takes a while, but no big deal.
>
> They also make an AcceleRAID 170LP, a low-profile PCI card.  Pretty neat.
>
> AMI recently sold all of their RAID card business to LSI Logic, making
> getting some of the AMI cards a bit difficult right now.  Otherwise, I
> would also recommend the AMI Express 500.
>
> If you are looking for IDE, I have no comment there.
>
>
>
> # Jesse Molina lanner, Snow
> # Network Engineer Maximum Charisma Studios Inc.
> # [EMAIL PROTECTED] 1.303.432.0286
> # end of sig
>
>
> > -Original Message-
> > From: Andrew Kaplan [mailto:[EMAIL PROTECTED]
> > Sent: Monday, November 05, 2001 3:20 PM
> > To: Debian-Isp
> > Subject: hardware raid
> >
> >
> > I'm looking for a good hardware raid 1 (mirroring) solution
> > for Debian. Will
> > the promise cards work with Debian or is there a better
> > solution thanks.
> >
> > Andrew P. Kaplan
> > Network Administrator
> > CyberShore, Inc.
> > http://www.cshore.com
> >
> > "I couldn't give him advice in business and he couldn't give me
> > advice in technology." --Linus Torvalds, about why he wouldn't
> > be interested in meeting Bill Gates.
> >
> >
> >
> >
> >
> >
> > > -Original Message-
> > > From: Craigsc [mailto:[EMAIL PROTECTED]
> > > Sent: Monday, November 05, 2001 4:17 AM
> > > To: Debian-Isp
> > > Subject: VIM
> > >
> > >
> > > H
> > >
> > >
> > > --
> > > To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> > > with a subject of "unsubscribe". Trouble? Contact
> > > [EMAIL PROTECTED]
> > >
> > >
> > > ---
> >

RE: tape drives

2001-11-07 Thread Jeff S Wheeler
You probably want to use the SCSI Tape driver for that.  As I understand,
pretty much all SCSI tape drives have a similar set of commands and
features.  Your Compaq EOD003 probably operates similarly to my HP 88980,
which is an ancient 9-track drive :-)

The mt(1) program can be used to position the tape, erase tapes, etc.  The
st driver lets programs like mt, or your choice of backup software, talk
to the drive.

-rw-r--r--1 root root33992 Jun  2 19:15
/lib/modules/2.4.5/kernel/drivers/scsi/st.o
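To illustrate, here is a hedged sketch of the weekly-full + daily-incremental
scheme with GNU tar.  A plain file stands in for the tape so the commands can
be tried without hardware, and all paths are invented; on a real drive you
would write to /dev/st0 (rewinds on close) or /dev/nst0 (doesn't):

```shell
# GNU tar incremental backups; a file stands in for the tape device.
TAPE=/tmp/fake-tape-full.tar
SNAP=/tmp/backup.snar                 # GNU tar's incremental state file

rm -rf /tmp/data "$SNAP" "$TAPE" "$TAPE.inc1"
mkdir -p /tmp/data && echo one > /tmp/data/file1
tar --listed-incremental="$SNAP" -cf "$TAPE" -C /tmp data       # level-0 full

echo two > /tmp/data/file2
tar --listed-incremental="$SNAP" -cf "$TAPE.inc1" -C /tmp data  # incremental

tar -tf "$TAPE.inc1"   # should list only data/ and the new data/file2

# On a real drive: mt -f /dev/nst0 status; mt -f /dev/nst0 rewind;
# mt -f /dev/nst0 fsf 1   # skip one file mark forward to the incremental
```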

- jsw


-Original Message-
From: David Bishop [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 07, 2001 5:41 PM
To: debian-isp@lists.debian.org
Subject: tape drives


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I just installed a SCSI tape drive (Compaq EOD003) and was wondering how I
can tell whether or not it is recognized.  I have never used tape drives
before (at least, ones that weren't already set up) and I don't know even
the first thing about them.  Searching for "scsi tape drive linux" via
Google didn't turn up much help; the HOWTO seems to be about 3 years old.

Once I figure out whether or not it is working, are there recommendations as
to what I should use for backup software?  It is a standalone machine, about
4 gig used, probably going to do once-weekly-full + daily incremental (or
something along those lines).

Thanks for any tips/hints/pointers.

- --
D.A.Bishop
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE76bh9EHLN/FXAbC0RAmrlAKDa1yfwzuSAbUqEy25eOy2fz3RSYQCg6Xnn
JWBqMngac2Oapzj1gEyKR7Q=
=yzlC
-END PGP SIGNATURE-






RE: Simple web log analysis for multiple sites?

2001-11-15 Thread Jeff S Wheeler
I'd also be interested to know what other folks are doing for this.  We use
webalizer, but we keep separate stats & reports for each web site.  I then
have a program that reads the webalizer.hist file for each site and updates
an SQL table with information for each site.  If someone needed more data
they could probably extract it from webalizer's HTML files, but using
webalizer.current is a bad idea since it destroys it at the end of each
month and starts over fresh.  I wish it preserved that data because there is
a lot of good stuff in there I might like to use.  Perhaps I'll contribute a
patch someday; but more likely we will just reinvent the wheel so we can get
over some other shortcomings of webalizer.

If this program would be useful to anyone else I could share it.  Below are
some records for one site we host.  The "ws" column is the web server on
which the site resides.  "ds" is the datestamp column, basically in mysql
the easiest way to do this was to use the first day of each month to
represent data for the whole month.  "complete" indicates if the data for
that month is either partial data, or if it has data for that month, as well
as for the following month.  While that doesn't mean there are no holes in
the data, it does at least give you an indication of if you should use it
for billing/etc yet :-)  ts is just a timestamp column.
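The extraction step can be sketched roughly like this.  Caveats: the
webalizer.hist column order (month year hits files sites kbytes first-day
last-day pages visits) is from memory, so check it against your webalizer
version; hist_to_sql is a hypothetical helper, and the column list just
mirrors our WlfMSum table:

```shell
# Read webalizer.hist on stdin, emit one SQL statement per monthly record.
# Assumed columns: month year hits files sites kbytes first last pages visits
hist_to_sql () {
  ws=$1; sn=$2
  awk -v ws="$ws" -v sn="$sn" '{
    ds = sprintf("%04d-%02d-01", $2, $1)
    printf "REPLACE INTO WlfMSum (ws,sn,ds,hits,files,sites,kbytes,pages,visits) VALUES (\"%s\",\"%s\",\"%s\",%d,%d,%d,%d,%d,%d);\n", ws, sn, ds, $3, $4, $5, $6, $9, $10
  }'
}

echo "10 2001 1133903 1012502 163933 30512375 1 31 988517 577638" \
  | hist_to_sql fire memepool.com
```

Piping the output into the mysql client then keeps the summary table in
sync with whatever webalizer last wrote.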

- jsw

mysql> SELECT * FROM WlfMSum WHERE ws="fire" AND sn="memepool.com" AND
ds="2001-10-01"\G
*************************** 1. row ***************************
      ws: fire
      sn: memepool.com
      ds: 2001-10-01
    hits: 1133903
   files: 1012502
   sites: 163933
  kbytes: 30512375
   pages: 988517
  visits: 577638
complete: 1
      ts: 20011101113012
1 row in set (0.04 sec)

mysql> SELECT * FROM WlfMSum WHERE ws="fire" AND sn="memepool.com";
+------+--------------+------------+---------+---------+--------+----------+--------+--------+----------+----------------+
| ws   | sn           | ds         | hits    | files   | sites  | kbytes   | pages  | visits | complete | ts             |
+------+--------------+------------+---------+---------+--------+----------+--------+--------+----------+----------------+
| fire | memepool.com | 2001-08-01 |  233195 |  210099 |  47498 |  7255226 | 214404 | 122863 |        1 | 20011101113012 |
| fire | memepool.com | 2001-09-01 |  931873 |  823837 | 147449 | 29061296 | 817547 | 485257 |        1 | 20011101113012 |
| fire | memepool.com | 2001-10-01 | 1133903 | 1012502 | 163933 | 30512375 | 988517 | 577638 |        1 | 20011101113012 |
+------+--------------+------------+---------+---------+--------+----------+--------+--------+----------+----------------+
4 rows in set (0.07 sec)


-Original Message-
From: John Ackermann N8UR [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 15, 2001 8:19 AM
To: debian-isp@lists.debian.org
Subject: Simple web log analysis for multiple sites?


Hi --

I'm looking for a program that will analyze the logs across the multiple
virtual sites that I run and provide summary-level info (e.g., number of
hits/bytes per site per day, with monthly summaries, etc).

I'm currently using a slightly hacked version of webstat with some shell
scripts that cat the various logfiles together, add an identifying tag,
sort the result, and feed it into the analyzer, but that really generates
more info than I need for top-level summary purposes and doesn't provide
easy per-site statistics.

Thanks for any suggestions.

John
[EMAIL PROTECTED]



John AckermannN8UR [EMAIL PROTECTED] http://www.febo.com
President, TAPR[EMAIL PROTECTED]http://www.tapr.org






RE: Strange apache behaviour?

2001-12-07 Thread Jeff S Wheeler
We do all our log processing as a user called "stats" on one of our
machines.  The root accounts on all our web servers have their ssh public
keys in the stats user's authorized_keys file, and they run a nifty log
rotation program that uploads the log data to the box we do all our log
analysis/etc on.  It also inserts some metadata about the logfile into an
sql database.  All that happens as user root.

Then, everything else happens as user stats on the one machine.  It uses
webalizer to process the files (we may change to something else when we have
spare time, webalizer is not great but the customers like the graphs/etc)
and then extracts summary data from each site's webalizer.hist file and
populates yet another sql table.

We have other log-related tools which do things like gzip the log files
after a configurable number of hours, delete them from the web servers
themselves (they have to remain there for a little while so they're
available to customers), etc.

We also use the summary data collected from webalizer.hist to bill our
customers for their traffic.  It's an incredible pain in the ass to go
through all your customer sites and figure out how much traffic they did in
a month, then do your own calculations to determine how much $$$ they owe
you based on "included traffic" plans, cost per mega/gigabyte, etc.  After a
few months of doing that, it got automated. :-)


Does anyone else on the list have similar billing / log processing systems
for their web hosting companies?  One of our vendors (!) asked us if we
would license our software to them, but we don't really have a refined
user-interface yet.  And as with any patchy, home-grown billing system it
often requires the care of a programmer to add features, etc.  We've thought
about a small monthly fee that would include any requested features/etc.
What do other folks do?

- jsw


-Original Message-
From: Jeremy Lunn [mailto:[EMAIL PROTECTED]
Sent: Friday, December 07, 2001 4:26 AM
To: James
Cc: Jason Lim; debian-isp@lists.debian.org
Subject: Re: Strange apache behaviour?


On Fri, Dec 07, 2001 at 09:41:00AM +0100, James wrote:
> It is usual to run webalizer as a user? I have never even thought of
> doing that. Is there any particular reason? (security?)

Generally it is a good idea to run everything you can as non-root.  Come
to think of it, I probably have webalizer running as root on a few
machines (whoops!).

--
Jeremy Lunn
Melbourne, Australia
Find me on Jabber today! Try my email address as my JID.






RE: System locks up with RealTek 8139 and kernel 2.2.20

2001-12-27 Thread Jeff S Wheeler
>>However, I have noticed something strange. I must keep "outbound" traffic
>>flowing or they forget their ARP table for some strange reason. I keep an

AFAIK that ethernet chipset is not particularly advanced.  ARP is not a
function of the card itself, nor the low-level driver.  ARP resolution works
via broadcasting queries for an IP address to all nodes on the network.
Then, the node that has that address is supposed to send back a unicast
response to the requesting node.  The requesting node then stores the MAC/IP
association in its ARP table.  Someone correct me if I missed anything
important or don't really understand this as well as I think.

If you are having arp table problems there must be something else not
working properly.

- jsw


-Original Message-
From: John Gonzalez, Tularosa Communications
[mailto:[EMAIL PROTECTED]
Sent: Thursday, December 27, 2001 11:53 AM
To: Olivier Poitrey
Cc: debian-isp@lists.debian.org; Antonio Rodriguez
Subject: Re: System locks up with RealTek 8139 and kernel 2.2.20


What causes the lockups? How often? I have an RTL 8139 in use.

However, I have noticed something strange. I must keep "outbound" traffic
flowing or they forget their ARP table for some strange reason. I keep an
outbound ping running... If i dont, and there is no network activity on
the box, it is unresponsive via network, but hopping on the console and
starting another ping session brings it back to life. (I also have to do
this on my machines with older RTL cards, using the ne2k-pci driver)

So far, uptime of box and kernel ver with RTL cards is:

Linux x 2.2.19 #22 Wed Jun 20 18:12:16 PDT 2001 i686 unknown
  9:42am  up 13 days,  6:57,  6 users,  load average: 0.00, 0.00, 0.00




--
John Gonzalez, Tularosa Communications | (505) 439-0200 work
JG6416, ASN 11711, [EMAIL PROTECTED]  | (505) 443-1228 fax
  http://www.tularosa.net

On Thu, 27 Dec 2001, Olivier Poitrey wrote:

> - Original Message -
> From: "Antonio Rodriguez" <[EMAIL PROTECTED]>
> To: 
> Sent: Friday, December 21, 2001 3:54 PM
> Subject: System locks up with RealTek 8139 and kernel 2.2.20
>
> > 2. move to 2.4 and hope this solves the problem. I have already gone
> > from 2.2.17 to 2.2.20 to try to fix it, but maybe the move to 2.4 will
> > be more significant.
>
> You'll have the same problem with 2.4.x, I have tested it for you :/



--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact
[EMAIL PROTECTED]




unstable is "unstable"; stable is "outdated"

2002-02-01 Thread Jeff S Wheeler
On Fri, 2002-02-01 at 01:42, Jason Lim wrote:
> We have production boxes running unstable with no problem. Needed to run
> unstable because only unstable had some new software, unavailable in
> stable. Its a pity stable gets so outdated all the time as compared to
> other distros like Redhat and Caldera (stable still on 2.2 kernel), but
> thats a topic for a separate discussion.

This is really a shame.  It's my biggest complaint with Debian by far. 
The tools work very well, but the release cycle is such that you can't
use a "stable" revision of the distribution and have modern packages
available.

I can't imagine this issue is being ignored; presumably it is discussed
on a policy list somewhere?  It seems like FreeBSD's -RELEASE, -STABLE,
-CURRENT scheme works much better than what Debian has.  I've never seen
big political arguments on this mailing list, but I always hear that
Debian as an organization is often too burdened with internal bickering
and politics to move forward with big changes.  Is that the case here?

Just curious, not trying to start a flame war.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





opinions on swap size and usage?

2002-02-12 Thread Jeff S Wheeler
For years I've been configuring my machines with "small" swap spaces, no
larger than 128MB in most cases, even though most of my systems have
512MB - 1GB of memory.  My desktop computer has zero swap, although I
have more ram than even X + gnome + mozilla + xemacs can use. :-)

I do this because I think if they need to swap that much, there is
probably Something Wrong, and all that disk access is just going to make
the machine unusable.  May as well let it grind to a halt quickly
rather than drag it out, I always said.

Alexis Bory's post earlier today made me think about swap a bit more
than I usually do.  What do other folks on this list do?  Zero swap?  As
much swap as physical memory?  More?  Why?  Can you change the swapper's
priority, and does this help when your machine starts swapping heavily?
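One related knob I do know of: when you run multiple swap areas, Linux
lets you assign each one a priority (higher is used first; equal
priorities stripe).  That is the swap areas' relative priority, not the
kernel swapper's scheduling priority.  Device names below are invented:

  swapon -p 5 /dev/hda2     # preferred swap area
  swapon -p 1 /dev/hdb2     # only used once hda2 fills

or persistently, in /etc/fstab:

  /dev/hda2  none  swap  sw,pri=5  0  0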

Thanks for the opinions.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





true x86 PCI bus speeds/specs

2002-02-23 Thread Jeff S Wheeler
Often, folks post on topics such as maximum network performance or disk
performance they should expect to see from their x86-based server,
firewall, etc.  And almost as often, some uninformed person posts a
reply that says something to the effect of, "Your PCI bus is only 66MHz,
which limits you to 66Mbit/sec", or something similar.  This is wrong.

The most common PCI bus is 32 bits wide, and operates at 33MHz.  Its
maximum throughput is thus 32*33/8 million bytes/second.  That's about
132MBytes/sec.  Some PCI buses are 64 bits wide at 33MHz, such as on
several popular Tyan Thunder models.  Those have a maximum throughput of
264MBytes/sec.  Other boards are 64 bits wide at 66MHz, which is limited
to 528MBytes/sec.  And numerous motherboard implementations have more
than one PCI bus, so you can put high-bandwidth peripherals on separate
buses so they neither compete for bus bandwidth nor substantially
impact each other's performance.

Now, all card/driver combinations have some overhead associated with
them.  The bus isn't 100% efficient, but on many "consumer-grade"
mainboards the 32 bit / 33MHz bus will push 110MBytes/sec or more in
real-world use.  If you don't believe me, check the 3ware RAID card
reviews on storagereview.com (assuming SR is still up).

This means a 100Mbit/sec network throughput, which is 12.5MBytes/sec,
will easily fit within the maximum throughput of the PCI bus.  The real
issue is kernel efficiency.  Zero-copy TCP and things like that are
going to improve linux network performance by leaps and bounds.  Going
from a 132MByte/sec bus to a 528MByte/sec bus will disappoint you :-)
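The arithmetic above is trivial, but worth writing down once as a
throwaway sketch:

```python
# Peak PCI throughput in MBytes/sec: bus width (bits) * clock (MHz) / 8.
# These are theoretical peaks; real-world efficiency is lower (the
# ~110 MBytes/sec figure mentioned above for a good 32/33 board).
def pci_peak_mbytes(width_bits, clock_mhz):
    return width_bits * clock_mhz / 8.0

print(pci_peak_mbytes(32, 33))   # common 32-bit / 33MHz bus -> 132.0
print(pci_peak_mbytes(64, 33))   # -> 264.0
print(pci_peak_mbytes(64, 66))   # -> 528.0
```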


This is a common point of confusion.  Mr Billson is not the first person
to give someone a misleading answer in this respect, nor will he be the
last.  I do not intend to put him down by correcting his answer, but I
hope my post serves to better inform the readership of this list.

On Sat, 2002-02-23 at 09:10, Peter Billson wrote:
>   There was some discussion last January (2001) about this type of
> thing. The problem you will run into if you are using POTS Intel
> hardware is the PCI bus speed, so you are going to have a tough time
> filling one 100Mbs connection with an old Pentium - assuming an old
> 66Mhz PCI bus. You can forget about filling two or more. Also, cheap
> NICs will do more to kill your max. throughput.
>   That being said, I run old Pentium 133s with 64Mb RAM in several
> applications as routers and can notice no network latency on a 100BaseT
> network, but I have never benchmarked the machines. Usually the
> bottlenecks are elsewhere - i.e. server hard drive throughput. Packet
> routing, filtering, masquerading really doesn't require much CPU
> horsepower.
-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





Re: BGP4/OSPF routing daemon for Linux?

2002-03-01 Thread Jeff S Wheeler
IOS doesn't have protected memory, is that not correct?  It's like old
multitasking systems where you didn't have virtual memory. :/

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





Re: how to design mysql clusters with 30,000 clients?

2002-05-24 Thread Jeff S Wheeler
I don't know if anyone else who followed-up on this thread has ever
implemented a high traffic web site of this calibre, but the original
poster is really just trying to band-aid a poor session management
mechanism into working for traffic levels it wasn't really intended for.

While he may still need a large amount of DB muscle for other things,
using PHP/MySQL sessions for a site that really expects to have 30,000
different HTTP clients at peak instants is not very bright.  We have
cookies for this.  Server-side sessions are a great fallback for
paranoid end-users who disable cookies in their browser, but it is my
understanding that PHP relies on a cookie-based session ID anyway?

I tried to follow up with the original poster directly but I can't
deliver mail to his MX for some reason.  *shrug*

Look into signed cookies for authen/authz/session, using a shared secret
known by all your web servers.  This is not a new concept, nor a
difficult one.  It can even be implemented using PHP, though a C apache
module is smarter.
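The mechanics are simple enough to sketch in a few lines (Python purely
for illustration; the secret and field names are invented, and a real
deployment would also bind an expiry time into the signed payload):

```python
import hmac, hashlib

# Shared secret known by all web servers (invented value for illustration).
SECRET = b"shared-secret-known-by-all-web-servers"

def sign_cookie(payload: str) -> str:
    """Append an HMAC of the payload; the result is the cookie value."""
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha1).hexdigest()
    return payload + "|" + mac

def verify_cookie(cookie: str):
    """Return the payload if the HMAC checks out, else None."""
    payload, _, mac = cookie.rpartition("|")
    good = hmac.new(SECRET, payload.encode(), hashlib.sha1).hexdigest()
    return payload if hmac.compare_digest(mac, good) else None

cookie = sign_cookie("uid=1234;expires=1040000000")
assert verify_cookie(cookie) == "uid=1234;expires=1040000000"
assert verify_cookie("uid=1;forged|deadbeef") is None  # tampering fails
```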

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: how to design mysql clusters with 30,000 clients?

2002-05-27 Thread Jeff S Wheeler
Everything I've heard about experiences with mysql on NFS has been
negative.  If you do want to try it, though, keep in mind that
100Mbit/sec ethernet is going to give you 12.5MByte/sec, less actually,
of I/O performance.  GIGE cards are cheap these days, as are switches
with a few GIGE ports.  1000baseT works, take advantage of it.

I hope you'll think about a solution other than mysql for this problem,
though.  It's not the right tool for session management on such a scale.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Mon, 2002-05-27 at 07:54, Patrick Hsieh wrote:
> Hello Nicolas Bougues <[EMAIL PROTECTED]>,
> 
> I'd like to discuss the NFS server in this network scenario.
> Say, if I put a linux-based NFS server as the central storage device and
> make all web servers as well as the single mysql write server attached
> over the 100Base ethernet. When encountering 30,000 concurrent clients, 
> will the NFS server be the bottleneck? 
> 
> I am thinking about to put a NetApp filer as the NFS server or build a
> linux-based one myself. Can anyone give me some advice?
> 
> If I put the raw data of MySQL write server in the NetApp filer, if the
> database crashes, I can hopefully recover the latest snapshot backup
> from the NetApp filer in a very short time. However, if I put on the
> local disk array(raid 5) or linux-based NFS server with raid 5 disk
> array attached, I wonder whether it will be my bottleneck or not.
> 
> How does mysql support the NFS server? Is it wise to put mysql raw data
> in the NFS?
> 
> 
> -- 
> Patrick Hsieh <[EMAIL PROTECTED]>
> GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg
> 
> 







Re: increase mysql max connections over 1024

2002-06-16 Thread Jeff S Wheeler
On Sun, 2002-06-16 at 04:24, Osamu Aoki wrote:
  *snip*
> If what you say is true, I can tell you that ANY program which is
> involved with mysql and which used local_lim.h needs to be recompiled.
> What I do not know is whether this involves glibc (libc6) or not.

Why would this be the case?  I might be missing something, but I believe
the poster is just discussing making a change to the mysql-server, NOT
the libmysqlclient library.

Any library dependencies of the mysqld server (ldd bin/mysqld ?) would
need to be rebuilt, probably including libc, but you could always keep
private copies of them and use LD_LIBRARY_PATH to avoid changing the
system-wide libc, and thus necessitating a rebuild of other sources
which depend on that limit being consistent between themselves and their
dependencies.
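A sketch of the private-library approach (all paths invented):

  mkdir /opt/mysql-libs
  cp /usr/src/rebuilt-glibc/libc.so.6 /opt/mysql-libs/
  # mysqld resolves libc from /opt/mysql-libs first; system libc untouched
  LD_LIBRARY_PATH=/opt/mysql-libs /usr/sbin/mysqld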

Am I off my rocker?  I know it's not a real clean solution, keeping a
separate copy of libc, but it seems workable.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: transfer rate

2002-07-04 Thread Jeff S Wheeler
Hi, I suggest you check the duplex mode on your ethernet interface and
your switch.  I had a problem similar to yours just a couple of months
ago, and tracked it down to the interface auto-negotiating into half
instead of full duplex.  On a busy ethernet interface that can cause
enough collisions to affect TCP throughput substantially, due to that
small amount of packet loss.

Unfortunately under Linux there is no "good way" to find out the link
speed and duplex condition portably among different ethernet adapters,
at least that I am aware of.  Here is what I do:

$ dmesg |egrep eth[0-2]
eth1: Intel Corp. 82557 [Ethernet Pro 100], 00:A0:C9:39:4C:2C, IRQ 19.
eth2: ns83820 v0.15: DP83820 v1.2: 00:40:f4:17:74:8a io=0xfebf9000
irq=16 f=h,sg
eth2: link now 1000 mbps, full duplex and up.

Also unfortunately, most ethernet drivers don't bother reporting this,
although you can hack it into your drivers if it is important to you.
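That said, many common 10/100 drivers do expose the MII transceiver
registers, and the mii-tool utility (in the net-tools package) can
query or force the mode on those cards.  Driver support varies, so
treat this as a sketch:

  mii-tool eth0                    # report negotiated speed/duplex
  mii-tool -F 100baseTx-FD eth0    # force 100Mbit full duplex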

But another good way to check is to examine your switch:

switch0#sh int fa0/2
  ...
  Auto-duplex (Full), Auto Speed (100), 100BaseTX/FX
  ...
This shows 100/full auto-negotiated.  Really, this is a bad thing to do,
as we should be setting all our ports to 100/full in the configuration,
but it probably won't be done until it bites us in the ass.  :-)

switch0#sh int fa0/24
  ...
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ...
This is a port which has been fixed to 100/full, because it *did* bite
us in the ass.  It's an uplink to a router and does several mbits/sec
24x7, and that packet loss affected all the TCP sessions going over it,
limiting them to around 400Kbits/sec throughput due to TCP backoff :-(


I hope this is helpful.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/


On Wed, 2002-07-03 at 22:20, Rajeev Sharma wrote:
> hi all
> 
>   I am stuck with  a problem of my network..
> 
>   my one debian box is very unstable ..some time it transfer data
>   smoothly(997.0 kB/s)and sometime it hanged up at 123.0 kB/s..
>   and when i use
> 
>   ping -f 192.168.x.x
> 
>  it shows 1% data loss and some time 0% data loss..
> 
> 
>  i have checked my cables,switch ...but no use 
> 
>  pliz help me
> 
**snip**


pgpYUOTYpLYRj.pgp
Description: PGP signature


Re: Linux box

2002-07-31 Thread Jeff S Wheeler
Riccardo,

You describe that you want all traffic originating from Net1 to traverse
Router1, and traffic originating from Net2 to traverse Router2, in order
to reach the Internet.  That is called policy-based routing, and you can
implement it with iproute2 on Linux.
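For the archives, a minimal iproute2 sketch of such a policy (all
addresses invented: Net1 = 192.168.1.0/24 exiting via Router1 at
10.0.1.1, Net2 = 192.168.2.0/24 exiting via Router2 at 10.0.2.1):

  ip route add default via 10.0.1.1 table 101
  ip route add default via 10.0.2.1 table 102
  ip rule add from 192.168.1.0/24 table 101
  ip rule add from 192.168.2.0/24 table 102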

You cannot both multihome using BGP, and policy-route in that manner. 
No currently deployed bgp-speakers can configure your routing table to
implement that policy.

In addition, although it seems like you have a firm understanding of
what you want to do on this level, your organization probably lacks the
necessary know-how to successfully deploy BGP, and your two ISPs may not
even be staffed or equipped to deliver BGP sessions to you.  If you want
to undertake it anyway, I strongly urge you to contract a consultant who
can help you and possibly your ISPs through the process.

I hope this helps.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





Re: Linux box

2002-07-31 Thread Jeff S Wheeler
If you want to deal with the hassle of DNS switch-over from Net1 to Net2
in the event of an outage, you can do that.  You can also easily setup a
linux box or one of your Cisco routers to have two default routes.  One
would be the flat-cost circuit, the other would be per-packet circuit.

The per-packet circuit would have a higher "metric" than the flat-cost. 
You can do this with the standard linux ip routing mechanisms.  I'm sure
you have noticed the Metric column in route(8) before and perhaps not
known what it meant.  This is its use.  See the route(8) man page for
help on how to install two routes for the same network with different
metrics and gateways.  Note that higher metric is _lower_ preference. 
So use metric 0 on your least expensive gateway, metric 1 on your
"backup" route, or whatnot.


If you are doing some sort of web hosting, or something where the
general Internet is accessing services at your site, you would be _far_
smarter to colocate one or more PCs with a colocation supplier, than to
try to do fail-over with DNS.  It's a bad solution, won't work all the
time, you'll have TTL issues, etc. etc. but it is possible.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/





Re: creepy-crawlers from TW

2002-08-08 Thread Jeff S Wheeler
I suggest you email abuse contact for seed.net.tw, who appears to be the
owner of that network block (139.175/16) and take it up with them.  I
assume you already tried to go through openfind.com.tw and did not get a
satisfactory response.

You could always use this approach as well  :-)  Or if you do not have
access to your routing tables, add host routes to loopback0 on your web
servers.  I do this for customers when they request IP blocks.
  ip route 139.175.250.23/32 null0
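On a Linux router, the iproute2 equivalent of that null route is a
blackhole route (same example address as above):

  ip route add blackhole 139.175.250.23/32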

What I definitely recommend against is trying to use Apache's access
controls to block based on IP.  It's not very smart, and will do a DNS
lookup on every request even if you are trying to block by IP.  If the
IP route null0 method ever fails me, I will patch apache to fix this.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Thu, 2002-08-08 at 14:38, Martin WHEELER wrote:
> Does *anyone* have a solution for keeping the site-sucking bots from
> openfind.tw.com out of my machine?
> 
> They don't obey any sort of international guidelines;, and tie my
> machine up for hours on end once they find a way of getting in and
> latching on.
> 
> I'm getting desperate.
> 
> Any help appreciated.
> 
> msw
> -- 
> 
> 





[Fwd: VU#210321]

2002-09-10 Thread Jeff S Wheeler
Below is a message some CERT folk posted to NANOG-L this morning.  I
personally think it's a crock of shit, and that CERT is damaging their
credibility by advising based purely on rumor and speculation, however
perhaps someone on this list has additional information?

Facts and first-hand information only, please.

--
Jeff S Wheeler <[EMAIL PROTECTED]>


-Forwarded Message-

From: CERT(R) Coordination Center <[EMAIL PROTECTED]>
To: nanog@merit.edu
Cc: CERT(R) Coordination Center <[EMAIL PROTECTED]>
Subject: VU#210321
Date: 10 Sep 2002 10:16:14 -0400


-BEGIN PGP SIGNED MESSAGE-

Hello,

The CERT/CC has recently seen discussions in a public forum detailing
potential vulnerabilities in several TCP/IP implementations (Linux,
OpenBSD, and FreeBSD). We are particularly concerned about these types
of vulnerabilities because they have the potential to be exploited
even if the target machine has no open ports.

The messages can be found here:

http://lists.netsys.com/pipermail/full-disclosure/2002-September/001667.html
http://lists.netsys.com/pipermail/full-disclosure/2002-September/001668.html
http://lists.netsys.com/pipermail/full-disclosure/2002-September/001664.html
http://lists.netsys.com/pipermail/full-disclosure/2002-September/001643.html

Note that one individual claims two exploits exist in the
underground. At this point in time, we do not have any more
information, nor have we been able to confirm the existence of these
vulnerabilities.

We would appreciate any feedback or insight you may have. We will
continue to keep an eye out for further discussions regarding this
topic.

FYI,
Ian

Ian A. Finlay
CERT (R) Coordination Center
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA  USA  15213-3890
-BEGIN PGP SIGNATURE-
Version: PGPfreeware 5.0i for non-commercial use
Charset: noconv

iQCVAwUBPX3/VqCVPMXQI2HJAQFEqQQAr54e9c5SGgrIfmK5+EWqSOdvySKRtjwa
6dE4Z4DcoyHS57W5BEwW2OSXSGwrBL+mzippfTEnwAVT/otLYAADsnlPSQioRYNi
qHVh8yRXgh3kBgx3cMdhe3NC6zaSWffOsc/EvhkCDo2xa8FQItOqE5MjOeASjt1L
st5qq4mgM+E=
=kHt1
-END PGP SIGNATURE-






Re: Fw: VIRUS IN YOUR MAIL (W32/BugBear.A (Clam))

2002-10-17 Thread Jeff S Wheeler
On Thu, 2002-10-17 at 04:51, Brian May wrote:
> AFAIK transparent proxying in Linux is limited to redirecting all ports
> to a given port another host. It is not possible for the proxy server to
> tell, for instance what the original destination IP address was.
Is this true, or will a getsockname() performed on a TCP socket which
was created as one endpoint of a connection which is being transparently
proxied give the client's intended destination address?  I do not know.

> A transparent HTTP proxy relies on the server name HTTP1.1 request
> field to determine what host the client really wanted to connect to.
> (this has been tested with Pacific's transparent proxy).
I do know that all HTTP/1.1 requests must contain a Host: header to be
valid.  Even if you knew the destination IP address, if you did not have
a Host: header you couldn't successfully complete an HTTP/1.1 request.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/




New BIND 4 & 8 Vulnerabilities

2002-11-12 Thread Jeff S Wheeler
See ISC.ORG for information on new BIND vulnerabilities.  Current bind
package in woody is 8.3.3, which is an affected version.  Patches are
not available yet, it seems.

http://www.isc.org/products/BIND/bind-security.html

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/




Re: New BIND 4 & 8 Vulnerabilities

2002-11-12 Thread Jeff S Wheeler
I've taken Sonny's suggestion and upgraded to the bind9 package. 
Initially I thought I had a serious problem, as named was not answering
any queries, however it seems to have "fixed itself".  Ordinarily that
would spook me, but in this situation I think I'd rather have spooky
software than known-to-be-exploitable software :-)

Thanks for the suggestion, Sonny.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Tue, 2002-11-12 at 13:53, Sonny Kupka wrote:
> Why not use Bind 9.2.1..
> 
> It's in woody.. When I came over from Slackware to Debian I installed it 
> and haven't looked back..
> 
> The file format was the same from 8.3.* to 9.2.1 I didn't have to do 
> anything..
> 
> ---
> Sonny
> 
> 
> At 01:08 PM 11/12/2002 -0500, Jeff S Wheeler wrote:
> >See ISC.ORG for information on new BIND vulnerabilities.  Current bind
> >package in woody is 8.3.3, which is an affected version.  Patches are
> >not available yet, it seems.
> >
> >http://www.isc.org/products/BIND/bind-security.html
> >
> >--
> >Jeff S Wheeler   [EMAIL PROTECTED]
> >Software DevelopmentFive Elements, Inc
> >http://www.five-elements.com/~jsw/
> 
> 





Re: New BIND 4 & 8 Vulnerabilities

2002-11-13 Thread Jeff S Wheeler
My BIND 8 zone files are working perfectly under BIND 9.  We do have
TTL values on every RR in every zone, though.  Perhaps that was your
difficulty?  I believe I made that change when we upgraded from 4.x to
8.x ages ago.

If there is no such script and you have difficulty with your zonefiles,
let me know the apparent differences and I'd be happy to whip up a Perl
script and post it to the debian-isp list.  We have hundreds of zones as
well, and if it there had been a file format problem, I would had to
have done so in order to make the upgrade work.

--
Jeff S Wheeler <[EMAIL PROTECTED]>

On Tue, 2002-11-12 at 19:04, Craig Sanders wrote:
> On Tue, Nov 12, 2002 at 12:53:51PM -0600, Sonny Kupka wrote:
> > Why not use Bind 9.2.1..
> > 
> > It's in woody.. When I came over from Slackware to Debian I installed
> > it and haven't looked back..
> > 
> > The file format was the same from 8.3.* to 9.2.1 I didn't have to do
> > anything..
> 
> is this fully backwards-compatible?
> 
> last time i looked at bind9, the zonefile format had some slight
> incompatibilities - no problem if you only have a few zonefiles that
> need editing, but a major PITA if you have hundreds.
> 
> if there are zonefile incompatibilities, is there a script
> to assist in converting zonefiles?
> 
> craig
> 
> -- 
> craig sanders <[EMAIL PROTECTED]>
> 
> Fabricati Diem, PVNC.
>  -- motto of the Ankh-Morpork City Watch
> 





Neighbour table overflow problem

2003-03-07 Thread Jeff S Wheeler
Dear list,

I have a linux 2.4 box running zebra and acting as a default gateway for
a number of machines.  I am concerned about "Neighbour table overflow"
output in my dmesg.  From some articles I've read on usenet, this is
related to the arp table becoming full.  Most of the posters solved
their problems by configuring a previously unused loopback interface, or
realizing that they had a /8 configured on one IP interface and a router
on their subnet that was using proxy-arp to fulfill the arp requests.

Neither of those is my situation, though.  I simply have a lot of hosts
on the segment.  When the network is busy I've seen as many as 230+ arp
entries, but it never seems to break 256.  Is this an artificial limit
on the number of entries that can be present in my arp table?  If so, I
would like to increase the limit to 2048 or so and give myself some
headroom.  I am concerned that might slow down packet forwarding, but I
can probably live with that.
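For what it's worth, the 2.4 neighbour table limits appear to be
sysctl-tunable; the usual defaults are gc_thresh1/2/3 = 128/512/1024,
and garbage collection gets aggressive above gc_thresh2, which could
explain a ceiling in the couple-hundred range.  Something like the
following (the exact numbers are my guess at sane headroom, not tested
here):

  sysctl -w net.ipv4.neigh.default.gc_thresh1=512
  sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
  sysctl -w net.ipv4.neigh.default.gc_thresh3=4096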

Has anyone on the list encountered similar problems?  If so, is this the
approach you took to solve them or did you do something else?

Thanks,

--
Jeff S Wheeler <[EMAIL PROTECTED]>

[EMAIL PROTECTED] uname -a
Linux mr0 2.4.20 #1 Mon Dec 16 14:13:15 CST 2002 i686 unknown
[EMAIL PROTECTED] arp -an |wc -l
239





Re: BGP memory/cpu req

2003-03-11 Thread Jeff S Wheeler
On Tue, 2003-03-11 at 05:58, Teun Vink wrote:
> Check out the Zebra mailinglist, it has been discussed there over and
> over. Basically, a full routing table would require 512Mb at least. CPU
> isn't that much of an issue, any 'normal' CPU (P3) would do...
512MB is more than enough for zebra.  I would be comfortable running
zebra on as little as 256MB of memory.  If you want to use the box for
other tasks like squid, etc. you might become constrained.  We use ours
for some RRD polling.

I have a box with two full sessions of about 120k prefixes from transit
providers, both sessions with soft-reconfig enabled.  CPU is never an
issue, as far more CPU time is spent handling ethernet card Rx
interrupts than BGP or OSPF updates.  The box forwards an average of
11kpps and 60Mbit/sec, peaks around 16kpps and 90Mbit/sec.

[EMAIL PROTECTED]:~# ps u `cat /var/run/zebra.pid` `cat
/var/run/bgpd.pid` `cat /var/run/ospfd.pid`
USER   PID %CPU %MEM   VSZ  RSS TTY  STAT START   TIME COMMAND
root   197  0.1  2.4 25916 24872 ?   S 2002 143:03
/usr/local/sbin/zebra -d -f/etc/zebra/zebra.conf
root   200  0.6  6.2 65756 64960 ?   S 2002 786:20
/usr/local/sbin/bgpd -d -f/etc/zebra/bgpd.conf
root   828  0.0  0.1  2292 1204 ?S 2002  20:06
/usr/local/sbin/ospfd -d -f/etc/zebra/ospfd.conf
[EMAIL PROTECTED]:~# free
             total       used       free     shared    buffers     cached
Mem:       1033380     864148     169232          0      21348     622244
-/+ buffers/cache:      220556     812824
Swap:       497972          0     497972
[EMAIL PROTECTED]:~# 

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>




RWHOIS daemon options

2003-07-03 Thread Jeff S Wheeler
Dear debian-isp list,

I've just been asked to setup an rwhois server in order to satisfy ARIN
policy without SWIPing a large number of customer blocks via email. I
have downloaded the daemon available at http://www.rwhois.net however it
leaves much to be desired. The example configurations are lacking, the
config file formats themselves aren't great, data is kept in text files
in a rather obtuse directory structure (by default), and I am wholely
unimpressed with the documentation. I'm a big IRC guy, and none of my
IRC netops pals seem to have much love, or success, with rwhoisd.

Does anyone else on the list run an RWHOIS server, and if so, which one?
An apt-cache search revealed little, as did a freshmeat.net query. If
others on the list are in the same boat I am in, perhaps we could put
our heads together and come up with a free-as-in-debian alternative.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>





Re: Funny NFS

2003-09-22 Thread Jeff S Wheeler
On Mon, 2003-09-22 at 06:09, Dave wrote:
> [EMAIL PROTECTED]'s password:
> Last login: Mon Sep 22 09:04:05 2003 from 192.168.11.2 on pts/1
> Linux valhalla 2.4.20-valhalla #1 Thu Sep 18 08:21:07 SAST 2003 i686 unknown
> struct nfs_fh * fh;
> const char *name;
> unsigned intle
> Last login: Mon Sep 22 09:04:05 2003 from 192.168.11.2
> valhalla:~#

The first thing I would do is log in to an account without any startup
script commands, e.g. biff, setting permissions on the tty, umask
changes. If you still get the message, I would start a login shell for
your new, clean account under strace with the follow-children option
(strace -f, plus -ff -o prefix to write each child's syscalls to its
own prefix.PID file). You can then grep those files to figure out where
that output is coming from.

I doubt that you have a kernel problem but I suppose it is feasible. It
would be better to check other options first. Incidentally I am running
2.4.20 on my home NFS server and have no similar problems. I have not
upgraded to 2.4.20 on any of my NFS clients yet.

--
Jeff S Wheeler





Re: two ethernet ports on one PCI NIC?

2003-10-09 Thread Jeff S Wheeler
On Thu, 2003-10-09 at 15:57, Chris Evans wrote:
> but only one PCI slot.  Anyone know of a reliable dual ethernet NIC 
> for PCI that has linux drivers (Debian tested ideally)?


Chris,

The Intel PRO/100 and PRO/1000 ethernet cards are excellent and
inexpensive. You can also purchase mainboards with several of these
chipsets on-board. I have a number of Tyan mainboards with as many as 3
on-board Intel-based ethernet ports.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>





Re: Route Question!

2003-11-19 Thread Jeff S Wheeler
First, I strongly suggest you move your thread to the quagga-users list
at [EMAIL PROTECTED] You can find numerous configuration
examples in the archives at http://lists.quagga.net. This is the best
forum for help with Zebra/Quagga. I suggest you follow-up on that list,
which I also participate on.

On Wed, 2003-11-19 at 16:16, kgb wrote:
> router i have bgp all my traffic which are bgpeer (all traffic in my
> country) and int (outside my country or with two words international

First, you need to figure out how you will identify "bgpeer" traffic and
"international" traffic. AS-PATH works but it is not the best way to go.

Please provide details about how each of your eBGP sessions reaches your
network. Are they all presently on your Cisco? What type of ports do you
use, e.g. E3/DS3, FastEthernet, etc?

> cisco router and bgp on debian linux router to be with some access list
> _permit_ as_number _denied_ as_number can someone explane how that can

You can accomplish what you want with AS-PATH access lists, however it
will be a pain in the ass to maintain. What you really want is a BGP
community based route filtering system. In my shop(s), I set communities
on all routes learned via eBGP sessions. This helps me identify where I
learned a route (which POP); who it came from (customer, transit, peer);
and if it should have any special local-preference or export concerns. I
then use route-maps that match based on communities to export only my
customer routes to peers and transit providers, for example.

To do this, every eBGP session needs its own route-map. Below is just an
example; you will need some additional parameters for your peer ASes and
your transit ASes, as I understand you. I can produce a better example
when you provide more information. Please, follow up to the quagga list.

router bgp 10
neighbor 20.20.20.20 remote-as 20
neighbor 20.20.20.20 description AS 20 transit
neighbor 20.20.20.20 soft-reconfiguration inbound
neighbor 20.20.20.20 route-map transit_AS20_in in
neighbor 20.20.20.20 route-map transit_AS20_out out
neighbor 30.30.30.30 remote-as 30
neighbor 30.30.30.30 description AS 30 peer
neighbor 30.30.30.30 soft-reconfiguration inbound
neighbor 30.30.30.30 route-map peer_AS30_in in
neighbor 30.30.30.30 route-map peer_AS30_out out
neighbor 40.40.40.40 remote-as 40
neighbor 40.40.40.40 description AS 40 customer
neighbor 40.40.40.40 soft-reconfiguration inbound
neighbor 40.40.40.40 route-map cust_AS40_in in
neighbor 40.40.40.40 route-map cust_AS40_out out
!
ip community-list cust_routes permit 10:14
ip community-list peer_routes permit 10:17
ip community-list transit_routes permit 10:19
!
route-map transit_AS20_in permit 100
set local-preference 100
set community 10:19 # this is "learnt from transit" community
set next-hop 20.20.20.20 # always enforce next-hop
!
route-map transit_AS20_out permit 100
match community cust_routes
set community none # don't send our communities to transit
set next-hop 20.20.20.21 # this is our interface to AS20
!
route-map peer_AS30_in permit 100
set local-preference 300
set community 19:17 # this is "learnt from peer" community
set next-hop 30.30.30.30
!
route-map peer_AS30_out permit 100
match community cust_routes
set community none # unless peer wants your communities
set next-hop 30.30.30.31
!
route-map cust_AS40_in permit 100
set local-preference 500
set community 19:14 # this is "learnt from customer"
set next-hop 40.40.40.40
!
route-map cust_AS40_out permit 100
match community transit_routes
goto 1000
!
route-map cust_AS40_out permit 110
match community peer_routes
goto 1000
!
route-map cust_AS40_out permit 120
match community cust_routes
goto 1000
!
route-map cust_AS40_out deny 999
!
route-map cust_AS40_out permit 1000
set community none
set next-hop 40.40.40.41

> be done in more details i want that because my cisco router is too weak
> and can't work well with 50-60Mbit traffic and if i can do that to split

With your level of traffic, 50Mb/s - 60Mb/s, you will want NICs whose
drivers are poll-based rather than interrupt-driven. The Intel e1000
cards are superb.

I hope this is a helpful start. You'll need to do some configuration
work on OSPF and Zebra itself as well, but we'll need to look at more
specifics of your setup to do that.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>
Five Elements, Inc.


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: CPU Utilization on an ethernet bridge

2003-11-19 Thread Jeff S Wheeler
On Wed, 2003-11-19 at 21:42, Simon Allard wrote:
> I have replaced NIC's as I thought it might of been the drives also. I
> moved to the eepro100 cards. Same problem.

You should be using NICs with a poll-based driver, as opposed to an
interrupt-based driver. This will preempt the kernel less often, and
allow it to service the NIC more efficiently.

The e1000 driver is excellent in this respect. We run more than 100Mb
through a Linux router with a full eBGP table (~127k FIB entries) with
no appreciable CPU consumption. The only time the box is substantially
taxed is when a BGP peer flaps, in which case zebra consumes a lot of
CPU power reconfiguring the FIB. It's a shame that the Linux kernel
doesn't make the FIB structures accessible directly via an interface
similar to /dev/kmem so zebra could simply mmap(2) it in and make large
writes instead of small ioctl(2) calls.
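
The kernel does at least expose a read-only view of the IPv4 FIB through
/proc/net/route, with addresses printed as hex words. A small sketch of
decoding one line; the sample line is hard-coded in place of a live system,
and the byte-order handling assumes the little-endian layout seen on x86:

```python
# Decode one data line of /proc/net/route. Columns: Iface, Destination,
# Gateway, Flags, RefCnt, Use, Metric, Mask, MTU, Window, IRTT.
import socket
import struct

def parse_route_line(line):
    """Return (iface, destination, gateway, mask) as dotted quads."""
    fields = line.split()
    iface, dest, gw, mask = fields[0], fields[1], fields[2], fields[7]

    def hex_to_ip(h):
        # addresses appear as 32-bit hex in host (little-endian) order
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    return iface, hex_to_ip(dest), hex_to_ip(gw), hex_to_ip(mask)

# a made-up sample line: 192.168.2.0/24 via eth0, no gateway
sample = "eth0\t0002A8C0\t00000000\t0001\t0\t0\t0\t00FFFFFF\t0\t0\t0"
```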

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>
Five Elements, Inc.





Re: CPU Utilization on an ethernet bridge

2003-11-20 Thread Jeff S Wheeler
On Thu, 2003-11-20 at 22:34, Donovan Baarda wrote:
> Do you really mean poll-based, or DMA based? Traditionally polling is
> evil CPU wise... but there could be reasons why polling is better if
> that is the only thing you are doing. Possibly PC DMA is probably so old
> and crappy that it's not worth using?

It is my understanding that the modern e1000 driver polls the NIC to
find out when new frames are available. You may be correct that it just
looks in the DMA rx ring, though; I am a bit out of my league at this
point. In either case, the PRO/100 and PRO/1000 cards, using the Intel
e100/e1000 drivers, are superb. I suspect the 3c59x driver is not quite
so modern, and the kernel is preempted by NIC interrupts frequently when
new frames come in under your existing bridge configuration.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>
Five Elements, Inc.





ntpd listening on alias interfaces seems non-trivial

2004-01-17 Thread Jeff S Wheeler
I have been attempting, without success, to get ntpd listening on an
alias interface on one of my general purpose boxes. It seems that ntpd
prefers to listen on localhost:ntp and eth0addr:ntp. It opens a socket
for *:ntp as well, but does not respond to queries on other addresses.
Here is some lsof output demonstrating this:

# lsof -p 16667 | grep UDP
ntpd    16667 root    4u  IPv4  4493134  UDP *:ntp
ntpd    16667 root    5u  IPv4  4493135  UDP localhost:ntp
ntpd    16667 root    6u  IPv4  4493136  UDP hostname:ntp

I checked the archives, and it seems another poster had similar trouble
in Dec'02, but there were no apparent follow-up posts. Google has also
been less than revealing on this topic. All suggestions entertained.
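
For comparison, a socket explicitly bound to one address answers only on
that address, which is how ntpd's per-interface sockets behave. A minimal
sketch on the loopback address (port 0 lets the kernel choose a free port;
this illustrates per-address binding, not the ntpd bug itself):

```python
import socket

# Bind a UDP socket to one specific address, the way ntpd opens a socket
# per interface address alongside its *:ntp wildcard socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))          # port 0: kernel picks a free port
host, port = sock.getsockname()

# A query sent to that address is delivered to the bound socket.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", (host, port))
data, peer = sock.recvfrom(64)

client.close()
sock.close()
```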

--
Jeff





Re: AOL testing new anti-spam technology

2004-01-25 Thread Jeff S Wheeler
On Sat, 2004-01-24 at 13:07, Joey Hess wrote:
> One thing I've been wondering about is pseudo-forged @debian.org From
> addresses (like mine) and spf. It would seem we can never turn it on for
> toplevel debian.org without some large changes in how developers send
> their email.

I don't understand how this problem will be solved for folks who travel.
For example, many hotel access services redirect your SMTP TCP sessions
to their local smart sender these days, as quite simply, that is the
easiest way to prevent customers from being unable to send mail due to
relay restrictions on their office or ISP mail server.

--
Jeff





Re: I/O performance issues on 2.4.23 SMP system

2004-01-29 Thread Jeff S Wheeler
On Tue, 2004-01-27 at 16:49, Benjamin Sherman wrote:
> I have a server running dual 2.66Ghz Xeons and 4GB RAM, in a 
> PenguinComputing Relion 230S system. It has a 3ware RAID card with 3 
> 120GB SATA drives in RAID5. It is currently running Debian 3.0 w/ 
> vanilla kernel 2.4.23, HIGHMEM4G=y, HIGHIO=y, SMP=y, ACPI=y. I see the 
> problem with APCI and HT turned off OR if I leave them on.

I don't know anything about this 2.4.23 I/O problem, but I will tell you
that RAID 5 is not the way to go for big SQL performance. In a RAID 5
array, every small random write turns into a parity read-modify-write, so
the whole array is busy for each operation. You already spent a lot of
money on that server; I suggest you buy more disks and use RAID 10.
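
A back-of-envelope model of that write penalty, assuming ~4 disk operations
per RAID 5 small write (read data, read parity, write both) and 2 per
RAID 10 write (one per mirror); the per-disk IOPS figure is invented:

```python
# Rough random-write IOPS for an array, given a per-disk IOPS budget and
# the write penalty of the RAID level.
def random_write_iops(disks, per_disk_iops, level):
    total = disks * per_disk_iops
    penalty = {"raid5": 4, "raid10": 2}[level]
    return total // penalty

# 3 disks in RAID 5 vs 6 disks in RAID 10, ~100 IOPS per spindle:
raid5 = random_write_iops(3, 100, "raid5")    # parity penalty dominates
raid10 = random_write_iops(6, 100, "raid10")  # scales with spindle count
```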

--
Jeff





Re: Netgear FA311 and natsemi issue

2004-02-14 Thread Jeff S Wheeler
On Fri, 2004-02-13 at 19:21, [EMAIL PROTECTED] wrote:
> OS: Debian 3.0R2 stable 
> PC: Old Pentium Pro box with Intel 440FX chipset 
> Kernel: 2.4.18 (standard Debian kernel with natsemi support builtin) 
> natsemi driver: came with Debian dist (1.14 IIRC) 

I had endless problems with several Netgear ns83820 based cards last
year, and finally decided to throw them out in favor of Intel cards and
the Intel e1000 driver.  I am very pleased with this decision, and
recommend you make the same one, rather than cause yourself more grief
by trying to get the poorly supported Netgear cards to work.

--
Jeff





Re: Starting isp and going to use Debian

2004-02-21 Thread Jeff S Wheeler
On Sat, 2004-02-21 at 14:50, charlie derr wrote:
> > 5. Drive usage control (i.e. user only get 10M for mail and 15M for web)
> 
> We have quotas implemented on the web and mail servers.  This is a daily 
>task though (raising quotas of people who've exceeded their default)

You could automate much of that task.  There is a Perl module for
manipulating disk quotas. http://search.cpan.org/~tomzo/Quota-1.4.10/Quota.pm
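
As a sketch of what that automation could look like, here is a hypothetical
raise-on-threshold policy. The thresholds, growth factor, and 500MB cap are
all invented; the actual quota write would go through a binding such as the
Quota module above:

```python
# Hypothetical policy for the daily quota-raising chore: bump a user's
# limit once usage crosses a threshold, up to a hard cap.
def new_quota(used_mb, limit_mb, threshold=0.9, growth=1.5, cap_mb=500):
    """Return a raised limit when usage exceeds threshold, else the old one."""
    if used_mb >= threshold * limit_mb:
        return min(int(limit_mb * growth), cap_mb)
    return limit_mb
```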

-- jsw






Re: apache uses 100 % cpu

2004-02-28 Thread Jeff S Wheeler
On Sat, 2004-02-28 at 10:14, Timo Veith wrote:
> This is the output of strace:
> 
> read(8, "", 4096)   = 0
> read(8, "", 4096)   = 0
> read(8, "", 4096)   = 0
> read(8, "", 4096)   = 0
>   looping forever as it seems.


First of all, let me compliment you on the good level of detail you've
provided in your request for trouble-shooting aid.  If these processes
are still running, I'd really like to see what is on descriptor 8 of the
process that generated the above strace output.  To find that out, make
sure you have the "lsof" package installed, and issue `lsof -p <pid>`,
then take note of the FD column in the output.  That indicates which
file descriptor is being examined, and of course the information in the
remaining columns to the right are details about that descriptor.

I can't imagine why apache itself would be caught in the scenario you
are experiencing, but perhaps a CGI/PHP script or buggy module is the
culprit.  If that is the case, simple knowledge of what apache is trying
to read may be helpful.
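
The pattern in the strace output, read() returning 0 over and over, is what
a descriptor at end-of-file looks like when the reader never checks the
return value. A small sketch with a pipe (bounded here so it terminates,
unlike the buggy loop):

```python
import os

# Create a pipe, write once, then close the writer so reads hit EOF.
r, w = os.pipe()
os.write(w, b"last bytes")
os.close(w)

assert os.read(r, 4096) == b"last bytes"   # the real data arrives once
# Every subsequent read returns b"" (0 bytes) immediately; a loop that
# ignores this return value spins at 100% CPU, like the strace shows.
reads = [os.read(r, 4096) for _ in range(3)]
os.close(r)
```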

If you have gdb available (from the gdb package) and your apache binary
is not stripped of debugging symbols, you can also issue `gdb -p <pid>`,
which will attach the debugger to that running process.  I'm not sure
what the output will look like as it's issuing garbage reads constantly,
but you want to issue the gdb command `backtrace`, and send that output
to the mailing list.  Just issue `q` after you've got that to detach.

What version of Apache are you running, and with what modules?

--
Jeff




