Re: [GENERAL] Replication Using Triggers

2008-01-20 Thread Gregory Youngblood
On Sat, 2008-01-19 at 23:46 +, Gordan Bobic wrote:

> David Fetter wrote:
> 
> > In that case, use one of the existing solutions.  They're all way
> > easier than re-inventing the wheel.
> 
> Existing solutions can't handle multiple masters. MySQL can do it at 
> least in a ring arrangement.
> 

What about pgcluster? It's supposed to be able to provide synchronous
multi-master replication for postgresql.

Greg


[GENERAL] Anyone use pgcluster in production?

2005-06-18 Thread Gregory Youngblood
I was wondering how many people have used PGCluster in production
environments.


How stable is it? Are there any problems? Are there any versions that  
should be avoided? Which is the better choice for production use  
right now, 1.1 or 1.3? Are there any gotchas to be avoided?


I am evaluating PGCluster as a clustering solution, and would like to  
hear any stories you might have about getting it setup and running.  
I'd also appreciate any tips or tricks to getting things set up for  
optimal performance.


Thanks,
Greg

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Postfix/Maildrop and too many connections issues

2005-06-21 Thread Gregory Youngblood
I run postfix and have it connected to postgresql for just about  
everything. Postfix is very sloppy on the database side, or so it seems.


I ended up having to configure postfix to limit the number of
processes it will start, and then make sure postgres has more
connections available than that.


In postfix, I set "default_process_limit = 10" in main.cf. In  
postgresql, I set max connections to 256, with 4 reserved for super  
user. My mail volume also isn't too bad, typically, so I can get away  
with setting this to 10. In some cases, I think each postfix process  
is opening a new database connection for every lookup. And, if a  
process needs to look up 2 or 3 items, it uses new connections for  
every one. Then, the connections don't get closed until those  
processes are killed a little while later.
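For reference, the two settings involved look like this (the parameter names are the standard Postfix and PostgreSQL ones; the values shown are just the ones from my setup):

```
# /etc/postfix/main.cf -- cap the number of Postfix daemon processes
default_process_limit = 10

# postgresql.conf -- leave plenty of connection headroom above that cap
max_connections = 256
superuser_reserved_connections = 4
```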


Pure speculation, but it's about the only thing I can think of (short  
of going through and analyzing the source code) that would explain  
the disparity between postfix processes and postgres connections.


With that, I have the following in ps (after running for a while, so  
things are mostly stable):


Postgres Connections (not including stats): 43

Postfix:
transport: 15
virtual: 13
mailbox: 8
domains: 2

Courier IMAP/POP
courier: 5

These numbers will fluctuate with mail traffic. And, they do not  
correspond with the open # of postfix processes (19 processes: 1  
master, 1 qmgr, 2 trivial rewrite, 7 smtpd, 6 cleanup, 1 pickup, 1  
virtual).


That leads me to believe that the postfix database interface (for  
postgres at least) is pretty sloppy. I've been planning to try a  
pooling program, just to see if that would make a difference, but I  
haven't had time to do any research or practical testing to find  
out.  Does anyone know if this problem occurs when using postfix with  
LDAP servers or MySQL?


Thanks,
Greg

On Jun 21, 2005, at 12:00 PM, Sven Willenberger wrote:


We have a system set up whereby postfix and maildrop gather user info
from a pg database (7.4.2 on FreeBSD 5.2.1) to do local mail acceptance
and delivery. I have configured max connections at 512 but I find that
this is not enough and I get "connection limit exceeded for
non-superusers" errors. I see upon ps ax that there are hundreds of idle
connections (state I).

Is there any way on the server end to close these connections (the
process is apparently for postfix and then maildrop to open a
connection, run one select statement, and supposedly close the
connection)? If not, would pgpool help in this case? Since all the
connections are basically 2 users (one for the mail delivery system and
another for the mail retrieval), are there any such "connections"
limitations in pgpool (i.e. if we get a spike of 700-1000 queries each
with their own "connection")?

Thanks,

Sven




Re: [GENERAL] Postfix/Maildrop and too many connections issues

2005-06-21 Thread Gregory Youngblood

Thanks for the proxymap tip. I will definitely look into it.

However, it probably won't do much for me, since I look up user and
directory information (i.e. sensitive information), and proxymap very
clearly says not to use it for that. At least, not yet. Though it will
undoubtedly help others.


On Jun 21, 2005, at 4:30 PM, Roman Neuhauser wrote:


# [EMAIL PROTECTED] / 2005-06-21 15:00:12 -0400:


We have a system set up whereby postfix and maildrop gather user info
from a pg database (7.4.2 on FreeBSD 5.2.1) to do local mail acceptance
and delivery. I have configured max connections at 512 but I find that
this is not enough and I get "connection limit exceeded for
non-superusers" errors. I see upon ps ax that there are hundreds of idle
connections (state I).

Is there any way on the server end to close these connections (the
process is apparently for postfix and then maildrop to open a
connection, run one select statement, and supposedly close the
connection).



You are barking up the wrong software. See proxymap(8). (It's
mentioned in http://www.postfix.org/PGSQL_README.html)

--
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



[GENERAL] Clustering and replication options

2005-06-22 Thread Gregory Youngblood
I am looking for some information about clustering and replication  
options for postgresql.


I am aware of pgcluster, but have been unable to find anyone willing  
to share details about actually using it in a production environment.  
That's a little disconcerting. Is pgcluster not really ready for full  
production use right now?


I really would like to setup a multi-master cluster of databases, and  
pgcluster seems just perfect for the task. I will be testing it for  
myself in a few weeks.


I was hoping I could solicit some advice, or really any information,  
about other possible clustering or replication methods. I know
Slony-I is available, and that it is used by the .org registry, I
believe, as well as other places. It seems to be pretty mature as well.
However, I'm concerned about the master to multi-slave model. For  
databases with mostly reads and fewer writes, that model seems good.  
But, it seems like replication lag, not to mention the added  
complexity of splitting read and write operations between two  
database connections, could cause problems for applications with a  
large number of both reads and writes. Any lag would potentially  
allow stale data to be read from a slave after it had been updated on  
the master.


Can someone point me at some good references for either Slony-I or
pgcluster, other than their actual home pages? I've spent more time
reading and analyzing the pgcluster configuration than Slony-I, as my
needs include support for frequent, multiple writes.


Thanks,
Greg






Re: [GENERAL] Generate a list of (days/hours) between two dates

2005-06-27 Thread Gregory Youngblood
Hopefully I'm understanding your question correctly. If so, maybe  
this will do what you are wanting.


First, a couple of questions. Do you have this data in a table  
already, and are looking to extract information based on the dates?  
Or, are you basically wanting something like a for loop so you can  
generate the dates between start and stop values?


If the former, and I understand what you are looking to accomplish,  
here's one way to do it:


select timestampfield::date::timestamp as "date", count(*) from table  
where timestampfield between start and stop group by "date" order by  
"date";


should yield:
YYYY-MM-DD 00:00:00  #

for hours, use :

select (substr(timestampfield, 1, 13) || ':00:00')::timestamp as  
"hourly", count(*) from table where timestampfield between start and  
stop group by "hourly" order by "hourly";


should yield:
YYYY-MM-DD HH:00:00  #

Of course, this assumes your database already has this information.
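If instead you want the "for loop" behavior (generating the timestamps themselves rather than grouping existing rows), PostgreSQL 8.0 adds generate_series(), which makes this direct. A sketch using the dates from the question:

```sql
-- one row per hour from 6-1-2005 through 6-10-2005 (requires 8.0+)
select timestamp '2005-06-01' + n * interval '1 hour' as "hourly"
from generate_series(0, 9 * 24) as n
order by 1;

-- one row per day over the same range
select timestamp '2005-06-01' + n * interval '1 day' as "daily"
from generate_series(0, 9) as n
order by 1;
```

On 7.4 and earlier there is no built-in generate_series(), so you'd need a small set-returning PL/pgSQL function instead.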

I hope this helps.
Greg

On Jun 27, 2005, at 10:30 AM, [EMAIL PROTECTED] wrote:


Hi guys,

I've scoured the date/time functions in the docs as well as
google-grouped as many different combinations as I could think of to
figure this out without asking, but I'm having no luck.

I'd like to make a query that would return a list of every trunc'd
TIMESTAMPs between two dates.  For example, I'd want to get a list of
every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
get a list that looks like:

6-1-2005 00:00:00
6-1-2005 01:00:00
6-1-2005 02:00:00
etc

Conversely, I want to generate a list of every day between two dates,
like:

6-1-2005 00:00:00
6-2-2005 00:00:00
6-3-2005 00:00:00

I know there's gotta be some way to do this in a SELECT function, but
I'm running into a brickwall.  I'm trying to take some of my date
handling logic out of code and use the db engine so I can spend less
time developing/maintaining code when mature date handling already
exists in a resource I've already got loaded.

Any thoughts?




[GENERAL] Performance Tuning Best Practices for 8

2005-06-28 Thread Gregory Youngblood
I've been using postgres off and on since about 1997/98. While I have  
my personal theories about tuning, I like to make sure I stay  
current. I am about to start a rather thorough, application specific  
evaluation of postgresql 8, running on a Linux server (most likely
the newly released Debian stable).


While I have been running 7.4.x for a while, I have not had much of  
an opportunity to really look at 8. Are there any significant  
differences or gotchas tuning 8 as compared to 7.4 or older versions?


I was hoping those that have 8 in production environments might share  
some of their tuning experiences with me, or point me to useful web  
sites or books. Most of the websites and books I've seen on this  
topic are from 2003 and 2004, so I was hoping to find something a  
little more current.


Thanks,
Greg



Re: [GENERAL] Hot to restrict access to subset of data

2005-07-02 Thread Gregory Youngblood

I believe you can probably use views to accomplish this.

You create a view that is populated based on their username. Then you  
remove access to the actual table, and grant access to the view.


When people look at the table, they will only see the data in the
view and will not have access to the rest.


Of course, this assumes they do not need to update the data. I've not  
played around with rules to make a view allow updates. I believe it  
is possible, I've just not done it yet. This also assumes you have  
data somewhere that maps user names to document types.


The postgresql docs should provide the syntax and additional details  
if you want to try this. I have also found pgAdmin very useful to  
create views and other schema related activities as well.
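As a minimal sketch of the idea (the table, view, and mapping-table names below are made up for illustration; the mapping of user names to document types is assumed to exist):

```sql
-- users may no longer read the base table directly
revoke all on documents from public;

-- the view filters rows by the logged-in database user
create view my_documents as
  select d.*
  from documents d
  join user_doc_types m on m.doc_type = d.doc_type
  where m.username = current_user;

grant select on my_documents to public;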


Hope this helps,
Greg





Re: [GENERAL] Hot to restrict access to subset of data

2005-07-03 Thread Gregory Youngblood
I would strongly suggest that you create a database specific user,  
one that has read/write access within this database, and that your  
application use that user instead of the pg super user.


In general, the "super user" should never be used, except for  
specific administrative tasks. This holds true for Windows  
Administrator, Unix root, and postgresql's postgres users. If your  
application runs under a single user to the database, then that  
single user should be one that you create specifically for that  
purpose, and not the postgres user.
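Something along these lines is usually all it takes (the user and table names here are placeholders):

```sql
create user appuser with password 'secret';
grant select, insert, update, delete
  on documents, user_doc_types to appuser;
```

The application then connects as appuser, and the postgres superuser is reserved for administration.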


Greg

On Jul 3, 2005, at 1:19 PM, Andrus Moor wrote:


Greg,

using views would be nice.

I also have an "add" privilege which allows adding only new documents.
I think that this requires writing triggers in Postgres.

This seems to be a lot of work.
I don't have enough knowledge to implement this in Postgres.

So it seems more reasonable to run my application as Postgres
superuser and implement security in the application.

Andrus.

"Gregory Youngblood" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]


I believe you can probably use views to accomplish this.

You create a view that is populated based on their username. Then you
remove access to the actual table, and grant access to the view.

When people look at the table, they will only see the data in the view
and will not have access to the other.

Of course, this assumes they do not need to update the data. I've not
played around with rules to make a view allow updates. I believe it is
possible, I've just not done it yet. This also assumes you have data
somewhere that maps user names to document types.

The postgresql docs should provide the syntax and additional details if
you want to try this. I have also found pgAdmin very useful to create
views and other schema related activities as well.

Hope this helps,
Greg





[GENERAL] Errors building older versions of PostgreSQL

2005-07-13 Thread Gregory Youngblood
I've been going through some old backups, and found databases from  
pgsql versions as old as 7.0 (7.0, 7.1, 7.3, and 7.4 to be precise).  
I'm trying to build these older versions specifically so I can dump  
the data and see what I want to keep and what I can erase.


7.3 and 7.4 appear to have built cleanly and easily (I'd expect that  
for 7.4). However, 7.0 and 7.1 are giving me grief.


I'm building this on a fresh SuSE 9.3 system.

The 7.0 is being configured with:
./configure --prefix=/export/storage/PostgreSQL/7.0 --pgport=5433

It gets through most of the make process, but then at the point where  
it starts creating files like GNUmakefile, it returns:

sed: file conftest.sl line 35: unterminated `s' command

The GNUmakefile is 0 bytes, and it says config.h is unchanged. I  
forget how many files (or times) this message is generated, but the  
end result is there's nothing to make.


Second problem, this time is for 7.1.

7.1 is configured with:
./configure --prefix=/export/storage/PostgreSQL/7.1 --pgport=5434

It doesn't get as far as 7.0, but it does get to:
Checking types of arguments for accept() . . .

where it then says:
Cannot determine argument types

and aborts the configure process.

I've found the 7.1 problem (but no solutions so far) in google when  
building for Solaris 9.


I was hoping someone here could point me in a good direction to solve  
these problems. I'd really like to extract the data from these older  
databases. I don't need to get these fully running, just enough to  
get pg_dump to work.


Thanks,
Greg



Re: [GENERAL] Errors building older versions of PostgreSQL

2005-07-13 Thread Gregory Youngblood
Thanks for the tips. It kind of worked. I got past that point, but  
then got to another set of errors.


I'm going to change my approach. I'm trying to locate some older  
Linux ISOs that I can install in a vmware virtual environment and try  
to build in there. Since all I want is to dump my data, I don't  
really have that much of a worry about long-term performance within a  
virtualized system.


We'll see how it goes. So far finding older ISOs has been more  
difficult than I expected. It appears no one is seriously keeping an  
archive of old Linux distros. This is too bad, because it can be
non-trivial to get older versions of software working on newer distros,  
and that means data gathering dust in offline backups might not be  
recoverable, especially if it is just data backups and not entire  
systems (like I happen to be dealing with).


If this avenue fails, I'll post back with actual error messages. I do  
appreciate your assistance.


Thanks,
Greg


On Jul 13, 2005, at 7:58 PM, Alvaro Herrera wrote:


On Wed, Jul 13, 2005 at 04:29:41PM -0700, Gregory Youngblood wrote:



It gets through most of the make process, but then at the point where
it starts creating files like GNUmakefile, it returns:
sed: file conftest.sl line 35: unterminated `s' command



The problem is that GCC now gives a multiline string when given
--version.  I think the easiest way out is to change the configure  
file,

where it says

CC_VERSION=`${CC} --version`

to something like

CC_VERSION=`${CC} --version | head -1`

(Or change configure.in and rerun autoconf)



Second problem, this time is for 7.1.

7.1 is configured with:
./configure --prefix=/export/storage/PostgreSQL/7.1 --pgport=5434

It doesn't get as far as 7.0, but it does get to:
Checking types of arguments for accept() . . .



I think the easiest would be to take the
config/ac_func_accept_argtypes.m4 file from a newer release, rerun
autoconf, and rerun configure.

--
Alvaro Herrera ()
"In fact, the basic problem with Perl 5's subroutines is that  
they're not
crufty enough, so the cruft leaks out into user-defined code  
instead, by

the Conservation of Cruft Principle."  (Larry Wall, Apocalypse 6)



Re: [GENERAL] To Postgres or not

2005-07-14 Thread Gregory Youngblood
On Jul 13, 2005, at 9:57 PM, Alvaro Herrera wrote:

> On Thu, Jul 14, 2005 at 02:46:01PM +1000, Neil Conway wrote:
>> Vivek Khera wrote:
>>> The first sentence rules out MySQL, so the second sentence should
>>> read "So that leaves Postgres".  Your problem is solved ;-)
>>> (If you are accustomed to Oracle, you are probably expecting an ACID
>>> database, which rules out MySQL too).
>> Does MySQL with InnoDB not qualify as an ACID-compliant database?
> Not if you intermix InnoDB tables with MyISAM tables, which AFAIK many
> people seem to do.

Don't forget that MySQL will create a table as MyISAM, even when told to
create it as an InnoDB table, if, by some chance, InnoDB is not enabled.
InnoDB has to be specified at the time of table creation, but MySQL,
true to form, will not generate a warning that InnoDB is not enabled or
present, and will instead create the table using MyISAM. Worse yet, if
this is not caught, and since MySQL also fakes transaction support, it
can look like everything is working, until data has to be rolled back,
and it can't be. Needless to say, this can lead to some pretty messed-up
data in your tables, depending on your application.

It is the silence, combined with doing what it feels like and not what
you tell it to do, that makes me shy away from MySQL for applications
where data integrity is vital. You can't trust an error to be raised
when it should be. Very dangerous. I've heard these issues are being
addressed in 5.0 or 5.1 (I forget which).
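A quick way to catch the silent fallback described above (the table name here is invented; column names in SHOW TABLE STATUS vary by MySQL version):

```sql
CREATE TABLE accounts (id INT PRIMARY KEY, balance INT) ENGINE=InnoDB;
-- (older 4.0.x servers use TYPE=InnoDB instead of ENGINE=InnoDB)

-- If InnoDB is disabled, MySQL quietly creates the table as MyISAM.
-- Verify what you actually got by checking the Engine (or Type) column:
SHOW TABLE STATUS LIKE 'accounts';
```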

Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Gregory Youngblood
If linking it in directly via C would bring in the MySQL license, and  
you want to avoid that, what about one of the scripting languages  
such as perl or python, or possibly even ruby? Or, what about using  
UnixODBC to talk to MySQL.


I've written a few perl scripts when I need to convert MySQL into  
Postgres. Nothing formalized, I usually just recreate it each time I  
need to do something. My needs are typically pretty simple though,  
and I know what I'm converting, so it makes it easier.


I think one of the more difficult areas will be to convert unsigned  
fields from mysql into postgres. For smaller sizes it is possible to  
convert to postgres by moving one size up and using constraints to  
restrict numbers to be positive, and possibly within the mysql range  
too. But, the problem is unsigned bigint in mysql to postgresql.  
There's not another larger integer size that can be used that would  
allow the 18446744073709551615 (is that the max value?) max value  
available in mysql. Or am I missing something?


I think running into these would be rare, but it is something to be  
considered.
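A sketch of the workaround for both cases (column names invented; the literals are the MySQL unsigned maximums, 2^32-1 and 2^64-1):

```sql
create table t (
  -- mysql INT UNSIGNED: move up one size and constrain the range
  u_int  bigint
         check (u_int >= 0 and u_int <= 4294967295),
  -- mysql BIGINT UNSIGNED: no larger integer type, so fall back to numeric
  u_big  numeric(20,0)
         check (u_big >= 0 and u_big <= 18446744073709551615)
);
```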


Greg

On Jul 27, 2005, at 7:20 PM, Christopher Kings-Lynne wrote:


So far, the checklist I can see includes:
*  Maintaining conversion scripts



What I think we need is a C program that dumps directly from MySQL  
into PostgreSQL sql.


ie. Take the mysqldump source code and just modify its output.

Will inherit the MySQL license though :(

Chris




Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Gregory Youngblood
On Jul 27, 2005, at 9:53 PM, Tom Lane wrote:

> Gregory Youngblood <[EMAIL PROTECTED]> writes:
>> ... the problem is unsigned bigint in mysql to postgresql.  There's
>> not another larger integer size that can be used that would allow the
>> 18446744073709551615 (is that the max value?) max value available in
>> mysql. Or am I missing something?
> You'd have to translate that to NUMERIC, which would work but would
> take a bit of a performance hit ...

The most common places I've seen unsigned bigint used have been primary
keys for tables where the counter is expected to basically grow forever.
I've also seen it used to store unique user id numbers instead of
varchar fields. In both of those cases, the number is used as a key,
either alone or with another field, in an index. Would there be much of
a performance hit for that type of use? I can imagine the biggest
performance hit being mathematical calculations, such as sum(), etc. Or
would the performance hit be across the board?

Greg

Re: [GENERAL] PostgreSQL vs. MySQL

2005-08-01 Thread Gregory Youngblood


On Aug 1, 2005, at 4:33 PM, Robert Treat wrote:


On Monday 01 August 2005 13:52, Scott Marlowe wrote:


On Mon, 2005-08-01 at 11:44, [EMAIL PROTECTED] wrote:


Hi all,
I am sorry for a stupid easy question, but I'm a PostgreSQL novice.
Our development team has encountered problem with trying to  
install and
maintain cluster (pgcluster) on our production database. So they  
have

decided to switch the entire solution to MySql database.
a) have you got any good/bad experience with administering and  
running

cluster on PostrgeSQL DB?
b) would u switch PostgreSQL to MySql (7.4 --> 4.1) and why?



I would never switch from pgsql to mysql.

Here's a short list of the reasons why:

http://sql-info.de/mysql/gotchas.html

If they're having problems and haven't asked for help in these lists,
then I'm willing to bet they're just looking for an excuse to change to

what they're comfortable with, and not really serious about using
postgresql.


--snip--
Can someone point me to the multi-master replication docs for my$ql  
4.1?  I

agree with Scott, sounds like they are looking for an excuse.



I don't believe mysql 4.1 has multi-master replication. At least not  
open sourced. If you find them, I'd be interested in reading/learning  
about it.


I believe MySQL 5.0 will have a multi-master storage engine. But it
will have a lot of gotchas too. For one, it's all in memory, so be
prepared to have several machines with lots and lots of RAM. The
MySQL docs for 5.0 cover it quite well. Watch out for its
wastefulness though. It does things like reserve space based on the
max size of a field, so if you have a varchar(250), then every row in
the database will have 250 bytes of space allocated, even if you only
use 1. At least, that's how it was documented the last time I looked
at it.


I had to rule out the mysql cluster for a project I'm working on due  
to how some things are implemented.




Re: [GENERAL] DNS vs /etc/hosts

2005-08-04 Thread Gregory Youngblood
On Aug 4, 2005, at 8:13 AM, [EMAIL PROTECTED] wrote:

> I am changing from 7.2 to 8.0 and have both installed now on various
> Linux machines.  When I use the psql command line interface with a -h
> hostname, the connection time from 7.2 is instant while the connection
> time from 8.0 is 15 seconds.  My assumption is that 7.2 checks the
> /etc/hosts file first and if unable to find the specified host it
> reverts to a DNS lookup, and the 8.0 is just the opposite.  Is this a
> correct assumption, and if so, can I modify 8.0 to behave as 7.2 does?

Is this on the same machine, or have you changed machines when you
changed db versions?

(1) The lookups are usually handled by system calls, and assuming you
are on a Unix type system, the files /etc/host.conf and
/etc/nsswitch.conf will determine the order lookups are performed. Most
every system I have seen comes with a default configuration of using the
files first, and dns second. It might be useful to make sure these are
set correctly.

(2) Have you checked the 8.0 pg_hba.conf? It looks like ident is used. I
am not very familiar with ident, usually only seeing it used for IRC
chats, but I believe it looks to your client for the ident information.
Are you running an ident server, or do you possibly have a firewall that
just drops packets for blocked ports (assuming ident is among the
blocked ports)? I would guess that a simple dropped packet would make it
time out, while a rejection or no server on the port would cause the
ident connection to fail more quickly.

Just a couple of ideas.

Greg
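For reference, the lookup-order files mentioned in (1) typically contain entries like these on Linux (exact defaults vary by distribution):

```
# /etc/nsswitch.conf -- consult /etc/hosts before DNS
hosts: files dns

# /etc/host.conf -- honored by older resolver libraries
order hosts,bind
```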

Re: [GENERAL] psqsl -> remote db

2005-08-04 Thread Gregory Youngblood
What machine is remote? Linux? Solaris? or Mac?  I couldn't tell if  
the remote system or your workstation was a Mac.


I will assume the postgresql server is on a Mac, and that the Mac has  
its firewall enabled. On my Mac, to open a firewall for something  
like this, go to System Preferences, and then click Sharing. There  
you should see a Firewall button - click to display the firewall  
settings. Scroll through the Allow list and make sure there is not  
already an entry for PostgreSQL. If there is, make sure it is  
checked. If not, click the New... button. Enter a port name  
("PostgreSQL"), and enter "5432" in the TCP Ports field. [I do not  
believe postgresql uses UDP.] Click OK. Then make sure the PostgreSQL  
port was added to the Allow list and is checked. Close System  
Preferences, and answer Yes to save, if prompted. That should be it,  
the mac should now allow incoming traffic on PostgreSQL's port.


If the remote computer is not a Mac, the instructions above will not  
work.


Are you sure the remote postgresql server has been configured to  
listen on port 5432 ? By default, postgresql often will not listen to  
TCP connections unless explicitly instructed to do so. Depending on  
how postgresql is installed, it could be that it just isn't listening  
for a network connection, only allowing local connections.
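For reference, the server-side settings that control this differ between releases (values shown are illustrative; restrict listen_addresses to a specific address where possible):

```
# postgresql.conf, 7.4.x -- TCP/IP is off unless you enable it
tcpip_socket = true

# postgresql.conf, 8.0.x -- same idea, new parameter name
listen_addresses = '*'        # or a specific IP address
port = 5432
```

In either case pg_hba.conf also needs a "host" line permitting the client's address.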


On Aug 4, 2005, at 11:26 AM, Andrew Stewart wrote:

I'm sorry, I'm sure I should be asking this someplace more general,
but apparently the word 'port' is used in so many different contexts
that a google search turns up fruitless ;)

I'm just trying to access a remote postgresql database on port 5432,
which does not appear to be open on the remote computer.  I'm on
MacOSX and cannot figure out for the life of me how to open up this
port on the computer. This is probably a unix question if anything,
but any help would be appreciated.

-Andrew


-Andrew



Re: [GENERAL] Slow Inserts on 1 table?

2005-08-04 Thread Gregory Youngblood
On Aug 2, 2005, at 8:16 AM, Alvaro Herrera wrote:

> On Tue, Aug 02, 2005 at 10:01:50AM -0500, Dan Armbrust wrote:
>> I shouldn't have to manually run Analyze to make the DB be capable of
>> handling inserts involving tables with foreign keys correctly.  My
>> code that is doing the inserts is a java application that works
>> across multiple DBs - MySQL, PostgreSQL, DB2, MS Access, Sybase, etc.
>> [etc, rant removed]
> You don't _have_ to be rude.  This is a known limitation, and people
> have suggested the usual workarounds.  This is an open source project
> -- if you think you can make it better, please by all means post a
> patch.

Not to fan the flames, so to speak, but I do have one question. If this
is a known limitation, I'd expect the behavior to be consistent.
Instead, he is saying that the problem is intermittent. Sometimes it
runs fine, other times it slows down to a snail's pace. So, does that
mean the known problem is indeed intermittent, and not something that
happens every time? Or, is this an issue that can be mostly eliminated
with appropriate tuning?

I ask, because I may soon be facing similar problems, and forewarned is
forearmed. I'd like to anticipate this and tune things accordingly
before it becomes an issue.

Thanks,
Greg
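The usual workaround referred to in the thread is simply to refresh the planner statistics after the initial bulk load, so the foreign-key checks stop planning against an apparently-empty table (the table name below is a placeholder):

```sql
-- after bulk-loading the referenced (parent) table:
ANALYZE parent_table;
```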

Re: [GENERAL] DNS vs /etc/hosts

2005-08-04 Thread Gregory Youngblood
On Aug 4, 2005, at 2:39 PM, [EMAIL PROTECTED] wrote:

> Both dig and nslookup are fast on all machines.  'psql' is fast on all
> machines, as long as I am using the version compiled with version 7.2.
> It is only 'psql' compiled with version 8.0 that is slow.  I don't
> think DNS is the problem, but rather the way psql in version 8.0
> attempts to get the DNS info.  My Linux kernel version is 2.4.18.

Silly question. Could the version of psql from 8.0 be linked against
readline, and it's reading in and storing in memory some of the
information it needs to have cached in order to provide the
tab-completion feature? And 7.2 is not?

I've seen some delays with both mysql and pgsql when readline libraries
are involved on databases with lots of tables and fields in them.

Just a thought.

Greg

[GENERAL] PostgreSQL missing in SuSE 10?

2005-10-09 Thread Gregory Youngblood
I've been using SuSE and PostgreSQL for a fairly long time. Recently
(last 12 months), I've noticed that the 9.x (9.2 and 9.3 specifically)
versions of SuSE do not include PostgreSQL on the CD install -- only on
the DVD. At first (9.2), I thought it was just a glitch that didn't get
fixed in 9.3. Now, it seems to have been deliberate. 

With SuSE 10.0, it appears PostgreSQL is not even included on the DVD.
At least not the eval DVD currently available for download. If SuSE 10.0
users want to have a PostgreSQL server, they'll have to build it from
source themselves (or find the RPM online). The Novell web site says
PostgreSQL 8.0.3 is included in the SuSE 10 distribution, and includes
the packages for the full install. Reality appears to be just the libs.

I'm hoping this is a packaging glitch at Novell, and not something more
serious. Does anyone have any information why SuSE appears to be making
it more difficult to install PostgreSQL on their distro? I am still
searching the opensuse forums for more information as well, but really
wanted to hear from this side of the fence.

Thanks,
Greg




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL missing in SuSE 10?

2005-10-11 Thread Gregory Youngblood
On Mon, 2005-10-10 at 11:04 -0700, Steve Crawford wrote:
> > > Gregory Youngblood <[EMAIL PROTECTED]> writes:
> > >> I've been using SuSE and PostgreSQL for a fairly long time.
> > >> Recently (last 12 months), I've noticed that the 9.x (9.2 and
> > >> 9.3 specifically) versions of SuSE do not include PostgreSQL on
> > >> the CD install -- only on the DVD. At first (9.2), I thought it
> > >> was just a glitch that didn't get fixed in 9.3. Now, it seems to
> > >> have been deliberate.
> 
> The real problem is that SuSE is not upfront about the difference 
> between the CD and the DVD versions of their product. Someone (me) 
> picking up the box would assume that the versions are the same but 
> packaged on two different media but they (um, me) would be wrong. 

You are absolutely correct! It used to be that the DVD and the CD
versions were identical, or pretty close to it. I never had to use the
DVD, and in fact, SuSE DVDs have been notoriously wishy-washy, unable to
complete the installation process on about 20% of the DVD readers I've
used (no other problems for those drives with other disks), requiring
the CDs to be used for an install. 

And to move it back on topic -- now, it appears PostgreSQL is not on the
DVD either. For the record, this is the Eval SuSE 10.0 DVD.






Re: [GENERAL] PostgreSQL missing in SuSE 10?

2005-10-11 Thread Gregory Youngblood
SuSE 10 is the "commercial" release from Novell. It contains additional
packages not in the opensuse.org open source version. There is an eval
cd and dvd set that can be downloaded of the "commercial" release -
though I do not believe it is licensed for anything beyond evaluation
purposes, but I could be mistaken. The open source opensuse edition is
freely available and not limited to evaluation.

The packages not in the opensuse version are listed here:
http://www.novell.com/coolsolutions/tip/16015.html

There has been some confusion (and annoyance) because I believe packages
that relied on one of the removed packages were also removed. Due to
Java not being included, I believe the Eclipse environment was also not
included. Don't quote me on this -- I'm going from memory.

Here is a link to instructions on tweaking the opensuse version:
http://www.thejemreport.com/mambo/content/view/178/42/

Hope this helps. If anyone has any more general SuSE questions, please
ask off-list. I'll try to reply. 

To move this back on-topic, I have been looking at the files available via
FTP install for SuSE 10 (at least for the opensuse version), and the
postgresql packages are there. So it's looking more like a packaging
error in the creation of the main DVD (at least as far as postgresql not
being included goes). Hopefully.

Greg


On Tue, 2005-10-11 at 13:57 -0400, Alex Turner wrote:
> I know this is a bit off-topic, but this is part of the problem I'm
> encountering.  I can't find a definitive document on SuSe licensing
> and what is 'eval' and what I have to pay for... can I still download
> the full version for free? or is that just the eval?  It's kind of
> confusing.  Does anyone have information/a link to documentation that
> clarifies that stuff?
> 
> Alex Turner
> NetEconomist
> 
> On 10/11/05, Gregory Youngblood <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-10-10 at 11:04 -0700, Steve Crawford wrote:
> > > > Gregory Youngblood <[EMAIL PROTECTED]> writes:
> > > >> I've been using SuSE and PostgreSQL for a fairly long time.
> > > >> Recently (last 12 months), I've noticed that the 9.x (9.2 and
> > > >> 9.3 specifically) versions of SuSE do not include PostgreSQL on
> > > >> the CD install -- only on the DVD. At first (9.2), I thought it
> > > >> was just a glitch that didn't get fixed in 9.3. Now, it seems to
> > > >> have been deliberate.
> >
> > The real problem is that SuSE is not upfront about the difference
> > between the CD and the DVD versions of their product. Someone (me)
> > picking up the box would assume that the versions are the same but
> > packaged on two different media but they (um, me) would be wrong.
>
> You are absolutely correct! It used to be that the DVD and the CD
> versions were identical, or pretty close to it. I never had to use the
> DVD, and in fact, SuSE DVDs have been notoriously wishy-washy, unable to
> complete the installation process on about 20% of the DVD readers I've
> used (no other problems for those drives with other disks), requiring
> the CDs to be used for an install.
>
> And to move it back on topic -- now, it appears PostgreSQL is not on the
> DVD either. For the record, this is the Eval SuSE 10.0 DVD.




Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-18 Thread Gregory Youngblood




On Mon, 2005-10-17 at 12:05 -0700, Chris Travers wrote:


5)  Independant patent license firms.  I guess it is a possibility, but in the end, companies that mostly manufacture lawsuits usually go broke.  Why would you sue a non-profit if you were mostly trying to make a buck with the lawsuit?



IANAL, but I think the risk from this category would be higher, for the simple reason that a non-profit might be viewed as low-hanging fruit and easy to pluck.

Remember, what some of these (a lot?) "patent license firms" (now that's a lot nicer than what I would have called them) try to do is get easy targets to license the patent. This makes it easier to get other companies to license the patent. 

Often, the smaller targets will settle and license the patent because the cost to defend themselves is so high that it is cheaper to pay the license fees than to fight (this sounds familiar, oh yeah, protection rackets). These "firms" then use those licensees to legitimize their patent, claiming others licensing the patent "proves" their patent is enforceable. Then they target bigger and more lucrative fish.

[My jaded comments aside, I think there is a place for the US patent system to protect true invention and innovation. It's just been twisted into something -- well, I don't need to say it, you already know.]




Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-18 Thread Gregory Youngblood




On Tue, 2005-10-18 at 13:07 -0700, Chris Travers wrote:


Gregory Youngblood wrote:

> On Mon, 2005-10-17 at 12:05 -0700, Chris Travers wrote:
>
>>5)  Independant patent license firms.  I guess it is a possibility, but in the end, companies that mostly manufacture lawsuits usually go broke.  Why would you sue a non-profit if you were mostly trying to make a buck with the lawsuit?
>


snip


> Often, the smaller targets will settle and license the patent because 
> the cost to defend themselves is so high that it is cheaper to pay the 
> license fees than to fight (this sounds familiar, oh yeah, protection 
> rackets). These "firms" then use those licensees to legitimize their 
> patent, claiming others licensing the patent "proves" their patent is 
> enforceable. Then they target bigger and more lucrative fish.

IANAL either, but I am hard pressed to determine where such a settlement 
proves anything (at least in the US).  Now if you go to trial and lose, 
then the same defenses may be unavailable to others.  I.e. if the court 
determines that your arguments for X being prior art do not impact the 
invention in question, then the next defendant will probably be barred 
from arguing that X is prior art at least in reference to the same 
invention.  But this cuts both ways...  If the courts determine that X 
*is* prior art, then the patent may be limited by the courts.  I.e. 
prior suits don't prevent new defenses which is why serial enforcement 
of patents is so dangerous (sooner or later, maybe someone finds a chink 
in the armor), but facts necessarily decided as part of one case are 
generally considered beyond dispute.



That's why I put the word proves in quotes when I wrote it. It doesn't prove anything. But, it does give them ammunition. 

The smart ones will stay out of court, precisely for the reasons you listed. As they move up the food chain, going after larger and larger targets, you can bet one of the arguments they will throw around is that X others have already licensed this patent, making their patent seem more legitimate. 

Still doesn't make them right.




Re: [GENERAL] Why database is corrupted after re-booting

2005-10-26 Thread Gregory Youngblood




Talking with various people who have run postgres at different times, one reason they always come back with for why mysql is supposedly so much better is: postgresql corrupts too easily and you lose your data.

Personally, I've not seen corruption in postgres since the 5.x or 6.x versions from several years ago. And I have seen corruption on mysql (though I could not isolate whether it was a reiserfs or a mysql problem - both were supposedly stable releases installed as part of a distro).

Is corruption a problem? I don't think so - but I want to make sure I haven't had my head in the sand for a while. :) I realize this instance appears to be on Windows, where postgres is relatively new as a native program. I'm really after the answer for the more mature platforms (including Linux).

Thanks,
Greg

On Wed, 2005-10-26 at 18:27 +0300, Andrus wrote:


Yesterday  computer running Postgres re-boots suddenly. After that,

select * from firma1.klient

returns

ERROR:  invalid page header in block 739 of relation "klient"

I have Quantum Fireball IDE drive, write caching is turned OFF.
I have Windows XP with FAT32  file system.
I'm using PostgreSQL 8.0.2 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 
3.4.2 (mingw-special)

Why the corruption occurs ?  How to avoid data corruption?
Will NTFS file system prevent all corruptions ? If yes, how to convert FAT32 
to NTFS without losing data in drive ?

Andrus. 









[GENERAL] Resend: Question about PostgreSQL, pgpool, and Postfix

2005-11-25 Thread Gregory Youngblood




[I don't know if this message made it out before or not. If it did, please accept my apologies for the duplicate message. Thanks.]

I'm running postfix 2.0.18 with a postgresql 8.0.3 database backend. I'm also using courier imap/pop servers connected to postgresql as well. All email users are stored in tables, with views providing lookup information to courier and postfix. It works very well, with one exception. 

Postfix likes to hang on to idle connections to the database, even if there are not that many postfix processes running. For example, with postfix:

18338 ?    Ss 0:00 /usr/lib/postfix/master
18339 ?    S  0:00  \_ pickup -l -t fifo -u
18340 ?    S  0:00  \_ qmgr -l -t fifo -u
18344 ?    S  0:00  \_ trivial-rewrite -n rewrite -t unix -u
18358 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18360 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18361 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18362 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18363 ?    S  0:00  \_ cleanup -z -t unix -u
18370 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18371 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18372 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18373 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18386 ?    S  0:00  \_ cleanup -z -t unix -u
18390 ?    S  0:00  \_ cleanup -z -t unix -u
18397 ?    S  0:00  \_ cleanup -z -t unix -u
18401 ?    S  0:00  \_ cleanup -z -t unix -u
18402 ?    S  0:00  \_ cleanup -z -t unix -u
18403 ?    S  0:00  \_ cleanup -z -t unix -u
18427 ?    S  0:00  \_ cleanup -z -t unix -u
18440 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18441 ?    S  0:00  \_ smtpd -n smtp -t inet -u -s 2
18557 ?    S  0:00  \_ cleanup -z -t unix -u
18558 ?    S  0:00  \_ virtual -t unix

This is what postgresql looks like:
17610 pts/1    S  0:00 /usr/bin/postmaster -D /var/lib/pgsql/data
17612 pts/1    S  0:00  \_ postgres: writer process
17613 pts/1    S  0:00  \_ postgres: stats buffer process
17614 pts/1    S  0:00  |   \_ postgres: stats collector process
17916 pts/1    S  0:00  \_ postgres: courier netio 127.0.0.1(3037) idle
18345 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27587) idle
18346 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27588) idle
18347 pts/1    S  0:00  \_ postgres: domains netio 127.0.0.1(27589) idle
18364 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27590) idle
18365 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27591) idle
18366 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27592) idle
18367 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27593) idle
18377 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27596) idle
18378 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27597) idle
18379 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27598) idle
18387 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27601) idle
18388 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27602) idle
18389 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27603) idle
18394 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27604) idle
18395 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27605) idle
18396 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27606) idle
18398 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27607) idle
18399 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27608) idle
18400 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27609) idle
18404 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27610) idle
18408 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27611) idle
18409 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27612) idle
18410 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27613) idle
18411 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27614) idle
18412 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27615) idle
18413 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27616) idle
18420 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27618) idle
18422 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27619) idle
18423 pts/1    S  0:00  \_ postgres: mailbox netio 127.0.0.1(27620) idle
18428 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27622) idle
18429 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27623) idle
18436 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27624) idle
18437 pts/1    S  0:00  \_ postgres: transport netio 127.0.0.1(27625) idle
18438 pts/1    S  0:00  \_ postgres: virtual netio 127.0.0.1(27626) idle
18439 pts/1    S  0:00  \_ postgres: transport netio 127

Re: [GENERAL] Resend: Question about PostgreSQL, pgpool, and Postfix

2005-11-26 Thread Gregory Youngblood




Of course, RTFM. :) Thanks for the pointer. I'll pursue that path and see how it goes.

Thanks,
Greg
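
For the archives, a PostgreSQL version of the proxymap approach from John's pointer might look roughly like the fragment below. The map file names here are made up for illustration; the actual lookup details (hosts, dbname, query) live in the per-map .cf files:

```
# /etc/postfix/main.cf (fragment)
# Prefixing pgsql lookups with proxy: makes the short-lived smtpd,
# cleanup, and virtual processes ask the proxymap daemon for lookups
# instead of each opening its own database connection.
virtual_alias_maps   = proxy:pgsql:/etc/postfix/pgsql-virtual.cf
virtual_mailbox_maps = proxy:pgsql:/etc/postfix/pgsql-mailbox.cf
transport_maps       = proxy:pgsql:/etc/postfix/pgsql-transport.cf
```

The total number of database connections is then bounded by the number of proxymap server processes rather than by the number of postfix daemons.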

On Sat, 2005-11-26 at 17:41 +, John Sidney-Woollett wrote:


Take a look at postfix's proxymap feature - I believe it works for MySQL
and Postgres, and may solve your problem.

http://www.postfix.org/proxymap.8.html

From the docs:

"To consolidate the number of open lookup tables by sharing one open
table among multiple processes. For example, making mysql connections
from every Postfix daemon process results in "too many connections"
errors. The solution:

virtual_alias_maps =
    proxy:mysql:/etc/postfix/virtual_alias.cf

The total number of connections is limited by the number of proxymap
server processes."


John

Gregory Youngblood wrote:
> [I don't know if this message made it out before or not. If it did,
> please accept my apologies for the duplicate message. Thanks.]
> 
> I'm running postfix 2.0.18 with a postgresql 8.0.3 database backend. I'm
> also using courier imap/pop servers connected to postgresql as well. All
> email users are stored in tables, with views providing lookup
> information to courier and postfix. It works very well, with one
> exception. 
> 
> [snip]

Re: [GENERAL] E-mail harvesting on PG lists?

2006-01-07 Thread Gregory Youngblood




I created an account for perl-cpan and it got hit with spam/phishing attempts in less than a week.

There's not a lot that can be done about it. It's a losing battle to try and fight. There are some things you can do, but it won't be 100% effective. The closer you get to 100% effective, the more likely you are to throw the baby out with the bathwater. 

I started using dedicated addresses a few years ago. Anytime I sign up for something, I use an address dedicated for that purpose. Then, when I start seeing spam patterns, I know where the address was used. In the case of mailing lists, there's not much to hide. However, when you sign up for something with a legit store, and then 2 or 3 months later you start getting bombarded with spam having nothing to do with that store -- it's a pretty safe bet where the spammer got your address (unless you use a very easy to guess address like a simple first name or something).
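
As an aside, one cheap way to mint dedicated addresses without creating a new mailbox for each signup - assuming a postfix setup, and not necessarily what I do - is postfix's recipient_delimiter:

```
# /etc/postfix/main.cf (fragment)
# With this set, mail to user+somestore@example.com is delivered to
# user@example.com, so each signup can get its own traceable tag.
recipient_delimiter = +
```

The tag survives in the To: header, so when spam starts arriving you can see exactly which signup leaked the address.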

The other problem is dictionary attacks. There are distributed networks of bots that do nothing except try a dictionary of names against your mailserver. You can see how coordinated they are when you get dictionary scans from IP addresses all over the globe, starting with A and working through the list without overlapping words.
They are getting more devious, too. I found one that had a bug in its tool, so it was obvious the connections were linked: they overlapped names every so often (unless it was a single botnet running two separate lists, which is also possible).

It's ugly, no matter how you slice it.

Greg