On Sat, 2008-01-19 at 23:46 +0000, Gordan Bobic wrote:
> David Fetter wrote:
>
> > In that case, use one of the existing solutions. They're all way
> > easier than re-inventing the wheel.
>
> Existing solutions can't handle multiple masters. MySQL can do it at
> least in a ring arrangement.
>
I was wondering how many people have PGCluster used in production
environments.
How stable is it? Are there any problems? Are there any versions that
should be avoided? Which is the better choice for production use
right now, 1.1 or 1.3? Are there any gotchas to be avoided?
I am evaluating
I run postfix and have it connected to postgresql for just about
everything. Postfix is very sloppy on the database side, or so it seems.
I ended up having to configure postfix to limit the number of
processes it will start, and then make sure postgres has more
connections available than that.
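The sizing rule above (postfix process cap below the postgres connection cap) can be sketched as a pair of config fragments. The values are illustrative assumptions, not taken from the original thread:

```
# /etc/postfix/main.cf -- cap the total number of postfix processes
# (real postfix parameter; 50 is an assumed value)
default_process_limit = 50

# postgresql.conf -- allow more connections than postfix can ever open,
# leaving headroom for other clients and admin sessions
max_connections = 100
```

The point is only the inequality: whatever limits you pick, `max_connections` must exceed the worst-case number of database-using postfix processes.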
Thanks for the proxymap tip. I will definitely look into it.
However, it probably won't do much for me, since I have user and
directory information (i.e. sensitive information) looked up, and
proxymap very clearly says not to use it for that. At least, not yet.
Though it will undoubtedly he
I am looking for some information about clustering and replication
options for postgresql.
I am aware of pgcluster, but have been unable to find anyone willing
to share details about actually using it in a production environment.
That's a little disconcerting. Is pgcluster not really ready
Hopefully I'm understanding your question correctly. If so, maybe
this will do what you are wanting.
First, a couple of questions. Do you have this data in a table
already, and are looking to extract information based on the dates?
Or, are you basically wanting something like a for loop so
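If the question is about iterating over a range of dates, newer versions of PostgreSQL than the 8.x discussed here can often replace the "for loop" with a set-returning function. A hedged sketch (the date range is made up):

```sql
-- One row per day in January 2005, with no procedural loop:
-- generate_series() emits the timestamps, and the cast trims them to dates.
SELECT d::date
FROM generate_series('2005-01-01'::date,
                     '2005-01-31'::date,
                     interval '1 day') AS g(d);
```

The resulting set can then be joined against an existing table to find, say, days with no matching rows.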
I've been using postgres off and on since about 1997/98. While I have
my personal theories about tuning, I like to make sure I stay
current. I am about to start a rather thorough, application-specific
evaluation of postgresql 8, running on a Linux server (most likely
the newly released Debia
I believe you can probably use views to accomplish this.
You create a view that is populated based on their username. Then you
remove access to the actual table, and grant access to the view.
When people look at the table, they will only see the data in the
view and will not have access to
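The view-plus-grants approach above can be sketched in SQL. The table, view, and column names here are hypothetical, invented for illustration:

```sql
-- Hypothetical base table holding all users' rows
CREATE TABLE accounts (username text, balance numeric);

-- The view filters rows to whichever role is connected
CREATE VIEW my_accounts AS
    SELECT * FROM accounts WHERE username = current_user;

-- Take away direct access to the table, hand out access to the view
REVOKE ALL ON accounts FROM PUBLIC;
GRANT SELECT ON my_accounts TO PUBLIC;
```

Note that this restricts ordinary roles only; the table owner and superusers can still read `accounts` directly.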
So it seems more reasonable to run my application as the Postgres
superuser and implement security in the application.
Andrus.
Andrus.
"Gregory Youngblood" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
I believe you can probably use views to accomplish this.
You create a v
I've been going through some old backups, and found databases from
pgsql versions as old as 7.0 (7.0, 7.1, 7.3, and 7.4 to be precise).
I'm trying to build these older versions specifically so I can dump
the data and see what I want to keep and what I can erase.
7.3 and 7.4 appear to have b
nue fails, I'll post back with actual error messages. I do
appreciate your assistance.
Thanks,
Greg
On Jul 13, 2005, at 7:58 PM, Alvaro Herrera wrote:
On Wed, Jul 13, 2005 at 04:29:41PM -0700, Gregory Youngblood wrote:
It gets through most of the make process, but then at the point
On Jul 13, 2005, at 9:57 PM, Alvaro Herrera wrote:
On Thu, Jul 14, 2005 at 02:46:01PM +1000, Neil Conway wrote:
Vivek Khera wrote:
The first sentence rules out MySQL, so the second sentence should read "So that leaves Postgres". Your problem is solved ;-)
(If you are accustomed to Oracle, you are p
If linking it in directly via C would bring in the MySQL license, and
you want to avoid that, what about one of the scripting languages
such as perl or python, or possibly even ruby? Or, what about using
UnixODBC to talk to MySQL.
I've written a few perl scripts when I need to convert MySQL
On Jul 27, 2005, at 9:53 PM, Tom Lane wrote:
Gregory Youngblood <[EMAIL PROTECTED]> writes:
... the problem is unsigned bigint in mysql to postgresql. There's not another larger integer size that can be used that would allow the 18446744073709551615 (is that the max value?) max value available in
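For reference, 18446744073709551615 is indeed 2^64 - 1, the maximum of MySQL's BIGINT UNSIGNED, and it exceeds PostgreSQL's signed bigint maximum of 9223372036854775807. One common workaround, sketched here with a hypothetical table name, is NUMERIC(20,0):

```sql
-- numeric(20,0) can hold all 20-digit values of MySQL's BIGINT UNSIGNED,
-- at the cost of numeric rather than native integer storage and arithmetic.
CREATE TABLE counters (
    id numeric(20,0) PRIMARY KEY,
    -- optional guard re-creating the unsigned range
    CHECK (id >= 0 AND id <= 18446744073709551615)
);
```

If the real values never actually exceed 2^63 - 1, plain bigint is the cheaper choice.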
On Aug 1, 2005, at 4:33 PM, Robert Treat wrote:
On Monday 01 August 2005 13:52, Scott Marlowe wrote:
On Mon, 2005-08-01 at 11:44, [EMAIL PROTECTED] wrote:
Hi all,
I am sorry for a stupid easy question, but I'm a PostgreSQL novice.
Our development team has encountered problem with trying to
On Aug 4, 2005, at 8:13 AM, [EMAIL PROTECTED] wrote:
I am changing from 7.2 to 8.0 and have both installed now on various Linux machines. When I use the psql command line interface with a -h hostname, the connection time from 7.2 is instant while the connection time from 8.0 is 15 seconds. My assu
What machine is remote? Linux? Solaris? or Mac? I couldn't tell if
the remote system or your workstation was a Mac.
I will assume the postgresql server is on a Mac, and that the Mac has
its firewall enabled. On my Mac, to open a firewall for something
like this, go to System Preferences, a
On Aug 2, 2005, at 8:16 AM, Alvaro Herrera wrote:
On Tue, Aug 02, 2005 at 10:01:50AM -0500, Dan Armbrust wrote:
I shouldn't have to manually run Analyze to make the DB be capable of handling inserts involving tables with foreign keys correctly. My code that is doing the inserts is a java applicatio
On Aug 4, 2005, at 2:39 PM, [EMAIL PROTECTED] wrote:
Both dig and nslookup are fast on all machines. 'psql' is fast on all machines, as long as I am using the version compiled with version 7.2. It is only 'psql' compiled with version 8.0 that is slow. I don't think DNS is the problem, but rather
I've been using SuSE and PostgreSQL for a fairly long time. Recently
(last 12 months), I've noticed that the 9.x (9.2 and 9.3 specifically)
versions of SuSE do not include PostgreSQL on the CD install -- only on
the DVD. At first (9.2), I thought it was just a glitch that didn't get
fixed in 9.3. N
On Mon, 2005-10-10 at 11:04 -0700, Steve Crawford wrote:
> > > Gregory Youngblood <[EMAIL PROTECTED]> writes:
> > >> I've been using SuSE and PostgreSQL for a fairly long time.
> > >> Recently (last 12 months), I've noticed that the 9.x (9.2 and
entation that
> clarifies that stuff?
>
> Alex Turner
> NetEconomist
>
> On 10/11/05, Gregory Youngblood <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-10-10 at 11:04 -0700, Steve Crawford wrote:
> > > > Gregory Youngblood <[EMAIL PROTECTED]>
On Mon, 2005-10-17 at 12:05 -0700, Chris Travers wrote:
5) Independent patent license firms. I guess it is a possibility, but in the end, companies that mostly manufacture lawsuits usually go broke. Why would you sue a non-profit if you were mostly trying to make a buck with the lawsuit?
On Tue, 2005-10-18 at 13:07 -0700, Chris Travers wrote:
Gregory Youngblood wrote:
> On Mon, 2005-10-17 at 12:05 -0700, Chris Travers wrote:
>
>>5) Independent patent license firms. I guess it is a possibility, but in the end, companies that mostly manufacture lawsuits usua
Talking with various people who ran postgres at different times, one thing they always come back with on why mysql is so much better: postgresql corrupts too easily and you lose your data.
Personally, I've not seen corruption in postgres since 5.x or 6.x versions from several years ago. And,
[I don't know if this message made it out before or not. If it did, please accept my apologies for the duplicate message. Thanks.]
I'm running postfix 2.0.18 with a postgresql 8.0.3 database backend. I'm also using courier imap/pop servers connected to postgresql as well. All email users are s
" errors. The solution:
virtual_alias_maps =
proxy:mysql:/etc/postfix/virtual_alias.cf
The total number of connections is limited by the
number of proxymap server processes."
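Since the connection count is bounded by the number of proxymap processes, that bound can be raised in master.cf. A hedged sketch; the maxproc value of 20 is an assumption, and the stock entry uses "-" (the default_process_limit):

```
# /etc/postfix/master.cf -- raise the proxymap process cap (7th column)
# service  type  private unpriv chroot wakeup maxproc command
proxymap   unix  -       -      n      -      20      proxymap
```

Each proxymap process multiplexes many postfix clients onto one database connection, which is the whole point of the proxy: prefix.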
John
Gregory Youngblood wrote:
> [I don't know if this message made it out before or not. If it did,
> ple
I created an account for perl-cpan and it got hit with spam/phishing attempts in less than a week.
There's not a lot that can be done about it. It's a losing battle to try and fight. There are some things you can do, but it won't be 100% effective. The closer you get to 100% effective, the mor