Hi,
AFAICS from the user requests, many people are not aware of the
compatibility RPM we built:
http://developer.PostgreSQL.org/~devrim/compat-postgresql-libs-3-2PGDG.i686.rpm
is the compatibility RPM that fixes the problem which arose with
P
"David Parker" <[EMAIL PROTECTED]> writes:
> Sorry, neglected the version yet again: 7.4.5. What happens is that we
> have active connections accessing tables that are being replicated by
> slony. Then somebody does an uninstall of slony, which removes the slony
> trigger from those tables. Then we
"gabriele zelasco" <[EMAIL PROTECTED]> writes:
> I would like to start a transaction with an SQL function.
> When the user presses the "edit" button on my form, I would like to lock the current row.
> After the user has modified the data on the form, pressing the "save" button I would save
> the modified row with an SQL update function
On Thu, May 26, 2005 at 05:04:37PM -0400, Hrishikesh Deshmukh wrote:
> Is it possible to connect a DB in PostgreSQL to a DB in MySQL? I
> know it's a crazy idea!
It's called DBI-Link.
http://pgfoundry.org/projects/dbi-link/
Cheers,
D
--
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +
"daniellewis" <[EMAIL PROTECTED]> writes:
> PostgreSQL (Version 8.0.1). I installed from fink; tcl/tk 8.4.1-12, and
> changed the pgaccess bash script to read wish8.4. I tried to run this
> and I got the following error:
> Application initialization failed: no display name and no $DISPLAY
> environ
LOL..not looney!
On 5/26/05, Matt Miller <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-05-26 at 17:21 -0400, Hrishikesh Deshmukh wrote:
> > I have a little schema in pgsql and some annotation in mysql;
> > ...
> > if I could make these two talk
> > ...
> > So the question, and frankly I tho
On Thu, 2005-05-26 at 17:21 -0400, Hrishikesh Deshmukh wrote:
> I have a little schema in pgsql and some annotation in mysql;
> ...
> if I could make these two talk
> ...
> So the question, and frankly I thought it was a crazy thought!
> The replies so far indicate that I am not looney at all ;)
Well
Sorry, neglected the version yet again: 7.4.5. What happens is that we
have active connections accessing tables that are being replicated by
slony. Then somebody does an uninstall of slony, which removes the slony
trigger from those tables. Then we start getting the OID error.
If this should inde
I have a little schema in pgsql and some annotation in mysql; either
way, transfer of the schema might result in data type etc. conflicts. So if
I could make these two talk then I don't have to worry about schema
transfer. So the question, and frankly I thought it was a crazy thought!
The replies so far ind
On 5/26/05, Hrishikesh Deshmukh <[EMAIL PROTECTED]> wrote:
> Is it possible to connect a DB in PostgreSQL to a DB in MySQL?
> I know it's a crazy idea!
Why, of course. I've been doing that.
All you need is to write a set of functions, for example in PL/PerlU,
some of them being set-returning functions
Could you point me to documentation regarding this? It would be a big help.
Thanks,
Hrishi
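The PL/PerlU idea above can be sketched roughly as follows. This is a hypothetical, untested sketch: it assumes PL/PerlU and the DBD::mysql driver are installed, the connection string, table, and column names are all placeholders, and `return_next` needs a reasonably recent PL/Perl (on older servers a set-returning PL/Perl function returns an array reference instead):

```sql
-- Composite type describing the rows we expect back from MySQL
CREATE TYPE annotation_row AS (id integer, note text);

-- Untrusted PL/Perl, so we can use DBI from inside the server
CREATE OR REPLACE FUNCTION mysql_annotations()
RETURNS SETOF annotation_row AS $$
    use DBI;
    my $dbh = DBI->connect('dbi:mysql:database=mydb;host=localhost',
                           'user', 'password');
    my $sth = $dbh->prepare('SELECT id, note FROM annotation');
    $sth->execute;
    while (my $row = $sth->fetchrow_hashref) {
        return_next($row);   -- stream each MySQL row back as a result row
    }
    $dbh->disconnect;
    return;
$$ LANGUAGE plperlu;

-- The MySQL data can then be joined against local tables:
SELECT * FROM mysql_annotations();
```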
On 5/26/05, Dann Corbit <[EMAIL PROTECTED]> wrote:
> Of course it is possible. In fact, it's easy.
>
> Just use JDBC or ODBC or OLEDB or a .NET provider and join to both
> database systems.
>
> There is n
Hi. I'm using PostgreSQL 8.0.3 under Win2000 and developing with VS2003
(Npgsql .NET provider). I would like to start a transaction with an SQL
function. When the user presses the "edit" button on my form, I would like to lock the current
row. After the user has modified the data on the form, pressing the "save" button I would
sa
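The usual pattern for what the poster describes is SELECT ... FOR UPDATE inside a transaction. A minimal sketch (table and column names are made up):

```sql
BEGIN;
-- "edit" pressed: lock the row so nobody else can change it
SELECT * FROM mytable WHERE id = 42 FOR UPDATE;

-- ... user edits the data in the form ...

-- "save" pressed: write the changes; COMMIT releases the lock
UPDATE mytable SET descr = 'new value' WHERE id = 42;
COMMIT;
```

One caveat: holding a transaction open across user think-time ties up a connection and a row lock indefinitely, so an optimistic scheme (compare a version or timestamp column at save time) is often preferred for interactive forms.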
Hello,
I'm quite new to PostgreSQL, pgaccess and Tcl/Tk... Here is my
scenario:
PROBLEM 1:
I'm running X11R6 on Apple X11 (on OS X 10.3.8), and I want to run pgaccess
(of which I have version 0.98.7 from http://ns.flex.ro/pgaccess/ ). I have
PostgreSQL (Version 8.0.1). I installed from fink; tcl/tk 8.4
Of course it is possible. In fact, it's easy.
Just use JDBC or ODBC or OLEDB or a .NET provider and join to both
database systems.
There is nothing to it.
I can make a join where tables from RMS and DB/2 and Oracle and
PostgreSQL and MySQL are all participating in the SQL statement with
ease.
Is it possible to connect a DB in PostgreSQL to a DB in MySQL?
I know it's a crazy idea!
H
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
On 5/26/05, Hervé Inisan <[EMAIL PROTECTED]> wrote:
>
> Hi everybody!
>
> I have a trigger like this:
>
> CREATE TRIGGER mytrigger
>AFTER INSERT OR UPDATE OR DELETE
>ON myschema.mytable
>FOR EACH ROW
>EXECUTE PROCEDURE myschema.myfunction(myarg);
>
> It sends an argument to myfu
"David Parker" <[EMAIL PROTECTED]> writes:
> Something that we end up doing sometimes in our failover testing is
> removing slony replication from an "active" (data provider) server.
> Because this involves removing triggers from tables, we end up with
> currently connected clients getting a bunch
Hi everybody!
I have a trigger like this:
CREATE TRIGGER mytrigger
AFTER INSERT OR UPDATE OR DELETE
ON myschema.mytable
FOR EACH ROW
EXECUTE PROCEDURE myschema.myfunction(myarg);
It sends an argument to myfunction(), and I can retrieve this value in
TG_ARGV[0]. Fine.
What I'm trying
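A minimal trigger function that reads the argument might look like this (a plpgsql sketch that only echoes the argument; the old-style quoted body is used since dollar quoting is not available before 8.0):

```sql
CREATE OR REPLACE FUNCTION myschema.myfunction() RETURNS trigger AS '
BEGIN
    -- TG_ARGV[0] holds the first argument given in CREATE TRIGGER
    RAISE NOTICE ''myfunction called with arg: %'', TG_ARGV[0];
    RETURN NEW;
END;
' LANGUAGE plpgsql;
```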
On May 26, 2005, at 2:41 PM, David Parker wrote:
But I'm wondering - shouldn't that be part of normal server startup,
cleaning out the pg_listener table? Or has this been addressed in
8.X.?
Or is there a reason this isn't a good idea?
Try slony 1.0.5, which fixed *many* issues and bugs.
Tom Lane writes:
> Himanshu Baweja <[EMAIL PROTECTED]> writes:
> > why have maintenance_work_mem and work_mem been
> > restricted to 1gb...
> So as not to overflow on 32-bit machines.
Then why not add a check during configure (before compiling) to see whether it's a 32- or 64-bit machine; in the past we
Hi,
Does anybody know of any commercial or open source archiving solutions
available out there?
We need to be able to archive data/records from certain tables that
are more than 1 year old.
Thank you in advance.
J
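Absent a packaged tool, a plain-SQL sweep is often enough for this. A hedged sketch, with made-up table and column names:

```sql
BEGIN;
-- Copy rows older than a year into the archive table ...
INSERT INTO orders_archive
    SELECT * FROM orders
    WHERE created_at < now() - interval '1 year';
-- ... then remove them from the live table, same predicate
DELETE FROM orders
    WHERE created_at < now() - interval '1 year';
COMMIT;
```

Doing both steps in one transaction means a reader never sees a row in both tables or in neither.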
"David Parker" <[EMAIL PROTECTED]> writes:
> But I'm wondering - shouldn't that be part of normal server startup,
> cleaning out the pg_listener table?
Perhaps, but the code is written such that it's unlikely to be a major
problem --- notifying processes automatically clean out entries that
don't
On May 26, 2005, at 11:55 AM, Manuel García wrote:
> Hello, does somebody know if it is possible to catch all the sentences
> applied to one table using triggers and a function in C, maybe? That's
> because I need to create one log table with all the sentences. Once I
> have that, I am going to use all the senten
Something that we
end up doing sometimes in our failover testing is removing slony replication
from an "active" (data provider) server. Because this involves removing triggers
from tables, we end up with currently connected clients getting a bunch of "OID
123 not found" errors, where the OID
Himanshu Baweja <[EMAIL PROTECTED]> writes:
> why have maintenance_work_mem and work_mem been
> restricted to 1gb...
So as not to overflow on 32-bit machines.
regards, tom lane
Thanks. Yeah, I know slony 1.0.5 cleans up after itself, and is better
in general, and I want to get there, but upgrading is not an option at
the moment, unfortunately. Same for postgres 8.
But it still seems like this is something the server itself should be
taking care of, not a client process.
On Thu, 2005-05-26 at 13:41, David Parker wrote:
> In failover testing we have been doing recently (postgres 7.4.5 w/
> slony 1.0.2) we have seen several times when the database comes back
> up after a power failure it still has old pg_listener records hanging
> around from its previous life. This
In failover testing
we have been doing recently (postgres 7.4.5 w/ slony 1.0.2) we have seen several
times when the database comes back up after a power failure it still has old
pg_listener records hanging around from its previous life. This causes some
problems with slony, but of course it
Why have maintenance_work_mem and work_mem been
restricted to 1gb... although I don't need it yet, given the kind of
server I am working on (32gb ram), I wouldn't mind
allocating more...
#define MaxAllocSize ((Size) 0x3fffffff)
/* 1 gigabyte - 1 */
Also, for those who don't know, the max. share
I'd like a function to return a strongly-typed refcursor. My goal is to
allow callers of the function to know, based on the function's return
type, the number and data types of the columns that it can expect in the
refcursor. From what I see in plpgsql, all refcursors are allowed to
point to any
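The limitation shows up right in the signature: the declared return type is just refcursor, so the caller learns nothing about the columns. A sketch of both the refcursor version and the SETOF-composite alternative that does expose the row type (all names are made up):

```sql
-- refcursor version: the column list is invisible to callers
CREATE OR REPLACE FUNCTION get_people(c refcursor) RETURNS refcursor AS '
BEGIN
    OPEN c FOR SELECT id, name FROM person;
    RETURN c;
END;
' LANGUAGE plpgsql;

-- strongly-typed alternative: the row shape is part of the signature
CREATE TYPE person_t AS (id integer, name text);

CREATE OR REPLACE FUNCTION get_people_typed() RETURNS SETOF person_t AS '
DECLARE
    r person_t;
BEGIN
    FOR r IN SELECT id, name FROM person LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;
```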
Greg Stark <[EMAIL PROTECTED]> writes:
> I suspect something stranger going on.
I'm still wondering about the theory that it's not the aliases at issue,
but some scripts in the PATH ahead of the normal /bin/ls and friends.
regards, tom lane
Hello Tom,
I hope that you are well, and thank you for your guidance, but these are indeed
defined in my .bashrc:
# .bashrc
# User specific aliases and functions
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
if [ "$PS1" ]; then
# your settings:
"David M. Lee" <[EMAIL PROTECTED]> writes:
> I have a system that is dual bootable for both i686 and x86_64. Would
> there be any issues using the PostgreSQL database files generated for
> i686 on x86_64, or vice versa?
You'd probably have problems with the different data alignment rules for
the
David M. Lee wrote:
> I have a system that is dual bootable for both i686 and x86_64. Would
> there be any issues using the PostgreSQL database files generated for
> i686 on x86_64, or vice versa?
Uh, if the padding is the same, it would work, but we never test such
things.
--
Bruce Momjian
Tom Lane <[EMAIL PROTECTED]> writes:
> Aly Dharshi <[EMAIL PROTECTED]> writes:
> > alias ls='colorls -al'
> > alias rm='rm -i'
>
> > I don't see any aliases that are going to break the compile process.
>
> I beg to differ --- I think the ones quoted above match your symptoms
> pretty well. So t
I have a system that is dual bootable for both i686 and x86_64. Would
there be any issues using the PostgreSQL database files generated for
i686 on x86_64, or vice versa?
Thanks!
dave
<><
On Thu, May 26, 2005 at 08:56:34AM -0400, Paul Tillotson wrote:
> Tom Lane wrote:
>
> >Paul Tillotson <[EMAIL PROTECTED]> writes:
> In other words, no arbitrary number of extra decimal places when calling
> div_var() will always be sufficient to prevent rounding up at some other
> decimal place
"Yateen Joshi" <[EMAIL PROTECTED]> writes:
> Hi,
>
> I am using postgres 7.4.2 on Solaris. My unix system does not place a
> limitation of 2 GB on file size. If I export data from my database that
> causes the file size to be more than 2 GB, then that export fails (and
Hi,
I am using postgres 7.4.2 on Solaris. My unix system does
not place a limitation of 2 GB on file size. If I export data from my database that causes the
file size to be more than 2 GB, then that export fails (and vice versa for
importing, i.e. if the file size is more than 3 GB,
Hello, does somebody know if it is possible to catch all the
sentences applied to one table using triggers and a function in C, maybe?
That's because I need to create
one log table with all the sentences. Once I have that, I am going to use all the sentences to
replicate that table in another datab
I've painted myself into a little corner here:
I pg_dumped a 7.4.3 database, created a database of the same name on a
7.3.4 server, psql'd into the new database, and \i'd the dump file.
The database was created although there were a variety of errors which I
realized were due to 7.4.3 and 7.3
Himanshu Baweja <[EMAIL PROTECTED]> writes:
> This would greatly help people in determining the
> appropriate value of the bgwriter parameters. It would
> require a simple patch to be written which will add two
> else statements in StrategyDirtyBufferList() and
> return a struct instead of an int...
> als
Bucks vs Bytes Inc <[EMAIL PROTECTED]> writes:
> Any thoughts on what could make both clients attempt wrong protocol?
They are both using 7.4-or-later libpq. Whether you think so or not.
regards, tom lane
Dave E Martin <[EMAIL PROTECTED]> writes:
> I have noticed that if I set enable_sort=false in the .conf file, my
> queries are running faster.
You mean one specific example is running faster. If you do that you'll
probably slow down other queries.
It looks like the main estimation error is here
Marc G. Fournier wrote:
I'd almost think that this should be much more prominently put in a
section on the main page of the web site, actually ... make it nice and
visible instead of buried on a sub-page ...
I agree it would be good to have a link on the main page. Possibly near
"What's ne
Tom Lane wrote:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
> I don't think anyone wants to defend the negative modulus as such, but to fix
> it, we have to do one of these:
> (1) Keep rounding division, but rewrite the numeric modulus operator to use a
> form of division that always rou
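The distinction is easy to see at the SQL level. With division that truncates toward zero, the invariant n = (n/d)*d + (n%d) holds and the remainder takes the sign of the dividend; the integer operators already behave this way:

```sql
SELECT (-13) / 4              AS quot,  -- -3, truncated toward zero
       (-13) % 4              AS rem,   -- -1, sign follows the dividend
       (-13) / 4 * 4 + (-13) % 4 AS back;  -- -13, invariant holds
```

The thread's problem is that numeric division rounds instead of truncating, so the analogous invariant can fail for the numeric modulus.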
What StrategyStatsDump prints is the
ARC clean buffers at the LRU of T1 and T2.
Now let's say I have a dirty buffer at position 31
from the LRU and the next one is at position 3500...
in cases like this, t1_clean and t2_clean are of no
use.
A better option would be to have a function like
StrategyDirtyBuffe
Connection logging shows an unvarying pattern: every connection
attempt, regardless of target database or source (PHP or psql), first
uses a wrong protocol and then succeeds on a second attempt, presumably
after falling back:
LOG: connection received: host=[local]
FATAL: unsupported frontend
On Thu, 26 May 2005 06:06 pm, Surabhi Ahuja wrote:
>
> I have heard about a "bulk loading algorithm" for indexes..
> e.g. if you have values like 1, 2, 3, 4, 5, etc., up to a very large number.
> With a simple mechanism of indexing, the values will be inserted one by
> one, e.g. 1, then 2, and so
Title: bulk loading of bplus index tree
I have heard about a "bulk loading algorithm" for indexes..
e.g. if you have values like 1, 2, 3, 4, 5, etc., up to a very large number.
With a simple mechanism of indexing, the values will be inserted one by one, e.g. 1, then 2, and so on;
however, in bu
Dave E Martin wrote:
(8.0.1 on debian/linux 2.6.11 kernel)
I have noticed that if I set enable_sort=false in the .conf file, my
queries are running faster. I had a query which if I did a limit 20, ran
in 6 milliseconds, but if I changed it to limit 21, it took around 19
seconds (or 19000 mill