On Fri, 7 Dec 2007, Alex Vinogradovs wrote:
P.S. DDL is never subject to replication (in normal RDBMSes).
But it is passed along by PITR replication schemes like the record-based
log shipping that started this thread off; that's the main reason I
specifically pointed out that limitation.
Now you're pointing out obvious problems. My company deals with data
warehouses; we don't really need to delete/update stuff, only
insert/select ;) But seriously, those issues can be handled if one
doesn't just send plain tuples, but also includes the information
about what kind of operations were
On Fri, 7 Dec 2007, Alex Vinogradovs wrote:
The document highlights possible problems with _SQL_ query intercepts.
I am talking about the actual tuples... i.e. row data rather than the
SQL requests.
The first two issues that come to mind are how to deal with a) deletions,
and b) changes to D
I'm implementing table partitioning on 8.2.5 -- I've got the tables set up
to partition based on the % 10 value of a key.
My problem is that I can't get the planner to take advantage of the
partitioning without also adding a key % 10 to the where clause.
Is there any way around that?
My child tabl
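A minimal sketch of the kind of setup being described (table and column names
are invented here): on 8.2, constraint exclusion only compares the WHERE
clause against each child's CHECK constraint, and it cannot derive
"key % 10 = 3" from "key = 23", so the modulus expression has to be repeated
in the query itself:

create table parent (key int, data text);
create table child_3 (check (key % 10 = 3)) inherits (parent);
-- ... one child table per remainder 0 through 9 ...
set constraint_exclusion = on;
-- Without the second predicate the planner scans every child:
select * from parent where key = 23 and key % 10 = 3;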
The document highlights possible problems with _SQL_ query intercepts.
I am talking about the actual tuples... i.e. row data rather than the
SQL requests. Please advise if you see any other problems with suggested
approach. Thanks!
Alex.
On Fri, 2007-12-07 at 22:44 -0500, Greg Smith wrote:
> O
Tom Lane wrote:
It seemed reasonable to me that a select on the first element of an
array column could use an index on the column, but, as seen in this
example, I can't get it to do so:
Nope. The operators that go along with a btree index are equality,
less than, etc on the whole indexed colu
On Fri, 7 Dec 2007, Alex Vinogradovs wrote:
How about writing a C function (invoked from a trigger) that will send
the serialized tuple using, say, the UDP protocol (assuming you're syncing
over a reliable LAN), and then a simple UDP-listening daemon that will
perform the insert into the slave. I
How about writing a C function (invoked from a trigger) that will send
the serialized tuple using, say, the UDP protocol (assuming you're syncing
over a reliable LAN), and then a simple UDP-listening daemon that will
perform the insert into the slave. If you have multiple slaves, you can
use that with b
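A rough sketch of what the trigger side of that could look like, assuming a
hypothetical C function send_tuple() that serializes its arguments and fires
them off over UDP; TG_OP carries the operation type, which also addresses the
delete/update concern raised earlier in the thread:

create or replace function replicate_row() returns trigger as $$
begin
  if TG_OP = 'DELETE' then
    perform send_tuple(TG_OP, OLD);  -- send_tuple() is hypothetical
    return OLD;
  else
    perform send_tuple(TG_OP, NEW);
    return NEW;
  end if;
end;
$$ language plpgsql;

create trigger replicate_trig after insert or update or delete
  on mytable for each row execute procedure replicate_row();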
On Thu, 6 Dec 2007, SHARMILA JOTHIRAJAH wrote:
Has anyone implemented or tried record-based log shipping? If so, are
there any other materials on the web besides the documentation (which
has very few details about this)?
There is an implementation of that as part of the Skytools WalMgr code:
ht
Shelby Cain <[EMAIL PROTECTED]> writes:
> Just upgraded from 8.2.5 to 8.3b4 on Windows and after reimporting my
> database I noticed the following messages are showing up sporadically
> in the server logs:
> 2007-12-07 11:56:17 CST ERROR: column pgl.transaction does not exist at
> character 171
"John D. Burger" <[EMAIL PROTECTED]> writes:
> It seemed reasonable to me that a select on the first element of an
> array column could use an index on the column, but, as seen in this
> example, I can't get it to do so:
Nope. The operators that go along with a btree index are equality,
less
Marc Munro <[EMAIL PROTECTED]> writes:
> Is there any way of identifying whether a cast was built-in or is
> user-defined?
It's not easy. I'd suggest following the same heuristic pg_dump
does, which is that if any of the source type, target type, or
underlying function is considered user-defined,
Hello -
I'm trying to find a better solution than this approach.
Currently if I have to return columns from multiple tables, I have to
define my own TYPE and then return SETOF that type in the function.
I've provided an example below.
Now, if I have to add a column to the select qu
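One possible improvement, assuming a server of 8.1 or later (table and column
names below are made up): declare the result columns as OUT parameters, so
adding a column to the select only means touching the function itself rather
than a separately maintained TYPE:

create or replace function emp_with_dept(out emp_name text, out dept_name text)
returns setof record as $$
  select e.name, d.name
  from emp e join dept d on d.id = e.dept_id;
$$ language sql;

select * from emp_with_dept();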
Now, I want you all to trust me: it was not, and never would be, my choice to
use a Mac server, but I need pgAgent to run on said Mac server. I have the
client-side stuff running, meaning I can create jobs, but I need to know where
the daemon for the Mac is so I can have it installed so my jobs will r
[EMAIL PROTECTED] (Glyn Astill) writes:
> [posted again as it found its way into another thread]
>
> Hi people,
>
> I intend to set up two slave servers, one using WAL shipping and one
> using Slony I.
>
> Are there any good tools, or scripts that'll help us check that both
> replication methods a
On Dec 7, 2007 4:12 PM, John D. Burger <[EMAIL PROTECTED]> wrote:
> This is under 7.4.
Urgh!
> Is this different on less paleolithic versions of
> PG, or is there some other issue?
Same here:
select version();
PostgreSQL 8.3beta4, compiled by Visual C++ build 1400
select * from temppaths where
It seemed reasonable to me that a select on the first element of an
array column could use an index on the column, but, as seen in this
example, I can't get it to do so:
=> create temp table tempPaths (path int[] primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"
On Fri, 07 Dec 2007 12:53:45 -0500
Robert Treat <[EMAIL PROTECTED]> wrote:
> On Wednesday 05 December 2007 17:29, Erik Jones wrote:
> > > I don't think you'll have much luck taking the "spread data evenly
> > > throughout the
> > > partitions" approach;
Hello
You can use the oid. When the oid is greater than some constant, the
cast is custom. The constant differs between PostgreSQL versions.
You can get it on a clean PostgreSQL installation with this statement:
select max(oid) from pg_cast;
Regards
Pavel Stehule
On 07/12/2007, Marc Munro <[EMAIL PROTECTED]> wrote:
>
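A sketch combining the two suggestions; in stock builds, OIDs at or above
16384 (FirstNormalObjectId) belong to user-created objects, so a
pg_dump-style check could look roughly like:

select c.oid, c.castsource::regtype, c.casttarget::regtype
from pg_cast c
where c.oid >= 16384
   or c.castsource >= 16384
   or c.casttarget >= 16384
   or c.castfunc >= 16384;  -- castfunc is 0 for binary-compatible casts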
On Dec 7, 2007, at 11:49 AM, Josh Harrison wrote:
On 12/7/07, Josh Harrison < [EMAIL PROTECTED]> wrote:
> I have 2 servers on which I need to have data replicated. The master server
> should serve for read/write queries and the 2nd server is used mainly for
> research queries (read-only
Dear all,
I'm trying to create tables using pg_user (pg_authid) as a foreign key
for my table. I need to log and ensure that only registered users can
modify data, and I want to track data changes via logging triggers. I
need to know exactly who was modifying data. To be more exact, I want to
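Foreign keys referencing system catalogs such as pg_authid are not allowed,
so a common alternative (a sketch with invented names) is to record
current_user from the logging trigger instead:

create table audit_log (
  table_name text,
  operation  text,
  changed_by name default current_user,
  changed_at timestamptz default now()
);

create or replace function log_change() returns trigger as $$
begin
  insert into audit_log (table_name, operation)
    values (TG_TABLE_NAME, TG_OP);
  return null;  -- return value is ignored for AFTER triggers
end;
$$ language plpgsql;

create trigger audit_trig after insert or update or delete
  on mytable for each row execute procedure log_change();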
Is there any way of identifying whether a cast was built-in or is
user-defined?
I am tempted to just assume that if the cast is to/from a user-defined
type, or uses a user-defined function, then it is user-defined. I suspect
though that a user could define a new cast on pre-defined types using a
pr
Thanks Erik. In between those attempts I did try what you suggested.
It failed, apparently due to not making a connection with the server.
Bob
- Original Message -
From: "Erik Jones" <[EMAIL PROTECTED]>
To: "Bob Pawley" <[EMAIL PROTECTED]>
Cc: "PostgreSQL"
Sent: Friday, December 07, 200
Just upgraded from 8.2.5 to 8.3b4 on Windows and after reimporting my database
I noticed the following messages are showing up sporadically in the server
logs:
2007-12-07 11:56:17 CST ERROR: column pgl.transaction does not exist at
character 171
2007-12-07 11:56:17 CST STATEMENT: SELECT (SEL
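The "pgl" alias suggests a query against pg_locks; in 8.3 that view's
"transaction" column was split into transactionid and virtualtransaction, so
any tool still selecting pgl.transaction will fail this way. Roughly, the
8.3 equivalent is:

select pgl.locktype, pgl.virtualtransaction, pgl.transactionid
from pg_locks pgl;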
On Dec 7, 2007, at 11:53 AM, Robert Treat wrote:
On Wednesday 05 December 2007 17:29, Erik Jones wrote:
I don't think you'll have much luck taking the "spread data evenly
throughout the
partitions" approach; figure out how best to segment your data into
manageable chunks. HTH.
I agree. That's
On Dec 7, 2007, at 11:38 AM, Bob Pawley wrote:
Hi
I am having a little more success with the pg_dump command.
However, I still seem to have something wrong.
I use the following command after navigating to the bin directory -
pg_dump -h localhost -d Aurel -U postgres
After six attempts -
Each
On Wednesday 05 December 2007 17:29, Erik Jones wrote:
> > I don't think you'll have much luck taking the "spread data evenly
> > throughout the
> > partitions" approach; figure out how best to segment your data into
> > manageable chunks. HTH.
>
> I agree. That's also why I'm not too worried about
>
>
>
> On 12/7/07, Josh Harrison < [EMAIL PROTECTED]> wrote:
> > > I have 2 servers on which I need to have data replicated. The master
> > server
> > > should serve for read/write queries and the 2nd server is used mainly
> > for
> > > research queries(read-only queries) and so it doesn't have to
Hi
I am having a little more success with the pg_dump command. However, I still
seem to have something wrong.
I use the following command after navigating to the bin directory -
pg_dump -h localhost -d Aurel -U postgres
After six attempts -
Each attempt processed the database and the command pro
On Fri, Dec 07, 2007 at 12:07:52PM -0500, [EMAIL PROTECTED] wrote:
> other DBs do a full table scan (FTS) when there is a function involved in
> the predicate (WHERE clause)
> so a possible workaround would be to look at all function calls in your
> predicate (WHERE clause) and
> populate a new column with the result
Hi,
I've written about this problem before and thanks to Bill Bartlett and
Richard Huxton for previous replies, but the problem keeps coming up...
I'm running postgresql V8.2.5 (I think!) on W2K3 Server and occasionally
I want to rebuild a database. However I cannot drop the database because
On Thursday 06 December 2007 20:00, Tom Lane wrote:
> "Weber, Geoffrey M." <[EMAIL PROTECTED]> writes:
> > My problems really are with performance consistency. I have tweaked the
> > execution so that everything should run with sub-second execution times,
> > but even after everything is running w
I'm developing some triggers for the first time, and I'm having
trouble analyzing their performance. Does anyone have any advice for
doing EXPLAIN and the like on statements involving NEW? For
instance, I'd like to know what plan PG is coming up with for this
fragment of a trigger functio
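One approach that tends to work, since the trigger body is planned with
parameters anyway: PREPARE the statement with placeholders where the NEW
fields go, then EXPLAIN EXECUTE it (table and column names below are
invented):

prepare trig_frag (int, text) as
  update target_tbl set val = $2 where id = $1;
explain execute trig_frag (42, 'example');
deallocate trig_frag;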
On Dec 7, 2007, at 8:29 AM, Ted Byers wrote:
--- Erik Jones <[EMAIL PROTECTED]> wrote:
On Dec 6, 2007, at 2:36 PM, Ted Byers wrote:
[snip]
What you want to do here for handling the update v. insert is called
an "UPSERT". Basically, what you do is run the update as if the row
exists and ca
On Dec 7, 2007, at 6:29 AM, Ted Byers wrote:
--- Erik Jones <[EMAIL PROTECTED]> wrote:
On Dec 6, 2007, at 2:36 PM, Ted Byers wrote:
[snip]
What you want to do here for handling the update v. insert is called
an "UPSERT". Basically, what you do is run the update as if the row
exists and ca
assign pointfromtext(point) to a variable
other DBs do a full table scan (FTS) when there is a function involved in the
predicate (WHERE clause)
so a possible workaround would be to look at all function calls in your
predicate (WHERE clause) and
populate a new column with the results of the function(column)
and then
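In PostgreSQL the extra-column workaround is usually unnecessary, because an
index can be built directly on an expression (assuming the function is
immutable); a minimal sketch with invented names:

create index people_upper_name on people (upper(name));
-- The planner can now use the index for:
select * from people where upper(name) = 'SMITH';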
Ted Byers wrote:
--- Erik Jones <[EMAIL PROTECTED]> wrote:
On Dec 6, 2007, at 2:36 PM, Ted Byers wrote:
[snip]
What you want to do here for handling the update v. insert is called
an "UPSERT". Basically, what you do is run the update as if the row
exists and catch the exception that is th
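For reference, a minimal version of that pattern, essentially the merge_db
example from the PostgreSQL documentation (table and column names invented):

create or replace function upsert(k integer, v text) returns void as $$
begin
  loop
    update mytable set val = v where key = k;
    if found then
      return;
    end if;
    begin
      insert into mytable (key, val) values (k, v);
      return;
    exception when unique_violation then
      -- someone else inserted concurrently; loop back and retry the update
    end;
  end loop;
end;
$$ language plpgsql;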
On 12/7/07, Josh Harrison <[EMAIL PROTECTED]> wrote:
> > I have 2 servers on which I need to have data replicated. The master
> > server should serve for read/write queries and the 2nd server is used
> > mainly for research queries (read-only queries) and so it doesn't have to
> > be up-to-date.
>
A. Kretschmer wrote:
On Tue, 04.12.2007 at 20:19:29 -0800, pc wrote the following:
Hi,
How to redirect the output of an sql command to a file?
Thanks in advance
within psql you can use \o , from the shell you can use this:
[EMAIL PROTECTED]:~$ echo "select now()" | psql test > now.
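The \o variant looks like this; output returns to stdout when \o is given
with no argument:

\o /tmp/now.txt
select now();
\o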
Martin Korous <[EMAIL PROTECTED]> writes:
> and in pg_log is message:
> STATEMENT: SELECT (SELECT usename FROM pg_user WHERE usesysid = datdba) as
> dba, pg_encoding_to_char(encoding) as encoding, datpath FROM pg_database
> WHERE datname = 'dbname'
> ERROR: column "datpath" doe
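datpath is gone from the catalogs on current servers (per-database paths were
replaced by tablespaces), so the client's query should simply drop that
column, roughly:

SELECT (SELECT usename FROM pg_user WHERE usesysid = datdba) as dba,
       pg_encoding_to_char(encoding) as encoding
FROM pg_database WHERE datname = 'dbname';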
On 12/7/07, Josh Harrison <[EMAIL PROTECTED]> wrote:
> I have 2 servers on which I need to have data replicated. The master server
> should serve for read/write queries and the 2nd server is used mainly for
> research queries (read-only queries) and so it doesn't have to be up-to-date.
...
> Is it p
On Dec 7, 2007 9:52 AM, Josh Harrison <[EMAIL PROTECTED]> wrote:
> I tried the 'Continuous Archiving and PITR' in my test database and it works
> fine. But this set-up is only for a warm standby server...right?! Is it
> possible to make both the servers work asynchronously, while the primary
> serv
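Right, the 8.2-era log-shipping standby is warm only: it continuously replays
WAL and cannot answer even read-only queries until failover. The wiring is
roughly this (paths here are invented; pg_standby ships with 8.3's contrib,
while on 8.2 an equivalent waiting script is needed):

# primary, postgresql.conf:
archive_command = 'cp %p /mnt/standby_archive/%f'

# standby, recovery.conf:
restore_command = 'pg_standby /mnt/standby_archive %f %p'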
Hi list,
I'm a newbie to PostgreSQL replication.
This is my requirement.
I have 2 servers on which I need to have data replicated. The master server
should serve for read/write queries and the 2nd server is used mainly for
research queries (read-only queries) and so it doesn't have to be up-to-dat
Hello,
I have PostgreSQL 8.2.5 on one db server.
pg_dump (data and structures) works well.
I have copied the pg_dump binary (and libpq.so.5) to another server, into a
chroot where Apache and phpPgAdmin live. And there is a problem: the dump of
structures doesn't work, only data is OK.
I have written 2 minimalistic PHP scri
--- Erik Jones <[EMAIL PROTECTED]> wrote:
>
> On Dec 6, 2007, at 2:36 PM, Ted Byers wrote:
>
> [snip]
> What you want to do here for handling the update v. insert is called
> an "UPSERT". Basically, what you do is run the update as if the row
> exists and catch the exception that is th
Hi,
I added this function to find the nearest hospital using the distance
covered on the route itself.
My reasoning was this:
- Find the 3 nearest hospitals using the distance() function
- Iterate over those 3 hospitals and find the one with the shortest
distance taking into c
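A sketch of the first step with old-style PostGIS calls and invented
table/column names; ordering by distance() and taking the top three:

select h.id, h.name
from hospitals h
order by distance(h.geom, GeomFromText('POINT(-71.06 42.35)', 4326))
limit 3;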
Thank you.
I ran the ANALYZE command manually.
After that, the query runs fast.
I have enabled autovacuum and statistics collection in the config file.
Every day a lot of rows are added to the dok table.
However, it seems that statistics are not collected (autovacuum is not running).
Any idea why autovacuum is not running?
I installed Postgres using the Windows installer and added the following lines
to the end of postgresql.conf:
listen_addresses = '*'
log_destination = 'stderr'
redirect_stderr = on
stats_start_collector = on
stats_row_level = on
autovacuum = on
shared_buffers= 15000 #
log_line_prefix='\n%t %u %d %h %p
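One quick check is to ask the running server what it actually picked up,
since config edits need at least a reload (and shared_buffers a restart):

select name, setting from pg_settings
where name in ('autovacuum', 'stats_start_collector', 'stats_row_level');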
On Dec 6, 10:14 pm, [EMAIL PROTECTED] (Alvaro Herrera) wrote:
> Charles.Hou wrote:
> > this is the pg_log...
> > after 2007-12-04 10:40:37 CST 15533 , it always autovacuum "template0"
> > not mydatabase...
>
> Is there an ERROR in the log? My guess is that template0 is in danger
> of Xid wraparou
> It needs to store the number of bits present as well
Couldn't that be reduced to one byte saying how many bits in the last
byte count?
> Only in the sense that numeric also has to store some metadata as well,
> like the weight and display precision.
Is it really necessary to store display prec
I see that BLOCK_SIZE can be set at compile time, but is there a way
to determine what block size is in use in a running system? I've been
searching but have been unsuccessful so far.
Thanks!
John
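The block size is exposed as a read-only setting, so either of these works
in a running system:

show block_size;
select current_setting('block_size');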
On Fri, Dec 07, 2007 at 01:18:13PM +0800, Ow Mun Heng wrote:
> select i.i as vdt,dcm_evaluation_code as c_id
> , case when count(vdt_format) = 0 then NULL else count(vdt_format) end
> as count
> from generate_series(1,7) i
> left join footable f
> on i.i = f.vdt_format
> and c_id in ('71','48')
> g
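A slightly tidier spelling of the same idea, stripped down from the posted
query: NULLIF collapses the zero counts from unmatched generate_series rows
to NULL:

select i.i as vdt, nullif(count(f.vdt_format), 0) as count
from generate_series(1,7) i
left join footable f on i.i = f.vdt_format
group by i.i;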
Henrik wrote:
>
> 6 dec 2007 kl. 20.26 skrev Alvaro Herrera:
>
>> Henrik wrote:
>>
>>> I think I have a clue why it's so off. We update a value in that table
>>> about 2 - 3 million times per night, and as an update creates a new row
>>> it becomes bloated pretty fast. The table had a size of 765
On Thu, Dec 06, 2007 at 02:12:48PM -0600, Matthew Dennis wrote:
> I want to create an aggregate that will give the average velocity (sum of
> distance traveled / sum of elapsed time) from position and timestamps.
How do you want to handle noisy data? If you want to handle it in any
reasonable way
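Assuming distance and elapsed time are stored per segment (invented names
below), no custom aggregate is needed; the point is to take the ratio of the
sums rather than the average of the per-segment ratios:

select sum(distance) / extract(epoch from sum(elapsed)) as avg_velocity
from track_segments;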
Ah thanks, that's what I must have done. It never happened on other lists,
so I assumed it'd be okay. My bad.
--- Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Glyn Astill wrote:
> > How did that happen? The subject is totally different, so is the
> > body.
>
> It has an "In-Reply-To:" and possibly "Ref
6 dec 2007 kl. 20.26 skrev Alvaro Herrera:
Henrik wrote:
I think I have a clue why it's so off. We update a value in that
table about 2 - 3 million times per night, and as an update creates
a new row it becomes bloated pretty fast. The table had a size of
765 MB including indexes and after
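Two quick ways to watch that bloat (table name invented):
pg_total_relation_size tracks the on-disk footprint including indexes, and
VACUUM VERBOSE reports how many dead row versions each pass removes:

select pg_size_pretty(pg_total_relation_size('mytable'));
vacuum verbose mytable;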
Efraín López wrote:
>>> I am using Windows, and pg 8.2.5
>>>
>>> When making a connection with libpq, if it fails I would like
>>> to get the errors messages in spanish (PQerrorMessage )
>>>
>>> Is this possible? How can this be done?
I got it to work with this program:
#include
#include
#incl
6 dec 2007 kl. 22.18 skrev Alvaro Herrera:
Gauthier, Dave wrote:
Future Enhancement?
If the column's new value can fit in the space already being used
by the
existing value, just change the column value in place and leave the
record alone. Would reduce the need for vacuum in many cases.