Are there any details available on this poll?
thanks.
On Thu, Aug 13, 2015 at 11:05 PM, Sachin Srivastava <
sachin.srivast...@cyient.com> wrote:
> Congrats to all PostgreSQL DBA’s for this achievement..
>
>
>
>
> HERE ARE THE WINNERS OF THE 2015 DBTA READERS' CHOICE AWARDS FOR BEST
> DATABASE (O […]
[…] 2015 at 5:10 PM, Kevin Grittner wrote:
> Ravi Krishna wrote:
>
>> As per this:
>>
>> http://www.postgresql.org/docs/current/static/warm-standby.html#SYNCHRONOUS-REPLICATION
>>
>> "When requesting synchronous replication, each commit of a write
>> transaction will wait until confirmation is received that the commit
>> has been written to the write-ahead log on disk of both the primary
>> and standby server."
Chris/Joshua
I would like to know more details.
As per this:
http://www.postgresql.org/docs/current/static/warm-standby.html#SYNCHRONOUS-REPLICATION
"When requesting synchronous replication, each commit of a write
transaction will wait until confirmation is received that the commit
has been written to the write-ahead log on disk of both the primary
and standby server."
Does sync replication guarantee that any data inserted on the primary is
immediately visible for reads on the standbys, with no lag?
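For context, a minimal sketch of the relevant settings (the standby name 'standby1' is hypothetical). Note that the default synchronous mode only waits for the standby to *write* the WAL, not to apply it, so standby reads can still lag; waiting for apply requires `remote_apply`, which only exists in 9.6 and later:

```sql
-- Make commits wait for the standby named 'standby1' (name is an assumption):
ALTER SYSTEM SET synchronous_standby_names = 'standby1';
-- To also wait until the standby has *applied* the commit, so it is
-- visible to reads there (9.6+ only):
ALTER SYSTEM SET synchronous_commit = 'remote_apply';
SELECT pg_reload_conf();
```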
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
> In the above case PG will simply do a dictionary update of meta
> tables. So all new rows will reflect col-T and as and when the old […]
I will clarify it a bit further:
All new rows will have space allocated for col-T and no space
allocated for col-S, while existing dormant rows are left unmodified.
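The catalog-only ("dictionary update") behavior described above can be sketched as follows; the table and column names (accounts, col_s, col_t) are hypothetical stand-ins:

```sql
-- Adding a nullable column with no default only touches the system
-- catalogs; existing rows are not rewritten:
ALTER TABLE accounts ADD COLUMN col_t bigint;
-- Dropping a column likewise only marks it dropped in pg_attribute;
-- old rows keep their col_s bytes until they are next rewritten:
ALTER TABLE accounts DROP COLUMN col_s;
```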
> On 6/5/2015 11:37 AM, Ravi Krishna wrote:
>>
>> Why is PG even re-writing all rows when the data type is being changed
>> from smaller (int) to larger (bigint) type, which automatically means
>> existing data is safe. Like, changing from varchar(30) to varchar(50)
>
Why is PG even rewriting all rows when the data type is being changed
from a smaller (int) to a larger (bigint) type, which automatically means
the existing data is safe? Likewise, changing from varchar(30) to
varchar(50) should involve no rewrite of existing rows.
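A sketch of the two cases, on a hypothetical table t. The int-to-bigint case rewrites because the on-disk width changes (4 to 8 bytes); the varchar case has been catalog-only since 9.2 because only the length check is relaxed:

```sql
-- Rewrites every row: the binary representation of the column changes.
ALTER TABLE t ALTER COLUMN id TYPE bigint;
-- Catalog-only since 9.2: no table rewrite, just a looser length check.
ALTER TABLE t ALTER COLUMN name TYPE varchar(50);
```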
Are there any plans to introduce the concept of automatic client
routing to the principal server in a cluster of N machines? For
example, if there is a four-node replication cluster N1 .. N4, at any
time only one can be the principal (the one which does the writing). In
Oracle and DB2, client-side libraries […]
On Thu, May 28, 2015 at 12:50 PM, Tom Lane wrote:
> Sure, because you don't have a constraint forbidding the parent from
> having a matching row, no?
As suggested by you, I included a bogus condition on the parent table
which will prevent any row addition to the parent table, and made the
constraint […]
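One way to express such a "bogus condition" is a constraint that can never be satisfied; a sketch, assuming the parent is tstesting.account (the full name is cut off elsewhere in the thread) and that NO INHERIT is available (9.2+):

```sql
-- Tells the planner the parent can never hold a matching row,
-- without propagating the constraint to the children:
ALTER TABLE tstesting.account
    ADD CONSTRAINT account_parent_empty CHECK (false) NO INHERIT;
```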
> By and large, though, this doesn't really matter, since an empty
> parent table won't cost anything much to scan. If it's significant
> relative to the child table access time then you probably didn't
> need partitioning in the first place.
Is there a rule of thumb as to at what size the p […]
On Thu, May 28, 2015 at 12:42 PM, Melvin Davidson wrote:
>
> Generally, when you partition, data should only be in child tables, and the
> parent table should be empty, otherwise you defeat the purpose of
> partitioning.
Yes, of course the parent table is empty. The trigger on insert
redirects […]
> Have you set up constraints on the partitions? The planner needs to know
> what is in the child tables so it can avoid scanning them.
Yes. Each child table is defined as follows:
CREATE TABLE TSTESTING.ACCOUNT_PART1
( CHECK (ACCOUNT_ROW_INST BETWEEN 1001 and 271660))
INHERITS (TSTESTING.ACCO […]
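With such CHECK constraints in place, it is worth confirming that the planner actually prunes the other children. A sketch, assuming the parent is tstesting.account:

```sql
-- 'partition' is the default; just make sure it is not 'off':
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM tstesting.account WHERE account_row_inst = 1001;
-- The plan should scan only account_part1 (plus the empty parent).
```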
I am testing partitioning of a large table, using INHERITS child
tables. It uses range partitioning based on a sequence column, which
also acts as the primary key. For inserts I am using a trigger which
redirects each insert to the right child table based on the value of
the primary key.
Based on my […]
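A sketch of such a redirect trigger (names are hypothetical; the range matches the child table shown elsewhere in the thread):

```sql
CREATE OR REPLACE FUNCTION tstesting.account_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.account_row_inst BETWEEN 1001 AND 271660 THEN
        INSERT INTO tstesting.account_part1 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'account_row_inst % out of range',
            NEW.account_row_inst;
    END IF;
    RETURN NULL;  -- row was redirected; do not insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_insert_redirect
    BEFORE INSERT ON tstesting.account
    FOR EACH ROW EXECUTE PROCEDURE tstesting.account_insert();
```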
On Sat, May 23, 2015 at 10:12 PM, Scott Marlowe wrote:
> Ever run an insert with 1M rows, and roll it back in postgresql and
> compare that to oracle. Time the rollback in both. That should give
> you an idea of how differently the two dbs operate.
>
> A rollback in postgres is immediate because […]
[…] 2pm update also, because it never got completed. This
means the rollback will have to go past several checkpoints (between
2.01pm and 2.05pm).
Hope this explains it clearly.
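The point about rollback cost can be sketched directly; the table t is hypothetical:

```sql
-- Rollback cost is (nearly) independent of transaction size, because
-- PostgreSQL only flips the transaction's status to "aborted" in the
-- commit log; it never un-applies the changes.
BEGIN;
INSERT INTO t SELECT generate_series(1, 1000000);
ROLLBACK;  -- returns immediately; VACUUM reclaims the dead tuples later
```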
On Sat, May 23, 2015 at 4:48 PM, David G. Johnston wrote:
> On Sat, May 23, 2015 at 1:34 PM, Ravi Krishna wrote: […]
Is it true that PG does not log undo information, only redo? If true,
then how does it bring a database back to a consistent state during
crash recovery? Just curious.
Thanks.
[…] 2015 at 8:13 PM
From: "Joshua D. Drake"
To: "Ravi Krishna" , pgsql-...@postgresql.org
Cc: pgsql-general@postgresql.org
Subject: Re: [SQL] [GENERAL] Does PG support bulk operation in embedded C
On 05/19/2015 04:47 PM, Ravi Krishna wrote:
>
> To explain pls refer to this
To explain, please refer to this DB2 page:
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0002329.html
Essentially, in one single SQL call, we can:
-- Add new rows
-- Update a set of rows, where each row is identified by a bookmark
-- Delete a set of rows
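The closest single-statement equivalent in PostgreSQL is a data-modifying CTE (available since 9.1); a sketch with hypothetical table and column names:

```sql
-- Delete, update, and insert in one SQL statement:
WITH deleted AS (
    DELETE FROM items WHERE status = 'obsolete'
), updated AS (
    UPDATE items SET qty = qty + 1 WHERE id = 42
)
INSERT INTO items (id, status, qty) VALUES (43, 'new', 1);
```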
I am writing benchmark scripts and, as part of that, would like to clear the cache
programmatically. This is to ensure that when we run select queries the data is
not read from the cache. Does PG provide any easy way to do this, other than
the obvious way of restarting the database?
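As far as I know there is no built-in "flush cache" command; a restart only empties shared_buffers, and the OS page cache must be dropped separately. A sketch of the usual steps, assuming Linux and superuser access (shown as comments because they need a live cluster and root):

```shell
# 1) Restart the server to empty shared_buffers:
#      pg_ctl -D "$PGDATA" restart -m fast
# 2) Flush dirty pages and drop the OS page cache so reads hit disk:
#      sync
#      echo 3 | sudo tee /proc/sys/vm/drop_caches
echo "shared_buffers + OS page cache cleared"
```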
Thanks.