On Wednesday, October 1, 2014, Jonathan Vanasco wrote:
>
> I'm trying to improve the speed of a suite of queries that go across a few
> million rows.
>
> They use 2 main "filters" across a variety of columns:
>
> WHERE (col_1 IS NULL ) AND (col_2 IS NULL) AND ((col_3 IS NULL) OR
> (col_3 = col_1))
On Sep 30, 2014, at 8:04 PM, John R Pierce wrote:
> if col_1 IS NULL, then that OR condition doesn't make much sense. just
> saying...
I was just making a quick example. There are two commonly used "filter sets";
each is mostly on Bool columns that allow NULL -- but one checks to see if
On 9/30/2014 4:50 PM, Jonathan Vanasco wrote:
WHERE (col_1 IS NULL ) AND (col_2 IS NULL) AND ((col_3 IS NULL) OR
(col_3 = col_1))
if col_1 IS NULL, then that OR condition doesn't make much sense.
just saying...
these 4 columns are all nullable booleans, so they can be TRUE, FALSE, or NULL.
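
John's point, presumably: once col_1 is NULL, the comparison col_3 = col_1 can
never evaluate to true, because any comparison against NULL is unknown, so the
OR branch effectively reduces to col_3 IS NULL. A quick illustration:

    -- Equality against NULL yields NULL ("unknown"), never true;
    -- only the IS NULL arm of the OR can match in that case.
    SELECT NULL = NULL             AS eq_null,   -- NULL
           true = NULL             AS eq_true,   -- NULL
           (NULL::boolean IS NULL) AS is_null;   -- true
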
I'm trying to improve the speed of a suite of queries that go across a few
million rows.
They use 2 main "filters" across a variety of columns:
WHERE (col_1 IS NULL ) AND (col_2 IS NULL) AND ((col_3 IS NULL) OR
(col_3 = col_1))
WHERE (col_1 IS True ) AND (col_2 IS True) AND (col_
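
One common way to speed up a fixed filter like that is a partial index whose
predicate matches the WHERE clause; a minimal sketch, assuming a hypothetical
table t with a primary key id (the real table and column names aren't shown in
the thread):

    -- Partial index for the first filter set; the planner will consider it
    -- only for queries whose WHERE clause implies this predicate.
    -- "t" and "id" are placeholders.
    CREATE INDEX CONCURRENTLY t_filter_nulls_idx
        ON t (id)
        WHERE (col_1 IS NULL) AND (col_2 IS NULL)
          AND ((col_3 IS NULL) OR (col_3 = col_1));

The second filter set (the IS True variant) would get its own partial index
along the same lines.
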
Hello.
I was trying to get Postgres to return the "correct" number of rows
inserted for batch inserts to a partitioned table [using the triggers
suggested at http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html,
it always returns 0 by default].
What I ideally wanted it
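
For context, the 0 comes from the redirect-trigger pattern on that docs page:
the BEFORE INSERT trigger on the parent re-routes each row into a child table
and returns NULL, so nothing is inserted into the parent and the command tag
reports 0. A minimal sketch along the same lines (table and partition names are
placeholders):

    -- RETURN NULL suppresses the insert into the parent table,
    -- which is why the parent-level INSERT reports "INSERT 0 0".
    CREATE OR REPLACE FUNCTION measurement_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        INSERT INTO measurement_y2014m10 VALUES (NEW.*);
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER insert_measurement_trigger
        BEFORE INSERT ON measurement
        FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();
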
On Tue, Sep 30, 2014 at 8:50 PM, Alvaro Herrera
wrote:
> Did you try decreasing the autovacuum_multixact_freeze_min_age and
> autovacuum_multixact_freeze_table_age parameters?
>
As per the docs, this can be set anywhere from zero to 1 billion for
vacuum_multixact_freeze_min_age, and from zero to 2 billion for
vacuum_multixact_freeze_table_age.
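
For what it's worth, the same thresholds can also be lowered per table as
storage parameters, which avoids a cluster-wide change; a sketch with a
placeholder table name and purely illustrative values:

    -- Lower the multixact freeze thresholds for one heavily updated table,
    -- then freeze it so old multixacts become removable.
    -- "some_big_table" and the numbers are illustrative only.
    ALTER TABLE some_big_table SET (
        autovacuum_multixact_freeze_min_age   = 1000000,
        autovacuum_multixact_freeze_table_age = 50000000
    );
    VACUUM FREEZE some_big_table;
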
Felix, I'd love to see a single, well-maintained project. For example, I
just found yours and gave it a shot today after seeing this post. I found
a bug: when an UPDATE command is issued but the old and new values are all
the same, the trigger will blow up. I've got a fix for that, but if we
ha
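
Not knowing what that fix looks like, one generic way to sidestep the
no-change case is a WHEN clause on the trigger itself, so the audit function
simply never fires for UPDATEs that change nothing (table and function names
below are placeholders):

    -- Skip auditing entirely when OLD and NEW are identical.
    -- "some_table" and "audit_func" are placeholders.
    CREATE TRIGGER audit_row_changes
        AFTER UPDATE ON some_table
        FOR EACH ROW
        WHEN (OLD.* IS DISTINCT FROM NEW.*)
        EXECUTE PROCEDURE audit_func();
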
Gidday,
There was an interesting presentation at the Portland Postgres Users Group
meeting in early Sept, from a guy who demo'd a Postgres database mounted as a
FUSE filesystem. Not production ready, but with tables manifesting as
directories, databases could be synch'ed using filesystem tools
Dev Kumkar wrote:
> On Fri, Sep 26, 2014 at 1:36 PM, Dev Kumkar wrote:
>
> > Received the database with huge pg_multixact directory of size 21G and
> > there are ~82,000 files in "pg_multixact/members" and 202 files in
> > "pg_multixact/offsets" directory.
> >
> > Did run "vacuum full" on this database and it was successful.
On Fri, Sep 26, 2014 at 1:36 PM, Dev Kumkar wrote:
> Received the database with huge pg_multixact directory of size 21G and
> there are ~82,000 files in "pg_multixact/members" and 202 files in
> "pg_multixact/offsets" directory.
>
> Did run "vacuum full" on this database and it was successful. Ho
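
Worth noting: VACUUM FULL by itself doesn't necessarily let pg_multixact
shrink, because the segment files can only be truncated once every database's
oldest referenced multixact (datminmxid) has advanced. A quick way to see which
database is holding things back, and to push it forward (sketch, 9.3+):

    -- Which databases still reference old multixacts?
    SELECT datname, datfrozenxid, datminmxid FROM pg_database;

    -- Then, connected to the database(s) with the oldest datminmxid,
    -- freeze everything so the cluster-wide minimum can advance and
    -- old pg_multixact segments can be removed afterwards.
    VACUUM FREEZE;
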
Roopeshakumar Narayansa Shalgar (rshalgar) wrote:
> Hi,
>
> I am using version 9.3.1 and see the “no space left on device” error even though there
> is enough space (99% free) on my disk.
Just to be sure, check the output of both 'df -h' (for disk blocks) and 'df -hi'
(for inodes). You might have run out of inodes.
Hi Andres,
> Hi,
>
> On 2014-09-29 13:52:52 -0700, p...@cmicdo.com wrote:
>> I have a question about BDR Global Sequences.
>>
[deleted]
>> Is there a way to increase a global sequence's reservation block for each
>> node so that I can tell the nodes, "I'm going to load 100M rows now so
>> yo
Hey,
yes, I'm adding an additional key to each of my tables. First I wanted to use
the primary key as one column in my audit_log table, but in some of my tables
the PK consists of more than one column. Plus, it's nice to have one key that is
named the same across all tables.
To get a former stat
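
A minimal sketch of that "one identically named key in every table" idea, with
placeholder table names; each table gets its own sequence-backed column that
audit_log rows can point at even where the real primary key is composite:

    -- Add a uniformly named surrogate key to each audited table
    -- ("orders" and "order_items" are placeholders).
    ALTER TABLE orders      ADD COLUMN audit_id bigserial UNIQUE;
    ALTER TABLE order_items ADD COLUMN audit_id bigserial UNIQUE;

The audit_log can then store (table_name, audit_id) pairs and reference any
row the same way, regardless of what the table's real primary key looks like.
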
Hi,
On 2014-09-29 13:52:52 -0700, p...@cmicdo.com wrote:
> I have a question about BDR Global Sequences.
>
> I've been playing with BDR on PG 9.4beta2, built from source from the
> 2nd Quadrant GIT page (git://git.postgresql.org/git/2ndquadrant_bdr.git).
>
> When trying a 100 row \copy-in, l
Hi,
We have an environment with a central repository for lookups, which is
replicated to several databases, each for a different application.
This has been arranged in a DTAP (Development, Testing, Acceptance, Production) manner.
Sometimes it is necessary to synchronize the lookups of one of the DTAP
branches with another. But I can't just