On Tue, 08 Apr 2014 15:53:48 -0600
CS_DBA wrote:
> Not sure yet (new client)... for now they simply want to force the
> template column to be a valid cust_id, if it is not null...
It seems to be a different version of the textbook exercise involving
EMPLOYEE_ID and MANAGER_ID.
--
Alberto Cabel
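A minimal sketch of the pattern the client is asking for (column names assumed from the thread, not tested against their schema): a foreign key on a nullable column is only checked when the value is non-NULL, so a plain self-referencing FK gives exactly "must be a valid cust_id, if it is not null".

```sql
-- Sketch only: NULL template values pass the constraint automatically;
-- non-NULL values must match an existing cust_id.
ALTER TABLE customer
    ADD COLUMN template integer
        REFERENCES customer (cust_id);
```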
On Sat, Apr 5, 2014 at 6:41 AM, Jim Garrison wrote:
> An ETL "job" runs inside its own transaction and consists of a series of
> queries that transform the data from staging tables to the destination
> tables. If a failure occurs, the transaction rolls back so there's no
> "debris" left over --
Steve Kehlet writes:
> Thank you. For some reason I couldn't get it to trip with "ulimit -d
> 51200", but "ulimit -v 1572864" (1.5GiB) got me this in serverlog. I hope
> this is readable, if not it's also here:
Well, here's the problem:
> ExprContext: 812638208 total in 108 blocks; 18352
I should add I had to move testing to a different VM, with less RAM and
different tunings; they are here: https://gist.github.com/skehlet/10207086
On Tue, Apr 8, 2014 at 4:08 PM, Steve Kehlet wrote:
> On Tue, Apr 8, 2014 at 12:23 PM, Tom Lane wrote:
>
> Once you've got a ulimit in place so tha
On Tue, Apr 8, 2014 at 12:23 PM, Tom Lane wrote:
> Once you've got a ulimit in place so that malloc eventually fails with
> ENOMEM, the backend process should print a memory context dump on stderr
> when it hits that. Make sure your logging setup captures the process
> stderr someplace (logging_
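For reference, the setup Tom describes might look like the following (the 1.5 GiB figure comes from Steve's earlier message; the data directory path is a placeholder):

```shell
# Cap the address space so malloc() fails with ENOMEM instead of the
# kernel OOM killer firing; start postgres from the same shell so the
# limit is inherited.
ulimit -v 1572864                        # 1.5 GiB, as in Steve's test
pg_ctl -D /var/lib/pgsql/data -l serverlog start
# With logging_collector = on, the memory context dump the backend
# writes to stderr on ENOMEM ends up in the server log.
```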
Re-posting, see quote
David Johnston wrote
>
> Pavel Stehule wrote
>> 2014-04-04 14:16 GMT+02:00 Tjibbe <tjibbe@ >:
>>
>>> Thanks that works!
>>>
>>> Little bit confusing ERROR.
>>>
>>
>> yes, it could be - but it is hard to fix, because it is based on the
>> cooperation of two worlds - plpgsql
Based on your first question, a customer id by itself is not a valid designator;
you have to specify (or link in) the group as well.
Not tested but should work:
FOREIGN KEY (template, group) REFERENCES customer (cust_id, group)
Depends on whether you want to allow cross-group associations if you nee
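Spelled out a bit (still untested; I've used cust_group_id from the table definition rather than the bare word group, which is reserved in SQL), the referenced column pair must carry a unique constraint before the composite foreign key is legal:

```sql
-- The referenced pair must be UNIQUE (or the primary key) first:
ALTER TABLE customer
    ADD CONSTRAINT customer_id_group_uq UNIQUE (cust_id, cust_group_id);

-- Then the template must point at a customer in the same group:
ALTER TABLE customer
    ADD CONSTRAINT customer_template_fk
        FOREIGN KEY (template, cust_group_id)
        REFERENCES customer (cust_id, cust_group_id);
```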
Hi All
we have a table like so:
customer (
cust_id integer not null primary key,
cust_group_id integer not null,
group_account_id integer not null,
cust_name varchar not null,
...
)
we want to force the cust_group_id to be unique across all
group
On 04/08/2014 04:09 PM, Alvaro Herrera wrote:
Elliot S wrote:
Is there a discussion somewhere regarding this line from the docs on
transaction isolation: "When you select the level Read Uncommitted
you really get Read Committed, and phantom reads are not possible in
the PostgreSQL implementation
Elliot S wrote:
> Is there a discussion somewhere regarding this line from the docs on
> transaction isolation: "When you select the level Read Uncommitted
> you really get Read Committed, and phantom reads are not possible in
> the PostgreSQL implementation of Repeatable Read"? I get that RU is
>
Is there a discussion somewhere regarding this line from the docs on
transaction isolation: "When you select the level Read Uncommitted you
really get Read Committed, and phantom reads are not possible in
the PostgreSQL implementation of Repeatable Read"? I get that RU is the
same as RC and why
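A quick way to see the first half of that docs sentence in a psql session (the reported level is what you asked for; only the behavior downgrades):

```sql
-- PostgreSQL accepts READ UNCOMMITTED but runs the transaction with
-- READ COMMITTED semantics; dirty reads are never possible.
BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SHOW transaction_isolation;   -- reports the requested level
-- ... queries here still see only committed data ...
COMMIT;
```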
On Sat, Apr 5, 2014 at 3:48 AM, Anupama Ramaswamy wrote:
> Scenario 1
>
> Suppose the secondary server is lagging behind the primary at the time of
> primary failure, will the secondary completely catch up to the primary
> state, before stopping replication. Or what in the process
Thanks a lot. I have implemented it in the same format. You people are awesome.
Thank you once again.
On Tue, Apr 8, 2014 at 8:58 PM, Hannes Erven wrote:
> Hi,
>
>
> On 2014-04-08 15:27, Vick Khera wrote:
> [...]
>
> Problem 2: you cannot have a second process skip over locked rows.
>>
>
>
> In fact, you
Steve Kehlet writes:
>> Did you either profile or debug it to see what is going on?
> I would love to learn more about how to do this, to get to the bottom of
> the memory usage. I can google around, or can you suggest any reads?
Once you've got a ulimit in place so that malloc eventually fa
Steve Kehlet writes:
> But for now, the devs are asking me for some way to put a cap on a postgres
> query's total memory consumption. I'm familiar with the available settings
> on the "Resource Consumption" docs (and you can see my settings in my gist
> above, including work_mem turned way down t
>
> Did you check if your OS allows you to limit memory usage per user /
> process basis?
>
I'll definitely dig more into this. I found some stuff saying it isn't as
easy as one would expect, i.e. not just a ulimit and done. Or maybe it is,
in this case. I will play around.
> Did you either prof
Kevin Grittner wrote:
> Please post the output of this:
>
> numactl --hardware
Oh, it would also help in making specific suggestions if you could
show the output of:
mount | grep cpuset
... and a listing of "file names" in the mounted directory. There
is some variation among distros in both
Hi Steve,
Did you check if your OS allows you to limit memory usage on a per-user or
per-process basis?
Did you either profile or debug it to see what is going on?
When memory consumption grows, doesn't it ever swap out?
Thanks,
A.A.
On Tue, Apr 8, 2014 at 10:48 AM, Steve Kehlet wrote:
> I'm runnin
I'm running into some scenarios where errant postgres queries are consuming
all memory and bringing down production boxes. I'm running Postgres 9.3.4
on CentOS 5.x VMs under XenServer. My custom config is here:
https://gist.github.com/skehlet/9984666
The incidents may be related to reports of an a
On Tue, Apr 8, 2014 at 11:28 AM, Hannes Erven wrote:
> On 2014-04-08 15:27, Vick Khera wrote:
> [...]
>>
>> Problem 2: you cannot have a second process skip over locked rows.
>
>
>
> In fact, you can: use "FOR UPDATE NOWAIT" and catch any errors.
>
> e.g. SELECT * FROM jobqueue WHERE id=? FOR UPDA
Hi,
On 2014-04-08 15:27, Vick Khera wrote:
[...]
Problem 2: you cannot have a second process skip over locked rows.
In fact, you can: use "FOR UPDATE NOWAIT" and catch any errors.
e.g. SELECT * FROM jobqueue WHERE id=? FOR UPDATE NOWAIT
-hannes
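Fleshing out Hannes' one-liner (the jobqueue table and its columns are hypothetical): the worker tries to lock a candidate row, and treats the lock-not-available error as "someone else has it, try another id":

```sql
-- NOWAIT makes the SELECT raise SQLSTATE 55P03 instead of blocking when
-- another worker already holds the row lock; the application catches
-- that error and retries with a different candidate id.
BEGIN;
SELECT id, payload
  FROM jobqueue
 WHERE id = 42            -- candidate id picked by this worker
   AND NOT taken
   FOR UPDATE NOWAIT;
UPDATE jobqueue SET taken = true WHERE id = 42;
COMMIT;
```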
--
Sent via pgsql-general mailing list (p
On Thu, Apr 3, 2014 at 4:00 PM, John R Pierce wrote:
> an important thing in getting decent wear leveling life with SSDs is to keep
> them under about 70% full.
You have to do that at provisioning time in the drive. Ie, once you
layer a file system on it, the drive doesn't know what's "empty" and
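One way to give the filesystem layer a voice after the fact, if the drive and filesystem support TRIM/discard (Linux example, mount point is a placeholder):

```shell
# fstrim reports unused filesystem blocks back to the SSD so they can
# rejoin the spare area used for wear leveling:
fstrim -v /var/lib/pgsql
```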
On Fri, Apr 4, 2014 at 12:03 AM, François Beausoleil
wrote:
> Our workload is lots of data import, followed by many queries to summarize
> (daily and weekly reports). Our main table is a wide table that represents
> Twitter and Facebook interactions. Most of our reports work on a week's worth
>
Problem 1: how (and why) are you locking rows in table Y?
Problem 2: you cannot have a second process skip over locked rows. It
sounds to me like a job queue... what you want to do is make it work
*really* fast to mark the job as taken and commit that work so the
locks are released. Then go do the
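The "mark it taken and commit fast" pattern might look like this (same hypothetical jobqueue schema as elsewhere in the thread):

```sql
-- Claim a job in a short transaction so its row lock is held only
-- briefly; if another worker wins the race, the UPDATE matches zero
-- rows and this worker simply retries.
BEGIN;
UPDATE jobqueue
   SET taken = true, taken_by = 'worker-1'
 WHERE id = (SELECT min(id) FROM jobqueue WHERE NOT taken)
   AND NOT taken;
COMMIT;
-- ... do the actual (slow) work here, outside any lock, then mark the
-- job done in another short transaction ...
```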
Hello,
We are currently testing synchronous streaming replication between two PG
9.1 boxes in a test environment, with the slave as hot_standby.
We appear to have it working correctly (changes appear on both), etc.
However, the following query from (
http://www.dansketcher.com/2013/01/27/monitori
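For comparison, these are the raw ingredients such monitoring queries are usually built from on 9.1 (pg_stat_replication requires superuser on the primary):

```sql
-- On the primary: per-standby WAL positions.
SELECT client_addr, state, sent_location, replay_location
  FROM pg_stat_replication;

-- On the standby: last WAL received vs. last WAL replayed.
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();
```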
Thomas Kellerer wrote:
> I usually create new databases with an explicit owner which is the
> "application user" that is used by
> the application to connect to the database.
>
> I recently noticed when I do the following:
>
> postgres=# create user arthur identified by 'secret';
> postgres=# cr
Alberto Cabello Sánchez, 08.04.2014 13:05:
>> But I would have expected the owner of the database to be the owner of any
>> "object" that is created during database initialization.
>>
>> Do I have a wrong assumption here?
>
> It seems you do. See:
>
> Why Postgresql Public Schema Is Not Owned B
On 08/04/2014 04:47, Michael Paquier wrote:
> On Tue, Apr 8, 2014 at 12:33 PM, Gaurav Jindal wrote:
>> How do I import the .sql file using psql or pgAdmin? Can anyone guide me
>> with details and step-by-step instructions?
>>
>> I tried \i filename.sql command but permission denied. I don't know how
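Assuming the "permission denied" comes from the operating system (the file isn't readable by the OS user running psql), two common ways to run a script, with placeholder names:

```shell
# Make the file readable first if needed: chmod +r filename.sql
psql -U myuser -d mydb -f filename.sql     # run a script file
psql -U myuser -d mydb < filename.sql      # or feed it on stdin
```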
On Tue, 08 Apr 2014 12:45:29 +0200
Thomas Kellerer wrote:
>
> But I would have expected the owner of the database to be the owner of any
> "object" that is created during database initialization.
>
> Do I have a wrong assumption here?
It seems you do. See:
Why Postgresql Public Schema Is Not
Hello all,
I usually create new databases with an explicit owner which is the "application
user" that is used by the application to connect to the database.
I recently noticed when I do the following:
postgres=# create user arthur identified by 'secret';
postgres=# create database guide owner =
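The surprise the thread goes on to discuss can be reproduced directly; a sketch (note that in PostgreSQL the password clause is PASSWORD, not Oracle's IDENTIFIED BY):

```sql
CREATE USER arthur PASSWORD 'secret';
CREATE DATABASE guide OWNER = arthur;
-- The public schema inside 'guide' is copied from template1 and remains
-- owned by the bootstrap superuser (typically 'postgres'), not by arthur.
-- To hand it over explicitly:
ALTER SCHEMA public OWNER TO arthur;
```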
Rebecca Clarke wrote:
> On a side
> not, we're not doing a vacuumdb, but individual vacuum analyze statements on
> each table. Not sure if
> that makes any difference.
You vacuum the catalog tables as well, right?
Yours,
Laurenz Albe
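Laurenz's point, spelled out (catalog names below are just the usual suspects): a per-table vacuum script generated from the user tables skips the system catalogs, while a bare VACUUM covers them:

```sql
-- A bare VACUUM (ANALYZE) with no table name processes every table the
-- user is allowed to vacuum, system catalogs included:
VACUUM ANALYZE;

-- Or hit the catalogs explicitly alongside a per-table script:
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_statistic;
```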