pgcrypto does not work for this scenario as far as I know.

pgcrypto enables me to encrypt my data and let only a user with the
right password (or key or whatever) decrypt it, right? So if I run it in
a test environment without this password the application is broken.

I still want to use these
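For reference, the pgcrypto model being described is symmetric encryption
with a shared secret; a minimal sketch (table, column, and key are
hypothetical, and the extension must be installed):

-- requires: CREATE EXTENSION pgcrypto;
-- encrypt at write time:
INSERT INTO users (email_enc)
VALUES (pgp_sym_encrypt('jan@example.com', 'prod-secret'));

-- decryption only works for a caller holding the key:
SELECT pgp_sym_decrypt(email_enc, 'prod-secret') FROM users;

A test environment without 'prod-secret' cannot run the second query,
which is the breakage described above.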
Kevin Goess wrote:
> We have a table "contexts" with 1.6 million rows, and a table
> "articles" with 1.4 million rows, where an "article" is a particular
> kind of "context". We want to select from a join on those two tables
> like this
>
> SELECT COUNT(*)
> FROM contexts
> JOIN article
Hi,
On Mon, Mar 19, 2012 at 12:12 AM, Florent THOMAS wrote:
> How do you query multi servers and multi databases on postgresql?
Look at this http://wiki.postgresql.org/wiki/PL/Proxy
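In PL/Proxy, a remote call is just a function whose body says which
cluster to use and which partition to run on; a minimal sketch along the
lines of the documented example (cluster name hypothetical):

CREATE FUNCTION get_user_email(i_username text)
RETURNS SETOF text AS $$
    CLUSTER 'usercluster';
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;

Calling get_user_email() on the proxy database then runs the function of
the same name on the selected partition.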
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk
Janning Vygen writes:
> pgcrypto does not work for this scenario as far as I know.
>
> pgcrypto enables me to encrypt my data and let only a user with the
> right password (or key or whatever) decrypt it, right? So if I run it
> in a test environment without this password the application is broken.
On 03/18/12 1:12 PM, Florent THOMAS wrote:
> How do you query multi servers and multi databases on postgresql?

Other than PL/Proxy, there's dblink, in the contrib collection. This
lets you query another database or server from within a SQL query; see
the sketch below.
--
john r pierce
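A minimal dblink sketch (connection string and remote table hypothetical;
dblink(connstr, query) returns SETOF record, so the caller supplies the
column definition list):

-- requires the dblink module from contrib
SELECT *
FROM dblink('host=otherhost dbname=otherdb',
            'SELECT id, title FROM articles')
     AS remote(id integer, title text);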
On Mon, Mar 19, 2012 at 10:12:01AM +0100, hari.fu...@gmail.com wrote:
> Janning Vygen writes:
> > pgcrypto does not work for this scenario as far as I know.
> >
> > pgcrypto enables me to encrypt my data and let only a user with the
> > right password (or key or whatever) decrypt it, right? So if
Great thanks to all of you. Regards
On 19/03/2012 09:58, Sergey Konoplev wrote:
> Hi,
>
> On Mon, Mar 19, 2012 at 12:12 AM, Florent THOMAS wrote:
>> How do you query multi servers and multi databases on postgresql?
>
> Look at this http://wiki.postgresql.org/wiki/PL/Proxy
On Mon, Mar 19, 2012 at 9:24 AM, Albe Laurenz wrote:
> That means that your statistics are not accurate.
>
> As a first measure, you should ANALYZE the tables involved and see if
> the problem persists. If yes, post the new plans.
>

Aha, thanks, that explains why my test table with one row was so bad.
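Concretely, the suggested first measure is just this, with the table
names taken from the original post:

ANALYZE contexts;
ANALYZE article;
-- then re-run EXPLAIN ANALYZE on the join to see if the estimates improve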
Try this: http://klicman.org/throttle/
Kiriakos
On Mar 19, 2012, at 12:06 AM, Aleksey Tsalolikhin wrote:
> Hi. When pg_dump runs, our application becomes inoperative (too
> slow). I was going to ask if nice'ing the postgres backend process
> that handles the COPY would help but I just realized probably the
> pg_dump takes out locks when it
In response to Janning Vygen:
>
> I am working on postgresql 9.1 and loving it!
>
> Sometimes we need a full database dump to test some performance issues
> with real data.
>
> Of course we don't like to have sensitive data like bunches of e-mail
> addresses on our development machines as they
Kevin Goess writes:
> On Mon, Mar 19, 2012 at 9:24 AM, Albe Laurenz wrote:
>> That means that your statistics are not accurate.
> Aha, thanks, that explains why my test table with one row was so bad. But
> even with all freshly ANALYZE'd tables, I still see the query reverting to
> a sequential scan
On 19.03.2012 at 13:22, Bill Moran wrote:
> In response to Janning Vygen:
>>
>> I am working on postgresql 9.1 and loving it!
>>
>> Sometimes we need a full database dump to test some performance issues
>> with real data.
>>
>> Of course we don't like to have sensitive data like bunches of e-mail
Hi all of you,

1 - Is there a way to have conditions for committing transactions like
in Oracle:
http://www.scribd.com/doc/42831667/47/Validation-conditionnelle-de-transaction-62

2 - Is there a way to declare local variables for a SQL statement
without being in a function?

Regards
2. The short answer is No. I've got the same issue. I come from a
different SQL product that had a CREATE VARIABLE statement which was
good for the session. With Postgres, I've created a sessionsettings
table and a bunch of functions to get a value by variable name and use
it. My perceived downside is that this cause
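One shape that workaround might take (all names hypothetical); note that
a plain table is shared by all sessions, so truly per-session values need
a temp table or a key such as pg_backend_pid():

CREATE TABLE sessionsettings (
    name  text PRIMARY KEY,
    value text
);

-- getter usable inside any SQL statement:
CREATE FUNCTION get_var(p_name text) RETURNS text AS $$
    SELECT value FROM sessionsettings WHERE name = p_name;
$$ LANGUAGE sql STABLE;

-- usage, e.g.:
-- SELECT * FROM orders WHERE region = get_var('current_region');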
On Mon, Mar 19, 2012 at 3:28 PM, Florent THOMAS wrote:
> Hi all of you,
>
> 1 - Is there a way to have conditions for committing transactions like in
> Oracle:
> http://www.scribd.com/doc/42831667/47/Validation-conditionnelle-de-transaction-62
>
> 2 - Is there a way to declare local variables for
Hello,
I'm doing some SELECTs from information_schema.views to find views with
dependencies on other views, i.e.
SELECT table_name FROM information_schema.views WHERE view_definition ILIKE
'%myviewname%';
and each is taking about 1/2 a second, which is getting a bit slow for my use.
There are
On Mar 19, 2012, at 10:59 AM, Welty, Richard wrote:
> I just finished this thread from May of last year, and am wondering if this
> still represents consensus thinking about postgresql deployments in the EC2
> cloud:
>
> http://postgresql.1045698.n5.nabble.com/amazon-ec2-td4368036.html
>
Yes,
On Thu, Mar 01, 2012 at 02:01:31PM -0800, Lonni J Friedman wrote:
> I've got a 3 node cluster (1 master/2 slaves) running 9.0.x with
> streaming replication. I'm in the planning stages of upgrading to
> 9.1.x, and am looking into the most efficient way to do the upgrade
> with the goal of minimizing
On Mon, Mar 19, 2012 at 12:30 PM, Bruce Momjian wrote:
> On Thu, Mar 01, 2012 at 02:01:31PM -0800, Lonni J Friedman wrote:
>> I've got a 3 node cluster (1 master/2 slaves) running 9.0.x with
>> streaming replication. I'm in the planning stages of upgrading to
>> 9.1.x, and am looking into the most efficient way to do the upgrade
-- This works.
select
    TRG.tgname, TFX.proname, pg_get_functiondef(TFX.oid) as fdef
from
    pg_trigger TRG
    inner join pg_proc TFX on TFX.oid = TRG.tgfoid
where
    TRG.tgisinternal = true
-- This blows up.
-- SQL state: 42809
-- ERROR: "array_agg" is an aggregate function
select
T
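The failing query is cut off above, but SQL state 42809 with that message
is what pg_get_functiondef() raises when it is handed the OID of an
aggregate such as array_agg. The usual guard, assuming the join runs over
all of pg_proc, is to filter aggregates out (proisagg is the column in
this era; PostgreSQL 11+ replaced it with prokind):

select P.proname, pg_get_functiondef(P.oid) as fdef
from pg_proc P
where not P.proisagg;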
On Mon, Mar 19, 2012 at 11:16 AM, Ben Chobot wrote:
> On Mar 19, 2012, at 10:59 AM, Welty, Richard wrote:
>
> I just finished this thread from May of last year, and am wondering if this
> still represents consensus thinking about postgresql deployments in the EC2
> cloud:
>
> http://postgresql.104
writes:
> -- This blows up.
> -- SQL state: 42809
> -- ERROR: "array_agg" is an aggregate function
> select
> TRG.tgname, TFX.proname, pg_get_functiondef(TFX.oid) as fdef
> from
> pg_trigger TRG
> inner join pg_proc TFX on TFX.oid = TRG.tgfoid
> where
> TRG.tgisinternal = true
>
On Mon 3/19/2012 4:30 PM Mike Christensen writes:
> I've been running my site on RackSpace CloudServers (similar to EC2)
> and have been getting pretty good performance, though I don't have
> huge amounts of database load.
> One advantage, though, is RackSpace allows for hybrid solutions so I
> could
On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
> Hi. When pg_dump runs, our application becomes inoperative (too
> slow). I was going to ask if nice'ing the postgres backend process
> that handles the COPY would help but I just realized probably the
> pg_dump takes out locks when it
On 19/03/2012 16:57, Simon Riggs wrote:
> On Mon, Mar 19, 2012 at 3:28 PM, Florent THOMAS wrote:
>> Hi all of you,
>>
>> 1 - Is there a way to have conditions for committing transactions like in
>> Oracle:
>> http://www.scribd.com/doc/42831667/47/Validation-conditionnelle-de-transaction-62
>>
>> 2 - Is there
On 03/19/2012 01:51 PM, Guillaume Lelarge wrote:
> On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
>> Hi. When pg_dump runs, our application becomes inoperative (too
>> slow)
>
> Depends on what your app is doing. It doesn't block any usual use of
> the database: DML statements are all accepted. But
The data anonymizer process is flawed because you are one misstep away from
data spillage. Sensitive data should be stored encrypted to begin with. For
test databases you or your developers can invoke a process that replaces the
real encrypted data with fake encrypted data (for which everybody has the
key/password.)
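A sketch of that replacement step with pgcrypto (table, column, and key
names hypothetical):

-- overwrite real ciphertext with ciphertext of generated values,
-- encrypted under a key the whole team knows:
UPDATE users
SET email_enc = pgp_sym_encrypt('user' || id || '@example.com',
                                'shared-test-key');

The production key never has to leave production; the test database only
ever holds data encrypted under the shared test key.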
In response to Kiriakos Georgiou:
> The data anonymizer process is flawed because you are one misstep away from
> data spillage.

In our case, it's only one layer.

Other layers that exist:
* The systems where this test data is instantiated can't send email
* The systems where this exists have li
On Mon, 2012-03-19 at 15:30 -0400, Bruce Momjian wrote:
> On Thu, Mar 01, 2012 at 02:01:31PM -0800, Lonni J Friedman wrote:
> > I've got a 3 node cluster (1 master/2 slaves) running 9.0.x with
> > streaming replication. I'm in the planning stages of upgrading to
> > 9.1.x, and am looking into the
On Mon, Mar 19, 2012 at 3:51 PM, Guillaume Lelarge
wrote:
> On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
>> Hi. When pg_dump runs, our application becomes inoperative (too
>> slow). I was going to ask if nice'ing the postgres backend process
>> that handles the COPY would help but
On Mar 19, 2012, at 5:55 PM, Bill Moran wrote:
>
>> Sensitive data should be stored encrypted to begin with. For test databases
>> you or your developers can invoke a process that replaces the real encrypted
>> data with fake encrypted data (for which everybody has the key/password.)
>> Or if the
I wrote:
> You've still got a nasty join-size estimation error:
>> -> Nested Loop (cost=6.18..1939.43 rows=411736 width=8) (actual
>> time=0.203..3.487 rows=35 loops=1)
> It's not apparent why that's so far off ...
What PG version is this, anyway? It strikes me that this estimation
error might