Andreas Kretschmer wrote:
> A. Kretschmer wrote:
>
> > Hi,
> >
> > just to be sure, is it still (8.4) not possible to use RETURNING within
> > another INSERT?
>
> Thanks for all the replies. It is not really a problem; I will write a
> benchmark to compare the new writable CTE (in the 8.5 alpha) with
Hello everyone,
I have hit a wall on completing a solution I am working on. Originally, the
app used a DB per user (on MyISAM); that solution did not fare so well in
reliability or performance. I have been increasingly interested in Postgres
lately.
Currently, I have about 30-35k users/database
undisclosed user wrote:
Hello everyone,
I have hit a wall on completing a solution I am working on.
Originally, the app used a DB per user (on MyISAM); that solution did
not fare so well in reliability or performance. I have been
increasingly interested in Postgres lately.
Currently, I ha
Frank,
I had the same question a while ago, and another thing that made me think
was the amount of data per user.
In the end, I decided on using a single DB and single schema and add a
clause to split everything by each customer (customer_id).
I then added an index on that column and my code b
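A minimal sketch of that single-schema approach, with hypothetical table and column names (only customer_id is taken from the message above):

  -- Hypothetical multi-tenant table: every row carries the owning customer.
  CREATE TABLE orders (
      order_id    bigserial PRIMARY KEY,
      customer_id integer NOT NULL,
      created_at  timestamptz NOT NULL DEFAULT now(),
      payload     text
  );

  -- Index the tenant column so per-customer queries stay cheap.
  CREATE INDEX orders_customer_id_idx ON orders (customer_id);

  -- Every application query is then scoped to one tenant.
  SELECT order_id, created_at
    FROM orders
   WHERE customer_id = 42;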
On 14 Nov 2009, at 22:27, Naoko Reeves wrote:
> I have an encrypted column that uses the encrypt function.
> Querying against this column is almost unacceptable – returning 12 rows
> took 25,908 ms.
> The query was simply SELECT decrypt(phn_phone_enc) FROM phn WHERE
> decrypt(phn_phone_enc,'xxx','xxx') L
undisclosed user wrote:
I have hit a wall on completing a solution I am working on. Originally,
the app used a DB per user (on MyISAM); that solution did not fare so
well in reliability or performance. I have been increasingly interested
in Postgres lately.
Currently, I have about 30-35k us
On Sat, Nov 14, 2009 at 5:08 PM, John R Pierce wrote:
> Naoko Reeves wrote:
>>
>> I have an encrypted column that uses the encrypt function.
>>
>> Querying against this column is almost unacceptable – returning 12 rows
>> took 25,908 ms.
>>
>> The query was simply Select decrypt(phn_phone_enc) FROM phn WHE
As Alban pointed out, encrypting the search value and comparing it with the
stored encrypted value is very fast, though it can't do a LIKE search.
After I received valuable input from Merlin, Bill and John, I did some
research regarding "search against encrypted field" in general and, as
everyone advised, I must
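For reference, the equality-search idea mentioned above can look roughly like this with pgcrypto's encrypt()/decrypt(); the key, the cipher, and the phn_id column are placeholders, not values from this thread, and it assumes phn_phone_enc was written with encrypt() using that same key and cipher:

  -- Slow: decrypts every row before comparing, so no index can help.
  SELECT phn_id
    FROM phn
   WHERE convert_from(decrypt(phn_phone_enc, 'secretkey', 'aes'), 'UTF8') = '5551234567';

  -- Fast: encrypt the search value once and compare ciphertexts;
  -- an ordinary index on phn_phone_enc can then be used (equality only, no LIKE).
  SELECT phn_id
    FROM phn
   WHERE phn_phone_enc = encrypt(convert_to('5551234567', 'UTF8'), 'secretkey', 'aes');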
undisclosed user writes:
> I have hit a wall on completing a solution I am working on. Originally, the
> app used a DB per user (on MyISAM); that solution did not fare so well in
> reliability or performance. I have been increasingly interested in Postgres
> lately.
> Currently, I have about 30
On Sun, Nov 15, 2009 at 1:28 AM, undisclosed user wrote:
> Hello everyone,
> I have hit a wall on completing a solution I am working on. Originally, the
> app used a DB per user (on MyISAM); that solution did not fare so well in
> reliability or performance. I have been increasingly interested i
On Sun, Nov 15, 2009 at 11:54 AM, Merlin Moncure wrote:
>
> Use schema. Here's a pro tip: if you have any sql or pl/pgsql
> functions, you can use the same function body across all the schemas as
> long as you discard the plans when you want to move from schema to
> schema.
I too vote for schemas.
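A rough sketch of the schema-per-tenant pattern and of the plan-discarding tip quoted above; the schema name and the public.orders template table are invented:

  -- One schema per tenant, identical table layout in each.
  CREATE SCHEMA tenant_00042;
  CREATE TABLE tenant_00042.orders
      (LIKE public.orders INCLUDING DEFAULTS INCLUDING INDEXES);

  -- Point the session at one tenant; shared SQL/plpgsql functions that use
  -- unqualified table names now resolve inside that schema.
  SET search_path TO tenant_00042, public;

  -- Cached plpgsql plans may still reference the previous tenant's tables,
  -- so throw them away when switching schemas, per the tip above.
  DISCARD PLANS;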
undisclosed user wrote:
Currently, I have about 30-35k users/databases. The general table
layout is the same; only the data is different. I don't need to
share data across databases. Very similar to a multi-tenant design.
Do these users make their own arbitrary SQL queries? Or is all the
Zdenek Kotala wrote:
1) Yeah I like pg_ctl init
"pg_ctl init" will be preferred method and initdb will
disappear from usr/bin in the future.
I agree with this position. My own database wrapper scripts work this
way already, and it would be nice for them to have one more comman
If I were to switch to a single DB/single schema format shared among all
users, how can I back up each user individually?
Frank
On Sat, Nov 14, 2009 at 10:28 PM, undisclosed user <lovetodrinkpe...@gmail.com> wrote:
> Hello everyone,
>
> I have hit a wall on completing a solution I am working on
undisclosed user wrote:
If I were to switch to a single DB/single schema format shared among
all users, how can I back up each user individually?
depending on how many tables, etc., I suppose you could use a separate
series of SELECT statements ...
but if this is a requirement, it certainly put
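As a sketch of the "separate series of SELECT statements" idea, per-customer data could be exported with COPY; the table, column, and file names below are placeholders:

  -- Server-side export: the file lands on the database server,
  -- and COPY TO a file needs superuser privileges.
  COPY (SELECT * FROM orders WHERE customer_id = 42)
    TO '/var/backups/customer_42_orders.csv' WITH CSV HEADER;

  -- Client-side equivalent from psql (file lands where psql runs):
  --   \copy (SELECT * FROM orders WHERE customer_id = 42) TO 'customer_42_orders.csv' WITH CSV HEADER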
On Sat, 2009-11-14 at 15:07 +0100, Zdenek Kotala wrote:
> extend pg_ctl functionality and add an "init" command which does the same thing
> as initdb
If we did add an extra option then the option would be "initdb" not
"init". It would take us all years to remove all evidence of the phrase
"initdb" from t
Hi
I need some help with our postgresql.conf file. I would appreciate it if
someone could look at the values and tell me if they look all right or if I
need to change anything.
The db server has 4 GB of memory and one quad-core CPU (2.53 GHz).
The hard drives are on an iSCSI array and are configured as f
Simon Riggs writes:
> On Sat, 2009-11-14 at 15:07 +0100, Zdenek Kotala wrote:
>> extend pg_ctl functionality and add an "init" command which does the same thing
>> as initdb
> If we did add an extra option then the option would be "initdb" not
> "init". It would take us all years to remove all evidence
On Sun, Nov 15, 2009 at 2:43 PM, BuyAndRead Test wrote:
> Hi
>
> I need some help with our postgresql.conf file. I would appreciate it if
> someone could look at the values and tell me if they look all right or if I
> need to change anything.
>
> The db server has 4 GB of memory and one quad core CPU (2
Thanks for the quick and helpful reply.
Yes, the storage array has a battery-backed cache; it's a Dell PowerVault
MD3000i with dual controllers.
This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
BuyAndRead Test wrote:
This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?
I'd keep it around 1-2GB shared_buffers, and let the rest of the memory
be used as f
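To verify what the server actually ended up with after editing postgresql.conf, a quick check from SQL; the setting names beyond shared_buffers are the usual memory-related suspects, not values quoted from this thread:

  -- Inspect the current values of the memory-related settings discussed here.
  SELECT name, setting, unit
    FROM pg_settings
   WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');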
2009/11/14 Thom Brown :
> 2009/11/14 Thom Brown
>>
>> Mr Fetter has allowed me to post his lightning talk on lightning talks:
>> http://vimeo.com/7602006
>> Thom
>
> Harald's lightning talk is also available with his
> permission: http://vimeo.com/7610987
> Thom
Sorry, I've only just noticed that I'
"PostgreSQL does not support specific column updates in triggers."
I found this statement on a blog.
Is there a workaround for this?
I've attempted using 'new' (referring to the specific column) without success.
Bob
"Bob Pawley" writes:
> "PostgreSQL does not support specific column updates in triggers."
> I found this statement on a blog.
> Is there a workaround for this?
If you'd explain what you think that statement means, maybe we could
help you ...
regards, tom lane
I'm trying to fire a trigger from an update.
However, the trigger fires whenever any column is updated.
I have columns pump1 and pump2 and column serial.
When pump1 is updated the trigger function performs properly. (one row is
returned)
When pump2 is updated the trigger function returns two
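In the 8.x releases there is no per-column trigger syntax, so the usual workaround is to fire on every UPDATE and bail out inside the function unless the column of interest actually changed. A minimal sketch using the columns named above; the table name "pumps" is an assumption:

  CREATE OR REPLACE FUNCTION pump1_changed() RETURNS trigger AS $$
  BEGIN
      -- Skip everything unless pump1 really changed.
      IF NEW.pump1 IS DISTINCT FROM OLD.pump1 THEN
          RAISE NOTICE 'pump1 changed for serial %', NEW.serial;
          -- ... real work goes here ...
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER pump1_update
      AFTER UPDATE ON pumps
      FOR EACH ROW
      EXECUTE PROCEDURE pump1_changed();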
"Bob Pawley" writes:
> Hope this elucidates you?
No, it's all handwaving. In particular, showing only a fragment from
a case that does work as you expect doesn't illuminate what's not
working. Please show the whole table definition, the whole trigger,
and the specific case that's not doing wha
When running pgsql2shp, it truncates fields that are over 10 characters. How
can I prevent this from occurring?
John
--
John J. Mitchell
On Sunday 15 November 2009 5:18:20 pm Tom Lane wrote:
> "Bob Pawley" writes:
> > Hope this elucidates you?
>
> No, it's all handwaving. In particular, showing only a fragment from
> a case that does work as you expect doesn't illuminate what's not
> working. Please show the whole table definiti
Tom Lane wrote:
Simon Riggs writes:
If we did add an extra option then the option would be "initdb" not
"init". It would take us all years to remove all evidence of the phrase
"initdb" from the mailing lists and our minds.
"init" is already embedded in various packagers' initscripts.
I'm trying to read "money" field using PQgetvalue (PostgreSQL 8.3.7). The
function returns 9 bytes, smth like 0h 0h 0h 0h 0h 0h 14h 0h 0h, for the
value '$50.2'. I could not find description anywhere on how to convert the
binary data into, for example, a double precision number.
Would you please h
Konstantin Izmailov wrote:
I'm trying to read "money" field using PQgetvalue (PostgreSQL 8.3.7).
The function returns 9 bytes, smth like 0h 0h 0h 0h 0h 0h 14h 0h 0h,
for the value '$50.2'. I could not find description anywhere on how to
convert the binary data into, for example, a double precis
Konstantin Izmailov writes:
> I'm trying to read a "money" field using PQgetvalue (PostgreSQL 8.3.7). The
> function returns 9 bytes, something like 0h 0h 0h 0h 0h 0h 14h 0h 0h, for the
> value '$50.2'. I could not find a description anywhere of how to convert the
> binary data into, for example, a double
Right, the value is '$51.20'! Now I understand how to interpret the bytes -
thank you!
I had to work with an existing database and I do not know why they still use
"money" fields.
On Sun, Nov 15, 2009 at 9:38 PM, John R Pierce wrote:
> Konstantin Izmailov wrote:
>
>> I'm trying to read "money" f
I'm planning to use multiple statements via libpq. Before I start coding,
I'm trying to understand whether there are any limitations on passing parameters.
E.g. would the following work:
PQexecParams(conn, "BEGIN;INSERT INTO tbl VALUES($1,$2);SELECT
lastval();SELECT * INTO AUDIT FROM (SELECT $3, 'tbl act
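One caveat worth noting here: PQexecParams accepts at most one SQL command per call (a documented libpq restriction), so a semicolon-separated batch like the one above will not work with parameters. A common workaround is to wrap the steps in a single server-side function and call it as one parameterized statement; the sketch below invents the audit table layout and assumes tbl has a serial column so that lastval() is meaningful, as in the original string:

  -- Call from libpq as:  SELECT insert_with_audit($1, $2, $3);
  CREATE OR REPLACE FUNCTION insert_with_audit(a integer, b text, actor text)
  RETURNS bigint AS $$
  DECLARE
      new_id bigint;
  BEGIN
      INSERT INTO tbl VALUES (a, b);
      new_id := lastval();
      INSERT INTO audit VALUES (actor, 'tbl action', new_id);  -- audit layout is invented
      RETURN new_id;
  END;
  $$ LANGUAGE plpgsql;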