I'm working on a project which uses Postgres (great database! I love it).
We're at a stage where I need to implement a mechanism to prevent data
modification.
I'm thinking of digital signatures (maybe RSA) on each row. If there's a
modification, the signature doesn't verify.
However b
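For what it's worth, a lighter-weight cousin of per-row RSA signatures is a
per-row HMAC. A rough sketch, assuming pgcrypto's hmac() is installed and
using made-up table/column names (note that if the key is readable from the
database, this only detects accidental modification, not a malicious DBA):

    alter table invoices add column row_mac bytea;

    -- MAC over a canonical serialization of the protected columns;
    -- the key should live outside the database
    update invoices
       set row_mac = hmac(id::text || '|' || amount::text,
                          'secret-key', 'sha256');

    -- verification: any row whose MAC no longer matches was modified
    select id
      from invoices
     where row_mac is distinct from
           hmac(id::text || '|' || amount::text, 'secret-key', 'sha256');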
I am trying to install PostgreSQL using postgresql-8.3-dev1 on my Windows XP
machine, but I get a message that reads
Fail to create a temporary directory
Does anyone know why I am getting this error message?
Thanks in advance
--
Happiness has many doors, and when one of them closes another opens,
"hjenkins" <[EMAIL PROTECTED]> writes:
> On the subject of the COPY command
> (http://www.postgresql.org/docs/current/interactive/sql-copy.html), is it
> the case that the HEADER, QUOTE, escape, FORCE QUOTE, and FORCE NOT NULL
> options can only be used in CSV mode? If so, why? A tab-delimited table
Hello, all,
On the subject of the COPY command
(http://www.postgresql.org/docs/current/interactive/sql-copy.html), is it
the case that the HEADER, QUOTE, escape, FORCE QUOTE, and FORCE NOT NULL
options can only be used in CSV mode? If so, why? A tab-delimited table
with a header line and quoted st
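For reference, here is roughly what the two modes accept in 8.3 (hypothetical
table and file names; FORCE NOT NULL applies to COPY FROM only):

    -- CSV mode: HEADER, QUOTE, ESCAPE, FORCE QUOTE all apply
    COPY mytable TO '/tmp/mytable.csv' WITH CSV HEADER QUOTE '"' FORCE QUOTE name;

    -- text mode: only DELIMITER and NULL, e.g. tab-delimited
    COPY mytable TO '/tmp/mytable.txt' WITH DELIMITER E'\t' NULL '\N';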
> In my database, I have a core table that nearly all other tables
> key against. Now I need to adjust all of those foreign keys to
> add a "on update cascade" action. Is there a way to alter the
> existing keys? (it didn't jump out at me in the manual)
>
Would it be possible to modify confupdt
--- On Tue, 1/22/08, Adam Rich <[EMAIL PROTECTED]> wrote:
> Is there a way to alter the
> existing keys? (it didn't jump out at me in the manual)
ALTER TABLE your_table
DROP CONSTRAINT your_column_fkey_constraint,
ADD CONSTRAINT your_column_fkey_constraint
FOREIGN KEY (your_column) REFERENCES parent_table (parent_id)
ON UPDATE CASCADE;
On Jan 22, 2008, at 1:11 PM, Adam Rich wrote:
In my database, I have a core table that nearly all other tables
key against. Now I need to adjust all of those foreign keys to
add a "on update cascade" action. Is there a way to alter the
existing keys? (it didn't jump out at me in the manual)
Dear all,
I have created a group for PostgreSQL professionals at LinkedIn.com
Feel free to join if you like.
http://www.linkedin.com/e/gis/51776/760A11717C03
Regards,
Gevik Babakhani
PostgreSQL NL http://www.postgresql.nl
TrueSoftware BV
In my database, I have a core table that nearly all other tables
key against. Now I need to adjust all of those foreign keys to
add a "on update cascade" action. Is there a way to alter the
existing keys? (it didn't jump out at me in the manual)
If not, is there a serious issue preventing this
Pavel Stehule wrote:
> ...
the bottleneck is in the repeated assignment s := s || ..
I will try this trick:
create or replace function list(int)
returns varchar as $$
begin
  return array_to_string(array(select '' || i || ''
                                 from generate_series(1, $1) g(i)), '');
end$$ language plpgsql immutable;
test
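For contrast, the repeated-assignment version being replaced would look
something like this sketch; each s := s || ... copies the whole string so
far, which is what makes it slow for large inputs:

    create or replace function list_loop(n int)
    returns varchar as $$
    declare
      s varchar := '';
    begin
      for i in 1 .. n loop
        s := s || i;  -- re-copies s on every iteration
      end loop;
      return s;
    end$$ language plpgsql immutable;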
>
> But you're right. With the combined index I can set the granularity
> back to 1000, and empty queries as well as non-empty queries perform
> well. The row estimate is still way off, though.
Bigger values --> slower ANALYZE. The real maximum is about 200-300, so be careful.
Regards
Pavel
---
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Alexander Staubo wrote:
> > On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
> >> Although the row-estimate still seems quite high. You might want to
> >> increase it even further (maximum is 1000). If this is a common query,
> >> I'd loo
cinu wrote:
Hi All,
I was running the run_Build.pl script that is specific
to Buildfarm and encountered errors. I am listing out
the names of the logfiles and the errors that I have
seen.
Can anyone give me some clarity on these errors?
Even though these errors exist, at the end the
latest version is
Alexander Staubo wrote:
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
Although the row-estimate still seems quite high. You might want to
increase it even further (maximum is 1000). If this is a common query,
I'd look at an index on (user,id) rather than just (user) perhaps.
Actually t
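The combined index under discussion would be created along these lines
(column names taken from the EXPLAIN output shown elsewhere in the thread):

    create index user_messages_user_id_id_idx
        on user_messages (user_id, id);

With this, max(id) for a single user can be answered from the index directly
instead of walking the primary key backwards.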
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Although the row-estimate still seems quite high. You might want to
> increase it even further (maximum is 1000). If this is a common query,
> I'd look at an index on (user,id) rather than just (user) perhaps.
Actually that index (with the sa
Alexander Staubo wrote:
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
Then see if an ALTER TABLE SET
STATISTICS 100 makes a difference.
So it does:
# explain analyze select max(id) from user_messages where user_id = 13604;
QUERY PLAN
--
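Spelled out, the statistics change being tested is per-column (assuming the
skewed column is user_id):

    alter table user_messages alter column user_id set statistics 100;
    analyze user_messages;

The new target only affects plans after the ANALYZE has run.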
Great, this does the trick, thanks!!
Um... somevalue+random() is a simplified version of what I really wanted to
do; I just wanted the general idea of what the query would look like.
2008/1/21, Andrei Kovalevski <[EMAIL PROTECTED]>:
>
> May be this is what you need:
>
> select
> test.uid, coa
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Hmm, but with an estimated cost of 3646 (vs.633 for the max(*) which
> uses the wrong index). That explains why it's walking backwards through
> the pkey index, it thinks that it's 8 times cheaper.
[...]
> Have a look at most_common_vals,most_
On Jan 18, 2008 4:14 AM, Dorren <[EMAIL PROTECTED]> wrote:
> Terabytes of data: this is a lot of Oracle data to migrate. You would
> need high-performance tools capable of handling a heterogeneous
> environment.
> People suggested links here, so I will add some that could be very
> appropriate for you
Alexander Staubo wrote:
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
Alexander Staubo wrote:
# explain analyze select max(id) from user_messages where user_id = 13604;
QUERY PLAN
On 1/22/08, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Alexander Staubo wrote:
> > # explain analyze select max(id) from user_messages where user_id = 13604;
> >
> > QUERY PLAN
> >
> > ---
Alexander Staubo wrote:
# explain analyze select max(id) from user_messages where user_id = 13604;
QUERY PLAN
--
Result (cost=633
Hi All,
I was running the run_Build.pl script that is specific
to Buildfarm and encountered errors. I am listing out
the names of the logfiles and the errors that I have
seen.
Can anyone give me some clarity on these errors?
Even though these errors exist, at the end the
latest version is
This is on a fresh pg_restore copy that I have additionally vacuumed
and analyzed. These queries, on a table containing 2.8 million rows,
are very fast:
# select count(*) from user_messages where user_id = 13604;
count
---
0
(1 row)
Time: 0.604 ms
# select * from user_messages where use
>
> Yep, the more I read, the more I get confused.
> Java loading overhead is a common myth (I can't say if true or false),
> and what Tom writes above can find a tentative place in my mind.
> But still then I can't understand where plsql should or shouldn't be
> used.
>
> I really would enjoy to s
On Jan 22, 2008 2:24 AM, Ivan Sergio Borgonovo <[EMAIL PROTECTED]> wrote:
> > > I doubt that what you were measuring there was either procedure
> > > call overhead or java computational speed; more likely it was the
> > > cost of calling back out of java, through pl/java's JDBC
> > > emulation, dow
Christian Schröder wrote:
> Indeed, you are right! Granting select permission to the "ts_frontend"
> user (more precisely: granting membership to the "zert_readers" role)
> solved the problem.
>
>> This is strange because ts_frontend can select from "EDITORS" because
>> of the membership to role
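In SQL, the two routes being compared are role membership versus a direct
grant (role and table names as used in the thread):

    grant zert_readers to ts_frontend;   -- membership in the privileged role
    -- or, equivalently for this one table:
    grant select on "EDITORS" to ts_frontend;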
That's because the definitions of the functions are not stored in the
schema.
Functions are stored in the pg_catalog schema, in the pg_proc table, and
view definitions are accessible via the multiple pg_get_viewdef functions
in the same schema.
If you block access to that table and function, then pgadmin
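For example, both kinds of definition can be read back with queries like
these (hypothetical object names):

    select prosrc
      from pg_catalog.pg_proc
     where proname = 'my_function';

    select pg_get_viewdef('my_view'::regclass, true);

which is why hiding them means restricting pg_catalog access rather than
access to the schema the objects live in.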