Excellent, thanks so much!
Mike
On Thu, May 10, 2012 at 9:38 PM, Christophe Pettus wrote:
>
> On May 10, 2012, at 9:16 PM, Mike Christensen wrote:
>> Am I missing something? Doesn't GRANT ALL mean that user can do
>> anything they want with objects in that schema, including inserts?
>
> No, it
On May 10, 2012, at 9:16 PM, Mike Christensen wrote:
> Am I missing something? Doesn't GRANT ALL mean that user can do
> anything they want with objects in that schema, including inserts?
No, it means that user has all privileges on the *schema itself*; the objects
inside of it have their own permissions.
I've created a schema called Indexer and a user called Indexer. I
then grant Indexer ALL on said schema:
GRANT ALL ON SCHEMA Indexer TO "Indexer";
Next, when I attempt to INSERT into Indexer.ParseErrors, I get a permission
denied error. However, if I specifically grant Indexer INSERT
permiss
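For reference, the distinction the reply above points out can be sketched like this (object names taken from the thread; the last two statements are optional conveniences, not from the thread):

```sql
-- Schema-level privileges (what GRANT ALL ON SCHEMA actually gives):
GRANT USAGE, CREATE ON SCHEMA Indexer TO "Indexer";

-- Table-level privileges must be granted separately:
GRANT INSERT ON Indexer.ParseErrors TO "Indexer";

-- Or cover every existing table in the schema at once (9.0+):
GRANT ALL ON ALL TABLES IN SCHEMA Indexer TO "Indexer";

-- And tables created in the schema later (9.0+):
ALTER DEFAULT PRIVILEGES IN SCHEMA Indexer
    GRANT ALL ON TABLES TO "Indexer";
```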
Evan Carroll writes:
> Also, here is a copy of the complete script -- including the citext
> creation statements from the dump, with the patch, with the bugged
> statements.
Well, if that's how you're creating citext, then yeah it's broken.
As of 9.1 the citext type needs to be created with the a
> Doesn't reproduce for me. I guess one question is how you are loading
> citext into the "fresh" database --- maybe you are inheriting a bum copy
> from template1?
That doesn't seem likely but it's possible. How can I tell?
Also, here is a copy of the complete script -- including the citext
cre
Evan Carroll writes:
> I've been able to reproduce it in a fresh database. This is a psql
> script; steps 2a and 2b will fail.
Doesn't reproduce for me. I guess one question is how you are loading
citext into the "fresh" database --- maybe you are inheriting a bum copy
from template1?
But anyway, look
I think I can best get around this, if I issue a
> CREATE EXTENSION citext;
And, then load the database with the result of pg_dump. It seems to be
working, but there are some citext related statements from the dump
that fail because the stuff is already there in the DB when you issue
the CREATE E
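A sketch of the workaround described above (database and file names are made up). The dump's own citext statements will then fail with "already exists" errors, which can be ignored or filtered out of the dump beforehand:

```shell
createdb dealerdb
psql -d dealerdb -c 'CREATE EXTENSION citext;'
# Restore the 8.4 dump; log the expected duplicate-object errors.
psql -d dealerdb -f dump.sql 2> restore-errors.log
```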
> Also see whether you can reproduce the error in a fresh database.
> I continue to think the problem is an incorrect collation value in
> some system catalog entry; if that's it, nobody will be able to
> reproduce it. You might try checking to see that there are no
> un-updated rows matching thos
Evan Carroll writes:
> BTW, The database version is 9.1.3. I'll try and work on a test that
> generates this same error, not exactly sure why it is getting
> generated though.
Also see whether you can reproduce the error in a fresh database.
I continue to think the problem is an incorrect collati
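One hypothetical way to look for the bad catalog entries Tom suspects is to scan pg_attribute for collation OIDs that no longer resolve; this query is an illustration, not from the thread:

```sql
SELECT c.relname, a.attname, a.attcollation
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
WHERE a.attnum > 0
  AND NOT a.attisdropped
  AND a.attcollation <> 0
  AND NOT EXISTS (SELECT 1 FROM pg_collation co
                  WHERE co.oid = a.attcollation);
```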
I did that, and I had to do that to get the error I pasted. I am hit
by that bug; I get this error after I fix that one. Here is my post
about the issue that you just mentioned:
* http://dba.stackexchange.com/q/17604/2639
BTW, The database version is 9.1.3. I'll try and work on a test that
gene
oh I see what you mean; I definitely didn't know that, and it makes this
approach much more interesting now :)
thanks,
H
On 10/05/12 19:28, Guy Helmer wrote:
BTW, it's not a problem to query data across multiple days as long as you query
from the parent table -- PostgreSQL will use the child ta
Evan Carroll writes:
>> Could we see the complete context for this?
> Sure.
> dealermade=# CREATE OR REPLACE TEMP VIEW chrome_vinmatch_best_match AS
> dealermade-# SELECT DISTINCT ON (v.vin) v.vin, vd.*
> dealermade-# FROM inventory.view_in_stock_vehicles AS v
> dealermade-#
> Could we see the complete context for this?
Sure.
dealermade=# CREATE OR REPLACE TEMP VIEW chrome_vinmatch_best_match AS
dealermade-# SELECT DISTINCT ON (v.vin) v.vin, vd.*
dealermade-# FROM inventory.view_in_stock_vehicles AS v
dealermade-# JOIN chrome_vinmatch.view_vin_
On 5/10/2012 2:00 PM, Lee Hachadoorian wrote:
On Thu, May 10, 2012 at 2:42 PM, Andy Colson wrote:
On 5/10/2012 1:10 PM, Lee Hachadoorian wrote:
2) Is there a performance hit to doing a COPY to more than one table
in the same transaction?
No, I don't think so. I assume you are the only use
Hello All
Could you please help me? An index is not used when there is an OR
condition in the WHERE clause; the plan is always a sequential scan.
Thanks
Prashant
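Without the query and EXPLAIN output it is hard to diagnose, but when an OR across different columns defeats the indexes, a common rewrite is a UNION so each branch can use its own index (table and column names below are invented). Note that the planner can often combine two indexes itself with a BitmapOr, so check EXPLAIN first:

```sql
-- Original form: one scan has to satisfy both conditions.
SELECT * FROM orders WHERE customer_id = 42 OR product_id = 7;

-- UNION rewrite: each branch can use its own index;
-- UNION (not UNION ALL) deduplicates rows matching both conditions.
SELECT * FROM orders WHERE customer_id = 42
UNION
SELECT * FROM orders WHERE product_id = 7;
```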
Evan Carroll writes:
> This is a cross post from: http://dba.stackexchange.com/q/17609/2639
> Just recently I upgraded from Postgresql 8.4.11 to 9.1.3. I used
> pg_dump in the process. Now I'm getting an error:
>> ERROR: no collation was derived for column "vin" with collatable type citext
>> HI
On Thu, May 10, 2012 at 2:42 PM, Andy Colson wrote:
> On 5/10/2012 1:10 PM, Lee Hachadoorian wrote:
>>
>> 2) Is there a performance hit to doing a COPY to more than one table
>> in the same transaction?
>
>
> No, I don't think so. I assume you are the only user hitting the
> import_table, so hold
On 5/10/2012 1:10 PM, Lee Hachadoorian wrote:
Does anyone have experience or advice on how to efficiently issue a
large number of COPY statements? The data (US Census) comes in > 100
"segments" (each will be copied to its own database tables) for each
state (51), for a total of > 5000 text files.
On May 10, 2012, at 4:31 AM, Horaci Macias wrote:
> Hi everybody,
>
> I'm running postgres 9.1 and having disk space problems.
> My application captures information 24x7 and stores it into the database.
> This includes several bytea and can be ~5M entries a day, so the size can be
> an issue af
BTW, it's not a problem to query data across multiple days as long as you query
from the parent table -- PostgreSQL will use the child table constraints to
search all the child tables that could contain data.
Guy
On May 10, 2012, at 11:01 AM, Horaci Macias wrote:
> thanks Guy. I had thought ab
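The per-day child-table scheme being discussed can be sketched with 9.1-era table inheritance (all names here are hypothetical):

```sql
CREATE TABLE captures (
    captured_at timestamptz NOT NULL,
    payload     bytea
);

-- One child table per day; the CHECK constraint lets constraint
-- exclusion skip irrelevant days when querying the parent.
CREATE TABLE captures_2012_05_10 (
    CHECK (captured_at >= '2012-05-10'
       AND captured_at <  '2012-05-11')
) INHERITS (captures);

-- Dropping a whole day returns its disk space immediately,
-- unlike DELETE followed by VACUUM:
DROP TABLE captures_2012_05_10;
```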
Does anyone have experience or advice on how to efficiently issue a
large number of COPY statements? The data (US Census) comes in > 100
"segments" (each will be copied to its own database tables) for each
state (51), for a total of > 5000 text files. I can generate the COPY
statements with a scrip
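One way to script the ~5000 loads (the file layout and table names below are assumptions): generate one \copy line per file, then run them all in a single psql session and a single transaction:

```shell
# Assumed layout: data/<state>/<segment>.txt
for f in data/*/*.txt; do
    seg=$(basename "$f" .txt)
    echo "\\copy census_${seg} FROM '$f' WITH (FORMAT text)"
done > load.sql

psql -d census --single-transaction -f load.sql
```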
This is a cross post from: http://dba.stackexchange.com/q/17609/2639
Just recently I upgraded from Postgresql 8.4.11 to 9.1.3. I used
pg_dump in the process. Now I'm getting an error:
> ERROR: no collation was derived for column "vin" with collatable type citext
> HINT: Use the COLLATE clause t
Hello,
After a hardware reboot, a hot standby server fails to start.
The log mentions pg_subtrans files. While the machine got a hard reboot,
underlying filesystem is ZFS which did not detect or report any corruption.
For the first restart after the reboot, the server (then 9.1.0) complained tha
thanks Guy. I had thought about using per-day tables (although I didn't
know about child tables) but my problem is that some entries are related
and they can span several minutes, so my worry is that I end up not
finding all the right entries when I search for entries that happen
close to the e
thanks Laurenz, I'll keep an eye on pg_stat_all_tables. I checked just
now and apparently autovacuum has been run, but this is after my recent
upgrade to 9.1 from 8.3 (I upgraded hoping this problem would go away
and so far the application hasn't been running for long enough for me to
confirm w
Horaci Macias wrote:
> I'm running postgres 9.1 and having disk space problems.
> My application captures information 24x7 and stores it into the
> database. This includes several bytea and can be ~5M entries a day, so
> the size can be an issue after several days.
> My application also cleans up e
Doing the same tests from psql gives:
1. ~2.5 seconds for INSERT/VALUES
2. ~10 seconds for prepared statement executes
3. ~15 seconds for multiple INSERTs
Dan.
On Thu, May 10, 2012 at 3:42 PM, Tom Lane wrote:
> Alban Hertroys writes:
> > On 10 May 2012 15:05, Radosław Smogura wrote:
Alban Hertroys writes:
> On 10 May 2012 15:05, Radosław Smogura wrote:
>> May I ask what kind of planning may occur during insert?
> Well, for example, if there's a unique constraint on the table then
> the database will have to check that the newly inserted values don't
> conflict with values
On 10 May 2012 15:05, Radosław Smogura wrote:
> May I ask what kind of planning may occur during insert?
Well, for example, if there's a unique constraint on the table then
the database will have to check that the newly inserted values don't
conflict with values that are already in the table. It
On Thu, May 10, 2012 at 6:52 AM, Alban Hertroys wrote:
> On 10 May 2012 11:30, Daniel McGreal wrote:
>> I put the multi-value inserts in as I was
>> curious as to why prepared statements would be slower given they only plan
>> the query once (as also does the multi-value insert, I assume).
>
> Th
On Thu, 10 May 2012 13:52:29 +0200, Alban Hertroys wrote:
On 10 May 2012 11:30, Daniel McGreal
wrote:
I put the multi-value inserts in as I was
curious as to why prepared statements would be slower given they
only plan
the query once (as also does the multi-value insert, I assume).
That's a
On 10 May 2012 11:30, Daniel McGreal wrote:
> I put the multi-value inserts in as I was
> curious as to why prepared statements would be slower given they only plan
> the query once (as also does the multi-value insert, I assume).
That's a common misconception.
The reason that prepared statement
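For reference, the prepared-statement pattern being benchmarked looks like this (the column types are guessed from the sample values elsewhere in the thread):

```sql
PREPARE ins (date, boolean, text, int, numeric) AS
    INSERT INTO test (one, two, three, four, five)
    VALUES ($1, $2, $3, $4, $5);

-- Repeated 100'000 times; each EXECUTE is still a separate
-- statement and round trip, which is why it can lose to a
-- single multi-value INSERT or to COPY.
EXECUTE ins ('2011-01-01', true, 'three', 4, 5.5);
```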
Great Job!!! ;)
2012/5/9 Simon Riggs :
> On 9 May 2012 13:02, PostgreSQL Magazine wrote:
>> Dear PostgreSQL users,
>>
>> I am very pleased to announce the release of the first issue of
>> PostgreSQL Magazine.
>>
>> This issue #01 is brought to you thanks to the collective work of
>> dozens of peop
Hi everybody,
I'm running postgres 9.1 and having disk space problems.
My application captures information 24x7 and stores it into the
database. This includes several bytea and can be ~5M entries a day, so
the size can be an issue after several days.
My application also cleans up entries older
Hi,
Unfortunately these are experimental conditions. The conditions surrounding
the intended application are such that my two options are prepared
statements or many inserts. I put the multi-value inserts in as I was
curious as to why prepared statements would be slower given they only plan
the qu
Hello
2012/5/10 Daniel McGreal :
> Hi again,
>
> I did a follow up test using 'multi-value' inserts which is three times
> faster than multiple inserts thusly:
>
if you need speed, use a COPY statement - it should be 10x faster than INSERTS
Pavel
>
> TRUNCATE test;
> BEGIN;
> INSERT INTO test (
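Expanding on Pavel's COPY suggestion (the column list is taken from the test table used earlier in the thread):

```sql
-- Server-side, reading rows from the client stream:
COPY test (one, two, three, four, five) FROM STDIN WITH (FORMAT csv);
-- Or client-side from psql:
-- \copy test (one, two, three, four, five) FROM 'rows.csv' WITH (FORMAT csv)
```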
Hi again,
I did a follow up test using 'multi-value' inserts which is three times
faster than multiple inserts thusly:
TRUNCATE test;
BEGIN;
INSERT INTO test (one, two, three, four, five) VALUES ('2011-01-01', true,
'three', 4, 5.5)
,('2011-01-01', true, 'three', 4, 5.5)
-- 99'998 more , ('2011-0
Hi!
My reading to date suggests that prepared statements should be faster to
execute than issuing the same statement multiple times. However, issuing
100'000 INSERTs turned out to be more than ten times faster than executing
the same prepared statement 100'000 times when executed via pgAdmin. The
Randy Johnson wrote:
> in the config file for 7.4 we have an entry:
>
> shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
>
> in 9.1 the default is:
>
> shared_buffers = 32MB
>
>
> max connections is the default 100
>
> Do I need to make any adjustments or can I leave it at
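For scale: the old 7.4 setting of shared_buffers = 1000 is 1000 × 8 kB ≈ 8 MB, so even the 9.1 default of 32 MB is conservative. A common starting point on a dedicated server is roughly 25% of RAM; the value below is only an illustration, assuming a 4 GB machine:

```ini
# postgresql.conf (9.1)
shared_buffers = 1GB    # ~25% of RAM; requires a server restart
```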