On 12/10/2013 01:45, BladeOfLight16 wrote:
On Fri, Oct 11, 2013 at 9:32 AM, Jesse Long <j...@unknown.za.net> wrote:
explain select * from archive_document_index where node_id = 29 and value = 'BSH70002152
On 09/10/2013 18:06, Tom Lane wrote:
Jesse Long writes:
The query runs for much longer than I expect it to, and I think
this is due to it using the incorrect subplan. As you can see, subplans
1 and 3 make use of an index, but these subplans are not used.
Subplans 2 and 4 are seqscans.
On 09/10/2013 15:20, Albe Laurenz wrote:
Jesse Long wrote:
There is no problem with row visibility, there is only one connection to
the database - the connection I am using to do these selects.
No idea why the plans cannot be used.
It might be helpful to see the table and index definitions.
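For example, psql's \d command shows the columns and all indexes in one shot (table name taken from the query at the top of this digest):

testdb=> \d archive_document_index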
On 09/10/2013 12:10, Albe Laurenz wrote:
Jesse Long wrote:
I have the following query, run immediately after executing VACUUM in
the database. There is only one connection to the database.
You should run ANALYZE, not VACUUM.
The query runs for much longer than I expect it to, and I think this
is due to it using the incorrect subplan. As you can see, subplans
1 and 3 make use of an index, but these subplans are not used.
Subplans 2 and 4 are seqscans, and they are used.
How can I get PostgreSQL to use subplans 1 and 3?
Thanks,
Jesse
testdb=> explain analyse SELECT * FROM ARCHIVE_DOCUMENT AS r0 WHERE r0.NODE_ID = 29 AND r0.ARCHIVE_D
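Following Albe's suggestion above, the first thing to try is refreshing the planner statistics and re-reading the plan; a plain VACUUM does not update statistics. A minimal sketch against the index table from the first message (the closing quote on the value literal is assumed, since the snippet cut it off):

ANALYZE archive_document_index;            -- refresh planner statistics
EXPLAIN ANALYZE
SELECT * FROM archive_document_index
WHERE node_id = 29 AND value = 'BSH70002152';

With fresh statistics the planner can cost the index subplans realistically and will pick them whenever they are in fact cheaper than the seqscans.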
Does the file actually exist in the pg_clog directory?
ERROR: could not access status of transaction 3101708884
DETAIL: Could not open file pg_clog/0B8E: No such file or directory.
pg_dump: The command was: COPY blob_store.blobs (blob_id, mime_type,
binary_data, create_ts) TO stdout;
Command used, pretty standard:
pg_dump -Fc -f
TIA,
Jesse Waters
processing the request.
2012-10-31 00:57:02 UTC johndoe 27599 192.168.1.30(46426) database[main] LOG: duration: 65886.667 ms statement: select...
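For context, a duration line like that one is emitted when a statement runs longer than the log_min_duration_statement setting; for example, in postgresql.conf (the threshold here is made up for illustration):

log_min_duration_statement = 60000    # log any statement taking over 60 s (value in ms)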
### Replication Config
replication_timeout = 5s
wal_receiver_status_interval = 2s
Thanks, Jesse Hathaway
On May 4, 9:27 am, DaNieL wrote:
> Sorry, I know this is maybe a basic problem, but I come from
> MySQL... and in MySQL that query works...
If there's only one name per order, just put a MIN or MAX around the
second column. As you know by now, all columns that are not GROUPed BY
must be in an aggregate.
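A sketch of that fix, against a made-up order_items table where each order has exactly one customer name:

SELECT order_id,
       MAX(customer_name) AS customer_name,  -- unique per order, so MAX just returns it
       SUM(quantity)      AS total_quantity
FROM order_items
GROUP BY order_id;

PostgreSQL rejects a bare, non-aggregated customer_name in the select list; MySQL historically let it through and returned an arbitrary value per group.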
"C:/Program Files": File exists
initdb: removing contents of data directory "C:/Program
Files/PostgreSQL/8.2/data"
Any suggestions?
Thanks!
Jesse Johnson
<http://www.bnymellon.com>
Asset Servicing - Information Technology
500 Grant St, Pittsburgh, PA 152
CONTEXT: COPY logs, line 69238382
WARNING: errors ignored on restore: 1
Jesse Waters
HINT: This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
CONTEXT: COPY logs, line 69238382
UTF8 to UTF8 and everything in between is UTF8, and it still fails.
I'm tempted to delete this record and keep going, but I'm de
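For what it's worth, both sides of that check can be inspected and adjusted per session:

SHOW server_encoding;            -- what the database stores
SHOW client_encoding;            -- what the server expects the incoming bytes to be
SET client_encoding = 'UTF8';    -- override for the current session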
You can use pg_dump.exe to generate DDL in Postgres.
see: http://www.postgresql.org/docs/8.2/interactive/app-pgdump.html
also you might check out the app in my sig for a tool that generates
full reports/documentation about any pg database.
hth, Jesse
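For example, a schema-only dump emits plain DDL (the database name here is made up):

pg_dump --schema-only --no-owner mydb > mydb-ddl.sql

--schema-only skips the table data, and --no-owner drops the ownership commands, which makes the DDL easier to replay on another server.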
Tom Lane wrote:
Jesse Cleary <[EMAIL PROTECTED]> writes:
This script has been running successfully for several months (70-90 min each
night). Out of the blue I'm now getting the following error message after each
psql command, except the last vacuum full analyze command:
ERROR:
similar nightly
maintenance routines and have not reported any similar issues. I'm having a hard
time finding others who have run into this problem. Any advice is greatly
appreciated - Thanks!
Postgres 8.0.8 with PostGIS 1.0.4 on FC 4
Jesse
--
Jesse Cleary
Department of
Hope this is the correct group.
I am running 8.0.1 on XP (P4 2.6 GHz, 1 GB RAM) for dev work,
and the following simple query takes 5+ minutes
to complete. I would like to get this down to under 2-3 seconds.
Other RDBMSs complete it in under 1 second.
select i.internalid, c.code
from local.internal i
inner join country.ip
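The usual first step with a slow query is to run it under EXPLAIN ANALYZE and look for misestimated row counts or full scans. A sketch only, since the original query was cut off; the join condition below is invented:

EXPLAIN ANALYZE
SELECT i.internalid, c.code
FROM local.internal i
INNER JOIN country.ip c ON i.ip_addr BETWEEN c.ip_from AND c.ip_to;

If the join really is a range lookup like this, a plain btree index can only satisfy one side of the bound, which would go a long way toward explaining the minutes-long runtime.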
thing like this, it would be easier and faster to use
them. Will SELECT ... FOR UPDATE help me out here? Is there a way to
check and see if a row is locked and ignore locked rows in another SELECT
... FOR UPDATE? Any input on a better way to handle this would also be
appreciated.
Thanks for the time. :)
-Jesse
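On PostgreSQL 9.5 and later, SELECT ... FOR UPDATE SKIP LOCKED does exactly this: it locks the rows it returns and silently skips rows that some other transaction has already locked. A sketch against a made-up job_queue table:

BEGIN;
SELECT id
FROM job_queue
WHERE status = 'pending'
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- ...process the claimed row, mark it done...
COMMIT;

On older releases the common workaround was advisory locks (pg_try_advisory_lock), which likewise let a second worker step past rows a first worker is holding.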
Why is text the recommended character data type?
Is it accessed faster than the other data types?
Jesse
--
Jesse Estevez
Director of E-Commerce
[EMAIL PROTECTED]
Travel-Italy.com
2400 Bluff Creek Drive
Columbia, Missouri 65201
USA
Toll Free
t and tried the suggestions in the readme.RPM
file but I couldn't get them to work.
Thanks :)
-Jesse
by a very large software
publisher... no, not Microsoft) and he expressed some mild interest in
working on a port if it was feasible, so any input on this subject would be
appreciated as I might be able to convince him to work on it. :)
Thanks in advance!
-Jesse
Can you do:
DROP SEQUENCE people_id_seq;
CREATE SEQUENCE people_id_seq START 44;
?
jesse kipp
"Jason C. Wells" wrote:
>
> sequence_name|last_value|increment_by|max_value|min_value|cache_value|
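If the goal is just to move the counter, the sequence does not need to be dropped at all:

SELECT setval('people_id_seq', 44, false);   -- next nextval() returns 44
-- or, on later releases:
ALTER SEQUENCE people_id_seq RESTART WITH 44;

Note the third setval argument: with false, the next nextval() call returns 44 itself rather than 45.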
> operations being performed are inserts?
>
What indexes do you have on the table? Having lots of indexes can slow
down inserts but speed up queries. If you don't have any indexes, a
query against a 180 MB table could take a while, particularly if you
have less than 180 MB of RAM.
jesse
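A sketch of that trade-off, with an invented table and column:

-- Speeds up queries that filter on created_at, at a small extra cost per insert:
CREATE INDEX events_created_at_idx ON events (created_at);

Each index is one more structure every INSERT has to update, so index only the columns your queries actually filter or join on.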