Unfortunately it's now impossible to say how many were updated, as they get
deleted by another process later. I may be able to restore part of a dump
from 2 days ago on another machine, and get some counts from that, assuming
I have the disk space. I'll work on that.
I do not believe there could
Gordon Shannon writes:
> I assume you can now see the plan? I uploaded it twice, once via gmail and
> once on Nabble.
Yeah, the Nabble one works. Now I'm even more confused, because the
whole-row var seems to be coming from the outside of the nestloop, which
is about the simplest possible case.
The number of matching rows on these queries is anything from 0 to 1. I
don't think I can tell how many would have matched on the ones that
crashed. Although I suspect it would have been toward the 1 end. I've
been trying to get a reproducible test case, with no luck so far.
I assume you can now see the plan? I uploaded it twice, once via gmail and
once on Nabble.
I wrote:
> The odds seem pretty good that the "corrupt compressed data" message
> has the same origin at bottom, although the lack of any obvious data
> to be compressed in this table is confusing. Maybe you could get that
> from trying to copy over a garbage value of that one varchar column,
> th
Maybe it doesn't work from gmail. I'll try uploading from here.
http://postgresql.1045698.n5.nabble.com/file/n3323933/plan.txt plan.txt
On Friday 31 December 2010 9:06:19 am Håvard Wahl Kongsgård wrote:
> Well, I created the SQL files from multiple shapefiles. I used shp2pgsql
> (PostGIS 1.5) to generate the SQL dumps.
>
And the shp2pgsql docs say:
"Appends data from the Shape file into the database table. Note that to use
this
On 2010-12-31, Håvard Wahl Kongsgård wrote:
> Hi,
> I am trying to insert new records from multiple SQL dumps into an existing
> table. My problem is that the database table does not have some of the
> columns used in the SQL dumps.
Gordon Shannon writes:
> Enclosed is the query plan -- 21000 lines
Um ... nope?
regards, tom lane
Yes, that query does take 30 or 90 seconds. I'm pretty sure it was blocking on
its twin update running concurrently. However, I'm not really sure how to
identify what "transaction 1283585646" was.
Enclosed is the query plan -- 21000 lines
-gordon
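A hedged aside on the "transaction 1283585646" question above: while such a wait
is actually happening, the blocked UPDATE shows up in pg_locks as an ungranted
lock on the other transaction's ID, so something like the following (column
names are the 9.0-era ones) can pair the waiter with the holder:

-- who is waiting on whose transaction ID
SELECT waiter.pid AS waiting_pid,
       holder.pid AS blocking_pid,
       waiter.transactionid
FROM pg_locks waiter
JOIN pg_locks holder
  ON holder.transactionid = waiter.transactionid
 AND holder.granted
WHERE waiter.locktype = 'transactionid'
  AND NOT waiter.granted;

-- and what each backend is running (pre-9.2 column names)
SELECT procpid, current_query FROM pg_stat_activity;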
I tried to replicate the problem here without success
Gordon Shannon writes:
> Sorry, I left that out. Yeah, I wondered that too, since these tables do
> not use toast.
Hm. Well, given that the stack trace suggests we're trying to access a
tuple value that's not there (bogus pointer, or data overwritten since
the pointer was created), the "invalid
Sorry, I left that out. Yeah, I wondered that too, since these tables do
not use toast.
CREATE TYPE message_status_enum AS ENUM ( 'V', 'X', 'S', 'R', 'U', 'D' );
On Fri, Dec 31, 2010 at 12:38 PM, Tom Lane-2 [via PostgreSQL]
<ml-node+3323859-1425181809-56...@n5.nabble.com> wrote:
> Hmmm ...
bricklen writes:
> On Wed, Dec 29, 2010 at 1:53 PM, bricklen wrote:
>> On Wed, Dec 29, 2010 at 12:11 PM, Tom Lane wrote:
>>> You did something on the source DB that rewrote the table with a new
>>> relfilenode (possibly CLUSTER or some form of ALTER TABLE; plain VACUUM
>>> or ANALYZE wouldn't do
Gordon Shannon writes:
> Here is the ddl for the tables in question. There are foreign keys to other
> tables that I omitted.
> http://postgresql.1045698.n5.nabble.com/file/n3323804/parts.sql parts.sql
Hmmm ... what is "message_status_enum"? Is that an actual enum type, or
some kind of domain
On Fri, Dec 31, 2010 at 1:13 PM, bricklen wrote:
> On Wed, Dec 29, 2010 at 1:53 PM, bricklen wrote:
>> On Wed, Dec 29, 2010 at 12:11 PM, Tom Lane wrote:
>>>
>>> The difference in ctid, and the values of xmin and relfrozenxid,
>>> seems to confirm my suspicion that this wasn't just random cosmic rays.
On Wed, Dec 29, 2010 at 1:53 PM, bricklen wrote:
> On Wed, Dec 29, 2010 at 12:11 PM, Tom Lane wrote:
>>
>> The difference in ctid, and the values of xmin and relfrozenxid,
>> seems to confirm my suspicion that this wasn't just random cosmic rays.
>> You did something on the source DB that rewrote
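A hedged aside on the columns named above, with 'mytable' as a placeholder: the
values can be inspected directly, which is one way to tell whether a table file
was rewritten.

-- system columns on the rows themselves
SELECT ctid, xmin FROM mytable LIMIT 5;

-- catalog fields for the table; relfilenode changes when the table is
-- rewritten (e.g. by CLUSTER or a table-rewriting form of ALTER TABLE)
SELECT relname, relfilenode, relfrozenxid
FROM pg_class
WHERE relname = 'mytable';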
Hi,
Can I ask for binary in/out to be implemented at least for the following types:
void - useful when using binary mode and the procedure returns void
acl - so that the role's/user's name is visible in the binary stream, and there
is no need to re-query for the user's name by OID.
Kind regards,
Ra
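A hedged illustration of the extra round trip the acl request is about: OIDs
pulled out of an acl value today still have to be resolved to role names with a
separate query, for example (16384 is a made-up OID):

SELECT pg_get_userbyid(16384);

-- or via the catalog directly
SELECT rolname FROM pg_roles WHERE oid = 16384;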
I decided last night to rename my 'public' schema (not sure if that's
a good or bad idea) since I'm still learning about how schemas work in
PostgreSQL. My question is:
1. If I have a constraint (specifically 'unique') on a specific table,
when I rename the public schema, does that impact my 'unique' constraint?
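A hedged sketch of what the rename involves, with made-up object names: the
constraint and its backing index belong to the table, not to the schema name,
so they survive the rename untouched; only references to the old name
(search_path settings, schema-qualified SQL in functions or applications)
need updating.

-- made-up example: a table with a named unique constraint in public
CREATE TABLE public.customers (
    email text CONSTRAINT customers_email_uniq UNIQUE
);

ALTER SCHEMA public RENAME TO app;

-- the constraint followed the table automatically
SET search_path TO app;
SELECT conname
FROM pg_constraint
WHERE conrelid = 'app.customers'::regclass;   -- customers_email_uniq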
Here is the ddl for the tables in question. There are foreign keys to other
tables that I omitted.
http://postgresql.1045698.n5.nabble.com/file/n3323804/parts.sql parts.sql
Interesting. That's exactly what we have been doing -- trying to update the
same rows in multiple txns. For us to proceed in production, I will take
steps to ensure we stop doing that, as it's just an app bug really.
The table in question -- v_messages -- is an empty base table with 76
partitions
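For readers unfamiliar with that layout, a minimal hedged sketch of a
pre-9.1-style inheritance partitioning setup (the column names and the check
constraint are invented): the parent stays empty and the children hold the rows.

CREATE TABLE v_messages (
    id     bigint,
    status message_status_enum,
    body   text
);

-- one of the child partitions; the real table has 76 of these
CREATE TABLE v_messages_p001 (
    CHECK (id BETWEEN 1 AND 1000000)
) INHERITS (v_messages);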
Well, I created the SQL files from multiple shapefiles. I used shp2pgsql
(PostGIS 1.5) to generate the SQL dumps.
On Fri, Dec 31, 2010 at 5:00 PM, Vick Khera wrote:
> 2010/12/31 Håvard Wahl Kongsgård :
> > Is it possible to override the default psql behavior, so that the SQL
> > session simply ignores any missing fields?
2010/12/31 Håvard Wahl Kongsgård :
> Is it possible to override the default psql behavior, so that the SQL
> session simply ignores any missing fields?
>
Do you still have the original database? Re-run your table exports
without the unneeded columns using COPY, then import those outputs
instead.
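A hedged sketch of that suggestion, with placeholder table and column names:
export only the columns the target table actually has, then load that file.

-- on the source database, from psql
\copy (SELECT gid, name, the_geom FROM roads) TO 'roads_subset.tsv'

-- on the target database, naming the columns explicitly
\copy roads (gid, name, the_geom) FROM 'roads_subset.tsv'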
Gordon Shannon writes:
> Stack trace:
> #0 0x0031a147c15c in memcpy () from /lib64/libc.so.6
> #1 0x00450cb8 in __memcpy_ichk (tuple=0x7fffb29ac900) at
> /usr/include/bits/string3.h:51
> #2 heap_copytuple (tuple=0x7fffb29ac900) at heaptuple.c:592
> #3 0x00543d4c in EvalPla
On Fri, Dec 31, 2010 at 09:44:43AM +0530, tamanna madaan wrote:
> However, this is not a very long-running query.
> This was supposed to update only one row in a table.
That it's supposed to update only one row does not mean it wasn't a
very long-running query.
> Moreover, it can't be waiting for a lock as other processes were able to
> update the same table at the same time.
Hey gvim,
2010/12/30 gvim
> Is it possible, with PostgreSQL 9.0, to restrict access to specific table
> rows by `id`? I want a user to be able to INSERT new rows but not UPDATE or
> DELETE rows with `id` < 1616.
>
I believe that first you need to restrict SELECT. You can do that by creating a
view:
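The original reply is truncated above; a hedged sketch of the general shape of
such a setup, with placeholder names (not the original author's code), might
look like the following. Note that on 9.0 a plain view is not automatically
updatable, so the INSERT path would still need a rewrite rule or, from 9.1 on,
an INSTEAD OF trigger.

REVOKE ALL ON mytable FROM app_user;

-- expose only the rows the user may touch; 1616 comes from the question
CREATE VIEW mytable_editable AS
    SELECT * FROM mytable WHERE id >= 1616;

GRANT SELECT, INSERT ON mytable_editable TO app_user;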
Hi,
I am trying to insert new records from multiple SQL dumps into an existing
table. My problem is that the database table does not have some of the
columns used in the sql dumps. So when I try to import the dumps psql fails
with: "ERROR: current transaction is aborted, commands ignored until end
On Thursday, 30 December 2010 at 12:05 -0500, Andrew Sullivan wrote:
[about Abiword]
> It's intended as a word processor rather than a text
> editor, isn't it?
It works with text files too. It's not a problem.
--
Vincent Veyron
http://marica.fr/
Litigation case management software
On 31 Dec 2010, at 5:14, tamanna madaan wrote:
> Moreover, it can't be waiting for a lock as
> other processes were able to update the same table at the same time.
That only means it wasn't waiting on a table-level lock, occurrences of which are
quite rare in Postgres. But if, for example, another upd
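A hedged illustration of the row-level case being described, with a made-up
table and values: the second session blocks on the first transaction's ID until
it commits or rolls back.

-- session 1
BEGIN;
UPDATE t SET val = val + 1 WHERE id = 1;   -- takes a row lock, held until COMMIT

-- session 2, same row: blocks here until session 1 finishes
UPDATE t SET val = val + 1 WHERE id = 1;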