Thank you, Laurenz,
I replaced "pg_time_t" with "Timestamp", yet the result looks the same -
each call returns a random result.
2017-06-06 18:56 GMT+03:00 Albe Laurenz :
> Kouber Saparev wrote:
> > I am trying to write a function in C to return the log file name by
> given timestamp. I
> > will use
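A plausible cause of the "random result" above (an assumption, since the full function isn't shown): PostgreSQL's `Timestamp`/`TimestampTz` counts *microseconds since 2000-01-01*, while `pg_time_t`, like Unix `time_t`, counts *seconds since 1970-01-01*, so swapping one type for the other without converting the value yields garbage. In C the server sources provide `timestamptz_to_time_t()` for this conversion; the arithmetic can be sketched like so (Python used only for illustration, and the `log_filename` pattern is an assumed example, not read from anyone's postgresql.conf):

```python
from datetime import datetime, timezone

# PostgreSQL's Timestamp counts microseconds since 2000-01-01;
# pg_time_t (like Unix time_t) counts seconds since 1970-01-01.
PG_EPOCH_OFFSET_SECS = 946_684_800  # seconds from 1970-01-01 to 2000-01-01 UTC

def pg_timestamp_to_unix(ts_us: int) -> int:
    """Convert a PostgreSQL Timestamp (microseconds since 2000) to Unix seconds."""
    return ts_us // 1_000_000 + PG_EPOCH_OFFSET_SECS

def log_filename(ts_us: int, pattern: str = "postgresql-%Y-%m-%d_%H%M%S.log") -> str:
    """Format a log file name from a PostgreSQL Timestamp using an assumed
    strftime-style log_filename pattern."""
    t = datetime.fromtimestamp(pg_timestamp_to_unix(ts_us), tz=timezone.utc)
    return t.strftime(pattern)

# 2017-06-06 12:00:00 UTC expressed as a PostgreSQL Timestamp:
ts = (1_496_750_400 - PG_EPOCH_OFFSET_SECS) * 1_000_000
print(log_filename(ts))  # postgresql-2017-06-06_120000.log
```

If the conversion is skipped, the microsecond count is interpreted as seconds and every call lands on an essentially arbitrary date, which would match the symptom described.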
On Tuesday, June 6, 2017 10:32:16 PM EDT Patrick B wrote:
> Hi guys,
>
> I've got tableA with 3 columns.
>
> id (serial) | type (character varying(256)) | string (character varying(256))
>
> I have the type/string value stored in another table, and from that I would
> like to get the id.
>
> Examp
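The truncated example above appears to ask for a lookup of `tableA.id` by a `(type, string)` pair held in a second table. A minimal sketch of the join, assuming a second table named `tableB` with matching column names (the message doesn't name it); SQLite in-memory is used only to make the example self-contained, but the SELECT has the same shape in PostgreSQL:

```python
import sqlite3

# In-memory stand-in for the two tables described in the message.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableA (id INTEGER PRIMARY KEY, type TEXT, string TEXT);
    CREATE TABLE tableB (type TEXT, string TEXT);
    INSERT INTO tableA (id, type, string) VALUES (1, 'email', 'a@x.com'),
                                                 (2, 'phone', '555-0100');
    INSERT INTO tableB (type, string) VALUES ('phone', '555-0100');
""")

# Fetch tableA.id for every (type, string) pair stored in tableB.
rows = conn.execute("""
    SELECT a.id
    FROM tableA AS a
    JOIN tableB AS b ON b.type = a.type AND b.string = a.string
""").fetchall()
print(rows)  # [(2,)]
```

With many rows, an index on `tableA (type, string)` would keep this lookup from scanning the whole table.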
Our database has started reporting errors like this:
2017-05-31 13:48:10 CEST ERROR: unexpected chunk number 0 (expected 1) for
toast value 14242189 in pg_toast_10919630
...
2017-06-01 11:06:56 CEST ERROR: unexpected chunk number 0 (expected 1) for
toast value 19573520 in pg_toast_10
I'm running a Spark job that is writing to a postgres db (v9.6), using
the JDBC driver (v42.0.0), and running into a puzzling error:
2017-06-06 16:05:17.718 UTC [36661] dmx@dmx ERROR: deadlock detected
2017-06-06 16:05:17.718 UTC [36661] dmx@dmx DETAIL: Process 36661 waits
for ExclusiveLock o
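One common way concurrent batch writers deadlock like this (an assumption, since the full DETAIL is cut off): two batches update an overlapping set of rows in different orders, so each backend ends up waiting on a row lock the other already holds. Acquiring locks in one global order breaks the cycle; a sketch with Python threads standing in for the two backends:

```python
import threading

# Two locks stand in for row locks taken by two concurrent batch updates.
locks = {1: threading.Lock(), 2: threading.Lock()}

def run_batch(keys):
    # Taking row locks in a single global order (sorted keys) means no two
    # batches can each hold a lock the other is waiting for.
    for k in sorted(keys):
        locks[k].acquire()
    try:
        pass  # the UPDATE statements would run here
    finally:
        for k in sorted(keys, reverse=True):
            locks[k].release()

# Without the sort, batch [1, 2] racing batch [2, 1] can deadlock exactly
# as in the log above; with it, both threads finish.
t1 = threading.Thread(target=run_batch, args=([1, 2],))
t2 = threading.Thread(target=run_batch, args=([2, 1],))
t1.start(); t2.start(); t1.join(); t2.join()
print("both batches finished without deadlock")
```

In the JDBC/Spark setting the equivalent fix would be sorting each partition's rows by primary key before the batched UPDATEs, though whether that applies here depends on details the message doesn't show.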
On 07/06/2017 16:33, ADSJ (Adam Sjøgren) wrote:
Our database has started reporting errors like this:
2017-05-31 13:48:10 CEST ERROR: unexpected chunk number 0 (expected 1) for
toast value 14242189 in pg_toast_10919630
...
2017-06-01 11:06:56 CEST ERROR: unexpected chunk number 0
On Wed, Jun 7, 2017 at 9:16 AM, David Rosenstrauch wrote:
> I'm running a Spark job that is writing to a postgres db (v9.6), using the
> JDBC driver (v42.0.0), and running into a puzzling error:
>
> 2017-06-06 16:05:17.718 UTC [36661] dmx@dmx ERROR: deadlock detected
> 2017-06-06 16:05:17.718 UTC
Hi,
I too have been experiencing this with a busy PostgreSQL instance.
I have been following the updates to the 9.4 branch hoping a fix will appear,
but sadly no luck yet. I have manually replicated the issue on 9.4.4, 9.4.10
and 9.4.12. My replication steps are:
BEGIN;
CREATE TABLE x (id BIGS
Change the relfilenode in above from 13741353 to 5214493
*I've not changed it yet, but I will...*
select * from pg_class where reltoastrelid = 9277970
returns:
* oid | relname | relnamespace | reltype | reloftype | relowner |
relam | relfilenode | reltablespace | relpages | reltuples |
rel
On 07/06/2017 17:49, Harry Ambrose wrote:
Hi,
Out of interest, are you using any tablespaces other than pg_default? I can
only replicate the issue when using separately mounted tablespaces.
One lesson I learned from the BSD camp when dealing with random freezes and panics: when all else fails t
On 06/07/2017 10:32 AM, Merlin Moncure wrote:
On Wed, Jun 7, 2017 at 9:16 AM, David Rosenstrauch wrote:
* How could it be possible that there are 2 PG processes trying to acquire
the same lock? Spark's partitioning should ensure that all updates to the
same user record get routed to the same
Harry Ambrose writes:
> I have been following the updates to the 9.4 branch hoping a fix will appear,
> but sadly no luck yet. I have manually replicated the issue on 9.4.4, 9.4.10
> and 9.4.12. My replication steps are:
This is a very interesting report, but you didn't actually provide a
repro
Hi,
Thanks for the responses.
> "One lesson I learned from the BSD camp when dealing with random freezes and
> panics: when all else fails to give an answer it is time to start blaming my
> hardware. Are those tablespaces on any cheap SSDs?"
The tablespaces are not on SSDs. Something I
Harry Ambrose writes:
> Tom - I can provide a jar that I have been using to replicate the issue.
> Whats the best transport method to send it over?
If it's not enormous, just send it as an email attachment.
regards, tom lane
--
Sent via pgsql-general mailing list (pgs
Ken Tanzer writes:
>> FWIW, the business with making and editing a list file should work just
>> fine with a tar-format dump, not only with a custom-format dump. The
>> metadata is all there in either case.
> The pg_dump doc page kinda suggests but doesn't quite say that you can't
> re-order tar
On 06/07/2017 08:11 AM, David Rosenstrauch wrote:
On 06/07/2017 10:32 AM, Merlin Moncure wrote:
On Wed, Jun 7, 2017 at 9:16 AM, David Rosenstrauch
wrote:
* How could it be possible that there are 2 PG processes trying to
acquire
the same lock? Spark's partitioning should ensure that all upd
New to this group, so if this is not the right place to ask this question, or it
has been asked before or is already documented, please kindly point me to the
right group or the right thread/documentation. Thanks.
A BDR novice, I would like to know how BDR replicates changes among nodes in a
BDR group, let's say
On 06/07/2017 07:53 AM, tel medola wrote:
Change the relfilenode in above from 13741353 to 5214493
/I've not changed it yet, but I will.../
What is not clear is what 5214495 is?
/Not to me either/
select * from pg_class where relfilenode = 5214495;
/returns: none records/
But I'm worried about
On 8 June 2017 at 04:50, Zhu, Joshua wrote:
> How does BDR replicate a change delta on A to B, C, and D?
It's a mesh.
Once joined, it doesn't matter what the join node was, all nodes are equal.
> e.g., A
> replicates delta to B and D, and B to C, or some other way, or not
> statically determin
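The mesh described above can be pictured as every node shipping its own change stream directly to each peer, with nothing relayed through the join node. A toy model of that topology (purely illustrative; real BDR delivery is asynchronous and per-node):

```python
# Toy model of the mesh: every node ships its own change directly to every
# other node; no node relays on behalf of another.
nodes = {name: [] for name in "ABCD"}  # node name -> list of applied changes

def replicate(origin, change):
    nodes[origin].append(change)          # applied locally first
    for peer in nodes:
        if peer != origin:
            nodes[peer].append(change)    # direct origin -> peer delivery

replicate("A", "UPDATE t SET x = 1")
print(all(ch == ["UPDATE t SET x = 1"] for ch in nodes.values()))  # True
```

Because every node talks to every other, it indeed doesn't matter which node was used to join: after joining, all nodes are equal peers.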
Holy shit! (sorry)
Thanks, thanks!!!
It worked!
My goodness
After I pointed to the filenode, I did a reindex on the toast and some records
have already been located.
2017-06-07 17:58 GMT-03:00 Adrian Klaver :
> On 06/07/2017 07:53 AM, tel medola wrote:
>
>>
>> Change the relfilenode in
Hi,
I've been exploring the pg_catalog tables and pointed a couple of
tools at it to extract an ER diagram for a blog post. At first I
thought it was a bug in the drawing tool but it appears that the
relationships between the pg_catalog tables are implicit rather than
enforced by the database, is
Neil Anderson writes:
> I've been exploring the pg_catalog tables and pointed a couple of
> tools at it to extract an ER diagram for a blog post. At first I
> thought it was a bug in the drawing tool but it appears that the
> relationships between the pg_catalog tables are implicit rather than
> e