On Sep 27, 2014, at 7:48 PM, snacktime wrote:
> The schema is that a key is a string and the value is a string or binary. I
> am actually storing protocol buffer messages, but the library gives me the
> ability to serialize to native protobuf or to JSON. JSON is useful at times,
> especially
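A minimal sketch of that kind of key/value schema, with hypothetical table
names; bytea holds the native protobuf bytes, and json (or jsonb on 9.4)
would hold the JSON serialization instead:

    -- hypothetical key/value table: string key, binary protobuf value
    CREATE TABLE kv_store (
        key   text  PRIMARY KEY,
        value bytea NOT NULL
    );

    -- JSON variant, queryable with the json operators
    CREATE TABLE kv_store_json (
        key   text PRIMARY KEY,
        value json NOT NULL
    );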
On 09/29/2014 05:16 PM, Adrian Klaver wrote:
On 09/29/2014 02:17 PM, Andy Colson wrote:
Crap! Is this a problem?!
I switched back to cp, all was going well, here are some logs:
Sep 29 16:07:10 webserv postgres[17735]: [590-1] ,,2014-09-29 16:07:10.888 CDT,: LOG: restored log file "000200
On 09/29/2014 02:17 PM, Andy Colson wrote:
Crap! Is this a problem?!
I switched back to cp, all was going well, here are some logs:
Sep 29 16:07:10 webserv postgres[17735]: [590-1] ,,2014-09-29 16:07:10.888 CDT,: LOG: restored log file "000200B90023" from archive
Sep 29 16:07:13 w
On Sep 29, 2014, at 4:06 PM, Nick Guenther wrote:
> A newbie tangent question: how do you access the transaction serial? Is it
> txid_current() as listed in
> http://www.postgresql.org/docs/9.3/static/functions-info.html?
My implementations were ridiculously simple/naive in design, and existed
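For what it's worth, yes: txid_current() is the one. It returns the current
transaction's ID, assigning one if the transaction doesn't have one yet.
For example:

    SELECT txid_current();          -- returns a bigint, e.g. 176348
    SELECT txid_current_snapshot(); -- also shows which txids are in progress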
Thank you Felix, Gavin, and Jonathan for your responses.
Felix & Jonathan: both of you mention just storing deltas. But if you do
that, how do you associate the delta record with the original row? Where's
the PK stored, if it wasn't part of the delta?
Felix, thank you very much for the example co
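One sketch of how the PK can travel with the delta (table and column names
hypothetical): store it as its own column in the delta table, so it never
has to be part of the delta payload itself.

    -- hypothetical delta table for an audited "accounts" table
    CREATE TABLE accounts_deltas (
        delta_id   bigserial   PRIMARY KEY,
        account_id integer     NOT NULL,  -- PK of the row in accounts
        changed_at timestamptz NOT NULL DEFAULT now(),
        delta      json        NOT NULL   -- only the changed columns
    );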
Crap! Is this a problem?!
I switched back to cp, all was going well, here are some logs:
Sep 29 16:07:10 webserv postgres[17735]: [590-1] ,,2014-09-29 16:07:10.888 CDT,: LOG: restored log file "000200B90023" from archive
Sep 29 16:07:13 webserv postgres[17734]: [3-1] ,,2014-09-29
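For reference, a restore_command that produces "restored log file ... from
archive" lines like the above is just a cp from the archive directory
(paths here are placeholders):

    # recovery.conf on the standby
    restore_command = 'cp /mnt/server/archivedir/%f "%p"'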
I have a question about BDR Global Sequences.
I've been playing with BDR on PG 9.4beta2, built from source from the
2ndQuadrant Git repository (git://git.postgresql.org/git/2ndquadrant_bdr.git).
When trying a 100-row \copy in, letting PG choose the global sequence
values, I get "ERROR: could not
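I don't have the full error text handy, but a minimal way to reproduce the
scenario looks like this (table name hypothetical; this assumes the sequence
behind the serial column has been set up as a BDR global sequence):

    CREATE TABLE bdr_test (id serial PRIMARY KEY, val text);
    -- bulk-load 100 rows, letting the default fill in id
    \copy bdr_test (val) FROM 'hundred_rows.txt'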
Hi All.
I have a slave that was using streaming replication, and all was well,
but it's far away (in a remote data center) and it's using too much bandwidth
at peak times.
I'm switching it to WAL shipping at night. I don't really need it
constantly up to date; nightly is good enough.
In my reco
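One hedged way to wire that up (hosts and paths hypothetical): keep
archiving on the primary, sync the archive over once a night, and let the
standby restore from its local copy.

    # primary postgresql.conf
    archive_mode = on
    archive_command = 'cp %p /var/lib/pgsql/wal_archive/%f'

    # nightly cron entry on the standby
    0 2 * * * rsync -a primary:/var/lib/pgsql/wal_archive/ /var/lib/pgsql/wal_archive/

    # recovery.conf on the standby
    standby_mode = 'on'
    restore_command = 'cp /var/lib/pgsql/wal_archive/%f "%p"'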
On September 29, 2014 11:08:55 AM EDT, Jonathan Vanasco wrote:
>
>- use a "transaction" log. every write session gets logged into the
>transaction table (serial, timestamp, user_id). all updates to the
>recorded tables include the transaction's serial. then there is a
>"transactions" table,
In the past, to accomplish the same thing I've done this:
- store the data in hstore/json. Instead of storing snapshots, I store deltas.
I've been using a second table, though, because it improved performance on
reads and writes.
- use a "transaction" log. Every write session gets logged in
Hey
I've also tried to implement database versioning using JSON to log changes in
tables. Here it is: https://github.com/fxku/audit
I've got two versioning tables, one storing information about all transactions
that happened and one where I put the JSON logs of row changes of each table.
I'm
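I haven't compared this against the repo's actual schema, but the core of a
trigger-based JSON change log usually looks something like the following
(names hypothetical):

    -- one JSON snapshot per modified row, tagged with the transaction id
    CREATE TABLE row_log (
        id        bigserial PRIMARY KEY,
        txid      bigint    NOT NULL DEFAULT txid_current(),
        tablename text      NOT NULL,
        op        text      NOT NULL,   -- INSERT / UPDATE / DELETE
        row_data  json      NOT NULL
    );

    CREATE OR REPLACE FUNCTION log_row_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO row_log (tablename, op, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD));
            RETURN OLD;
        ELSE
            INSERT INTO row_log (tablename, op, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW));
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER widgets_log AFTER INSERT OR UPDATE OR DELETE
        ON widgets FOR EACH ROW EXECUTE PROCEDURE log_row_change();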