ent_on IS NULL
;
The API documentation is at
https://github.com/decibel/trunklet/blob/master/doc/trunklet.asc.
[1] https://github.com/decibel/pg_classy/blob/master/doc/pg_classy.asc
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
toast overhead...
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
able must be done *on
the partitions*, not on the master table.
lds in pg_stat_database_conflicts on the
*replica* increasing?
s got corrupted. You probably have either a hardware
or a configuration problem somewhere.
n q1
or q2? As in, do you have more than just a single COPY statement in
either file? Because the odds of that working are not very good...
ach Postgres backend. You can store things in there and they will
be accessible for all functions. You could keep an array of composite
types (records) in there and that would act pretty much like a table.
before, but I don't know if that was for analyze or what.
I wonder if we have some kind of memory leak in GIN's vacuum support...
On 3/5/15 7:36 PM, wambacher wrote:
Jim Nasby-5 wrote
>On 3/5/15 2:06 PM, wambacher wrote:
>Crashed? Or hit by the OOM killer? What's the log say?
killed by OOM, but has only 1.2 GB mem, which is ok to me.
Ok, but...
>What's the largest memory size that a vacuum/autovac
big difference between it and Slony is that Skytools is modular, so
it's designed to allow for this kind of flexibility.
e's some way to
handle this at the function call level.
On 4/7/15 4:17 PM, Tom Lane wrote:
Jim Nasby writes:
I've created a variant data type [1]. It seems to work pretty well,
except for some issues with casting.
Since the idea of the type is to allow storing any other data type, it
creates casts to and from all other types. At first these
On 4/7/15 11:59 AM, Gerardo Herzig wrote:
I guess that could need something like (untested)
delete from bigtable where text_column !~ '^[0-9][0-9]*$';
Won't work for...
.1
-1
1.1e+5
...
Really you need to do something like what Jerry suggested if you want
this to be robust.
epeat step 4, you
have to ROLLBACK and go back to step 1. If you design your app with that
in mind it's not a big deal. If you don't... ugh. :)
On 4/7/15 4:35 PM, Tom Lane wrote:
Jim Nasby writes:
On 4/7/15 4:17 PM, Tom Lane wrote:
I suspect that that's only the tip of the iceberg. Remember the mess
we had with implicit casts to text? And those only existed for a dozen
or so types, not for everything. Every function or ope
On 4/7/15 5:56 PM, David G. Johnston wrote:
On Tue, Apr 7, 2015 at 3:09 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:
On 4/7/15 4:35 PM, Tom Lane wrote:
Jim Nasby writes:
On 4/7/15 4:17 PM, Tom Lane wrote:
I suspect that that's only
y for the late reply.
Yes, that sounds like a bug in the GIN code. Please post to pgsql-bugs
or hit http://www.postgresql.org/support/submitbug/
mplicated. Is this even possible using
streaming replication? If yes, how can this be achieved?
You basically need to replace the master with a new replica built off
the new master. There's been some recent work to make this easier/faster
to do, but it's not terribly trivial, and you
hereas a C trigger could avoid almost all of that.
Rules are likely to be even faster (at least until you get to a fairly
large number of partitions), but as Thomas mentioned they're very tricky
to use. The critical thing to remember with them is they're essentially
hacking on the
ason I prefer londiste over Slony is that it's extremely
modular, so it's easy to do stuff like this.
on. If the array is quite small you might get
away with that. But if you're actually storing images you'll probably be
pretty unhappy with performance, because every time you make ANY change
to that array you'll need to completely re-write the *entire* array to disk.
og file, talking about blah context:
xxx total in xxx blocks;... That's diagnostics about where PG has used
all its memory. That's what we need here.
locking in at 250MB
after being zipped. What's the best way of getting this data out to
someone who can take a look at this? (Feel free to contact me off-list
to coordinate.)
It would be nice if you could further reduce it, but if not I'd suggest
posting it to something like DropBox
On 4/7/15 11:58 PM, Tom Lane wrote:
Jim Nasby writes:
On 4/4/15 8:38 AM, Chris Curvey wrote:
I can! I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error. I can supply the script to
create and populate the table, but that's still clo
arse
as well as more expensive to execute. If you really want to know for
certain, connect gdb to a database backend on your laptop/desktop, fire
off an insert (to load plpgsql into backend memory), set a gdb
breakpoint on exec_stmt_block(), and see what's different between the
tw
"r_agrio_hourly_pkey" PRIMARY KEY, btree (id)
"u_r_agrio_hourly" UNIQUE, btree (tagid, unitid, rowdate,
device_type, placement)
"unit_r_agrio_hourly" btree (unitid)
"rowdate_r_agrio_hourly" btree (rowdate)
"tag_r_agrio_hourly" bt
someone changing
GUCs, other than GUCs that are marked as superuser-only.
On 4/12/15 11:55 PM, Sameer Kumar wrote:
On Mon, 13 Apr 2015 11:35 Jim Nasby <jim.na...@bluetreble.com> wrote:
On 4/11/15 4:11 PM, Sameer Kumar wrote:
> Pg_settings currently has an upper bound column - though it is a
> view and that value cannot be ch
lse will
result in either huge catalogs or a lot of wasted catalog space.
Second, if you do per-database, that makes it trivial to scale across
multiple servers.
Regarding backups; you can easily do partial either way with pg_dump;
there's really no difference. You can't do partial wi
. Not
only does the OS have to match, you could actually run into problems
switching to a different CPU architecture (among other things, whether
the architecture is big- or little-endian matters).
a look at PgQ (https://wiki.postgresql.org/wiki/PGQ_Tutorial). It
uses something more efficient than bulk deletes to handle its queue,
and you can set it up so it will handle a large amount of queued items
rather well (increase the number of tables). The one downside you may
run into is yo
wonder what on earth is taking 60ms to update
the tuple. I suspect it's going into either finding a free buffer to put
the new tuple on, or waiting to try and extend the relation. Selecting
ctid from the freshly updated rows and comparing the first number to the
total number of pages in
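A rough way to check that (a sketch; "my_table" and the WHERE clause are placeholders) is to pull the block number out of ctid and compare it to the table's size in pages:

```sql
-- ctid is (block, offset); casting through text to point lets us grab the block number.
SELECT ctid,
       (ctid::text::point)[0]::bigint AS block_no,
       pg_relation_size('my_table') / current_setting('block_size')::bigint AS total_pages
  FROM my_table
 WHERE updated_at > now() - interval '1 minute';
```

If block_no is consistently at or near total_pages, the updated tuples are landing on newly extended pages.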
t see what EXPLAIN or EXPLAIN VERBOSE shows.
it from an
index, the data will stay in that index until vacuum comes along.
Also, there's no point in doing a REINDEX after a VACUUM FULL; vacuum
full rebuilds all the indexes for you.
On 4/14/15 4:44 PM, Pawel Veselov wrote:
On Tue, Apr 14, 2015 at 1:15 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:
On 4/14/15 1:28 PM, Pawel Veselov wrote:
I wonder if what I need to do, considering that I update a lot
of "the
same" rows a
ome of the same issues as floating point (there's
been recent discussion of that on -hackers), but I believe it'd be
better than double.
of hint bit updates until all data has been read a second
time, and b: a lot of this data will come up for freezing at around the
same time, creating a big chunk of work for autovacuum. That's caused
problems for me in the past, though that was on a database that had a
pretty high workload.
the possibility of using tools like
pg_repack and table partitioning. You could do this in pure SQL, but the
community might welcome a patch that adds the ability to use different
tables to the existing large object API.
t'
and then do
SELECT ... FROM my_function( 'some_table' ) AS data( d some_table )
but I'm hoping to avoid the extra level of indirection.
ng in the OS is
that you're saturating the memory bus... but you'd need WAY more than 5k
pages/s to do that.
I do suspect your pgfree/s is very high though; putting 200k pages/s on
the free list seems like something's broken.
BTW,
http://www.postgresql.org/message-id/466d72
xes for "user" to
get good query performance?
The performance for that query is going to come from quickly identifying
records from the WHERE clause, which is going to use the primary key.
For the query you're showing, indexes on the setting field aren't going
to help.
On 4/17/15 7:39 PM, David G. Johnston wrote:
On Friday, April 17, 2015, Jim Nasby <jim.na...@bluetreble.com> wrote:
I'm working on a function that will return a set of test data, for
unit testing database stuff. It does a few things, but ultimately
returns SETOF re
tion(...))::name_of_table).*
that gives you the same output as if you'd selected directly from the table.
very small
percentage of the table. The only ways I can think of offhand where that
will happen is if you ORDER BY columns that are already part of the
WHERE clause, or if there's no WHERE but you do use a LIMIT.
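A minimal illustration of the second case (table and column names invented): with an index on the ORDER BY column, a LIMIT with no WHERE clause can read just the first few index entries instead of sorting the whole table.

```sql
CREATE INDEX ON events (created_at);

-- Can walk the index in order and stop after 10 rows; no sort needed.
SELECT * FROM events ORDER BY created_at LIMIT 10;
```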
rocess that queue. The
main advantages being loose coupling and the ability to send the emails
asynchronously.
If you go that route you might find LISTEN/NOTIFY useful:
http://www.postgresql.org/docs/9.4/static/sql-listen.html
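A minimal sketch of that pattern (the queue table and channel name are made up):

```sql
-- Producer: queue the email, then wake up any listening worker.
INSERT INTO email_queue (recipient, body) VALUES ('user@example.com', 'Hello');
NOTIFY email_queue;

-- Worker session: register interest, then drain the queue when notified.
LISTEN email_queue;
DELETE FROM email_queue RETURNING recipient, body;
```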
of CPU to find one. To make matters
worse, that search for a buffer takes place while holding the extension
lock.
Would you be able to get a stack trace of a backend that's holding an
extension lock? Or maybe perf would provide some insight.
Would you know a good resource to get more k
On 4/19/15 3:08 PM, Jim Nasby wrote:
> I do suspect your pgfree/s is very high though; putting 200k
pages/s on
> the free list seems like something's broken.
>
The system has constant and considerable load of small writes. The
pg_activity tool shows 300 IOPs sustained (it
On 4/19/15 9:53 PM, Tim Uckun wrote:
On Sat, Apr 18, 2015 at 10:24 AM, Pawel Veselov <pawel.vese...@gmail.com> wrote:
I found some dangling prepared transactions
How do you find and remove these?
SELECT * FROM pg_prepared_xacts;
ROLLBACK PREPARED 'gid'; -- takes the gid string from pg_prepared_xacts, not the numeric xid
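To spot stale entries, something like this works (the interval is arbitrary), since pg_prepared_xacts records when each transaction was prepared:

```sql
SELECT gid, prepared, owner, database
  FROM pg_prepared_xacts
 WHERE prepared < now() - interval '1 hour';
```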
On 4/17/15 4:29 PM, Andreas Joseph Krogh wrote:
På fredag 17. april 2015 kl. 21:11:05, skrev Jim Nasby
<jim.na...@bluetreble.com>:
On 4/15/15 9:22 AM, Andreas Joseph Krogh wrote:
> På onsdag 15. april 2015 kl. 16:05:22, skrev Adam Hooper
> mailto:a...@ad
, 10, -73.597070,
45.544083))r)q
, generate_series(1, iterations) i
$body$
You also need to consider the indexes. First, make absolutely certain
you need all of them. Get rid of every one that isn't required. Second,
you need to test the amount of data that's written during an update with
the indexes in place *and doing a subsequent VACUUM* compared to
dropp
n the new schema without the FK constraints, copy
the data, then add the constraints afterwards.
You could also defer the constraints, but that's probably going to be a
lot slower.
error, not a notice.
We need more information from the OP about what they're doing.
s of extra WAL.
y to break your data into smaller pieces
and process each piece individually.
some point. It's a problem that affects all replication systems.
't apply it itself. Or it could be
multi-master and possibly a provider to read-only subscribers.
Finally you'd need to associate tables and sequences with a set. I agree
you'd want to look at FKs. I'd also like to be able to define rules for
a set, like "include everything
...
AND (
f.user_id__a = 57
OR ( m.user_id = 57 AND m.group_id IN (1,2,3) )
)
I'm not sure how fast it'll be though. I suspect your best bet is to put
the UNION approach inside a set returning function; that way the ugly is
contained in one place.
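Roughly like this (all names are invented and the elided column lists are left as-is; the real return type would name the actual columns):

```sql
CREATE FUNCTION rows_for_user(p_user int) RETURNS TABLE (...) AS $$
  SELECT ... FROM f ... WHERE f.user_id__a = p_user
  UNION
  SELECT ... FROM m ... WHERE m.user_id = p_user AND m.group_id IN (1,2,3)
$$ LANGUAGE sql STABLE;
```

Callers then just do SELECT * FROM rows_for_user(57) and never see the UNION.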
ication commands. Then the only thing missing would be
defining what sets lived where; that would allow the generic system at
least define almost every aspect of a replication environment. Maybe
that's too ambitious; the first step would be to try just what tables
are in which set and see how
t for this. Is
postgresql.conf the same? Same class of hardware? Same OS version? Same
filesystem? Same OS and filesystem configs? How are you actually loading
the data?
Parallel mode is better than direct mode?
What 'parallel mode'?
is not a recipe for good
concurrency. For that matter, running DDL on a recurring basis isn't
good either.
BTW, you might want to look at using PgQ to do the queuing; it has
different methods of maintaining its queues so you don't run into these
sort of problems.
elects unless I need a specific CTE feature.
el.
BTW, John, you mentioned RULEs elsewhere... be very careful about using
those. They're incredibly easy to get wrong and generally not worth the
trouble.
licate the details of ALL
activity, including maintenance like VACUUM. In some environments that
might be slower than logical replication.
st rowcount was over 100M.
Nothing was partitioned. Granted, it was running on servers with 512GB
of RAM, but those aren't exactly insanely expensive.
In other words, always remember the first rule of performance
optimization: don't. :)
erious database
sizes if you're willing to spend money on hardware. Given what kind of a
server you can get for $20k now-a-days, I'd seriously investigate that
if you haven't already.
BTW, there's also https://github.com/citusdata/pg_shard. I've never
really looked at it
= trunc( random()*45000) ;
You could create a temp table with the random value and JOIN to it:
CREATE TEMP TABLE rnd AS SELECT trunc(random()*45000) AS customer_id;
Another option might be to use a prepared statement:
PREPARE test AS SELECT ... WHERE customer_id = $1;
EXECUTE test( random()*45000 );
. It stores up
a list of affected rows, then COPYs those from the source databases
to the target ones. It's particularly strong when you have large
delays and many updates to the same row.
Doh! My apologies.
ably isn't
the right answer anyway.
re's an easier/better way than messing around with xmin to do it...
r the related video useful:
http://www.pgcon.org/2015/schedule/events/829.en.html
https://www.youtube.com/watch?v=b1fcvkl0ffQ
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com
format. It'd
involve some C coding though.
It would be nice if there was a way to do transforms during COPY. I
vaguely remember some discussion of that on hackers some time ago.
d shows up in pg_stat_activity. See BackendStartup() for details.
In particular, BackendInitialize() is what collects the startup packet
from the port before calling BackendRun() which calls PostgresMain()
which is what finally registers the backend in the proc array.
adding a field to a parent would no longer go to the
end of the children.
#2 could be as simple as a change to the documentation. Patches (or even
just re-written text) welcome.
thout a fully-fledged
parser that is dead on arrival.
ISTM that's also the biggest blocker for allowing extensions that refer
to other schemas to be relocatable. It would be interesting if we had
some way to handle this inside function bodies, perhaps via something
equivalent to @extschema@.
e possible to reduce the lock contention of truncate.
ther option is supporting some kind of official way to specially
designate database objects in any procedure language (ie, the @schema@
syntax that extensions use). That would make it possible to rename
properly written functions without adverse side effects.
don't know of a way around that, but you might be better off using SSL
certs to authenticate. I believe there's even something similar to
ssh-keychain that would allow you not to store the passphrase on-disk
(though you would have to enter it manually on reboot).
'll have incremental refresh of some sort, which might help in your case.
On 9/15/15 10:32 AM, Ben Chobot wrote:
On Sep 15, 2015, at 12:27 AM, Jim Nasby wrote:
On 9/15/15 12:48 AM, Ben Chobot wrote:
We're in a situation where we would like to take advantage of the pgpass
hostname field to determine which password gets used. For example:
psql -h prod-serv
e_origine BETWEEN t1.date_debut_garantie AND
t1.date_fin_garantie AND t3.id_dossier = $1
6 $function$
ly, there's probably some other hooks you could use for this purpose.
BTW, if you want to extend Pavel's code to also handle logout, you just
need to create a _PG_fini function. See the end of this section in the
docs: http://www.postgresql.org/docs/9.4/static/xfunc-c.html#XFUNC-C-DY
obably some other uses as well.
On 10/1/15 12:27 PM, Steve Pribyl wrote:
I am in the process of testing out BDR
Please send BDR requests to the BDR mailing list. Thanks!
you consistently get lower latency on the ZIL than on the vdevs; that
will make every COMMIT run faster.
(BTW, this is all based on the assumption that ZFS treats fsync as a
synchronous request.)
."users" x WHERE
"id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x"
This isn't a locking failure, it's a serialization failure. I'm not sure
why it's happening though... is there an index on date?
fsync
latency because it's a) only writing and b) writing sequentially.
Essentially, it's the same idea as having a separate pg_xlog partition
(except that depending on your setup there may well be pg_xlog read
activity as well).
On 10/2/15 11:44 AM, Olivier Dony wrote:
On 10/02/2015 12:28 AM, Jim Nasby wrote:
On 9/29/15 9:47 AM, Olivier Dony wrote:
My understanding of the KEY SHARE/NO KEY UPDATE locks introduced in 9.3
was that they would avoid side-effects/blocking between transactions
that are only linked via FK
uld be if you could
shard based on something like object or customer; that way you only have
to look up which shard the customer lives in.
and compare that to a prepared SELECT currval(). You might
notice a difference at higher client counts with no caching, but I doubt
you'd see that much difference with caching turned on.
me that we could create btree opclasses as well, if
we explicitly compared X before Y (or vice-versa). I certainly wouldn't
call those operators < or >, but maybe there's enough use to supporting
btree for this to make sense.
ss with it. Likewise you normally don't need to
explicitly freeze, though it's not a bad idea to do so after a large
data load. Just make sure that there's no open transactions that were
started before the data load transaction when you run the VACUUM.
p
in this particular case, but it would significantly expand the
usefulness of a streaming version of md5 and all the other hash operators.
uld help to make the back-patch happen, please
let me know!
I'd say you should probably open a bug about this to make sure it's
visible if you want it fixed. Or start a thread on -hackers.
On 10/12/15 10:03 AM, Alex Magnum wrote:
Is there a way to use a limit in the join?
SELECT ... FROM table_a a
LEFT JOIN ( SELECT ... FROM table_b WHERE ... LIMIT 1) b
ON a.blah = b.blah
1",""
2015-10-12 09:29:04.395 CDT,,,12693,,561bb1ae.3195,21,,2015-10-12 08:12:14 CDT,,0,LOG,0,"starting background
worker process ""bdr (6204748238611542317,1,16494,)->bdr
(6204748255428234532,1,""",""
Looks like so
On 10/12/15 10:14 AM, Jim Nasby wrote:
On 10/12/15 9:37 AM, Steve Pribyl wrote:
I am loading up a 60G database into BDR database and these "ERRORS"
are in my logs. Is not normal behavior or is something going bad.
2015-10-12 09:28:59.389 CDT,,,30371,,561bc17d.76a3,1,,2015-10-12
09:1
igger to set the value based on
what's already in the table.
By the way, the issue with doing a simple count is that it's not safe
from race conditions, like an insert and a delete happening together.
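A sketch of the race-safe version (all table and column names invented): an AFTER trigger bumps a counter row, so concurrent inserts and deletes serialize on that row's lock instead of each recomputing a count that may already be stale.

```sql
CREATE OR REPLACE FUNCTION maintain_count() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE item_counts SET n = n + 1 WHERE parent_id = NEW.parent_id;
  ELSE  -- DELETE
    UPDATE item_counts SET n = n - 1 WHERE parent_id = OLD.parent_id;
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_maintain_count
  AFTER INSERT OR DELETE ON items
  FOR EACH ROW EXECUTE PROCEDURE maintain_count();
```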
ned,
such that the user can know to apply "AT TIME ZONE ..."
to the result as needed.
The problem there is you can't reliably convert a time to a different
timezone without knowing what date you're talking about.
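For example, the UTC offset of a named zone changes with DST, so the same clock time maps to different UTC instants depending on the date:

```sql
SELECT '2015-01-15 12:00'::timestamp AT TIME ZONE 'US/Central' AS winter, -- CST, UTC-6
       '2015-07-15 12:00'::timestamp AT TIME ZONE 'US/Central' AS summer; -- CDT, UTC-5
```

A bare time of day can't tell you which of those offsets applies.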
LE ...(
...
, record_number smallint NOT NULL CONSTRAINT
record_number_must_be_1_or_2 CHECK( record_number BETWEEN 1 AND 2 )
);