users can be found
on the wiki here: https://wiki.postgresql.org/wiki/PGLister_Announce
Once the migration of these lists is complete, an 'after' email will be
sent out.
Thanks!
Stephen
the data load is done. That will avoid having
the table's contents written into the WAL, and PG will treat it as a
regular table post-commit, meaning that it won't be truncated on a
database crash.
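In other words, a minimal sketch of that approach (table and file names
made up; assumes the unlogged-table route and 9.5+ for SET LOGGED):
CREATE UNLOGGED TABLE bulk_load (id bigint, data text);
COPY bulk_load FROM '/path/to/data.csv' WITH (FORMAT csv);
-- once the load is done, make it a regular, crash-safe table
-- (the rewrite is WAL-logged unless wal_level = minimal)
ALTER TABLE bulk_load SET LOGGED;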
Thanks!
Stephen
queries slower. Or would it be better to store the array data in a
> separate table e.g. have [s] as it is now but turn [sp] into an
> array aggregated table.
If that's the case then you would probably be better off putting the
arrays into an independent table, yes.
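As a sketch of what I mean ([sp] is from your description; the column
names here are made up):
-- one array row per parent, instead of many rows in sp
CREATE TABLE sp_agg AS
SELECT s_id, array_agg(val ORDER BY val) AS vals
FROM sp
GROUP BY s_id;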
Thanks!
Stephen
nique allows you to reduce the number of tuples stored.
Thanks!
Stephen
I've had to do Oracle work though and I could be mis-remembering. People
> may be confusing Oracle and PostgreSQL.
Yes, that thought has occurred to me as well; in some other database
systems you can ask for the system to be quiesced.
Thanks!
Stephen
ending and thankless task, so, my thanks to you for your
efforts. :)
> Never fails to surprise me how many people don't read the docs.
+1MM.
Thanks again!
Stephen
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> On Mon, Jan 23, 2017 at 7:28 AM, Jim Nasby wrote:
> > On 1/22/17 11:32 AM, Stephen Frost wrote:
> >> The 1-second window concern is regarding the validity of a subsequent
> >> incremental backup.
> >
> >
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> On Sun, Jan 22, 2017 at 6:57 AM, Stephen Frost wrote:
> > Just to make sure anyone reading the mailing list archives isn't
> > confused, running pg_start_backup does *not* make PG stop writing to
> > BASEDIR (or DATADIR, or an
you and by ZFS) and are entirely atomic to
the entire PG instance.
For example, I don't believe ZFS snapshots will be atomic if multiple
ZFS filesystems on independent ZFS pools are being used underneath a
single PG instance.
And, as others have also said, always test, test, test.
Thanks
* Jim Nasby (jim.na...@bluetreble.com) wrote:
> On 1/20/17 9:06 AM, Stephen Frost wrote:
> >All the pages are the same size, so I'm surprised you didn't consider
> >just having a format along the lines of: magic+offset+page,
> >magic+offset+page, magic+offset+page, e
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On Jan 20, 2017, at 19:59, Stephen Frost wrote:
> >>> How are you testing your backups..? Do you have page-level checksums
> >>> enabled on your database?
> >>
> >> Yep,
the backup.
> The backup is Full Backup or Snapshot and it will work whenever needed.
> We are not saying about Incremental Backup yet.
> Along with collecting the XLOG File, you can have incremental backup and
> having complete continuous data backup.
> in this case, Stephen is sug
s is, by far, the
best approach. Attempting to roll your own backup system based on rsync
is not something I am comfortable recommending any more because it is
*not* simple to do correctly.
Thanks!
Stephen
hen restoring the backup or
the database will not be consistent.
Thanks!
Stephen
pg_xlog/pg_wal.
Please do not claim that PG stops writing to the DATADIR or BASEDIR
after a pg_start_backup(), that is not correct and could lead to invalid
backups.
Thanks!
Stephen
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On Jan 20, 2017, at 18:06, Stephen Frost wrote:
> >
> > Right, without incremental or compressed backups, you'd have to have
> > room for 7 full copies of your database. Have you looked at what your
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On Jan 20, 2017, at 16:40, Stephen Frost wrote:
> >> Increments in pgbackrest are done on file level which is not really
> >> efficient. We have done parallelism, compression and page-level increments
>
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On Jan 20, 2017, at 15:22, Stephen Frost wrote:
> >> This process can be automatized by some applications like barman
> >> http://www.pgbarman.org/
> >
> > Last I checked, barman is still single-t
gbackrest- http://www.pgbackrest.org.
pgbackrest has parallel backup, incremental/differential/full backup
support, supports compression, CRC checking, and a whole ton of other
good stuff.
Thanks!
Stephen
replication (eg: bucardo,
slony, etc) instead of block-based, or you could look at the logical
replication capabilities (pg_logical) to see about using that for your
replica-for-reporting instead.
Thanks!
Stephen
I have a table with two indexes...
CREATE TABLE mobile_summary_usage
(
    import text,
    msisdn text,
    type   text,
    total  integer,
    day    date,
    cycle  text
);
CREATE INDEX mobile_summary_usage_msisdn_cycle ON mobile_summary_usage
USING btree (msisdn, cycle);
CREATE I
r replica, of course, but that might be a worthwhile trade-off.
For pure backups, another approach is to use pg_receivexlog and a tool
like barman which supports verifying that the WAL for a given backup has
reached the remote side.
Thanks!
Stephen
ke the backups larger, of course. Also, there's a limit to how
far that will get you- once you get to "no compression", that's just as
fast as pg_basebackup can run.
If you're interested in a backup tool which can operate in parallel, you
might want to look at pgbackrest.
Thanks!
Stephen
Greg,
* Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost wrote:
> > * Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> > > Bigger buckets mean a wider possibility of response times. Some buckets
> > > may contain
ou'd want to have at
least one replica and a setup which allows you to flip traffic to it
very quickly to maintain the 30ms response times.
Thanks!
Stephen
key
are easily available then we could certainly sort those prior to going
and visiting them.
That's not quite the same as keeping the tuples in order in the heap,
but would more-or-less achieve the effect desired, I believe?
Thanks!
Stephen
> loading will do a check. For now, I'd like the fastest way. Would
> you suggest commit every 1000 or 3000 records?
The improvement drops off pretty quickly in my experience, but it
depends on the size of the records and other things.
Try it and see..? It's almost certainly
ds (e.g., 100, 500, 1000, etc)
It depends on what you need.
Data will be available to concurrent processes earlier with (1), while
(2) will go faster.
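Roughly, the two shapes look like this (hypothetical table, with
generate_series standing in for your real load):
CREATE TABLE load_target (n integer);
-- (1) commit per batch: each batch is visible to other sessions as
-- soon as it commits
BEGIN;
INSERT INTO load_target SELECT g FROM generate_series(1, 1000) g;
COMMIT;
BEGIN;
INSERT INTO load_target SELECT g FROM generate_series(1001, 2000) g;
COMMIT;
-- (2) one big transaction: nothing is visible until the end, but the
-- overall load finishes sooner
BEGIN;
INSERT INTO load_target SELECT g FROM generate_series(1, 100000) g;
COMMIT;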
Thanks,
Stephen
use the NFS
server's memory for shared_buffers anyway).
All that said, there has always been a recommendation of caution around
using NFS as a backing store for PG, or any RDBMS..
Thanks,
Stephen
nces.
Costing integer (or anything that doesn't require pointer manipulations)
operations as cheaper than text-based operations also makes sense to me,
even though of course there are more things happening when we do these
comparisons than the simple CPU-level act of doing the cmp.
Thanks,
Stephen
'll more-or-less get you there (turn
on \timing in psql). When reading this thread, I was thinking it might
be useful to add plan time somewhere in explain/explain analyze output
though..
Thanks,
Stephen
lie to the guest OS too. Of course, there are similar concerns about
a SAN or even local RAID cards, but there's a lot more general
familiarity and history around those which reduces the risk there (or at
least, that's the thought).
Thanks,
Stephen
g which you could provide, it would absolutely help us in
understanding and perhaps solving this issue.
Thanks!
Stephen
imple, well-defined test case which shows the failure.
Thanks!
Stephen
eg: when the 'WHEN' components are all constants and the data
type is something we can manage, etc, etc.
Thanks,
Stephen
itioning would certainly be nice. I was really
hoping that was going to happen for 9.3, but it seems unlikely now
(unless I've missed something).
Thanks,
Stephen
switch statement can be.
Thanks,
Stephen
rid of
the expression evaluation (if that'd be possible) would make up for it.
Thanks,
Stephen
t you can get bottle-necked on the WAL
data, unless you've taken steps to avoid that WAL.
Thanks,
Stephen
an probably use the existing
contrib module. The difference is that, with 9.2, it'll actually do
normalization of non-PREPARED queries and will include some additional
statistics and information.
Thanks,
Stephen
you'd like, so we can further improve
things in that area..
Thanks,
Stephen
ven if the server side is magically made much
faster. The constant back-and-forth isn't cheap.
Thanks,
Stephen
ch can handle
1.35M write transactions per second? That's quite a few and regardless
of RDBMS, I expect you'll need quite an I/O system to handle that.
Thanks,
Stephen
l definitely update list ,
> when i get to the point of putting the blame on DB :-) .
Ok. :)
Thanks,
Stephen
logspot.com/2012/04/did-i-say-32-cores-how-about-64.html
That's a pretty contrived test case, but I suppose it's possible your
case is actually close enough to be getting affected also..
Thanks,
Stephen
* Sebastian Melchior (webmas...@mailz.de) wrote:
> Does anyone have any idea what could cause this issue or how we can further
> debug it?
Are you logging checkpoints? If not, you should; if so, then see if
they correlate to the time of the slowdown..?
Thanks,
Stephen
* Stefan Keller (sfkel...@gmail.com) wrote:
> So, are there any developments going on with PostgreSQL as Stephen
> suggested in the former thread?
While the idea has been getting kicked around, I don't know of anyone
actively working on developing code to implement it.
bles?
I'm not aware of any issues (beyond those already documented for async
commit..) with having async commit and unlogged tables.
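For what it's worth, combining the two is just (a sketch; the table
here is made up):
-- async commit: a crash can lose the last few commits, but can't
-- corrupt the database
SET synchronous_commit = off;
-- unlogged: skips WAL entirely and is truncated after a crash
CREATE UNLOGGED TABLE scratch (k text, v text);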
Thanks,
Stephen
emory at the time of the last shut-down, but I'm not sure that would
help your case either since you're rebuilding everything every night and
that's what's trashing your buffers (because everything ends up getting
moved around). You might actually want to consider if that's doing more
harm than good for you. If you weren't doing that, then the cache
wouldn't be getting destroyed every night..
Thanks,
Stephen
e that.
If there's a lot of updates/changes to this table, you might check if
there's a lot of bloat (check_postgres works great for this..).
Eliminating excessive bloat, if there is any, could help with all
accesses to that table, of course, since it would reduce the amount of
data which would need to be read.
Thanks,
Stephen
be trivial, but it certainly would be nice as this strikes me as a
very common use-case.
> If you're stuck with this table arrangement, one thing that would help
> is a two-column index on (host_guid, firstloadtime) on each child table.
Agreed, I mentioned this to the OP previously.
using 64bit integers and would allow
you to use all the PG date/time functions and operators. Just a
thought.
Thanks,
Stephen
he latest and pulls
whatever records it can from each day and then stops once it hits the
limit.
Thanks,
Stephen
epared queries and
constraint exclusion..?
Thanks,
Stephen
ou don't need to reset it since a new connection will get the default).
Thanks,
Stephen
data
before copying it..
If you INSERT .. SELECT, you're going to get the real data in the WAL,
and also in the heap of the new table..
Thanks,
Stephen
testing it to see
exactly what happens, of course..
Thanks,
Stephen
ince it all has to end up on disk at
some point). Not much point in having the overhead of COW for that kind
of environment, I wouldn't think.
Thanks,
Stephen
ll to the memory-based FS.
Thanks,
Stephen
just up it for specific queries that may benefit from it.
Thanks,
Stephen
t to 100GB... ;)
I agree that it shouldn't be the default, however. That's asking for
trouble. Do it for the specific queries that need it.
Thanks,
Stephen
go into swap. Note that you can set work_mem for a given
session after you connect to the database, just do:
set work_mem = '1GB';
in your session before running other queries. Doing that won't impact
other sessions.
Thanks,
Stephen
igure out if it can hash the small table (you want that to happen,
trust me..). If you do end up having sorts, it'll also use the work_mem
value to figure out how much memory to use for sorting.
> So... Can Postgres support a 5-10 TB database with the use pattern
> stated above?
t it's the controller that's really the issue here..
Thanks,
Stephen
rt back to a daily backup in the event the OS reboots
due to some wacky NMI, or the ASR going haywire..
Thanks,
Stephen
ecent and you've got a lot of
memory available for kernel caching, should be quick.
Thanks,
Stephen
On 4/29/2011 10:24 AM, Mark Steben wrote:
Hi,
Had a recent conversation with a tech from this company called
FUSION-IO. They sell
io cards designed to replace conventional disks. The cards can be up
to 3 TB in size and apparently
are installed in closer proximity to the CPU than the disks are.
total memory per se.
There's probably something else that's trying to grab all the memory and
then tries to use it and PG ends up getting nailed because the kernel
over-attributes memory to it. You should be looking for that other
process..
Thanks,
Stephen
from the same problem that
work_mem has, specifically that PG still won't allocate more than
1GB of memory for any single operation.
Thanks,
Stephen
* Mark Stosberg (m...@summersault.com) wrote:
> Recommendations?
PostGIS, geometry columns, and UTM.. I'm not sure where they are wrt
adding KNN support, but it's something they've been anxious to have for
a while, so I expect support will come quickly.
Thanks,
Does it have less work to do? That's a
bit harder to say but my guess is "not so much that you'd actually be
able to notice it."..
Thanks,
Stephen
ex or
> cluster.
Just to share my experiences- I've found that creating a new table and
inserting into it is actually faster than doing full-table updates, if
that's an option for you.
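A sketch of what I mean (names made up; indexes, constraints, and
grants have to be recreated on the new table before the swap):
BEGIN;
-- instead of: UPDATE t SET col = upper(col);
CREATE TABLE t_new AS
SELECT id, upper(col) AS col
FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;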
Thanks,
Stephen
be even *faster* would be to run all 5 of
those queries against the child tables in parallel (given that you have
over 5 CPUs and enough memory that you don't start swapping).
If it's still too big on the per-child basis, you might be able to use
conditionals to do the first 100 strands, then the next hundred, etc.
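For example (table and column names made up):
-- chunk the per-child query by a range condition, one chunk per
-- session
SELECT count(*) FROM child_table_1 WHERE strand_id BETWEEN 1 AND 100;
-- next session: WHERE strand_id BETWEEN 101 AND 200, and so on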
>I appreciate the comments thus far.
Let's hope you'll always appreciate them. :)
Thanks,
Stephen
When
you run top, is your PG process constantly in 'D' state, or is it in 'R'
state, or what? Might help figure some of that out. Note that
parallelizing the query will help regardless of if it's disk bound or
CPU bound, when you're running on the kind of hardware you're talking
about (lots of spindles, multiple CPUs, etc).
Thanks,
Stephen
en checkpoints, so that the checkpoints
aren't as big and painful. That can be done by making the background
writer more aggressive.
Thanks,
Stephen
all the same actual memory, not 100M
per process.
Thanks,
Stephen
't great, but honestly is surprisingly good all things
> considered.
I'm kind of surprised at each connection taking 100MB, especially ones
which are just doing simple inserts.
Thanks,
Stephen
running out of memory.
> I think the first thing to do is look into using a connection pooler
> like pgpool to reduce your connection memory overhead.
Yeah.. Having the number of database connections be close to the number
of processors is usually recommended.
Stephen
es atomic changes instead of spinlocks for
certain locking situations..
If that's all the MIT folks did, they certainly made it sound like a lot
more. :)
Stephen
could try to replicate the lseek() contention?
I can give it a shot, but the impression I had from the paper is that
the lseek() contention wouldn't be seen without the changes to the lock
manager...? Or did I misunderstand?
Thanks,
Stephen
ntly useful to
test with..
Stephen
which can often be due to
missing pieces from the JOIN clause or misunderstanding of the database
schema...).
Stephen
etting to use...)? Increasing work_mem is often a good
idea if your system can afford it based on the number/kind of queries
running concurrently. Note that you can also increase that setting for
just a single role, single session, or even single query.
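Concretely (the role name is made up):
ALTER ROLE reporting SET work_mem = '256MB'; -- new sessions for one role
SET work_mem = '256MB';                      -- the current session only
BEGIN;
SET LOCAL work_mem = '256MB';                -- just this transaction
-- run the one query that needs it here
COMMIT;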
Thanks,
Stephen
It's entirely possible that the index is *not*
the fastest way to pull this data (it's nearly 10% of the table..), if
the stats were better it might use a seq scan instead, not sure how bad
the cost of the filter itself would be.
Thanks,
Stephen
included (which, really, shouldn't be that big a deal).
Did you look at what the other reply suggested? Do you have
constraint_exclusion = 'on' in your postgresql.conf?
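That is, either in postgresql.conf or per session:
SET constraint_exclusion = on;
SHOW constraint_exclusion; -- verify what the session is actually using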
Thanks,
Stephen
NULL) OR
> (persons.modified > indexing_persons.indexed))
>   ->  Seq Scan on persons  (cost=0.00..4438.29 rows=143629 width=16) (actual time=0.000..56.000 rows=143629 loops=1)
>   ->  Hash  (cost=2534.86..2534.86 rows=142286 width=16) (actual tim
r any other PG operations (PG would use at most
4GB-shared_buffers, or so).
Thanks,
Stephen
tabase. If you're building a lot of indexes then you probably
want to split up the statements into multiple connections and run them
in parallel.
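That is, something like this, with each statement issued from its own
connection at the same time (names made up):
-- session 1:
CREATE INDEX t_a_idx ON t (a);
-- session 2:
CREATE INDEX t_b_idx ON t (b);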
Thanks,
Stephen
built-ins now available in 8.4 (array_agg), that copying
doesn't happen any more.
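e.g. (names made up):
SELECT customer_id, array_agg(order_id) AS order_ids
FROM orders
GROUP BY customer_id;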
Thanks,
Stephen
to be an overall improvement of, say,
10%, or a 10-fold improvement. :)
Thanks,
Stephen
saction.
That's not really relevant.. Is it called a lot from the same
backend/database connection? If so, and if you're using regular SELECT
statements and the like (not EXECUTE), then they're getting prepared the
first time they're used and that is kept across transactions.
these stored procedures from within a C-based function which
> does the logging, math, control logic, and builds the result sets and
> cursors?
Uhh, I'd guess 'no' to that one.
Thanks,
Stephen
ly
efficient way (not doing seq. scans through everything because you're
operating on every row for something). It needs to be a couple
hundred-thousand rows, but it doesn't need to be the full data set, imv.
Thanks,
Stephen
s in your partitioning setup. What is
constraint_exclusion set to? What version of PG is this? Do the
results of this query look at all correct to you?
Have you considered an index on elevation, btw? How many records in
that city table are there and how many are actually in that range?
Thanks,
day > user_stop_year || user_stop) THEN
m.taken BETWEEN (user_start_year || user_start_day)::date AND
    ((user_stop_year || user_stop)::date + '1 year'::interval)::date
-- I don't think you need/want this..?
-- GROUP BY
-- GROUP BY
sure that those constraints are
really the right ones and that they make sense? You're using a bunch of
extract()'s there too, why not just specify a CHECK constraint on the
date ranges which are allowed in the table..?
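Something along these lines is what I have in mind (the partition name
is made up; 'taken' is from your query):
ALTER TABLE measurement_y2010
ADD CONSTRAINT measurement_y2010_taken_check
CHECK (taken >= DATE '2010-01-01' AND taken < DATE '2011-01-01');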
Maybe I've misunderstood the whole point here, but I don't think so.
Thanks,
Stephen
y more complicated).
> As in my previous tests, this is only a testing environment: so all data
> is in memory, no disk activity involved at all, no swap etc.
Yea, system calls still aren't free. I would recommend, if you care
about this query, bumping up your work_mem setting for it. Right now,
PG is using an external sort (meaning- on-disk), but the data set
appears to only be like 50M (49560kB). If you increased work_mem to,
say, 128MB (for this query, maybe or maybe not for the entire system),
it'd be able to do an in-memory sort (or maybe a hash or something else,
if it makes sense), which would be faster.
I'd probably rewrite this as a left-join too, to be honest, but based on
what I'm saying, that'd probably get the same query plan as you had
first anyway (the merge anti-join), so it's probably not necessary. I'd
love to hear how PG performs with work_mem bumped up to something
decent...
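That is, something like (table names made up):
BEGIN;
SET LOCAL work_mem = '128MB'; -- just for this query
-- the left-join form of the anti-join
SELECT a.*
FROM a
LEFT JOIN b ON b.id = a.id
WHERE b.id IS NULL;
COMMIT;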
Thanks,
Stephen
are almost the
> same). What's the best tool to time queries in postgresql? Can this be
> done from pgadmin?
As was mentioned elsewhere, certainly the best tool to test with is your
actual application, if that's possible.. Or at least the language your
application is in.
Thanks,
Stephen
nt is that it's 1.7TB. There are a
few other smaller databases on that system too. PG handles it quite
well, though this is primarily for data-mining.
Thanks,
Stephen
code
to do and do well though, so you really have to consider it carefully
and make sure it will help your important use cases (and not too badly
impact your other use cases) before going that route.
autovacuum is your friend.. though you might need to tune it.
Thanks,
Stephen