n instead just pretend that it's shorter, knowing
that upper levels don't contain useful information.
--
Peter Geoghegan
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
out the B-Tree code, I suggest that you
start by looking at the code for contrib/amcheck.
--
Peter Geoghegan
> some non-C locale on the slower. strcoll() is pretty darn expensive
> compared to strcmp() :-(
strcoll() is very noticeably slower on macOS, too.
--
Peter Geoghegan
this job has been running for years with no issue remotely approaching
> this one. We are also using pgpool.
Did you happen to notice whether this occurred when you upgraded to a new
point release? If so, what version did you move from/to?
--
Peter Geoghegan
se I think it's possible that this is
explained by a regression in 9.5's handling of index bloat, described
here:
http://postgr.es/m/CAH2-Wz=sfakvmv1x9jh19ej8am8tzn9f-yecips9hrrrqss...@mail.gmail.com
I'm trying to track down cases where this could be an issue, to get a
bette
ut going to great lengths. These optimizations are closely
intertwined things, and the lack of clarity on how they all fit
together is probably holding back an implementation of any one of
them.
[1]
https://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#20160524173914
ong here? Is there a way to get Postgres to not do a
> quick sort here?
I would like that too. There is a patch that does what I think you're
describing, but it seems to be in limbo:
https://commitfest.postgresql.org/11/409/
--
Peter Geoghegan
more sophisticated approach, but
it's probably better to focus on duplicate storage or even leaf page
compression, as Stephen mentioned.
--
Peter Geoghegan
arguing for the idea that B-Trees should reliably keep
tuples in order by a tie-break condition, that seems difficult to
implement, and likely not worth it in practice.
--
Peter Geoghegan
I bet that was a factor here.
--
Peter Geoghegan
to help unless
they result in a fully internal sort.
There is evidence that the heap that tuple sorting uses benefits from
*lower* settings. Sometimes as low as 64MB.
We're working to make this better in 9.6.
--
Regards,
Peter Geoghegan
2 Dual, so a rather old and slow box) and I could sort
1E6 rows of 128 random bytes in 5.6 seconds. Even if I kept the first 96
bytes constant (so only the last 32 were random), it took only 21
seconds. Either this CPU is really slow or the data is heavily skewed -
is it possible that all dimension
ace to do the
sort, you're not going to be able to complete it. That much is very
clear.
If you're really worried about these costs, I suggest enabling
trace_sort locally, and monitoring the progress of this sort in the
logs.
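For reference, trace_sort is a developer option; a minimal session sketch (assuming a server built with TRACE_SORT enabled, which is the default, and a placeholder query) might look like:

```sql
-- Developer option; sort progress messages go to the server log at LOG level.
SET trace_sort = on;
SET client_min_messages = log;  -- also echo the messages to this session
-- big_table / some_column are placeholders for your own sort-heavy query:
SELECT * FROM big_table ORDER BY some_column;
```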
--
Regards,
Peter Geoghegan
On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:
> wdsah=> explain analyze select facttablename, columnname, term, concept_id,
> t.hidden, language, register
> from term t where facttablename='facttable_stat_fta4' and
> columnname='einh
e_stat_fta4_warenstrom_idx on
facttable_stat_fta4 f (cost=0.00..2124100.90 rows=21787688 width=2) (actual
time=0.029..0.029 rows=1 loops=3)
Index Cond: ((warenstrom)::text = (t.term)::text)
Total runtime: 0.180 ms
(6 rows)
The estimated number of rows in the outer scan is way more a
On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:
> wdsah=> explain analyze select facttablename, columnname, term, concept_id,
> t.hidden, language, register
> from term t where facttablename='facttable_stat_fta4' and
> columnname='einh
:text))
-> Index Scan using facttable_stat_fta4_berechnungsart_idx on
facttable_stat_fta4 f (cost=0.00..2545748.85 rows=43577940 width=2) (actual
time=0.089..16263.582 rows=21336180 loops=1)
Total runtime: 30948.648 ms
(6 rows)
Over 30 seconds! That's almost 200'000 times slower.
T
be fairly homogeneous in structure within a
table, just like with systems like MongoDB. Strings within arrays are
keys for our purposes, and these are often used for tags and so on.
But Strings that are the key of an object/pair are much less useful to
index, in my estimation.
--
Regards,
Peter Ge
work alright for arrays of "tags",
and things like that. I tend to think that that's a common enough
use-case.
--
Regards,
Peter Geoghegan
'm
suggesting that we give up on the idea of sampling (which would be
crazy).
Streaming algorithms like HyperLogLog are very recent ideas, as these
things go. I wouldn't be all that discouraged by the fact that it
might not have been put to use in this way (for database statistics)
by somebody b
ming of HLL. Believe it or not, the way I use
HLL for estimating cardinality is virtually free. Hashing is really
cheap when the CPU is bottlenecked on memory bandwidth.
If you're interested, download the patch, and enable the debug traces.
You'll see HyperLogLog accurately indicate the
ally important to consider power saving
features in most benchmarks these days, where that might not have been
true a few years ago. The CPU scaling governor can alter the outcome
of many benchmarks quite significantly.
--
Regards,
Peter Geoghegan
're using standard packages, I'm afraid.
--
Regards,
Peter Geoghegan
On Fri, Feb 7, 2014 at 7:36 PM, Huy Nguyen wrote:
> I think pgtune is optimized more for OLTP application. Is there something
> similar to pgtune/pgbench for OLAP?
IIRC pgtune can be told to give out an OLAP-optimized postgresql.conf.
Maybe that's only in recent versions?
--
Reg
s immediately after the queue is
> cleared).
Are you aware of hint bits?
https://wiki.postgresql.org/wiki/Hint_Bits
--
Regards,
Peter Geoghegan
pg_class table
How do I get rid of those other files?
Just a guess, but should I shut down the database and delete any file not
listed in pg_class? I do not see anything in the PostgreSQL documentation
about this.
Thanks again.
On Tue, Jan 28, 2014 at 9:47 AM, Peter Blair wrote:
> Tom,
>
>
Tom,
You are correct. There was an infinite loop created by the
differences in date math between Oracle and Postgres.
Thanks again for your help.
On Mon, Jan 27, 2014 at 7:43 PM, Tom Lane wrote:
> Peter Blair writes:
> > Have a problem where a stored procedure is taking
les in each directory.
Why are there so many files?
Thank you everyone for your time.
Peter Blair
lower and upper bounds for range
types, and the fraction of empty ranges.
--
Regards,
Peter Geoghegan
lized to disk when there's a clean
shutdown. pg_stat_statements is similar to the statistics collector
here.
Why are you posting this to the -performance list?
--
Regards,
Peter Geoghegan
s later
representation (essentially, the internal representation that the
rewriter stage processes).
--
Regards,
Peter Geoghegan
nctional index on length(discussion_post.id)?
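A sketch of such an expression index, using the table and column named in the question (the index name is an assumption; adjust to your schema):

```sql
-- Expression (functional) index on the length of the id column;
-- lets the planner use an index for predicates on length(id).
CREATE INDEX discussion_post_id_length_idx
    ON discussion_post (length(id));
ANALYZE discussion_post;  -- gather statistics, including for the expression
```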
--
Regards,
Peter Geoghegan
the street is that it may be worth pursuing some
of the ideas described by the literature in just the last few years.
I've often thought that this would be an interesting problem to work
on. I haven't had time to pursue it, though. You may wish to propose a
patch on the pgsql-hackers mailing
thin src/backend/commands/analyze.c (IBM
Research Report RJ 10025 is referenced there).
The general advice here is:
1) Increase default_statistics_target for the column.
2) If that doesn't help, consider using the following DDL:
alter table foo alter column bar set ( n_distinct = 5.0);
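Step 1 as DDL, for completeness (same placeholder table foo and column bar as above; the target value 1000 is only an example):

```sql
-- Raise the per-column statistics target, then re-gather statistics;
-- the new target only takes effect at the next ANALYZE.
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 1000;
ANALYZE foo;
```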
--
Pe
On 21 November 2012 15:21, Andrew Dunstan wrote:
> And I continue to think that spelling it "OFFSET 0" is horribly obscure.
I'm not sure that it's any more obscure than the very idea of an
optimisation fence.
--
Peter Geoghegan http://www.2ndQuadrant.com/
Postg
On 21 November 2012 13:04, Heikki Linnakangas wrote:
> Yes, I strongly feel that we should. Writing a query using WITH often makes
> it more readable. It would be a shame if people have to refrain from using
> it, because the planner treats it as an optimization fence.
+1
--
Peter
een telling people to do
that for years, so it's already something that we've effectively
committed to.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
h, particularly if you're not interested in *what* Postgres is
doing with memory, but need to summarise it usefully.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
ndexes costs essentially nothing to maintain, and simply
represent an ongoing obligation for ANALYZE to provide statistics for
an expression?
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
expression into a column, analyzed and grouped by
that.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
On 6 September 2012 04:20, Greg Smith wrote:
> On 08/02/2012 02:02 PM, Peter Geoghegan wrote:
> I dug up what I wrote when trying to provide better advice for this circa
> V8.3. That never really gelled into something worth publishing at the time.
> But I see some similar patter
On 21 August 2012 22:08, Tomas Vondra wrote:
> As others already mentioned, the improvements in pg_stat_statements by
> Peter Geoghegan in 9.2 is the first thing you should look into I guess.
> Especially if you're looking for per-query stats.
If people would like to know about a
This has been reposted to this list from the pgsql-hackers list, at
the request of Josh Berkus. Hopefully there will be more interest
here.
-- Forwarded message --
From: Peter Geoghegan
Date: 29 July 2012 16:39
Subject: Help me develop new commit_delay advice
To: PG Hackers
ly equivalent.
http://www.postgresql.org/docs/current/static/functions-subquery.html
See the notes about NULL under IN.
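The NULL pitfall being referred to, in a minimal form:

```sql
-- 2 NOT IN (1, NULL) evaluates to (2 <> 1) AND (2 <> NULL),
-- i.e. true AND unknown = unknown, so the row is filtered out:
SELECT 1 WHERE 2 NOT IN (1, NULL);  -- returns zero rows
-- NOT EXISTS does not have this problem and is often the better rewrite.
```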
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
a select statement's AccessShare
lock.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
esses are made over an hour, for example.
I'm not sure whether it would be possible to calculate that today in
Postgres. Does anyone have any advice?
Best regards,
Peter
[1]:
http://www.fusionio.com/blog/will-fusionio-make-my-database-faster-percona-guest-blog/
--
Peter van Hardenberg
San Fr
rows inside a WITH statement.
it should probably be fixed, but you should find something like
WITH fn AS (SELECT f() AS r)
SELECT (r).a, (r).b FROM fn;
will make your life better
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
Why do you say that? We've had work_mem happily at 100MB for years. Is
there a particular degenerate case you're concerned about?
-p
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
', hr_timestamp)
2) Create a view on that showing the last value
3) If you want to throw away the data use CREATE TABLE AS on the
results of the view.
You may also want to investigate window functions.
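A sketch of step 2 with a window function; all table and column names here (readings, device_id, hr_timestamp, val) are assumptions, not from the original thread:

```sql
-- View exposing the most recent row per device.
CREATE VIEW last_values AS
SELECT device_id, hr_timestamp, val
FROM (SELECT device_id, hr_timestamp, val,
             row_number() OVER (PARTITION BY device_id
                                ORDER BY hr_timestamp DESC) AS rn
      FROM readings) s
WHERE rn = 1;  -- one row per device: its latest value
```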
-p
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, an
so the first step
>> to performance enhancement is to upgrade. (As a general rule - there are
>> occasionally specific cases where performance decreases.)
>>
> We're using 9.0.6. Peter, how do you feel about upgrading? :)
>
9.1's in beta; we're working on writing
ing; based on your description the lock
contention would be intermittent, so I wouldn't trust an n=1 test.
-p
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
On 12 February 2012 22:28, Peter van Hardenberg wrote:
> Yes, I think if we could normalize, anonymize, and randomly EXPLAIN
> ANALYZE 0.1% of all queries that run on our platform we could look for
> bad choices by the planner. I think the potential here could be quite
> remarkabl
that run on our platform we could look for
bad choices by the planner. I think the potential here could be quite
remarkable.
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
eas, but I'll keep it in the back of my mind
for a while.
If anyone is interested in seriously exploring the idea of researching
query planner accuracy across an enormous fleet of production
databases with the goal of feeding that information back to the
project please feel free to contact m
On Fri, Feb 10, 2012 at 5:40 PM, Josh Berkus wrote:
> Peter,
>
>> We've funded some work by Peter Geoghegan to make pg_stat_statements
>> more useful, but the patch is currently sitting in the commitfest in
>> need of a champion. I'd very much like to see it lan
On Fri, Feb 10, 2012 at 11:32 AM, Josh Berkus wrote:
> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> So one thought would be to add in pg_stat_statements to your platform
> ... something I'd like to see Heroku do anyway. Then you can sample
> this across dozens (or hundreds)
Hmm, perhaps we could usefully aggregate auto_explain output.
On Thu, Feb 9, 2012 at 7:32 AM, Jeff Janes wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>>> Having read the thread, I don't reall
On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>>> That said, I have access to a very large fleet from which I can collect
>>> data so I
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>> That said, I have access to a very large fleet from which I can collect
>> data so I'm all ears for suggestions about how to measure and would
>> gladly
Having read the thread, I don't really see how I could study what a
more principled value would be.
That said, I have access to a very large fleet from which I can collect
data so I'm all ears for suggestions about how to measure and would
gladly share the results with the list.
Pet
Per the thread from last month, I've updated the default
random_page_cost on Heroku Postgres to reduce the expected cost of a
random_page on all new databases.
Thanks to everyone who helped come to this conclusion!
Peter
--
Peter van Hardenberg
San Francisco, California
"Ever
ead about an unsuccessful
attempt to introduce on-disk bitmap indexes to Postgres.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
improved since 2005 - their performance was
improved quite a bit in 9.0. Here's a more recent analysis:
http://www.depesz.com/index.php/2010/06/28/should-you-use-hash-index/
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
sis and have it
work well for all queries.
thanks
Peter
On Fri, Jul 30, 2010 at 7:03 AM, Tom Lane wrote:
> Peter Hussey writes:
> > Using the default of 1MB work_mem, the planner chooses a hash join plan :
> > "Hash Left Join (cost=252641.82..11847353.87 rows=971572 width
fit in the current work_mem, and choose a
low-memory plan instead?
Excuse the long-winded post; I was trying to give the facts and nothing but
the facts.
Thanks,
Peter Hussey
LabKey Software
On tor, 2010-06-17 at 18:28 -0400, Brant Fitzsimmons wrote:
> Performance has dropped through the floor after converting my db from
> ASCII to UTF8.
Converting from ASCII to UTF8 is a no-op.
If you did some configuration changes, you need to tell us which.
> Intel(R) Pentium(R) D CPU 2.80GHz
> 2 GB RAM
> Storage is local disk.
>
> Postgresql parameters (what I felt are relevant) -
> max_connections = 100
> shared_buffers = 64MB
> work_mem = 16MB
> maintenance_work_mem = 16MB
> synchronous_commit on
Do the big tables have
On tis, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:
> I am right now talking to someone on postgresql irc who is measuring
> 15k iops from x25-e and no data loss following power plug test. I am
> becoming increasingly suspicious that peter's results are not
> representative: given that 90% of
that
'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.
I've never seen such a message; this command either runs and finishes, or
goes into an endless loop, as I wrote in my initial e-mail.
On Thu, Oct 29, 2009 at 10:59:48AM -0600, Scott Marlowe wrote:
> On Thu, O
ETAIL: CPU 9.55s/119.02u sec elapsed 226.85 sec.
INFO: scanned index "t" to remove 2795935 row versions
DETAIL: CPU 13.09s/102.84u sec elapsed 194.74 sec.
INFO: "phaseangle": removed 2795935 row versions in 25097 pages
DETAIL: CPU 0.67s/0.28u sec elapsed 41.21 sec.
On Monday 20 July 2009 04:46:53 Robert James wrote:
> I'm storing a lot of words in a database. What's the fastest format for
> finding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and
> WHERE w IN ('m', 'ma'). All characters are lowercase a-z, no punctuation,
> no other alphabets.
On Friday 17 July 2009 16:40:40 Matthew Wakeling wrote:
> I'm considering rewriting a postgres extension (GiST index bioseg) to make
> it use version 1 calling conventions rather than version 0.
>
> Does anyone have any ideas/opinions/statistics on what the performance
> difference is between the t
hey folks !
eventually removing the GROUP BY did improve things, but my concern remains:
why can't we take the result from memory, given it's the same result set?
But I keep pushing the developers to move to memcached so we can overcome
this limitation.
cheers,
Peter
On Mon, Jun 22, 2009 at 5:23 AM
On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf wrote:
> Peter Alban wrote:
>
> *duration: 2533.734 ms statement: *
>
> *SELECT news.url_text,news.title, comments.name, comments.createdate,
> comments.user_id, comments.comment FROM news, comments WHERE comments.c
erations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
# - Other Planner Options -
#default_statistics_target = 10 # range 1-1000*
cheers,
Peter
On Sun, Jun 21, 2009 at 7:42 PM, Robert Haas wrote:
> On Sun, Jun 21,
cache ?
cheers,
Peter
lower !
cheers,
Peter
On Thu, Jun 18, 2009 at 10:01 PM, Kenneth Marshall wrote:
> On Thu, Jun 18, 2009 at 09:42:47PM +0200, Peter Alban wrote:
> > So Ken ,
> >
> > What do you reckon it should be ? What is the rule of thumb here ?
> >
> > cheers,
> > Peter
So Ken,
What do you reckon it should be? What is the rule of thumb here?
cheers,
Peter
On Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall wrote:
> On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:
> > Hi All,
> >
> > We are having a reasonably powerful m
ns = 32768 # min 100, ~70 bytes each
# - Kernel Resource Usage -
max_files_per_process = 4000# min 25
#preload_libraries = ''
any ideas ?
cheers,
Peter
out there for doing this? Does anyone know of
some resource that talks about doing this?
Thanks,
Peter
I did some performance tests (retrieving data from the DB to a .NET
application using the Npgsql driver) and I found that for command type ==
StoredProcedure it is better to first call Prepare(). But for
command type == Text (a direct query), performance is better if I do not
call Prepare(). W
Aidan Van Dyk wrote:
* Joshua D. Drake <[EMAIL PROTECTED]> [081209 11:01]:
Yes, the SmartArray series is quite common and actually known to perform
reasonably well, in RAID 10. You still appear to be trying RAID 5.
*boggle*
Are people *still* using raid5?
/me gives up!
What do you sugges
s, I am more interested in
understanding better what is happening and having better indications
of when backends block on I/O, than necessarily having a proven
improvement in throughput or latency. It makes it easier to reason
about what is happening when you *do* have a measured performanc
during battery capacity tests. If you care about
consistently/deterministically having full performance (with
write-back, battery-protected caching), you probably want to confirm
your controller's behavior here.
(I've seen this on at least LSI-based controllers in Dell 2950s, and
also on some 3w
rite-outs).
On a system where you really want to keep backend writes to exactly 0
under normal circumstances (discounting vacuuming), and having a large
buffer cache (say the one gig), it might be nice to be able to say "ok
- I have 1 GB of buffer cache. for the purpose of the JIT algorithm,
ple
p://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
pgpq5Wyr2Jr4u.pgp
Description: PGP signature
mthing you need. (Oh I think checkpoints
might come into this as well but I'm not sure how)
Or at least that's my understanding...
So if your base backup takes a while I would advise running vacuum
afterwards. But then if you're running autovacuum there is probably very
little need to worry.
Peter Childs
2008/10/3 Peter Eisentraut <[EMAIL PROTECTED]>:
> Peter Childs wrote:
>>
>> I have a problem where by an insert on a "large" table will sometimes
>> take longer than usual.
>
>> I think the problem might have something to do with checkpoints,
>
>
Peter Childs wrote:
I have a problem where by an insert on a "large" table will sometimes
take longer than usual.
I think the problem might have something to do with checkpoints,
Then show us your checkpointing-related parameters. Or try to set them
to a lot higher values so c
away after a longer insert and not found loads of
space in the fsm.
I'm using 8.3.1 (I thought I'd upgraded to 8.3.3 but it does not look
like the upgrade worked) I'm more than happy to upgrade just have to
find the down time (even a few seconds can be difficult)
Any help would be
Axel Rau wrote:
Im not yet convinced to switch to non-C locale. Is the following
intended behavior:
With lc_ctype C: select lower('ÄÖÜ'); => ÄÖÜ
With lc_ctype en_US.utf8 select lower('ÆÅË'); => æåë
? (Both have server encoding UTF8)
I would expect exactly that.
Axel Rau wrote:
some erp software requires a change of my pgsql cluster from
locale C, encoding UTF-8
to
locale de_DE.UTF-8, encoding UTF-8
Most of my databases have only ASCII text data (8 bit UTF8 code range)
in the text columns.
Does the above change influence index performa
rformance of the
queries because the server is caching the queries and dblink is using the same
cached query results as well. Can you 'flush' the results or prevent the
results from being cached for being reused? Is Explain Analyze really 'stable'
for comparing purposes?
Th
6-7 MB/second would be fairly consistent with INSERT/COPY
operations being CPU bound on a modern CPU, in my experience. It may
be that this is entirely untrue in your case, but it sounds like a
reasonable thing to at least consider.
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schull
t.
Also, NOW() is equivalent to CURRENT_TIMESTAMP rather than
CURRENT_DATE. Perhaps the date vs. timestamp difference has some
implication for how the query is planned.
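The distinction can be checked directly:

```sql
SELECT now() = current_timestamp;   -- true: both are the transaction timestamp
SELECT now()::date = current_date;  -- true: CURRENT_DATE is just the date part
-- A predicate comparing a timestamp column to a date constant may be planned
-- differently from one comparing timestamp to timestamp.
```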
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Ma
oller.
Is there currently a way of dumping such information? I.e., asking PG
"what are backends waiting on right now?".
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL
Dan Harris wrote:
> My desire would be to have a parallel server that could act as a hot
> standby system with automatic fail over in a multi-master role.
I will add my "me too" for DRBD + Heartbeat.
Tom, for the very insightful
discussion!
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
pgppa59ys6aP0.pgp
Description: PGP signature
short periods of time
even for things that are officially declared non-blocking; the
question is whether this falls into this category.)
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [
informative response.
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
pgpUjtABeKgmx.pgp
Description: PGP signature