>> dbms/ticker-9.5  compression  lz4         inherited from dbms
>> dbms/ticker-9.5  atime        off         inherited from dbms
>> dbms/ticker-9.5  logbias      throughput  inherited from dbms
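For reference, the quoted properties can be set with stock ZFS commands. A minimal sketch, assuming the dataset layout in the quote (a `dbms` parent with a `dbms/ticker-9.5` child holding the Postgres data); the 128k recordsize reflects the benchmarking discussed later in this thread and is a choice, not a given:

```shell
# Set once on the parent; child datasets inherit unless overridden.
zfs set compression=lz4 dbms
zfs set atime=off dbms
zfs set logbias=throughput dbms

# Larger records improved compression ratio (and throughput) on this workload.
zfs set recordsize=128k dbms/ticker-9.5

# Verify what the Postgres dataset actually ends up with.
zfs get compression,atime,logbias,recordsize dbms/ticker-9.5
```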
On 9/27/2016 23:06, Jov wrote:
>
> +1
> A larger record size can increase the compression ratio, and so reduce the I/O.
>
> Did you set atime off for zfs?
>
>
> On Sep 28, 2016, 6:16 AM, "Karl Denninger" <mailto:k...@denninger.net>> wrote:
>
> On 9/27/2016 16:38,
tter value. But this would not be far too high. So
>> please can you explain this?
>
> Because it's not a global value, but an estimate of how much RAM is
> available as a cache for a single query. So if you're running 10
> queries at the same time, they'll have to
ething?
Yep -- Murphy. And he's a bastard.
--
Karl Denninger
k...@denninger.net <mailto:k...@denninger.net>
/The Market Ticker/
/[S/MIME encrypted email preferred]/
smime.p7s
Description: S/MIME Cryptographic Signature
uld look seriously into getting rid of the always-open requirement
> for connections.
I'm trying to figure out where the "always open" requirement comes from;
there are very, very few instances where that's real, when you get down
to it.
>> smartctl -a display of one of
>> them on an in-service machine here:
>>
>> 175 Power_Loss_Cap_Test  0x0033  100  100  010  Pre-fail  Always  -  643 (4 6868)
>
>
Incidentally, while there are people who have questioned the 730 series
power loss protection I've tested it with plug-pulls and in addition it
watchdogs its internal power loss capacitors -- from the smartctl -a
display of one of them on an in-service machine here:
175 Power_Loss_Cap_Test 0x0033
with rotating
disks and SSDs.
This is workload dependent in my experience but in the applications we
put Postgres to there is a very material improvement in throughput using
compression and the larger blocksize, which is counter-intuitive and
also opposite the "conventional wisdom."
For best throughput we use mirrored vdev sets.
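A sketch of the mirrored-vdev layout being described; device names are placeholders:

```shell
# Stripe across two mirrored pairs: reads fan out over all four disks,
# writes go to both sides of each mirror.
zpool create dbms \
    mirror /dev/da0 /dev/da1 \
    mirror /dev/da2 /dev/da3

zpool status dbms
```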
On 4/29/2014 3:13 AM, Albe Laurenz wrote:
Karl Denninger wrote:
I've been doing a bit of benchmarking and real-world performance
testing, and have found some curious results.
[...]
The odd thing is that I am getting better performance with a 128k record
size on this application than
On 4/28/2014 1:26 PM, Jeff Janes wrote:
On Mon, Apr 28, 2014 at 11:07 AM, Karl Denninger <mailto:k...@denninger.net>> wrote:
Isn't WAL essentially sequential writes during normal operation?
Only if you have some sort of non-volatile intermediary, or are
willing to
On 4/28/2014 1:22 PM, Heikki Linnakangas wrote:
On 04/28/2014 09:07 PM, Karl Denninger wrote:
The WAL is fsync'd frequently. My guess is that that causes a lot of
extra work to repeatedly recompress the same data, or something like
that.
It shouldn't as ZFS re-writes on change,
On 4/28/2014 1:04 PM, Heikki Linnakangas wrote:
On 04/28/2014 06:47 PM, Karl Denninger wrote:
What I am curious about, however, is the xlog -- that appears to suffer
pretty badly from 128k record size, although it compresses even
more-materially; 1.94x (!)
The files in the xlog directory are
I've been doing a bit of benchmarking and real-world performance
testing, and have found some curious results.
The load in question is a fairly-busy machine hosting a web service that
uses Postgresql as its back end.
"Conventional Wisdom" is that you want to run an 8k record size to match
Postgres' page size.
s without the complexity of a BBU RAID and
> multiple spinning disks...
>
> -Dave
Two is one, one is none.
:-)
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
y're expensive!) SSD drives have said protection. If
yours does not the only SAFE option is as I described up above using
them as ZFS cache devices.
much faster than using pg_basebackup (by a factor of
four or more at my installation!) -- it appears that the latter does not
effectively use compression of the data stream even if your SSL config
is in use and would normally use it; rsync used with the "z" option does
use it and very effectively
I was looking to hack in zlib support and saw that support appears to be
already present, provided SSL connection security is enabled.
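A hedged sketch of the rsync approach; host and paths are assumptions, and the usual pg_start_backup()/pg_stop_backup() bracketing (or a stopped standby) still applies:

```shell
# -z compresses the stream in transit -- the source of the factor-of-four
# win over pg_basebackup reported above.
rsync -avz --delete \
    standby:/usr/local/pgsql/data/ \
    /backup/pgsql/data/
```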
On 10/19/2012 10:02 AM, Claudio Freire wrote:
> On Fri, Oct 19, 2012 at 11:44 AM, Karl Denninger wrote:
>> On 10/18/2012 5:21 PM, delongboy wrote:
>>
>> I have replication set up on servers with 9.1 and want to upgrade to 9.2
>> I was hoping I could just bring them b
which, incidentally,
it would be really nice if Postgres supported internally, and it could
quite easily -- I've considered working up a patch set for this and
submitting it.)
For really BIG databases (as opposed to moderately-big) this could be a
much-more material problem than it is for m
al).
>
> This query runs FOREVER, while if I delete "limit 1" it runs
> instantly
>
> Any help?
>
> Regards.
>
n 9.2; I have asked a similar question about profiling actual queries
in the past and basically it came down to "turn on explain or run a
separate explain yourself since the app knows what's similar and what's
not", which of course has hideous performance implications (a
On 3/16/2012 9:53 AM, Tom Lane wrote:
> Karl Denninger writes:
>> I am trying to continue profiling which in turn feeds query and index
>> tuning changes for the AKCS-WWW forum software, and appear to have no
>> good way to do what I need to do -- or I've missed something.
he time and raise alerts when it
sees "outliers" giving me a continually-improving set of targets for
reduction of resource consumption (up until I reach the point where I
don't seem to be able to make it any faster of course :-))
Thanks in advance!
Scott Carey wrote:
>
> On Aug 10, 2010, at 11:38 AM, Karl Denninger wrote:
>
>>
>> Most people who will do this won't reload it after a crash. They'll
>> "inspect" the database and say "ok", and put it back online. Bad
Brad Nicholson wrote:
> On 8/10/2010 2:38 PM, Karl Denninger wrote:
>> Scott Marlowe wrote:
>>> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
>>>
>>>> ANY disk that says "write is complete" when it really is not is entirely
Scott Marlowe wrote:
> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
>
>> ANY disk that says "write is complete" when it really is not is entirely
>> unsuitable for ANY real database use. It is simply a matter of time
>>
>
> What about
Brad Nicholson wrote:
> On 8/10/2010 12:21 PM, Greg Smith wrote:
>> Scott Carey wrote:
>>> Also, the amount of data at risk in a power loss varies between
>>> drives. For Intel's drives, its a small chunk of data ( < 256K).
>>> For some other drives, the cache can be over 30MB of outstanding
>>>
6700 tps?! Wow..
Ok, I'm impressed. May wait a bit for prices to come down somewhat, but that
sounds like two of those are going into one of my production machines
(Raid 1, of course)
Yeb Havinga wrote:
> Greg Smith wrote:
>> Greg Smith wrote:
>>> Note that not all of the Sandforce drives include a
Justin Graf wrote:
> On 6/22/2010 4:31 AM, Grzegorz Jaśkiewicz wrote:
>
>> Would moving WAL dir to separate disk help potentially ?
>>
>>
>
> Yes it can have a big impact.
WAL on a separate spindle will make a HUGE difference in performance.
TPS rates frequently double OR BETTER with WAL on its own spindle.
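A minimal sketch of moving the WAL to its own spindle; paths are assumptions, the directory is pg_xlog on the 8.x/9.x releases discussed here (pg_wal on 10+), and the server must be stopped first:

```shell
PGDATA=/usr/local/pgsql/data   # assumed data directory
WALDIR=/walssd/pg_xlog         # directory on the dedicated spindle

mv "$PGDATA/pg_xlog" "$WALDIR"
ln -s "$WALDIR" "$PGDATA/pg_xlog"
```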
I have a message posted in pgsql-general that outlines what I thought
was an indexing problem - it's not, so I'm bringing it here.
I dumped the table from our production system and stuffed it into a test
machine, then started refining and playing with the query until I was
able to get it to the "d
list) the system load went bananas immediately and forced me to back it
out. When I re-wrote the query as above the performance was (and
remains) fine.
I'm running 8.4.2.
I agree (in advance) it shouldn't trash performance - all I know is that
it does and forced me to re-write
"Exists" can be quite slow. So can "not exists"
See if you can re-write it using a sub-select - just replace the
"exists" with "(select ...) is not null"
Surprisingly this often results in a MUCH better query plan under
Postgresql. Why the planner evaluates it "better" eludes me (it
should
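A minimal sketch of the rewrite being suggested; table and column names are hypothetical:

```sql
-- Original form:
SELECT p.id, p.subject
  FROM post p
 WHERE EXISTS (SELECT 1 FROM flag f WHERE f.post_id = p.id);

-- Rewrite: a scalar sub-select tested against NULL.
SELECT p.id, p.subject
  FROM post p
 WHERE (SELECT f.post_id FROM flag f
         WHERE f.post_id = p.id LIMIT 1) IS NOT NULL;
```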
Greg Smith wrote:
> Craig James wrote:
>> ... and do I hear you saying that no other vendor is worth
>> considering? Just how far off are they?
> I wasn't trying to summarize every possible possibility, just the
> complicated ones there's some debate over.
>
> What else is OK besides Areca and 3ware
Greg Smith wrote:
> Scott Carey wrote:
>> 9650 was made by 3Ware, essentially a PCIe version of the 9550. The
>> 9690SA
>> was from some sort of acquisition/merger. They are not the same
>> product line
>> at all.
>>
> 3ware became a division of AMCC, which was then bought by LSI. The
> 9590SA
Scott Carey wrote:
> On 12/1/09 6:49 PM, "Greg Smith" wrote:
>
>
>> Scott Carey wrote:
>>
>>> 3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and
>>> PERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.
>>> Haven't tried raid 6 or 5. Haven't tried
Scott Carey wrote:
> On 11/24/09 11:13 AM, "Scott Marlowe" wrote:
>
>
>
>> They get good reviews as well. Both manufacturers have their "star"
>> performers, and their "utility" or work group class controllers. For
>> what you're doing the areca 12xx or 3ware 95xx series should do fine.
>>
Greg Smith wrote:
> Scott Carey wrote:
>> For your database DATA disks, leaving the write cache on is 100%
>> acceptable,
>> even with power loss, and without a RAID controller. And even in
>> high write
>> environments.
>>
>> That is what the XLOG is for, isn't it? That is where this behavior is
Dave Crooke wrote:
> My reply about server failure was shwoing what could go wrong at the
> server level assuming a first-class, properly run data center, with
> fully redundant power, including a server with dual power supplies on
> separate cords fed by separate UPS'es etc.
Never had a mothe
Greg Smith wrote:
> Robert Schnabel wrote:
>> Nope. Forgive my ignorance but isn't that what a UPS is for anyway?
>> Along with a BBU controller.
> If you have a UPS *and* a BBU controller, then things are much
> better--those should have a write cache that insulates you from the
> worst of the p
Greg Smith wrote:
> Karl Denninger wrote:
>> If power is "unexpectedly" removed from the system, this is true. But
>> the caches on the SSD controllers are BUFFERS. An operating system
>> crash does not disrupt the data in them or cause corruption. An
>>
The gap is widening, not shrinking.
Again, it depends.
With the write cache off on these disks they still are huge wins for
very-heavy-read applications, which many are. The issue is (as always)
operation mix - if you do a lot of inserts and updates then you suffer,
but a lot of database applications
Laszlo Nagy wrote:
> Hello,
>
> I'm about to buy SSD drive(s) for a database. For decision making, I
> used this tech report:
>
> http://techreport.com/articles.x/16255/9
> http://techreport.com/articles.x/16255/10
>
> Here are my concerns:
>
>* I need at least 32GB disk space. So DRAM based SSD
round Pl/PGSQL. All is stable and fast.
> >
> > Greetings from Germany,
> > Torsten
>
>
> On Sat, Oct 17, 2009 at 8:55 PM, Karl Denninger <mailto:k...@denninger.net>> wrote:
>
> I am running all my production work on 8.4 at this point; no
> problems of
>
Torsten Zühlsdorff wrote:
> Tory M Blue schrieb:
>
>> Any issues, has it baked long enough, is it time for us 8.3 folks to
>> deal
>> with the pain and upgrade?
>
> I've upgraded all my databases to 8.4. The pain was not so big; the
> new -j Parameter from pg_restore is fantastic. I really like th
Scott Marlowe wrote:
> On Tue, Oct 6, 2009 at 8:26 AM, Scott Marlowe wrote:
>
>> On Tue, Oct 6, 2009 at 7:21 AM, Nikolas Everett wrote:
>>
>>> If my un-word wrapping is correct you're running ~90% user cpu. Yikes. Could
>>> you get away with fewer disks for this kind of thing?
>>>
Axel Rau wrote:
>
> Am 05.10.2009 um 23:44 schrieb Karl Denninger:
>
>> Turn on softupdates. Fsck is deferred and the system comes up almost
>> instantly even with TB-sized partitions; the fsck then cleans up the
>> cruft.
> Last time I checked, there was an issue
Axel Rau wrote:
>
> Am 05.10.2009 um 20:06 schrieb Karl Denninger:
>
>> gjournal, no. ZFS has potential stability issues - I am VERY interested
>> in it when those are resolved. It looks good on a test platform but I'm
>> unwilling to run it in production; there
Scott Carey wrote:
> On 10/5/09 11:15 AM, "Karl Denninger" wrote:
>
>
>> I'm running the 9650s in most of my "busier" machines. Haven't tried a
>> PERC card yet - its on my list. Most of my stuff is configured as RAID
>> 1 although I
Scott Carey wrote:
>
> On 10/5/09 10:27 AM, "Karl Denninger" wrote:
>
>
>> I don't run the 3x series 3ware boards. If I recall correctly they're not
>> true coprocessor boards and rely on the host CPU. Those are always going to
>> be a lose c
Axel Rau wrote:
> Am 05.10.2009 um 19:42 schrieb Karl Denninger:
>
>> I have not yet benchmarked FreeBSD 8.x - my production systems are
>> all on FreeBSD 7.x at present. The improvement going there from 6.x
>> was MASSIVE. 8.x is on my plate to start playing with in the
Claus Guttesen wrote:
>> However, I have certainly seen some inefficiencies with Linux and large use
>> of shared memory -- and I wouldn't be surprised if these problems don't
>> exist on FreeBSD or OpenSolaris.
>>
>
> This came on the freebsd-performance-list a few days ago.
>
> http://docs.
Scott Carey wrote:
> On 10/3/09 7:35 PM, "Karl Denninger" wrote:
>
>
>> I am a particular fan of FreeBSD, and in some benchmarking I did between it
>> and CentOS FreeBSD 7.x literally wiped the floor with the CentOS release I
>> tried on IDENTICAL hardware
Denis Lussier wrote:
> I'm a BSD license fan, but, I don't know much about *BSD otherwise
> (except that many advocates say it runs PG very nicely).
>
> On the Linux side, unless you're a dweeb, go with a newer, popular &
> well supported release for Production. IMHO, that's RHEL 5.x or
> CentOS 5.x
Dave Dutcher wrote:
>> From: Shiva Raman
>> Subject: Re: [PERFORM] High CPU load on Postgres Server during Peak
>>
> not explicitly committed.
>
>> We have started updating the code on this.
>>
>
> You need a COMMIT for every BEGIN. If you just run a SELECT statement
> without first
Andy Colson wrote:
> Shiva Raman wrote:
>> Hi
>>
>> Today the load observed very high load . I am pasting the top.
>>
>> TOP
>> top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84,
>> 33.13, 25.84
>> Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie
>> Cpu(s): 49
Fernando Hevia wrote:
User Access
Total Number of Users is 500
Maximum number of Concurrent users will be 500 during peak time
Off Peak time the maximum number of concurrent user will be
around 150 to 200.
>>> A connection pooler like pgpool or pgbouncer
Bruce Momjian wrote:
> Karl Denninger wrote:
>
>>> Yes. In addition, functions that are part of expression indexes do get
>>> their own optimizer statistics, so it does allow you to get optimizer
>>> stats for your test without having to use booleans.
>>
Bruce Momjian wrote:
> Alvaro Herrera wrote:
>
>> Karl Denninger escribió:
>>
>>
>>> The individual boolean fields don't kill me and in terms of some of the
>>> application issues they're actually rather easy to code for.
>>>
Robert Haas wrote:
> On Mon, Sep 7, 2009 at 8:51 PM, Karl Denninger wrote:
>
>> Robert Haas wrote:
>>
>> On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger wrote:
>>
>>
>> There was a previous thread and I referenced it. I don't have the other one
Robert Haas wrote:
> On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger wrote:
>
>> There was a previous thread and I referenced it. I don't have the other one
>> in my email system any more to follow up to it.
>>
>> I give up; the attack-dog crowd ha
There was a previous thread and I referenced it. I don't have the other
one in my email system any more to follow up to it.
I give up; the attack-dog crowd has successfully driven me off. Ciao.
Alvaro Herrera wrote:
> Karl Denninger escribió:
>
>> Tom Lane wrote:
>>
Tom Lane wrote:
> Karl Denninger writes:
>
>> Tom Lane wrote:
>>
>>> In that case you'd be wasting your time to get it to use an index
>>> for the condition anyway. Maybe you need to take a step back and
>>> look at the query as a whole
Tom Lane wrote:
> Karl Denninger writes:
>
>> That doesn't help in this case as the returned set will typically be
>> quite large, with the condition typically being valid on anywhere from
>> 10-80% of the returned tuples.
>>
>
> In that case you
Tom Lane wrote:
> Merlin Moncure writes:
>
>> If you are only interested in one or a very small number of cases of
>> 'permission', you can use an expression index to target constant
>> values:
>>
>
>
>> "select ... from where .. and (permission & mask = permission)"
>>
>
The question is whether the above statement will be reasonably planned
if "mask" is a bit type.
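A sketch of the expression-index idea under discussion, with a hypothetical integer `mask` column and a constant permission value of 4; whether the same works when `mask` is a bit type is the open question above:

```sql
-- Expression index targeting one constant permission value.
CREATE INDEX users_can_post ON users (((4 & mask) = 4));

-- A query repeating the exact indexed expression can use it.
SELECT login FROM users WHERE (4 & mask) = 4;
```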
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Andy Colson wrote:
> Phoenix Kiula wrote:
>> I'm on a CentOS 5 OS 64 bit, latest kernel and all of that.
>> PG version is 8.3.7, compiled as 64bit.
>> The memory is 8GB.
>> It's a 2 x Dual Core Intel 5310.
>> Hard disks are Raid 1, SCSI 15 rpm.
>>
>> The server is running just one website. So there
Tom Lane wrote:
> Karl Denninger writes:
>
>> The problem appears to lie in the "nested loop", and I don't understand
>> why that's happening.
>>
> It looks to me like there are several issues here.
>
> One is the drastic underestim
Kevin Grittner wrote:
> Karl Denninger wrote:
>
>>-> Index Scan using forum_name on forum
>> (cost=0.00..250.63 rows=1 width=271) (actual time=0.013..0.408
>> rows=63 loops=1)
>> Filter: (((contrib IS NULL) OR (contrib =
open | text |
Indexes:
"forum_pkey" PRIMARY KEY, btree (name)
"forum_name" UNIQUE, btree (name)
"forum_order" UNIQUE, btree (forum_order)
Triggers:
_tickerforum_logtrigger AFTER INSERT OR DELETE OR UPDATE ON
Let's take the following EXPLAIN results:
ticker=# explain select * from post, forum where forum.name = post.forum
and invisible <> 1 and to_tsvector('english', message) @@
to_tsquery('violence') order by modified desc limit 100;
) OR (2) have no match at all. (The first statement matches
these other two cases)
Any ideas? (It's ok if the queries are separate; in other words, it's
cool if I have to execute two or even three queries and get the results
separately - in fact, that might be preferable in some circumstances)
Looks like that was the problem - got a day under the belt now with the
8.2.4 rev and all is back to normal.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Aha!
BIG difference. I won't know for sure until the biz day tomorrow but
the "first blush" look is that it makes a HUGE difference in system
load, and I no longer have the stats collector process on the top of the
"top" list..
Karl Denninger
Hm, now that's interesting. The stats collector IS accumulating
quite a bit of runtime. Methinks it's time to go grab 8.2.4.
Merlin Moncure wrote:
On 7/25/07, Karl Denninger <[EMAIL PROTECTED]> wrote:
Yeah,
fork/exec inside the portmaster, in the setup internally in there, in
the IPC between my process via libPQ, etc - and how I can profile what's
going on.
Tom Lane wrote:
Karl Denninger <[EMAIL PROTECTED]> writes:
But
at the problem I'd like to
know I'm not just wasting money.
The application uses the "C" language interface and just calls
"Connectdb" - the only parameter is the dbname, so it should be
defaulting to the local socket. It appears that this is the case.
The syste
On Sat, Feb 07, 2004 at 01:51:54AM -0500, Tom Lane wrote:
> Karl Denninger <[EMAIL PROTECTED]> writes:
> > akcs=> explain analyze select forum, (replied > (select lastview from forumlog
> > where forumlog.login='genesis' and forumlog.forum='General
059..0.089 rows=5 loops=1)
Index Cond: ((forum = 'General'::text) AND (toppost = 1))
-> Materialize (cost=1.15..1.20 rows=5 width=8) (actual time=0.013..0.046
rows=5 loops=5)
-> Seq Scan on forumlog (cost=0.00..1.15 rows=5 width=8) (actual
-> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8) (actual
time=0.015..0.027 rows=1 loops=5)
Filter: ((login = 'genesis'::text) AND (forum = 'General'::text) AND
(number = $0))
Total runtime: 0.915 ms
(8 rows)
(8 rows)
Why is the subplan using a sequential scan? At minimum the index on the
post number ("forumlog_number") should be used, no? What would be even
better would be a set of indices that allow at least two (or even all three)
of the keys in the inside SELECT to be used.
What am I missing?
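One hedged possibility for the "set of indices" wished for above: a single composite index covering all three keys used by the inner SELECT, so the subplan can do an index scan instead of a sequential scan:

```sql
-- Hypothetical: matches the (login, forum, number) filter shown in the
-- quoted plan.
CREATE INDEX forumlog_login_forum_number
    ON forumlog (login, forum, number);
```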