If you run your benchmarks for more than a few minutes, I highly
recommend enabling sysstat service data collection; then you can look
at it after the fact with sar. Very useful stuff, both for
benchmarking and for post-mortem analysis on live servers.
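For example, on Debian/Ubuntu (paths and service names may differ on other
distributions), enabling collection looks roughly like this:

  # set ENABLED="true" in /etc/default/sysstat, then:
  sudo service sysstat restart
  # later, review the day's CPU utilization and load-average history:
  sar -u
  sar -q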
On Thu, Feb 14, 2013 at 9:32 PM, Dan Kogan wrote:
> Yes, we are seeing higher system % on the CPU.
Yes, we are seeing higher system % on the CPU; not sure how to quantify it in
terms of % right now - will check into that tomorrow.
We were not checking the context-switch numbers during our benchmark; we will
check that tomorrow as well.
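(For a quick check: vmstat 1 reports context switches per second in the "cs"
column and system CPU under "sy", and once sysstat is collecting, sar -w shows
the historical context-switch rate.)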
We used a scale factor of 3600.
Yeah, maybe other people see a similar load average; we were not sure.
However, we saw a clear difference right after the upgrade.
We are trying to determine whether it makes sense for us to go back to 11.04
or whether there is something here we are missing.
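(For context: pgbench initializes 100,000 pgbench_accounts rows per scale
unit, so -s 3600 is 360 million rows - on the order of 50 GB of data.)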
On Thu, Feb 14, 2013 at 7:35 AM, Nicolas Charles
wrote:
>
> It contains 11018592 entries, with the following patterns:
> 108492 distinct executiontimestamp
> 14 distinct nodeid
> 59 distinct directiveid
> 26 distinct ruleid
> 35 distinct serial
How many entries fall within a typical query interval?
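To put a number on that, something along these lines (the log table's name is
not shown in the thread, so "eventlog" below is a placeholder):

  SELECT count(*)
  FROM eventlog  -- placeholder for the actual log table
  WHERE executiontimestamp >= '2013-02-14 00:00'
    AND executiontimestamp <  '2013-02-14 01:00';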
On 02/13/2013 05:30 PM, Dan Kogan wrote:
> Just to be clear - I was describing the current situation in production.
>
> We were running pgbench on different Ubuntu versions today. I don't have a
> 12.04 setup at the moment, but I do have 12.10, which seems to be performing
> about the same as
On 2013-02-14 16:35, Nicolas Charles wrote:
I'm crunching the data by looking, for each nodeid/ruleid/directiveid/serial,
at the rows with an executiontimestamp in an interval:
explain analyze select executiondate, nodeid, ruleid, directiveid, serial, component, keyValue,
executionTimeStamp, eventtype,
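For that access pattern, a composite index with the equality columns first and
the range column last is the usual fit - a sketch, with "eventlog" again
standing in for the actual table name:

  -- equality columns first, range column last
  CREATE INDEX eventlog_lookup_idx
      ON eventlog (nodeid, ruleid, directiveid, serial, executiontimestamp);

That ordering lets the scan land on one contiguous run of executiontimestamp
values per nodeid/ruleid/directiveid/serial combination.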
Thanks for the info.
Our application does have a lot of concurrency. We checked the zone reclaim
parameter and it is turned off (that was the default; we did not have to
change it).
Dan
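(For anyone checking the same thing: cat /proc/sys/vm/zone_reclaim_mode, where
0 means off; on multi-socket NUMA machines a value of 1 is a well-known cause
of unexplained system-CPU overhead with PostgreSQL.)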
Hello,
I've been struggling for several days to understand what's happening with my
database and queries, and I'm turning to higher minds for a logical answer.
I'm dealing with a fairly large database containing log information that I
crunch to get data out of, with several indexes on it
On Tue, Feb 12, 2013 at 11:25 AM, Dan Kogan wrote:
> Hello,
>
>
>
> We upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately
> observed increased CPU usage and significantly higher load average on our
> database server.
>
> At the time we were on Postgres 9.0.5. We decided to upgrade
Are the duplicates evenly distributed? You might have started on a big chunk
of dupes.
I'd go about this by loading my new data into a new table, removing the dupes,
then inserting what remains into the old table. That way you have more
granular information about the process. And you can do
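A minimal sketch of that staging approach, using the postalcodes table from
the original post (the input file path is made up):

  -- scratch table with the same columns but no PK, so dirty input loads cleanly
  CREATE TEMP TABLE staging (LIKE postalcodes);
  COPY staging FROM '/tmp/new_postalcodes.csv' CSV;  -- hypothetical input file

  -- drop rows already present in the target (its PK is place_id, code)
  DELETE FROM staging s
  USING postalcodes p
  WHERE p.place_id = s.place_id
    AND p.code = s.code;

  -- whatever remains is new; DISTINCT folds duplicates within the input itself
  INSERT INTO postalcodes (place_id, code)
  SELECT DISTINCT place_id, code FROM staging;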
On Thu, Feb 14, 2013 at 3:08 AM, Heikki Linnakangas wrote:
> On 14.02.2013 12:49, Tory M Blue wrote:
>
>> My postgres db ran out of space. I have 27028 files in the pg_xlog
>> directory. I'm unclear what happened; this has been running flawlessly for
>> years. I do have archiving turned on and run an archive command every 10 minutes.
Tory M Blue wrote:
> My postgres db ran out of space. I have 27028 files in the pg_xlog directory.
> I'm unclear what happened; this has been running flawlessly for years.
> I do have archiving turned on and run an archive command every 10 minutes.
>
> I'm not sure how to go about cleaning this up
On Thu, Feb 14, 2013 at 3:01 AM, Ian Lawrence Barwick wrote:
> 2013/2/14 Tory M Blue
>
>> My postgres db ran out of space. I have 27028 files in the pg_xlog
>> directory. I'm unclear what happened; this has been running flawlessly for
>> years. I do have archiving turned on and run an archive command every 10 minutes.
My postgres db ran out of space. I have 27028 files in the pg_xlog
directory. I'm unclear what happened; this has been running flawlessly for
years. I do have archiving turned on and run an archive command every 10
minutes.
I'm not sure how to go about cleaning this up. I got the DB back up, but I've
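When archiving is enabled, the usual cause of pg_xlog filling up is a failing
archive_command - PostgreSQL keeps every WAL segment until it has been
archived successfully, so the server log is the first place to look. A quick
sketch for reviewing the relevant settings (names as of the 9.x releases):

  SELECT name, setting
  FROM pg_settings
  WHERE name IN ('archive_mode', 'archive_command',
                 'checkpoint_segments', 'wal_keep_segments');

And the standard warning: never delete files from pg_xlog by hand; fix the
archiving and let checkpoints clean up.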
Hi everybody!
I'm new to the mailing list, and I have a little question.
The tables are:
postalcodes (place_id, code), PK(place_id, code), 600K rows
places (id, name), PK(id), INDEX(name), 3M rows
I have to insert another 600K rows into the postalcodes table, in a single
transaction, omitting duplicates
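A sketch of one way to do that in a single statement on the 9.x releases
current at the time (ON CONFLICT did not exist yet), assuming the new rows
have already been loaded into a table called new_postalcodes:

  INSERT INTO postalcodes (place_id, code)
  SELECT DISTINCT n.place_id, n.code
  FROM new_postalcodes n            -- hypothetical table holding the 600K new rows
  WHERE NOT EXISTS (
      SELECT 1
      FROM postalcodes p
      WHERE p.place_id = n.place_id
        AND p.code = n.code
  );

DISTINCT folds duplicates within the new data itself; NOT EXISTS skips rows
already present, and the (place_id, code) primary key index keeps the probe
cheap.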