>
> - disk cache settings (EnDskCache; for SSDs it should be on, or you're
> going to lose 90% of your performance)
>
Disk cache is enabled; I know disabling it has a huge performance impact.
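For reference, on LSI MegaRAID-based controllers such as the H710p the logical-drive disk cache policy can usually be checked and set with something like the following (the exact MegaCli syntax varies by version, so treat this as a sketch):

  MegaCli -LDGetProp DskCache -LAll -aAll     # show the current disk cache policy
  MegaCli -LDSetProp EnDskCache -LAll -aAll   # enable the drives' own cache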
> - OS settings e.g.
>
> echo noop > /sys/block/sda/queue/scheduler
> echo 975 > /sys/block/sda/queue/nr_requests
>
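Worth noting: those sysfs writes don't survive a reboot. On CentOS 6 the scheduler can instead be pinned at boot via the kernel command line (an example, not from the original message):

  # append to the kernel line in /etc/grub.conf
  elevator=noop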
On Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:
> That is interesting: I've done some testing on this type of card with 16
> (slightly faster Hitachi) SSD attached. Setting WT and NORA should enable
> the so-called 'fastpath' mode for the card [1]. We saw per
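For anyone wanting to try the same thing, WT and NORA are typically set per logical drive with something like this (again, MegaCli syntax varies by version; a sketch, not from the original message):

  MegaCli -LDSetProp WT -LAll -aAll     # write-through
  MegaCli -LDSetProp NORA -LAll -aAll   # no read-ahead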
I have a beast of a Dell server with the following specifications:
- 4x Xeon E5-4657LV2 (48 cores total)
- 196GB RAM
- 2x SCSI 900GB in RAID1 (for the OS)
- 8x Intel S3500 SSD 240GB in RAID10
- H710p RAID controller, 1GB cache
CentOS 6.6; the RAID10 SSD array uses XFS (mkfs.xfs -i size=512
Sorry for the late reply, but as far as I know, after you run pg_stat_reset()
you should always run ANALYZE manually on the database to populate the
statistics.
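In other words, something like ("mydb" is a placeholder database name):

  psql -d mydb -c "SELECT pg_stat_reset();"
  psql -d mydb -c "ANALYZE;"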
Strahinja Kustudić | Lead System Engineer | Nordeus
On Tue, Oct 1, 2013 at 1:50 PM, Xenofon Papadopoulos wrote:
> If we reset
, while when it reaches
vm.dirty_background_bytes the kernel will slowly start flushing those pages
to disk in the background. As far as I remember, vm.dirty_bytes should be
configured to be a little less than the cache size of your RAID controller,
while vm.dirty_background_bytes should be 4 times smaller.
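For example, assuming a 1GB battery-backed controller cache (the exact numbers here are illustrative, not from this thread):

  sysctl -w vm.dirty_bytes=805306368             # ~768MB, a bit under the 1GB controller cache
  sysctl -w vm.dirty_background_bytes=201326592  # 4x smaller, ~192MB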
Strahinja
ing and I know those 51MB which were swapped were just staying
there, so swap isn't an issue at all.
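For what it's worth, a quick way to confirm nothing is actively swapping is to watch the si/so columns of vmstat, which should stay at 0:

  vmstat 5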
Strahinja Kustudić | System Engineer | Nordeus
On Wed, Oct 10, 2012 at 9:30 PM, Jeff Janes wrote:
> On Wed, Oct 10, 2012 at 12:12 AM, Strahinja Kustudić
> wrote:
> > Hi everyone,
get cached.
Regards,
Strahinja
On Wed, Oct 10, 2012 at 4:38 PM, Shaun Thomas wrote:
> On 10/10/2012 09:35 AM, Strahinja Kustudić wrote:
>
>> # sysctl vm.dirty_ratio
>> vm.dirty_ratio = 40
>> # sysctl vm.dirty_background_ratio
>> vm.dirty_background_ratio = 10
On Wed, Oct 10, 2012 at 3:09 PM, Shaun Thomas wrote:
> On 10/10/2012 02:12 AM, Strahinja Kustudić wrote:
>
>>             total       used       free     shared    buffers     cached
>> Mem:        96730      96418        311          0         71      93120
>>
>
> Wow, look at all that RAM. Somethi
o set overcommit_memory, since I only
have Postgres running and nothing else would allocate memory anyway?
I will set readahead later; first I want to see how this works.
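For reference, something like this is what I have in mind (the device name and readahead value are assumptions, not from this thread):

  sysctl -w vm.overcommit_memory=2   # disallow overcommit; tune vm.overcommit_ratio alongside it
  blockdev --setra 4096 /dev/sda     # readahead in 512-byte sectors, i.e. 2MB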
Strahinja Kustudić | System Engineer | Nordeus
On Wed, Oct 10, 2012 at 10:52 AM, Julien Cigar wrote:
> On 10/10/2012
rn of swappiness, I was meaning to do that, but I don't know much
about the overcommit settings; I will read up on what they do.
@Julien thanks for the suggestions, I will tweak them like you suggested.
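For swappiness that would be something like this (the value is the usual suggestion for a dedicated database box, not from this thread):

  sysctl -w vm.swappiness=1   # all but disable swapping; 0 behaves more harshly on newer kernels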
Strahinja Kustudić | System Engineer | Nordeus
On Wed, Oct 10, 2012 at 10:11 AM, Julien Cigar wrote:
Hm, I just noticed that shared_buffers + effective_cache_size = 100GB > 96GB,
which can't be right. effective_cache_size should probably be 80GB.
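In postgresql.conf terms that would be along these lines (a sketch, leaving shared_buffers at whatever it is now):

  effective_cache_size = 80GB   # keep shared_buffers + effective_cache_size below the 96GB of RAM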
Strahinja Kustudić | System Engineer | Nordeus
On Wed, Oct 10, 2012 at 9:12 AM, Strahinja Kustudić
wrote:
> Hi everyone,
>
> I have
Hi everyone,
I have a PostgreSQL 9.1 dedicated server with 16 cores, 96GB RAM, and RAID10
15K SCSI drives, running CentOS 6.2 x64. This server is mainly used for
inserting/updating large amounts of data via COPY/INSERT/UPDATE commands,
and seldom for running SELECT queries.
Here are the rel
at we never
reuse key ranges. Could you be more clear, or give an example, please?
Thanks in advance,
Strahinja
On Tue, Aug 14, 2012 at 6:14 AM, Jeff Janes wrote:
> On Fri, Aug 10, 2012 at 3:15 PM, Strahinja Kustudić
> wrote:
> >
> > For example, yesterday when I checked t
We have PostgreSQL 9.1 running on CentOS 5 on two SSDs, one for indices and
one for data. The database is extremely active with reads and writes. We
have autovacuum enabled, but we didn't tweak its aggressiveness. The
problem is that after some time the database grows by more than 100% on
the fi
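If it turns out to be plain bloat, the usual first step is making autovacuum more aggressive; a postgresql.conf sketch (the values are illustrative, not tuned for this workload):

  autovacuum_max_workers = 6               # default is 3
  autovacuum_vacuum_scale_factor = 0.05    # vacuum after ~5% of a table changes (default 0.2)
  autovacuum_vacuum_cost_delay = 10ms      # default 20ms; lower lets workers do more work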