Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
>
>
> On Thu, May 26, 2016 at 10:52 AM, Artem Tomyuk
> wrote:
> > Please look at the official doc
"... from Amazon S3 and written to the volume) before you can access the block"
Quotation from:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
2016-05-26 17:47 GMT+03:00 Rayson Ho :
> On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk
> wrote:
2016-05-26 16:50 GMT+03:00 Rayson Ho :
> Amazon engineers said that EBS pre-warming is not needed anymore.
But if you skip this step, you still won't get much performance from an EBS
volume created from a snapshot.
Yes, the smaller the instance you choose, the slower EBS will be.
EBS lives separately from EC2; they communicate over the network. So a small
instance = low network bandwidth = poorer disk performance.
But it is still strongly recommended to pre-warm your EBS volumes in any case,
especially if they were created from snapshots.
Hi.
AWS EBS is a really painful story.
How were the volumes for the RAID created? From snapshots?
If you want to get the best performance from EBS, it needs to be pre-warmed.
Here is a tutorial on how to achieve that:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
Also you should r
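For reference, the linked initialization guide amounts to reading every block of the volume once so that later reads don't pay the first-touch penalty; a minimal sketch (the device name is a placeholder, substitute your actual EBS device):

```shell
# Touch every block once; until a block has been read, EBS must first pull
# it down from the S3 snapshot, which is what makes un-warmed volumes slow.
# /dev/xvdf is a placeholder for your EBS device.
sudo dd if=/dev/xvdf of=/dev/null bs=1M
```

Newer AWS docs suggest fio for parallel reads, but plain dd is enough to show the idea.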
I haven't compared the impact of virtualization on other hypervisors yet.
2016-04-26 18:21 GMT+03:00 Michael Nolan :
>
>
> On Tue, Apr 26, 2016 at 10:03 AM, Artem Tomyuk
> wrote:
Hi All.
I've noticed that there is a huge (more than ~3x) performance difference
between a KVM guest and the host machine.
Host machine:
Dell R720xd
RAID10 with 12 15k SAS drives, and RAID0 with 2×128 GB Intel SSD drives in
Dell CacheCade mode.
*On the KVM guest:*
/usr/pgsql-9.4/bin/pg_test_f
There are two ways:
1. Write a bash script that sends an email if the number of connections is >
1000, and put that script in crontab.
2. Monitor it with an external monitoring system like Zabbix, Nagios, etc.
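Option 1 could look something like the following cron script; a rough sketch, assuming passwordless psql access and a working mail command (the threshold and address are placeholders, not from the original thread):

```shell
#!/bin/sh
# Hypothetical watchdog for crontab: mail the admin when the number of
# connected backends exceeds a threshold. Values below are placeholders.
THRESHOLD=1000
ADMIN_EMAIL="dba@example.com"

# Total number of connected backends.
count=$(psql -At -c 'SELECT count(*) FROM pg_stat_activity;')

if [ "${count:-0}" -gt "$THRESHOLD" ]; then
  printf 'PostgreSQL has %s connections (threshold %s)\n' "$count" "$THRESHOLD" |
    mail -s 'Too many PostgreSQL connections' "$ADMIN_EMAIL"
fi
```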
2016-04-04 18:00 GMT+03:00 Moreno Andreo :
> On 04/04/2016 16:54, Artem
2016-04-04 17:43 GMT+03:00 Moreno Andreo :
> Is there a way to monitor active connections, or at least to report when
> they grow too much?
> (say, I have an 8-core system and want to track down if, and when, active
> connections grow over 80)
>
You can achieve that by just running a simple query like
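The query itself got cut off in the archive; a typical example of such a check (my reconstruction, not necessarily the original) would be:

```sql
-- Count backends currently executing a statement (PostgreSQL 9.2+,
-- where pg_stat_activity has a "state" column).
SELECT count(*) FROM pg_stat_activity WHERE state = 'active';
```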
Hi All!
Does Postgres use shared_buffers during a seq scan?
How can I optimize seq scans on big tables?
Thanks!
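One way to see whether a seq scan is being served from shared_buffers is EXPLAIN with the BUFFERS option; a small sketch (the table name is a placeholder):

```sql
-- "shared hit" counts pages found in shared_buffers; "read" counts pages
-- fetched from the OS page cache or disk. bigtable is a placeholder name.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM bigtable;
```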
Hi all.
Is there any way to retrieve from pg_stat_activity which query generates or
consumes the most IO load or time? (It's not very comfortable to get this
from iotop, because it doesn't show the full text of the query.)
Thanks for any advice.
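pg_stat_activity alone doesn't track per-query IO; one common approach (not mentioned in the thread, so treat it as a suggestion) is the pg_stat_statements extension with track_io_timing enabled:

```sql
-- Requires: pg_stat_statements in shared_preload_libraries,
-- CREATE EXTENSION pg_stat_statements, and track_io_timing = on for the
-- timing columns. Column names are for PostgreSQL 9.x (total_time became
-- total_exec_time in version 13).
SELECT query,
       blk_read_time + blk_write_time AS io_time_ms,
       total_time
FROM pg_stat_statements
ORDER BY blk_read_time + blk_write_time DESC
LIMIT 10;
```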
Hi.
I've noticed that an autovacuum process ran for more than 10 minutes; during
this time Zabbix logged more than 90% IO disk utilization on the db volume:
===>29237 2016-03-02 15:17:23 EET 0 [24-1]LOG:
automatic vacuum of table "lb_upr.public._reference32": index scans: 1
pages: 0 rem
Hi.
I've noticed a huge decrease in performance.
During this, in htop I see a lot (200-300) of connections in the state
"startup", each of them eating ~3% of CPU time. These processes are not
visible in pg_stat_activity, so I can't understand what they are doing, and I
can't kill them. I can't see the bottleneck
Hi all.
Does the speed of hash operations depend on the performance of the CPU?
Below you can see part of the output of an EXPLAIN ANALYZE command.
*Intel(R) Xeon(R) CPU E7520 @ 1.87GHz*
" -> Hash (cost=337389.43..337389.43 rows=3224443 width=34)
(actual time=15046.382..15046.382 ro