On 11.06.2014 22:17, thegeezer wrote:
> On 06/11/2014 07:57 PM, Stefan G. Weichinger wrote:
>> looks promising:
>>
>
> awesome. i did have a look through the diff; there are lots of scsi
> drivers selected and storage (block) cgroups, but i think the crucial
> factor was that HZ was set at 100 previously and 1000 now. i guess it
> has helped kernel io.
On 06/11/2014 07:57 PM, Stefan G. Weichinger wrote:
> looks promising:
>
awesome. i did have a look through the diff; there are lots of scsi
drivers selected and storage (block) cgroups, but i think the crucial
factor was that HZ was set at 100 previously and 1000 now. i guess it
has helped kernel io.
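The tick-rate difference is easy to confirm in the two configs; a
minimal sketch, assuming the kernel source sits at the usual
/usr/src/linux location:

# compare the timer frequency between old and new kernel configs
grep '^CONFIG_HZ' /usr/src/linux/.config
# after the switch this should print (among the HZ_* choice lines):
# CONFIG_HZ_1000=y
# CONFIG_HZ=1000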
looks promising:
virt-backup dumps and packs a 12 GB image file within ~145 seconds to a
non-compressing btrfs subvolume:
a) does an LVM snapshot
b) dd with bs=4M, piped through pigz to the target file
The backup of the bigger LV (~250 GB) is running right now.
The system feels snappier than with the old kernel.
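The two steps map onto something like the following; a minimal sketch
with invented snapshot names, sizes and target path, not virt-backup's
exact invocation:

# a) snapshot the LV so the image stays consistent during the copy
lvcreate --snapshot --size 2G --name disk0_snap /dev/vg01/winserver_disk0
# b) stream the snapshot through pigz onto the btrfs subvolume
dd if=/dev/vg01/disk0_snap bs=4M | pigz > /mnt/backup/disk0.img.gz
# drop the snapshot once the dump is done
lvremove -f /dev/vg01/disk0_snap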
On 06/11/2014 03:15 PM, Stefan G. Weichinger wrote:
> On 11.06.2014 15:44, Stefan G. Weichinger wrote:
>> On 11.06.2014 15:32, thegeezer wrote:
>>
>>>> So my kernel-config seems buggy or I should downgrade to something older?
>>> I suspect that in your fully running system somethingelse(tm) is
>>> stealing the activity. can you start up with no services enabled and
>>> do the test ?
On 11.06.2014 15:44, Stefan G. Weichinger wrote:
> On 11.06.2014 15:32, thegeezer wrote:
>
>>> So my kernel-config seems buggy or I should downgrade to something older?
>>
>> I suspect that in your fully running system somethingelse(tm) is
>> stealing the activity. can you start up with no services enabled and
>> do the test ?
On 11.06.2014 15:32, thegeezer wrote:
>> So my kernel-config seems buggy or I should downgrade to something older?
>
> I suspect that in your fully running system somethingelse(tm) is
> stealing the activity. can you start up with no services enabled and
> do the test ?
hm, yes. although I h
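On an OpenRC box the no-services boot can be done with an almost empty
runlevel; a minimal sketch, with a made-up runlevel name:

# create an empty runlevel; only sysinit/boot services will start
mkdir /etc/runlevels/bare
# append 'softlevel=bare' to the kernel command line in the
# bootloader, reboot, then re-run the dd test and compare
rc-status bare    # should list no services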
On 06/11/2014 01:41 PM, Stefan G. Weichinger wrote:
> On 11.06.2014 13:52, thegeezer wrote:
>
>> ok baffling.
>> sdc i already said would be slower but not this much slower
>> it certainly should not be slower than the lvm that sits on top of it!
>> i can't see anything in the cgroups that stands out, maybe someone else
>> can give a better voice to this.
On 11.06.2014 13:52, thegeezer wrote:
> ok baffling.
> sdc i already said would be slower but not this much slower
> it certainly should not be slower than the lvm that sits on top of it!
> i can't see anything in the cgroups that stands out, maybe someone else
> can give a better voice to this.
On 06/11/2014 12:21 PM, Stefan G. Weichinger wrote:
> On 11.06.2014 13:18, thegeezer wrote:
>
>> just out of curiosity, what happens if you do
>> # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
>> # dd if=/dev/sdc of=/dev/null bs=1M count=100
>
>
> booze ~ # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
On 11.06.2014 13:18, thegeezer wrote:
> just out of curiosity, what happens if you do
> # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
> # dd if=/dev/sdc of=/dev/null bs=1M count=100
booze ~ # dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
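One way to keep the page cache out of these numbers is direct I/O; a
sketch, assuming GNU dd:

# read through O_DIRECT so repeated runs hit the disk, not the cache
dd if=/dev/vg01/amhold of=/dev/null bs=1M count=100 iflag=direct
dd if=/dev/sdc of=/dev/null bs=1M count=100 iflag=direct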
On 11.06.2014 13:01, thegeezer wrote:
> yeah this is very very odd.
> firstly there should not be such a discrepancy between hdparm -t and dd if=
> secondly you would imagine that the first dd would be cached and so
> would be faster the second time round
> please check for the turbo boost disable
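Both suspicions can be checked from sysfs/procfs; a sketch (the
intel_pstate knob only exists on newer kernels/CPUs):

# current governor, and whether turbo is disabled (1 = disabled)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/intel_pstate/no_turbo
# flush the page cache so both dd runs start cold
sync && echo 3 > /proc/sys/vm/drop_caches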
On 06/11/2014 11:49 AM, Stefan G. Weichinger wrote:
> On 11.06.2014 12:41, thegeezer wrote:
>
>>> everything around 380 MB/s ... only ~350 MB/s for
>>> /dev/vg01/winserver_disk0 (which is still nice)
>>
>> OK here is the clue.
>> if the LVs are also showing such fast speed, then please can you show
>> your command that you are trying to run that is so slow
On 11.06.2014 12:41, thegeezer wrote:
>> everything around 380 MB/s ... only ~350 MB/s for
>> /dev/vg01/winserver_disk0 (which is still nice)
>
>
> OK here is the clue.
> if the LVs are also showing such fast speed, then please can you show
> your command that you are trying to run that is so slow
On 06/11/2014 11:34 AM, Stefan G. Weichinger wrote:
> On 11.06.2014 12:14, thegeezer wrote:
>
>>> Basically 3 RAID-6 hw-raids over 6 SAS hdds.
>> OK so i'm confused again. RAID6 requires a minimum of 4 drives.
>> if you have 3 raid6's then you would need 12 drives (coffee hasn't quite
>> activated in me yet so my maths may not be right)
On 11.06.2014 12:14, thegeezer wrote:
>> Basically 3 RAID-6 hw-raids over 6 SAS hdds.
>
> OK so i'm confused again. RAID6 requires a minimum of 4 drives.
> if you have 3 raid6's then you would need 12 drives (coffee hasn't quite
> activated in me yet so my maths may not be right)
> or do you ha
On 06/11/2014 11:14 AM, thegeezer wrote:
> just some extra thoughts
*cough* yeah i meant to keep typing!
the extra thoughts are that the better way of doing this would be to
create:
RAID1: physical disks 1+2
RAID6: physical disks 3,4,5,6
then put lvm on there as vg01 with two PVs, one on the raid1 volume and
one on the raid6 volume, roughly as in the sketch below.
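A minimal sketch of that layout, with invented device names
(/dev/sda = the RAID1 volume, /dev/sdb = the RAID6 volume):

# one PV per hardware array, both inside vg01
pvcreate /dev/sda /dev/sdb
vgcreate vg01 /dev/sda /dev/sdb
# place each LV on the PV that suits its workload
lvcreate -L 50G -n fastlv vg01 /dev/sda     # latency-sensitive, raid1
lvcreate -L 250G -n bulklv vg01 /dev/sdb    # bulk data, raid6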
On 06/11/2014 10:34 AM, Stefan G. Weichinger wrote:
> On 11.06.2014 11:19, thegeezer wrote:
>
>> Hi Stefan,
>> block size / stripe size mismatches only really penalise random io; if
>> you are trying to use dd and have slow speeds this would suggest
>> something else is awry.
>> I don't know the
On 05/27/2014 02:03 PM, Stefan G. Weichinger wrote:
> I think I have some IO issue going on ... very likely some mismatch of
> block sizes ... the hw-raid, then LVM, then the snapshot on top of
> that ... and a filesystem with properties as target ... oh my. Choosing
> noop as IO-scheduler helps a bit.
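For reference, the scheduler can be switched per device at runtime; a
sketch (the active scheduler is shown in brackets):

# show, then set, the scheduler for the raid volume (runtime only)
cat /sys/block/sdc/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler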
hello again ... no one interested? ;-)
I understand, in a way ...
Maybe I have something misconfigured in the kernel ...
Right now I get these messages again:
[ 1998.118658] hpet1: lost 1 rtc interrupts
Should I disable HPET in the BIOS and/or via kernel command line?
I never know how to set this ...
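If it is the command-line route, the parameter itself is just
hpet=disable; a sketch for GRUB2 (grub-legacy takes the parameter
directly on the kernel line, and the tool may be named grub-mkconfig
depending on the install):

# /etc/default/grub: append the parameter, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="... hpet=disable"
# then regenerate the config
grub2-mkconfig -o /boot/grub/grub.cfg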
Found out something about megacli and checked the cache settings and
such, following
http://highperfpostgres.com/guides/lsi-megaraid-setup-for-postgresql/
Did I set a wrong Strip Size for the third array?
good night, late here ...
Stefan
# megacli -LDInfo -Lall -aALL
Adapter 0 -- Virtual
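The cache policy itself can be queried and set per logical drive; a
sketch based on that guide (flag spellings vary between MegaCli
versions, so treat these as illustrative, not authoritative):

# show the current cache policy for all logical drives
megacli -LDGetProp -Cache -LAll -aAll
# the guide's suggestions: write-back, adaptive read-ahead, direct IO
megacli -LDSetProp WB -LAll -aAll
megacli -LDSetProp ADRA -LAll -aAll
megacli -LDSetProp Direct -LAll -aAll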
additional info from journalctl.
I don't like the mix of 512-byte logical blocks vs. 4096-byte
physical blocks ... sounds wrong, hm?
->
Jun 10 21:54:31 booze kernel: megaraid_sas 0000:02:00.0: Controller
type: MR,Memory size is: 512MB
Jun 10 21:54:31 booze kernel: scsi7 : LSI SAS based MegaRAID driver
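The sizes the kernel sees can be checked per device; a sketch. For what
it's worth, 512-byte logical over 4096-byte physical is a normal
512-emulation drive and only hurts when partitions or stripes are not
4K-aligned:

cat /sys/block/sdc/queue/logical_block_size    # expect 512 here
cat /sys/block/sdc/queue/physical_block_size   # expect 4096 here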
On 27.05.2014 15:03, Stefan G. Weichinger wrote:
>> way too slow ...
>
> I think I have some IO issue going on ... very likely some mismatch of
> block sizes ...
>
> the hw-raid, then LVM, then the snapshot on top of that ... and a
> filesystem with properties as target ... oh my.
>
> Choosing noop as IO-scheduler helps a bit.
On 26.05.2014 21:57, Stefan G. Weichinger wrote:
> On 26.05.2014 19:47, Stefan G. Weichinger wrote:
>
>> But I somehow think the performance is sub-optimal.
>
> virt-backup is slow as well (using dd and gzip or pigz in my own patched
> version). Yes, that LVM stuff again ...
>
> I run 6 SAS disks and built hardware raids.
On 26.05.2014 19:47, Stefan G. Weichinger wrote:
> But I somehow think the performance is sub-optimal.
virt-backup is slow as well (using dd and gzip or pigz in my own patched
version). Yes, that LVM stuff again ...
I run 6 SAS disks and built hardware raids.
Should I look into the cache settings?
On 24.05.2014 21:24, Stefan G. Weichinger wrote:
> On 23.05.2014 09:52, Stefan G. Weichinger wrote:
>>
>> Greetings,
>>
>> I have a new Fujitsu TX150 here, with a
>>
>> Intel(R) C600 SAS Controller
>>
>> and an LTO4 drive attached to it.
>>
>> My kernel has support for isci, scsi tape, ahci and all the sas stuff
>> ... but I don't get any "st" devices.
On 23.05.2014 09:52, Stefan G. Weichinger wrote:
>
> Greetings,
>
> I have a new Fujitsu TX150 here, with a
>
> Intel(R) C600 SAS Controller
>
> and an LTO4 drive attached to it.
>
> My kernel has support for isci, scsi tape, ahci and all the sas stuff
> ... but I don't get any "st" devices.
Greetings,
I have a new Fujitsu TX150 here, with a
Intel(R) C600 SAS Controller
and an LTO4 drive attached to it.
My kernel has support for isci, scsi tape, ahci and all the sas stuff
... but I don't get any "st" devices.
Do I need SCSI_PROC_FS set? I just wonder ...
thanks, Stefan
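The st nodes come from the SCSI tape driver rather than SCSI_PROC_FS
(the latter is only the legacy /proc/scsi interface); a sketch of the
options to verify, assuming /usr/src/linux points at the running
kernel's source:

# tape device nodes need the SCSI tape driver plus the HBA driver
grep -E '^CONFIG_CHR_DEV_ST|^CONFIG_SCSI_ISCI' /usr/src/linux/.config
# wanted: CONFIG_CHR_DEV_ST=y (or =m with the st module loaded)
#         CONFIG_SCSI_ISCI=y  (the Intel C600 SAS driver)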