[ceph-users] RadosGW problems on Ubuntu

2015-08-14 Thread Stolte, Felix
Hello everyone,

 

we are currently testing Ceph (Hammer) and OpenStack (Kilo) on Ubuntu 14.04
LTS servers. Yesterday I tried to set up the rados gateway with keystone
integration for swift via ceph-deploy. I followed the instructions on
http://ceph.com/docs/master/radosgw/keystone/ and
http://ceph.com/ceph-deploy/docs/rgw.html

 

I encountered the following problems:

 

1. Bootstrap-rgw.keyring missing. I deployed the cluster under Firefly, so I
had to create it manually (according to the documentation this is normal, but
it would be nice to have instructions on how to create the
bootstrap-rgw.keyring manually; a sketch of what I did follows below).
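Roughly (assuming the bootstrap-rgw auth profile is available under Hammer):

# create the bootstrap-rgw key the old deployment never created
ceph auth get-or-create client.bootstrap-rgw mon 'allow profile bootstrap-rgw'
mkdir -p /var/lib/ceph/bootstrap-rgw
ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring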

 

2. According to the documentation, the section in ceph.conf should be named
[client.radosgw.InstanceName]. Doing so, my config wasn't used at all when I
started the rados gateway using "service radosgw-all start". After changing the
section name to [client.rgw.InstanceName], nearly everything worked fine.

 

3. The nss db path parameter from my ceph.conf is still ignored, and
OpenStack can't sync users to radosgw. The only way I got it working was to
start radosgw manually, passing all parameters directly.

 

I'd like to know if (or what) I am doing wrong, or whether I hit a bug in the
documentation, the upstart script or radosgw.

 

My configuration:

 

[client.rgw.gw-v01] # Works except the nss db path

log file = /var/log/radosgw/radosgw.log

rgw frontends = "civetweb port=80"

rgw keystone admin token = secret

rgw keystone url = "http://xxx.xxx.xxx.xxx:5000"

rgw keystone accepted roles = "s3, swift, admin, _member_, user, Member"

rgw s3 auth use keystone = "true"

nss db path = /var/lib/ceph/nss

rgw keyring = /var/lib/ceph/radosgw/ceph-rgw.gw-v01/keyring

rgw host = gw-v01

 

Working radosgw with:

/usr/bin/radosgw --id rgw.gw-v01 --log-file /var/log/radosgw/radosgw.log
--rgw-frontends "civetweb port=80" --rgw-keystone-admin-token secret
--rgw-keystone-url "http://xxx.xxx.xxx.xxx:5000"
--rgw-keystone-accepted-roles "s3, swift, admin, _member_, user, Member"
--rgw-s3-auth-use-keystone "true" --nss-db-path "/var/lib/ceph/nss"
--rgw-keyring /var/lib/ceph/radosgw/ceph-rgw.gw-v01/keyring --rgw-host
gw-v01
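
A quick way to see which section name the tooling actually resolves (assuming
ceph-conf supports --lookup as documented):

# non-empty output means the section was parsed for that daemon name
ceph-conf -c /etc/ceph/ceph.conf -n client.rgw.gw-v01 --lookup 'nss db path'
ceph-conf -c /etc/ceph/ceph.conf -n client.radosgw.gw-v01 --lookup 'rgw frontends'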

 

Best regards

Felix

 

Forschungszentrum Juelich GmbH

52425 Juelich

Sitz der Gesellschaft: Juelich

Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498

Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher

Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),

Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,

Prof. Dr. Sebastian M. Schmidt

 



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Running Openstack Nova and Ceph OSD on same machine

2015-10-26 Thread Stolte, Felix
Hi all,

 

is anyone running nova compute on Ceph OSD servers and could share their
experience?

 

Thanks and Regards,

 

Felix

 

Forschungszentrum Juelich GmbH

52425 Juelich

Sitz der Gesellschaft: Juelich

Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498

Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher

Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),

Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,

Prof. Dr. Sebastian M. Schmidt

 



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-disk list crashes in infernalis

2015-12-03 Thread Stolte, Felix
Hi all,

 

I upgraded from Hammer to Infernalis today, and even though I had a hard time
doing so, I finally got my cluster running in a healthy state (mainly my
fault, because I did not read the release notes carefully).

 

But when I try to list my disks with "ceph-disk list" I get the following
Traceback:

 

 ceph-disk list

Traceback (most recent call last):

  File "/usr/sbin/ceph-disk", line 3576, in 

main(sys.argv[1:])

  File "/usr/sbin/ceph-disk", line 3532, in main

main_catch(args.func, args)

  File "/usr/sbin/ceph-disk", line 3554, in main_catch

func(args)

  File "/usr/sbin/ceph-disk", line 2915, in main_list

devices = list_devices(args)

  File "/usr/sbin/ceph-disk", line 2855, in list_devices

partmap = list_all_partitions(args.path)

  File "/usr/sbin/ceph-disk", line 545, in list_all_partitions

dev_part_list[name] = list_partitions(os.path.join('/dev', name))

  File "/usr/sbin/ceph-disk", line 550, in list_partitions

if is_mpath(dev):

  File "/usr/sbin/ceph-disk", line 433, in is_mpath

uuid = get_dm_uuid(dev)

  File "/usr/sbin/ceph-disk", line 421, in get_dm_uuid

uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')

  File "/usr/sbin/ceph-disk", line 416, in block_path

rdev = os.stat(path).st_rdev

OSError: [Errno 2] No such file or directory: '/dev/cciss!c0d0'
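
The path in the exception looks like the cause: under /sys/block the kernel
encodes the '/' of a device name as '!', so 'cciss!c0d0' would have to be
translated back to 'cciss/c0d0' before being used as a /dev path. A minimal
sketch of that mapping (my illustration, not the actual ceph-disk code):

# names listed in /sys/block encode '/' as '!'
for entry in /sys/block/*; do
  name=$(basename "$entry")
  dev="/dev/$(echo "$name" | tr '!' '/')"   # cciss!c0d0 -> /dev/cciss/c0d0
  # stat works on the translated node; /dev/cciss!c0d0 raises ENOENT
  stat -c '%n: major/minor %t:%T' "$dev"
done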

 

 

I'm running Ceph 9.2 on Ubuntu 14.04.3 LTS on HP hardware with an HP P400
RAID controller: a 4-node cluster (3 of them are mons), 5-6 OSDs per node with
journals on a separate drive.

 

Does anyone know how to solve this or did I hit a bug?

 

Regards Felix

 

Forschungszentrum Juelich GmbH

52425 Juelich

Sitz der Gesellschaft: Juelich

Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498

Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher

Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),

Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,

Prof. Dr. Sebastian M. Schmidt

 



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-03 Thread Stolte, Felix
Hi Loic,

Thanks for the quick reply and filing the issue.

Regards

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt

-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Thursday, 3 December 2015 11:01
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in infernalis

Hi Felix,

This is a bug; I filed an issue for you at
http://tracker.ceph.com/issues/13970

Cheers

On 03/12/2015 10:56, Stolte, Felix wrote:
> Hi all,
> 
> [...]

-- 
Loïc Dachary, Artisan Logiciel Libre



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-06 Thread Stolte, Felix
0:0:0/block/sr0

Regards

Felix

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt


-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Saturday, 5 December 2015 19:29
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis

Hi Felix,

Could you please show the output of ls -l /dev /sys/block ?

Thanks !

On 03/12/2015 15:45, Stolte, Felix wrote:
> Hi Loic,
> 
> [...]
> 

--
Loïc Dachary, Artisan Logiciel Libre



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-08 Thread Stolte, Felix
0 Dez  8 15:07 cciss!c0d7p4
drwxr-xr-x 5 root root    0 Dez  8 15:07 cciss!c0d7p5
drwxr-xr-x 5 root root    0 Dez  8 15:07 cciss!c0d7p6
drwxr-xr-x 5 root root    0 Dez  8 15:07 cciss!c0d7p7
-r--r--r-- 1 root root 4096 Dez  8 15:07 dev
lrwxrwxrwx 1 root root    0 Dez  8 15:07 device -> ../../../c0d7
-r--r--r-- 1 root root 4096 Dez  8 15:07 discard_alignment
-r--r--r-- 1 root root 4096 Dez  8 15:07 ext_range
drwxr-xr-x 2 root root    0 Dez  8 15:07 holders
-r--r--r-- 1 root root 4096 Dez  8 15:07 inflight
drwxr-xr-x 2 root root    0 Dez  8 15:07 power
drwxr-xr-x 3 root root    0 Dez  8 15:07 queue
-r--r--r-- 1 root root 4096 Dez  8 15:07 range
-r--r--r-- 1 root root 4096 Dez  8 15:07 removable
-r--r--r-- 1 root root 4096 Dez  8 15:07 ro
-r--r--r-- 1 root root 4096 Dez  8 15:07 size
drwxr-xr-x 2 root root    0 Dez  8 15:07 slaves
-r--r--r-- 1 root root 4096 Dez  8 15:07 stat
lrwxrwxrwx 1 root root    0 Dez  8 15:07 subsystem -> ../../../../../../../../class/block
drwxr-xr-x 2 root root    0 Dez  8 15:07 trace
-rw-r--r-- 1 root root 4096 Dez  8 15:07 uevent


Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt

-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, 8 December 2015 15:06
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in infernalis

Hi Felix,

Could you please ls -l /dev/cciss /sys/block/cciss*/ ?

Thanks for being the cciss proxy in fixing this problem :-)

Cheers

On 07/12/2015 11:43, Loic Dachary wrote:
> Thanks !
> 
> On 06/12/2015 17:50, Stolte, Felix wrote:
>> Hi Loic,
>>
>> output is:
>>
>> /dev:
>> insgesamt 0
>> crw------- 1 root root  10, 235 Dez  2 17:02 autofs
>> drwxr-xr-x 2 root root      1000 Dez  2 17:02 block
>> drwxr-xr-x 2 root root        60 Dez  2 17:02 bsg
>> crw------- 1 root root  10, 234 Dez  5 06:29 btrfs-control
>> drwxr-xr-x 3 root root        60 Dez  2 17:02 bus
>> crw-r--r-- 1 root root 255, 171 Dez  2 17:02 casr
>> drwxr-xr-x 2 root root       500 Dez  2 17:02 cciss
>> crw-r--r-- 1 root root 255, 173 Dez  2 17:02 ccsm
>> lrwxrwxrwx 1 root root         3 Dez  2 17:02 cdrom -> sr0
>> crw-r--r-- 1 root root 255, 178 Dez  2 17:02 cdt
>> crw-r--r-- 1 root root 255, 172 Dez  2 17:02 cecc
>> crw-r--r-- 1 root root 255, 176 Dez  2 17:02 cevt
>> drwxr-xr-x 2 root root      3820 Dez  5 06:29 char
>> crw------- 1 root root   5,   1 Dez  2 17:04 console
>> lrwxrwxrwx 1 root root        11 Dez  2 17:02 core -> /proc/kcore
>> drw-r--r-- 2 root root       200 Dez  2 17:02 cpqhealth
>> drwxr-xr-x 2 root root        60 Dez  2 17:02 cpu
>> crw------- 1 root root  10,  60 Dez  2 17:02 cpu_dma_latency
>> crw-r--r-- 1 root root 255, 180 Dez  2 17:02 crom
>> crw------- 1 root root  10, 203 Dez  2 17:02 cuse
>> drwxr-xr-x 8 root root       160 Dez  2 17:02 disk
>> drwxr-xr-x 2 root root       100 Dez  2 17:02 dri
>> crw------- 1 root root  10,  61 Dez  2 17:02 ecryptfs
>> crw-rw---- 1 root video  29,   0 Dez  2 17:02 fb0
>> lrwxrwxrwx 1 root root        13 Dez  2 17:02 fd -> /proc/self/fd
>> crw-rw-rw- 1 root root   1,   7 Dez  2 17:02 full
>> crw-rw-rw- 1 root root  10, 229 Dez  2 17:02 fuse
>> crw------- 1 root root 251,   0 Dez  2 17:02 hidraw0
>> crw------- 1 root root 251,   1 Dez  2 17:02 hidraw1
>> crw------- 1 root root  10, 228 Dez  2 17:02 hpet
>> drwxr-xr-x 2 root root       360 Dez  2 17:02 hpilo
>> crw------- 1 root root  89,   0 Dez  2 17:02 i2c-0
>> crw------- 1 root root  89,   1 Dez  2 17:02 i2c-1
>> crw------- 1 root root  89,   2 Dez  2 17:02 i2c-2
>> crw------- 1 root root  89,   3 Dez  2 17:02 i2c-3
>> crw-r--r-- 1 root root 255, 184 Dez  2 17:02 indc
>> drwxr-xr-x 4 root root       200 Dez  2 17:02 input
>> crw------- 1 root root 248,   0 Dez  2 17:02 ipmi0
>> crw------- 1 root root 249,   0 Dez  2 17:02 kfd
>> crw-r--r-- 1 root root   1,  11 Dez  2 17:02 kmsg
>> srw-rw-rw- 1 root root         0 Dez  2 17:02 log
>> brw-rw---- 1 root disk   7,   0 Dez  2 17:02 loop0
>> brw-rw---- 1 root disk   7,   1 Dez  2 17:02 loop1
>> brw-rw---- 1 root disk   7,   2 Dez  2 17:02 loop2
>> brw-rw---- 1 root disk   7,   3 Dez  2 17:02 loop3
>> brw-rw---- 1 root disk   7,   4 Dez  2 17:02 loop4
>> brw-rw---- 1 root disk   7,   5 Dez  2 17:02 loop5
>> brw-rw---- 1 root disk   7,   6 Dez  2 17:02 loop6
>> brw-rw---- 1 root disk   7,   7 Dez  2 17:02 loop7
>> crw------- 1 root root  10, 237 Dez  2 17:02 loop-control
>> drwxr-xr-x 2 root root        60 Dez  2 17:02 mapper
>> crw------- 1 root root  10, 227 Dez  2 17:02 mcelog
>> crw-r----- 1 root kmem   1,   1 Dez  2 17:02 mem
>> crw------- 1 root root  10,  57 Dez  2 17:02 memory_bandwidth
>> crw------- 1 root root  10, 220 Dez  2 17:02 mptctl
>> [...]

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-08 Thread Stolte, Felix
Yes, they do contain a "!"

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt


-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, 8 December 2015 15:17
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in infernalis

I also need to confirm that the names that show in /sys/block/*/holders are
with a ! (it would not make sense to me if they were not but ...)

On 08/12/2015 15:05, Loic Dachary wrote:
> Hi Felix,
> 
> [...]

[ceph-users] Fujitsu

2017-04-20 Thread Stolte, Felix
Hello cephers,

is anyone using Fujitsu hardware for Ceph OSDs with the PRAID EP400i
RAID controller in JBOD mode? We have three identical servers with
identical disk placement: the first three slots are SSDs for journaling and the
remaining nine slots hold SATA disks. The problem is that in Ubuntu (and I
would guess in any other distribution as well) the disk paths for the same
physical drive slot differ between the three servers. For example, on server A
the first disk is identified as "pci-:01:00.0-scsi-0:0:14:0" but as
"pci-:01:00.0-scsi-0:0:17:0" on another. This makes provisioning OSDs
nearly impossible. Has anyone run into the same issue and knows how to fix this?
Fujitsu support couldn't help (in fact they did not know that you can put
the controller in JBOD mode ...). I activated JBOD via the "Enable JBOD"
option in the controller management menu of the PRAID EP400i RAID
controller.
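
A possible angle (untested here): address the disks by the identifiers udev
derives from WWN/serial, which should not depend on the controller's target
numbering:

# by-id links are stable per physical disk across re-enumeration
ls -l /dev/disk/by-id/ | grep -v -- -part
# inspect what udev knows about a given disk
udevadm info --query=property --name=/dev/sda | grep -E 'ID_WWN|ID_SERIAL'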

Cheers Felix


Felix Stolte
IT-Services
E-Mail: f.sto...@fz-juelich.de
Internet: http://www.fz-juelich.de

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Stolte, Felix
Hi Loic,

I applied the fixed version. I don't get error messages when running
ceph-disk list, but the output is not as I expect it to be (on the Hammer
release I saw all partitions):

ceph-disk list
/dev/cciss/c0d0 other, unknown
/dev/cciss/c0d1 other, unknown
/dev/cciss/c0d2 other, unknown
/dev/cciss/c0d3 other, unknown
/dev/cciss/c0d4 other, unknown
/dev/cciss/c0d5 other, unknown
/dev/cciss/c0d6 other, unknown
/dev/cciss/c0d7 other, unknown
/dev/loop0 other, unknown
/dev/loop1 other, unknown
/dev/loop2 other, unknown
/dev/loop3 other, unknown
/dev/loop4 other, unknown
/dev/loop5 other, unknown
/dev/loop6 other, unknown
/dev/loop7 other, unknown
/dev/ram0 other, unknown
/dev/ram1 other, unknown
/dev/ram10 other, unknown
/dev/ram11 other, unknown
/dev/ram12 other, unknown
/dev/ram13 other, unknown
/dev/ram14 other, unknown
/dev/ram15 other, unknown
/dev/ram2 other, unknown
/dev/ram3 other, unknown
/dev/ram4 other, unknown
/dev/ram5 other, unknown
/dev/ram6 other, unknown
/dev/ram7 other, unknown
/dev/ram8 other, unknown
/dev/ram9 other, unknown
/dev/sr0 other, unknown

Executed as root user on an OSD node with OSDs on c0d1 to c0d6 and journals
on c0d7.

Greets Felix

Kind regards

Felix Stolte
IT-Services
Telefon 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
Internet: http://www.fz-juelich.de

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt


-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Wednesday, 9 December 2015 23:55
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis

Hi Felix,

It would be great if you could try the fix from
https://github.com/dachary/ceph/commit/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb .
It's only changing the ceph-disk file so you could just get it from
https://github.com/dachary/ceph/raw/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb/src/ceph-disk
and replace the existing (after a backup) ceph-disk on one of your machines.

It passes integration tests
http://167.114.248.156:8081/ubuntu-2015-12-09_19:37:44-ceph-disk-wip-13970-ceph-disk-cciss-infernalis---basic-openstack/
but these do not have the driver you're using. They only show nothing has been
broken by the patch ;-)

Cheers

On 08/12/2015 15:27, Stolte, Felix wrote:
> Yes, they do contain a "!"
> 
> [...]

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Stolte, Felix
Hi Loic,

output is still the same: 

ceph-disk list
/dev/cciss/c0d0 other, unknown
/dev/cciss/c0d1 other, unknown
/dev/cciss/c0d2 other, unknown
/dev/cciss/c0d3 other, unknown
/dev/cciss/c0d4 other, unknown
/dev/cciss/c0d5 other, unknown
/dev/cciss/c0d6 other, unknown
/dev/cciss/c0d7 other, unknown
/dev/loop0 other, unknown
/dev/loop1 other, unknown
/dev/loop2 other, unknown
/dev/loop3 other, unknown
/dev/loop4 other, unknown
/dev/loop5 other, unknown
/dev/loop6 other, unknown
/dev/loop7 other, unknown
/dev/ram0 other, unknown
/dev/ram1 other, unknown
/dev/ram10 other, unknown
/dev/ram11 other, unknown
/dev/ram12 other, unknown
/dev/ram13 other, unknown
/dev/ram14 other, unknown
/dev/ram15 other, unknown
/dev/ram2 other, unknown
/dev/ram3 other, unknown
/dev/ram4 other, unknown
/dev/ram5 other, unknown
/dev/ram6 other, unknown
/dev/ram7 other, unknown
/dev/ram8 other, unknown
/dev/ram9 other, unknown
/dev/sr0 other, unknown

Regards

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt

-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Friday, 11 December 2015 02:12
To: Stolte, Felix; ceph-us...@ceph.com
Subject: Re: AW: AW: [ceph-users] ceph-disk list crashes in infernalis

Hi,

I missed two, could you please try again with:

https://raw.githubusercontent.com/dachary/ceph/b1ad205e77737cfc42400941ffbb56907508efc5/src/ceph-disk

This is from https://github.com/ceph/ceph/pull/6880

Thanks for your patience :-)

Cheers

On 10/12/2015 10:27, Stolte, Felix wrote:
> Hi Loic,
> 
> [...]

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Stolte, Felix
Hi Jens,

output is attached (stderr + stdout)

Regards

-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de]
Sent: Friday, 11 December 2015 09:10
To: Stolte, Felix
Cc: Loic Dachary; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 8:19 GMT+01:00 Stolte, Felix :
> Hi Loic,
>
> output is still the same:
>
> ceph-disk list
> /dev/cciss/c0d0 other, unknown
> /dev/cciss/c0d1 other, unknown
> /dev/cciss/c0d2 other, unknown
> /dev/cciss/c0d3 other, unknown
> /dev/cciss/c0d4 other, unknown
> /dev/cciss/c0d5 other, unknown
> /dev/cciss/c0d6 other, unknown
> /dev/cciss/c0d7 other, unknown

Can you please rerun as "ceph-disk -v list"? That should give some more 
information about where things go wrong.

> /dev/loop0 other, unknown
> [...]
> /dev/ram9 other, unknown
> /dev/sr0 other, unknown

@Loic: Is there a reason for listing all the ram and loop devices?
Happens on my local system too and they don't get listed with the old Hammer 
version. Do you want to handle that as a separate bug?
DEBUG:ceph-disk:list_all_partitions: sr0
DEBUG:ceph-disk:get_dm_uuid /dev/sr0 uuid path is /sys/dev/block/11:0/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram0
DEBUG:ceph-disk:get_dm_uuid /dev/ram0 uuid path is /sys/dev/block/1:0/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram1
DEBUG:ceph-disk:get_dm_uuid /dev/ram1 uuid path is /sys/dev/block/1:1/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram2
DEBUG:ceph-disk:get_dm_uuid /dev/ram2 uuid path is /sys/dev/block/1:2/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram3
DEBUG:ceph-disk:get_dm_uuid /dev/ram3 uuid path is /sys/dev/block/1:3/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram4
DEBUG:ceph-disk:get_dm_uuid /dev/ram4 uuid path is /sys/dev/block/1:4/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram5
DEBUG:ceph-disk:get_dm_uuid /dev/ram5 uuid path is /sys/dev/block/1:5/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram6
DEBUG:ceph-disk:get_dm_uuid /dev/ram6 uuid path is /sys/dev/block/1:6/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram7
DEBUG:ceph-disk:get_dm_uuid /dev/ram7 uuid path is /sys/dev/block/1:7/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram8
DEBUG:ceph-disk:get_dm_uuid /dev/ram8 uuid path is /sys/dev/block/1:8/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram9
DEBUG:ceph-disk:get_dm_uuid /dev/ram9 uuid path is /sys/dev/block/1:9/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop0
DEBUG:ceph-disk:get_dm_uuid /dev/loop0 uuid path is /sys/dev/block/7:0/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop1
DEBUG:ceph-disk:get_dm_uuid /dev/loop1 uuid path is /sys/dev/block/7:1/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop2
DEBUG:ceph-disk:get_dm_uuid /dev/loop2 uuid path is /sys/dev/block/7:2/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop3
DEBUG:ceph-disk:get_dm_uuid /dev/loop3 uuid path is /sys/dev/block/7:3/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop4
DEBUG:ceph-disk:get_dm_uuid /dev/loop4 uuid path is /sys/dev/block/7:4/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop5
DEBUG:ceph-disk:get_dm_uuid /dev/loop5 uuid path is /sys/dev/block/7:5/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop6
DEBUG:ceph-disk:get_dm_uuid /dev/loop6 uuid path is /sys/dev/block/7:6/dm/uuid
DEBUG:ceph-disk:list_all_partitions: loop7
DEBUG:ceph-disk:get_dm_uuid /dev/loop7 uuid path is /sys/dev/block/7:7/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram10
DEBUG:ceph-disk:get_dm_uuid /dev/ram10 uuid path is /sys/dev/block/1:10/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram11
DEBUG:ceph-disk:get_dm_uuid /dev/ram11 uuid path is /sys/dev/block/1:11/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram12
DEBUG:ceph-disk:get_dm_uuid /dev/ram12 uuid path is /sys/dev/block/1:12/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram13
DEBUG:ceph-disk:get_dm_uuid /dev/ram13 uuid path is /sys/dev/block/1:13/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram14
DEBUG:ceph-disk:get_dm_uuid /dev/ram14 uuid path is /sys/dev/block/1:14/dm/uuid
DEBUG:ceph-disk:list_all_partitions: ram15
DEBUG:ceph-disk:get_dm_uuid /dev/ram15 uuid path is /sys/dev/block/1:15/dm/uuid
DEBUG:ceph-disk:list_all_partitions: cciss!c0d0
DEBUG:ceph-disk:get_dm_uuid /dev/cciss/c0d0 uuid path is /sys/dev/block/104:0/dm/uuid
DEBUG:ceph-disk:l

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Stolte, Felix
Hi Loic,

now it is working as expected. Thanks a lot for fixing it!

Output is:

/dev/cciss/c0d0p2 other
 /dev/cciss/c0d0p5 swap, swap
 /dev/cciss/c0d0p1 other, ext4, mounted on /
/dev/cciss/c0d1 :
 /dev/cciss/c0d1p1 ceph data, active, cluster ceph, osd.0, journal /dev/cciss/c0d7p2
/dev/cciss/c0d2 :
 /dev/cciss/c0d2p1 ceph data, active, cluster ceph, osd.1, journal /dev/cciss/c0d7p3
/dev/cciss/c0d3 :
 /dev/cciss/c0d3p1 ceph data, active, cluster ceph, osd.2, journal /dev/cciss/c0d7p4
/dev/cciss/c0d4 :
 /dev/cciss/c0d4p1 ceph data, active, cluster ceph, osd.3, journal /dev/cciss/c0d7p5
/dev/cciss/c0d5 :
 /dev/cciss/c0d5p1 ceph data, active, cluster ceph, osd.4, journal /dev/cciss/c0d7p6
/dev/cciss/c0d6 :
 /dev/cciss/c0d6p1 ceph data, active, cluster ceph, osd.10, journal /dev/cciss/c0d7p7
/dev/cciss/c0d7 :
 /dev/cciss/c0d7p2 ceph journal, for /dev/cciss/c0d1p1
 /dev/cciss/c0d7p3 ceph journal, for /dev/cciss/c0d2p1
 /dev/cciss/c0d7p4 ceph journal, for /dev/cciss/c0d3p1
 /dev/cciss/c0d7p5 ceph journal, for /dev/cciss/c0d4p1
 /dev/cciss/c0d7p6 ceph journal, for /dev/cciss/c0d5p1
 /dev/cciss/c0d7p7 ceph journal, for /dev/cciss/c0d6p1
/dev/loop0 other, unknown
/dev/loop1 other, unknown
/dev/loop2 other, unknown
/dev/loop3 other, unknown
/dev/loop4 other, unknown
/dev/loop5 other, unknown
/dev/loop6 other, unknown
/dev/loop7 other, unknown
/dev/ram0 other, unknown
/dev/ram1 other, unknown
/dev/ram10 other, unknown
/dev/ram11 other, unknown
/dev/ram12 other, unknown
/dev/ram13 other, unknown
/dev/ram14 other, unknown
/dev/ram15 other, unknown
/dev/ram2 other, unknown
/dev/ram3 other, unknown
/dev/ram4 other, unknown
/dev/ram5 other, unknown
/dev/ram6 other, unknown
/dev/ram7 other, unknown
/dev/ram8 other, unknown
/dev/ram9 other, unknown
/dev/sr0 other, unknown

Kind regards

Felix Stolte
IT-Services
Telefon 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
Internet: http://www.fz-juelich.de

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt

-----Original Message-----
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Friday, 11 December 2015 15:17
To: Stolte, Felix; Jens Rosenboom
Cc: ceph-us...@ceph.com
Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis

Hi Felix,

Could you try again? Hopefully that's the right one :-)

https://raw.githubusercontent.com/dachary/ceph/741da8ec91919db189ba90432ab4cee76a20309e/src/ceph-disk

is the lastest from https://github.com/ceph/ceph/pull/6880

Cheers

On 11/12/2015 09:16, Stolte, Felix wrote:
> Hi Jens,
> 
> output is attached (stderr + stdout)
> 
> Regards
> 
> [...]

-- 
Loïc Dachary, Artisan Logiciel Libre



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Required caps for cephfs

2019-04-30 Thread Stolte, Felix
Hi folks,

we are using nfs-ganesha to expose cephfs (Luminous) to NFS clients. I want to 
make use of snapshots, but limit the creation of snapshots to Ceph admins. A 
while ago I read about cephx capabilities that allow/deny the creation of 
snapshots, but I can't find the info anymore. Can someone help me?
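
What I vaguely remember is the 's' flag in the MDS capability string: creating
or deleting snapshots seems to require 'allow rws' rather than plain 'allow rw'.
A hedged example of what I have in mind (client names are placeholders):

# gateway key: read/write, but no snapshot rights (no 's' flag)
ceph auth caps client.ganesha mds 'allow rw' mon 'allow r' osd 'allow rw pool=cephfs_data'
# admin-side key that may also manage snapshots
ceph auth caps client.fsadmin mds 'allow rws' mon 'allow r' osd 'allow rw pool=cephfs_data'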

Best regards
Felix
IT-Services
Telefon 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix


smime.p7m
Description: S/MIME encrypted message
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
Hi folks,
 
we are running a Luminous cluster and using cephfs for file services. We use 
Tivoli Storage Manager to back up all data in the ceph filesystem to tape for 
disaster recovery. Backup runs on two dedicated servers, which mount the 
cephfs via the kernel client. In order to complete the backup in time we are using 
60 backup threads per server. While backup is running, ceph health often 
changes from "OK" to "2 clients failing to respond to cache pressure". After 
investigating and doing research in the mailing list I set the following 
parameters:
 
mds_cache_memory_limit = 34359738368 (32 GB) on MDS Server
 
client_oc_size = 104857600 (100 MB, default is 200 MB) on Backup Servers
 
All servers run Ubuntu 18.04 with kernel 4.15.0-47 and ceph 12.2.11. We 
have 3 MDS servers, 1 active, 2 standby. Changing to multiple active MDS 
servers is not an option, since we are planning to use snapshots. Cephfs holds 
78,815,975 files.
 
Any advice on getting rid of the warning would be very much appreciated. On a 
side note: although the MDS cache memory limit is set to 32 GB, htop shows 60 GB 
memory usage for the ceph-mds process.
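
For reference, this is roughly how I applied and verified the limit at runtime
(mds1 is a placeholder for the active MDS name; the daemon commands run on its
host):

# apply without restarting the MDS
ceph tell mds.mds1 injectargs '--mds_cache_memory_limit 34359738368'
# verify the limit and look at per-client cap counts via the admin socket
ceph daemon mds.mds1 config get mds_cache_memory_limit
ceph daemon mds.mds1 session ls | grep -E '"id"|num_caps'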
 
Best regards
Felix

-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix


smime.p7m
Description: S/MIME encrypted message
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
Hi Paul,

we are using Kernel 4.15.0-47.

Regards 
Felix

IT-Services
Telefon 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------
 

On 08.05.19, 13:58, "Paul Emmerich" wrote:

Which kernel are you using on the clients?

Paul
-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, May 8, 2019 at 1:10 PM Stolte, Felix  wrote:
>
> Hi folks,
>
> [...]


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-09 Thread Stolte, Felix
Thanks for the info Patrick. We are using ceph packages from the Ubuntu main repo, 
so it will take some weeks until I can do the update. In the meantime, is there 
anything I can do manually to decrease the number of caps held by the backup 
nodes, like flushing the client cache or something like that? Is it possible to 
mount cephfs without caching on specific mounts?

I had a look at the mds sessions and both nodes had over 5 million num_caps...
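
One thing I am about to try (hedged: it should only release caps the kernel
client is not actively using) is dropping the client-side dentry/inode caches
and watching the sessions shrink (mds1 again a placeholder):

# on each backup node, after the backup run finishes
sync && echo 2 > /proc/sys/vm/drop_caches   # freeing dentries/inodes lets the client return caps
# on the active MDS host, watch num_caps per session fall
ceph daemon mds.mds1 session ls | grep num_caps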

Regards Felix

-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------
 

On 08.05.19, 18:33, "Patrick Donnelly" wrote:

On Wed, May 8, 2019 at 4:10 AM Stolte, Felix  wrote:
>
> Hi folks,
>
> [...]

With clients doing backup it's likely that they hold millions of caps.
This is not a good situation to be in. I recommend upgrading to
12.2.12 as we recently backported a fix for the MDS to limit the
number of caps held by clients to 1M. Additionally, trimming the cache
and recalling caps is now throttled. This may help a lot for your
workload.

Note that these fixes haven't been backported to Mimic yet.

-- 
Patrick Donnelly
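
If I read the backport right, the limit Patrick refers to is the new
mds_max_caps_per_client option; the name and the 1M default are my assumption,
to be checked against the 12.2.12 release notes:

# assumed option name/default in 12.2.12 (mds1 is a placeholder)
ceph tell mds.mds1 injectargs '--mds_max_caps_per_client 1048576'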


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Nfs-ganesha with rados_kv backend

2019-05-29 Thread Stolte, Felix
Hi,

is anyone running an active-passive nfs-ganesha cluster with a cephfs backend and 
using the rados_kv recovery backend? My setup runs fine, but takeover is giving 
me a headache. On takeover I see the following messages in ganesha's log file:

29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 : 
ganesha.nfsd-9793[dbus_heartbeat] nfs_start_grace :STATE :EVENT :NFS Server Now 
IN GRACE, duration 5
29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 : 
ganesha.nfsd-9793[dbus_heartbeat] nfs_start_grace :STATE :EVENT :NFS Server 
recovery event 5 nodeid -1 ip 10.0.0.5
29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 : 
ganesha.nfsd-9793[dbus_heartbeat] rados_kv_traverse :CLIENT ID :EVENT :Failed 
to lst kv ret=-2
29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 : 
ganesha.nfsd-9793[dbus_heartbeat] rados_kv_read_recov_clids_takeover :CLIENT ID 
:EVENT :Failed to takeover
29/05/2019 15:38:26 : epoch 5cee88c4 : cephgw-e2-1 : ganesha.nfsd-9793[reaper] 
nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE

The result is clients hanging for up to 2 minutes. Has anyone run into the same 
problem?

Ceph Version: 12.2.11
nfs-ganesha: 2.7.3

ganesha.conf (identical on both nodes besides the nodeid in RADOS_KV):

NFS_CORE_PARAM {
Enable_RQUOTA = false;
Protocols = 3,4;
}

CACHEINODE {
Dir_Chunk = 0;
NParts = 1;
Cache_Size = 1;
}

NFS_krb5 {
Active_krb5 = false;
}

NFSv4 {
Only_Numeric_Owners = true;
RecoveryBackend = rados_kv;
Grace_Period = 5;
Lease_Lifetime = 5;
Minor_Versions = 1,2;
}

RADOS_KV {
ceph_conf = '/etc/ceph/ceph.conf';
userid = "ganesha";
pool = "cephfs_metadata";
namespace = "ganesha";
nodeid = "cephgw-k2-1";
}

Any hint would be appreciated.
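
For what it's worth, ret=-2 looks like ENOENT, so my first suspicion is that on
takeover the surviving node cannot find the failed node's recovery object in the
configured pool/namespace. A quick check (the per-nodeid object naming is my
assumption):

# list the recovery objects ganesha keeps in the rados_kv namespace
rados -p cephfs_metadata -N ganesha ls
# I would expect one recovery object per nodeid, i.e. for both cluster nodes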

Best regards 
Felix
-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Expected IO in luminous Ceph Cluster

2019-06-06 Thread Stolte, Felix
Hello folks,

we are running a Ceph cluster on Luminous consisting of 21 OSD nodes, each with 
9 8TB SATA drives and 3 Intel 3700 SSDs for Bluestore WAL and DB (1:3 ratio). 
OSD nodes have 10Gb for the public and cluster networks. The cluster has been 
running stable for over a year. We didn't have a closer look at IO until one of 
our customers started to complain about a VM we migrated from VMware with 
NetApp storage to our OpenStack cloud with Ceph storage. He sent us a sysbench 
report from the machine, which I could reproduce on other VMs as well as on a 
mounted RBD on physical hardware:

sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G --file-test-mode=rndrw --file-rw-ratio=2 run
sysbench 1.0.11 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 16
Initializing random number generator from current time

Extra file open flags: 0
128 files, 8MiB each
1GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 2.00
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test

File operations:
reads/s:  36.36
writes/s: 18.18
fsyncs/s: 2318.59

Throughput:
read, MiB/s:  0.57
written, MiB/s:   0.28

General statistics:
total time:  10.0071s
total number of events:  23755

Latency (ms):
 min:  0.01
 avg:  6.74
 max:   1112.58
 95th percentile: 26.68
 sum: 160022.67

Threads fairness:
events (avg/stddev):   1484.6875/52.59
execution time (avg/stddev):   10.0014/0.00

Are these numbers reasonable for a cluster of our size?
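
To separate raw sync-write latency from the sysbench mix, these are the
cross-checks I am using (pool name and file path are placeholders):

# 4k writes, one in flight: approximates the fsync round trip a VM sees
rados bench -p rbd 10 write -b 4096 -t 1
# same workload shape as the sysbench run, directly against a mounted RBD
fio --name=fsynctest --filename=/mnt/rbd/fio.dat --size=1g --bs=16k \
    --rw=randrw --rwmixread=67 --fsync=1 --runtime=30 --time_based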

Best regards
Felix
IT-Services
Telefon 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-06 Thread Stolte, Felix
I have no performance data from before we migrated to bluestore. You should start a 
separate topic regarding your question.

Could anyone with a more or less equally sized cluster post the output of a 
sysbench with the following parameters (either from inside an OpenStack VM or on 
a mounted RBD)?

sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G --file-test-mode=rndrw --file-rw-ratio=2 prepare

sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G --file-test-mode=rndrw --file-rw-ratio=2 run

Thanks in advance.

Regards
Felix

-------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-
-
 

On 06.06.19 at 15:09, "Marc Roos" wrote:


I am also thinking of moving the WAL/DB of the SATA HDDs to SSD. Did
you do tests before and after this change, and do you know what the difference
is in IOPS? And is the advantage bigger or smaller when your SATA HDDs are
slower?



Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
Hi Sinan,

that would be great. The numbers will probably differ a lot, since you have an
all-flash pool, but it would be interesting to see what we could expect from
such a configuration.

Regards
Felix

 

On 07.06.19 at 12:02, "Sinan Polat" wrote:

Hi Felix,

I can run your commands inside an OpenStack VM. The storage cluster
consists of 12 OSD servers, each holding 8x 960 GB SSDs. Luminous FileStore.
Replicated 3.

Would it help you to run your command on my cluster?

Sinan


Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
Hi Sinan,

thanks for the numbers. I am a little bit surprised that your SSD pool has
nearly the same stats as your SAS pool.

Nevertheless I would expect our pools to perform like your SAS pool, at least
with regard to writes, since all our write ops should land on our SSDs. But
since I only achieve 10% of your numbers, I need to figure out my bottleneck.
For now I have no clue. According to our monitoring system, neither network
bandwidth, RAM nor CPU usage is anywhere close to saturation.

Could someone advise me on where to look?
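
A sketch of where I would start digging myself, in case someone can point me
at the right counters (osd.0 and the device list are placeholders):

# per-OSD commit/apply latency as seen by the cluster
ceph osd perf
# per-disk utilization and await on an OSD node; high values on the SSDs
# would point at the WAL/DB devices
iostat -x 1
# internal latency counters of a single OSD daemon (run on its host)
ceph daemon osd.0 perf dump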

Regards Felix
 

On 07.06.19 at 13:33, "Sinan Polat" wrote:

Hi Felix,

I have 2 Pools, a SSD only and a SAS only pool.

SSD pool is spread over 12 OSD servers.
SAS pool is spread over 6 OSD servers.


See results (SSD Only Pool):

# sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G
--file-test-mode=rndrw --file-rw-ratio=2 run
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 16
Initializing random number generator from current time


Extra file open flags: (none)
128 files, 8MiB each
1GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 2.00
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
reads/s:  508.38
writes/s: 254.19
fsyncs/s: 32735.14

Throughput:
read, MiB/s:  7.94
written, MiB/s:   3.97

General statistics:
total time:  10.0103s
total number of events:  36

Latency (ms):
 min:0.00
 avg:0.48
 max:   10.18
 95th percentile:2.11
 sum:   159830.07

Threads fairness:
events (avg/stddev):   20833.5000/335.70
execution time (avg/stddev):   9.9894/0.00
#

See results (SAS Only Pool):
# sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G
--file-test-mode=rndrw --file-rw-ratio=2 run
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 16
Initializing random number generator from current time


Extra file open flags: (none)
128 files, 8MiB each
1GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 2.00
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
reads/s:  490.11
writes/s: 245.10
fsyncs/s: 31565.00

Throughput:
read, MiB/s:  7.66
written, MiB/s:   3.83

General statistics:
total time:  10.0143s
total number of events:  321477

Latency (ms):
 min:0.00
 avg:0.50
 max:   20.50
 95th percentile:2.30
 sum:   159830.82

Threads fairness:
events (avg/stddev):   20092.3125/186.66
execution time (avg/stddev):   9.9894/0.00
#


Kind regards,
Sinan Polat




Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-11 Thread Stolte, Felix
Hi John,

I have 9 HDDs and 3 SSDs behind a SAS3008 PCI-Express Fusion-MPT SAS-3 HBA
from LSI. The HDDs are HGST HUH721008AL (8 TB, 7200 rpm), the SSDs are Toshiba
PX05SMB040 (400 GB). The OSDs are Bluestore; three HDDs share one SSD for WAL
and DB (DB size 50 GB, WAL 10 GB). I did not change any cache settings.

I disabled C-states, which improved performance slightly. Do you suggest
turning off the caching on the disks?
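
For completeness, this is roughly how both knobs can be inspected (the device
name is a placeholder; sdparm applies to SAS drives):

# query / disable the volatile write cache of a drive
hdparm -W /dev/sda
hdparm -W 0 /dev/sda
# for SAS drives behind the HBA, check the WCE bit instead
sdparm --get=WCE /dev/sda
# verify that deep C-states are really off
cat /sys/module/intel_idle/parameters/max_cstate
cpupower idle-info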

Regards
Felix



From: John Petrini 
Date: Friday, 7 June 2019 at 15:49
To: "Stolte, Felix" 
Cc: Sinan Polat , ceph-users 
Subject: Re: [ceph-users] Expected IO in luminous Ceph Cluster

How's iowait look on your disks? 

How have you configured your disks and what are your cache settings? 

Did you disable cstates? 


[ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-27 Thread Stolte, Felix
Hi folks,

I have a Nautilus 14.2.1 cluster with a non-default cluster name (ceph_stag
instead of ceph). I set “cluster = ceph_stag” in /etc/ceph/ceph_stag.conf.

ceph-volume is using the correct config file but does not use the specified
cluster name. Did I hit a bug, or do I need to define the cluster name
elsewhere?
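
What I will try as a workaround, assuming the flag is still honored in 14.2.1
(custom cluster names are deprecated upstream):

# pass the cluster name to ceph-volume explicitly instead of relying on
# config detection
ceph-volume --cluster ceph_stag lvm list
# the regular ceph CLI picks the name up from CEPH_ARGS as well
CEPH_ARGS="--cluster ceph_stag" ceph -s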

Regards
Felix
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Stolte, Felix


smime.p7m
Description: S/MIME encrypted message
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Missing Ubuntu Packages on Luminous

2019-07-08 Thread Stolte, Felix
Hi folks,

I want to use the community repository http://download.ceph.com/debian-luminous
for my Luminous cluster instead of the packages provided by Ubuntu itself. But
apparently only the ceph-deploy package is available for bionic (Ubuntu 18.04).
All packages exist for trusty, though. Is this intended behavior?
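
For reference, this is the repo line I would expect to work for bionic,
following the documented pattern for the other releases (assuming the packages
get published):

echo "deb https://download.ceph.com/debian-luminous/ bionic main" \
  | sudo tee /etc/apt/sources.list.d/ceph.list
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt update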

Regards Felix
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] New best practices for osds???

2019-07-16 Thread Stolte, Felix
Hi guys,

our Ceph cluster is performing well below what it should, given the disks we
are using. We could narrow it down to the storage controller (LSI SAS3008 HBA)
in combination with a SAS expander. Yesterday we had a meeting with our
hardware reseller and sales representatives of the hardware manufacturer to
resolve the issue.

They told us that "best practice" for Ceph would be to deploy each disk as a
single-drive RAID 0 on a RAID controller with a big write-back cache.

Since this "best practice" is new to me, I would like to hear your opinion on
this topic.

Regards Felix

 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] SSDs behind Hardware Raid

2019-12-04 Thread Stolte, Felix


smime.p7m
Description: S/MIME encrypted message
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSDs behind Hardware Raid

2019-12-04 Thread Stolte, Felix


smime.p7m
Description: S/MIME encrypted message
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com