Hi! I use a Ceph pool mounted via CephFS as CloudStack secondary storage, and I have a problem with the consistency of files stored on it. I downloaded the same file onto it three times and checksummed each copy, but every copy produced a different checksum (only the second one was valid). Each copy's checksum was stable across repeated reads (I ran md5sum twice on each), yet every new download gave a different result. Please help me find the point of failure (PoF).
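In case it helps to reproduce, the test amounts to the following loop (a minimal sketch of what I did, using the same URL as in the session below; the real session let wget pick the .1/.2 suffixes itself):

  # Download the same file three times and checksum each copy twice;
  # within a copy the checksum is stable, but it differs between copies.
  URL=http://lw01p01-templates01.example.com/ISO/XXX.iso
  for i in 0 1 2; do
      wget -q -O XXX.iso.$i "$URL"
      md5sum XXX.iso.$i
      md5sum XXX.iso.$i   # second read of the same copy matches the first
  done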
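Since the layout in the session below shows layout.object_size=4194304 with stripe_count=1, the first offset at which two copies differ maps directly to a 4 MiB object index, which might help narrow down which object (and so which PG/OSD) is affected. A sketch, assuming GNU cmp, which prints 1-based decimal byte offsets:

  # Map the first differing byte between two copies to its CephFS object index.
  OFF=$(cmp -l XXX.iso XXX.iso.1 | head -n1 | awk '{print $1}')
  echo "first difference at byte $OFF => object index $(( (OFF - 1) / 4194304 ))"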
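To rule out client-side page-cache effects before blaming the OSDs, one check is to re-read a copy with cold caches, so the checksum comes from the cluster rather than from local memory (a diagnostic sketch; assumes root on the client):

  # Drop the kernel page cache, then re-read the file from the OSDs.
  sync
  echo 3 > /proc/sys/vm/drop_caches
  md5sum XXX.iso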
root@lw01p01-mgmt01:/export/secondary# uname -a
Linux lw01p01-mgmt01 3.14.1-031401-generic #201404141220 SMP Mon Apr 14 16:21:48 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

root@lw01p01-mgmt01:/export/secondary# ceph status
    cluster e405d974-3fb6-42c8-b34a-a0ac5a1fef3a
     health HEALTH_OK
     monmap e1: 3 mons at {lw01p01-node01=10.0.15.1:6789/0,lw01p01-node02=10.0.15.2:6789/0,lw01p01-node03=10.0.15.3:6789/0}, election epoch 52, quorum 0,1,2 lw01p01-node01,lw01p01-node02,lw01p01-node03
     mdsmap e17: 1/1/1 up {0=lw01p01-node01=up:active}
     osdmap e338: 20 osds: 20 up, 20 in
      pgmap v160161: 656 pgs, 6 pools, 30505 MB data, 8159 objects
            61377 MB used, 25512 GB / 25572 GB avail
                 656 active+clean
  client io 0 B/s rd, 1418 B/s wr, 1 op/s

root@lw01p01-mgmt01:/export/secondary# mount | grep ceph
10.0.15.1:/ on /export/secondary type ceph (name=cloudstack,key=client.cloudstack)

root@lw01p01-mgmt01:/export/secondary# cephfs /export/secondary show_layout
layout.data_pool:     3
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1

root@lw01p01-mgmt01:/export/secondary# wget http://lw01p01-templates01.example.com/ISO/XXX.iso
--2014-08-27 10:12:39--  http://lw01p01-templates01.example.com/ISO/XXX.iso
Resolving lw01p01-templates01.example.com (lw01p01-templates01.example.com)... 10.0.15.1
Connecting to lw01p01-templates01.example.com (lw01p01-templates01.example.com)|10.0.15.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3249803264 (3.0G) [application/x-iso9660-image]
Saving to: ‘XXX.iso’

100%[=====================================================>] 3,249,803,264 179MB/s   in 18s

2014-08-27 10:12:57 (173 MB/s) - ‘XXX.iso’ saved [3249803264/3249803264]

root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso
45b940c6cb76ed0e76c9fac4cba01c3c  XXX.iso
root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso
45b940c6cb76ed0e76c9fac4cba01c3c  XXX.iso

root@lw01p01-mgmt01:/export/secondary# wget http://lw01p01-templates01.example.com/ISO/XXX.iso
--2014-08-27 10:14:11--  http://lw01p01-templates01.example.com/ISO/XXX.iso
Resolving lw01p01-templates01.example.com (lw01p01-templates01.example.com)... 10.0.15.1
Connecting to lw01p01-templates01.example.com (lw01p01-templates01.example.com)|10.0.15.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3249803264 (3.0G) [application/x-iso9660-image]
Saving to: ‘XXX.iso.1’

100%[=====================================================>] 3,249,803,264 154MB/s   in 19s

2014-08-27 10:14:30 (161 MB/s) - ‘XXX.iso.1’ saved [3249803264/3249803264]

root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.1
5488d85797cd53d1d1562e73122522c1  XXX.iso.1
root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.1
5488d85797cd53d1d1562e73122522c1  XXX.iso.1

root@lw01p01-mgmt01:/export/secondary# wget http://lw01p01-templates01.example.com/ISO/XXX.iso
--2014-08-27 10:15:23--  http://lw01p01-templates01.example.com/ISO/XXX.iso
Resolving lw01p01-templates01.example.com (lw01p01-templates01.example.com)... 10.0.15.1
Connecting to lw01p01-templates01.example.com (lw01p01-templates01.example.com)|10.0.15.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3249803264 (3.0G) [application/x-iso9660-image]
Saving to: ‘XXX.iso.2’

100%[=====================================================>] 3,249,803,264 160MB/s   in 20s

2014-08-27 10:15:44 (152 MB/s) - ‘XXX.iso.2’ saved [3249803264/3249803264]

root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2
5e28d425f828440b025d769609c5bb41  XXX.iso.2
root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2
5e28d425f828440b025d769609c5bb41  XXX.iso.2

--
Michael Kolomiets