I'm actually using ZFS. I've been a ZFS user for 15 years now, and I have
been saved too many times by its ability to detect issues (especially
non-disk-related hardware issues) that other filesystems don't.

Regarding the stripe_length, I have verified that changing the stripe_length
changes the size of the resulting files, and that returning to the default
value also gave the same result.
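
For reference, this is roughly how I inspect and change the layout via the
CephFS virtual xattrs (a sketch assuming a CephFS mount at /mnt/cephfs and
the getfattr/setfattr tools; the path and the value are only illustrative):

    # show the current directory layout (stripe_unit, stripe_count, object_size, pool)
    getfattr -n ceph.dir.layout /mnt/cephfs/somedir

    # change the stripe unit inherited by files created under it afterwards
    setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/somedir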

On Sun, Apr 9, 2017 at 4:51 PM Laszlo Budai <las...@componentsoft.eu> wrote:

> Hello Kate,
>
> thank you for your answer. Are you using ext4 or xfs in your VMs?
>
> Kind regards,
> Laszlo
>
>
> On 08.04.2017 14:11, Kate Ward wrote:
> > I had a similar issue where files were padded out to the length of
> > ceph.dir.layout.stripe_unit (see the email entitled "Issue with Ceph padding
> > files out to ceph.dir.layout.stripe_unit size" on this list). I never found
> > a solution, but I can say that I am currently running 10.2.6 and haven't
> > seen the issue reappear so far. 10.2.5 worked as well, as best I recall.
> >
> > I realise I'm running the next release after you, and that the
> > circumstances aren't the same, but the patterns of behaviour are similar
> > enough that I wanted to raise awareness.
> >
> > k8
> >
> > On Sat, Apr 8, 2017 at 6:39 AM Laszlo Budai <las...@componentsoft.eu> wrote:
> >
> >     Hello Peter,
> >
> >     Thank you for your answer.
> >     In our setup the virtual machines are running in KVM and accessing
> >     the Ceph storage using librbd.
> >     The rbd cache is set to "writethrough until flush = true". Here is
> >     the result of ceph config show | grep cache:
> >     # ceph --admin-daemon /run/ceph/guests/ceph-client.cinder.30601.140275026217152.asok config show | grep cache
> >          "debug_objectcacher": "0\/5",
> >          "mon_osd_cache_size": "10",
> >          "mon_cache_target_full_warn_ratio": "0.66",
> >          "mon_warn_on_cache_pools_without_hit_sets": "true",
> >          "client_cache_size": "16384",
> >          "client_cache_mid": "0.75",
> >          "mds_cache_size": "100000",
> >          "mds_cache_mid": "0.7",
> >          "mds_dump_cache_on_map": "false",
> >          "mds_dump_cache_after_rejoin": "false",
> >          "osd_pool_default_cache_target_dirty_ratio": "0.4",
> >          "osd_pool_default_cache_target_full_ratio": "0.8",
> >          "osd_pool_default_cache_min_flush_age": "0",
> >          "osd_pool_default_cache_min_evict_age": "0",
> >          "osd_tier_default_cache_mode": "writeback",
> >          "osd_tier_default_cache_hit_set_count": "4",
> >          "osd_tier_default_cache_hit_set_period": "1200",
> >          "osd_tier_default_cache_hit_set_type": "bloom",
> >          "osd_tier_default_cache_min_read_recency_for_promote": "1",
> >          "osd_map_cache_size": "500",
> >          "osd_pg_object_context_cache_count": "64",
> >          "leveldb_cache_size": "134217728",
> >          "rocksdb_cache_size": "0",
> >          "filestore_omap_header_cache_size": "1024",
> >          "filestore_fd_cache_size": "128",
> >          "filestore_fd_cache_shards": "16",
> >          "keyvaluestore_header_cache_size": "4096",
> >          "rbd_cache": "true",
> >          "rbd_cache_writethrough_until_flush": "true",
> >          "rbd_cache_size": "134217728",
> >          "rbd_cache_max_dirty": "100663296",
> >          "rbd_cache_target_dirty": "67108864",
> >          "rbd_cache_max_dirty_age": "1",
> >          "rbd_cache_max_dirty_object": "0",
> >          "rbd_cache_block_writes_upfront": "false",
> >          "rgw_cache_enabled": "true",
> >          "rgw_cache_lru_size": "10000",
> >          "rgw_keystone_token_cache_size": "10000",
> >          "rgw_bucket_quota_cache_size": "10000",
> >
> >
> >     I did some tests, and the problem appeared when I was using ext4 in
> >     the VM, but not in the case of xfs.
> >     I did another test where I called a sync at the end of the while loop,
> >     and in this case the issue did NOT appear.
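> >     In other words, the loop for that second test was essentially:
> >
> >     while sleep 1; do date >> somefile ; sync ; done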
> >
> >
> >     Kind regards,
> >     Laszlo
> >
> >     On 08.04.2017 00:07, Peter Maloney wrote:
> >     > You should describe your configuration...
> >     >
> >     > krbd? librbd? cephfs?
> >     > is rbd_cache = true?
> >     > rbd cache writethrough until flush = true?
> >     > is it kvm?
> >     > maybe the filesystem in the VM is relevant
> >     >
> >     > (I saw something similar testing cephfs... if I blacklisted a client
> >     > and then force unmounted, I would get whole files (appended I think)
> >     > or ends of files (new files) with zeros)
> >     >
> >     > On 04/07/17 13:36, Laszlo Budai wrote:
> >     >> Hello,
> >     >>
> >     >> We have observed that there are null characters written into the
> >     >> open files when hard rebooting a VM. Is this a known issue?
> >     >> Our VM is using Ceph (0.94.10) storage.
> >     >> We have a script like this:
> >     >> while sleep 1; do date >> somefile ; done
> >     >>
> >     >> If we hard reset the VM while the above line is running, we end up
> >     >> with NULL characters in the file:
> >     >>
> >     >> 00000000  54 68 75 20 41 70 72 20  20 36 20 31 34 3a 33 37  |Thu Apr  6 14:37|
> >     >> 00000010  3a 33 33 20 43 45 53 54  20 32 30 31 37 0a 54 68  |:33 CEST 2017.Th|
> >     >> 00000020  75 20 41 70 72 20 20 36  20 31 34 3a 33 37 3a 33  |u Apr  6 14:37:3|
> >     >> 00000030  34 20 43 45 53 54 20 32  30 31 37 0a 54 68 75 20  |4 CEST 2017.Thu |
> >     >> 00000040  41 70 72 20 20 36 20 31  34 3a 33 37 3a 33 35 20  |Apr  6 14:37:35 |
> >     >> 00000050  43 45 53 54 20 32 30 31  37 0a 54 68 75 20 41 70  |CEST 2017.Thu Ap|
> >     >> 00000060  72 20 20 36 20 31 34 3a  33 37 3a 33 36 20 43 45  |r  6 14:37:36 CE|
> >     >> 00000070  53 54 20 32 30 31 37 0a  54 68 75 20 41 70 72 20  |ST 2017.Thu Apr |
> >     >> 00000080  20 36 20 31 34 3a 33 37  3a 33 39 20 43 45 53 54  | 6 14:37:39 CEST|
> >     >> 00000090  20 32 30 31 37 0a 54 68  75 20 41 70 72 20 20 36  | 2017.Thu Apr  6|
> >     >> 000000a0  20 31 34 3a 33 37 3a 34  30 20 43 45 53 54 20 32  | 14:37:40 CEST 2|
> >     >> 000000b0  30 31 37 0a 54 68 75 20  41 70 72 20 20 36 20 31  |017.Thu Apr  6 1|
> >     >> 000000c0  34 3a 33 37 3a 34 31 20  43 45 53 54 20 32 30 31  |4:37:41 CEST 201|
> >     >> 000000d0  37 0a 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |7...............|
> >     >> 000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> >     >>
> >     >> We've observed the same in the syslog file also.
> >     >>
> >     >> Any thoughts about it?
> >     >>
> >     >> kind regards,
> >     >> Laszlo
> >     >
> >     >
> >     >
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
