[ceph-users] Bluestore: compression heuristic

2017-06-27 Thread Timofey Titovets
Hi, I've written some heuristic code for the btrfs compression subsystem; the userspace version is simple and fast enough (~4 GiB/s on my notebook). The heuristic can detect well- and poorly-compressible data, decreasing CPU load by avoiding compression of data that won't compress well. https://github.com/Nefelim4ag/Entropy_Calcul
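As a rough illustration of the idea (not the code from the linked repository), such a heuristic can be built on Shannon entropy over a byte histogram: data that already looks near-random (close to 8 bits/byte, e.g. already compressed or encrypted) is skipped. The function names and the 7.0 bits/byte cut-off below are illustrative assumptions, sketched in Python:

# Minimal sketch of an entropy-based compressibility check:
# sample a buffer, build a byte histogram, compute Shannon entropy,
# and skip compression when the data already looks close to random.
import math
import os

def shannon_entropy(buf: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte (0..8)."""
    if not buf:
        return 0.0
    counts = [0] * 256
    for b in buf:
        counts[b] += 1
    n = len(buf)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def looks_compressible(sample: bytes, threshold: float = 7.0) -> bool:
    """Near-8 bits/byte means random-looking data, so compression is
    likely wasted effort. The 7.0 cut-off is an illustrative assumption."""
    return shannon_entropy(sample) < threshold

# Repetitive text -> low entropy, random bytes -> ~8 bits/byte.
print(looks_compressible(b"hello world " * 1000))   # True
print(looks_compressible(os.urandom(16384)))        # False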

Re: [ceph-users] Network redundancy...

2017-05-29 Thread Timofey Titovets
2017-05-29 11:37 GMT+03:00 Marco Gaiarin : > > I've set up a little Ceph cluster (3 hosts, 12 OSDs), all connected to a > single switch, using 2x1Gbit/s LACP links. > > Supposing I have two identical switches, is there some way to set up a > ''redundant'' configuration? > For example, something similar

[ceph-users] [Question] RBD Striping

2017-04-27 Thread Timofey Titovets
Hi, I found that the RBD striping documentation is not detailed enough. Can someone explain how RBD stripes its data over multiple objects, and why it is better to use striping instead of a small RBD object size? Also, if RBD uses an object size of 4MB by default, does that mean that every time an object is modified the OSD
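For illustration, here is a rough Python sketch of how striping parameters (object_size, stripe_unit, stripe_count) could map a linear image offset to an object number and in-object offset, based on the layout described in the striping documentation; the function name and structure are mine, not librbd code:

# Rough sketch of RBD-style striping: consecutive stripe units are spread
# round-robin over stripe_count objects within an object set.
# Default striping is stripe_unit=object_size, stripe_count=1.

def map_offset(offset, object_size=4 << 20, stripe_unit=4 << 20, stripe_count=1):
    su_index = offset // stripe_unit          # which stripe unit overall
    su_offset = offset % stripe_unit          # offset inside that unit
    stripe_no = su_index // stripe_count      # which "row" of the stripe
    obj_in_set = su_index % stripe_count      # which object in the object set
    units_per_object = object_size // stripe_unit
    object_set = stripe_no // units_per_object
    su_in_object = stripe_no % units_per_object
    object_no = object_set * stripe_count + obj_in_set
    return object_no, su_in_object * stripe_unit + su_offset

# With defaults, offset 5 MiB lands 1 MiB into object 1:
print(map_offset(5 << 20))                                    # (1, 1048576)
# With stripe_unit=64KiB, stripe_count=4, small sequential writes
# fan out across 4 objects instead of hammering a single one:
print(map_offset(5 << 20, stripe_unit=64 << 10, stripe_count=4))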

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Timofey Titovets
JFYI: Today we got a totally stable Ceph + ESXi setup "without hacks", and it passes stress tests. 1. Don't try to pass RBD directly to LIO; that setup is unstable. 2. Instead, use QEMU + KVM (I use Proxmox to create the VM). 3. Attach the RBD to the VM as a VIRTIO-SCSI disk (must be exported by target_co

[ceph-users] Ceph Kraken + CephFS + Kernel NFSv3/v4.1 + ESXi

2017-03-13 Thread Timofey Titovets
Hi, has anyone tried this stack? Maybe someone can provide some feedback about it? Thanks. P.S. AFAIK, right now Ceph RBD + LIO lacks iSCSI HA support, so I'm thinking about NFS. UPD1: I did some tests and got strange behavior: every few minutes, IO from the NFS client to the NFS proxy just stops, no message

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Timofey Titovets
flock is at /usr/bin/flock > > My problem is that the "ceph" service is doing everything, and all other > systemd services do not run... > > it seems there is a problem switching from the old init.d services to the new > systemd.. > > On 12/03/2015 08:31 PM, Timofey Titovets

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Timofey Titovets
Lol, it's open source, guys: https://github.com/ceph/ceph/tree/master/systemd ceph-disk@ 2015-12-03 21:59 GMT+03:00 Florent B : > "ceph" service does mount : > > systemctl status ceph -l > ● ceph.service - LSB: Start Ceph distributed file system daemons at boot > time > Loaded: loaded (/etc/init.d

Re: [ceph-users] Remap PGs with size=1 on specific OSD

2015-12-03 Thread Timofey Titovets
On 3 Dec 2015 9:35 p.m., "Robert LeBlanc" wrote: > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Reweighting the OSD to 0.0 or setting the osd out (but not terminating > the process) should allow it to backfill the PGs to a new OSD. I would > try the reweight first (and in a test environ

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Timofey Titovets
On 3 Dec 2015 8:56 p.m., "Florent B" wrote: > > By the way, when the system boots, the "ceph" service starts everything > fine. So the "ceph-osd@" service is disabled => how to restart an OSD ?! > AFAIK, Ceph now has 2 services: 1. Mount the device 2. Start the OSD. Also, a service can be disabled, but this not m

[ceph-users] Ceph osd on btrfs maintenance/optimization

2015-12-02 Thread Timofey Titovets
Hi list, I created a small tool for maintenance/optimization of btrfs-based OSD stores: https://github.com/Nefelim4ag/ceph-btrfs-butler Maybe it can be useful for somebody. Right now the script can find rarely accessed objects on disk and, based on this information, can: 1. Defrag objects 2. Compress objects 3. De
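Not the linked script itself, but a minimal Python sketch of the idea, assuming an atime-based notion of "rarely accessed" and shelling out to the btrfs CLI; the OSD path, age cut-off, and zlib choice are illustrative assumptions:

# Walk an OSD's filestore directory, pick files that have not been read
# for a while, and defragment/recompress them with the btrfs CLI.
import os
import subprocess
import time

OSD_DIR = "/var/lib/ceph/osd/ceph-0/current"   # hypothetical OSD store path
MAX_AGE = 30 * 24 * 3600                       # "rarely accessed" = 30 days

def cold_objects(root, max_age=MAX_AGE):
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if now - os.stat(path).st_atime > max_age:
                    yield path
            except OSError:
                pass  # object removed or renamed while scanning

def defrag_and_compress(path):
    # 'btrfs filesystem defragment -czlib' rewrites the extents compressed.
    subprocess.run(["btrfs", "filesystem", "defragment", "-czlib", path],
                   check=False)

if __name__ == "__main__":
    for obj in cold_objects(OSD_DIR):
        defrag_and_compress(obj)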

Re: [ceph-users] RBD: Max queue size

2015-11-30 Thread Timofey Titovets
Big thanks, Ilya, for the explanation. 2015-11-30 22:15 GMT+03:00 Ilya Dryomov : > On Mon, Nov 30, 2015 at 7:47 PM, Timofey Titovets > wrote: >> >> On 30 Nov 2015 21:19, "Ilya Dryomov" wrote: >>> >>> On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets

Re: [ceph-users] RBD: Max queue size

2015-11-30 Thread Timofey Titovets
On 30 Nov 2015 21:19, "Ilya Dryomov" wrote: > > On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets wrote: > > Hi list, > > Short: > > I just want to ask why I can't do: > > echo 129 > /sys/class/block/rbdX/queue/nr_requests > > > >

[ceph-users] RBD: Max queue size

2015-11-30 Thread Timofey Titovets
Hi list, Short: I just want to ask why I can't do: echo 129 > /sys/class/block/rbdX/queue/nr_requests i.e. why can't I set a value greater than 128? Why such a restriction? Long: Usage example: I have slow Ceph HDD-based storage and I want to export it via an iSCSI proxy machine for an ESXi cluster. If I have

[ceph-users] RBD fiemap already safe?

2015-11-30 Thread Timofey Titovets
Hi list, AFAIK fiemap is disabled by default because it causes RBD corruption. Has anyone already tested it with recent kernels? Thanks

Re: [ceph-users] Performance issues on small cluster

2015-11-10 Thread Timofey Titovets
On a small cluster I've gotten great sequential performance by using btrfs on the OSDs, a journal file (max sync interval ~180s), and the option filestore journal parallel = true. 2015-11-11 10:12 GMT+03:00 Ben Town : > Hi Guys, > > I'm in the process of configuring a ceph cluster and am getting some less

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Timofey Titovets
Alex, do you use ESXi? If yes, do you use the iSCSI software adapter? If yes, do you use active/passive, fixed, or round-robin MPIO? Do you tune anything on the initiator side? If possible, can you give more details? Please. 2015-11-09 17:41 GMT+03:00 Timofey Titovets : > Great thanks, Alex, you give me a h

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Timofey Titovets
d be just the superior > switch response on a higher end switch. > > Using blk_mq scheduler, it's been reported to improve performance on random > IO. > > Good luck! > > -- > Alex Gorbachev > Storcium > > On Sun, Nov 8, 2015 at 5:07 PM, Timofey Titovets > wr

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-08 Thread Timofey Titovets
5-10s. > > Sorry I couldn't be of more help. > > Nick > >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Timofey Titovets >> Sent: 07 November 2015 11:44 >> To: ceph-users@lists.ceph.com &g

[ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-07 Thread Timofey Titovets
Hi list, I'm searching for advice from somebody who uses a legacy client like ESXi with Ceph. I'm trying to build high-performance, fault-tolerant storage with Ceph 0.94. In production I have 50+ TB of VMs (~800 VMs) on 8 NFS servers, each with: 2x Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 12x Seagate ST2000NM0023, 1x LS

Re: [ceph-users] ceph 0.72 tgt vmware performance very bad

2015-06-22 Thread Timofey Titovets
Which backend do you use in tgt for rbd? 2015-06-23 5:44 GMT+03:00 maoqi1982 : > Hi list: > my cluster includes 4 servers, 12 OSDs (4 OSDs/server), 1 mon (1 server), 1Gbps > links, ceph version is 0.72, the cluster status is ok, the client is vmware > vcenter. > we use rbd as the tgt backend and expose a 2TB LUN via iscs

Re: [ceph-users] How does CephFS export storage?

2015-06-22 Thread Timofey Titovets
CephFS is just a filesystem, like ext4, btrfs, etc., but you can export it via an NFS or Samba share. P.S. I did test the kernel NFS implementation and NFS-Ganesha; both had stability problems in my tests (strange deadlocks). 2015-06-22 11:16 GMT+03:00 Joakim Hansson : > Hi list! > I'm doing an internship at a

Re: [ceph-users] Crush rule freeze cluster

2015-05-11 Thread Timofey Titovets
Hey! I caught it again. It's a kernel bug: the kernel crashes if I try to map an rbd device with a map like the one above! Hooray! 2015-05-11 12:11 GMT+03:00 Timofey Titovets : > FYI and history > Rule: > # rules > rule replicated_ruleset { > ruleset 0 > type replicated > min_size 1 >

Re: [ceph-users] Crush rule freeze cluster

2015-05-11 Thread Timofey Titovets
FYI and history. Rule:
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 0 type room
    step choose firstn 0 type rack
    step choose firstn 0 type host
    step chooseleaf firstn 0 type osd
    step emit
}
And after reset

Re: [ceph-users] Crush rule freeze cluster

2015-05-10 Thread Timofey Titovets
Georgios, oh, sorry for my poor English _-_, maybe I expressed what I want poorly =] I know how to write a simple CRUSH rule and how to use it; I want several things: 1. To understand why, after injecting a bad map, my test node goes offline. This is unexpected. 2. Maybe somebody can explain what and wh

[ceph-users] Crush rule freeze cluster

2015-05-09 Thread Timofey Titovets
Hi list, I've been experimenting with crush maps, trying to get RAID1-like behaviour (if the cluster has only 1 working OSD node, duplicate the data across its local disks, to avoid data loss in case of a local disk failure and to let clients keep working, because this would not be a degraded state) (in the best case, I want dyna

Re: [ceph-users] Btrfs defragmentation

2015-05-06 Thread Timofey Titovets
2015-05-06 20:51 GMT+03:00 Lionel Bouton : > On 05/05/15 02:24, Lionel Bouton wrote: >> On 05/04/15 01:34, Sage Weil wrote: >>> On Mon, 4 May 2015, Lionel Bouton wrote: Hi, we began testing one Btrfs OSD volume last week and for this first test we disabled autodefrag and began t

Re: [ceph-users] Btrfs defragmentation

2015-05-04 Thread Timofey Titovets
Hi list, excuse me, what I'm saying is a bit off topic. @Lionel, if you use btrfs, have you already tried btrfs compression for the OSDs? If yes, can you share your experience? 2015-05-05 3:24 GMT+03:00 Lionel Bouton : > On 05/04/15 01:34, Sage Weil wrote: >> On Mon, 4 May 2015, Lionel Bouton wrote: