[ceph-users] Ceph + SAMBA (vfs_ceph)
I'm running a Ceph installation in a lab to evaluate it for production, and I have a cluster running, but I need to mount it on different Windows servers and desktops. I created an NFS share and was able to mount it on my Linux desktop, but not on a Windows 10 desktop. Since it seems that Windows Server 2016 is required to mount the NFS share, I abandoned that route and decided to try Samba. I compiled a version of Samba that has the vfs_ceph module, but I can't set it up correctly. It seems I'm missing some user configuration, as I've hit this error:

"
~$ smbclient -U samba.gw //10.17.6.68/cephfs_a
WARNING: The "syslog" option is deprecated
Enter WORKGROUP\samba.gw's password:
session setup failed: NT_STATUS_LOGON_FAILURE
"

Does anyone know of a good setup tutorial to follow?

This is my smb config so far:

# Global parameters
[global]
        load printers = No
        netbios name = SAMBA-CEPH
        printcap name = cups
        security = USER
        workgroup = CEPH
        smbd: backgroundqueue = no
        idmap config * : backend = tdb
        cups options = raw
        valid users = samba

[cephfs]
        create mask = 0777
        directory mask = 0777
        guest ok = Yes
        guest only = Yes
        kernel share modes = No
        path = /
        read only = No
        vfs objects = ceph
        ceph: user_id = samba
        ceph:config_file = /etc/ceph/ceph.conf

Thanks

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.
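For context: NT_STATUS_LOGON_FAILURE at session setup usually points at the SMB login itself (the connecting user is not known to Samba's password backend) rather than at vfs_ceph; note also that the posted config defines a [cephfs] share while the test targets //10.17.6.68/cephfs_a. A minimal sketch of registering the user with the tdbsam backend, assuming a local account named samba (names and paths are illustrative, not from the thread):

# create a matching local account and register it with Samba's passdb (tdbsam)
useradd --no-create-home --shell /sbin/nologin samba
smbpasswd -a samba              # sets the SMB password smbclient will prompt for
pdbedit -L                      # confirm the user now exists in the passdb

# retest against the share name actually defined in smb.conf
smbclient -U samba //10.17.6.68/cephfs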
Re: [ceph-users] Ceph + SAMBA (vfs_ceph)
This is the result:

# testparm -s
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[homes]"
Processing section "[cephfs]"
Processing section "[printers]"
Processing section "[print$]"
Loaded services file OK.
Server role: ROLE_STANDALONE

# Global parameters
[global]
        load printers = No
        netbios name = SAMBA-CEPH
        printcap name = cups
        security = USER
        workgroup = CEPH
        smbd: backgroundqueue = no
        idmap config * : backend = tdb
        cups options = raw
        valid users = samba
...

[cephfs]
        create mask = 0777
        directory mask = 0777
        guest ok = Yes
        guest only = Yes
        kernel share modes = No
        path = /
        read only = No
        vfs objects = ceph
        ceph: user_id = samba
        ceph:config_file = /etc/ceph/ceph.conf

I cut off some parts I thought were not relevant.

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
On Wednesday, August 28, 2019 5:44 AM, Maged Mokhtar wrote:

> On 27/08/2019 21:39, Salsa wrote:
>
>> I'm running a ceph installation on a lab to evaluate for production and I have a cluster running, but I need to mount on different windows servers and desktops. I created an NFS share and was able to mount it on my Linux desktop, but not a Win 10 desktop. Since it seems that Windows server 2016 is required to mount the NFS share I quit that route and decided to try samba.
>>
>> I compiled a version of Samba that has this vfs_ceph module, but I can't set it up correctly. It seems I'm missing some user configuration as I've hit this error:
>>
>> "
>> ~$ smbclient -U samba.gw //10.17.6.68/cephfs_a
>> WARNING: The "syslog" option is deprecated
>> Enter WORKGROUP\samba.gw's password:
>> session setup failed: NT_STATUS_LOGON_FAILURE
>> "
>>
>> Does anyone know of any good setup tutorial to follow?
>>
>> This is my smb config so far:
>>
>> # Global parameters
>> [global]
>>         load printers = No
>>         netbios name = SAMBA-CEPH
>>         printcap name = cups
>>         security = USER
>>         workgroup = CEPH
>>         smbd: backgroundqueue = no
>>         idmap config * : backend = tdb
>>         cups options = raw
>>         valid users = samba
>>
>> [cephfs]
>>         create mask = 0777
>>         directory mask = 0777
>>         guest ok = Yes
>>         guest only = Yes
>>         kernel share modes = No
>>         path = /
>>         read only = No
>>         vfs objects = ceph
>>         ceph: user_id = samba
>>         ceph:config_file = /etc/ceph/ceph.conf
>>
>> Thanks
>>
>> --
>> Salsa
>>
>> Sent with [ProtonMail](https://protonmail.com) Secure Email.
>
> The error seems to be a Samba security issue. Below is a conf file we use; it uses the kernel client rather than vfs, but it may help with permissions:
>
> [global]
> workgroup = WORKGROUP
> server string = Samba Server %v
> security = user
> map to guest = bad user
>
> # clustering
> netbios name = PETASAN
> clustering = yes
> passdb backend = tdbsam
> idmap config * : backend = tdb2
> idmap config * : range = 100-199
> private dir = /mnt/cephfs/lock
>
> [Public]
>    path = /mnt/cephfs/share/public
>    browseable = yes
>    writable = yes
>    guest ok = yes
>    guest only = yes
>    read only = no
>    create mode = 0777
>    directory mode = 0777
>    force user = nobody
>
> [Protected]
>    path = /mnt/cephfs/share/protected
>    valid users = @smbgroup
>    guest ok = no
>    writable = yes
>    browsable = yes
>
> Maged
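Maged's example serves CephFS through a kernel-client mount at /mnt/cephfs instead of vfs_ceph, so those share paths have to exist on the gateway first. A rough sketch of such a mount, reusing the gateway address from the thread as a placeholder monitor and assuming a client.samba cephx identity already exists:

# mount CephFS with the kernel client so the exported paths exist (addresses and names are examples)
mkdir -p /mnt/cephfs
ceph auth get-key client.samba > /etc/ceph/samba.secret
mount -t ceph 10.17.6.68:6789:/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret
mkdir -p /mnt/cephfs/share/public /mnt/cephfs/share/protected /mnt/cephfs/lock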
Re: [ceph-users] Ceph + SAMBA (vfs_ceph)
This is the result:

# testparm -s
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[homes]"
Processing section "[cephfs]"
Processing section "[printers]"
Processing section "[print$]"
Loaded services file OK.
Server role: ROLE_STANDALONE

# Global parameters
[global]
        load printers = No
        netbios name = SAMBA-CEPH
        printcap name = cups
        security = USER
        workgroup = CEPH
        smbd: backgroundqueue = no
        idmap config * : backend = tdb
        cups options = raw
        valid users = samba
...

[cephfs]
        create mask = 0777
        directory mask = 0777
        guest ok = Yes
        guest only = Yes
        kernel share modes = No
        path = /
        read only = No
        vfs objects = ceph
        ceph: user_id = samba
        ceph:config_file = /etc/ceph/ceph.conf

I cut off some parts I thought were not relevant.

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
On Wednesday, August 28, 2019 3:09 AM, Konstantin Shalygin wrote:

>> I'm running a ceph installation on a lab to evaluate for production and I have a cluster running, but I need to mount on different windows servers and desktops. I created an NFS share and was able to mount it on my Linux desktop, but not a Win 10 desktop. Since it seems that Windows server 2016 is required to mount the NFS share I quit that route and decided to try samba.
>>
>> I compiled a version of Samba that has this vfs_ceph module, but I can't set it up correctly. It seems I'm missing some user configuration as I've hit this error:
>>
>> "
>> ~$ smbclient -U samba.gw //10.17.6.68/cephfs_a
>> WARNING: The "syslog" option is deprecated
>> Enter WORKGROUP\samba.gw's password:
>> session setup failed: NT_STATUS_LOGON_FAILURE
>> "
>>
>> Does anyone know of any good setup tutorial to follow?
>>
>> This is my smb config so far:
>>
>> # Global parameters
>> [global]
>>         load printers = No
>>         netbios name = SAMBA-CEPH
>>         printcap name = cups
>>         security = USER
>>         workgroup = CEPH
>>         smbd: backgroundqueue = no
>>         idmap config * : backend = tdb
>>         cups options = raw
>>         valid users = samba
>>
>> [cephfs]
>>         create mask = 0777
>>         directory mask = 0777
>>         guest ok = Yes
>>         guest only = Yes
>>         kernel share modes = No
>>         path = /
>>         read only = No
>>         vfs objects = ceph
>>         ceph: user_id = samba
>>         ceph:config_file = /etc/ceph/ceph.conf
>>
>> Thanks
>
> Your configuration seems correct, but the conf may contain special characters such as spaces or lower-case options. The first thing you should do is run `testparm -s` and paste the output here.
>
> k
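Independent of the SMB-level login, the `ceph: user_id = samba` line means vfs_ceph itself authenticates to the cluster as client.samba using the ceph.conf and keyring on the Samba gateway. A hedged sketch of creating that cephx identity (the filesystem name `cephfs` and the keyring path are assumptions, not taken from the thread):

# grant client.samba rw access to the CephFS root and store its keyring where ceph.conf expects it
ceph fs authorize cephfs client.samba / rw > /etc/ceph/ceph.client.samba.keyring
chmod 640 /etc/ceph/ceph.client.samba.keyring   # must be readable by the smbd process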
[ceph-users] Need advice with setup planning
I have tested Ceph using VMs but never got to put it to use and had a lot of trouble getting it to install.

Now I've been asked to do a production setup using 3 servers (Dell R740) with 12 4TB disks each.

My plan is this:
- Use 2 HDDs for SO (the OS) using RAID 1 (I've left 3.5TB unallocated in case I can use it later for storage)
- Install CentOS 7.7
- Use 2 VLANs, one for Ceph internal usage and another for external access. Since they have 4 network adapters, I'll try to bond them in pairs to speed up the network (1Gb).
- I'll try to use ceph-ansible for installation. I failed to use it in the lab, but it seems more recommended.
- Install Ceph Nautilus
- Each server will host OSD, MON, MGR and MDS.
- One VM for ceph-admin: this will be used to run ceph-ansible and maybe to host some ceph services later
- I'll have to serve Samba, iSCSI and probably NFS too. Not sure how or on which servers.

Am I missing anything? Am I doing anything "wrong"?

I searched for some actual guidance on setup but I couldn't find anything complete, like a good tutorial or reference based on possible use-cases.

So, are there any suggestions you could share, or links and references I should take a look at?

Thanks;

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.
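Since the plan leans on ceph-ansible, a minimal sketch of what an inventory for this layout could look like (hostnames are placeholders taken from later in the thread; the group names follow ceph-ansible's conventions):

# inventory file for ceph-ansible (hostnames are placeholders)
[mons]
ceph01
ceph02
ceph03

[mgrs]
ceph01
ceph02
ceph03

[osds]
ceph01
ceph02
ceph03

[mdss]
ceph01
ceph02
ceph03

Release and network settings would then go into group_vars/all.yml (for example ceph_stable_release: nautilus); treat the exact variables as something to verify against the ceph-ansible branch in use.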
Re: [ceph-users] Need advice with setup planning
Replying inline

--
Salsa

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Friday, September 20, 2019 1:31 PM, Marc Roos wrote:

> > - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case
> > I can use it later for storage)
>
> OS not? Get an enterprise SSD for the OS (I think some recommend it when
> colocating monitors; they can generate a lot of disk I/O)

Yes, OS. I have no option to get an SSD.

> > - Install CentOS 7.7
>
> Good choice
>
> > - Use 2 vLANs, one for ceph internal usage and another for external
> > access. Since they've 4 network adapters, I'll try to bond them in pairs
> > to speed up network (1Gb).
>
> Bad, get 10Gbit, yes really

Again, that's not an option. We'll have to use the hardware we got.

> > - I'll try to use ceph-ansible for installation. I failed to use it on
> > lab, but it seems more recommended.
>
> Where did you get the idea that Ansible is recommended? Ansible is a tool
> to help you automate deployments, but I have the impression it is mostly
> used as an 'I do not know how to install something, so let's use Ansible'
> tool.

From reading various sites/guides for the lab.

> > - Install Ceph Nautilus
> > - Each server will host OSD, MON, MGR and MDS.
> > - One VM for ceph-admin: This wil be used to run ceph-ansible and
> > maybe to host some ceph services later
>
> Don't waste a vm on this?

You think it is a waste to have a VM for this? Won't I need another machine to host other ceph services?

> > - I'll have to serve samba, iscsi and probably NFS too. Not sure how
> > or on which servers.
>
> If you want to create a fancy solution, you can use something like Mesos
> that manages your nfs, smb, iscsi or rgw daemons, so if you bring down a
> host, applications automatically move to a different host ;)

Hmm, gotta read more on that. Thanks for the advice.

> -----Original Message-----
> From: Salsa [mailto:sa...@protonmail.com]
> Sent: Friday, 20 September 2019 18:14
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Need advice with setup planning
>
> I have tested Ceph using VMs but never got to put it to use and had a
> lot of trouble to get it to install.
>
> Now I've been asked to do a production setup using 3 servers (Dell R740)
> with 12 4TB each.
>
> My plan is this:
>
> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I
> can use it later for storage)
> - Install CentOS 7.7
> - Use 2 vLANs, one for ceph internal usage and another for external
> access. Since they've 4 network adapters, I'll try to bond them in pairs
> to speed up network (1Gb).
> - I'll try to use ceph-ansible for installation. I failed to use it on
> lab, but it seems more recommended.
> - Install Ceph Nautilus
> - Each server will host OSD, MON, MGR and MDS.
> - One VM for ceph-admin: This wil be used to run ceph-ansible and maybe
> to host some ceph services later
> - I'll have to serve samba, iscsi and probably NFS too. Not sure how or
> on which servers.
>
> Am I missing anything? Am I doing anything "wrong"?
>
> I searched for some actual guidance on setup but I couldn't find
> anything complete, like a good tutorial or reference based on possible
> use-cases.
>
> So, is there any suggestions you could share or links and references I
> should take a look?
>
> Thanks;
>
> --
> Salsa
>
> Sent with ProtonMail https://protonmail.com Secure Email.
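On the 4 x 1GbE bonding point, a hedged sketch of what an LACP bond looks like on CentOS 7 (interface names, addresses and the bonding mode are assumptions; 802.3ad needs a matching configuration on the switch):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
BOOTPROTO=none
IPADDR=192.168.10.11
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 (repeat for the second slave interface)
DEVICE=em1
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes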
Re: [ceph-users] Need advice with setup planning
Replying inline.

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
On Friday, September 20, 2019 1:34 PM, Martin Verges wrote:

> Hello Salsa,
>
>> I have tested Ceph using VMs but never got to put it to use and had a lot of trouble to get it to install.
>
> if you want to get rid of all the troubles from installing to day2day operations, you could consider using https://croit.io/croit-virtual-demo

Amazing! Where were you 3 months ago? The only problem is that I think we have no more budget for this, so I can't get approval for a software license.

>> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can use it later for storage)
>> - Install CentOS 7.7
>
> Is ok, but won't be necessary if you choose croit, as we boot from the network and don't install an operating system.

No budget for a software license.

>> - Use 2 vLANs, one for ceph internal usage and another for external access. Since they've 4 network adapters, I'll try to bond them in pairs to speed up network (1Gb).
>
> If there is no internal policy that forces you to do separate networks, you can use a simple 1-VLAN setup and bond 4*1GbE. Otherwise it's ok.

The service is critical and we are afraid that the network might get congested and QoS for the end user would degrade.

>> - I'll try to use ceph-ansible for installation. I failed to use it on lab, but it seems more recommended.
>> - Install Ceph Nautilus
>
> Ultra easy with croit, maybe look at our videos on YouTube - https://www.youtube.com/playlist?list=PL1g9zo59diHDSJgkZcMRUq6xROzt_YKox

Thanks! I'll be watching them.

>> - Each server will host OSD, MON, MGR and MDS.
>
> ok, but you should use SSD for metadata.

No budget and no option to get those now.

>> - One VM for ceph-admin: This wil be used to run ceph-ansible and maybe to host some ceph services later
>
> perfect for croit ;)
>
>> - I'll have to serve samba, iscsi and probably NFS too. Not sure how or on which servers.
>
> Just put it on the servers as well; with croit it is just a click away and everything is included in our interface.
> If not using croit, you can still install it on the same systems and configure it by hand/script.

Great! Thanks for the help and congratulations on that demo. It is the best I've used and the easiest Ceph setup I've found.

As feedback, the last part of the demo tutorial is not 100% compatible with the master branch from GitHub. The RBD pool creation has a different interface than the one presented in your tutorial (or I made some mistake along the way). Also, my cluster is showing errors in my placement groups after RBD pool creation, but I'll try to find out what happened.

Thanks again!

> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.ver...@croit.io
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
> On Fri, 20 Sep 2019 at 18:14, Salsa wrote:
>
>> I have tested Ceph using VMs but never got to put it to use and had a lot of trouble to get it to install.
>>
>> Now I've been asked to do a production setup using 3 servers (Dell R740) with 12 4TB each.
>>
>> My plan is this:
>> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can use it later for storage)
>> - Install CentOS 7.7
>> - Use 2 vLANs, one for ceph internal usage and another for external access. Since they've 4 network adapters, I'll try to bond them in pairs to speed up network (1Gb).
>> - I'll try to use ceph-ansible for installation. I failed to use it on lab, but it seems more recommended.
>> - Install Ceph Nautilus
>> - Each server will host OSD, MON, MGR and MDS.
>> - One VM for ceph-admin: This wil be used to run ceph-ansible and maybe to host some ceph services later
>> - I'll have to serve samba, iscsi and probably NFS too. Not sure how or on which servers.
>>
>> Am I missing anything? Am I doing anything "wrong"?
>>
>> I searched for some actual guidance on setup but I couldn't find anything complete, like a good tutorial or reference based on possible use-cases.
>>
>> So, is there any suggestions you could share or links and references I should take a look?
>>
>> Thanks;
>>
>> --
>> Salsa
>>
>> Sent with [ProtonMail](https://protonmail.com) Secure Email.
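On the two-VLAN split discussed above, Ceph expresses it as a public network (client and MON traffic) and a cluster network (OSD replication and recovery). A small ceph.conf sketch with placeholder subnets:

[global]
        # client-facing VLAN
        public_network = 192.168.10.0/24
        # internal replication VLAN
        cluster_network = 192.168.20.0/24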
[ceph-users] Can't create erasure coded pools with k+m greater than hosts?
I have probably misunderstood how to create erasure coded pools, so I may be in need of some theory and would appreciate it if you could point me to documentation that may clarify my doubts.

So far I have 1 cluster with 3 hosts and 30 OSDs (10 on each host).

I tried to create an erasure code profile like so:

"
# ceph osd erasure-code-profile get ec4x2rs
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8
"

If I create a pool using this profile, or any profile where K+M > hosts, then the pool gets stuck:

"
# ceph -s
  cluster:
    id:     eb4aea44-0c63-4202-b826-e16ea60ed54d
    health: HEALTH_WARN
            Reduced data availability: 16 pgs inactive, 16 pgs incomplete
            2 pools have too many placement groups
            too few PGs per OSD (4 < min 30)

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 11d)
    mgr: ceph01(active, since 74m), standbys: ceph03, ceph02
    osd: 30 osds: 30 up (since 2w), 30 in (since 2w)

  data:
    pools:   11 pools, 32 pgs
    objects: 0 objects, 0 B
    usage:   32 GiB used, 109 TiB / 109 TiB avail
    pgs:     50.000% pgs not active
             16 active+clean
             16 creating+incomplete

# ceph osd pool ls
test_ec
test_ec2
"

The pool will never leave this "creating+incomplete" state.

The pools were created like this:

"
# ceph osd pool create test_ec2 16 16 erasure ec4x2rs
# ceph osd pool create test_ec 16 16 erasure
"

The default-profile pool is created correctly. My profiles are like this:

"
# ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van

# ceph osd erasure-code-profile get ec4x2rs
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8
"

From what I've read it seems to be possible to create erasure coded pools with K+M higher than the number of hosts. Is this not so?

What am I doing wrong? Do I have to create any special crush map rule?

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.
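For reference: with crush-failure-domain=host, CRUSH has to place each of the k+m=6 chunks on a different host, so a 3-host cluster can never activate those PGs, which matches the creating+incomplete state above (the default k=2, m=1 profile fits 3 hosts, which is why that pool works). A hedged sketch of a profile that fits a 3-host failure domain, with example names:

# create a profile where k+m <= number of hosts, then a pool using it
ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host plugin=jerasure technique=reed_sol_van
ceph osd pool create test_ec21 32 32 erasure ec21
ceph osd pool set test_ec21 allow_ec_overwrites true   # only needed for RBD/CephFS on EC pools
# (the stuck test pools can be deleted separately once mon_allow_pool_delete is enabled)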
Re: [ceph-users] Can't create erasure coded pools with k+m greater than hosts?
Ok, I'm lost here.

How am I supposed to write a crush rule?

So far I managed to run:

# ceph osd crush rule dump test -o test.txt

So I can edit the rule. Now I have two problems:

1. What are the functions and operations to use here? Is there documentation anywhere about this?
2. How may I create a crush rule using this file? 'ceph osd crush rule create ... -i test.txt' does not work.

Am I taking the wrong approach here?

--
Salsa

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Friday, October 18, 2019 3:56 PM, Paul Emmerich wrote:

> Default failure domain in Ceph is "host" (see ec profile), i.e., you
> need at least k+m hosts (but at least k+m+1 is better for production
> setups).
> You can change that to OSD, but that's not a good idea for a
> production setup for obvious reasons. It's slightly better to write a
> crush rule that explicitly picks two disks on 3 different hosts.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Oct 18, 2019 at 8:45 PM Salsa sa...@protonmail.com wrote:
>
> > I have probably misunterstood how to create erasure coded pools so I may be in need of some theory and appreciate if you can point me to documentation that may clarify my doubts.
> > I have so far 1 cluster with 3 hosts and 30 OSDs (10 each host).
> > I tried to create an erasure code profile like so:
> >
> > "
> > # ceph osd erasure-code-profile get ec4x2rs
> > crush-device-class=
> > crush-failure-domain=host
> > crush-root=default
> > jerasure-per-chunk-alignment=false
> > k=4
> > m=2
> > plugin=jerasure
> > technique=reed_sol_van
> > w=8
> > "
> >
> > If I create a pool using this profile or any profile where K+M > hosts, then the pool gets stuck.
> >
> > "
> > # ceph -s
> > cluster:
> > id: eb4aea44-0c63-4202-b826-e16ea60ed54d
> > health: HEALTH_WARN
> > Reduced data availability: 16 pgs inactive, 16 pgs incomplete
> > 2 pools have too many placement groups
> > too few PGs per OSD (4 < min 30)
> > services:
> > mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 11d)
> > mgr: ceph01(active, since 74m), standbys: ceph03, ceph02
> > osd: 30 osds: 30 up (since 2w), 30 in (since 2w)
> > data:
> > pools: 11 pools, 32 pgs
> > objects: 0 objects, 0 B
> > usage: 32 GiB used, 109 TiB / 109 TiB avail
> > pgs: 50.000% pgs not active
> > 16 active+clean
> > 16 creating+incomplete
> >
> > # ceph osd pool ls
> > test_ec
> > test_ec2
> > "
> >
> > The pool will never leave this "creating+incomplete" state.
> > The pools were created like this:
> >
> > "
> > # ceph osd pool create test_ec2 16 16 erasure ec4x2rs
> > # ceph osd pool create test_ec 16 16 erasure
> > "
> >
> > The default profile pool is created correctly.
> > My profiles are like this:
> >
> > "
> > # ceph osd erasure-code-profile get default
> > k=2
> > m=1
> > plugin=jerasure
> > technique=reed_sol_van
> >
> > # ceph osd erasure-code-profile get ec4x2rs
> > crush-device-class=
> > crush-failure-domain=host
> > crush-root=default
> > jerasure-per-chunk-alignment=false
> > k=4
> > m=2
> > plugin=jerasure
> > technique=reed_sol_van
> > w=8
> > "
> >
> > From what I've read it seems to be possible to create erasure code pools with higher than hosts K+M. Is this not so?
> > What am I doing wrong? Do I have to create any special crush map rule?
> >
> > --
> > Salsa
> >
> > Sent with ProtonMail Secure Email.
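For the mechanics of editing rules, the usual workflow is to edit the whole compiled CRUSH map rather than a dumped JSON rule. A hedged sketch (file names are arbitrary; the rule id and num-rep in the test are example values):

# export, decompile, edit, recompile, sanity-check, and re-inject the CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt, e.g. add an erasure rule ...
crushtool -c crushmap.txt -o crushmap.new
crushtool -i crushmap.new --test --show-bad-mappings --rule 2 --num-rep 6
ceph osd setcrushmap -i crushmap.new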
Re: [ceph-users] Can't create erasure coded pools with k+m greater than hosts?
Just to clarify my situation: we have 2 datacenters with 3 hosts each, and 12 4TB disks per host (2 are in RAID with the OS installed and the remaining 10 are used for Ceph).

Right now I'm trying a single-DC installation and intended to migrate to multi-site later, mirroring DC1 to DC2, so if we lose DC1 we can activate DC2. (NOTE: I have no idea how this is set up and have not planned it at all; I thought of getting DC1 to work first and setting up the mirroring later.)

I don't think I'll be able to change the setup in any way, so my next question is: should I go with replica 3, or would an erasure 2+1 be OK?

There's a very small chance we get 2 extra hosts for each DC in the near future, but we'll probably use all the available storage space even sooner. We're trying to use as much space as possible.

Thanks;

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
On Monday, October 21, 2019 2:53 AM, Martin Verges wrote:

> Just don't do such setups for production. It will be a lot of pain and trouble, and will cause you problems.
>
> Just take a cheap system, put some of the disks in it, and do a way, way better deployment than something like 4+2 on 3 hosts. Whatever you do with that cluster (for example a kernel update, reboot, PSU failure, ...) causes you and all attached clients, especially bad with VMs on that Ceph cluster, to stop any IO or even crash completely.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.ver...@croit.io
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
> On Sat, 19 Oct 2019 at 01:51, Chris Taylor wrote:
>
>> Full disclosure - I have not created an erasure code pool yet!
>>
>> I have been wanting to do the same thing that you are attempting and have these links saved. I believe this is what you are looking for.
>>
>> This link is for decompiling the CRUSH rules and recompiling:
>>
>> https://docs.ceph.com/docs/luminous/rados/operations/crush-map-edits/
>>
>> This link is for creating the EC rules for 4+2 with only 3 hosts:
>>
>> https://ceph.io/planet/erasure-code-on-small-clusters/
>>
>> I hope that helps!
>>
>> Chris
>>
>> On 2019-10-18 2:55 pm, Salsa wrote:
>>> Ok, I'm lost here.
>>>
>>> How am I supposed to write a crush rule?
>>>
>>> So far I managed to run:
>>>
>>> #ceph osd crush rule dump test -o test.txt
>>>
>>> So I can edit the rule. Now I have two problems:
>>>
>>> 1. Whats the functions and operations to use here? Is there
>>> documentation anywhere abuot this?
>>> 2. How may I create a crush rule using this file? 'ceph osd crush rule
>>> create ... -i test.txt' does not work.
>>>
>>> Am I taking the wrong approach here?
>>>
>>> --
>>> Salsa
>>>
>>> Sent with ProtonMail Secure Email.
>>>
>>> ‐‐‐ Original Message ‐‐‐
>>> On Friday, October 18, 2019 3:56 PM, Paul Emmerich
>>> wrote:
>>>
>>>> Default failure domain in Ceph is "host" (see ec profile), i.e., you
>>>> need at least k+m hosts (but at least k+m+1 is better for production
>>>> setups).
>>>> You can change that to OSD, but that's not a good idea for a
>>>> production setup for obvious reasons. It's slightly better to write a
>>>> crush rule that explicitly picks two disks on 3 different hosts
>>>>
>>>> Paul
>>>>
>>>> --
>>>> Paul Emmerich
>>>>
>>>> Looking for help with your Ceph cluster? Contact us at
>>>> https://croit.io
>>>>
>>>> croit GmbH
>>>> Freseniusstr. 31h
>>>> 81247 München
>>>> www.croit.io
>>>> Tel: +49 89 1896585 90
>>>>
>>>> On Fri, Oct 18, 2019 at 8:45 PM Salsa sa...@protonmail.com wrote:
>>>>
>>>> > I have probably misunterstood how to create erasure coded pools so I may &
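The second link above describes the approach Paul hinted at: a rule that picks 3 hosts and then 2 OSDs on each. A hedged sketch of what such a rule looks like in the decompiled CRUSH map (rule name and id are arbitrary; verify against the linked article before using it):

rule ec_4_2_small {
        id 2
        type erasure
        min_size 6
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
}

After recompiling and injecting the map (see the crushtool sketch earlier in the thread), an EC pool can be pointed at it with `ceph osd pool set <pool> crush_rule ec_4_2_small`. Martin's caveat still applies: with two chunks per host, losing one host leaves no spare redundancy until it returns.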
Re: [ceph-users] Ceph install from EL7 repo error
I faced that same behaviour but can't remember what was happening or how I solved it. Could you please post the commands used? I think it was related to some bad options passed to ceph-deploy. Are you using '--release nautilus'?

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
On Wednesday, November 6, 2019 6:43 PM, Cranage, Steve wrote:

> I cannot get an install of Ceph to work on a fresh CentOS 7.6 environment, and it is a repo error I haven't seen before. Here is my ceph.repo file:
>
> [Ceph]
> name=Ceph packages for $basearch
> baseurl=http://download.ceph.com/rpm-mimic/el7/$basearch
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://download.ceph.com/keys/release.asc
> priority=1
>
> [Ceph-noarch]
> name=Ceph noarch packages
> baseurl=http://download.ceph.com/rpm-mimic/el7/noarch
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://download.ceph.com/keys/release.asc
> priority=1
>
> [ceph-source]
> name=Ceph source packages
> baseurl=http://download.ceph.com/rpm-mimic/el7/SRPMS
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://download.ceph.com/keys/release.asc
> priority=1
>
> Trying to keep this simple and just installing on 1 machine first, I try ceph-deploy install and get this:
>
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-common-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-mds-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-mgr-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-mon-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-osd-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-radosgw-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-selinux-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
> [cephadmin][WARNIN] Trying other mirror.
> [cephadmin][WARNIN]
> [cephadmin][WARNIN]
> [cephadmin][WARNIN] Error downloading packages:
> [cephadmin][WARNIN] 2:ceph-common-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-radosgw-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-mon-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-osd-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-mgr-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-mds-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-selinux-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN] 2:ceph-base-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
> [cephadmin][WARNIN]
> [cephadmin][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install ceph ceph-radosgw
>
> If I'm reading this right, it's trying to find Nautilus file versions under the Mimic tree and of course not finding them. Where it gets really weird is that when I try to change the repo to use nautilus instead of mimic, the repo file gets changed back to mimic as soon as I re-run ceph-deploy install again, and I'm back to the same error.
>
> What gives??
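As a diagnostic aside, the 404s show yum resolving 14.2.4 (a Nautilus version) while the baseurl points at the rpm-mimic tree, so it can help to see which repo definition is winning before re-running ceph-deploy. A small sketch of read-only checks (nothing here is specific to this host):

# list every ceph-related repo definition yum currently knows about
grep -H baseurl /etc/yum.repos.d/*.repo | grep -i ceph

# clear cached metadata and see which repo each candidate version comes from
yum clean all
yum --showduplicates list ceph ceph-common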
Re: [ceph-users] Ceph install from EL7 repo error
Then try running with "--release nautilus".

--
Salsa

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐ Original Message ‐‐‐
On Wednesday, November 6, 2019 7:30 PM, Cranage, Steve wrote:

> No, I'm just running ceph-deploy install with the one host name.
>
> Steve Cranage
> Principal Architect, Co-Founder
> DeepSpace Storage
> 719-930-6960
>
> -------
>
> From: Salsa
> Sent: Wednesday, November 6, 2019 3:05:32 PM
> To: Cranage, Steve
> Cc: ceph-users
> Subject: Re: [ceph-users] Ceph install from EL7 repo error
>
> I faced that same behaviour, but can't remember what was happening or how I solved it. Could you please post the commands used? I think it was related to some bad options to ceph-deploy. Are you using '--release nautilus'?
>
> --
> Salsa
>
> Sent with [ProtonMail](https://protonmail.com) Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, November 6, 2019 6:43 PM, Cranage, Steve wrote:
>
>> I cannot get an install of ceph to work on a fresh Centos 7.6 environment and it is a repo error I haven't seen before. Here is my ceph.repo file:
>>
>> [Ceph]
>> name=Ceph packages for $basearch
>> baseurl=http://download.ceph.com/rpm-mimic/el7/$basearch
>> enabled=1
>> gpgcheck=1
>> type=rpm-md
>> gpgkey=https://download.ceph.com/keys/release.asc
>> priority=1
>>
>> [Ceph-noarch]
>> name=Ceph noarch packages
>> baseurl=http://download.ceph.com/rpm-mimic/el7/noarch
>> enabled=1
>> gpgcheck=1
>> type=rpm-md
>> gpgkey=https://download.ceph.com/keys/release.asc
>> priority=1
>>
>> [ceph-source]
>> name=Ceph source packages
>> baseurl=http://download.ceph.com/rpm-mimic/el7/SRPMS
>> enabled=1
>> gpgcheck=1
>> type=rpm-md
>> gpgkey=https://download.ceph.com/keys/release.asc
>> priority=1
>>
>> Trying to keep this simple and just installing on 1 machine first, I try ceph-deploy install and get this:
>>
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-common-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-mds-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-mgr-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-mon-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-osd-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-radosgw-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN] http://download.ceph.com/rpm-mimic/el7/x86_64/ceph-selinux-14.2.4-0.el7.x86_64.rpm: [Errno 14] HTTP Error 404 - Not Found
>> [cephadmin][WARNIN] Trying other mirror.
>> [cephadmin][WARNIN]
>> [cephadmin][WARNIN]
>> [cephadmin][WARNIN] Error downloading packages:
>> [cephadmin][WARNIN] 2:ceph-common-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-radosgw-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-mon-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-osd-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-mgr-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-mds-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
>> [cephadmin][WARNIN] 2:ceph-selinux-14.2.4-0.el7.x86_64: [Errno 256] No more mirrors to try.
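Consistent with Steve's observation that the repo file keeps being rewritten, ceph-deploy appears to regenerate ceph.repo on each install run, so the release is best pinned on the command line. A hedged sketch using the hostname from the log above:

# let ceph-deploy write a nautilus repo itself
ceph-deploy install --release nautilus cephadmin

# or keep a hand-maintained ceph.repo (pointed at rpm-nautilus) and tell ceph-deploy not to touch it
ceph-deploy install --no-adjust-repos cephadmin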