[ceph-users] Error while installing ceph

2018-10-13 Thread ceph ceph

[ceph-users] Directly connect client to OSD using HTTP

2015-03-28 Thread ceph
Hi, I am designing an infrastructure using Ceph. The client will fetch data through HTTP. I saw the radosgw, which is made for that; it has, however, some weaknesses for me: as far as I understood, when a client wants to fetch a file, it connects to the radosgw, which will connect to the right OSD

[ceph-users] Radosgw GC parallelization

2015-04-08 Thread ceph
Hi, I have a Ceph cluster, used through radosgw. In that cluster, I write files every second: input files are known, predictable and stable; there is always the same number of new fixed-size files each second. These files are kept for a few days, then removed after a fixed duration. And thus, I
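For a workload like this, the radosgw garbage collector is usually what needs tuning. A rough sketch of the relevant knobs, not taken from the thread (the rgw_gc_* names are standard radosgw options; the section name and values are only illustrative):

    [client.rgw.gateway]
    rgw gc max objs = 97              # number of GC shards that can be worked in parallel
    rgw gc obj min wait = 3600        # seconds a deleted object waits before it is GC-eligible
    rgw gc processor period = 3600    # how often a GC cycle starts
    rgw gc processor max time = 3600  # maximum runtime of one GC cycle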

Re: [ceph-users] Kernel version for CephFS client ?

2015-05-04 Thread ceph
u are trying to say. >> >> Wheezy was released with kernel 3.2 and bugfixes are applied to 3.2 by >> Debian throughout Wheezy's support cycle. >> >> But by using the Wheezy backports repository one can use kernel 3.16, >> including the ceph code which is incl

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-06 Thread ceph
23:32, Chris Armstrong wrote: > Hi folks, > > Calling on the collective Ceph knowledge here. Since upgrading to > Hammer, we're now seeing: > > health HEALTH_WARN > too many PGs per OSD (1536 > max 300) > > We have 3 OSDs, so we have used the pg_n
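The warning threshold itself comes from mon_pg_warn_max_per_osd; raising it only hides the warning, it does not reduce the per-OSD PG load. A minimal sketch with an illustrative value:

    # ceph.conf on the monitors
    [mon]
    mon pg warn max per osd = 1600

    # or inject at runtime
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1600'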

Re: [ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-21 Thread ceph
ned etc), instead of using the whole raw disk I'm using these steps: - create a partition - mkfs.xfs - mkdir & mount - ceph-deploy osd prepare host:/path/to/mounted-fs Dunno if it's the right way, seems to work so far On 21/05/2014 16:05, 10 minus wrote: Hi, I have just start
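Spelled out, the steps described above look roughly like this; device, mountpoint and hostname are examples, not from the thread:

    parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
    mkfs.xfs /dev/sdb1
    mkdir -p /srv/ceph/osd0
    mount /dev/sdb1 /srv/ceph/osd0
    ceph-deploy osd prepare node1:/srv/ceph/osd0
    ceph-deploy osd activate node1:/srv/ceph/osd0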

Re: [ceph-users] Extra RAM use as Read Cache

2015-09-07 Thread ceph
IO cache may be handled by the kernel, not userspace. Are you sure it is not already in use ? Do not look for userspace memory On 07/09/2015 23:19, Vickey Singh wrote: > Hello Experts , > > I want to increase my Ceph cluster's read performance. > > I have several OSD nodes ha

Re: [ceph-users] Mon quorum fails

2015-12-06 Thread ceph

Re: [ceph-users] How to avoid kernel conflicts

2016-05-07 Thread ceph
Hi, > > I saw this tip in the troubleshooting section: > > DO NOT mount kernel clients directly on the same node as your Ceph Storage > Cluster, > because kernel conflicts can arise. However, you can mount kernel clients > within > virtual machines (VMs) on a sing

Re: [ceph-users] civetweb vs Apache for rgw

2016-05-24 Thread ceph
che because of >> HTTPS config. > > Civetweb should be able to handle ssl just fine: > > rgw_frontends = civetweb port=7480s ssl_certificate=/path/to/some_cert.pem
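A minimal ceph.conf sketch of civetweb terminating SSL itself (the trailing "s" on the port number enables it); the section name and paths are examples, and the PEM file usually has to contain both the certificate and the private key:

    [client.rgw.gateway]
    rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/rgw.pem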

Re: [ceph-users] 2 networks vs 2 NICs

2016-06-04 Thread ceph
obably forgo the separate >> cluster network and just run them over the same IP, as after running the >> cluster, I don't see any benefit from separate networks when taking into >> account the extra complexity. Something to consider. >> >>> -Original Messag

Re: [ceph-users] strange behavior using resize2fs vm image on rbd pool

2016-06-16 Thread ceph
and sparse/thinly-provisioned LUNs, but it is > off by default until sufficient testing has been done. On 16/06/2016 12:24, Zhongyan Gu wrote: > Hi, > it seems using resize2fs on rbd image would generate lots of garbage > objects in ceph. > The experiment is: > 1. use resize2fs to e

Re: [ceph-users] CEPH Replication

2016-07-01 Thread ceph
/crush-map/#crush-map-bucket-types and http://docs.ceph.com/docs/hammer/rados/configuration/pool-pg-config-ref/) On 01/07/2016 13:49, Ashley Merrick wrote: > Hello, > > Looking at setting up a new CEPH Cluster, starting with the following. > > 3 x CEPH OSD Servers > > Eac

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread ceph
40Gbps can be used as 4*10Gbps I guess, so feedback should not be restricted to "usage of a 40Gbps port", but extended to "usage of more than a single 10Gbps port, e.g. 20Gbps too". Are there people here who are using more than 10G on a Ceph server ? On 13/07/2016 14:27

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread ceph
per per > port, and won't be so hard to drive. 40Gbe is very hard to fill. I > personally probably would not do 40 again. > > Warren Wang > > > > On 7/13/16, 9:10 AM, "ceph-users on behalf of Götz Reinicke - IT > Koordinator" goetz.reini...@filmakad

Re: [ceph-users] please help explain about failover

2016-08-15 Thread ceph
e (near-full cluster that cannot handle an OSD failure) - too many OSDs died at the same time, making auto-healing inefficient: you will have "some" objects missing (if all copies were on missing OSDs, there is no way to recreate them) On 15/08/2016 11:18, kp...@freenet.de wrote: > hell

Re: [ceph-users] Single-node Ceph & Systemd shutdown

2016-08-20 Thread ceph
Simple solution that always works : purge systemd Tested and approved on all my ceph nodes, and all my servers :) On 20/08/2016 19:35, Marcus wrote: > Blablabla systemd blablabla ___ ceph-users mailing list ceph-users@lists.ceph.com h

Re: [ceph-users] Squeezing Performance of CEPH

2017-06-22 Thread ceph
that device

Re: [ceph-users] Ceph and IPv4 -> IPv6

2017-06-28 Thread ceph
On 28/06/2017 13:42, Wido den Hollander wrote: > Honestly I think there aren't that many IPv6 deployments with Ceph out there. > I for sure am a big fan and deployer of Ceph+IPv6, but I don't know many around > me. I got that ! Because IPv6 is so much better than IPv4 :dance:

Re: [ceph-users] dropping filestore+btrfs testing for luminous

2017-06-30 Thread ceph
g in to > luminous+btrfs territory. > Is that good enough? > > sage This seems sane to me

Re: [ceph-users] New cluster - configuration tips and reccomendation - NVMe

2017-07-05 Thread ceph
; * 6x NVMe U2 for OSD >> * 2x 100Gib ethernet cards >> >> We are not yet sure about which Intel CPU and how much RAM we should put on >> it to avoid a CPU bottleneck. >> Can you help me to choose the right couple of CPUs? >> Did you see any issue on the configur

Re: [ceph-users] New cluster - configuration tips and reccomendation - NVMe

2017-07-05 Thread ceph
my guess would be the 8 NVME drives + > 2x100Gbit would be too much for > the current Xeon generation (40 PCIE lanes) to fully utilize. > > Cheers, > Robert van Leeuwen

Re: [ceph-users] New cluster - configuration tips and reccomendation - NVMe

2017-07-05 Thread ceph
matters for latency so you >> probably want to up that. >> >> You can also look at the DIMM configuration. >> TBH I am not sure how much it impacts Ceph performance but having just 2 >> DIMMS slots populated will not give you max memory bandwidth. >> Having some

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread ceph
gt; >> ,Ashley >> >> >> From: Henrik Korkuc >> Sent: 06 September 2017 06:58:52 >> To: Ashley Merrick; ceph-us...@ceph.com >> Subject: Re: [ceph-users] Luminous Upgrade KRBD >> >> On 17-09-06 07:33, Ashley Merr

Re: [ceph-users] Fwd: FileStore vs BlueStore

2017-09-20 Thread ceph
data from journal to definitive storage Bluestore: - write data to definitive storage (free space, not overwriting anything) - write metadata > > I found a topic on reddit that said that, by using a journal, ceph avoids the buffer cache; is > it true? Is it a drawback of POSIX? > https://ww

Re: [ceph-users] Prefer ceph monitor

2017-11-21 Thread ceph

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2017-12-23 Thread ceph
Hi Stefan, On 14 December 2017 09:48:36 CET, Stefan Kooman wrote: >Hi, > >We see the following in the logs after we start a scrub for some osds: > >ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700 0 >log_channel(cluster) log [DBG] : 1.2d8 scrub starts >ceph-osd.2

Re: [ceph-users] Performance issues on Luminous

2018-01-04 Thread ceph
I assume you have a size of 3, so divide your expected 400 by 3 and you are not far away from what you get... In addition, you should never use consumer-grade SSDs for Ceph, as they will reach their DWPD very soon... On 4 January 2018 09:54:55 CET, "Rafał Wądołowski" wrote:

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread ceph
s is absolutely in a production environment :) On the client side we are using Proxmox qemu/kvm to access the RBDs (librbd). Currently 12.2.2 - Mehmet

Re: [ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

2018-01-11 Thread ceph
I don't understand how all of this is related to Ceph. Ceph runs on dedicated hardware; there is nothing there except Ceph, and the ceph daemons already have full power over ceph's data. And there is no random-code execution allowed on this node. Thus, spectre & meltdown are me

Re: [ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

2018-01-12 Thread ceph
Well, if a stranger has access to my whole Ceph data (that is, all my VMs' & rgw data), I don't mind if he gets root access too :) On 01/12/2018 10:18 AM, Van Leeuwen, Robert wrote: Ceph runs on dedicated hardware, there is nothing there except Ceph, and the ceph daemons ha

Re: [ceph-users] Ceph Future

2018-01-23 Thread ceph
) Please welcome the cli interfaces. - USER tasks: create new images, increase image size, shrink image size, check daily status and change broken disks whenever needed. Who does that ? For instance, Ceph can be used for VMs. Your VM system creates images, resizes images, whate

Re: [ceph-users] Ceph Future

2018-01-23 Thread ceph
I think I was not clear. There are VM management systems, look at https://fr.wikipedia.org/wiki/Proxmox_VE, https://en.wikipedia.org/wiki/Ganeti, probably https://en.wikipedia.org/wiki/OpenStack too. These systems interact with Ceph. When you create a VM, an rbd volume is created. When you

Re: [ceph-users] Ceph Future

2018-01-23 Thread ceph
On 01/23/2018 04:33 PM, Massimiliano Cuttini wrote: With Ceph you have to install a 3rd-party orchestrator in order to have a clear picture of what is going on. Which can be ok, but not always feasible. Just as with everything. As Wikipedia says, for instance, "Proxmox VE supports

Re: [ceph-users] Ceph Day Germany :)

2018-02-11 Thread ceph
On 9 February 2018 11:51:08 CET, Lenz Grimmer wrote: >Hi all, > >On 02/08/2018 11:23 AM, Martin Emrich wrote: > >> I just want to thank all organizers and speakers for the awesome Ceph >> Day at Darmstadt, Germany yesterday. >> >> I learned of some cool stu

Re: [ceph-users] Is there a "set pool readonly" command?

2018-02-14 Thread ceph
ars ago (2/3) I had a look at Calamari and there was a flag which could be set to stop client IO only... Is someone out there using Calamari today who could give a short response? - Mehmet >On Mon, Feb 12, 2018 at 1:56 PM Reed Dier >wrote: > >> I do know that there is a

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-25 Thread ceph
ched txt files with >output of your pg query of the pg 0.223? >output of ceph -s >output of ceph df >output of ceph osd df >output of ceph osd dump | grep pool >output of ceph osd crush rule dump > >Thank you and I’ll see if I can get something to ease your pain. > >As
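For reference, the requested outputs can be collected in one go (pg 0.223 is the id mentioned in the thread; the output file names are just examples):

    ceph pg 0.223 query        > pg-0.223-query.txt
    ceph -s                    > ceph-s.txt
    ceph df                    > ceph-df.txt
    ceph osd df                > ceph-osd-df.txt
    ceph osd dump | grep pool  > pools.txt
    ceph osd crush rule dump   > crush-rules.txt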

Re: [ceph-users] pg stuck with unfound objects on non exsisting osd's

2016-11-01 Thread ceph
degraded, with 25 >unfound objects. > ># ceph health detail >HEALTH_WARN 2 pgs degraded; 2 pgs recovering; 2 pgs stuck degraded; 2 >pgs stuck unclean; 2 pgs stuck undersized; 2 pgs undersized; recovery >294599/149522370 objects degraded (0.197%); recovery 640073/149522370 >obje

Re: [ceph-users] 10.2.5 on Jessie?

2016-12-20 Thread ceph

Re: [ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread ceph
As always: ceph status On 22/12/2016 11:53, Stéphane Klein wrote: > Hi, > > When I shutdown one osd node, where can I see the block movement? > Where can I see percentage progression? > > Best regards, > Stéphane

Re: [ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread ceph
That's correct :) On 22/12/2016 12:12, Stéphane Klein wrote: > HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized; > recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min > 30); 1/3 in osds are down; > > Here Ceph say there

Re: [ceph-users] Cluster pause - possible consequences

2017-01-02 Thread ceph
group number from 3072 to 8192 or better > to 165336 and I think doing it without client operations will be much faster. > > Thanks > Regards > Matteo

Re: [ceph-users] Cluster pause - possible consequences

2017-01-02 Thread ceph
d-only? >>> >>> I need to quickly upgrade placement group number from 3072 to 8192 or >>> better to 165336 and I think doing it without client operations will be >>> much faster. >>> >>> Thanks >>> Regards >>> Matteo >

Re: [ceph-users] is there any filesystem like wrapper that dont need to map and mount rbd ?

2018-08-01 Thread ceph
Sounds like CephFS to me On 08/01/2018 09:33 AM, Will Zhao wrote: > Hi: >I want to use ceph rbd, because it shows better performance. But I don't > like the kernel module and iSCSI target process. So here are my requirements: >I don't want to map it and mount it, but I still want

Re: [ceph-users] Running 12.2.5 without problems, should I upgrade to 12.2.7 or wait for 12.2.8?

2018-08-10 Thread ceph
y predictions when the 12.2.8 release will be available? > > >Micha Krause

Re: [ceph-users] Optane 900P device class automatically set to SSD not NVME

2018-08-12 Thread ceph
On 1 August 2018 10:33:26 CEST, Jake Grimmett wrote: >Dear All, Hello Jake, > >Not sure if this is a bug, but when I add Intel Optane 900P drives, >their device class is automatically set to SSD rather than NVME. > AFAIK ceph currently only differentiates between hdd and ssd
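On Luminous and later the auto-detected class can simply be overridden per OSD; a sketch (the osd id is an example):

    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class nvme osd.12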

Re: [ceph-users] [Jewel 10.2.11] OSD Segmentation fault

2018-08-12 Thread ceph
", "file_number": 4350} > -1> 2018-08-03 12:12:53.146753 7f12c38d0a80 0 osd.154 89917 load_pgs > 0> 2018-08-03 12:12:57.526910 7f12c38d0a80 -1 *** Caught signal >(Segmentation fault) ** > in thread 7f12c38d0a80 thread_name:ceph-osd > ceph version 10.2.11 (e4b

Re: [ceph-users] Least impact when adding PG's

2018-08-13 Thread ceph
uld be increased first - not sure which one, but the docs and mailing list history should be helpful. Hope I could give a bit of useful hints - Mehmet >Thanks, > >John

Re: [ceph-users] Still risky to remove RBD-Images?

2018-08-21 Thread ceph
On 20 August 2018 17:22:35 CEST, Mehmet wrote: >Hello, Hello me, > >AFAIK removing big RBD images would lead Ceph to produce blocked >requests - I don't mean ones caused by poor disks. > >Is this still the case with "Luminous (12.2.4)"? > To answer my qu

Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread ceph
age Weil" wrote: >> >Hi everyone, >> > >> >Please help me welcome Mike Perez, the new Ceph community manager! >> > >> >Mike has a long history with Ceph: he started at DreamHost working >on >> >OpenStack and Ceph back in the early da

Re: [ceph-users] Understanding the output of dump_historic_ops

2018-09-02 Thread ceph
"event": "done" >} >] >} >}, > >Seems like I have an operation that was delayed over 2 seconds in >queued_for_pg state. >What does that mean? What was it waiting for? > >Regards, >*Ronnie Lazar* >*R&D* > >T: +972 77 556-1727 >E: ron...@stratoscale.com > > >Web <http://www.stratoscale.com/> | Blog ><http://www.stratoscale.com/blog/> > | Twitter <https://twitter.com/Stratoscale> | Google+ ><https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts> > | Linkedin <https://www.linkedin.com/company/stratoscale> ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] How to setup Ceph OSD auto boot up on node reboot

2018-09-07 Thread ceph
Hi karri, On 4 September 2018 23:30:01 CEST, Pardhiv Karri wrote: >Hi, > >I created a ceph cluster manually (not using ceph-deploy). When I >reboot >the node the osd's don't come back up because the OS doesn't know that >it >needs to bring up the OSDs.

Re: [ceph-users] backup ceph

2018-09-18 Thread ceph
Hi, I assume that you are speaking of rbd only. Taking snapshots of rbd volumes and keeping all of them on the cluster is fine. However, this is no backup. A snapshot is only a backup if it is exported off-site On 09/18/2018 11:54 AM, ST Wong (ITSC) wrote: > Hi, > > We're newbie to
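A minimal sketch of what "exported off-site" can look like for rbd; pool, image, snapshot names and the backup host are examples:

    rbd snap create rbd/vm-disk@2018-09-18
    # first run: full export
    rbd export rbd/vm-disk@2018-09-18 - | ssh backuphost 'cat > /backup/vm-disk.full'
    # later runs: incremental diff against the previous snapshot
    rbd export-diff --from-snap 2018-09-17 rbd/vm-disk@2018-09-18 - | \
        ssh backuphost 'cat > /backup/vm-disk.2018-09-18.diff'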

Re: [ceph-users] backup ceph

2018-09-19 Thread ceph
For cephfs & rgw, it all depends on your needs, as with rbd. You may want to trust Ceph blindly, or you may back up all your data, just in case (better safe than sorry, as he said). To my knowledge, there is no (or little) impact from keeping a large number of snapshots on a cluster. With rbd, you

Re: [ceph-users] [CEPH]-[RADOS] Deduplication feature status

2018-09-27 Thread ceph
As of today, there is no such feature in Ceph Best regards, On 09/27/2018 04:34 PM, Gaël THEROND wrote: > Hi folks! > > As I’ll soon start to work on a new, really large and distributed CEPH > project for cold data storage, I’m checking out a few features’ availability > and status

Re: [ceph-users] RBD Mirror Question

2018-10-04 Thread ceph
Hello Vikas, Could you please tell us which commands you used to set up rbd-mirror? Would be great if you could provide a short how-to :) Thanks in advance - Mehmet On 2 October 2018 22:47:08 CEST, Vikas Rana wrote: >Hi, > >We have a CEPH 3 node cluster at primary site. We

Re: [ceph-users] Some questions concerning filestore --> bluestore migration

2018-10-04 Thread ceph
2018 at 4:25 AM Massimo Sgaravatto < >massimo.sgarava...@gmail.com> wrote: > >> Hi >> >> I have a ceph cluster, running luminous, composed of 5 OSD nodes, >which is >> using filestore. >> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA >

Re: [ceph-users] deep scrub error caused by missing object

2018-10-05 Thread ceph
Hello Roman, I am not sure if I can be of help, but perhaps these commands can help to find the objects in question: ceph health detail, rados list-inconsistent-pg rbd, rados list-inconsistent-obj 2.10d. I guess it is also interesting to know whether you use bluestore or filestore... Hth - Mehmet On
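Written out, those commands are as follows (pool "rbd" and pg 2.10d come from the thread; the final repair is optional and should only be run once the bad replica is understood):

    ceph health detail
    rados list-inconsistent-pg rbd
    rados list-inconsistent-obj 2.10d --format=json-pretty
    ceph pg repair 2.10d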

[ceph-users] rbd export(-diff) --whole-object

2018-03-09 Thread ceph
does it do if this feature is disabled ? Why is `whole-object` an option, and not the default behavior ? Regards,

Re: [ceph-users] rbd export(-diff) --whole-object

2018-03-09 Thread ceph
s it do if this feature is disabled ? > > It behaves the same way -- it will export the full object if at least > one byte has changed. > >> Why is `whole-object` an option, and not the default behavior ? > > ... because it can result in larger export-diffs. >
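For comparison, a sketch of the two invocations being discussed; pool, image and snapshot names are examples:

    rbd export-diff --from-snap snap1 rbd/image@snap2 image.diff
    rbd export-diff --whole-object --from-snap snap1 rbd/image@snap2 image-whole.diff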

Re: [ceph-users] Moving OSDs between hosts

2018-03-16 Thread ceph
w one. I'm running >Luminous 12.2.2 on Ubuntu 16.04 and everything was created with >ceph-deploy. > >What is the best course of action for moving these drives? I have read >some >posts that suggest I can simply move the drive and once the new OSD >node >sees the drive it will

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread ceph
The stock kernel from Debian is perfect. Spectre / meltdown mitigations are worthless from a Ceph point of view, and should be disabled (again, strictly from a Ceph point of view) If you need the luminous features, using the userspace implementations is required (librbd via rbd-nbd or qemu

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread ceph
e uses unsupported features: 0x38

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread ceph
il". >> rbd: map failed: (6) No such device or address >> # dmesg | tail -1 >> [1108045.667333] rbd: image truc: image uses unsupported features: 0x38 > > Those are rbd image features. Your email also mentioned "libcephfs via > fuse", so I assumed you
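Feature mask 0x38 corresponds to object-map (0x8), fast-diff (0x10) and deep-flatten (0x20). A sketch of disabling them so an older kernel client can map the image (image name "truc" comes from the thread):

    rbd feature disable truc deep-flatten fast-diff object-map
    rbd map truc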

Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread ceph
I would also check the utilization of your disks with tools like atop. Perhaps something related shows up in dmesg or the like? - Mehmet On 24 March 2018 08:17:44 CET, Sam Huracan wrote: >Hi guys, >We are running a production OpenStack backend by Ceph. > >At present, we are meet

Re: [ceph-users] Enable object map kernel module

2018-03-24 Thread ceph
On 24 March 2018 00:05:12 CET, Thiago Gonzaga wrote: >Hi All, > >I'm starting with ceph and faced a problem while using object-map > >root@ceph-mon-1:/home/tgonzaga# rbd create test -s 1024 --image-format >2 >--image-feature exclusive-lock >root@ceph-mon-1:/home

Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread ceph
SQL is a no brainer really) >-- Original message -- From: Sam Huracan Date: Sat 24 March 2018 19:20 To: c...@elchaka.de; Cc: >ceph-users@lists.ceph.com; Subject: Re: [ceph-users] Fwd: High IOWait Issue >This is from iostat: >I'm using Ceph jewel, has no HW error. Cep

Re: [ceph-users] Enable object map kernel module

2018-03-26 Thread ceph
This is an extra package: rbd-nbd On 03/26/2018 04:41 PM, Thiago Gonzaga wrote: > It seems the bin is not present, is it part of ceph packages? > > tgonzaga@ceph-mon-3:~$ sudo rbd nbd map test > /usr/bin/rbd-nbd: exec failed: (2) No such file or directory > rbd: rbd-nbd failed w
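A quick sketch of installing and using it; package manager and image name are examples:

    apt-get install rbd-nbd         # yum/dnf install rbd-nbd on RPM-based systems
    rbd-nbd map rbd/test            # or: rbd nbd map rbd/test
    mkfs.xfs /dev/nbd0 && mount /dev/nbd0 /mnt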

Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread ceph
On 4 April 2018 20:58:19 CEST, Robert Stanford wrote: >I read a couple of versions ago that ceph-deploy was not recommended >for >production clusters. Why was that? Is this still the case? We have a I cannot imagine that. I have used it now for a few versions before 2.0 and it works

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-04-08 Thread ceph
Hi Marc, On 7 April 2018 18:32:40 CEST, Marc Roos wrote: > >How do you resolve these issues? > In my case I could get rid of this by deleting the existing snapshots. - Mehmet > >Apr 7 22:39:21 c03 ceph-osd: 2018-04-07 22:39:21.928484 7f0826524700 >-1 >osd.13 pg_epo

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-04-08 Thread ceph
o:c...@elchaka.de] >Sent: Sunday 8 April 2018 10:44 >To: ceph-users@lists.ceph.com >Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for >$object? > >Hi Marc, > >On 7 April 2018 18:32:40 CEST, Marc Roos wrote >: >> >>How do you resolve these i

Re: [ceph-users] Deleting an rbd image hangs

2018-05-08 Thread ceph
der, the snapshot id is now gone. > Hmm... that makes me curious... So when I have a VM image (rbd) on Ceph and take one or more snapshots of this image, do I *have* to delete the snapshot(s) completely first, before I delete the original image? How can we then get rid of this orph

Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-05-26 Thread ceph
join us for the next one on May 28: > >https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/ > >The presented topic is "High available (active/active) NFS and CIFS >exports upon CephFS". > >Kindest Regards >-- >Robert Sander >Heinlein Support GmbH >S

[ceph-users] Fwd: v13.2.0 Mimic is out

2018-06-01 Thread ceph
FYI From: "Abhishek" To: "ceph-devel" , "ceph-users" , ceph-maintain...@ceph.com, ceph-annou...@ceph.com Sent: Friday 1 June 2018 14:11:00 Subject: v13.2.0 Mimic is out We're glad to announce the first stable release of Mimic, the next long term release se

Re: [ceph-users] rbd map hangs

2018-06-07 Thread ceph
Just a bet: do you have an inconsistent MTU across your network ? I already had your issue when the OSDs and client were using jumbo frames, but the MONs did not (or something like that) On 06/07/2018 05:12 AM, Tracy Reed wrote: > > Hello all! I'm running luminous with old style non-bluestore

Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

2018-06-08 Thread ceph
overhead, > - everywhere 10Gb is recommended because of better latency. (I even > posted here something to make ceph better performing with 1Gb eth, > disregarded because it would add complexity, fine, I can understand) > > And then because of some start-up/automation issues (because th

Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-06-12 Thread ceph
Great, On 5 June 2018 17:13:12 CEST, Robert Sander wrote: >Hi, > >On 27.05.2018 01:48, c...@elchaka.de wrote: >> >> Very interested in the slides/vids. > >Slides are now available: >https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/ Thank you very much

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-12 Thread ceph
Hello Paul, On 5 June 2018 22:17:15 CEST, Paul Emmerich wrote: >Hi, > >If anyone wants to play around with Ceph on Debian: I just made our >mirror >for our >dev/test image builds public: > >wget -q -O- 'https://static.croit.io/keys/release.asc'

Re: [ceph-users] How to throttle operations like "rbd rm"

2018-06-13 Thread ceph
Hi yao, IIRC there is a *sleep* option which is useful when a delete operation is being done by Ceph - sleep_trim or something like that. - Mehmet On 7 June 2018 04:11:11 CEST, Yao Guotao wrote: >Hi Jason, > > >Thank you very much for your reply. >I think the RBD tras
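Assuming the option half-remembered here is osd_snap_trim_sleep (it throttles snapshot trimming rather than rbd rm itself), a sketch of injecting it at runtime; the value is purely illustrative:

    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'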

Re: [ceph-users] How to throttle operations like "rbd rm"

2018-06-21 Thread ceph
Hi Paul, On 14 June 2018 00:33:09 CEST, Paul Emmerich wrote: >2018-06-13 23:53 GMT+02:00 : > >> Hi yao, >> >> IIRC there is a *sleep* option which is useful when a delete operation >is >> being done by Ceph - sleep_trim or something like that. >> >

Re: [ceph-users] Kraken bluestore compression

2017-06-06 Thread ceph
ions that compression is available in Kraken for > bluestore OSDs, however, I can find almost nothing in the documentation > that indicates how to use it. > > I've found: > - http://docs.ceph.com/docs/master/radosgw/compression/ > - http://ceph.com/releases/v11-2-0-kraken-released/ &

Re: [ceph-users] risk mitigation in 2 replica clusters

2017-06-21 Thread ceph
upkeep than because of real failures) you're suddenly drastically > increasing the risk of data-loss. So I find myself wondering if there is a > way to tell Ceph I want an extra replica created for a particular PG or set > thereof, e.g., something that would enable the functional equivale

Re: [ceph-users] risk mitigation in 2 replica clusters

2017-06-21 Thread ceph
et awareness, i.e., secondary >> disk >>> failure based purely on chance of any 1 of the remaining 99 OSDs failing >>> within recovery time). 5 nines is just fine for our purposes, but of >> course >>> multiple disk failures are only part of the story. >>

Re: [ceph-users] OSD behavior, in case of its journal disk (either HDD or SSD) failure

2016-01-25 Thread ceph
Hi, I tried this a month ago. Unfortunately the script did not work out, but if you do the described steps manually it works. The important thing is that /var/lib/ceph/osd.x/Journal (not sure about the path) should point to the right place where your journal should be. On 25 January 2016 16:48
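A rough sketch of re-pointing a filestore OSD's journal by hand; the osd id and the partition UUID placeholder are examples, and the OSD must be stopped first:

    ceph-osd -i 3 --flush-journal
    ln -sfn /dev/disk/by-partuuid/<journal-part-uuid> /var/lib/ceph/osd/ceph-3/journal
    ceph-osd -i 3 --mkjournal
    # then start osd.3 again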

[ceph-users] KVstore vs filestore

2016-01-26 Thread ceph
Hi, I'm wondering about the leveldb-based KVStore. Is it a full drop-in replacement for filestore ? I mean, can I store multiple TB in a levelDB OSD ? Are there any restrictions for leveldb ? (object size etc) Is the leveldb-based KVStore considered by the ceph community as "stable" ? Can

Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread ceph
mance of > cluster. Something like Linux page cache for OSD write operations. > > I assume that by default Linux page cache can use free memory to improve > OSD read performance ( please correct me if i am wrong). But how about OSD > write improvement , How to improve that with free

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread ceph
; On a few servers, updated from Hammer to Infernalis, and from Debian >>> Wheezy to Jessie, I can see that it seems to have some mixes between old >>> sysvinit "ceph" script and the new ones on systemd. >>> >>> I always have an /etc/init.d/ceph old s

Re: [ceph-users] [Hammer upgrade]: procedure for upgrade

2016-03-03 Thread ceph
Hi, As the docs say: mon, then osd, then rgw. Restart each daemon after upgrading the code. Works fine On 03/03/2016 22:11, Andrea Annoè wrote: > Hi to all, > An architecture of Ceph has: > 1 RGW > 3 MON > 4 OSD > > Has someone tested the procedure for upgrading a Ceph architec
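With systemd-style unit names that amounts to something like the following, one daemon (or host) at a time, waiting for quorum / HEALTH_OK in between; adjust to your init system and instance names:

    systemctl restart ceph-mon@$(hostname -s)
    systemctl restart ceph-osd@<id>
    systemctl restart ceph-radosgw@rgw.$(hostname -s)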

Re: [ceph-users] Adding new disk/OSD to ceph cluster

2016-04-09 Thread ceph
Without knowing proxmox-specific stuff .. #1: just create an OSD the regular way #2: it is safe; however, you may either create a SPOF (osd_crush_chooseleaf_type = 0), or underuse your cluster (osd_crush_chooseleaf_type = 1) On 09/04/2016 14:39, Mad Th wrote: > We have a 3 node proxmox/c
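A minimal ceph.conf sketch of the two choices, set before the cluster and pools are created:

    [global]
    osd_crush_chooseleaf_type = 0   # replicate across OSDs; copies may share a host (a SPOF)
    #osd_crush_chooseleaf_type = 1  # default: replicate across hosts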

Re: [ceph-users] Deprecating ext4 support

2016-04-12 Thread ceph
On Tue, 12 Apr 2016, Jan Schermer wrote: > I'd like to raise these points, then > > 1) some people (like me) will never ever use XFS if they have a choice > given no choice, we will not use something that depends on XFS Huh ? > 3) doesn't majority of Ceph users only c

Re: [ceph-users] Deprecating ext4 support

2016-04-12 Thread ceph
ey have a choice >>> given no choice, we will not use something that depends on XFS >>> >>> 2) choice is always good >> >> Okay! >> >>> 3) doesn't majority of Ceph users only care about RBD? >> >> Probably that's true now. We

Re: [ceph-users] Deprecating ext4 support

2016-04-12 Thread ceph
On 12/04/2016 22:33, Jan Schermer wrote: > I don't think it's apples and oranges. > If I export two files via losetup over iSCSI and make a raid1 swraid out of > them in guest VM, I bet it will still be faster than ceph with bluestore. > And yet it will provide the same guar

Re: [ceph-users] Using s3 (radosgw + ceph) like a cache

2016-04-24 Thread ceph

Re: [ceph-users] Verifying the location of the wal

2018-10-28 Thread ceph
IIRC there is a command like `ceph osd metadata` where you should be able to find information like this. - Mehmet On 21 October 2018 19:39:58 CEST, Robert Stanford wrote: > I did exactly this when creating my osds, and found that my total >utilization is about the same as the sum
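A sketch of what that looks like; the osd id is an example, and the metadata dump includes the block/db/wal device paths:

    ceph osd metadata 0 | grep -iE 'bluefs|bdev|path'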

Re: [ceph-users] ceph.conf mon_max_pg_per_osd not recognized / set

2018-10-31 Thread ceph
Isn't this a mgr variable ? On 10/31/2018 02:49 PM, Steven Vacaroaia wrote: > Hi, > > Any idea why different value for mon_max_pg_per_osd is not "recognized" ? > I am using mimic 13.2.2 > > Here is what I have in /etc/ceph/ceph.conf > >
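If it is indeed read by the mon/mgr rather than by the OSDs, putting it in an [osd] section or only restarting OSDs will have no effect. A sketch using Mimic's centralized config (the value is illustrative):

    ceph config set global mon_max_pg_per_osd 300
    # verify on a monitor node
    ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd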

[ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread ceph
Hi, I have some wild freezes using cephfs with the kernel driver. For instance: [Tue Dec 4 10:57:48 2018] libceph: mon1 10.5.0.88:6789 session lost, hunting for new mon [Tue Dec 4 10:57:48 2018] libceph: mon2 10.5.0.89:6789 session established [Tue Dec 4 10:58:20 2018] ceph: mds0 caps stale

Re: [ceph-users] RBD snapshot atomicity guarantees?

2018-12-18 Thread ceph
haw), but > preceded by an fstrim. With virtio-scsi, using fstrim propagates the > discards from within the VM to Ceph RBD (if qemu is configured > accordingly), > and a lot of space is saved. > > We have yet to observe these hangs, we are running this with ~5 VMs with > ~10 d
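A sketch of that sequence with libvirt and the qemu guest agent; VM, pool, image and snapshot names are examples:

    virsh domfstrim vm01            # propagate discards to RBD first
    virsh domfsfreeze vm01
    rbd snap create rbd/vm01-disk@nightly
    virsh domfsthaw vm01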

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-16 Thread ceph
in Nautilus, we'll be doing it for Octopus. > > Are there major python-{rbd,cephfs,rgw,rados} users that are still Python > 2 that we need to be worried about? (OpenStack?) > > sage

Re: [ceph-users] Using Ceph central backup storage - Best practice creating pools

2019-01-22 Thread ceph
ct, I believe you have to implement that on top of Ceph. For instance, let's say you simply create a pool, with an rbd volume in it. You then create a filesystem on that, and map it on some server. Finally, you can push your files onto that mountpoint, using various Linux users, ACLs or whatever: beyond
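A minimal sketch of that layering; pool, image, size and mountpoint are examples:

    ceph osd pool create backups 64
    ceph osd pool application enable backups rbd
    rbd create backups/backup01 --size 102400      # 100 GiB
    rbd map backups/backup01
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /srv/backups                   # per-client users/ACLs live on top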

Re: [ceph-users] Configure libvirt to 'see' already created snapshots of a vm rbd image

2019-01-24 Thread ceph
>on >the rbd image it is using for the vm? > >I have already a vm running connected to the rbd pool via >protocol='rbd', and rbd snap ls is showing for snapshots.
