Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2019-09-23 Thread V A Prabha
Dear Jason A small update on the setup: the image syncing showed 8% and remained in the same status... after 1 day I can see the image got replicated to the other side. Please answer a few of my queries: 1. Will the image sync work one by one after 1 image, or do all images get synced at the s

Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2019-08-26 Thread Jason Dillaman
On Mon, Aug 26, 2019 at 7:54 AM V A Prabha wrote: > > Dear Jason > I shall explain my setup first > The DR centre is 300 km away from the site > Site-A - OSD 0 - 1 TB Mon - 10.236.248.XX/24 > Site-B - OSD 0 - 1 TB Mon - 10.236.228.XX/27 - RBD-Mirror daemon > running > All por

Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2019-08-26 Thread V A Prabha
Dear Jason I shall explain my setup first The DR centre is 300 km away from the site Site-A - OSD 0 - 1 TB Mon - 10.236.248.XX/24 Site-B - OSD 0 - 1 TB Mon - 10.236.228.XX/27 - RBD-Mirror daemon running All ports are open and there is no firewall... Connectivity is there between My ini
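
For reference, a minimal one-way setup along these lines looks roughly as follows (a sketch only; pool name "rbd", cluster names "site-a"/"site-b", and the daemon instance name are placeholders, and images must have the journaling feature enabled):

  # on both clusters: enable pool-level mirroring on pool "rbd"
  rbd --cluster site-a mirror pool enable rbd pool
  rbd --cluster site-b mirror pool enable rbd pool
  # on Site-B only (one-way): register Site-A as a peer and run the daemon there
  rbd --cluster site-b mirror pool peer add rbd client.site-a@site-a
  systemctl start ceph-rbd-mirror@admin   # instance name matches the local client id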

Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2019-08-20 Thread Jason Dillaman
On Tue, Aug 20, 2019 at 9:23 AM V A Prabha wrote: > I too face the same problem as mentioned by Sat > All the images created at the primary site are in the state : down+ > unknown > Hence in the secondary site the images is 0 % up + syncing all time > No progress > The only error log th

Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2019-08-20 Thread V A Prabha
I too face the same problem as mentioned by Sat. All the images created at the primary site are in the state: down+unknown. Hence at the secondary site the images are 0% up + syncing all the time. No progress. The only error log that is continuously hitting is 2019-08-20 18:04:38.556908 7f7d4

Re: [ceph-users] Question regarding client-network

2019-01-31 Thread Buchberger, Carsten
Thank you - we were expecting that, but wanted to be sure. By the way - we are running our clusters on IPv6-BGP, to achieve massive scalability and load-balancing ;-) Kind regards Carsten Buchberger WiTCOM Wiesbadener Informations- und Telekommunikations

Re: [ceph-users] Question regarding client-network

2019-01-30 Thread Robert Sander
On 30.01.19 08:55, Buchberger, Carsten wrote: > So as long as there is ip-connectivity between the client, and the > client-network ip –adressses of our ceph-cluster everything is fine ? Yes, client traffic is routable. Even inter-OSD traffic is routable, there are reports from people running ro
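
In ceph.conf terms, the split under discussion looks like this (a sketch; the subnets are invented examples):

  [global]
  public network  = 192.0.2.0/24     # client <-> mon/OSD traffic; must be reachable from clients
  cluster network = 198.51.100.0/24  # inter-OSD replication and heartbeat traffic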

Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2018-08-27 Thread sat
> -Original Message- > From: Jason Dillaman > Sent: Friday, August 24, 2018 12:09 AM > To: sat > Cc: ceph-users > Subject: Re: [ceph-users] [question] one-way RBD mirroring doesn't work > > On Thu, Aug 23, 2018 at 10:56 AM sat wrote: > > > >

Re: [ceph-users] Question about 'firstn|indep'

2018-08-23 Thread Gregory Farnum
On Thu, Aug 23, 2018 at 10:21 AM Cody wrote: > So, is it okay to say that compared to the 'firstn' mode, the 'indep' > mode may have the least impact on a cluster in the event of an OSD > failure? Could I use 'indep' for a replica pool as well? > You could, but shouldn't. Imagine if the primary OSD fa
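
The distinction shows up directly in the rule text; a sketch of both variants (rule names invented, headers abbreviated):

  rule replicated_rule {                      # firstn: a failed pick shifts all later replicas
      ...
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }
  rule ec_rule {                              # indep: a failed pick is replaced in place,
      ...                                     # so surviving EC shards keep their positions
      step take default
      step chooseleaf indep 0 type host
      step emit
  }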

Re: [ceph-users] Question about 'firstn|indep'

2018-08-23 Thread Cody
So, is it okay to say that compared to the 'firstn' mode, the 'indep' mode may have the least impact on a cluster in the event of an OSD failure? Could I use 'indep' for a replica pool as well? Thank you! Regards, Cody On Wed, Aug 22, 2018 at 7:12 PM Gregory Farnum wrote: > > On Wed, Aug 22, 2018 at 1

Re: [ceph-users] [question] one-way RBD mirroring doesn't work

2018-08-23 Thread Jason Dillaman
On Thu, Aug 23, 2018 at 10:56 AM sat wrote: > > Hi, > > > I'm trying to make a one-way RBD mirrored cluster between two Ceph clusters. > But it > hasn't worked yet. It seems to succeed, but after making an RBD image from > the local cluster, > it's considered as "unknown". > > ``` > $ sudo rbd --clus

Re: [ceph-users] Question about 'firstn|indep'

2018-08-22 Thread Gregory Farnum
On Wed, Aug 22, 2018 at 12:56 AM Konstantin Shalygin wrote: > > Hi everyone, > > > > I read an earlier thread [1] that made a good explanation on the 'step > > choose|chooseleaf' option. Could someone further help me to understand > > the 'firstn|indep' part? Also, what is the relationship betwee

Re: [ceph-users] Question about 'firstn|indep'

2018-08-22 Thread Konstantin Shalygin
Hi everyone, I read an earlier thread [1] that gave a good explanation of the 'step choose|chooseleaf' option. Could someone further help me to understand the 'firstn|indep' part? Also, what is the relationship between 'step take' and 'step choose|chooseleaf' when it comes to defining a failure dom

Re: [ceph-users] Question to avoid service stop when osd is full

2018-05-17 Thread 渥美 慶彦
Thank you, David. I found the "ceph osd pool set-quota" command. I think using this command on the SSD pool is useful to avoid the quoted problem, isn't it? best regards On 2018/04/10 5:22, David Turner wrote: The proper way to prevent this is to set your full ratios safe and monitor your disk
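
For reference, set-quota takes either a byte or an object limit (pool name and values are examples):

  ceph osd pool set-quota ssd-pool max_bytes 1099511627776   # 1 TiB
  ceph osd pool set-quota ssd-pool max_objects 1000000
  ceph osd pool set-quota ssd-pool max_bytes 0               # 0 removes the quota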

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-11 Thread David Turner
That's right. I didn't actually use Jewel for very long. I'm glad it worked for you. On Fri, May 11, 2018, 4:49 PM Webert de Souza Lima wrote: > Thanks David. > Although you mentioned this was introduced with Luminous, it's working > with Jewel. > > ~# ceph osd pool stats > >

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-11 Thread Webert de Souza Lima
Thanks David. Although you mentioned this was introduced with Luminous, it's working with Jewel.

  ~# ceph osd pool stats
  Fri May 11 17:41:39 2018
  pool rbd id 5
    client io 505 kB/s rd, 3801 kB/s wr, 46 op/s rd, 27 op/s wr
  pool rbd_cache id 6
    client io 2538 kB/s rd,

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-11 Thread David Turner
`ceph osd pool stats` with the option to specify the pool you are interested in should get you the breakdown of IO per pool. This was introduced with Luminous. On Fri, May 11, 2018 at 2:39 PM Webert de Souza Lima wrote: > I think ceph doesn't have IO metrics with filters by pool, right? I see IO
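
The per-pool form is just the pool name appended, e.g. (pool name from the thread):

  ceph osd pool stats rbd_cache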

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-11 Thread Webert de Souza Lima
I think ceph doesn't have IO metrics with filters by pool, right? I see IO metrics from clients only: ceph_client_io_ops ceph_client_io_read_bytes ceph_client_io_read_ops ceph_client_io_write_bytes ceph_client_io_write_ops and pool "byte" metrics, but not "io": ceph_pool(write/read)_bytes(_total)

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-09 Thread Webert de Souza Lima
Hey Jon! On Wed, May 9, 2018 at 12:11 PM, John Spray wrote: > It depends on the metadata intensity of your workload. It might be > quite interesting to gather some drive stats on how many IOPS are > currently hitting your metadata pool over a week of normal activity. > Any ceph built-in tool f

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-09 Thread John Spray
On Wed, May 9, 2018 at 3:32 PM, Webert de Souza Lima wrote: > Hello, > > Currently, I run Jewel + Filestore for cephfs, with SSD-only pools used for > cephfs-metadata, and HDD-only pools for cephfs-data. The current > metadata/data ratio is something like 0.25% (50GB metadata for 20TB data). > > R

Re: [ceph-users] Question: CephFS + Bluestore

2018-05-09 Thread Webert de Souza Lima
I'm sorry, I have mixed up some information. The actual ratio I have now is 0.0005% (*100MB for 20TB data*). Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 9, 2018 at 11:32 AM, Webert de Souza Lima wrote: > Hello, > > Current

Re: [ceph-users] Question to avoid service stop when osd is full

2018-04-09 Thread David Turner
The proper way to prevent this is to set your full ratios safe and monitor your disk usage. That will allow you to either clean up old data or add new storage before you get to 95% full on any OSDs. What I mean by setting your full ratios safe is that if your use case can fill 20% of your disk spa
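
The ratios in question can be inspected, and on Luminous and later adjusted at runtime; a sketch (values are examples):

  ceph osd dump | grep ratio          # show current nearfull/full ratios
  ceph osd set-nearfull-ratio 0.80    # warn earlier
  ceph osd set-full-ratio 0.90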

Re: [ceph-users] Question about Erasure-coding clusters and resiliency

2018-02-13 Thread Caspar Smit
Hi Tim, With the current setup you can only handle 1 host failure without losing any data, BUT everything will probably freeze until you bring the failed node (or the OSDs in it) back up. Your setup indicates k=6, m=2 and all 8 shards are distributed to 4 hosts (2 shards/osds per host). Be awar
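
For context, a profile like the one described would be created roughly like this on Luminous and later (names are invented); note that with crush-failure-domain=host you need at least k+m=8 hosts, so the 4-host layout above implies a failure domain of osd:

  ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec62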

Re: [ceph-users] question on rbd resize

2018-01-03 Thread Richard Hesketh
No, most filesystems can be expanded pretty trivially (shrinking is a more complex operation but usually also doable). Assuming the likely case of an ext2/3/4 filesystem, the command "resize2fs /dev/rbd0" should resize the FS to cover the available space in the block device. Rich On 03/01/18 1
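
Put together, the sequence discussed in this thread is (a sketch; it assumes the image from the example below is mapped at /dev/rbd0 and carries ext4):

  rbd resize --size 10240 rbd/test   # grow the image to 10 GB
  resize2fs /dev/rbd0                # then grow the filesystem; data is preserved
  # for XFS, the equivalent is: xfs_growfs <mountpoint>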

Re: [ceph-users] question on rbd resize

2018-01-03 Thread 13605702...@163.com
hi Jason the data won't be lost if I resize the filesystem in the image? thanks 13605702...@163.com From: Jason Dillaman Date: 2018-01-03 20:57 To: 13605702...@163.com CC: ceph-users Subject: Re: [ceph-users] question on rbd resize You need to resize the filesystem within the RBD

Re: [ceph-users] question on rbd resize

2018-01-03 Thread Jason Dillaman
You need to resize the filesystem within the RBD block device. On Wed, Jan 3, 2018 at 7:37 AM, 13605702...@163.com <13605702...@163.com> wrote: > hi > > an rbd image is out of space (old size is 1GB), so I resized it to 10GB > > # rbd info rbd/test > rbd image 'test': > size 10240 MB in 2560 objects

Re: [ceph-users] Question about librbd with qemu-kvm

2018-01-02 Thread Alexandre DERUMIER
It's not possible to use multiple threads per disk in qemu currently (it's on the qemu roadmap), but you can create multiple disks/rbd images and use multiple qemu iothreads (1 per disk). (BTW, I'm able to reach around 70k IOPS max with 4k reads, with a 3.1 GHz CPU, rbd_cache=none, disabling debug and
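
The one-iothread-per-disk pattern looks roughly like this on the qemu command line (an untested sketch; ids and image names are invented):

  qemu-system-x86_64 ... \
    -object iothread,id=io1 \
    -drive file=rbd:rbd/disk1,format=raw,if=none,id=d1 \
    -device virtio-blk-pci,drive=d1,iothread=io1 \
    -object iothread,id=io2 \
    -drive file=rbd:rbd/disk2,format=raw,if=none,id=d2 \
    -device virtio-blk-pci,drive=d2,iothread=io2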

Re: [ceph-users] Question about BUG #11332

2017-12-04 Thread Gregory Farnum
On Thu, Nov 23, 2017 at 1:55 AM 许雪寒 wrote: > Hi, everyone. > > We also encountered this problem: http://tracker.ceph.com/issues/11332. > And we found that this seems to be caused by the lack of mutual exclusion > between applying "trim" and handling subscriptions. Since > "build_incremental" ope

Re: [ceph-users] question pool usage vs. pool usage raw

2017-11-23 Thread Konstantin Shalygin
Next time, please use the "Reply to All" button so that a copy of your message goes to the ML. On 11/24/2017 12:45 AM, bernhard glomm wrote: and is there ANY way to figure out how much space is being additionally consumed by the snapshots at the moment (either by pool, preferably, or by cluster?) The way i

Re: [ceph-users] question pool usage vs. pool usage raw

2017-11-23 Thread Konstantin Shalygin
What is the difference between the "usage" and the "raw usage" of a pool? Usage - is your data. Raw - is what your data actually uses with all copies (pool 'size' option). I.e. if your data is 1000G, your raw is 3000G.

Re: [ceph-users] Question about the Ceph's performance with spdk

2017-09-21 Thread Alejandro Comisario
Bump! I saw this in the documentation for Bluestore also: http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#spdk-usage Does anyone have any experience? On Thu, Jun 8, 2017 at 2:27 AM, Li,Datong wrote: > Hi all, > > I'm new to Ceph, and I wonder about the performance r

Re: [ceph-users] Question about rbd-mirror

2017-06-29 Thread Jason Dillaman
On Wed, Jun 28, 2017 at 11:42 PM, YuShengzuo wrote: > Hi Jason Dillaman, > > > > I am using rbd-mirror now (release Jewel). > > > > 1. > > Many websites and other sources introducing rbd-mirror note that the two > ceph clusters should have the 'same fsid'. > > But nothing seems bad or wrong when I

Re: [ceph-users] Question about PGMonitor::waiting_for_finished_proposal

2017-06-01 Thread Joao Eduardo Luis
On 06/01/2017 05:35 AM, 许雪寒 wrote: Hi, everyone. Recently, I'm reading the source code of Monitor. I found that, in the PGMonitor::prepare_pg_stats() method, a callback C_Stats is put into PGMonitor::waiting_for_finished_proposal. I wonder, if a previous PGMap incremental is in PAXOS's propose

Re: [ceph-users] [Question] RBD Striping

2017-04-28 Thread Jason Dillaman
Here is a background on Ceph striping [1]. By default, RBD will stripe data with a stripe unit of 4MB and a stripe count of 1. Decreasing the default RBD image object size will balloon the number of objects in your backing Ceph cluster but will also result in less data to copy during snapshot and c
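
Striping parameters are fixed at image creation; a sketch (names and sizes are examples; the stripe unit must evenly divide the object size, and older rbd releases express the object size as --order instead of --object-size):

  rbd create mypool/myimage --size 10G \
      --object-size 4M --stripe-unit 64K --stripe-count 8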

Re: [ceph-users] Question about the OSD host option

2017-04-26 Thread Gregory Farnum
On Fri, Apr 21, 2017 at 12:07 PM, Fabian wrote: > Hi Everyone, > > I'm playing around a bit with ceph on a test cluster with 3 servers (each MON > and OSD at the same time). > I use some self-written ansible rules to deploy the config and create > the OSDs with ceph-disk. Because ceph-disk uses the next f

Re: [ceph-users] Question about the OSD host option

2017-04-22 Thread Henrik Korkuc
mon.* and osd.* sections are not mandatory in the config. So unless you want to set something per daemon, you can skip them completely. On 17-04-21 19:07, Fabian wrote: Hi Everyone, I'm playing around a bit with ceph on a test cluster with 3 servers (each MON and OSD at the same time). I use some self
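
A minimal ceph.conf in that spirit (placeholders throughout):

  [global]
  fsid = <your-fsid>
  mon initial members = node1, node2, node3
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
  # no [mon.x] or [osd.n] sections required unless a daemon needs an override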

Re: [ceph-users] Question about RadosGW subusers

2017-04-13 Thread Ben Hines
> > Sent: Thursday, 13 April 2017, 20:15 > From: "Trey Palmer" > To: ceph.nov...@habmalnefrage.de > Cc: "Trey Palmer" , ceph-us...@ceph.com > Subject: Re: [ceph-users] Question about RadosGW subusers > > Anton, > > It turns out that Adam Emerson is

Re: [ceph-users] Question about RadosGW subusers

2017-04-13 Thread ceph . novice
From: "Trey Palmer" To: ceph.nov...@habmalnefrage.de Cc: "Trey Palmer" , ceph-us...@ceph.com Subject: Re: [ceph-users] Question about RadosGW subusers Anton, It turns out that Adam Emerson is trying to get bucket policies and roles merged in time for Luminous: https://github

Re: [ceph-users] Question about RadosGW subusers

2017-04-13 Thread Trey Palmer
Anton, It turns out that Adam Emerson is trying to get bucket policies and roles merged in time for Luminous: https://github.com/ceph/ceph/pull/14307 Given this, I think we will only be using subusers temporarily as a method to track which human or service did what in which bucket. This seems t

Re: [ceph-users] Question about RadosGW subusers

2017-04-13 Thread ceph . novice
Hey Trey. Sounds great, we were discussing the same kind of requirements and couldn't agree on/find something "useful"... so THANK YOU for sharing!!! It would be great if you could provide some more details or an example of how you configure the "bucket user" and sub-users and all that stuff. Even
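
The pattern being described boils down to one RGW user per bucket plus a subuser per human or service; a sketch of the commands (uids and names are invented):

  radosgw-admin user create --uid=bucketowner --display-name="Bucket owner"
  radosgw-admin subuser create --uid=bucketowner --subuser=bucketowner:alice \
      --access=full --key-type=s3 --gen-access-key --gen-secret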

Re: [ceph-users] Question about unfound objects

2017-03-30 Thread Nick Fisk
roll back??? From: Steve Taylor [mailto:steve.tay...@storagecraft.com] Sent: 30 March 2017 20:07 To: n...@fisk.me.uk; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Question about unfound objects One other thing to note with this experience is that we do a LOT of RBD snap trimming, l

Re: [ceph-users] Question about unfound objects

2017-03-30 Thread Steve Taylor
One other thing to note with this experience is that we do a LOT of RBD snap trimming, like hundreds of millions of objects per day added to our snap_trimqs globally. All of the unfound objects in these cases were found on other OSDs in the cluster with identical contents, but associated with di

Re: [ceph-users] Question about unfound objects

2017-03-30 Thread Steve Taylor
Good suggestion, Nick. I actually did that at the time. The "ceph osd map" wasn't all that interesting because the OSDs had been outed and their PGs had been mapped to new OSDs. Everything appeared to be in order with the PGs being mapped to the right number of new OSDs. The PG mappings looked f

Re: [ceph-users] Question about unfound objects

2017-03-30 Thread Nick Fisk
Hi Steve, If you can recreate or if you can remember the object name, it might be worth trying to run "ceph osd map" on the objects and see where it thinks they map to. And/or maybe pg query might show something? Nick From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behal
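
For reference, the commands Nick suggests (pool, object, and PG ids are placeholders):

  ceph osd map mypool myobject   # where CRUSH currently maps the object
  ceph pg 1.2f query             # detailed peering/recovery state of one PG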

Re: [ceph-users] question about block sizes, rados objects and file striping (and maybe more)

2017-03-20 Thread Jason Dillaman
On Mon, Mar 20, 2017 at 6:49 PM, Alejandro Comisario wrote: > Jason, thanks for the reply, you really got my question right. > So, some doubts that might show that I lack some general knowledge. > > When I read that someone is testing a ceph cluster with sequential 4k > block writes, could that

Re: [ceph-users] question about block sizes, rados objects and file striping (and maybe more)

2017-03-20 Thread Alejandro Comisario
Jason, thanks for the reply, you really got my question right. So, some doubts that might show that I lack some general knowledge. When I read that someone is testing a ceph cluster with sequential 4k block writes, could that happen inside a VM that is using an RBD-backed OS? In that case

Re: [ceph-users] question about block sizes, rados objects and file striping (and maybe more)

2017-03-20 Thread Jason Dillaman
It's a very broad question -- are you trying to determine something more specific? Notionally, your DB engine will safely journal the changes to disk, commit the changes to the backing table structures, and prune the journal. Your mileage may vary depending on the specific DB engine and its configu

Re: [ceph-users] question about block sizes, rados objects and file striping (and maybe more)

2017-03-20 Thread Alejandro Comisario
anyone? On Fri, Mar 17, 2017 at 5:40 PM, Alejandro Comisario wrote: > Hi, it's been a while since I've been using Ceph, and still I'm a little > ashamed that when a certain situation happens, I don't have the knowledge > to explain or plan things. > > Basically what I don't know is, and I will do an exercis

Re: [ceph-users] Question regarding CRUSH algorithm

2017-02-17 Thread Richard Hesketh
On 16/02/17 20:44, girish kenkere wrote: > Thanks David, > > It's not quite what I was looking for. Let me explain my question in more > detail - > > This is an excerpt from the CRUSH paper; it explains how the crush algo running on > each client/osd maps a pg to an osd during the write operation [let's assu

Re: [ceph-users] Question regarding CRUSH algorithm

2017-02-16 Thread girish kenkere
Thanks David, It's not quite what I was looking for. Let me explain my question in more detail - This is an excerpt from the CRUSH paper; it explains how the crush algo running on each client/osd maps a pg to an osd during the write operation [let's assume]. *"Tree buckets are structured as a weighted binary

Re: [ceph-users] Question regarding CRUSH algorithm

2017-02-16 Thread David Turner
As a piece to the puzzle, the client always has an up to date osd map (which includes the crush map). If it's out of date, then it has to get a new one before it can request to read or write to the cluster. That way the client will never have old information and if you add or remove storage, t

Re: [ceph-users] Question about user's key

2017-01-20 Thread Joao Eduardo Luis
On 01/20/2017 03:52 AM, Chen, Wei D wrote: Hi, I have read through some documents about authentication and user management about ceph, everything works fine with me, I can create a user and play with the keys and caps of that user. But I cannot find where those keys or capabilities stored, obv
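
The keys live in the monitors' internal store rather than in plain files, and are meant to be read back through the auth interface; for example:

  ceph auth list               # every entity with its key and caps
  ceph auth get client.admin   # one entity's key and caps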

Re: [ceph-users] Question about user's key

2017-01-20 Thread Martin Palma
I don't know exactly where but I'm guessing in the database of the monitor server which should be located at "/var/lib/ceph/mon/". Best, Martin On Fri, Jan 20, 2017 at 8:55 AM, Chen, Wei D wrote: > Hi Martin, > > Thanks for your response! > Could you pls tell me where it is on the monitor nodes?

Re: [ceph-users] Question about user's key

2017-01-19 Thread Martin Palma
Hi, They are stored on the monitor nodes. Best, Martin On Fri, 20 Jan 2017 at 04:53, Chen, Wei D wrote: > Hi, > > > > I have read through some documents about authentication and user > management about ceph, everything works fine with me, I can create > > a user and play with the keys and cap

Re: [ceph-users] Question about writing a program that transfer snapshot diffs between ceph clusters

2016-11-01 Thread Wes Dillingham
You might want to have a look at this: https://github.com/camptocamp/ceph-rbd-backup/blob/master/ceph-rbd-backup.py I have a bash implementation of this, but it basically boils down to wrapping what peter said: an export-diff to stdout piped to an import-diff on a different cluster. The "transfer"
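
The pipeline described above, as a sketch (snapshots, pool/image names, and the remote host are placeholders):

  rbd export-diff --from-snap snap1 rbd/image@snap2 - \
      | ssh remote-host rbd import-diff - rbd/image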

Re: [ceph-users] Question about writing a program that transfer snapshot diffs between ceph clusters

2016-11-01 Thread Peter Maloney
On 11/01/16 10:22, Peter Maloney wrote: > On 11/01/16 06:57, xxhdx1985126 wrote: >> Hi, everyone. >> >> I'm trying to write a program based on the librbd API that transfers >> snapshot diffs between ceph clusters without the need for a temporary >> storage which is required if I use the "rbd export

Re: [ceph-users] Question about writing a program that transfer snapshot diffs between ceph clusters

2016-11-01 Thread Peter Maloney
On 11/01/16 06:57, xxhdx1985126 wrote: > Hi, everyone. > > I'm trying to write a program based on the librbd API that transfers > snapshot diffs between ceph clusters without the need for a temporary > storage which is required if I use the "rbd export-diff" and "rbd > import-diff" pair. You don't

Re: [ceph-users] Question about OSDSuperblock

2016-10-22 Thread xxhdx1985126
Sorry, sir. I don't quite follow you. I agree that the osds must get the current map to know who to contact so it can catch up. But it looks to me that the osd is getting the current map through get_map(superblock.current_epoch) in which the content of the variable superblock.current_

Re: [ceph-users] Question about OSDSuperblock

2016-10-22 Thread xxhdx1985126
Sorry, sir. I don't quite follow you. I agree that the osds must get the current map to know who to contact so it can catch up. But it looks to me that the osd is getting the current map through get_map(superblock.current_epoch) in which the variable superblock.current_epoch is read f

Re: [ceph-users] Question about OSDSuperblock

2016-10-22 Thread David Turner
The osd needs to know where it thought data was, in particular so it knows what it has. Then it gets the current map so it knows who to talk to so it can catch back up. Sent from my iPhone On Oct 22, 2016, at 7:12 AM, xxhdx1985126 <xxhdx1985...@163.com> wrote: Hi, everyone. I'm trying

Re: [ceph-users] Question on RGW MULTISITE and librados

2016-09-23 Thread Paul Nimbley
Paul -Original Message- From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com] Sent: Friday, September 23, 2016 10:44 AM To: Paul Nimbley Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Question on RGW MULTISITE and librados On Thu, Sep 22, 2016 at 1:52 PM, Paul Nimbley wrote: > Fa

Re: [ceph-users] Question on RGW MULTISITE and librados

2016-09-23 Thread Yehuda Sadeh-Weinraub
On Thu, Sep 22, 2016 at 1:52 PM, Paul Nimbley wrote: > Fairly new to ceph so please excuse any misused terminology. We’re > currently exploring the use of ceph as a replacement storage backend for an > existing application. The existing application has 2 requirements which > seemingly can be met

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
OK, I will try without creating them myself. Nevertheless, thanks a lot Christian for your patience; I will try more clever questions when I'm ready for them. On 5 Aug 2016 02:44, "Christian Balzer" wrote: Hello, On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote: > Maybe you are misspell

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Christian Balzer
Hello, On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote: > Maybe you are misspelling it, but in the docs they don't use whitespace but ':' > this is quite misleading if it works > I'm quoting/showing "ceph-disk", which is called by ceph-deploy, which indeed uses a ":". Christian > On 5 Aug 20

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
Maybe you are misspelling it, but in the docs they don't use whitespace but ':'; this is quite misleading if it works. On 5 Aug 2016 02:30, "Christian Balzer" wrote: > > Hello, > > On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote: > > > I am reading half your answer > > > > Do you mean that c

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Christian Balzer
Hello, On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote: > I am reading half your answer > > Do you mean that ceph will create the journal partitions by itself? > Yes, "man ceph-disk". > If so, it's cool and weird... > It can be very weird indeed. If sdc is your data (OSD) disk a
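
Concretely, the old-style invocations take a data device and a journal device and partition both themselves (a sketch; device and host names are examples):

  ceph-disk prepare /dev/sdc /dev/sdd    # what ceph-deploy calls underneath
  ceph-deploy osd create node1:sdc:sdd   # the same pair, colon-separated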

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
I am reading half your answer. Do you mean that ceph will create the journal partitions by itself? If so, it's cool and weird... On 5 Aug 2016 02:01, "Christian Balzer" wrote: > > Hello, > > you need to work on your google skills. ^_- > > I wrote about this just yesterday and if you se

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
Yeah, you are right. From what I understand, using a ceph user is a good idea, but the fact is that it didn't work. So I circumvented that by configuring ceph-deploy to use root. Was that the main goal? I don't think so. Thanks for your answer. On 5 Aug 2016 02:01, "Christian Balzer" wrote: > > He

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Christian Balzer
Hello, you need to work on your google skills. ^_- I wrote about this just yesterday and if you search for "ceph-deploy wrong permission" the second link is the issue description: http://tracker.ceph.com/issues/13833 So I assume your journal partitions are either pre-made or non-GPT. Christian

Re: [ceph-users] Question on Sequential Write performance at 4K blocksize

2016-07-13 Thread Christian Balzer
Hello, On Wed, 13 Jul 2016 18:15:10 + EP Komarla wrote: > Hi All, > > Have a question on the performance of sequential write @ 4K block sizes. > Which version of Ceph? Any significant ceph.conf modifications? > Here is my configuration: > > Ceph Cluster: 6 Nodes. Each node with :- > 20x
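
For comparison, the usual way to generate exactly that workload is fio (a sketch; file size and runtime are arbitrary):

  fio --name=seqwrite --rw=write --bs=4k --direct=1 --ioengine=libaio \
      --size=10G --runtime=60 --time_based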

Re: [ceph-users] Question about how to start ceph OSDs with systemd

2016-07-11 Thread Ernst Pijper
Hi Manuel, This is a well-known issue. You are definitely not the first one to hit this problem. Before Jewel, I (and others as well) added the line ceph-disk activate all to /etc/rc.local to get the OSDs running at boot. In Jewel, however, this doesn't work anymore. Now I add these lines to /et

Re: [ceph-users] Question about how to start ceph OSDs with systemd

2016-07-08 Thread Tom Barron
On 07/08/2016 11:59 AM, Manuel Lausch wrote: > hi, > > In the last days I do play around with ceph jewel on debian Jessie and > CentOS 7. Now I have a question about systemd on this Systems. > > I installed ceph jewel (ceph version 10.2.2 > (45107e21c568dd033c2f0a3107dec8f0b0e58374)) on debian

Re: [ceph-users] Question about object partial writes in RBD

2016-06-13 Thread Wido den Hollander
> On 13 June 2016 at 16:07, George Shuklin wrote: > > > Hello. > > How are objects handled in rbd? If a user writes 16k to an RBD image > with a 4MB object size, how much is written on the OSDs? 16k x > replication or 4MB x replication (+journals in both cases)? > librbd will write
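
Back-of-the-envelope for that case: RADOS writes only the bytes that change, so with 3x replication a 16k write costs roughly 16k x 3 = 48k on the data stores, plus about the same again in the (filestore) journals, i.e. on the order of 96k total, not 4MB x replication. The 4MB object size only bounds the object; it is not the write granularity.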

Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Jason Dillaman
The notice about image format 1 being deprecated was somewhat hidden in the release notes. Displaying that message when opening an existing format 1 image is overkill and should be removed (at least until we come up with some sort of online migration tool in a future Ceph release). [1] https://gi

Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Diego Castro
One more thing: I haven't seen anything regarding the following message: # rbd lock list 25091 2016-04-22 19:39:31.523542 7fd199d57700 -1 librbd::image::OpenRequest: RBD image format 1 is deprecated. Please copy this image to image format 2. Is it something that I should worry about? --- Diego Cast

Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Diego Castro
Yeah, I followed the release notes. Everything is working; I just hit this issue until I enabled the services individually. Tks --- Diego Castro / The CloudFather GetupCloud.com - Eliminamos a Gravidade 2016-04-22 12:24 GMT-03:00 Vasu Kulkarni : > Hope you followed the release notes and are on 0.94.4

Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Vasu Kulkarni
Hope you followed the release notes and are on 0.94.4 or above http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer 1) upgrade (ensure you don't have a user 'ceph' beforehand) 2) stop the service: /etc/init.d/ceph stop (since you are on centos/hammer) 3) change ownership
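
Step 3's ownership change is typically a recursive chown of the ceph state directory (path per the standard layout):

  chown -R ceph:ceph /var/lib/ceph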

Re: [ceph-users] Question about cache tier and backfill/recover

2016-03-26 Thread Bob R
Mike, No. It will simply rebuild the degraded PGs which have nothing to do with the cache tier. Bob On Mar 26, 2016 8:13 AM, "Mike Miller" wrote: > Christian, Bob, > > Recovery would be based on placement groups and those degraded groups >>> would only exist on the storage pool(s) rather than t

Re: [ceph-users] Question about cache tier and backfill/recover

2016-03-26 Thread Mike Miller
Christian, Bob, Recovery would be based on placement groups and those degraded groups would only exist on the storage pool(s) rather than the cache tier in this scenario. Precisely. They are entirely different entities. There may be partially identical data (clean objects) in them, but that w

Re: [ceph-users] Question about cache tier and backfill/recover

2016-03-25 Thread Christian Balzer
On Fri, 25 Mar 2016 14:14:37 -0700 Bob R wrote: > Mike, > > Recovery would be based on placement groups and those degraded groups > would only exist on the storage pool(s) rather than the cache tier in > this scenario. > Precisely. They are entirely different entities. There may be partially id

Re: [ceph-users] Question about cache tier and backfill/recover

2016-03-25 Thread Bob R
Mike, Recovery would be based on placement groups and those degraded groups would only exist on the storage pool(s) rather than the cache tier in this scenario. Bob On Fri, Mar 25, 2016 at 8:30 AM, Mike Miller wrote: > Hi, > > in case of a failure in the storage tier, say single OSD disk failu

Re: [ceph-users] Question: replacing all OSDs of one node in 3node cluster

2016-02-11 Thread Daniel.Balsiger
expected. All nodes fully functional again. Thank you very much for the help. Best, Daniel --- From: Mihai Gheorghe [mailto:mcaps...@gmail.com] Sent: Mittwoch, 10. Februar 2016 18:51 To: Ivan Grcic Cc: Balsiger Daniel, INI-INO-ECO-MXT ; ceph-users Subject: Re: [ceph-users] Question

Re: [ceph-users] Question: replacing all OSDs of one node in 3node cluster

2016-02-10 Thread Mihai Gheorghe
As far as I know you can do it in two ways (assuming you have a pool size of 3 on all 3 nodes with min_size 2 to still have access to data): 1. Set noout so the cluster does not start rebalancing. Reinstall the OS on the faulty node and redeploy the node with all keys and conf files (either manual

Re: [ceph-users] Question: replacing all OSDs of one node in 3node cluster

2016-02-10 Thread Ivan Grcic
Hi Daniel, oops, wrong copy-paste, here are the correct commands:

  ceph osd pool get pool-name size
  ceph osd pool set pool-name size 2

On Wed, Feb 10, 2016 at 6:27 PM, Ivan Grcic wrote: > Grüezi Daniel, > > my first question would be: What's your pool size / min_size? > > ceph osd pool get pool-

Re: [ceph-users] Question: replacing all OSDs of one node in 3node cluster

2016-02-10 Thread Ivan Grcic
Grüezi Daniel, my first question would be: What's your pool size / min_size? ceph osd pool get pool-name It is probably 3 (default size). If you want to have a healthy state again with only 2 nodes (all the OSDs on node 3 are down), you have to set your pool size to 2: ceph osd pool set pool-name

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-28 Thread Jason Dillaman
> Thanks for your reply. Why not rebuild the object map when the object-map feature is > enabled? > > Cheers, > xinxin > My initial motivation was to avoid a potentially lengthy rebuild when enabling the feature. Perhaps that option could warn you to rebuild the object map after it's been enabled. -
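
For reference, the rebuild under discussion is a single command once the feature is enabled (image spec is a placeholder):

  rbd object-map rebuild rbd/myimage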

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-27 Thread Shu, Xinxin
Thanks for your reply. Why not rebuild the object map when the object-map feature is enabled? Cheers, xinxin -Original Message- From: Jason Dillaman [mailto:dilla...@redhat.com] Sent: Tuesday, October 27, 2015 9:20 PM To: Shu, Xinxin Cc: ceph-users Subject: Re: Question about rbd flag(RBD_FLAG

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-27 Thread Jason Dillaman
> Hi Jason Dillaman > Recently I worked on the feature http://tracker.ceph.com/issues/13500 ; when > I read the librbd code, I was confused by the RBD_FLAG_OBJECT_MAP_INVALID > flag. > When I create an rbd with "--image-features=13", we enable the object-map > feature without setting RBD_FLAG_OBJEC

Re: [ceph-users] Question about hardware and CPU selection

2015-10-25 Thread Christian Balzer
Hello, There are of course a number of threads in the ML archives about things like this. On Sat, 24 Oct 2015 17:48:35 +0200 Mike Miller wrote: > Hi, > > as I am planning to set up a ceph cluster with 6 OSD nodes with 10 > harddisks in each node, could you please give me some advice about >

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread Robert LeBlanc
My understanding of growing file systems is the same as yours: they can only grow at the end, not the beginning. In addition to that, having partition 2 before partition 1 just cries to me to have it fixed, but that is just aesthetic. Because the weigh

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson
Christian, Thanks for the feedback. I guess I'm wondering about step 4 "clobber partition, leaving data intact and grow partition and the file system as needed". My understanding of xfs_growfs is that the free space must be at the end of the existing file system. In this case the existing part

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread Christian Balzer
Hello, On Wed, 16 Sep 2015 07:21:26 -0500 John-Paul Robinson wrote: > The move journal, partition resize, grow file system approach would > work nicely if the spare capacity were at the end of the disk. > That shouldn't matter, you can "safely" lose your journal in controlled circumstances. T
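
The controlled procedure alluded to here is roughly (a sketch for OSD 0 on a pre-systemd system; stop the daemon first):

  stop ceph-osd id=0               # upstart; or: service ceph stop osd.0
  ceph-osd -i 0 --flush-journal    # drain the journal safely
  # ...move/repartition, update the journal symlink in the OSD dir...
  ceph-osd -i 0 --mkjournal        # create the new journal
  start ceph-osd id=0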

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson (Campus)
So I just realized I had described the partition error incorrectly in my initial post. The journal was placed at the 800GB mark leaving the 2TB data partition at the end of the disk. (See my follow-up to Lionel for details.) I'm working to correct that so I have a single large partition the siz

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson
The move journal, partition resize, grow file system approach would work nicely if the spare capacity were at the end of the disk. Unfortunately, the gdisk (0.8.1) end of disk location bug caused the journal placement to be at the 800GB mark, leaving the largest remaining partition at the end of

Re: [ceph-users] question on reusing OSD

2015-09-15 Thread Lionel Bouton
On 16/09/2015 01:21, John-Paul Robinson wrote: > Hi, > > I'm working to correct a partitioning error from when our cluster was > first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB > partitions for our OSDs, instead of the 2.8TB actually available on > disk, a 29% space hit. (Th

Re: [ceph-users] Question on cephfs recovery tools

2015-09-14 Thread Shinobu Kinjo
From: "Goncalo Borges" To: "Shinobu Kinjo" , "John Spray" Cc: ceph-users@lists.ceph.com Sent: Tuesday, September 15, 2015 12:39:57 PM Subject: Re: [ceph-users] Question on cephfs recovery tools Hi Shinobu >>> c./ After recovering the cluster, I th

Re: [ceph-users] Question on cephfs recovery tools

2015-09-14 Thread Goncalo Borges
Hi Shinobu c./ After recovering the cluster, I thought I was in a cephfs situation where I had c.1 files with holes (because of lost PGs and objects in the data pool) c.2 files without metadata (because of lost PGs and objects in the metadata pool) What does "files without metadata" m

Re: [ceph-users] Question on cephfs recovery tools

2015-09-14 Thread Goncalo Borges
Hello John... Thank you for the replies. I do have some comments inline. Bear with me a bit while I give you a bit of context. Questions will appear at the end. 1) I am currently running ceph 9.0.3, which I have installed to test the cephfs recovery tools. 2) I've created a situation where
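
For context, the recovery tooling being exercised here centers on the offline commands below (a sketch; the pool name is a placeholder, and these tools should only be run with the MDS stopped):

  cephfs-journal-tool journal inspect
  cephfs-data-scan scan_extents <data-pool>
  cephfs-data-scan scan_inodes <data-pool>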

Re: [ceph-users] Question on cephfs recovery tools

2015-09-11 Thread Shinobu Kinjo
> In your procedure, the umount problems have nothing to do with > corruption. It's (sometimes) hanging because the MDS is offline. If How did you notice that the MDS was offline? Is it just because the ceph client could not unmount the filesystem, or something else? I would like to see logs from mds and osd. B

Re: [ceph-users] Question on cephfs recovery tools

2015-09-10 Thread Shinobu Kinjo
e.g.: broken metadata, data, or whatever you're thinking of now. 3. What exactly you did, briefly, not bla bla bla... 4. What you really want to do (briefly)? Otherwise there would be a bunch of back-and-forth messages. Shinobu - Original Message - From: "John Spray"
