Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread BASSAGET Cédric
Hello Robert,
My disks did not reach 100% on the last warning; they climbed to 70-80%
utilization. But I see the rrqm/wrqm counters increasing...

Device:   rrqm/s  wrqm/s       r/s      w/s      rkB/s      wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util

sda         0.00    4.00      0.00    16.00       0.00     104.00     13.00      0.00   0.00     0.00     0.00   0.00   0.00
sdb         0.00    2.00      1.00  3456.00       8.00   25996.00     15.04      5.76   1.67     0.00     1.67   0.03   9.20
sdd         4.00    0.00  41462.00  1119.00  331272.00    7996.00     15.94     19.89   0.47     0.48     0.21   0.02  66.00

dm-0        0.00    0.00   6825.00   503.00  330856.00    7996.00     92.48      4.00   0.55     0.56     0.30   0.09  66.80
dm-1        0.00    0.00      1.00  1129.00       8.00   25996.00     46.02      1.03   0.91     0.00     0.91   0.09  10.00


sda is my system disk (SAMSUNG   MZILS480HEGR/007  GXL0), sdb and sdd are
my OSDs

would "osd op queue = wpq" help in this case ?
Regards

On Sat, Jun 8, 2019 at 07:44, Robert LeBlanc  wrote:

> With the low number of OSDs, you are probably saturating the disks. Check
> with `iostat -xd 2` and see what the utilization of your disks is. A lot
> of SSDs don't perform well with Ceph's heavy sync writes, and performance is
> terrible.
>
> If some of your drives are at 100% while others are at lower utilization, you
> can possibly get more performance and greatly reduce the blocked I/O with
> the WPQ scheduler. In ceph.conf, add this to the [osd] section and
> restart the OSD processes:
>
> osd op queue = wpq
> osd op queue cut off = high
>
> This has helped our clusters with fairness between OSDs and made
> backfills less disruptive.
> 
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
> On Thu, Jun 6, 2019 at 1:43 AM BASSAGET Cédric <
> cedric.bassaget...@gmail.com> wrote:
>
>> Hello,
>>
>> I see messages related to REQUEST_SLOW a few times per day.
>>
>> here's my ceph -s  :
>>
>> root@ceph-pa2-1:/etc/ceph# ceph -s
>>   cluster:
>> id: 72d94815-f057-4127-8914-448dfd25f5bc
>> health: HEALTH_OK
>>
>>   services:
>> mon: 3 daemons, quorum ceph-pa2-1,ceph-pa2-2,ceph-pa2-3
>> mgr: ceph-pa2-3(active), standbys: ceph-pa2-1, ceph-pa2-2
>> osd: 6 osds: 6 up, 6 in
>>
>>   data:
>> pools:   1 pools, 256 pgs
>> objects: 408.79k objects, 1.49TiB
>> usage:   4.44TiB used, 37.5TiB / 41.9TiB avail
>> pgs: 256 active+clean
>>
>>   io:
>> client:   8.00KiB/s rd, 17.2MiB/s wr, 1op/s rd, 546op/s wr
>>
>>
>> Running ceph version 12.2.9 (9e300932ef8a8916fb3fda78c58691a6ab0f4217)
>> luminous (stable)
>>
>> I've checked:
>> - all my network stack: OK (2*10G LAG)
>> - memory usage: OK (256G on each host, about 2% used per OSD)
>> - cpu usage: OK (Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz)
>> - disk status: OK (SAMSUNG   AREA7680S5xnNTRI  3P04 => Samsung DC series)
>>
>> I heard on IRC that it can be related to the Samsung PM / SM series.
>>
>> Is anybody here facing the same problem? What can I do to solve it?
>> Regards,
>> Cédric
>>
>


Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread Robert LeBlanc
On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <
cedric.bassaget...@gmail.com> wrote:

> Hello Robert,
> My disks did not reach 100% on the last warning, they climb to 70-80%
> usage. But I see rrqm / wrqm counters increasing...
>
> Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz
> avgqu-sz   await r_await w_await  svctm  %util
>
> sda   0.00 4.000.00   16.00 0.00   104.0013.00
> 0.000.000.000.00   0.00   0.00
> sdb   0.00 2.001.00 3456.00 8.00 25996.0015.04
> 5.761.670.001.67   0.03   9.20
> sdd   4.00 0.00 41462.00 1119.00 331272.00  7996.00
>  15.9419.890.470.480.21   0.02  66.00
>
> dm-0  0.00 0.00 6825.00  503.00 330856.00  7996.00
>  92.48 4.000.550.560.30   0.09  66.80
> dm-1  0.00 0.001.00 1129.00 8.00 25996.0046.02
> 1.030.910.000.91   0.09  10.00
>
>
> sda is my system disk (SAMSUNG   MZILS480HEGR/007  GXL0), sdb and sdd are
> my OSDs
>
> would "osd op queue = wpq" help in this case ?
> Regards
>

Your disk times look okay, just a lot more unbalanced than I would expect.
I'd give wpq a try; I use it all the time. Just be sure to also include the
'osd op queue cut off' setting, or it doesn't have much effect. Let me know
how it goes.
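
For reference, a minimal sketch of that change and how to verify it after the
restart (osd.0 is a placeholder; any OSD's admin socket works the same way):

  # /etc/ceph/ceph.conf
  [osd]
  osd op queue = wpq
  osd op queue cut off = high

  # after restarting the OSDs, confirm the running values
  ceph daemon osd.0 config get osd_op_queue
  ceph daemon osd.0 config get osd_op_queue_cut_off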

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread BASSAGET Cédric
Hi Robert,
Before doing anything on my prod env, I generated read/write load on the Ceph
cluster using fio.
On my newest cluster, release 12.2.12, I did not manage to trigger
the REQUEST_SLOW warning, even though my OSD disk usage went above 95% (fio
ran from 4 different hosts).
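
A sketch of the kind of fio job used for this test (the pool, image name, and
job parameters below are illustrative placeholders, not the exact command):

  fio --name=ceph-load --ioengine=rbd --clientname=admin \
      --pool=rbd --rbdname=fio-test \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
      --runtime=120 --time_based --group_reporting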

On my prod cluster, release 12.2.9, as soon as I run fio on a single host,
I see a lot of REQUEST_SLOW warning messages, but "iostat -xd 1" does not
show me more than 5-10% utilization on the disks...

On Mon, Jun 10, 2019 at 10:12, Robert LeBlanc  wrote:

> On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <
> cedric.bassaget...@gmail.com> wrote:
>
>> Hello Robert,
>> My disks did not reach 100% on the last warning, they climb to 70-80%
>> usage. But I see rrqm / wrqm counters increasing...
>>
>> Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s
>> avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>>
>> sda   0.00 4.000.00   16.00 0.00   104.00
>>  13.00 0.000.000.000.00   0.00   0.00
>> sdb   0.00 2.001.00 3456.00 8.00 25996.00
>>  15.04 5.761.670.001.67   0.03   9.20
>> sdd   4.00 0.00 41462.00 1119.00 331272.00  7996.00
>>  15.9419.890.470.480.21   0.02  66.00
>>
>> dm-0  0.00 0.00 6825.00  503.00 330856.00  7996.00
>>  92.48 4.000.550.560.30   0.09  66.80
>> dm-1  0.00 0.001.00 1129.00 8.00 25996.00
>>  46.02 1.030.910.000.91   0.09  10.00
>>
>>
>> sda is my system disk (SAMSUNG   MZILS480HEGR/007  GXL0), sdb and sdd are
>> my OSDs
>>
>> would "osd op queue = wpq" help in this case ?
>> Regards
>>
>
> Your disk times look okay, just a lot more unbalanced than I would expect.
> I'd give wpq a try, I use it all the time, just be sure to also include the
> op_cutoff setting too or it doesn't have much effect. Let me know how it
> goes.
> 
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>


Re: [ceph-users] OSD caching on EC-pools (heavy cross OSD communication on cached reads)

2019-06-10 Thread Janne Johansson
On Sun, 9 June 2019 at 18:29,  wrote:

> Makes sense - that makes the case for EC pools smaller, though.
>
> Sunday, 9 June 2019, 17.48 +0200 from paul.emmer...@croit.io <
> paul.emmer...@croit.io>:
>
> Caching is handled in BlueStore itself, erasure coding happens on a higher
> layer.
>
>
>
In your case, caching at the CephFS MDS would be even more efficient, then?

-- 
May the most significant bit of your life be positive.


Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread BASSAGET Cédric
An update from 12.2.9 to 12.2.12 seems to have fixed the problem!

On Mon, Jun 10, 2019 at 12:25, BASSAGET Cédric 
wrote:

> Hi Robert,
> Before doing anything on my prod env, I generate r/w on ceph cluster using
> fio .
> On my newest cluster, release 12.2.12, I did not manage to trigger
> the REQUEST_SLOW warning, even though my OSD disk usage went above 95% (fio
> ran from 4 different hosts).
>
> On my prod cluster, release 12.2.9, as soon as I run fio on a single host,
> I see a lot of REQUEST_SLOW warning messages, but "iostat -xd 1" does not
> show me more than 5-10% utilization on the disks...
>
> On Mon, Jun 10, 2019 at 10:12, Robert LeBlanc  wrote:
>
>> On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <
>> cedric.bassaget...@gmail.com> wrote:
>>
>>> Hello Robert,
>>> My disks did not reach 100% on the last warning, they climb to 70-80%
>>> usage. But I see rrqm / wrqm counters increasing...
>>>
>>> Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s
>>> avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>>>
>>> sda   0.00 4.000.00   16.00 0.00   104.00
>>>  13.00 0.000.000.000.00   0.00   0.00
>>> sdb   0.00 2.001.00 3456.00 8.00 25996.00
>>>  15.04 5.761.670.001.67   0.03   9.20
>>> sdd   4.00 0.00 41462.00 1119.00 331272.00  7996.00
>>>  15.9419.890.470.480.21   0.02  66.00
>>>
>>> dm-0  0.00 0.00 6825.00  503.00 330856.00  7996.00
>>>  92.48 4.000.550.560.30   0.09  66.80
>>> dm-1  0.00 0.001.00 1129.00 8.00 25996.00
>>>  46.02 1.030.910.000.91   0.09  10.00
>>>
>>>
>>> sda is my system disk (SAMSUNG   MZILS480HEGR/007  GXL0), sdb and sdd
>>> are my OSDs
>>>
>>> would "osd op queue = wpq" help in this case ?
>>> Regards
>>>
>>
>> Your disk times look okay, just a lot more unbalanced than I would
>> expect. I'd give wpq a try, I use it all the time, just be sure to also
>> include the op_cutoff setting too or it doesn't have much effect. Let me
>> know how it goes.
>> 
>> Robert LeBlanc
>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>>
>


Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-10 Thread Stefan Kooman
Quoting solarflow99 (solarflo...@gmail.com):
> Can the bitmap allocator be set in ceph-ansible? I wonder why it is not the
> default in 12.2.12.

We don't use ceph-ansible. But if ceph-ansible allows you to set specific
[osd] settings in ceph.conf, I guess you can do it.
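
If it helps, a sketch of what that could look like, assuming the option name
bluestore_allocator and ceph-ansible's ceph_conf_overrides mechanism
(illustrative only, not tested advice):

  # group_vars/all.yml in ceph-ansible
  ceph_conf_overrides:
    osd:
      bluestore_allocator: bitmap

  # equivalent ceph.conf fragment
  [osd]
  bluestore_allocator = bitmap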

I don't know what the policy is for changing default settings in Ceph; not sure
if they ever do that. The feature has only been available since 12.2.12 and
is not battle-tested in Luminous. It's not the default in Mimic either,
IIRC. It might be the default in Nautilus?

Behaviour changes can be tricky without people knowing about it.

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


[ceph-users] Luminous PG stuck peering after added nodes with noin

2019-06-10 Thread Aleksei Gutikov



Hi all!

Last week we ran into a terrible situation after adding 4 new nodes
to one of our clusters.

Trying to reduce PG movement, we set the noin flag.
Then we deployed the 4 new nodes, adding 30% more OSDs with reweight=0.
After that, a huge number of PGs stalled in the peering or activating state -
about 20%.
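
For reference, the noin flag handling referred to above, as a rough sketch
(these are the standard commands, not necessarily the exact ones we ran; OSD
ids are placeholders):

  ceph osd set noin        # newly started OSDs stay "out" (REWEIGHT shows 0)
  # ... deploy the new nodes ...
  ceph osd unset noin      # or bring OSDs in one by one: ceph osd in <id>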

Please see the ceph -s output below.
The number of peering and activating PGs was decreasing very slowly - around
5-10 per minute.


Unfortunately we did not collect any useful logs,
and there were no errors in the OSD logs at the default log level.

We also noticed strange CPU utilization on the affected OSDs.
Not all OSDs were affected - about 1/3 of the OSDs on every host.
Affected OSDs were using 3.5-4 CPU cores,
and each of the 3 messenger threads was using a full 100% of a CPU core.

We were able to fix that state only by restarting all OSDs in the hdd pool.
After the restart, OSDs finished peering as expected - within several seconds.

Everything looks like PG overdose protection, but:
- We have about 150 PGs/OSD, while mon_max_pg_per_osd=400 and
osd_max_pg_per_osd_hard_ratio=4.0 (see the commands sketched after this list)
- The "withhold creation of pg" message is logged at log level 0, and there
were no such messages in the OSD logs

- The msgr CPU usage is unexplained
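
A sketch of how those numbers can be checked (osd.0 is a placeholder; any
OSD's admin socket reports the same values):

  ceph osd df tree    # the PGS column shows the per-OSD PG count
  ceph daemon osd.0 config get mon_max_pg_per_osd
  ceph daemon osd.0 config get osd_max_pg_per_osd_hard_ratio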

A couple of days later we started to add the next bunch of nodes,
this time one by one, and again with the noin flag.
And after the first node was added to the cluster, we got the same ~20% stuck
inactive PGs.

Again there were no unusual messages in the logs.
We already knew how to fix it, so we restarted the OSDs.
After peering completed and backfilling stabilized, we continued
adding new OSD nodes to the cluster.
And while backfilling was in progress, the inactive-PG issue was not
reproduced when adding the next 3 nodes.


We carry a number of small patches and do not want to file a bug before
becoming sure that the root of this issue is not one of our patches.


So if anybody already knows about this kind of issue, please let me know.

Which log would you suggest enabling to see details about the PG lifecycle
in an OSD?


Ceph version 12.2.8.


The state we had the second time, after adding a single OSD node:

   cluster:
 id: ad99506a-05a5-11e8-975e-74d4351a7990
 health: HEALTH_ERR
 noin flag(s) set
 38625525/570124661 objects misplaced (6.775%)
 Reduced data availability: 4637 pgs inactive, 2483 pgs peering
 Degraded data redundancy: 2875/570124661 objects degraded 
(0.001%), 2 pgs degraded, 1 pg undersized

 26 slow requests are blocked > 5 sec. Implicated osds 312,792
 4199 stuck requests are blocked > 30 sec. Implicated osds 
3,4,7,10,12,13,14,21,27,28,29,31,33,35,39,47,48,51,54,55,57,58,59,63,64,67,69,70,71,72,73,74,75,83,85,86,87,92,94,96,100,102,104,107,113,117,118,119,121,125,126,129,130,131,133,136,138,140,141,145,146,148,153,154,155,156,158,160,162,163,164,165,166,168,176,179,182,183,185,187,188,189,192,194,198,199,200,201,203,205,207,208,209,210,213,215,216,220,221,223,224,226,228,230,232,234,235,238,239,240,242,244,245,246,250,252,253,255,256,257,259,261,263,264,267,271,272,273,275,279,282,284,286,288,289,291,292,293,299,300,307,311,318,319,321,323,324,327,329,330,332,333,334,339,341,342,343,345,346,348,352,354,355,356,360,361,363,365,366,367,369,370,372,378,382,384,393,396,398,401,402,404,405,409,411,412,415,416,418,421,428,429,432,434,435,436,438,441,444,446,447,448,449,451,452,453,456,457,458,460,461,462,464,465,466,467,468,469,471,472,474,478,479,480,481,482,483,485,486,487,489,492,494,498,499,503,504,505,506,507,508,509,510,512,513,515,516,517,520,521,522,523,524,527,528,530,531,533,535,536,
538,539,541,542,546,549,550,554,555,559,561,562,563,564,565,566,568,571,573,574,578,581,582,583,588,589,590,592,593,594,595,596,597,598,599,602,604,605,606,607,608,609,610,611,612,613,614,617,618,619,620,621,622,624,627,628,630,632,633,634,636,637,638,639,640,642,643,644,645,646,647,648,650,651,652,656,659,660,661,662,663,666,668,669,671,672,673,674,675,676,678,681,682,683,686,687,691,692,694,695,696,697,699,701,704,705,706,707,708,709,712,714,716,717,718,719,720,722,724,727,729,732,733,736,737,738,739,740,741,742,743,745,746,750,751,752,754,755,756,758,759,760,761,762,763,765,766,767,768,769,770,771,772,773,774,775,776,777,778,779,780,781,782,783,784,785,786,787,788,789,790,791,793,794,795,796

   services:
 mon: 3 daemons, quorum 
BC-SR1-4R9-CEPH-MON1,BC-SR1-4R3-CEPH-MON1,BC-SR1-4R6-CEPH-MON1
 mgr: BC-SR1-4R9-CEPH-MON1(active), standbys: BC-SR1-4R3-CEPH-MON1, 
BC-SR1-4R6-CEPH-MON1

 osd: 828 osds: 828 up, 798 in; 5355 remapped pgs
  flags noin
 rgw: 187 daemons active

   data:
 pools:   14 pools, 21888 pgs
 objects: 53.44M objects, 741TiB
 usage:   1.04PiB used, 5.55PiB / 6.59PiB avail
 pgs: 21.203% pgs not active
  2875/570124661 objects degraded (0.001%)
  38625525/570124661 objects misplaced (6.775%)
  15382 active+clean
  1847  remapped+peering
  1642  activating+remapped
  1244  active+re

Re: [ceph-users] balancer module makes OSD distribution worse

2019-06-10 Thread Josh Haft
PGs are not perfectly balanced per OSD, but I think that's expected/OK
due to setting crush_compat_metrics to bytes? Though I'm realizing as I
type this that what I really want is equal percent-used, which may not
be possible given the slight variation in disk sizes (see below) in my
cluster?

# ceph osd df | awk '{print $3}'|sort -g|uniq -c
 84 8.90959
 11 9.03159
 96 9.03200
144 9.14240
131 10.69179
 60 10.92470
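
If equal percent-used is the real goal, the upmap balancer mode (the
alternative Igor hints at below) may get closer than crush-compat. A sketch of
switching, assuming all clients are Luminous or newer:

  ceph balancer status
  ceph osd set-require-min-compat-client luminous   # upmap needs Luminous+ clients
  ceph balancer mode upmap
  ceph balancer on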

Josh

On Sat, Jun 8, 2019 at 8:08 AM Igor Podlesny  wrote:
>
> On Thu, 6 Jun 2019 at 03:01, Josh Haft  wrote:
> >
> > Hi everyone,
> >
> > On my 13.2.5 cluster, I recently enabled the ceph balancer module in
> > crush-compat mode.
>
> Why did you choose compat mode? Don't you want to try another one instead?
>
> --
> End of message. Next message?


Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread Robert LeBlanc
I'm glad it's working. To be clear, did you use wpq, or is it still the prio
queue?

Sent from a mobile device, please excuse any typos.

On Mon, Jun 10, 2019, 4:45 AM BASSAGET Cédric 
wrote:

> an update from 12.2.9 to 12.2.12 seems to have fixed the problem !
>
> On Mon, Jun 10, 2019 at 12:25, BASSAGET Cédric <
> cedric.bassaget...@gmail.com> wrote:
>
>> Hi Robert,
>> Before doing anything on my prod env, I generate r/w on ceph cluster
>> using fio .
>> On my newest cluster, release 12.2.12, I did not manage to trigger
>> the REQUEST_SLOW warning, even though my OSD disk usage went above 95% (fio
>> ran from 4 different hosts).
>>
>> On my prod cluster, release 12.2.9, as soon as I run fio on a single
>> host, I see a lot of REQUEST_SLOW warning messages, but "iostat -xd 1"
>> does not show me more than 5-10% utilization on the disks...
>>
>> On Mon, Jun 10, 2019 at 10:12, Robert LeBlanc  wrote:
>>
>>> On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <
>>> cedric.bassaget...@gmail.com> wrote:
>>>
 Hello Robert,
 My disks did not reach 100% on the last warning, they climb to 70-80%
 usage. But I see rrqm / wrqm counters increasing...

 Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s
 avgrq-sz avgqu-sz   await r_await w_await  svctm  %util

 sda   0.00 4.000.00   16.00 0.00   104.00
  13.00 0.000.000.000.00   0.00   0.00
 sdb   0.00 2.001.00 3456.00 8.00 25996.00
  15.04 5.761.670.001.67   0.03   9.20
 sdd   4.00 0.00 41462.00 1119.00 331272.00  7996.00
  15.9419.890.470.480.21   0.02  66.00

 dm-0  0.00 0.00 6825.00  503.00 330856.00  7996.00
  92.48 4.000.550.560.30   0.09  66.80
 dm-1  0.00 0.001.00 1129.00 8.00 25996.00
  46.02 1.030.910.000.91   0.09  10.00


 sda is my system disk (SAMSUNG   MZILS480HEGR/007  GXL0), sdb and sdd
 are my OSDs

 would "osd op queue = wpq" help in this case ?
 Regards

>>>
>>> Your disk times look okay, just a lot more unbalanced than I would
>>> expect. I'd give wpq a try, I use it all the time, just be sure to also
>>> include the op_cutoff setting too or it doesn't have much effect. Let me
>>> know how it goes.
>>> 
>>> Robert LeBlanc
>>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>>>
>>


[ceph-users] krbd namespace missing in /dev

2019-06-10 Thread Jonas Jelten
When I run:

  rbd map --name client.lol poolname/somenamespace/imagename

The image is mapped to /dev/rbd0 and

  /dev/rbd/poolname/imagename

I would expect the rbd to be mapped to (the rbdmap tool tries this name):

  /dev/rbd/poolname/somenamespace/imagename

The current symlink path does not allow same-named images in different
namespaces to coexist, and the automatic mount via rbdmap fails
because of this.
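
A minimal sketch to reproduce this (pool, namespace, and image names are
placeholders; namespace support assumes a Nautilus-level cluster and client):

  rbd namespace create --pool poolname --namespace somenamespace
  rbd create --size 1G poolname/somenamespace/imagename
  rbd map --name client.lol poolname/somenamespace/imagename
  ls -l /dev/rbd/poolname/    # shows only "imagename", no namespace component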


Are there plans to fix this?


Cheers
-- Jonas


Re: [ceph-users] krbd namespace missing in /dev

2019-06-10 Thread Jason Dillaman
On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten  wrote:
>
> When I run:
>
>   rbd map --name client.lol poolname/somenamespace/imagename
>
> The image is mapped to /dev/rbd0 and
>
>   /dev/rbd/poolname/imagename
>
> I would expect the rbd to be mapped to (the rbdmap tool tries this name):
>
>   /dev/rbd/poolname/somenamespace/imagename
>
> The current map point would not allow same-named images in different 
> namespaces, and the automatic mount of rbdmap fails
> because of this.
>
>
> Are there plans to fix this?

I opened a tracker ticket for this issue [1].

>
> Cheers
> -- Jonas

[1] http://tracker.ceph.com/issues/40247

-- 
Jason


Re: [ceph-users] krbd namespace missing in /dev

2019-06-10 Thread Ilya Dryomov
On Mon, Jun 10, 2019 at 8:03 PM Jason Dillaman  wrote:
>
> On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten  wrote:
> >
> > When I run:
> >
> >   rbd map --name client.lol poolname/somenamespace/imagename
> >
> > The image is mapped to /dev/rbd0 and
> >
> >   /dev/rbd/poolname/imagename
> >
> > I would expect the rbd to be mapped to (the rbdmap tool tries this name):
> >
> >   /dev/rbd/poolname/somenamespace/imagename
> >
> > The current map point would not allow same-named images in different 
> > namespaces, and the automatic mount of rbdmap fails
> > because of this.
> >
> >
> > Are there plans to fix this?
>
> I opened a tracker ticket for this issue [1].
>
> [1] http://tracker.ceph.com/issues/40247

If we are going to touch it, we might want to include cluster fsid as
well.  There is an old ticket on this:

http://tracker.ceph.com/issues/16811

Thanks,

Ilya


[ceph-users] Ceph Day Netherlands CFP Extended to June 14th

2019-06-10 Thread Mike Perez
 Hey everyone,

We have extended the CFP for Ceph Day Netherlands to June 14! The event
itself will be taking place on July 2nd. You can find more information on
how to register for the event and apply for the CFP here:

https://ceph.com/cephdays/netherlands-2019/

We look forward to seeing you for some great discussion and content in
Utrecht!

— Mike Perez (thingee)


[ceph-users] Ceph IRC channel linked to Slack

2019-06-10 Thread Alvaro Soto
Hello Cephers,
for those who find it easier to stay connected to the community using Slack,
the OpenStack community in Latam has configured this at [1], in the channel
#ceph, and you can auto-invite yourself at [2].
Feel free to use and share.

[1] https://openstack-latam.slack.com
[2] https://latam.openstackday.mx/

NOTE: This is nowhere near official; it is just an effort to reach more
people in the community.

Cheers.
-- 

ATTE. Alvaro Soto Escobar

--
Great people talk about ideas,
average people talk about things,
small people talk ... about other people.