deployment is
a bad idea? Do I really need dedicated storage
nodes?
By converged, I mean every node hosting an OSD.
At the same time, workloads on the node may mount
RBD volumes or access CephFS. Do I have to isolate
the OSD daemon in its own VM?
Any advice would be appreciated.
-kc
K.C. Wong
kcw
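One alternative to isolating the OSDs in a full VM is to cap their resources
with cgroup limits. A minimal sketch, assuming systemd-managed ceph-osd@ units
(the OSD id and the limits are made-up placeholders, not recommendations):

# systemctl set-property ceph-osd@0.service MemoryLimit=4G CPUQuota=200%

This writes a persistent drop-in and applies the limits to the running daemon,
so co-located RBD/CephFS workloads keep some headroom.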
ave specialized nodes
and make every one the same as the next one on the rack.
Thanks,
-kc
K.C. Wong
kcw...@verseon.com
4096R/B8995EDE E527 CBE8 023E 79EA 8BBB 5C77 23A6 92E9 B899 5EDE
hkps://hkps.pool.sks-keyservers.net
-f -p ` and that process
promptly exits (with RC 0) and the hang resolves itself
I'll try to capture the strace output the next time I run into
it and share it with the mailing list.
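A minimal sketch of how such a trace might be captured (the PID and output
path are placeholders):

# strace -f -tt -o /tmp/rbd-hang.trace -p <pid-of-hung-process>

-f follows forks, -tt adds timestamps, and -o writes the trace to a file
instead of the terminal.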
Thanks, Ilya.
-kc
> On May 9, 2016, at 2:21 AM, Ilya Dryomov wrote:
>
> On Mon, May 9, 2016 at 12:
'm running infernalis:
# ceph --version
ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
in my set up, on CentOS 7.2 hosts
# uname -r
3.10.0-327.22.2.el7.x86_64
I appreciate any assistance,
-kc
K.C. Wong
kcw...@verseon.com
4096R/B8995EDE E527 CBE8 023E 79EA 8BBB 5C77 23A6
t is no longer running, the watch should expire within 30
> seconds. If you are still experiencing this issue, you can blacklist
> the mystery client via "ceph osd blacklist add".
>
> On Wed, Aug 3, 2016 at 6:06 PM, K.C. Wong wrote:
>> I'm having a hard time removin
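A hedged sketch of acting on that advice; the pool name, image name, and
client address below are placeholders rather than values from this thread:

# rbd info rbd/myimage                               # note the id in block_name_prefix
# rados -p rbd listwatchers rbd_header.<image-id>    # format-2 images; format-1 uses <image>.rbd
# ceph osd blacklist add 10.0.0.5:0/123456789        # address reported by listwatchers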
scenarios? Increasing the default pool size from 2
to 3?
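For illustration, a minimal sketch of that change on an existing pool and as a
default for new pools ('rbd' is a placeholder pool name):

# ceph osd pool set rbd size 3
# ceph osd pool set rbd min_size 2

and in ceph.conf under [global]:

osd pool default size = 3
osd pool default min size = 2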
Many thanks for any input/insight you may have.
-kc
K.C. Wong
kcw...@verseon.com
M: +1 (408) 769-8235
for speed any day.
Thanks for any suggestion or insight.
-kc
BTW, I disable NetworkManager, which, I know, kind of breaks
network-online.target.
K.C. Wong
kcw...@verseon.com
4096R/B8995EDE E527 CBE8 023E 79EA 8BBB 5C77 23A6 92E9 B899 5EDE
hkps://hkps.pool.sks-keyservers.net
probably not perfect, but it seems to work for me. Personally, I like
> using a native service to accomplish this rather than using fstab and
> the generator.
>
> https://gist.github.com/jcollie/60f8b278d1ac5eadb4794db1f4c0e87d
>
> On Mon, Aug 22, 2016 at 1:16 PM, K.C. Wong wrote:
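Not the contents of that gist, but a minimal sketch of a native CephFS mount
unit (monitor address, mount point, and secret file are placeholders); the
unit file name has to match the mount point:

# cat > /etc/systemd/system/mnt-cephfs.mount <<'EOF'
[Unit]
Description=CephFS mount (sketch)
After=network-online.target
Wants=network-online.target

[Mount]
What=192.168.0.1:6789:/
Where=/mnt/cephfs
Type=ceph
Options=name=admin,secretfile=/etc/ceph/admin.secret,_netdev

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable mnt-cephfs.mount
# systemctl start mnt-cephfs.mount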
. Nothing worked.
Many thanks,
-kc
K.C. Wong
kcw...@verseon.com
M: +1 (408) 769-8235
); do rados list-inconsistent-pg $i;
done
[]
["1.65"]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
So, that’d put the inconsistency in the cephfs_data pool.
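A minimal sketch of the likely next step, assuming PG 1.65 is the one flagged
above:

# rados list-inconsistent-obj 1.65 --format=json-pretty
# ceph pg repair 1.65

list-inconsistent-obj reports the per-shard digests, which helps confirm which
copy is bad before asking for a repair.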
Thank you for your help,
-kc
K.C. Wong
kcw...@verseon.com
M: +
0 00:16:43.965966 39934'184512 39934:388820 [62,67,47] 62 [62,67,47] 62 28743'183853 2018-11-04 01:31:27.042458 28743'183853 2018-11-04 01:31:27.042458
It’s similar to when I is
"size": 4194304,
"omap_digest": "0x",
"data_digest": "0xf437a612",
"attrs": [
    { "name": "_",
      "value": "EAgNAQAABAM1AA...",
      "Base64": true },
    { "name": "snapset",
      "v