Hi,
Yesterday I started upgrading my Ceph environment from 0.94.9 to 0.94.10.
All monitor servers upgraded successfully, but I am having problems
starting the upgraded OSD daemons.
When I try to start a Ceph OSD daemon (/usr/bin/ceph-osd), it receives a
segmentation fault and dies after 2-3 minutes.
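Is there a better way to capture details about the crash than raising the
debug levels and running the daemon in the foreground, roughly like this?
(osd.12 below is just an example id, not one of my actual OSDs.)

# in /etc/ceph/ceph.conf, under [osd]:
#   debug osd = 20
#   debug filestore = 20
#   debug ms = 1
# then run the failing daemon in the foreground so the backtrace goes to stderr:
ceph-osd -d -i 12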
Hi Jason,
You hit it on the head; that was the problem. On other installations I was
using client.admin and the corresponding daemon, but in this case I
created a dedicated user/daemon and didn't disable the admin daemon.
Thanks for the help!
On Fri, Mar 17, 2017 at 2:17 AM, Jason Dillaman
On Friday, March 17, 2017 at 7:44 AM, Deepak Naidu wrote:
> , df always reports entire cluster size
... instead of CephFS data pool's size. This issue has been recorded as a
feature request recently: http://tracker.ceph.com/issues/19109
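In the meantime the per-pool numbers are visible from the Ceph side; a rough
sketch (the /mnt/cephfs mount point is just an example):

ceph df            # GLOBAL usage plus per-pool USED / MAX AVAIL
ceph df detail     # adds per-pool object counts and more
df -h /mnt/cephfs  # the mounted filesystem still reports cluster-wide size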
> Not sure, if this is still true with Jewel CephFS ie
>
Hello Brad,
I've found the reason for the segfault. On the OSD servers the
/etc/ceph/ceph.client.admin.keyring file was missing. This showed up when I
set the debugging parameters you suggested.
Once I copied the file from the monitor, the import-rados operation worked.
Now the cluster se
On Fri, Mar 17, 2017 at 7:43 PM, Laszlo Budai wrote:
> Hello Brad,
>
> I've found the reason for the segfault. On the OSD servers the
> /etc/ceph/ceph.client.admin.keyring file was missing. This showed up when
> I set the debugging parameters you suggested.
That makes sense in the context of
Hi all,
I've found that the problem was due to a missing
/etc/ceph/ceph.client.admin.keyring file on the storage node where I was trying
to do the import-rados operation.
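For the archives, one way to get the keyring onto such a node (a sketch;
"storage-node" is a placeholder host name and the path is the default
keyring location):

# on a node where ceph commands already work (e.g. a monitor):
ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring
# then copy it to the node that is missing it:
scp /etc/ceph/ceph.client.admin.keyring storage-node:/etc/ceph/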
Kind regards,
Laszlo
On 15.03.2017 20:22, Laszlo Budai wrote:
Hello,
I'm trying to do an import-rados operation, but the
Hello,
Ceph status is showing:
1 pgs inconsistent
1 scrub errors
1 active+clean+inconsistent
I located the error messages in the logfile after querying the pg in
question:
root@hqosd3:/var/log/ceph# zgrep -Hn 'ERR' ceph-osd.32.log.1.gz
ceph-osd.32.log.1.gz:846:2017-03-17 02:25:20.281608 7f7
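(For reference, the usual commands for digging further, with <pgid> standing
for the PG reported as inconsistent; a sketch, not a recommendation to
repair blindly:)

ceph health detail                                        # shows the inconsistent PG id
rados list-inconsistent-obj <pgid> --format=json-pretty   # Jewel and later
ceph pg deep-scrub <pgid>                                 # optionally re-run the scrub
ceph pg repair <pgid>                                     # only after checking which replica is actually bad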
We went through a period of time where we were experiencing these daily...
cd to the PG directory on each OSD and do a find for "238e1f29.0076024c"
(mentioned in your error message). This will likely return a file that has
a slash in the name, something like rbd\udata.238e1f29.0076024c_he
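(Spelled out, and assuming osd.32 and pg 3.2b8 from the paths elsewhere in
this thread, the find would look roughly like this on each OSD holding the PG:)

cd /var/lib/ceph/osd/ceph-32/current/3.2b8_head   # the PG directory on this OSD
find . -name '*238e1f29.0076024c*'                # locate the object's file(s)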
On 03/16/2017 03:47 PM, Graham Allan wrote:
This might be a dumb question, but I'm not at all sure what the
"global quotas" in the radosgw region map actually do.
Is it like a default quota which is applied to all users or buckets,
without having to set them individually, or is it a blanket/a
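For comparison, per-user quotas are set and enabled explicitly (uid
"testuser" is only an example), and the region map itself can be dumped, but
it isn't obvious from that output what the global section actually enforces:

radosgw-admin quota set --quota-scope=user --uid=testuser --max-size=10737418240
radosgw-admin quota enable --quota-scope=user --uid=testuser
radosgw-admin region-map get        # dumps the region map, including its quota settings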
Hi All,
I've just run an upgrade in our test cluster, going from 10.2.3 to
10.2.6 and got the wonderful "failed to encode map with expected crc"
message.
Wasn't this supposed to only happen from pre-jewel to jewel?
Should I be looking at something else?
thanks
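Is the right sanity check here just to confirm that every daemon has
actually been restarted on 10.2.6, e.g. along these lines?

ceph tell osd.* version        # every OSD should report 10.2.6
ceph daemon mon.<id> version   # on each monitor host, via the admin socket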
Brian,
Thank you for the detailed information. I was able to compare the 3
hexdump files and it looks like the primary pg is the odd man out.
I stopped the OSD and then I attempted to move the object:
root@hqosd3:/var/lib/ceph/osd/ceph-32/current/3.2b8_head/DIR_8/DIR_B/DIR_2/DIR_A/DIR_0#
mv
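(For anyone reading along later: the overall shape of this manual fix, as I
understand it, is roughly the following, where <objectfile> stands for the
real file name the find returned and the backup directory is arbitrary:)

# with the OSD stopped, move the bad replica out of the PG directory:
mkdir -p /root/bad-object-backup
mv '<objectfile>' /root/bad-object-backup/
# restart the OSD, then ask Ceph to repair the PG:
ceph pg repair 3.2b8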
Brian,
Never mind... looking back through some older emails I do see an
indication of a problem with that drive.
I will fail out the osd and replace the drive.
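(For reference, the usual removal sequence for a failed OSD, sketched here
with osd.32 from the earlier paths:)

ceph osd out 32                  # start draining data off the OSD
# wait for backfill/recovery to finish, then stop the daemon:
systemctl stop ceph-osd@32       # or "stop ceph-osd id=32" on upstart systems
ceph osd crush remove osd.32
ceph auth del osd.32
ceph osd rm 32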
Thanks again for the help,
Shain
On 03/17/2017 03:08 PM, Shain Miley wrote:
I have a 4 node cluster shown by `ceph osd tree` below. Monitors are
running on hosts 1, 2 and 3. It has a single replicated pool of size
3. I have a VM with its hard drive replicated to OSDs 11(host3),
5(host1) and 3(host2).
I can 'fail' any one host by disabling the SAN network interface and
the
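(For anyone wanting to reproduce this: which OSDs back a given RBD image's
objects can be checked directly; the image and pool names below are
placeholders.)

rbd info rbd/vm-disk                                      # note the block_name_prefix
ceph osd map rbd <block_name_prefix>.0000000000000000     # prints the up/acting OSD set for that object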
Hello everyone,
I'm deploying a Ceph cluster with CephFS and I'd like to tune Ceph cache
tiering, but I'm a little confused by the settings hit_set_count,
hit_set_period and min_read_recency_for_promote. The docs are very lean and
I can't find any more detailed explanation anywhere.
Could som
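(For reference, those settings are applied per cache pool; the pool name
"cachepool" and the values below are only placeholders, not recommendations:)

ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool hit_set_count 4                 # how many HitSets to keep
ceph osd pool set cachepool hit_set_period 1200             # seconds covered by each HitSet
ceph osd pool set cachepool min_read_recency_for_promote 2  # how many recent HitSets are consulted before promoting on read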
On Thu, Mar 16, 2017 at 9:25 PM, 许雪寒 wrote:
> Hi, Gregory.
>
> On the other hand, I checked, and the fix 63e44e32974c9bae17bb1bfd4261dcb024ad845c
> should be the one that we need. However, I notice that this fix has only been
> backported down to v11.0.0; can we simply apply it to our Hammer
> versi
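(For anyone considering a local build: mechanically, backporting that commit
onto the hammer branch would look roughly like the sketch below; whether it
applies cleanly, and whether it is safe on Hammer, is the real question for
the devs.)

git clone https://github.com/ceph/ceph.git
cd ceph
git checkout hammer
git cherry-pick 63e44e32974c9bae17bb1bfd4261dcb024ad845c   # may need manual conflict resolution
# rebuild packages and test before rolling anything out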
Hi, it's been a while since I started using Ceph, and I'm still a little
ashamed that when certain situations happen, I don't have the knowledge
to explain or plan things.
Basically, here is what I don't know, posed as an exercise.
EXERCISE:
a virtual machine running on KVM has an extra block device where