On Wed, Nov 6, 2019 at 1:29 PM Sage Weil wrote:
>
> My current working theory is that the mgr is getting hung up when it tries
> to scrape the device metrics from the mon. The 'tell' mechanism used to
> send mon-targeted commands is pretty kludgey/broken in nautilus and
> earlier. It's been rew
We are running the Mimic version of Ceph (13.2.6) and I would like to know
a proper way of replacing a defective OSD disk that has its DB and WAL on a
separate SSD drive which is shared with 9 other OSDs. More specifically,
the failing disk for osd.327 is on /dev/sdai and its wal/db are on
/dev/sdc
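One common ceph-volume based sketch for this, not necessarily the canonical Mimic procedure, assuming lvm-deployed OSDs and that the db/wal LV for osd.327 on the shared SSD can be identified and reused (the LV name below is a placeholder):
  ceph osd out 327
  ceph osd safe-to-destroy 327                # wait until this reports it is safe
  systemctl stop ceph-osd@327
  ceph osd destroy 327 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdai               # after swapping in the replacement drive
  ceph-volume lvm create --osd-id 327 --data /dev/sdai --block.db ceph-db-vg/db-327
The old db/wal LV may also need zapping (ceph-volume lvm zap ceph-db-vg/db-327) before it can be reused.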
Thanks Casey!
Adding the following to my swiftclient put_object call caused it to start
compressing the data:
headers={'x-object-storage-class': 'STANDARD'}
I appreciate the help!
Bryan
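For anyone driving this through the swift command-line client instead of python-swiftclient, the same header can be passed on upload (container and object names here are placeholders):
  swift upload -H 'X-Object-Storage-Class: STANDARD' mycontainer myfile.dat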
> On Nov 7, 2019, at 9:26 AM, Casey Bodley wrote:
>
> On 11/7/19 10:35 AM, Bryan Stillwell wrote:
>> Thanks Casey!
Dear Benjeman, dear all,
indeed, after waiting a bit longer and an mgr restart, it now works
(for the single case where I temporarily had SELinux off)!
So at least we now know the remaining issues with health metrics :-).
Cheers,
Oliver
On 07.11.19 at 18:51, Oliver Freyermuth wrote:
Dear Benjeman,
thanks! Indeed, it seems I have to do something similar to that to get
ceph daemon osd.14 smart
to work. For some reason, "ceph device get-health-metrics" and friends still
get stuck for me, but maybe it just needs more time.
Now I have to ponder whether to really apply
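For reference, the commands under discussion here, roughly in the order one might try them (the device id placeholder is whatever the first command reports):
  ceph device ls
  ceph device get-health-metrics <devid>
  ceph daemon osd.14 smart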
Dear Sage,
On 07.11.19 at 14:33, Sage Weil wrote:
On Thu, 7 Nov 2019, Thomas Schneider wrote:
Hi,
I have installed the package
ceph-mgr_14.2.4-1-gd592e56-1bionic_amd64.deb
manually:
root@ld5505:/home# dpkg --force-depends -i
ceph-mgr_14.2.4-1-gd592e56-1bionic_amd64.deb
(Reading database ... 107461 files and directories currently installed.)
On 11/7/19 10:35 AM, Bryan Stillwell wrote:
Thanks Casey!
Hopefully this makes it in before 14.2.5. Is there any way to tell the python
boto or swiftclient modules to not send those headers?
It is likely to make 14.2.5.
You'd actually want to force these clients to send the headers as a
Thanks Casey!
Hopefully this makes it in before 14.2.5. Is there any way to tell the python
boto or swiftclient modules to not send those headers?
Bryan
> On Nov 7, 2019, at 8:04 AM, Casey Bodley wrote:
>
> Hi Bryan,
>
> This is a bug related to storage classes. Compression does take effect
Hi,
I activated the balancer in order to balance the data distribution:
root@ld3955:~# ceph balancer status
{
"active": true,
"plans": [],
"mode": "upmap"
}
However, the data stored on the 1.6TB HDDs in the specific pool "hdb_backup" is
not balanced; the range starts with
osd.265 size: 1.6 usage: 52.83
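Not a diagnosis, but a few checks that often help when upmap mode reports an empty plan list ('myplan' is an arbitrary name):
  ceph osd set-require-min-compat-client luminous   # upmap requires luminous+ clients
  ceph balancer eval                                # score the current distribution
  ceph balancer optimize myplan                     # ask for a plan explicitly
  ceph balancer show myplan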
Hi Bryan,
This is a bug related to storage classes. Compression does take effect
for requests that specify the storage class via the s3
x-amz-storage-class or swift x-object-storage-class header. But when
this header is absent, we default to the STANDARD storage class without
consulting its c
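As a sketch of where that per-storage-class compression setting lives (the zone and placement names below are the defaults and may differ; radosgw needs a restart afterwards):
  radosgw-admin zone placement list --rgw-zone=default
  radosgw-admin zone placement modify --rgw-zone=default \
      --placement-id=default-placement --storage-class=STANDARD \
      --compression=zlib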
Hi,
looks like I sent my previous email too soon.
The error
2019-11-07 15:53:06.077 7f7ea8afe700 0 auth: could not find secret_id=3887
2019-11-07 15:53:06.077 7f7ea8afe700 0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=3887
is back in MGR log.
;-(
On 07.11.
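Two low-risk first checks, on the (unconfirmed) assumption that clock skew or stale rotating keys are behind the secret_id mismatch:
  ceph time-sync-status               # reports clock skew between the mons
  systemctl restart ceph-mgr.target   # lets the mgr re-fetch its rotating keys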
Hi,
I have installed all the ceph packages from Sage's repo, namely
ceph ceph-common ceph-mds ceph-mgr-dashboard ceph-mon ceph-osd
libcephfs2 librados2 libradosstriper1 librbd1 librgw2
python-ceph-argparse python-cephfs python-rados python-rbd python-rgw
after adding his repo and executing
apt upgrade
On Thu, 7 Nov 2019, Thomas Schneider wrote:
> Hi,
>
> I have installed the package
> ceph-mgr_14.2.4-1-gd592e56-1bionic_amd64.deb
> manually:
> root@ld5505:/home# dpkg --force-depends -i
> ceph-mgr_14.2.4-1-gd592e56-1bionic_amd64.deb
> (Reading database ... 107461 files and directories currently installed.)
On Thu, Nov 7, 2019 at 6:40 PM Karsten Nielsen wrote:
>
> That is awesome.
>
> Now I just need to figure out where the lost+found files need to go.
> And what happened to the missing objects for the dirs.
>
lost+found files are likely files that were deleted. You can keep the
lost+found dir for
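A hypothetical starting point for triaging them, assuming the filesystem is mounted at /mnt/cephfs and that recovered entries are named after their inode numbers:
  ls /mnt/cephfs/lost+found | head
  file /mnt/cephfs/lost+found/*    # guess content types to help re-identify files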
Hi all,
We are experiencing some sporadic 502s on our traefik/rgw setup. It's
similar to the issue described here:
https://github.com/containous/traefik/issues/3237
The solution seems to be to disable keep-alive in the traefik and rgw
configurations.
We found the option for civetweb (enable_keep_
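For civetweb the knob sits on the frontends line; a ceph.conf sketch (the section name is a placeholder, and enable_keep_alive is civetweb's spelling of the option):
  [client.rgw.gateway1]
  rgw_frontends = civetweb port=7480 enable_keep_alive=no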
Hi,
I have installed the package
ceph-mgr_14.2.4-1-gd592e56-1bionic_amd64.deb
manually:
root@ld5505:/home# dpkg --force-depends -i
ceph-mgr_14.2.4-1-gd592e56-1bionic_amd64.deb
(Reading database ... 107461 files and directories currently installed.)
Preparing to unpack ceph-mgr_14.2.4-1-gd592e56-1bioni
Dear Thomas,
the most correct thing to do is probably to add the full repo
(the original link was still empty for me, but
https://shaman.ceph.com/repos/ceph/wip-no-scrape-mons-nautilus/ seems to work).
The commit itself suggests the ceph-mgr package should be sufficient.
I'm still pondering tho
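If the full-repo route is taken, something along these lines should work on bionic (the exact shaman API path is an assumption):
  curl -sL https://shaman.ceph.com/api/repos/ceph/wip-no-scrape-mons-nautilus/latest/ubuntu/bionic/repo | sudo tee /etc/apt/sources.list.d/ceph-wip.list
  sudo apt update && sudo apt install ceph-mgr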
That is awesome.
Now I just need to figure out where the lost+found files need to go.
And what happened to the missing objects for the dirs.
Is there any tool that is able to do that?
Thanks
- Karsten
-Original message-
From: Yan, Zheng
Sent: Thu 07-11-2019 09:22
Subject:Re: [ceph
I have tracked down the root cause. See https://tracker.ceph.com/issues/42675
Regards
Yan, Zheng
On Thu, Nov 7, 2019 at 4:01 PM Karsten Nielsen wrote:
>
> -Original message-
> From: Yan, Zheng
> Sent: Thu 07-11-2019 07:21
> Subject:Re: [ceph-users] Re: mds crash loop
> To:
-Original message-
From: Yan, Zheng
Sent: Thu 07-11-2019 07:21
Subject:Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Thu, Nov 7, 2019 at 5:50 AM Karsten Nielsen wrote:
> >
> > -Original message-
> > From: Yan, Zheng