On Fri, Oct 20, 2017 at 10:10 AM, Mehmet wrote:
> Hello,
>
> Yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous" (12.2.1).
> This went really smoothly - Thanks! :)
>
> Today I wanted to enable the built-in dashboard via
>
> #> vi ceph.conf
> [...]
> [mgr]
> mgr_modules = dashboard
> [...
Hi guys,
We use Ceph as an S3-compatible object store, and we have our self-developed
web interface for our customers on a different domain.
Right now we use Hammer (FCGI + Apache as the RGW frontend), but we plan to
upgrade Ceph from Hammer to Luminous.
In the Luminous release the FCGI frontend was dropped an
If you add the external domain to the zonegroup's hostnames and endpoints,
then it will be able to respond to that domain. This is assuming that the
error message is that the URL is not a valid bucket. We ran into this issue
when we upgraded from 10.2.5 to 10.2.9. Any domain used to access RGW that
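For reference, a rough sketch of how a zonegroup's hostnames can be extended
(the file name and domains are placeholders, not the poster's setup):

# radosgw-admin zonegroup get > zonegroup.json
  ... edit zonegroup.json and add the external domain to "hostnames" ...
# radosgw-admin zonegroup set --infile zonegroup.json
# radosgw-admin period update --commit

The period commit only applies if a realm/period is configured; either way the
RGW daemons need a restart to pick up the new hostnames.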
Hello,
Since we upgraded the Ceph cluster we have been facing a lot of problems, most
of them due to OSDs crashing. What can cause this?
This morning I woke up to this message:
root@red-compute:~# ceph -w
cluster 9028f4da-0d77-462b-be9b-dbdf7fa57771
health HEALTH_ERR
1 pgs are stuck
Hello,
Today I ran a lot of read I/O with a simple rsync... and again, an OSD
crashed:
But as before, I can't restart the OSD. It keeps crashing, so the OSD
is out and the cluster is recovering.
I only had time to increase the OSD log level.
# ceph tell osd.14 injectargs --debug-osd 5/5
Attached log:
# grep
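Not the poster's exact commands, but a sketch of how such a backtrace is
usually captured, assuming the default log path for osd.14:

# ceph tell osd.14 injectargs '--debug-osd 20 --debug-ms 1'
# grep -B5 -A30 'FAILED assert' /var/log/ceph/ceph-osd.14.log

The debug levels should be dropped back down once the crash has been caught,
since debug 20 logging is very heavy.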
On freshly installed Ubuntu 16.04 servers with the HWE kernel selected
(4.10), I cannot use ceph-deploy or ceph-disk to provision OSDs.
Whenever I try I get the following:
ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys
--bluestore --cluster ceph --fs-type xfs -- /dev/s
I've completely upgraded my cluster and made sure my clients were Luminous
too. Our cluster creates lots of directories really fast, and because of
the layering it takes >1 second to create those directories. I would really
like to be able to diagnose exactly where the slowness is. I'm thinking
mds,
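A hedged sketch of where one might start looking for MDS-side latency (the mds
name is a placeholder, and this assumes access to the admin socket on the MDS
host):

# ceph daemon mds.<name> dump_ops_in_flight
# ceph daemon mds.<name> perf dump

dump_ops_in_flight shows requests currently queued in the MDS and how long
they have been waiting; perf dump includes latency counters (avgcount/sum
pairs) for the various subsystems.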
Hello John,
On 22 October 2017 13:58:34 MESZ, John Spray wrote:
>On Fri, Oct 20, 2017 at 10:10 AM, Mehmet wrote:
>> Hello,
>>
>> Yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous" (12.2.1).
>> This went really smoothly - Thanks! :)
>>
>> Today I wanted to enable the built-in das
On Sun, Oct 22, 2017 at 01:31:03PM +, Rudenko Aleksandr wrote:
> In the past we rewrote the HTTP response headers with Apache rules for our
> web interface and passed the CORS check. But now it's impossible to solve
> at the balancer level.
You CAN modify the CORS responses at the load-balancer level.
Find below t
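The poster's own example is cut off above; purely as an illustration, a
minimal HAProxy-style fragment for injecting CORS headers in front of RGW
(the origin, certificate path and backend address are placeholders):

frontend rgw_https
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/s3.pem
    default_backend radosgw

backend radosgw
    mode http
    # add CORS headers to every RGW response
    http-response set-header Access-Control-Allow-Origin  "https://portal.example.com"
    http-response set-header Access-Control-Allow-Methods "GET, PUT, POST, DELETE, OPTIONS"
    http-response set-header Access-Control-Allow-Headers "Authorization, Content-Type"
    server rgw1 127.0.0.1:7480 check

Preflight OPTIONS requests still need to reach RGW (or be answered at the
balancer) for browsers to accept the headers.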
2017-10-22 17:32:56.031086 7f3acaff5700 1 osd.14 pg_epoch: 72024 pg[37.1c(
v 71593'41657 (60849'38594,71593'41657] local-les=72023 n=13 ec=7037
les/c/f 72023/72023/66447 72022/72022/72022) [14,1,41] r=0 lpr=72022
crt=71593'41657 lcod 0'0 mlcod 0'0 active+clean] hit_set_trim 37:3800:.ceph-inte
With help from the list we recently recovered one of our Jewel based
clusters that started failing when we got to about 4800 cephfs snapshots.
We understand that cephfs snapshots are still marked experimental. We
are running a single active MDS with 2 standby MDS. We only have a single
file syst
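For readers who have not used them: CephFS snapshots are driven entirely
through the virtual .snap directory once snapshots have been allowed on the
filesystem, so a rough per-directory count is just a directory listing (the
mount path below is a placeholder):

# mkdir /mnt/cephfs/projects/.snap/backup-2017-10-23
# ls /mnt/cephfs/projects/.snap | wc -l
# rmdir /mnt/cephfs/projects/.snap/backup-2017-10-23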
On Mon, Oct 23, 2017 at 9:35 AM, Eric Eastman
wrote:
> With help from the list we recently recovered one of our Jewel based
> clusters that started failing when we got to about 4800 cephfs snapshots.
> We understand that cephfs snapshots are still marked experimental. We are
> running a single a
On Mon, Oct 23, 2017 at 12:46 AM, Daniel Pryor wrote:
> I've completely upgraded my cluster and made sure my clients were Luminous
> too. Our cluster creates lots of directories really fast, and because of the
> layering it takes >1 second to create those directories. I would really like
> to be ab
On Sun, Oct 22, 2017 at 8:05 PM, Yan, Zheng wrote:
> On Mon, Oct 23, 2017 at 9:35 AM, Eric Eastman
> wrote:
> > With help from the list we recently recovered one of our Jewel based
> > clusters that started failing when we got to about 4800 cephfs snapshots.
> > We understand that cephfs snapsho
Hey everyone,
Long-time listener, first-time caller.
Thank you to everyone who works on Ceph, docs and code; I'm loving Ceph.
I've been playing with Ceph for a while and have a few questions.
Ceph cache tiers: can you have multiple tiered caches?
Also with cache tiers, can you have one cache pool for mul
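For context, a minimal sketch of how a single cache tier is attached to a
single base pool (pool names are placeholders):

# ceph osd tier add cold-pool hot-pool
# ceph osd tier cache-mode hot-pool writeback
# ceph osd tier set-overlay cold-pool hot-pool
# ceph osd pool set hot-pool hit_set_type bloom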
> Op 22 oktober 2017 om 18:45 schreef Sean Sullivan :
>
>
> On freshly installed Ubuntu 16.04 servers with the HWE kernel selected
> (4.10), I cannot use ceph-deploy or ceph-disk to provision OSDs.
>
>
> Whenever I try I get the following:
>
> ceph-disk -v prepare --dmcrypt --dmcrypt-key-di