Hi Serkan Coban,
We adapted the script and the solution you proposed is working fine. Thank
you for your support.
Thanks,
Muthu
On Wed, Apr 18, 2018 at 8:53 PM, Serkan Çoban wrote:
> >68 OSDs per node sounds an order of magnitude above what you should be
> doing, unless you have vast experien
Remote syslog server, and buffering writes to the log?
Actually this is another argument for fixing logging to syslog a bit,
because the default syslog is also set to throttle and group repeated
messages, like:
Mar 9 17:59:35 db1 influxd: last message repeated 132 times
https://www.mail-archive.c
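If the hosts run rsyslog (an assumption on my part; other syslog daemons
have their own knobs), that grouping comes from the $RepeatedMsgReduction
setting and can be switched off so nothing gets collapsed, e.g. in
/etc/rsyslog.conf:
$RepeatedMsgReduction off
and then:
# systemctl restart rsyslog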
Just a quick note to say thanks for organising the London Ceph/OpenStack day.
I got a lot out of it, and it was nice to see the community out in force.
Sean Purdy
Hi Marc,
I'm using CephFS, and the mgr could not get the metadata of the MDS. I
enabled the dashboard module, and every time I visit the Ceph filesystem
page I get an internal error 500.
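A few read-only checks that might help narrow this down (just a sketch,
assuming a Luminous-or-later cluster):
# ceph mds metadata
# ceph fs status
# ceph mgr module ls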
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 20, 2018 at 9:24 AM, Marc Roos wrote:
>
> Remote sys
Hi,
We have some buckets with ~25M files inside.
We're also using bucket index sharding. The performance is good; we are
focused on reads.
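For reference, a rough sketch of how shard fill levels can be checked and a
reshard triggered on Luminous (the bucket name and shard count below are
placeholders, not ours):
# radosgw-admin bucket limit check
# radosgw-admin reshard add --bucket=mybucket --num-shards=128
# radosgw-admin reshard process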
BR,
Rafal Wadolowski
On 20.04.2018 00:57, Robert Stanford wrote:
>
> The rule of thumb is not to have tens of millions of objects in a
> radosgw bucket,
Hi Charles,
I am more or less responding to your syslog issue. I don't have the
experience on CephFS to give you reliable advice, so let's wait for the
experts to reply. But I guess you have to give a little more background
info, like:
This happened to a running cluster; you didn't apply any
I manually created the radosgw pools from several different sources, and I
have it sort of running. But as you can see, some pools stay empty.
Can I delete all the pools with USED 0?
POOL_NAME USED
.intent-log 0
.rgw.buckets 0
.rgw.buckets.
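If you are sure a pool is really unused (USED 0 alone doesn't prove it; some
rgw pools are only written on demand), a deletion sketch would look like the
following, with the pool name as an example only:
# ceph df detail
# ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
# ceph osd pool delete .intent-log .intent-log --yes-i-really-really-mean-it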
Thanks Alfredo. I will use ceph-volume.
On Thu, Apr 19, 2018 at 4:24 PM, Alfredo Deza wrote:
> On Thu, Apr 19, 2018 at 11:10 AM, Shantur Rathore
> wrote:
> > Hi,
> >
> > I am building my first Ceph cluster from hardware leftover from a
> previous
> > project. I have been reading a lot of Ceph
I want to start using the radosgw a bit. For now I am fine with the 3x
replicated setup; in the near future, when I add a host, I would like to
switch to EC. Is there something I should do now to make this switch go
more smoothly?
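In case it helps later, a minimal sketch of creating an EC profile and a
data pool for rgw (the profile name, pool name, pg counts and k/m values are
placeholders; k=2 m=1 only makes sense once there are at least three hosts):
# ceph osd erasure-code-profile set rgw-ec k=2 m=1 crush-failure-domain=host
# ceph osd pool create default.rgw.buckets.data.ec 64 64 erasure rgw-ec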
Dear Ceph Experts,
I'm trying to switch an old Ceph cluster from manual administration to
ceph-deploy, but I'm running into the following error:
# ceph-deploy gatherkeys HOSTNAME
[HOSTNAME][INFO ] Running command: /usr/bin/ceph --connect-timeout=25
--cluster=ceph --admin-daemon=/var/run/ceph/cep
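Since gatherkeys is talking to a mon admin socket under /var/run/ceph, a
quick sanity check (the socket name is an assumption; it normally follows
ceph-mon.<id>.asok) would be:
# ls /var/run/ceph/
# ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status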
2018-04-20 6:06 GMT+02:00 Marc Roos :
>
> I want to start using the radosgw a bit. For now I am fine with the 3x
> replicated setup; in the near future, when I add a host, I would like to
> switch to EC. Is there something I should do now to make this switch go
> more smoothly?
>
That will not be sup
Marc,
Thanks.
The mgr log spam occurs even without the dashboard module enabled. I never
checked the ceph mgr log before, because the ceph cluster is always healthy.
Based on the ceph mgr logs in syslog, the spam occurred long before and
after I enabled the dashboard module.
# ceph -s
> cluster:
>
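If it turns out to be plain mgr verbosity, a sketch for turning it down on
the active mgr via its admin socket (the mgr id below is just a guess;
substitute your own):
# ceph daemon mgr.$(hostname -s) config show | grep debug_mgr
# ceph daemon mgr.$(hostname -s) config set debug_mgr 1/5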
Quoting Oliver Schulz (oliver.sch...@tu-dortmund.de):
> Dear Ceph Experts,
>
> I'm trying to switch an old Ceph cluster from manual administration to
> ceph-deploy, but I'm running into the following error:
>
> # ceph-deploy gatherkeys HOSTNAME
>
> [HOSTNAME][INFO ] Running command: /usr/bin/ceph
Dear Stefan,
Thanks, I tried your suggestion. Unfortunately, no matter whether I put
mon_initial_members = hostname1,hostname2,hostname3
or
mon_initial_members = a,b,c
into ceph.conf (both on the deployment host and the mon host),
"ceph-deploy gatherkeys" still tries to use
"--admin-daemon=/var/run
If I use another cluster name (other than the default "ceph"), I've
learned that I have to create symlinks in /var/lib/ceph/osd/ with
[cluster-name]-[osd-num] that symlink to ceph-[osd-num]. The ceph-disk
command doesn't seem to take a --cluster argument like other commands.
Is this a known iss
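As a concrete sketch of the workaround just described (the cluster name and
OSD id are examples only):
# ln -s /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/mycluster-0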
Not sure about this specific issue, but I believe we've deprecated the use
of cluster names due to (very) low usage and trouble reliably testing for
all the little things like this. :/
-Greg
On Fri, Apr 20, 2018 at 10:18 AM Robert Stanford
wrote:
>
> If I use another cluster name (other than th
Thanks Gregory. How much trouble I'd have saved if I'd only known this...
On Fri, Apr 20, 2018 at 3:41 PM, Gregory Farnum wrote:
> Not sure about this specific issue, but I believe we've deprecated the use
> of cluster names due to (very) low usage and trouble reliably testing for
> all the li
I have set ACLs on a bucket via Cyberduck.
I can see them being set via s3cmd, yet I don't see the bucket in the Test2
user's account. Should I do more than just add an ACL to the bucket? Does
this have to do with the multi-tenancy users "test$tester1" and "test$tester2"?
[@~]$ s3cmd info s3://test
s3://test/
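A sketch of explicitly granting the second tenant user read access with
s3cmd (whether the tenant-qualified id is accepted in this form is an
assumption on my part):
[@~]$ s3cmd setacl s3://test --acl-grant=read:'test$tester2'
[@~]$ s3cmd info s3://test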