>
> [root@osd5 ceph-admin]# grep -Hn 'ERR' /var/log/ceph/ceph-osd.69.log
>
> [root@osd5 ceph]# zgrep -Hn 'ERR' ./ceph-osd.69.log-*
> ./ceph-osd.69.log-20170512.gz:717:2017-05-11 09:23:11.734142 7ff46cbe4700
> -1 log_channel(cluster) log [ERR] : scrub 1.959
>
On 12 May 2017 at 21:45, Patrick McGarry wrote:
>
>
> Hey cephers,
>
> Sorry to be the bearer of bad news on a Friday, but the decision was
> made this week to cancel the Ceph conference that was planned for
> later this year in Boston on 23-25 August.
>
> For more details please visit the
Haha, that was it.
I thought the first mds was active but it was the second one.
I issued the command on the right mds and it does show it all.
Thank you very much.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
On Fri, May 12, 2017 at 9:03 AM, John Spray w
Hi, no I have not seen any log entries related to scrubs.
I see slow requests for various operations including readdir, unlink.
Sometimes rdlock, sometimes wrlock.
On 12 May 2017 at 16:10, Peter Maloney
wrote:
> On 05/12/17 16:54, James Eckersall wrote:
> > Hi,
> >
> > We have an 11 node ceph c
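For reference (a sketch, not from the original thread; it assumes admin socket
access on the node running the active MDS, and "mds.a" is a placeholder daemon
name), the op tracker can show what those slow requests are waiting on:
# requests currently in flight on the active MDS, including their lock/event state
ceph daemon mds.a dump_ops_in_flight
# recently completed slow operations, with per-event timings
ceph daemon mds.a dump_historic_ops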
Hey cephers,
Sorry to be the bearer of bad news on a Friday, but the decision was
made this week to cancel the Ceph conference that was planned for
later this year in Boston on 23-25 August.
For more details please visit the Cephalocon page:
http://ceph.com/cephalocon2017/
If you have any quest
On Fri, May 12, 2017 at 7:17 AM, Алексей Усов wrote:
> Thanks for reply.
>
> But the tell command itself doesn't make changes persistent, so I must add them
> to ceph.conf across the entire cluster (that's where configuration
> management comes in), am I correct?
Mind filing an RFE for this at http:/
[root@osd5 ceph-admin]# grep -Hn 'ERR' /var/log/ceph/ceph-osd.69.log
[root@osd5 ceph]# zgrep -Hn 'ERR' ./ceph-osd.69.log-*
./ceph-osd.69.log-20170512.gz:717:2017-05-11 09:23:11.734142 7ff46cbe4700
-1 log_channel(cluster) log [ERR] : scrub 1.959
1:9a97a372:::10004313b01.0004:head on disk size (0) does not match
object info size (
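A possible follow-up for a scrub error like this (a sketch, not part of the
original mail; it assumes the PG is reported inconsistent and that repairing
from the authoritative copy is acceptable):
# see which PGs are flagged inconsistent
ceph health detail
# list the objects the scrub flagged in that PG (Jewel and later)
rados list-inconsistent-obj 1.959 --format=json-pretty
# then, if appropriate, have the OSDs repair the PG
ceph pg repair 1.959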
On 05/12/2017 04:17 PM, Алексей Усов wrote:
Usov,
Thanks for reply.
But the tell command itself doesn't make changes persistent, so I must add
them to ceph.conf across the entire cluster (that's where
configuration management comes in), am I correct?
Yes, that's correct.
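For example (a sketch; osd_max_backfills is only an illustrative option):
injectargs changes the running daemons, while the ceph.conf entry is what
makes the value survive a restart:
# runtime only, applied to all OSDs
ceph tell osd.* injectargs '--osd_max_backfills 2'
# persistent: add to ceph.conf on every node, e.g. via your config management
[osd]
osd max backfills = 2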
--
With regards,
Ric
On 05/12/17 16:54, James Eckersall wrote:
> Hi,
>
> We have an 11 node ceph cluster: 8 OSD nodes with 5 disks each and 3
> MDS servers.
> Since upgrading from Jewel to Kraken last week, we are seeing the
> active MDS constantly reporting a number of slow requests > 30 seconds.
> The load on the Ceph
That's regarding situations where a restart is necessary, a software update for
example. If you have 1000+ OSDs and need to perform a minor version
update (e.g. 10.2.7 > 10.2.8) - how do you do it? Do you restart OSDs
manually, use some kind of script, etc? This is rather an automation
question more th
Hi,
We have an 11 node ceph cluster: 8 OSD nodes with 5 disks each and 3 MDS
servers.
Since upgrading from Jewel to Kraken last week, we are seeing the active
MDS constantly reporting a number of slow requests > 30 seconds.
The load on the Ceph servers is not excessive. None of the OSD disks
appea
Hi,
On 05/12/2017 03:35 PM, Vladimir Prokofev wrote:
My best guess is that using systemd you can write some basic script to
restart whatever OSDs you want. Another option is to use the same
mechanics that ceph-deploy uses, but the principle is all the same -
write some automation script.
I would
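For instance, on a systemd-based install (a sketch; it assumes the standard
ceph-osd@<id> units and target that recent packages ship):
# restart every OSD daemon on the local node
systemctl restart ceph-osd.target
# or a single OSD by id
systemctl restart ceph-osd@12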
As others have said, the best bet is to update the conf and then just use injectargs,
but if you need to restart a group of OSDs you could script it. Assuming
you are using linux, you could do something like:
# If you wanted to restart osd 1-10
for i in {1..10};
do
HOST=`ceph osd find ${i} | jq -r .crush_l
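The loop above is cut off; a possible completion (a sketch only - the jq path
and the ssh/systemctl step are my assumptions, not the original script):
# restart osd.1 through osd.10, wherever they live
for i in {1..10}; do
  HOST=$(ceph osd find ${i} | jq -r .crush_location.host)
  ssh ${HOST} "systemctl restart ceph-osd@${i}"
done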
Thanks for reply.
But the tell command itself doesn't make changes persistent, so I must add them
to ceph.conf across the entire cluster (that's where configuration
management comes in), am I correct?
On 12 May 2017 at 16:35, Richard Arends wrote:
> On 05/12/2017 02:49 PM, Алексей Усов wrote:
>
> U
When switching from Apache+fcgi to civetweb it seems like we are
losing access to some useful information, like the response time and
the response size. Do others also have this issue or is there maybe a
solution that I have not found yet?
I have opened a feature request for this, just in case:
h
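For what it's worth, civetweb can at least write a plain access log via the
frontend options in ceph.conf (a sketch; the section name and file paths are
placeholders, and as noted above it does not include response time or size):
[client.rgw.gateway1]
rgw frontends = civetweb port=7480 access_log_file=/var/log/ceph/civetweb.access.log error_log_file=/var/log/ceph/civetweb.error.log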
Interesting question. AFAIK, there's no built-in solution.
Also, if you think about it, restarting the whole cluster at once can lead to
service interruption, as you can easily bring all PG copies into a stale state
for a short time, and even longer if some OSDs won't go up for some reason.
You should reall
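One common precaution when restarting OSDs in batches (a sketch, assuming you
restart one failure domain at a time and wait for recovery in between):
# stop CRUSH from marking the restarting OSDs out and rebalancing
ceph osd set noout
# ... restart OSDs host by host, waiting for PGs to return to active+clean ...
ceph osd unset noout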
On 05/12/2017 02:49 PM, Алексей Усов wrote:
Usov,
Could someone please tell me how to restart all daemons in a
cluster if I make changes in ceph.conf, if that is indeed needed? Since
enterprise-scale ceph clusters usually tend to consist of hundreds of
OSDs, I doubt one must restart the en
Greetings,
Could someone please tell me how to restart all daemons in a cluster if
I make changes in ceph.conf, if that is indeed needed? Since enterprise-scale
ceph clusters usually tend to consist of hundreds of OSDs, I doubt one
must restart the entire cluster by hand or use some sort of exte
On Fri, May 12, 2017 at 12:47 PM, Webert de Souza Lima
wrote:
> Thanks John,
>
> I did as you suggested but unfortunately I only found information regarding
> the objecter nicks "writ, read and actv". Any more suggestions?
The daemonperf command itself is getting its list of things to display
by
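Presumably those nicks come from the daemon's perf counter schema; a way to see
the full counters behind them (a sketch; "mds.a" is a placeholder daemon name):
# full counter values
ceph daemon mds.a perf dump
# counter descriptions, including the short nicks that daemonperf displays
ceph daemon mds.a perf schema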
Thanks John,
I did as you suggested but unfortunately I only found information regarding
the objecter nicks "writ, read and actv". Any more suggestions?
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
On Wed, May 10, 2017 at 3:46 AM, John Spray wrote:
> On T
On Wed, May 10, 2017 at 4:09 AM, gjprabu wrote:
> Hi Webert,
>
> Thanks for your reply, can you please suggest PG values for data and
> metadata? I have set 128 for data and 128 for metadata, is this correct?
>
Well I think this has nothing to do with your current problem but the PG
number d
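For reference, the usual rule of thumb from the docs (a sketch; the numbers
below are an example, not taken from this thread): target roughly
(100 * number of OSDs) / replica size PGs in total across all pools, with each
pool's pg_num a power of two and the data pool getting far more than metadata:
# e.g. 40 OSDs with 3x replication: (40 * 100) / 3 = ~1333 PGs in total
# an illustrative split (pg_num can be raised later, but never lowered):
ceph osd pool create cephfs_data 1024 1024
ceph osd pool create cephfs_metadata 128 128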
Hi,
I'm having the same issues running MDS version 11.2.0 and kernel clients
4.10.
Regards
Jose
On 10/05/17 at 09:11, gjprabu wrote:
> HI John,
>
> Thanks for your reply, we are using the below version for client and
> MDS (ceph version 10.2.2)
>
> Regards
> Prabu GJ
>
>
> On W
I am unable to perform ceph-install {node} on Debian Wheezy.
[server][DEBUG ] Hit http://security.debian.org wheezy/updates/main
Translation-en
[server][DEBUG ] Hit http://download.ceph.com wheezy Release
[server][DEBUG ] Hit http://download.ceph.com wheezy/main amd64 Packages
[server][DEBUG