On Wed, Jul 2, 2014 at 5:19 AM, Erik Logtenberg wrote:
> Hi Zheng,
>
> Yes, it was mounted implicitly with ACLs enabled. I disabled it by
> adding "noacl" to the mount command, and now the behaviour is correct!
> No more changing permissions.
>
> So it appears to be related to ACLs indeed, even
Hello,
Even though you did set the pool default size to 2 in your ceph
configuration, I think this value (and others) is ignored during the initial
setup for the default pools.
So either make sure these pools really have a replication of 2 by deleting
and re-creating them, or add a third storage node.
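For reference, rather than deleting and re-creating them, the replication of an existing pool can usually be checked and changed in place; a minimal sketch, assuming the firefly default pools (data, metadata, rbd):
ceph osd pool get rbd size
ceph osd pool set rbd size 2
(repeat for the data and metadata pools as needed)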
Gregory Farnum writes:
>
> On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
> wrote:
> > "profile": "bobtail",
>
> Okay. That's unusual. What's the oldest client you need to support,
> and what Ceph version are you using?
This is a fresh install (as of today) running the latest firefly.
Thank you, I will try to do it.
02.07.2014, 05:30, "Gregory Farnum" wrote:
> Yeah, the features are new from January or something so you need a
> very new kernel to support it. There are no options to set.
> But in general I wouldn't use krbd if you can use librbd instead; it's
> easier to update and more featureful!
Yeah, the features are new from January or something so you need a
very new kernel to support it. There are no options to set.
But in general I wouldn't use krbd if you can use librbd instead; it's
easier to update and more featureful!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
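For example, a rough sketch of the two paths ('myimage' in the 'rbd' pool is a made-up name): with librbd a client such as QEMU talks to the cluster directly, e.g.
qemu-system-x86_64 -drive file=rbd:rbd/myimage,format=raw
whereas krbd maps the image through the kernel, which then has to support every feature the image uses:
rbd map rbd/myimage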
Hi! Is there some option in the kernel which must be enabled, or should I just upgrade to the latest version of the kernel? I use 3.13.0-24. Thanks.
01.07.2014, 20:17, "Gregory Farnum" wrote:
> It looks like you're using a kernel RBD mount in the second case? I imagine your kernel doesn't support caching pools and you'd need to upgrade for it to work.
On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
wrote:
> Hi Greg,
>
> Am 26.06.2014 02:17, schrieb Gregory Farnum:
>> Sorry we let this drop; we've all been busy traveling and things.
>>
>> There have been a lot of changes to librados between Dumpling and
>> Firefly, but we have no
Can you reproduce with
debug osd = 20
debug filestore = 20
debug ms = 1
?
-Sam
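For reference, those debug settings can go under [osd] in ceph.conf, or be injected into a running daemon; a sketch using osd.20 from this thread:
ceph tell osd.20 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'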
On Tue, Jul 1, 2014 at 1:21 AM, Pierre BLONDEAU
wrote:
> Hi,
>
> I attach:
> - osd.20 is one of the OSDs that I found makes other OSDs crash.
> - osd.23 is one of the OSDs which crash when I start osd.20.
> - mds is one
Hi Zheng,
Yes, it was mounted implicitly with ACLs enabled. I disabled it by
adding "noacl" to the mount command, and now the behaviour is correct!
No more changing permissions.
So it appears to be related to ACLs indeed, even though I didn't
actually set any ACLs. Simply mounting with ACLs enabled
On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
wrote:
> "profile": "bobtail",
Okay. That's unusual. What's the oldest client you need to support,
and what Ceph version are you using? You probably want to set the
crush tunables to "optimal"; the "bobtail" ones are going to have all
kinds of issues
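For reference, switching the tunables is a single command, though it will trigger some data movement (a sketch):
ceph osd crush tunables optimal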
Gregory Farnum writes:
> So those disks are actually different sizes, in proportion to their
> weights? It could be having an impact on this, although it *shouldn't*
> be an issue. And your tree looks like it's correct, which leaves me
> thinking that something is off about your crush rules. :/
>
On Tue, Jul 1, 2014 at 11:57 AM, Brian Lovett
wrote:
> Gregory Farnum writes:
>
>> ...and one more time, because apparently my brain's out to lunch today:
>>
>> ceph osd tree
>>
>> *sigh*
>>
>
> haha, we all have those days.
>
> [root@monitor01 ceph]# ceph osd tree
> # id    weight  type name
Gregory Farnum writes:
> ...and one more time, because apparently my brain's out to lunch today:
>
> ceph osd tree
>
> *sigh*
>
haha, we all have those days.
[root@monitor01 ceph]# ceph osd tree
# id    weight  type name               up/down reweight
-1      14.48   root default
-2      7.24
I've never worked enough with rbd to be sure. I know that for files, when I turned
on striping, I got far better performance. It seems like for RBD, the default
is stripe_count=1 with the stripe unit equal to the object size, if I recall correctly.
Just to see if it helps with rbd, I would try stripe_count=4,
stripe_unit=1mb... or something like that. If you tinker with these
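For example, a non-default layout has to be chosen when the image is created; a rough sketch (pool and image names made up, --stripe-unit given in bytes, format 2 images required):
rbd create rbd/testimage --size 10240 --image-format 2 --stripe-unit 1048576 --stripe-count 4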
On Tue, Jul 1, 2014 at 11:45 AM, Gregory Farnum wrote:
> On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett
> wrote:
>> Brian Lovett writes:
>>
>>
>> I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if
>> the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in
On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett
wrote:
> Brian Lovett writes:
>
>
> I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if
> the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in
>
> Why would that be?
The OSDs report each other down much more
Gregory Farnum writes:
>
> What's the output of "ceph osd map"?
>
> Your CRUSH map probably isn't trying to segregate properly, with 2
> hosts and 4 OSDs each.
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
Is this what you are looking for?
ceph osd map rbd ceph
osdmap e1
Brian Lovett writes:
I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if
the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in
Why would that be?
What's the output of "ceph osd map"?
Your CRUSH map probably isn't trying to segregate properly, with 2
hosts and 4 OSDs each.
Software Engineer #42 @ http://inktank.com | http://ceph.com
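For reference, a sketch of how to inspect what CRUSH is actually doing ('testobj' is a made-up object name):
ceph osd tree
ceph osd map rbd testobj
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt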
On Tue, Jul 1, 2014 at 11:22 AM, Brian Lovett
wrote:
> I'm pulling my hair out with ceph. I am testing thin
I'm pulling my hair out with ceph. I am testing things with a 5-server
cluster. I have 3 monitors, and two storage machines each with 4 OSDs. I
have started from scratch 4 times now, and can't seem to figure out how to
get a clean status. Ceph health reports:
HEALTH_WARN 34 pgs degraded; 192
On 01/07/2014 18:21, Sylvain Munaut wrote:
> Hi,
>
>>> And then I start the process, and it starts fine.
>>> http://pastebin.com/TPzNth6P
>>> I even see one active tcp connection to a mon from that process.
>>>
>>> But the osd never becomes "up" or do anything ...
>>
>> I suppose there are error messages in logs somewhere regarding the fact that
Hi,
>> And then I start the process, and it starts fine.
>> http://pastebin.com/TPzNth6P
>> I even see one active tcp connection to a mon from that process.
>>
>> But the osd never becomes "up" or do anything ...
>
> I suppose there are error messages in logs somewhere regarding the fact that
>
It looks like you're using a kernel RBD mount in the second case? I imagine
your kernel doesn't support caching pools and you'd need to upgrade for it
to work.
-Greg
On Tuesday, July 1, 2014, Никитенко Виталий wrote:
> Good day!
> I have a server with Ubuntu 14.04 and ceph firefly installed. Config
Hi,
On 01/07/2014 17:48, Sylvain Munaut wrote:
> Hi,
>
>
> As an exercise, I killed an OSD today, just killed the process and
> removed its data directory.
>
> To recreate it, I recreated an empty data dir, then
>
> ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs
>
> (I tried with and without giving the monmap).
Hi,
I set the same weight for all the hosts and the same weight for all the OSDs
under the hosts in the crushmap, and set the pool replica size to 3. However,
after uploading 1M/4M/400M/900M files to the pool, I found that the data is not
distributed evenly across the OSDs and the utilization of the OSDs is not the same
Hi,
As an exercise, I killed an OSD today, just killed the process and
removed its data directory.
To recreate it, I recreated an empty data dir, then
ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs
(I tried with and without giving the monmap).
I then restored the keyring file
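For reference, a rebuilt OSD usually also needs its key and CRUSH entry to match what the monitors expect before it will come up; a sketch, assuming the default data dir and a made-up host name and weight:
ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-3/keyring
ceph osd crush add osd.3 1.0 host=myhost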
I know FileStore.ondisk_finisher handles C_OSD_OpCommit, and going from
"journaled_completion_queue" to "op_commit" costs 3.6 seconds, so the cost may be in
ReplicatedPG::op_commit.
Through OpTracker, I found that ReplicatedPG::op_commit first locks the PG, but it
sometimes costs from 0.5 to 1 sec
Hi all,
if it should be Nagios/Icinga and not Zabbix, there is a remote check
from me that can be found here:
https://github.com/Crapworks/check_ceph_dash
This one uses ceph-dash to monitor the overall cluster status via HTTP:
https://github.com/Crapworks/ceph-dash
But it can easily be adapted
Good day!
I have a server with Ubuntu 14.04 and ceph firefly installed. Configured main_pool
(2 OSDs) and ssd_pool (1 SSD OSD). I want to use ssd_pool as a cache pool for
main_pool:
ceph osd tier add main_pool ssd_pool
ceph osd tier cache-mode ssd_pool writeback
ceph osd tier set-overlay main_pool ssd_pool
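For what it's worth, on firefly a writeback cache tier also needs hit_set and target-size parameters before it behaves sensibly; a sketch with made-up values:
ceph osd pool set ssd_pool hit_set_type bloom
ceph osd pool set ssd_pool hit_set_count 1
ceph osd pool set ssd_pool hit_set_period 3600
ceph osd pool set ssd_pool target_max_bytes 100000000000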
Hi,
Maybe you can use this: https://github.com/thelan/ceph-zabbix, but I
am interested in seeing Craig's script and template.
Regards
On 01/07/2014 10:16, Georgios Dimitrakakis wrote:
Hi Craig,
I am also interested in the Zabbix templates and scripts if you can
publish them.
Regards,
G
Hi,
I attach:
- osd.20 is one of the OSDs that I found makes other OSDs crash.
- osd.23 is one of the OSDs which crash when I start osd.20.
- mds is one of my MDSes.
I cut the log files because they are too big. Everything is here:
https://blondeau.users.greyc.fr/cephlog/
Regards
On 30/06/2014 17:35, Gregory Farnum wrote:
Hi Craig,
I am also interested in the Zabbix templates and scripts if you can
publish them.
Regards,
G.
On Mon, 30 Jun 2014 18:15:12 -0700, Craig Lewis wrote:
You should check out Calamari (https://github.com/ceph/calamari),
Inktank's monitoring and administration tool.
I started before
Hello,
From the output we can also see that the server fails to install wget.
On each of your servers you have to set the proxy environment variables:
https_proxy, http_proxy, etc.
For redhat/centos you can do it globally in a file in /etc/profile.d.
For ubuntu / debian you have to define
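For example, on redhat/centos a sketch of such a file could look like this (proxy host and port made up):
# /etc/profile.d/proxy.sh
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1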