knowledge from the guys who have actually installed it and have it running in
their environment.
Any help is appreciated.
Thanks.
—Jiten Shah
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
rge/puppet-ceph/blob/master/USECASES.md#i-want-to-try-this-module,-heard-of-ceph,-want-to-see-it-in-action
>
> Cheers,
> -Steve
>
>
> On Fri, Aug 22, 2014 at 5:25 PM, JIten Shah wrote:
> Hi Guys,
>
> I have been looking to try out a test ceph cluster in my lab to see
getting below error:
[nk21l01si-d01-ceph001][INFO ] Running command: sudo ceph-disk -v activate
--mark-init sysvinit --mount /var/local/osd0
[nk21l01si-d01-ceph001][WARNIN] DEBUG:ceph-disk:Cluster uuid is
08985bbc-5a98-4614-9267-3e0a91e7358b
[nk21l01si-d01-ceph001][WARNIN] INFO:ceph-disk:Runnin
Hello Cephers,
We created a ceph cluster with 100 OSDs, 5 MONs and 1 MDS, and most of the stuff
seems to be working fine, but we are seeing some degradation on the OSDs due to
lack of space. Is there a way to resize the OSDs without bringing the cluster
down?
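To see how full each OSD actually is before deciding, I assume something like
this is the right starting point (stock commands only, no output pasted):

ceph df
ceph health detail
ceph osd tree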
--jiten
___
We ran into the same issue where we could not mount the filesystem on the
clients because they were running kernel 3.9. Once we upgraded the kernel on the
client node, we were able to mount it fine. FWIW, you need kernel 3.14 or above.
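A quick sanity check on the client, assuming the kernel CephFS driver rather
than ceph-fuse:

uname -r                             # should report 3.14 or newer
modprobe ceph && lsmod | grep ceph   # confirms the cephfs kernel module loads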
--jiten
On Sep 5, 2014, at 6:55 AM, James Devine wrote:
> No messages in d
Thanks Christian. Replies inline.
On Sep 6, 2014, at 8:04 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 05 Sep 2014 15:31:01 -0700 JIten Shah wrote:
>
>> Hello Cephers,
>>
>> We created a ceph cluster with 100 OSDs, 5 MONs and 1 MDS and most of the
>
On Sep 6, 2014, at 8:22 PM, Christian Balzer wrote:
>
> Hello,
>
> On Sat, 06 Sep 2014 10:28:19 -0700 JIten Shah wrote:
>
>> Thanks Christian. Replies inline.
>> On Sep 6, 2014, at 8:04 AM, Christian Balzer wrote:
>>
>>>
>>> Hello,
>
While checking the health of the cluster, I ran into the following warning:
warning: health HEALTH_WARN too few pgs per osd (1 < min 20)
When I checked the pg and pgp numbers, I saw the value was the default of 64
ceph osd pool get data pg_num
pg_num: 64
ceph osd pool get data pgp_num
pgp_num:
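If I bump them, my understanding is it would be something like the following,
where the target count is only illustrative (the usual rule of thumb being
roughly 100 PGs per OSD, divided across pools and replicas, rounded to a power
of two; pgp_num may also need to wait until the new PGs finish creating):

ceph osd pool set data pg_num 1024
ceph osd pool set data pgp_num 1024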
Thanks Greg.
—Jiten
On Sep 8, 2014, at 10:31 AM, Gregory Farnum wrote:
> On Mon, Sep 8, 2014 at 10:08 AM, JIten Shah wrote:
>> While checking the health of the cluster, I ran into the following warning:
>>
>> warning: health HEALTH_WARN too few pgs per osd (1 < min 20)
&
So, if it doesn’t refer to the entry in ceph.conf, where does it actually store
the new value?
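My guess is the value lives in the OSD map kept by the monitors rather than in
ceph.conf, in which case something like this should show it (can someone
confirm?):

ceph osd dump | grep '^pool'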
—Jiten
On Sep 8, 2014, at 10:31 AM, Gregory Farnum wrote:
> On Mon, Sep 8, 2014 at 10:08 AM, JIten Shah wrote:
>> While checking the health of the cluster, I ran into the follow
ktank.com | http://ceph.com
>
>
> On Mon, Sep 8, 2014 at 10:50 AM, JIten Shah wrote:
>> So, if it doesn’t refer to the entry in ceph.conf, where does it actually
>> store the new value?
>>
>> —Jiten
>>
>> On Sep 8, 2014, at 10:31 AM, Gregory Farnu
Looking at the docs (as below), it seems like .95 and .85 are the default
values for the full and nearfull ratios, and if you reach the full ratio, it
will stop reads and writes to avoid data corruption.
http://ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity
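From that page, these look like plain config options, e.g. in ceph.conf (the
values below are just the documented defaults):

[global]
mon osd full ratio = .95
mon osd nearfull ratio = .85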
So, few ques
What does your mount command look like ?
Sent from my iPhone 5S
> On Sep 12, 2014, at 4:56 PM, Erick Ocrospoma wrote:
>
> Hi,
>
> I'm a n00b in the ceph world, so here I go. I was following these tutorials
> [1][2] (in case you need to know if I missed something), while trying to
> mount a bl
ess of your MON node, not
> the one of the MDS node.
>
> Or may be you have deployed both a MON and an MDS on ceph01?
>
>
> JC
>
>
>
>> On Sep 12, 2014, at 18:41, Erick Ocrospoma wrote:
>>
>>
>>
>> On 12 September 2014 20:32, JIten S
Here's an example:
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
name=admin,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
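If you would rather not put the key on the command line, mount.ceph also
accepts a secret file (the path below is just an example):

sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
name=admin,secretfile=/etc/ceph/admin.secret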
Sent from my iPhone 5S
> On Sep 12, 2014, at 7:14 PM, JIten Shah wrote:
>
> Yes. It has to be the name of the MON server. If there are mor
Sent from my iPhone 5S
> On Sep 12, 2014, at 8:01 PM, Erick Ocrospoma wrote:
>
>
>
>> On 12 September 2014 21:16, JIten Shah wrote:
>> Here's an example:
>>
>> sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
>> name=admi
Thanks Craig. That’s exactly what I was looking for.
—Jiten
On Sep 16, 2014, at 2:42 PM, Craig Lewis wrote:
>
>
> On Fri, Sep 12, 2014 at 4:35 PM, JIten Shah wrote:
>
> 1. If we need to modify those numbers, do we need to update the values in
> ceph.conf and restart e
Hi Guys,
We have a cluster with 1000 OSD nodes, 5 MON nodes and 1 MDS node. In order to
be able to lose quite a few OSDs and still survive the load, we were thinking of
setting the replication factor to 50.
Is that too big of a number? What are the performance implications and any other
iss
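For reference, the replication factor is just the pool's size attribute, so the
change itself would only be something like (pool name illustrative):

ceph osd pool set data size 50

but with size 50 every object gets written to 50 OSDs, which is the part I am
worried about.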
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Please send your /etc/hosts contents here.
--Jiten
On Oct 15, 2014, at 7:27 AM, Support - Avantek wrote:
> I may be completely overlooking something here but I keep getting “ssh;
> cannot resolve hostname” when I try to contact my OSD nodes from my monitor
> node. I have set the IP addresses
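A minimal /etc/hosts on the monitor (and on each OSD node) would look something
like this; the hostnames and addresses below are placeholders:

192.168.1.10   cephmon001
192.168.1.21   cephosd001
192.168.1.22   cephosd002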
Hi Guys,
We are trying to install CephFS using puppet on all the OSD nodes, as well as
the MON and MDS nodes. Are there recommended puppet modules that anyone has used
in the past or created their own?
Thanks.
—Jiten
Hi Guys,
I am sure many of you guys have installed cephfs using puppet. I am trying to
install “firefly” using the puppet module from
https://github.com/ceph/puppet-ceph.git
and running into the “ceph_config” file issue where it’s unable to find the
config file and I am not sure why.
Here’
Dachary wrote:
>
> Hi,
>
> At the moment puppet-ceph does not support CephFS. The error you're seeing
> does not ring a bell, would you have more context to help diagnose it ?
>
> Cheers
>
>> On 06/11/2014 23:44, JIten Shah wrote:
>> Hi Guys,
>&g
> ceph osd pool create cephfsmeta xxx
> c) ceph mds newfs {cephfsmeta_poolid} {cephfsdata_poolid}
> 5) ceph-deploy mds create {mdshostname}
>
> Make sure you have password-less ssh access into the later host.
>
> I think this should do the trick
>
> JC
>
>
>
>
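Putting JC's steps together, my understanding of the full sequence is roughly
the one below. The PG counts are only illustrative, the {…} placeholders are the
pool ids reported by "ceph osd dump", and IIRC newfs also wants
--yes-i-really-mean-it on firefly:

ceph osd pool create cephfsdata 512
ceph osd pool create cephfsmeta 512
ceph osd dump | grep '^pool'
ceph mds newfs {cephfsmeta_poolid} {cephfsdata_poolid} --yes-i-really-mean-it
ceph-deploy mds create {mdshostname}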
Hi Guys,
We ran into this issue after we nearly maxed out the OSDs. Since then, we have
cleaned up a lot of data on the OSDs, but the PGs seem to be stuck for the last
4 to 5 days. I have run "ceph osd reweight-by-utilization" and that did not seem
to work.
Any suggestions?
ceph -s
cluster 909c
Thanks Chad. It seems to be working.
—Jiten
On Nov 11, 2014, at 12:47 PM, Chad Seys wrote:
> Find out which OSD it is:
>
> ceph health detail
>
> Squeeze blocks off the affected OSD:
>
> ceph osd reweight OSDNUM 0.8
>
> Repeat with any OSD which becomes toofull.
>
> Your cluster is only ab
Actually there were 100s that were too full. We manually set the OSD weights
to 0.5 and it seems to be recovering.
Thanks for the tips on crush reweight. I will look into it.
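For anyone searching the archives later, the difference as I understand it (the
OSD id and weights below are made up):

ceph osd reweight 12 0.8              # temporary 0..1 override, squeezes PGs off a full OSD
ceph osd crush reweight osd.12 1.82   # persistent CRUSH weight, normally the disk size in TB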
—Jiten
On Nov 11, 2014, at 1:37 PM, Craig Lewis wrote:
> How many OSDs are nearfull?
>
> I've seen Ceph want two toof
I agree. This was just our brute-force method on our test cluster. We won't do
this on the production cluster.
--Jiten
On Nov 11, 2014, at 2:11 PM, cwseys wrote:
> 0.5 might be too much. All the PGs squeezed off of one OSD will need to be
> stored on another. The fewer you move the less likely
age -
>> From: "JIten Shah"
>> To: "Jean-Charles LOPEZ"
>> Cc: "ceph-users"
>> Sent: Friday, November 7, 2014 7:18:10 PM
>> Subject: Re: [ceph-users] Installing CephFs via puppet
>>
>> Thanks JC and Loic but we HAVE to use puppe
Hi Guys,
I had to rekick some of the hosts where OSDs were running, and after the
re-kick, when I try to run puppet and install the OSDs again, it gives me a key
mismatch error (as below). After the hosts were shut down for the rekick, I
removed the OSDs from the osd tree and the crush map too. Why is it
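For completeness, the cleanup done before the rekick was only the tree/crush
part; my guess is the old cephx keys also need to go, i.e. something like this
per OSD ({id} is a placeholder):

ceph osd crush remove osd.{id}
ceph auth del osd.{id}
ceph osd rm {id}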
; -Greg
>
> On Fri, Nov 14, 2014 at 4:42 PM, JIten Shah wrote:
>> Hi Guys,
>>
>> I had to rekick some of the hosts where OSDs were running and after
>> re-kick, when I try to run puppet and install OSDs again, it gives me a key
>> mismatch error (as below)
Ok. I will do that. Thanks
--Jiten
> On Nov 14, 2014, at 4:57 PM, Gregory Farnum wrote:
>
> It's still creating and storing keys in case you enable it later.
> That's exactly what the error is telling you and that's why it's not
> working.
>
>>
After I rebuilt the OSDs, the MDS went into degraded mode and will not
recover.
[jshah@Lab-cephmon001 ~]$ sudo tail -100f
/var/log/ceph/ceph-mds.Lab-cephmon001.log
2014-11-17 17:55:27.855861 7fffef5d3700 0 -- X.X.16.111:6800/3046050 >>
X.X.16.114:0/838757053 pipe(0x1e18000 sd=22 :6800 s=
After rebuilding a few OSDs, I see that the PGs are stuck in degraded mode.
Some are in the unclean state and others are in the stale state. Somehow the MDS
is also degraded. How do I recover the OSDs and the MDS back to healthy? I read
through the documentation and searched the web but no luck so far.
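For anyone wanting to reproduce, something like this should enumerate what is
stuck and the MDS state (output omitted here):

ceph health detail
ceph pg dump_stuck unclean
ceph pg dump_stuck stale
ceph mds stat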
p
id you replace? It almost
> looks like you replaced multiple OSDs at the same time, and lost data because
> of it.
>
> Can you give us the output of `ceph osd tree`, and `ceph pg 2.33 query`?
>
>
> On Wed, Nov 19, 2014 at 2:14 PM, JIten Shah wrote:
> After rebuilding
D rebuild? If you
> rebuilt all 3 OSDs at the same time (or without waiting for a complete
> recovery between them), that would cause this problem.
>
>
>
> On Thu, Nov 20, 2014 at 11:40 AM, JIten Shah wrote:
> Yes, it was a healthy cluster and I had to rebuild because
tell Ceph that you lost the OSDs. For each OSD you moved, run ceph
> osd lost , then try the force_create_pg command again.
>
> If that doesn't work, you can keep fighting with it, but it'll be faster to
> rebuild the cluster.
>
>
>
> On Thu, Nov 20, 2014
PM, JIten Shah wrote:
> Ok. Thanks.
>
> —Jiten
>
> On Nov 20, 2014, at 2:14 PM, Craig Lewis wrote:
>
>> If there's no data to lose, tell Ceph to re-create all the missing PGs.
>>
>> ceph pg force_create_pg 2.33
>>
>> Repeat for each of the
again.
—Jiten
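P.S. To avoid typing every PG by hand, a rough loop along these lines should
work, assuming "ceph health detail" prints the usual "pg X.Y is stuck inactive
..." lines:

ceph health detail | awk '/is stuck inactive/ {print $2}' | \
  while read pg; do ceph pg force_create_pg "$pg"; done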
On Nov 20, 2014, at 5:47 PM, Michael Kuriger wrote:
> Maybe delete the pool and start over?
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> JIten Shah
> Sent: Thursday, November 20, 2014 5:46 PM
> To: Craig Lewis
> Cc:
I am trying to set up 3 MDS servers (one on each MON), but after I am done
setting up the first one, it gives me the below error when I try to start it on
the other ones. I understand that only 1 MDS is active at a time, but I thought
you can have multiple of them up, in case the first one dies? Or
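Is the expected way simply to repeat the same deployment for each host, e.g.
(the hostnames below are placeholders):

ceph-deploy mds create cephmon002 cephmon003

and then "ceph mds stat" should show one up:active with the others as standbys?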
built the OSDs"?
> -Greg
>
> On Mon, Nov 17, 2014 at 12:52 PM, JIten Shah wrote:
>> After I rebuilt the OSDs, the MDS went into degraded mode and will not
>> recover.
>>
>>
>> [jshah@Lab-cephmon001 ~]$ sudo tail -100f
>> /var/log/ceph/ceph-mds.
:21 PM, JIten Shah wrote:
>> I am trying to set up 3 MDS servers (one on each MON), but after I am done
>> setting up the first one, it gives me the below error when I try to start it
>> on the other ones. I understand that only 1 MDS is active at a time, but I
>> thought you c
Do I need to update the ceph.conf to support multiple MDS servers?
—Jiten
On Nov 24, 2014, at 6:56 AM, Gregory Farnum wrote:
> On Sun, Nov 23, 2014 at 10:36 PM, JIten Shah wrote:
>> Hi Greg,
>>
>> I haven’t setup anything in ceph.conf as mds.cephmon002 nor in any ce
ing your cluster.
> But you don't need to do anything explicit like tell everybody
> globally that there are multiple MDSes.
> -Greg
>
> On Mon, Dec 8, 2014 at 10:48 AM, JIten Shah wrote:
>> Do I need to update the ceph.conf to support multiple MDS servers?
>>
>>
d the rest will be standbys.
>
> Chris
>
> On Tue, Dec 9, 2014 at 3:10 PM, JIten Shah wrote:
> Hi Greg,
>
> Sorry for the confusion. I am not looking for active/active configuration
> which I know is not supported but what documentation can I refer to for
> installi
So what happens if we upgrade from Firefly to Giant? Do we lose the pools?
—Jiten
On Dec 18, 2014, at 5:12 AM, Thomas Lemarchand
wrote:
> I remember reading somewhere (maybe in changelogs) that default pools
> were not created automatically anymore.
>
> You can create pools you need yourself.
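So if I understand it right, the upgrade itself never removes existing pools;
the change only means a fresh cluster no longer pre-creates the default pools.
To double-check after upgrading:

ceph osd lspools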