Gregory Farnum writes:
> On Thu, May 21, 2015 at 8:24 AM, Kenneth Waegeman wrote:
>> Hi,
>>
>> Some strange issue wrt boolean values in the config:
>>
>> this works:
>>
>> osd_crush_update_on_start = 0 -> osd not updated
>> osd_crush_update_on_start = 1 -> osd updated
>>
>> In a previous versi
I am getting a Permission Denied error when browsing the web page of a
newly installed calamari server running under CentOS 6.
I had to stop iptables to access the server, and the installation went
as planned w/o errors.
I also had to install supervisor 3.1.3 outside of EPEL
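Rather than stopping the firewall entirely, a narrower workaround is to open
the web UI ports (a sketch, assuming the calamari UI is served over plain
HTTP/HTTPS; adjust the ports to your setup):

# CentOS 6: allow HTTP/HTTPS through iptables, then persist the rules
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
service iptables save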
Since you can't start the OSD with 0.94 (without a data change), I think it's
safe to start 0.87 again.
On Sat, May 23, 2015 at 12:46 PM, Mingfai wrote:
> thx for your info.
>
> My installation was upgraded with "ceph-deploy install --release hammer
> HOST", and it can't be downgraded with "--release giant
thx for your info.
My installation was upgraded with "ceph-deploy install --release hammer
HOST", and it can't be downgraded with "--release giant", as a warning
message in the docs clearly states: "Important: Once you upgrade a
daemon, you cannot downgrade it."
I plan to do a clean install
Experimental features like keyvaluestore don't support upgrading from 0.87 to 0.94.
Sorry
On Sat, May 23, 2015 at 7:35 AM, Mingfai wrote:
> hi,
>
> I have a Ceph cluster that uses keyvaluestore-dev. After upgrading from v0.87
> to v0.94.1 and changing the configuration (removed the "-dev" suffix and added
also the mon_status of both monitors:
# ceph daemon mon.monitor02 mon_status
{ "name": "monitor02",
"rank": 1,
"state": "probing",
"election_epoch": 690,
"quorum": [],
"outside_quorum": [
"monitor02"],
"extr
The other monitor shows the following in
the logs:
2015-05-23 03:35:05.425037 7fef8f758700 1
mon.monitor02@1(probing) e6 _ms_dispatch dropping stray message
mon_subscribe({monmap=0+,osdmap=190533}) from client.10756902
192.168.1.69:0/1293654400
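One check worth trying here (my suggestion, not from the thread): query each
monitor over its local admin socket, which works even without quorum, and
compare the "monmap" sections of the two outputs for mismatched addresses or
epochs:

# run on each monitor host; needs only the local admin socket
ceph daemon mon.monitor01 mon_status
ceph daemon mon.monitor02 mon_status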
Dear all,
I have a Ceph cluster with two monitors.
Earlier I tried to add a monitor, but it got stuck syncing and refused to
join the quorum.
Then the stores of the two monitors I had grew very big, ~25GB.
I restarted one of the monitors (a suggestion to get the mon to
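For the ~25GB stores, a commonly suggested remedy (a sketch; not confirmed by
this thread, and the mon name is an example) is to compact the monitor's
leveldb store:

# compact a running monitor
ceph tell mon.monitor01 compact
# or compact automatically at startup, in ceph.conf:
[mon]
mon compact on start = true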
hi,
I have a Ceph cluster that uses keyvaluestore-dev. After upgrading from v0.87
to v0.94.1 and changing the configuration (removed the "-dev" suffix and added
"enable experimental ..."), the OSD still fails to start, with the following
error:
7fb8b06ce900 -1 osd.0 0 OSD::init() : unable to read osd superblock
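For reference, the hammer-era settings look roughly like this in ceph.conf (a
sketch: the option name is spelled out here since the post elides it, and per
the earlier reply the upgrade from 0.87 is unsupported regardless):

[osd]
osd objectstore = keyvaluestore
enable experimental unrecoverable data corrupting features = keyvaluestore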
On 22/05/2015 20:06, Gregory Farnum wrote:
Ugh. We appear to be trying to allocate too much memory for this event
in the journal dump; we'll need to fix this. :(
It's not even per-event; it tries to load the entire journal into memory
in one go. This is a hangover from the old Dumper/Resetter
Alright, bumping that up 10 worked. The MDS server came up and
"recovered". Took about 1 minute.
Thanks again, guys.
--
Adam
On Fri, May 22, 2015 at 2:50 PM, Gregory Farnum wrote:
> On Fri, May 22, 2015 at 12:45 PM, Adam Tygart wrote:
>> Fair enough. Anyway, is it safe to now increase the 'mds beacon grace'
On Fri, May 22, 2015 at 12:45 PM, Adam Tygart wrote:
> Fair enough. Anyway, is it safe to now increase the 'mds beacon grace'
> to try and get the mds server functional again?
Yep! Let us know how it goes...
>
> I realize there is nothing simple about the things that are being
> accomplished here
Fair enough. Anyway, is it safe to now increase the 'mds beacon grace'
to try and get the mds server functional again?
I realize there is nothing simple about the things that are being
accomplished here, and thank everyone for their hard work on making
this stuff work as well as it does.
--
Adam
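For reference, raising 'mds beacon grace' can be done at runtime or in
ceph.conf (a sketch; 600 is an illustrative value, and to my understanding
the mons enforce the grace as well, so they need the setting too):

# at runtime:
ceph mds tell 0 injectargs '--mds-beacon-grace 600'
ceph tell mon.\* injectargs '--mds-beacon-grace 600'
# or persistently, in ceph.conf:
[global]
mds beacon grace = 600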
On Fri, May 22, 2015 at 12:34 PM, Adam Tygart wrote:
> I believe I grabbed all of these files:
>
> for x in $(rados -p metadata ls | grep -E '^200\.'); do rados -p
> metadata get ${x} /tmp/metadata/${x}; done
> tar czSf journal.tar.gz /tmp/metadata
>
> https://drive.google.com/file/d/0B4XF1RWjuGh5MVFqVFZfNmpfQWc/view?usp=sharing
I believe I grabbed all of these files:
for x in $(rados -p metadata ls | grep -E '^200\.'); do rados -p
metadata get ${x} /tmp/metadata/${x}; done
tar czSf journal.tar.gz /tmp/metadata
https://drive.google.com/file/d/0B4XF1RWjuGh5MVFqVFZfNmpfQWc/view?usp=sharing
When this crash occurred, the r
On Fri, May 22, 2015 at 11:34 AM, Adam Tygart wrote:
> On Fri, May 22, 2015 at 11:47 AM, John Spray wrote:
>>
>>
>> On 22/05/2015 15:33, Adam Tygart wrote:
>>>
>>> Hello all,
>>>
>>> The ceph-mds servers in our cluster are performing a constant
>>> boot->replay->crash in our systems.
>>>
>>> I ha
On Fri, May 22, 2015 at 11:47 AM, John Spray wrote:
>
>
> On 22/05/2015 15:33, Adam Tygart wrote:
>>
>> Hello all,
>>
>> The ceph-mds servers in our cluster are performing a constant
>> boot->replay->crash in our systems.
>>
>> I have enabled debug logging for the mds for a restart cycle on one of
Yeah that's what I said at first but they want to keep everything managed
inside the OpenStack ecosystem, so I guess they'll be keen to test Manila
integration!
On Friday, May 22, 2015, Gregory Farnum wrote:
> If you guys have stuff running on Hadoop, you might consider testing
> out CephFS too.
If you guys have stuff running on Hadoop, you might consider testing
out CephFS too. Hadoop is a predictable workload that we haven't seen
break at all in several years and the bindings handle data locality
and such properly. :)
-Greg
On Thu, May 21, 2015 at 11:24 PM, Wang, Warren wrote:
>
> On 5
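For anyone wanting to test this: the CephFS Hadoop bindings are wired up in
core-site.xml, roughly as below (from memory of the hammer-era docs; the
property names and the monitor address are examples to verify against the
current documentation):

<property>
  <name>fs.default.name</name>
  <value>ceph://mon-host:6789/</value>
</property>
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>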
On Thu, May 21, 2015 at 8:24 AM, Kenneth Waegeman wrote:
> Hi,
>
> Some strange issue wrt boolean values in the config:
>
> this works:
>
> osd_crush_update_on_start = 0 -> osd not updated
> osd_crush_update_on_start = 1 -> osd updated
>
> In a previous version we could set boolean values in the c
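A quick sanity check for what the daemon actually parsed (a sketch; the admin
socket 'config get' and the 0/1-vs-true/false spellings are standard as far
as I know):

# in ceph.conf, these have historically been equivalent spellings:
[osd]
osd_crush_update_on_start = 0
# osd crush update on start = false
# ask the running daemon what it ended up with:
ceph daemon osd.0 config get osd_crush_update_on_start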
On Thu, May 21, 2015 at 3:09 AM, Michel Hollands wrote:
> Hello,
>
> Is it possible to use the rados_clone_range() librados API call with an
> erasure-coded pool? The documentation doesn’t mention it’s not possible.
> However, running the clonedata command from the rados utility (which seems to
>
I notice in both logs, the last entry before the MDS restart/failover is when
the mds is replaying the journal and gets to
/homes/gundimed/IPD/10kb/1e-500d/DisplayLog/
2015-05-22 09:59:19.116231 7f9d930c1700 10 mds.0.journal EMetaBlob.replay for
[2,head] had [inode 13f8e31 [...2,head]
/hom
On 22/05/2015 15:33, Adam Tygart wrote:
Hello all,
The ceph-mds servers in our cluster are performing a constant
boot->replay->crash in our systems.
I have enabled debug logging for the mds for a restart cycle on one of
the nodes[1].
You found a bug, or more correctly you probably found multiple bugs
I knew I forgot to include something with my initial e-mail.
Single active with failover.
dumped mdsmap epoch 30608
epoch 30608
flags 0
created 2015-04-02 16:15:55.209894
modified 2015-05-22 11:39:15.992774
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_
I've experienced MDS issues in the past, but nothing sticks out to me in your
logs.
Are you using a single active MDS with failover, or multiple active MDS?
--Lincoln
On May 22, 2015, at 10:10 AM, Adam Tygart wrote:
> Thanks for the quick response.
>
> I had 'debug mds = 20' in the first log
Thanks for the quick response.
I had 'debug mds = 20' in the first log, I added 'debug ms = 1' for this one:
https://drive.google.com/file/d/0B4XF1RWjuGh5bXFnRzE1SHF6blE/view?usp=sharing
Based on these logs, it looks like heartbeat_map is_healthy 'MDS' just
times out and then the mds gets respawned
Hi Adam,
You can get the MDS to spit out more debug information like so:
# ceph mds tell 0 injectargs '--debug-mds 20 --debug-ms 1'
At least then you can see where it's at when it crashes.
--Lincoln
On May 22, 2015, at 9:33 AM, Adam Tygart wrote:
> Hello all,
>
> The ceph-mds servers
Hello all,
The ceph-mds servers in our cluster are performing a constant
boot->replay->crash in our systems.
I have enabled debug logging for the mds for a restart cycle on one of
the nodes[1].
Kernel debug from cephfs client during reconnection attempts:
[732586.352173] ceph: mdsc delayed_work
PG = Placement Group
PGP = Placement Group for Placement purpose
pg_num = number of placement groups in a pool
When pg_num is increased for any pool, every PG of this pool splits in half,
but they all remain mapped to their parent OSD.
Until pgp_num is increased as well, Ceph does not start rebalancing
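As a concrete sketch (the pool name and PG counts are examples):

# create the new PGs; they split in place, no rebalancing yet
ceph osd pool set rbd pg_num 256
# raise pgp_num to match; placement changes and rebalancing starts
ceph osd pool set rbd pgp_num 256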
To answer the 1st question: yes, you can mount the RBDs on the existing nodes,
however there have been reported problems with RBD clients on the same server
as the OSDs. From memory these have been mainly crashes and hangs. Whether or
not you will come across these problems is something you will
Hi Ariel, gentlemen,
I have the same question, but with regard to multipath. Is it possible to
just export an iSCSI target on each Ceph node and use multipath on the
client side?
Can it possibly lead to data inconsistency?
Regards, Vasily.
On Fri, May 22, 2015 at 12:59 PM, Gerson Ariel wrote:
> I
Hi,
While waiting for CephFS, you can use a clustered filesystem like OCFS2 or
GFS2 on top of RBD mappings, so that each host can access the same device
through the clustered filesystem.
Regards,
Frédéric.
On 21/05/2015 16:10, gjprabu wrote:
Hi All,
We are using rbd and map the same rbd image t
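A minimal sketch of the OCFS2-on-RBD approach suggested above (pool, image
and mountpoint names are examples; the o2cb cluster stack must already be
configured on all hosts):

# on each host: map the shared image (check 'rbd showmapped' for the device)
rbd map rbd/shared-img
# once, on one host: create the clustered filesystem
mkfs.ocfs2 -L shared /dev/rbd0
# on every host:
mount -t ocfs2 /dev/rbd0 /mnt/shared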
I apologize beforehand for not using a more descriptive subject for my
question.
On Fri, May 22, 2015 at 4:55 PM, Gerson Ariel wrote:
> Our hardware is like this: three identical servers with 8 OSD disks, 1 SSD
> disk as journal, 1 for the OS, 32GB of ECC RAM, 4 Gb copper Ethernet. We
> deploy thi
Our hardware is like this: three identical servers with 8 OSD disks, 1 SSD
disk as journal, 1 for the OS, 32GB of ECC RAM, and 4 Gb copper Ethernet. We
have been running this cluster since February 2015 and most of the time the
system load is not too great, lots of idle time.
Right now we have a node that mounts rbd blo
Hi Vasiliy,
Do we have any procedure for NFS over Ceph, and the mount options?
Regards
Prabu
On Thu, 21 May 2015 22:02:09 +0530, Vasiliy Angapov wrote:
CephFS is, I believe, not very production-ready. Use production-quality
clustered filesystems or consider using NFS o
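An NFS-over-RBD setup, as alluded to above, usually looks roughly like this
(a sketch; the hostname, image name, export path and network are examples):

# on a gateway host: put a local filesystem on an RBD image
rbd map rbd/nfs-img
mkfs.xfs /dev/rbd0
mkdir -p /export/nfs && mount /dev/rbd0 /export/nfs
# export it over NFS
echo '/export/nfs 192.168.1.0/24(rw,no_root_squash)' >> /etc/exports
exportfs -ra
# on the clients:
mount -t nfs gateway-host:/export/nfs /mnt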