Hi,
I was doing some performance tuning on a test cluster of just 2
nodes (each with 10 OSDs). I have a test pool with 2 replicas (size=2, min_size=2).
Then one of the OSDs crashed due to a failing hard drive. All remaining OSDs were
fine, but the health status reported one lost object.
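For reference, the pool settings described above correspond to roughly the
following commands; the pool name 'test' is just a placeholder:
# ceph osd pool set test size 2
# ceph osd pool set test min_size 2
# ceph health detail
ceph health detail is usually what lists the PG with the unfound object.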
Here's the detail:
"recovery_s
On Wed, May 4, 2016 at 3:39 AM, Burkhard Linke
wrote:
> Hi,
>
> On 03.05.2016 18:39, Gregory Farnum wrote:
>>
>> On Tue, May 3, 2016 at 9:30 AM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>> we have a number of legacy applications that do not cope well with the
>>> POSIX
>>> locking semantics in C
Hi Vincenzo,
Theoretically you might also be able to use a directory.
But don't forget that Ceph relies on the FS's xattr support.
XFS is currently the safest choice.
You will also need a journal. This is usually a symlink to a device
inside the OSD directory.
My advice to y
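A quick way to check both points on an existing OSD (OSD id 0 is just an
example):
# ls -l /var/lib/ceph/osd/ceph-0/journal
# mount | grep /var/lib/ceph/osd/ceph-0
The first should show the journal symlink (or a plain journal file); the second
shows which filesystem backs the OSD directory, if it is a separate mount.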
Hi,
On 05/04/2016 09:15 AM, Yan, Zheng wrote:
On Wed, May 4, 2016 at 3:39 AM, Burkhard Linke
wrote:
Hi,
On 03.05.2016 18:39, Gregory Farnum wrote:
On Tue, May 3, 2016 at 9:30 AM, Burkhard Linke
wrote:
Hi,
we have a number of legacy applications that do not cope well with the
POSIX
locking
On Wed, May 4, 2016 at 4:51 PM, Burkhard Linke
wrote:
> Hi,
>
>
> How does CephFS handle locking in case of missing explicit locking control
> (e.g. flock / fcntl)? And what's the default of mmap'ed memory access in
> that case?
>
Nothing special. Actually, I have no idea why using flock improves
Hello,
We have been running Infernalis with RGW in a federated configuration.
I want to upgrade to Jewel; however, I'm confused by the new configuration
requirements of realms and the default .rgw.root pool.
In our Infernalis configuration, for the master region/zone I have the
following in ce
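For context, the realm layer that Jewel introduces is usually created with
commands along these lines; the realm name 'gold' is only a placeholder, and
the exact migration path from a federated Infernalis setup may well differ:
# radosgw-admin realm create --rgw-realm=gold --default
# radosgw-admin period update --commit
The realm/zonegroup/zone metadata then ends up in the .rgw.root pool mentioned
above.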
On 05/03/2016 04:17 PM, Vincenzo Pii wrote:
https://github.com/ceph/ceph-docker
Is anyone using ceph-docker in production, or is the project meant more
for development and experimentation?
Vincenzo Pii | TERALYTICS
DevOps Engineer
I'm not aware of anyone currently using it in production, bu
On Wed, May 4, 2016 at 12:00 AM, Nikola Ciprich
wrote:
> Hi,
>
> I was doing some performance tuning on test cluster of just 2
> nodes (each 10 OSDs). I have test pool of 2 replicas (size=2, min_size=2)
>
> then one of OSD crashed due to failing harddrive. All remaining OSDs were
> fine, but healt
On Wed, May 4, 2016 at 2:16 AM, Yan, Zheng wrote:
> On Wed, May 4, 2016 at 4:51 PM, Burkhard Linke
> wrote:
>> Hi,
>>
>>
>> How does CephFS handle locking in case of missing explicit locking control
>> (e.g. flock / fcntl)? And what's the default of mmap'ed memory access in
>> that case?
>>
>
> N
Hi Gregory,
thanks for the reply.
>
> Is OSD 0 the one which had a failing hard drive? And OSD 10 is
> supposed to be fine?
Yes, OSD 0 crashed due to disk errors; the rest of the cluster was without
problems, no crashes, no restarts. That's why it scared me a bit.
Pity I purged the lost placement grou
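For anyone hitting the same situation, the usual way to handle an unfound
object (rather than purging the PG) looks roughly like this; the pg id 2.5 is
only a placeholder:
# ceph pg 2.5 list_missing
# ceph pg 2.5 mark_unfound_lost revert
'revert' rolls back to a previous version of the object where possible;
'delete' forgets it entirely.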
I'm running CentOS 7.2. I upgraded one server from Hammer to Jewel. I cannot
get Ceph to start using the new systemd scripts. Can anyone help?
I tried to enable ceph-osd@.service by creating symlinks manually.
# systemctl list-unit-files|grep ceph
ceph-create-keys@.service s
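For what it's worth, the manual symlinks should not normally be needed;
enabling and starting the per-OSD unit instance is the usual route (OSD id 0
here is just an example):
# systemctl enable ceph-osd@0
# systemctl start ceph-osd@0
# systemctl status ceph-osd@0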
Sadly there are still some issues with the jewel/master branch for the CentOS
systemctl services.
As a workaround, if you run "systemctl status", look at the topmost
service name in the ceph-osd service tree, and use that to stop/start,
it should work.
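In other words, something roughly like this (assuming the tree shows an
instance named ceph-osd@0.service; the id will differ per host):
# systemctl status | grep -A 3 'ceph-osd'
# systemctl stop ceph-osd@0.service
# systemctl start ceph-osd@0.service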
On Wed, May 4, 2016 at 9:00 AM, Michael Kuriger wro
How are others starting Ceph services? Am I the only person trying to install
Jewel on CentOS 7?
Unfortunately, systemctl status does not list any “ceph” services at all.
On 5/4/16, 9:37 AM, "Vasu Kulkarni" wrote:
>sadly there are still some issues with jewel/master branch for centos
I think this is actually fixed in master, probably not yet backported
to jewel. systemctl status should list Ceph services unless there is
some other issue with your node.
Example output:
└─system.slice
├─system-ceph\x2dosd.slice
│ └─ceph-osd@0.service
Hi Michael,
The systemctl pattern for an OSD with Infernalis or higher is 'systemctl
start ceph-osd@<id>' (or status, restart).
It will start the OSD in the default cluster 'ceph', or in another cluster if you
have set 'CLUSTER=' in /etc/sysconfig/ceph.
If by chance you have 2 clusters on the same hardware you'll have to
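As a rough illustration of that pattern (OSD id 12 and the cluster name
'backup' are placeholders):
# systemctl restart ceph-osd@12
# grep CLUSTER /etc/sysconfig/ceph
CLUSTER=backup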
I was able to hack the Ceph /etc/init.d script to start my OSDs.
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
On 5/4/16, 9:58 AM, "ceph-users on behalf of Michael Kuriger"
wrote:
>How are others starting ceph services? Am I the only person trying t
When I issue the "ceph pg repair 1.32" command I *do* see it reported in
the "ceph -w" output, but I *do not* see any new messages about pg 1.32 in
the log of osd.6 - even if I turn debug messages way up.
# ceph pg repair 1.32
instructing pg 1.32 on osd.6 to repair
(ceph -w shows)
2016-05-04 11:
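Two things that are sometimes worth checking at this point (osd.6 and pg 1.32
as in the message above; the debug levels are only examples):
# ceph tell osd.6 injectargs '--debug-osd 20 --debug-filestore 20'
# ceph pg 1.32 query
The first raises the OSD's log verbosity at runtime; the second dumps the PG's
current state, including its scrub timestamps.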
Hi,
I am getting messages like the following from my Ceph systems. Normally this
would indicate issues with drives. But when I restart my system, a couple of
different, random OSDs start spitting out the same message again.
So it's definitely not the same drives every time.
Any ideas on how t
Hi Blade,
You can try to set the min_size to 1 to get it back online, and if/when
the error vanishes (maybe after another repair command) you can set the
min_size back to 2.
You can also try to simply out/down/(maybe) remove the OSD it is on.
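Spelled out, those steps look roughly like this; the pool name 'rbd', the pg
id 2.7, and the OSD id 3 are placeholders:
# ceph osd pool set rbd min_size 1
# ceph pg repair 2.7
# ceph osd pool set rbd min_size 2
and, for the second option:
# ceph osd out 3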
--
Mit freundlichen Gruessen / Best regards
Oliver Dz
I'm preparing to use it in production, and have been contributing
fixes for bugs I find. It's getting fairly solid, but it does need to
be moved to Jewel before we really scale it out.
--
Mike Shuey
On Wed, May 4, 2016 at 8:50 AM, Daniel Gryniewicz wrote:
> On 05/03/2016 04:17 PM, Vincenzo Pii
Hello,
On Wed, 4 May 2016 21:08:02 + Garg, Pankaj wrote:
> Hi,
>
> I am getting messages like the following from my Ceph systems. Normally
> this would indicate issues with Drives. But when I restart my system,
> different and randomly a couple OSDs again start spitting out the same
> messa
CentOS 7.2.
... and I think I just figured it out. One node had directories from former
OSDs in /var/lib/ceph/osd. When restarting other OSDs on this host, Ceph
apparently added those to the crush map, too.
[root@sm-cld-mtl-013 osd]# ls -la /var/lib/ceph/osd/
total 128
drwxr-x--- 8 ceph ceph 90 F
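If stale directories are indeed the cause, the usual cleanup for a leftover
OSD looks roughly like this (OSD id 7 is a placeholder):
# ceph osd crush remove osd.7
# ceph auth del osd.7
# ceph osd rm 7
# rm -rf /var/lib/ceph/osd/ceph-7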
Ceph 9.2.1, CentOS 7.2
I sometimes notice these errors when removing objects: the OSD reports a 'No
such file or directory' when deleting things. Any ideas here? Is this
expected?
(I anonymized the full filename, but it's all the same file)
RGW log:
2016-05-04 23:14:32.2163
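One thing that might help narrow it down: check whether the object still
exists in RADOS at all (the pool name .rgw.buckets and the object name are
assumptions; yours will differ):
# rados -p .rgw.buckets stat <object-name>
If that also returns 'No such file or directory', it would suggest the bucket
index and the underlying RADOS objects are out of sync.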