Hi,
> If the OSD is the primary one for a PG, then all IO will be
> stopped, which may lead to application failure.
No, that's not how it works. You have an acting set of OSDs for a PG,
typically 3 OSDs in a replicated pool. If the primary OSD goes down,
the secondary becomes the prim
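For anyone following along, the acting set and current primary of a PG can be
checked with "ceph pg map"; the PG id below is just a placeholder:

    # Show the up and acting sets (the first OSD in the acting set is the primary)
    # 1.2f is a hypothetical PG id, pick a real one from "ceph pg dump"
    ceph pg map 1.2f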
Hi list,
I have encountered this problem on both a Jewel cluster and a Luminous cluster.
The symptom is that some requests get blocked forever and the whole
cluster is no longer able to receive any data. Further investigation
shows the blocked requests happened on 2 osds (the pool size is 2, so I
guess it wi
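In case it helps anyone hitting the same symptom, this is roughly how I'd
locate the blocked requests; osd.12 is just a placeholder id:

    # List slow/blocked requests and the OSDs they are reported against
    ceph health detail
    # On the affected OSD's host, dump the ops that OSD currently has in flight
    ceph daemon osd.12 dump_ops_in_flight
    # Recently completed (slow) ops, useful to see where the time was spent
    ceph daemon osd.12 dump_historic_ops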
On Tue, Jan 22, 2019 at 01:26:29PM -0800, Void Star Nill wrote:
> Regarding Mykola's suggestion to use Read-Only snapshots, what is the
> overhead of creating these snapshots? I assume these are copy-on-write
> snapshots, so there's no extra space consumed except for the metadata?
Yes.
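For reference, such a snapshot is created (and optionally protected against
removal while clones depend on it) with the rbd CLI; the pool and image names
below are placeholders:

    # Take a copy-on-write snapshot of an image (rbd/vol1 is hypothetical)
    rbd snap create rbd/vol1@backup-2019-01-23
    # Protect the snapshot so clones can be layered on top of it
    rbd snap protect rbd/vol1@backup-2019-01-23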
--
Mykola
Hi,
that's bad news.
Around 5000 OSDs are affected by this issue. It's not really a
solution to redeploy these OSDs.
Is it possible to migrate the local keys to the monitors?
I see that the OSDs with the "lockbox feature" have only one key for
the data and journal partition, and the older OSDs h
Hello, Ceph users,
is it possible to migrate already deployed Ceph cluster, which uses
public network only, to a split public/dedicated networks? If so,
can this be done without service disruption? I have now got new
hardware which makes this possible, but I am not sure how to do it.
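For what it's worth, the split itself is just the public_network and
cluster_network options in ceph.conf; the subnets below are made-up examples,
and this sketch says nothing about how to do the change without disruption:

    # /etc/ceph/ceph.conf (example subnets, adjust to your environment)
    [global]
        public_network  = 192.168.10.0/24
        cluster_network = 192.168.20.0/24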
Hi Yenya,
Can I ask how your cluster looks and why you want to do the network
splitting?
We used to set up clusters of 9-12 OSD nodes (12-16 HDDs each) using 2x10Gb
for access and 2x10Gb for the cluster network; however, I don't see a reason
not to use just one network for the next cluster setup.
Thank
Yes, sort of. I have had an inconsistent pg for a while, but it is on a
different pool. But I take it this is related to a networking issue I
currently have with rsync and broken pipe.
Where exactly does it go wrong? The cephfs kernel client is sending a
request to the osd, but the osd never re
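To see where it gets stuck from the client side, the kernel client exposes its
in-flight requests via debugfs; the exact directory name depends on the fsid
and client id, so the wildcard below is just a convenience:

    # Requires debugfs mounted; one directory per mounted kernel client instance
    cat /sys/kernel/debug/ceph/*/osdc
    # Pending MDS requests can be checked the same way
    cat /sys/kernel/debug/ceph/*/mdsc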
Hi,
due to performance issues, RGW is not an option.
This statement may be wrong, but there's the following aspect to consider.
If I write a backup that is typically a large file, this is normally a
single IO stream.
This causes massive performance issues on Ceph because this single IO
stream is se
Hi,
we are having issues with the crush location hooks on Mimic:
we deployed the same script we have been using since Hammer (and which has
also been working fine in Jewel); it returns:
root=fresh-install host=$(hostname -s)-fresh
However, it seems the output of the script is completely disregarded.
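For anyone else debugging this, the hook is normally pointed to from ceph.conf
on the OSD nodes; the script path below is an assumption, not necessarily what
you use:

    # /etc/ceph/ceph.conf on the OSD nodes (script path is hypothetical)
    [osd]
        crush_location_hook = /usr/local/bin/crush-location-hook.sh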
Jakub Jaszewski wrote:
: Hi Yenya,
:
: Can I ask how your cluster looks and why you want to do the network
: splitting?
Jakub,
we originally deployed the Ceph cluster as a proof of concept for
a private cloud. We run OpenNebula and Ceph on about 30 old servers
with old HDDs (2 OSDs
How can I get the snapshot creation date on CephFS? When I do an ls on the
.snap dir, it shows the date of the snapshot's source directory rather than
the date the snapshot was created.
jes...@krogh.cc wrote:
: Hi.
:
: We're currently co-locating our mons with the head node of our Hadoop
: installation. That may be giving us some problems, we don't know yet, but
: thus I'm speculating about moving them to dedicated hardware.
:
: It is hard to get specifications "small" enough ..
On Wed, Jan 23, 2019 at 4:01 AM Manuel Lausch wrote:
>
> Hi,
>
> that's bad news.
>
> Around 5000 OSDs are affected by this issue. It's not really a
> solution to redeploy these OSDs.
>
> Is it possible to migrate the local keys to the monitors?
> I see that the OSDs with the "lockbox featur
On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote:
>Hi,
>that's bad news.
>Around 5000 OSDs are affected by this issue. It's not really a
>solution to redeploy these OSDs.
>Is it possible to migrate the local keys to the monitors?
>I see that the OSDs with the "lockbox feature" have
On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote:
>
> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote:
> >Hi,
> >
> >that's bad news.
> >
> >Around 5000 OSDs are affected by this issue. It's not really a
> >solution to redeploy these OSDs.
> >
> >Is it possible to migrate the
On Wed, Jan 23, 2019 at 6:07 PM Marc Roos wrote:
>
> Yes, sort of. I have had an inconsistent pg for a while, but it is on a
> different pool. But I take it this is related to a networking issue I
> currently have with rsync and broken pipe.
>
> Where exactly does it go wrong? The cephfs kernel clie
On Wed, 23 Jan 2019 14:25:00 +0100
Jan Fajerski wrote:
> I might be wrong on this, since it's been a while since I played with
> that. But iirc you can't migrate a subset of ceph-disk OSDs to
> ceph-volume on one host. Once you run ceph-volume simple activate,
> the ceph-disk systemd units and ud
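For context, the ceph-disk to ceph-volume takeover itself is roughly the
following; the device path is a placeholder, and --all can be replaced with a
single OSD id and fsid:

    # Record an existing ceph-disk OSD's metadata in /etc/ceph/osd/<id>-<fsid>.json
    ceph-volume simple scan /dev/sdb1
    # Activate the scanned OSDs; note this also disables the ceph-disk systemd units
    ceph-volume simple activate --all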
On Wed, 23 Jan 2019 08:11:31 -0500
Alfredo Deza wrote:
> I don't know what that would look like, but I think it is worth a try
> if re-deploying OSDs is not feasible for you.
Yes, if there is a working way to migrate this, I will give it a try.
>
> The key api for encryption is *very* odd and a lo
On Wed, Jan 23, 2019 at 4:15 PM Manuel Lausch wrote:
> Yes, you are right. The activate step disables ceph-disk system-wide.
> This is done by symlinking /etc/systemd/system/ceph-disk@.service
> to /dev/null.
> After deleting this symlink my OSDs started again after reboot.
> The startup processes f
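In other words, undoing the system-wide masking is just removing that symlink
again; a minimal sketch, please double-check the link target before deleting:

    # ceph-volume simple activate masks ceph-disk by linking its unit to /dev/null
    ls -l /etc/systemd/system/ceph-disk@.service
    # Remove the mask and reload systemd so ceph-disk OSDs come up again after reboot
    rm /etc/systemd/system/ceph-disk@.service
    systemctl daemon-reload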
On 1/23/19 3:05 PM, Alfredo Deza wrote:
> On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote:
>>
>> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote:
>>> Hi,
>>>
>>> that's bad news.
>>>
>>> Around 5000 OSDs are affected by this issue. It's not really a
>>> solution to redeploy
On Wed, Jan 23, 2019 at 11:03 AM Dietmar Rieder
wrote:
>
> On 1/23/19 3:05 PM, Alfredo Deza wrote:
> > On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote:
> >>
> >> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote:
> >>> Hi,
> >>>
> >>> that's bad news.
> >>>
> >>> Around 5000
Are there any others I need to grab, so I can do them all at once? I do
not like having to restart this one so often.
>
> Yes, sort of. I have had an inconsistent pg for a while, but it is on a
> different pool. But I take it this is related to a networking issue I
> currently have with rsync and bro
Hi Ceph Community,
I am using ansible 2.2 and ceph branch stable-2.2, on CentOS 7, to deploy
the playbook. But the deployment hangs at the step "TASK [ceph-mon :
test if rbd exists]". It gets stuck there and does not move.
I have all three ceph nodes: ceph-admin, ceph-mon, ceph-osd.
I appreci
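When it hangs on that task, I'd re-run with more verbosity and check by hand
from the mon node that the cluster actually answers; site.yml is whatever
playbook you invoke, so treat it as a placeholder:

    # Re-run the playbook with verbose output to see what the task is waiting on
    ansible-playbook -vvv site.yml
    # On the mon node, confirm the cluster responds and list the pools manually
    ceph -s
    ceph osd lspools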
Hi,
How is the commercial support for Ceph? More specifically, I was recently
pointed in the direction of the very interesting combination of CephFS,
Samba and ctdb. Is anyone familiar with companies that provide commercial
support for in-house solutions like this?
Regards, Ketil
On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote:
>
> Hi,
>
> How is the commercial support for Ceph? More specifically, I was recently
> pointed in the direction of the very interesting combination of CephFS, Samba
> and ctdb. Is anyone familiar with companies that provide commercial support
SUSE as well:
https://www.suse.com/products/suse-enterprise-storage/
On Wed, Jan 23, 2019, 6:01 PM Alex Gorbachev wrote:
> On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote:
> >
> > Hi,
> >
> > How is the commercial support for Ceph? More specifically, I was
> > recently pointed in the direction of the ve