Yes, I believe so. Are there any workarounds?
-----Original Message-----
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: July 13, 2017, 21:13
To: 许雪寒
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Reply: No "snapset" attribute for clone object
Quite possibly the same as this issue? [1]
[1] http://track
So only the ceph-mds is affected? Let's say if we have mons and osds
on 10.2.8 and the MDS on 10.2.6 or 10.2.7 we would be "safe"?
I'm asking since we need to add new storage nodes to our production cluster.
Best,
Martin
On Wed, Jul 12, 2017 at 10:44 PM, Patrick Donnelly wrote:
> On Wed, Jul 12
Hi,
Why would you want to maintain the copies yourself? You would be replicating on Ceph
and then again across different files inside Ceph. Let Ceph take care of the counting:
create a pool with 3 or more copies and let Ceph decide what is
stored and where.
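A minimal sketch of that (the pool name and PG count are placeholders, not from this thread):

ceph osd pool create mypool 128 128 replicated
ceph osd pool set mypool size 3     # keep 3 copies of every object
ceph osd pool set mypool min_size 2 # keep serving I/O while 2 copies are available

Ceph then places, counts and repairs the replicas for you.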
Best regards,
On 13/07/17 at 17:06, li...@marcelof
Hi,
10.2.9 is there:
apt list --upgradable
Listing... Done
ceph/stable 10.2.9-1~bpo80+1 amd64 [upgradable from: 10.2.8-1~bpo80+1]
Changelog??
Udo
On 2017-07-14 09:26, Martin Palma wrote:
So only the ceph-mds is affected? Let's say if we have mons and osds
on 10.2.8 and the MDS on 10.2.6 or
Hi All,
I have been reviewing the sizing of our PGs with a view to some intermittent
performance issues. When we have scrubs running, even when only a few are, we
can sometimes get severe impacts on the performance of RBD images, enough to
start causing VMs to appear stalled or unresponsive.
Hello,
I am trying to install a test CephFS "Luminous" system on Ubuntu 16.04.
Everything looks fine, but the `mount.ceph` command fails (error 110, timeout);
kernel logs show a number of messages like these before the `mount`
program gives up:
libceph: ... feature set mismatch, my 107b84a842ac
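In case it helps, the invocation that times out looks roughly like this (the monitor address, mount point and keyring path are placeholders, not my real values):

sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
dmesg | tail   # this is where the libceph feature-set mismatch lines show up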
According to some slide in https://www.youtube.com/watch?v=gp6if858HUI
the support is:
> TUNABLE RELEASE CEPH_VERSION KERNEL
> CRUSH_TUNABLES argonaut v0.48.1 v3.6
> CRUSH_TUNABLES2 bobtail v0.55 v3.9
> CRUSH_TUNABLES3 firefly v0.78 v3.15
> CRUSH_V4
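If the mismatch really is down to the CRUSH tunables, one possible workaround (assuming you can accept the older placement behaviour and the data movement it triggers) is to drop the cluster back to a profile the kernel understands, e.g.:

ceph osd crush show-tunables    # check which profile is currently in effect
ceph osd crush tunables hammer  # example: fall back to the hammer profile for older kernels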
On Fri, Jul 14, 2017 at 11:29 AM, Riccardo Murri
wrote:
> Hello,
>
> I am trying to install a test CephFS "Luminous" system on Ubuntu 16.04.
>
> Everything looks fine, but the `mount.ceph` command fails (error 110,
> timeout);
> kernel logs show a number of messages like these before the `mount`
> On 11 July 2017 at 22:35, Sage Weil wrote:
>
>
> On Tue, 11 Jul 2017, Wido den Hollander wrote:
> > > On 11 July 2017 at 17:03, Sage Weil wrote:
> > >
> > >
> > > Hi all,
> > >
> > > Luminous features a new 'service map' that lets rgw's (and rgw nfs
> > > gateways and iscsi gateways an
On Wed, Jul 12, 2017 at 7:11 PM, wrote:
> Hi!
>
> I have installed Ceph using ceph-deploy.
> The Ceph Storage Cluster setup includes these nodes:
> ld4257 Monitor0 + Admin
> ld4258 Montor1
> ld4259 Monitor2
> ld4464 OSD0
> ld4465 OSD1
>
> Ceph Health status is OK.
>
> However, I cannot mount Ceph
Thanks, Sage.
It doesn't happen every time, but the probability is high.
Reproduce as follows:
HOST-A    HOST-B    HOST-C
osd 7     osd 21    osd 11
1. osdmap epoch 95, pg 1.20f on osd acting set [11,7] / up set [11,7], then
shutdown HOST-C
2. for a long time, cluster
On 14/07/17 11:03, Ilya Dryomov wrote:
> On Fri, Jul 14, 2017 at 11:29 AM, Riccardo Murri
> wrote:
>> Hello,
>>
>> I am trying to install a test CephFS "Luminous" system on Ubuntu 16.04.
>>
>> Everything looks fine, but the `mount.ceph` command fails (error 110,
>> timeout);
>> kernel logs show a
Hi All,
I am new to ceph and I am trying to debug a few scenarios. I have 2 queries, as
listed below:
1. Regarding enabling debug logs for ceph
2. Regarding internal processes of ceph
QUERY 1
I have enabled the logs by setting the log level in /etc/ceph/ceph.conf
attached above -
But none of
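For reference, debug settings in /etc/ceph/ceph.conf usually look something like this (the subsystems and levels below are only examples, not the ones from my attachment):

[global]
    # memory-log / file-log levels for a few subsystems
    debug ms = 1/5
    debug osd = 20/20
    debug mon = 10/10

The same levels can also be injected at runtime, e.g. ceph tell osd.0 injectargs '--debug-osd 20/20'.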
On Mon, Jul 10, 2017 at 5:06 PM, Sage Weil wrote:
> On Mon, 10 Jul 2017, Luis Periquito wrote:
>> Hi Dan,
>>
>> I've enabled it in a couple of big-ish clusters and had the same
>> experience - a few seconds disruption caused by a peering process
>> being triggered, like any other crushmap update d
Hi,
Occasionally we want to change the scrub schedule for a pool or whole
cluster, but we want to do this by injecting new settings without
restarting every daemon.
I've noticed that in jewel, changes to scrub_min/max_interval and
deep_scrub_interval do not take immediate effect, presumably becau
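For reference, the kind of injection meant here (intervals are in seconds; the values are only examples):

ceph tell osd.* injectargs '--osd_scrub_min_interval 86400' '--osd_scrub_max_interval 604800'
ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'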
The only people that have experienced it seem to be using cache
tiering. I don't know if anyone has deeply investigated it yet. You
could attempt to evict those objects from the cache tier so that the
snapdir request is proxied down to the base tier to see if that works.
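A sketch of that eviction (the pool and object names are assumptions):

# flush the object to the base tier, then drop it from the cache tier
rados -p cache-pool cache-flush rbd_data.abc123.0000000000000000
rados -p cache-pool cache-evict rbd_data.abc123.0000000000000000
# or, more drastically, flush and evict everything in the cache tier
rados -p cache-pool cache-flush-evict-all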
On Fri, Jul 14, 2017 at 3:0
Gonzalo,
You are right, I said a lot about my current environment and maybe I did not
explain my problem as clearly as I could have. With Ceph at the moment, multiple
client hosts can mount and write data to my system, and this is a problem, because
I could end up with filesystem corruption.
Example, t
On Fri, 14 Jul 2017, Chenyehua wrote:
> Thanks, Sage.
>
> It doesn't happen every time, but the probability is high
>
> Reproduce as Follows:
> HOST-A HOST-B HOST-C
> osd 7 osd 21 osd11
> 1. osdmap epoch95, pg 1.20f on osd acting set [11,7]/ up set[11,7
On Fri, Jul 14, 2017 at 12:26 AM, Martin Palma wrote:
> So only the ceph-mds is affected? Let's say if we have mons and osds
> on 10.2.8 and the MDS on 10.2.6 or 10.2.7 we would be "safe"?
Yes, only the MDS was affected.
As Udo mentioned, v10.2.9 is out so feel free to upgrade to that instead.
Dear all,
The current upgrade procedure to jewel, as stated by the RC's release
notes, can be boiled down to
- upgrade all monitors first
- upgrade osds only after we have a **full** quorum of luminous monitors,
comprising all the monitors in the monmap (i.e., once we have the
'luminous'
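For illustration, the check I have in mind before touching the OSDs would be something along these lines (assuming luminous mon binaries):

ceph mon feature ls   # 'luminous' should show up among the persistent quorum features
ceph versions         # luminous-only: reports the running version of every daemon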
On Fri, 14 Jul 2017, Joao Eduardo Luis wrote:
> Dear all,
>
>
> The current upgrade procedure to jewel, as stated by the RC's release notes,
You mean (jewel or kraken) -> luminous, I assume...
> can be boiled down to
>
> - upgrade all monitors first
> - upgrade osds only after we have a **full
On 07/14/2017 03:12 PM, Sage Weil wrote:
On Fri, 14 Jul 2017, Joao Eduardo Luis wrote:
Dear all,
The current upgrade procedure to jewel, as stated by the RC's release notes,
You mean (jewel or kraken) -> luminous, I assume...
Yeah. *sigh*
-Joao
On 2017-07-14T14:12:08, Sage Weil wrote:
> > Any thoughts on how to mitigate this, or on whether I got this all wrong and
> > am missing a crucial detail that blows this wall of text away, please let me
> > know.
> I don't know; the requirement that mons be upgraded before OSDs doesn't
> seem th
Thank you for the clarification and yes we saw that v10.2.9 was just
released. :-)
Best,
Martin
On Fri, Jul 14, 2017 at 3:53 PM, Patrick Donnelly wrote:
> On Fri, Jul 14, 2017 at 12:26 AM, Martin Palma wrote:
>> So only the ceph-mds is affected? Let's say if we have mons and osds
>> on 10.2.8 a
Having run ceph clusters in production for the past six years and upgrading
from every stable release starting with argonaut to the next, I can honestly
say being careful about order of operations has not been a problem.
> On Jul 14, 2017, at 10:27 AM, Lars Marowsky-Bree wrote:
>
> On 2017-07-
On 2017-07-14T10:34:35, Mike Lowe wrote:
> Having run ceph clusters in production for the past six years and upgrading
> from every stable release starting with argonaut to the next, I can honestly
> say being careful about order of operations has not been a problem.
This requirement did not e
On 07/14/2017 03:12 PM, Sage Weil wrote:
On Fri, 14 Jul 2017, Joao Eduardo Luis wrote:
On top of all this, I found during my tests that any OSD running luminous
prior to the luminous quorum will need to be restarted before it can properly
boot into the cluster. I'm guessing this is a bug rathe
It was required for Bobtail to Cuttlefish and Cuttlefish to Dumpling.
Exactly how many mons do you have such that you are concerned about failure?
If you have let’s say 3 mons, you update all the bits, then it shouldn’t take
you more than 2 minutes to restart the mons one by one. You can tak
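For example, on a systemd-based install the per-mon restart is just (hostname handling is an assumption):

sudo systemctl restart ceph-mon@$(hostname -s)
ceph quorum_status | grep quorum_names   # wait until the mon is back in quorum before doing the next one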
On Fri, 14 Jul 2017, Lars Marowsky-Bree wrote:
> On 2017-07-14T14:12:08, Sage Weil wrote:
>
> > > Any thoughts on how to mitigate this, or on whether I got this all wrong
> > > and
> > > am missing a crucial detail that blows this wall of text away, please let
> > > me
> > > know.
> > I don't k
Dear all,
I'm reading the docs at
http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
regarding the cluster network and I wonder which nodes are connected to the
dedicated cluster network?
The diagram on the mentioned page only shows the OSDs connected to the cluster netwo
On Fri, Jul 14, 2017 at 9:44 AM, wrote:
> Gonzalo,
>
>
>
> You are right, i told so much about my enviroment actual and maybe i didn't
> know explain my problem the better form, with ceph in the moment, mutiple
> hosts clients can mount and write datas in my system and this is one
> problem, beca
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Jason Dillaman
> Sent: 14 July 2017 16:40
> To: li...@marcelofrota.info
> Cc: ceph-users
> Subject: Re: [ceph-users] Ceph mount rbd
>
> On Fri, Jul 14, 2017 at 9:44 AM, wrote:
> > Gonzal
Only the OSDs use the dedicated cluster network. Pinging the mons and mds
services on that network will do nothing.
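For reference, the relevant ceph.conf snippet looks like this (the subnets are placeholders); only the OSD hosts actually make use of the second network:

[global]
    # mons, mds, clients and osds all talk on the public network
    public network = 10.0.0.0/24
    # only osd-to-osd replication and heartbeat traffic uses the cluster network
    cluster network = 192.168.0.0/24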
On Fri, Jul 14, 2017, 11:39 AM Laszlo Budai wrote:
> Dear all,
>
> I'm reading the docs at
> http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
> regarding the c
Is there going to be an announcement for 10.2.9 either? I haven't seen
anything other than users noticing the packages.
On Fri, Jul 14, 2017, 10:30 AM Martin Palma wrote:
> Thank you for the clarification and yes we saw that v10.2.9 was just
> released. :-)
>
> Best,
> Martin
>
> On Fri, Jul 14,
Hi,
I'm following the instructions on the web (
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/) and I'm trying
to create a manager on my first node.
In my environment I have 2 nodes:
- vdicnode01 (mon, mgr and osd)
- vdicnode02 (osd)
Each server has two NICs, the public and the private
I've been trying to work through similar mgr issues for Xenial-Luminous...
roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr create
It is tested for master and is working fine; I will run those same tests on
luminous, check if there is an issue, and update here. mgr create is
needed for luminous+ builds only.
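On a luminous build, once the mgr is created you can confirm it registered with something like:

ceph mgr dump | grep active_name   # should name the mgr you just created
ceph -s                            # the status output also gains a mgr line on luminous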
On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown wrote:
> I've been trying to work through similar mgr issues for Xeni
I'm testing on the latest Jewel version I found in the repositories:
[root@vdicnode01 yum.repos.d]# ceph --version
ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e)
thanks a lot!
2017-07-14 19:21 GMT+02:00 Vasu Kulkarni :
> It is tested for master and is working fine, I will run those
I issued the pg deep scrub command ~24 hours ago and nothing has changed. I see
nothing in the active osd's log about kicking off the scrub.
On Jul 13, 2017, at 2:24 PM, David Turner <drakonst...@gmail.com> wrote:
# ceph pg deep-scrub 22.1611
On Thu, Jul 13, 2017 at 1:00 PM Aaron Basset
You probably have osd_max_scrubs=1 and the PG just isn't getting a
slot to start.
Here's a little trick to get that going right away:
ceph osd set noscrub
ceph osd set nodeep-scrub
ceph tell osd.* injectargs -- --osd_max_scrubs 2
ceph pg deep-scrub 22.1611
... wait until it starts scrubbing ...
ce
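The tail of the recipe is cut off above; presumably it just reverts the temporary settings, along these lines (my assumption, not the original text):

ceph tell osd.* injectargs -- --osd_max_scrubs 1
ceph osd unset noscrub
ceph osd unset nodeep-scrub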
On Fri, Jul 14, 2017 at 10:37 AM, Oscar Segarra
wrote:
> I'm testing on latest Jewell version I've found in repositories:
>
You can skip that command then; I will fix the document to add a note for
jewel or pre-luminous builds.
>
> [root@vdicnode01 yum.repos.d]# ceph --version
> ceph version 10.
On Fri, Jul 14, 2017 at 5:41 AM Dan van der Ster wrote:
> Hi,
>
> Occasionally we want to change the scrub schedule for a pool or whole
> cluster, but we want to do this by injecting new settings without
> restarting every daemon.
>
> I've noticed that in jewel, changes to scrub_min/max_interval
v10.2.8 Jewel released
======================
This point release brought a number of important bugfixes in all major
components of Ceph. However, it also introduced a regression that
could cause MDS damage, and a new release, v10.2.9, was published to
address this. Therefore, Jewel users should
v10.2.9 Jewel released
======================
This point release fixes a regression introduced in v10.2.8.
We recommend that all Jewel users upgrade.
For more detailed information, see the complete changelog[1]
and release notes[2].
Notable Changes
---------------
* cephfs: Damaged MDS with 1
I'm having an issue with small sequential reads (such as searching
through source code files, etc), and I found that multiple small reads
within a 4MB boundary would fetch the same object from the OSD multiple
times, as it gets inserted into the RBD cache partially.
How to reproduce: rbd image a
On Fri, Jul 14, 2017 at 3:43 PM, Ruben Rodriguez wrote:
>
> I'm having an issue with small sequential reads (such as searching
> through source code files, etc), and I found that multiple small reads
> within a 4MB boundary would fetch the same object from the OSD multiple
> times, as it gets ins