On Thu, Sep 15, 2016 at 10:30 PM, Jim Kilborn wrote:
> I have a replicated cache pool and metadata pool which reside on SSD drives,
> with a size of 2, backed by an erasure coded data pool.
> The cephfs filesystem was in a healthy state. I pulled an SSD drive to
> perform an exercise in osd failu
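For anyone repeating this kind of failure exercise, a rough sequence for watching the cluster's reaction looks like the following; this is only a sketch, and the OSD id and pool name are placeholders rather than values taken from the message above:

  ceph osd tree                              # find the id of the OSD on the pulled SSD
  ceph osd out 12                            # mark it out so recovery can start
  ceph -s                                    # watch recovery progress
  ceph health detail                         # look for stuck/undersized PGs
  ceph osd pool get cephfs_cache min_size    # with size 2, min_size 2 will block I/O on a single failure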
On Thu, Sep 15, 2016 at 4:19 PM, Burkhard Linke
wrote:
> Hi,
>
>
> On 09/15/2016 12:00 PM, John Spray wrote:
>>
>> On Thu, Sep 15, 2016 at 2:20 PM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>> does CephFS impose an upper limit on the number of files in a directory?
>>>
>>>
>>> We currently have o
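For later readers of this thread: as far as I know there is no hard filesystem limit, but the MDS caps the number of entries it will keep in a single directory fragment. The relevant knob in this era appears to be mds_bal_fragment_size_max, which can be inspected and (cautiously) raised on a running MDS roughly like this; the daemon name and the new value are placeholders:

  ceph daemon mds.<name> config get mds_bal_fragment_size_max
  ceph daemon mds.<name> config set mds_bal_fragment_size_max 500000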
Erick,
You can use erasure coding, but it has to be fronted by a replicated cache
tier, or so the documentation states. I have never set up this configuration
and always opt to use RBD directly on replicated pools.
https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/paged/storage-
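For completeness, the cache-tier arrangement the documentation describes is set up roughly as follows; pool names, PG counts and the image size are placeholders, so treat this as a sketch rather than a tested configuration:

  ceph osd pool create rbd_ec 128 128 erasure
  ceph osd pool create rbd_cache 128 128 replicated
  ceph osd tier add rbd_ec rbd_cache
  ceph osd tier cache-mode rbd_cache writeback
  ceph osd tier set-overlay rbd_ec rbd_cache
  ceph osd pool set rbd_cache hit_set_type bloom      # writeback tiers want a hit set configured
  rbd create --pool rbd_ec --size 10240 testimage     # clients talk to the base pool; the tier is transparent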
John,
thanks for the tips.
I ran a recursive long listing of the cephfs volume, and didn’t receive any
errors. So I guess it wasn’t serious.
I also tried running the following:
ceph tell mds.0 damage ls
2016-09-16 07:11:36.824330 7fc2ff00e700 0 client.224234 ms_handle_reset on 1
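For the archive: that ms_handle_reset line is usually just the tell client tearing down its connection after the command returns, not a sign of damage by itself. With shell access to the active MDS host, the same query over the admin socket avoids the noise (the daemon name is a placeholder):

  ceph daemon mds.<name> damage ls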
Hi Yehuda,
Thank you for the idea. I will try to test that and see if it helps.
If that is the case, would that be considered a bug in radosgw? I ask
because that version of curl seems to be what is currently standard on
RHEL/CentOS 7 (fully updated). I will have to either compile it or s
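In case it helps anyone else following the thread, the stock version can be confirmed before going down the rebuild path with:

  rpm -q curl libcurl
  curl --version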
Hi Yehuda,
Well, again, thank you!
I was able to get a package built from the latest curl release, and after
upgrading on my radosgw hosts, the load is no longer running high. The load
is just sitting at almost nothing and I only see the radosgw process using
CPU when it is actually doing
Hi Lewis,
This sounds a lot like https://bugzilla.redhat.com/1347904 , currently
slated for the upcoming RHEL 7.3 (and CentOS 7.3).
There's an SRPM in that BZ that you can rebuild and test out. This
method won't require you to keep chasing upstream curl versions
forever (curl has a lot of CVEs).
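For anyone who has not rebuilt an SRPM before, the rough flow on CentOS 7 is something like the following; the exact SRPM filename comes from the BZ attachment, so the name below is only a placeholder:

  yum install -y rpm-build yum-utils
  yum-builddep -y curl-<version>.el7.src.rpm
  rpmbuild --rebuild curl-<version>.el7.src.rpm
  yum update ~/rpmbuild/RPMS/x86_64/libcurl-*.rpm ~/rpmbuild/RPMS/x86_64/curl-*.rpm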
In the meantime, we've made changes to radosgw so that it can detect and
work around this libcurl bug. You can track the progress of this
workaround (currently in master and pending backport to jewel) at
http://tracker.ceph.com/issues/16695.
Casey
On 09/16/2016 01:38 PM, Ken Dreyer wrote:
H
Hi Casey,
Thank you for the follow-up. I had just found that one while searching in
the tracker. I probably should have done that first (though I guess I was
hoping/assuming Google would have brought it up, but of course it didn't).
Anyway, it is working nicely for me now with the newer versi
Thanks Wes and Josh for your answers. So, for more production-like
environments and more tested procedures in case of failures, the default
replication seems to be the way to go. Perhaps in the next release we will
add a storage node with EC.
Thanks,
On Fri, Sep 16, 2016 at 7:25 AM, Wes Dillingham
w
Hi,
(just in case: this isn’t intended as a rant and I hope it doesn’t get read as
one. I’m trying to understand what some perspectives towards potential future
improvements are, and I think it would be valuable to have this discoverable in
the archives)
We’ve had a “good” time recently balancin
Hi Casey,
That warning message tells users to upgrade to a new version of
libcurl. Telling users to upgrade to a newer version of a base system
package like that sets the user on a trajectory to have to maintain
their own curl packages forever, decreasing the security of their
overall system in th
On Fri, Sep 16, 2016 at 2:03 PM, Ken Dreyer wrote:
> Hi Casey,
>
> That warning message tells users to upgrade to a new version of
> libcurl. Telling users to upgrade to a newer version of a base system
> package like that sets the user on a trajectory to have to maintain
> their own curl packages
Hi,
I’m trying to run ceph jewel on CentOS 7.2 with fips mode=1 and got a
segmentation fault running ceph-authtool.
I’m pretty sure that this is a result of the fips mode.
Obviously I would hope that this would work. I will have to try again, but I
think that firefly does not have this issue.
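For reference, a minimal way to hit this (the keyring path and entity name are just examples) is:

  cat /proc/sys/crypto/fips_enabled        # prints 1 when FIPS mode is active
  ceph-authtool --create-keyring /tmp/test.keyring --gen-key -n client.test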
rp
Hi Brian,
This issue is fixed upstream in commit 08d54291435e. It looks like this did
not make it to Jewel; we're prioritizing this and will follow up when this and
any related LDAP and NFS commits make it there.
Thanks for bringing this to our attention!
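A quick way to check whether that commit has reached a given branch, once the backport lands, is to ask git directly in a clone of the ceph repository:

  git fetch origin
  git branch -r --contains 08d54291435e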
Matt
- Original Message -
>