Hi,
On 16.08.2017 at 19:31, Henrik Korkuc wrote:
> On 17-08-16 19:40, John Spray wrote:
>> On Wed, Aug 16, 2017 at 3:27 PM, Henrik Korkuc wrote:
> maybe you can suggest any recommendations on how to scale Ceph for billions
> of objects? More PGs per OSD, more OSDs, more pools? Somewhere in the
> li
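(For illustration only: the pool name and PG counts below are placeholders, not
values from this thread. Raising the PG count of an existing pool is done with
the standard pool commands, and pgp_num has to follow pg_num.)

  ceph osd pool get data pg_num          # current value
  ceph osd pool set data pg_num 4096     # raise in steps, not all at once
  ceph osd pool set data pgp_num 4096    # let placement actually use the new PGs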
Hi,
while rebalancing, a drive experienced read errors, so I think the leveldb was
corrupted. Unfortunately there is currently no second copy which is
up to date, so I can forget this PG. Only one PG is affected (I moved all
other PGs away as they had active copies on another OSD).
In "daily business" this
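(If the only remaining copy of a PG really is unrecoverable, the usual last
resort is destructive and only sketched here, with <pgid> as a placeholder:)

  ceph health detail                          # lists the affected/unfound PGs
  ceph pg <pgid> query                        # which OSDs hold or held the data
  ceph pg <pgid> mark_unfound_lost revert     # or 'delete' if no prior version exists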
Hi,
On 15.04.2016 at 07:43, Christian Balzer wrote:
> On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 15.04.2016 at 03:07, Christian Balzer wrote:
>>>> We thought this was a good idea so that we can change the replication
Hi,
On 15.04.2016 at 03:07, Christian Balzer wrote:
>> We thought this was a good idea so that we can change the replication
>> size independently for doc_root and raw-data if we like. Seems this was a
>> bad idea for all objects.
> I'm not sure how you managed to get into that state or if it's a bug
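(Per-pool replication is set with the normal pool commands; the sizes below are
just example values for the two pools mentioned above:)

  ceph osd pool set doc_root size 3
  ceph osd pool set doc_root min_size 2
  ceph osd pool set raw-data size 2
  ceph osd dump | grep 'replicated size'     # verify the per-pool settings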
Hi,
On 14.04.2016 at 03:32, Christian Balzer wrote:
> On Wed, 13 Apr 2016 14:51:58 +0200 Michael Metz-Martini | SpeedPartner GmbH
> wrote:
>> On 13.04.2016 at 04:29, Christian Balzer wrote:
>>> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner Gmb
Hi,
On 13.04.2016 at 04:29, Christian Balzer wrote:
> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 11.04.2016 at 23:39, Sage Weil wrote:
>>> ext4 has never been recommended, but we did test it. After Jewel is
>>>
Hi,
On 11.04.2016 at 23:39, Sage Weil wrote:
> ext4 has never been recommended, but we did test it. After Jewel is out,
> we would like to explicitly recommend *against* ext4 and stop testing it.
Hmmm. We're currently migrating away from xfs as we had some strange
performance issues which were res
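(Before such a migration it can help to confirm what each OSD is actually
sitting on; the OSD id and path below are only examples:)

  df -T /var/lib/ceph/osd/ceph-0    # filesystem type of the OSD data directory
  ceph osd metadata 0               # reports backend details for that OSD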
Hi,
On 06.02.2016 at 07:15, Yan, Zheng wrote:
>> On Feb 6, 2016, at 13:41, Michael Metz-Martini | SpeedPartner GmbH
>> wrote:
>> On 04.02.2016 at 15:38, Yan, Zheng wrote:
>>>> On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH
>>>>
Hi,
sorry for the delay - production system, unfortunately ;-(
On 04.02.2016 at 15:38, Yan, Zheng wrote:
>> On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH
>> wrote:
>> On 04.02.2016 at 09:43, Yan, Zheng wrote:
>>> On Thu, Feb 4, 2016 at 4:
Hi,
On 04.02.2016 at 09:43, Yan, Zheng wrote:
> On Thu, Feb 4, 2016 at 4:36 PM, Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 03.02.2016 at 15:55, Yan, Zheng wrote:
>>>> On Feb 3, 2016, at 21:50, Michael Metz-Martini | SpeedPartner GmbH
>>>>
Hi,
On 03.02.2016 at 15:55, Yan, Zheng wrote:
>> On Feb 3, 2016, at 21:50, Michael Metz-Martini | SpeedPartner GmbH
>> wrote:
>> On 03.02.2016 at 12:11, Yan, Zheng wrote:
>>>> On Feb 3, 2016, at 17:39, Michael Metz-Martini | SpeedPartner GmbH
>>>>
Hi,
On 03.02.2016 at 12:11, Yan, Zheng wrote:
>> On Feb 3, 2016, at 17:39, Michael Metz-Martini | SpeedPartner GmbH
>> wrote:
>> On 03.02.2016 at 10:26, Gregory Farnum wrote:
>>> On Tue, Feb 2, 2016 at 10:09 PM, Michael Metz-Martini | SpeedPartner
>>> Gm
Hi,
On 03.02.2016 at 10:26, Gregory Farnum wrote:
> On Tue, Feb 2, 2016 at 10:09 PM, Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> Putting somewhat higher load via cephfs on the cluster leads to messages
>> like mds0: Client X failing to respond to capability release aft
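(One way to see which clients hold how many capabilities is the MDS admin
socket; the daemon name below is only an example:)

  ceph daemon mds.storagemds01 session ls    # per-client sessions incl. num_caps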
Hi,
we're experiencing some strange issues running Ceph 0.87 in our, I
think, quite large cluster (taking the number of objects as a measure).
mdsmap e721086: 1/1/1 up {0=storagemds01=up:active}, 2 up:standby
osdmap e143048: 92 osds: 92 up, 92 in
flags noout,noscrub,nodeep-scrub
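(Those flags are the usual cluster-wide ones; they are toggled with, for
example:)

  ceph osd set noout
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub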
Hi,
after larger moves of several placement groups we tried to empty 3 of
our 66 OSDs by slowly setting their weight to 0 within the crushmap.
After the move completed we're still seeing a large number of files
left on those OSD devices.
For example, pg 5.117:
osdmap e56712 pg 5.117 (5.117) -
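(For context, draining an OSD via the crushmap and checking where a PG maps is
typically done with commands along these lines; the OSD id is a placeholder:)

  ceph osd crush reweight osd.42 0    # stop mapping data to this OSD
  ceph pg map 5.117                   # prints the up/acting set for the PG
  ceph pg 5.117 query                 # detailed state of the PG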