On 6/4/19 8:00 PM, J. Eric Ivancich wrote:
> On 6/4/19 7:37 AM, Wido den Hollander wrote:
>> I've set up a temporary machine next to the 13.2.5 cluster with the
>> 13.2.6 packages from Shaman.
>>
>> On that machine I'm running:
>>
>> $ radosgw-admin gc process
>>
That seems to work as intended.
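For what it's worth, a rough way to confirm the backlog is actually draining
is to count the pending GC entries before and after a run; a sketch (the grep
is just a crude way to count entries in the JSON output):

$ radosgw-admin gc list --include-all | grep -c '"tag"'
$ radosgw-admin gc process
$ radosgw-admin gc list --include-all | grep -c '"tag"'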
Hello Robert,
I did not make any changes, so I'm still using the prio queue.
Regards
On Mon, 10 Jun 2019 at 17:44, Robert LeBlanc wrote:
> I'm glad it's working, to be clear did you use wpq, or is it still the
> prio queue?
>
> Sent from a mobile device, please excuse any typos.
>
> On Mon, J
Hi John,
I have 9 HDDs and 3 SSDs behind a SAS3008 PCI-Express Fusion-MPT SAS-3 from
LSI. HDDs are HGST HUH721008AL (8 TB, 7200 rpm), SSDs are Toshiba PX05SMB040
(400 GB). The OSDs are BlueStore; each set of 3 HDDs has its WAL and DB on one
SSD (DB size 50 GB, WAL 10 GB). I did not change any cache settings.
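For illustration only (device and LV names below are made up, not taken from
this cluster), an OSD laid out like that is typically created along these
lines, with the DB and WAL LVs pre-created on the SSD:

$ ceph-volume lvm create --bluestore \
      --data /dev/sdb \
      --block.db ceph-db/db-sdb \
      --block.wal ceph-wal/wal-sdb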
On 6/7/19 3:35 PM, Jason Dillaman wrote:
> On Fri, Jun 7, 2019 at 7:22 AM Sakirnth Nagarasa
> wrote:
>>
>> On 6/6/19 5:09 PM, Jason Dillaman wrote:
>>> On Thu, Jun 6, 2019 at 10:13 AM Sakirnth Nagarasa
>>> wrote:
On 6/6/19 3:46 PM, Jason Dillaman wrote:
> Can you run "rbd trash ls -
On Tue, 11 Jun 2019 at 14:46, Sakirnth Nagarasa
wrote:
> On 6/7/19 3:35 PM, Jason Dillaman wrote:
[...]
> > Can you run "rbd rm --log-to-stderr=true --debug-rbd=20
> > ${POOLNAME}/${IMAGE}" and provide the logs via pastebin.com?
> >
> >> Cheers,
> >> Sakirnth
>
It is not necessary anymore, the re
On 6/11/19 10:42 AM, Igor Podlesny wrote:
> On Tue, 11 Jun 2019 at 14:46, Sakirnth Nagarasa
> wrote:
>> On 6/7/19 3:35 PM, Jason Dillaman wrote:
> [...]
>>> Can you run "rbd rm --log-to-stderr=true --debug-rbd=20
>>> ${POOLNAME}/${IMAGE}" and provide the logs via pastebin.com?
>>>
>>>> Cheers,
>>>> Sakirnth
On 06/04/2019 07:01 PM, Jianyu Li wrote:
> Hello,
>
> I have a ceph cluster that has been running for over 2 years, and the monitor
> began crashing yesterday. I have had some OSDs flapping up and down
> occasionally, and sometimes I need to rebuild the OSD. I found 3 OSDs down
> yesterday; they may cause this issue o
I certainly would, particularly on your SSDs. I'm not familiar with
those Toshibas but disabling disk cache has improved performance on my
clusters and others on this list.
Does the LSI controller you're using provide read/write cache and do
you have it enabled? 7.2k spinners are likely to see a h
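In case it helps, the drive's volatile write cache can usually be toggled per
device; a sketch (device names are examples, check the drive and controller
documentation first):

$ sdparm --get WCE /dev/sdX           # SAS: show the Write Cache Enable bit
$ sdparm --clear WCE --save /dev/sdX  # SAS: disable the write cache persistently
$ hdparm -W 0 /dev/sdY                # SATA: disable the volatile write cache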
Hello,
I have a problem when I want to validate (using md5 hashes) rbd
export/import diff from a rbd source-pool (the production pool) towards
another rbd destination-pool (the backup pool).
Here is the algorithm:
1- First of all, I validate that the two hashes from the last snapshots
source a
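(Roughly speaking, the kind of check involved is comparing the hashes of full
exports of the same snapshot on both sides, e.g., with placeholder pool, image
and snapshot names:)

$ rbd export ${POOL-SOURCE}/${KVM-IMAGE}@${TODAY-SNAP} - | md5sum
$ rbd export ${POOL-DESTINATION}/${KVM-IMAGE}@${TODAY-SNAP} - | md5sum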
Hello,
I am hoping to expose a REST API to a remote client group who would like to do
things like:
* Create, List, and Delete RBDs and map them to gateway (make a LUN)
* Create snapshots, list, delete, and rollback
* Determine Owner / Active gateway of a given lun
I would run 2-4 n
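(For reference, those operations map onto gwcli, which drives rbd-target-api
under the hood; a rough sketch, names are examples and the exact syntax should
be checked against the ceph-iscsi documentation:)

$ gwcli ls                                                   # dump the whole gateway configuration tree
$ gwcli /disks create pool=rbd image=client-lun-1 size=100G  # create an RBD image and register it as a LUN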
On Tue, Jun 11, 2019 at 9:25 AM Rafael Diaz Maurin
wrote:
>
> Hello,
>
> I have a problem when I want to validate (using md5 hashes) rbd
> export/import diff from a rbd source-pool (the production pool) towards
> another rbd destination-pool (the backup pool).
>
> Here is the algorithm:
> 1- Firs
On Tue, Jun 11, 2019 at 9:29 AM Wesley Dillingham
wrote:
>
> Hello,
>
> I am hoping to expose a REST API to a remote client group who would like to
> do things like:
>
>
> Create, List, and Delete RBDs and map them to gateway (make a LUN)
> Create snapshots, list, delete, and rollback
> Determine
On 6/11/19 3:24 PM, Rafael Diaz Maurin wrote:
> 3- I create a snapshot inside the source pool
> rbd snap create ${POOL-SOURCE}/${KVM-IMAGE}@${TODAY-SNAP}
>
> 4- I export the snapshot from the source pool and I import the snapshot
> towards the destination pool (in the pipe)
> rbd export-diff --
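(For readers following along, the pipe referenced above generally has this
shape; snapshot and pool names are placeholders and this is a generic
reconstruction, not the exact command from the original mail:)

rbd export-diff --from-snap ${YESTERDAY-SNAP} \
    ${POOL-SOURCE}/${KVM-IMAGE}@${TODAY-SNAP} - \
    | rbd import-diff - ${POOL-DESTINATION}/${KVM-IMAGE}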
Thanks Jason for the info! A few questions:
"The current rbd-target-api doesn't really support single path LUNs."
In our testing, using single path LUNs, listing the "owner" of a given LUN and
then connecting directly to that gateway yielded stable and well-performing
results, obviously, there
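(For reference, the owner lookup can be done from gwcli or from the gateway
config object; a sketch, assuming the default setup where ceph-iscsi keeps
gateway.conf in the rbd pool:)

$ gwcli ls
$ rados -p rbd get gateway.conf - | python -m json.tool | grep '"owner"'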
The server-side encryption features all require special x-amz headers on
write, so they only apply to our S3 APIs. But objects encrypted with
SSE-KMS (or a default encryption key) can be read without any x-amz
headers, so swift should be able to decrypt them too. I agree that this
is a bug and
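(To make that concrete: the special headers are only needed on the write
path; for example with the aws CLI against RGW, where the endpoint, bucket
and key id are placeholders:)

$ aws --endpoint-url http://rgw.example.com:8080 s3 cp secret.txt \
      s3://mybucket/secret.txt --sse aws:kms --sse-kms-key-id my-key-id
# the later GET, whether via S3 or Swift, needs no x-amz-server-side-encryption header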
On Tue, Jun 11, 2019 at 4:24 PM Wesley Dillingham
wrote:
> (running 14.2.0 and ceph-iscsi-3.0-57.g4ae)
>
> and configuring the dash as follows:
>
> ceph dashboard set-iscsi-api-ssl-verification false
> ceph dashboard iscsi-gateway-add http://admin:admin@${MY_HOSTNAME}:5000
> systemctl restart
On Tue, Jun 11, 2019 at 10:24 AM Wesley Dillingham
wrote:
>
> Thanks Jason for the info! A few questions:
>
> "The current rbd-target-api doesn't really support single path LUNs."
>
> In our testing, using single path LUNs, listing the "owner" of a given LUN
> and then connecting directly to that
Hi Wido,
Interleaving below
On 6/11/19 3:10 AM, Wido den Hollander wrote:
>
> I thought it was resolved, but it isn't.
>
> I counted all the OMAP values for the GC objects and I got back:
>
> gc.0: 0
> gc.11: 0
> gc.14: 0
> gc.15: 0
> gc.16: 0
> gc.18: 0
> gc.19: 0
> gc.1: 0
> gc.20: 0
> g
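The per-shard counts above can be reproduced with something along these lines
(a sketch; the pool and namespace assume a default setup with 32 GC shards,
adjust as needed):

for i in $(seq 0 31); do
    echo -n "gc.$i: "
    rados -p default.rgw.log -N gc listomapkeys gc.$i | wc -l
done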
Hi all!
I'm thinking about building a learning rig for ceph. This is the parts list:
PCPartPicker Part List: https://pcpartpicker.com/list/s4vHXP
TL;DR: an 8-core 3 GHz Ryzen CPU, 64 GB RAM, 5 x 2 TB HDDs, and one 240 GB SSD
in a tower case.
My plan is to build a KVM-based setup, both for ceph and workload
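(If it helps, the per-node VMs for such a lab can be stamped out with
virt-install; a sketch with made-up names and sizes:)

$ virt-install --name ceph-osd1 --memory 8192 --vcpus 2 \
      --disk size=20 --disk size=100 \
      --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso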
Interesting performance increase! I'm using iSCSI at a few installations and
now I wonder what version of CentOS is required to improve performance. Did
the cluster go from Luminous to Mimic?
Glen
-Original Message-
From: ceph-users On Behalf Of Heðin
Ejdesgaard Møller
Sent: Saturday, 8
Quoting Patrick Donnelly (pdonn...@redhat.com):
> Hi Stefan,
>
> Sorry I couldn't get back to you sooner.
NP.
> Looks like you hit the infinite loop bug in OpTracker. It was fixed in
> 12.2.11: https://tracker.ceph.com/issues/37977
>
> The problem was introduced in 12.2.8.
We've been quite lon