Re: [ceph-users] radosgw leaking objects

2017-03-31 Thread Marius Vaitiekunas
On Fri, Mar 31, 2017 at 11:15 AM, Luis Periquito wrote: > But wasn't that what orphans finish was supposed to do? > > orphans finish only removes search results from a log pool.
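For reference, a minimal sketch of the whole orphan-search cycle on jewel as I understand it (pool and job names are placeholders); finish only cleans up the search state kept in the log pool, it does not delete any leaked objects:
# radosgw-admin orphans find --pool=.rgw.buckets --job-id=orphans1
# radosgw-admin orphans list-jobs
# radosgw-admin orphans finish --job-id=orphans1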

Re: [ceph-users] disk timeouts in libvirt/qemu VMs...

2017-03-28 Thread Marius Vaitiekunas
On Mon, Mar 27, 2017 at 11:17 PM, Peter Maloney <peter.malo...@brockmann-consult.de> wrote:
> I can't guarantee it's the same as my issue, but from that it sounds the same.
> Jewel 10.2.4, 10.2.5 tested
> hypervisors are proxmox qemu-kvm, using librbd
> 3 ceph nodes with mon+osd on each

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-28 Thread Marius Vaitiekunas
On Wed, Mar 1, 2017 at 9:06 AM, Marius Vaitiekunas < mariusvaitieku...@gmail.com> wrote: > > > On Mon, Feb 27, 2017 at 11:40 AM, Marius Vaitiekunas < > mariusvaitieku...@gmail.com> wrote: > >> >> >> On Mon, Feb 27, 2017 at 9:59 AM, Marius Vaitie

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-28 Thread Marius Vaitiekunas
On Mon, Feb 27, 2017 at 11:40 AM, Marius Vaitiekunas < mariusvaitieku...@gmail.com> wrote: > > > On Mon, Feb 27, 2017 at 9:59 AM, Marius Vaitiekunas < > mariusvaitieku...@gmail.com> wrote: > >> >> >> On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinr

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-27 Thread Marius Vaitiekunas
On Mon, Feb 27, 2017 at 9:59 AM, Marius Vaitiekunas < mariusvaitieku...@gmail.com> wrote: > > > On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinraub > wrote: > >> On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas >> wrote: >> > >> >

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-27 Thread Marius Vaitiekunas
On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinraub wrote: > On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas > wrote: > > > > > > On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub < > yeh...@redhat.com> > > wrote: > >> > >

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-24 Thread Marius Vaitiekunas
On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub wrote: > On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas > wrote: > > Hi Cephers, > > > > We are testing an rgw multisite solution between two DCs. We have one > zonegroup > > and two zones. At the moment al

Re: [ceph-users] radosgw-admin bucket check kills SSD disks

2017-02-23 Thread Marius Vaitiekunas
On Wed, Feb 22, 2017 at 4:06 PM, Marius Vaitiekunas <mariusvaitieku...@gmail.com> wrote: > Hi Cephers, > > We are running the latest jewel (10.2.5). Bucket index sharding is set to 8. > All rgw pools except the data pool are placed on SSD. > Today I've done some testing and run a buck

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-22 Thread Marius Vaitiekunas
On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub wrote: > On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas > wrote: > > Hi Cephers, > > > > We are testing an rgw multisite solution between two DCs. We have one > zonegroup > > and two zones. At the moment al

Re: [ceph-users] radosgw-admin bucket check kills SSD disks

2017-02-22 Thread Marius Vaitiekunas
On Wed, Feb 22, 2017 at 4:06 PM, Marius Vaitiekunas <mariusvaitieku...@gmail.com> wrote: > Hi Cephers, > > We are running the latest jewel (10.2.5). Bucket index sharding is set to 8. > All rgw pools except the data pool are placed on SSD. > Today I've done some testing and run a buck

[ceph-users] rgw multisite resync only one bucket

2017-02-22 Thread Marius Vaitiekunas
Hi Cephers, We are testing an rgw multisite solution between two DCs. We have one zonegroup and two zones. At the moment all writes/deletes are done only to the primary zone. Sometimes not all objects are replicated. We've written a prometheus exporter to check replication status. It gives us each bucke
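For anyone hitting the same issue, the replication state can also be checked with the built-in tooling, assuming jewel multisite (realm and bucket names below are placeholders):
# radosgw-admin sync status --rgw-realm=myrealm
# radosgw-admin bucket sync status --bucket=mybucket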

[ceph-users] radosgw-admin bucket check kills SSD disks

2017-02-22 Thread Marius Vaitiekunas
Hi Cephers, We are running the latest jewel (10.2.5). Bucket index sharding is set to 8. All rgw pools except the data pool are placed on SSD. Today I've done some testing and run a bucket index check on a bucket with ~120k objects:
# radosgw-admin bucket check -b mybucket --fix --check-objects --rgw-realm=myrealm
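Before a heavy --fix --check-objects run, the per-shard index size can be gauged cheaply; a sketch, assuming the usual .dir.<bucket-id>.<shard> naming of index objects (the bucket id comes from the metadata output):
# radosgw-admin metadata get bucket:mybucket --rgw-realm=myrealm
# rados -p .rgw.buckets.index listomapkeys .dir.<bucket-id>.0 | wc -l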

Re: [ceph-users] Ceph Monitoring

2017-01-16 Thread Marius Vaitiekunas
On Mon, Jan 16, 2017 at 3:54 PM, Andre Forigato wrote: > Hello Marius Vaitiekunas, Chris Jones, > > Thank you for your contributions. > I was looking for this information. > > I'm starting to use Ceph, and my concern is about monitoring. > > Do you have any scripts

Re: [ceph-users] Ceph Monitoring

2017-01-15 Thread Marius Vaitiekunas
On Fri, 13 Jan 2017 at 22:15, Chris Jones wrote: > General question/survey: > > Those that have larger clusters, how are you doing alerting/monitoring? > Meaning, do you trigger off of 'HEALTH_WARN', etc? Not really talking about > collectd-related metrics, but more about initial alerts of an issue or potent
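We mostly key off the overall health status. A minimal sketch of that kind of check on jewel (assumes jq is available; the pre-luminous JSON exposes health.overall_status):
# ceph health detail
# ceph -s --format json | jq -r '.health.overall_status'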

[ceph-users] rgw swift api long term support

2017-01-10 Thread Marius Vaitiekunas
Hi, I would like to ask the ceph developers if there is any chance that swift api support for rgw is going to be dropped in the future (like in 5 years). Why am I asking? :) We were happy openstack glance users on the ceph s3 api until openstack decided to drop glance s3 support. So, we need to switch our

Re: [ceph-users] rgw leaking data, orphan search loop

2016-12-26 Thread Marius Vaitiekunas
> > > Hi Marius, > > > On Thu, Dec 22, 2016 at 12:00 PM, Marius Vaitiekunas > > > wrote: > > > > On Thu, Dec 22, 2016 at 11:58 AM, Marius Vaitiekunas > > > > wrote: > > > >> > > > >> Hi,

Re: [ceph-users] rgw leaking data, orphan search loop

2016-12-22 Thread Marius Vaitiekunas
On Thu, Dec 22, 2016 at 11:58 AM, Marius Vaitiekunas <mariusvaitieku...@gmail.com> wrote: > Hi, > > 1) I've written to the mailing list before, but one more time: we have big > issues recently with rgw on jewel because of leaked data - the rate is > about 50GB/hour.

[ceph-users] rgw leaking data, orphan search loop

2016-12-22 Thread Marius Vaitiekunas
Hi, 1) I've written to the mailing list before, but one more time: we have big issues recently with rgw on jewel because of leaked data - the rate is about 50GB/hour. We've hit these bugs: rgw: fix put_acls for objects starting and ending with underscore ( issue#17625

Re: [ceph-users] How radosgw works with .rgw pools?

2016-12-20 Thread Marius Vaitiekunas
On Tue, Dec 20, 2016 at 3:18 PM, Marius Vaitiekunas <mariusvaitieku...@gmail.com> wrote: > Hi Cephers, > > Could anybody explain how rgw works with pools? I don't understand > how the .rgw.control, .rgw.gc, and .rgw.buckets.index pools could be 0 size, but > also have s

[ceph-users] How radosgw works with .rgw pools?

2016-12-20 Thread Marius Vaitiekunas
Hi Cephers, Could anybody explain how rgw works with pools? I don't understand how the .rgw.control, .rgw.gc, and .rgw.buckets.index pools could be 0 size, but also have some objects?
# ceph df detail
GLOBAL:
    SIZE    AVAIL    RAW USED    %RAW USED    OBJECTS
    507T    190T    316T
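As far as I understand it, these pools keep their payload in omap key/value data rather than in object bodies, and ceph df does not count omap towards pool usage; that can be seen by listing the keys directly (the object name is a placeholder taken from the ls output):
# rados -p .rgw.buckets.index ls | head
# rados -p .rgw.buckets.index listomapkeys <index-object> | wc -l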

Re: [ceph-users] Loop in radosgw-admin orphan find

2016-12-14 Thread Marius Vaitiekunas
Hello, We have the same loop in our jobs on 2 clusters. The only difference is that our clusters don't use erasure coding. The same cluster version - 10.2.2. Any ideas what could be wrong? Maybe we need to upgrade? :) BR, On Thu, Oct 13, 2016 at 6:15 PM, Yoann Moulin wrote: > Hello, > > I run

[ceph-users] radosgw leaked orphan objects

2016-12-02 Thread Marius Vaitiekunas
Hi Cephers, I would like to ask more about this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1254398 On our backup cluster we've run a search for leaked objects: # radosgw-admin orphans find --pool=.rgw.buckets --job-id=bck1 The result is 131288 objects. Before running radosgw-admin orphans finish, I wou
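Before finishing the job, individual results can be sanity-checked by hand; a sketch, with the object names standing in for entries reported by the find run:
# radosgw-admin orphans list-jobs
# rados -p .rgw.buckets stat <reported-object>
# rados -p .rgw.buckets rm <reported-object>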

Re: [ceph-users] RGW quota

2016-03-19 Thread Marius Vaitiekunas
On Wednesday, 16 March 2016, Derek Yarnell wrote: > Hi, > > We have a user with a 50GB quota and has now a single bucket with 20GB > of files. They had previous buckets created and removed but the quota > has not decreased. I understand that we do garbage collection but it > has been significan
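Two things worth checking in a case like this on jewel: whether the deleted objects are still queued for garbage collection, and whether the user's cached quota stats are stale (the uid is a placeholder):
# radosgw-admin gc list --include-all
# radosgw-admin gc process
# radosgw-admin user stats --uid=myuser --sync-stats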

[ceph-users] Delete a bucket with 14 millions objects

2016-01-28 Thread Marius Vaitiekunas
Hi, Could anybody give a hint on how to delete a bucket with lots of files (about 14 million)? I've unsuccessfully tried:
# radosgw-admin bucket rm --bucket=big-bucket --purge-objects --yes-i-really-mean-it
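One fallback for a bucket this size is to purge the objects through the S3 API first and remove the empty bucket afterwards; a sketch assuming s3cmd is configured against the rgw endpoint:
# s3cmd del --recursive --force s3://big-bucket
# radosgw-admin bucket rm --bucket=big-bucket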

Re: [ceph-users] Ceph + Libvirt + QEMU-KVM

2016-01-27 Thread Marius Vaitiekunas
Hi, With ceph rbd you should use the raw image format. As far as I know, qcow2 is not supported. On Thu, Jan 28, 2016 at 6:21 AM, Bill WONG wrote: > Hi Simon, > > I have installed the ceph package on the compute node, but it looks like the qcow2 > format cannot be created.. it shows an error: Could not write qcow
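An existing qcow2 image can be written straight into an rbd image with standard qemu-img syntax (pool and image names are placeholders):
# qemu-img convert -f qcow2 -O raw image.qcow2 rbd:rbd/image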

Re: [ceph-users] raid0 and ceph?

2015-11-12 Thread Marius Vaitiekunas
At the moment we have servers which don't support HBA mode, so we cannot easily rebuild on the same hardware. On Wed, Nov 11, 2015 at 4:12 PM, John Spray wrote: > On Wed, Nov 11, 2015 at 9:54 AM, Marius Vaitiekunas > wrote: > > Hi, > > > > We use firefly 0.80.9.

[ceph-users] raid0 and ceph?

2015-11-11 Thread Marius Vaitiekunas
Hi, We use firefly 0.80.9. We have some ceph nodes in our cluster configured to use raid0. The node configuration looks like this:
2xHDD - RAID1 - /dev/sda - OS
1xSSD - RAID0 - /dev/sdb - ceph journaling disk, usually one for four data disks
1xHDD - RAID0 - /dev/sdc - ceph data disk
1xHDD
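With a layout like this, each data disk would typically be prepared with its journal on the shared SSD; a firefly-era ceph-disk sketch using the device names above:
# ceph-disk prepare /dev/sdc /dev/sdb
# ceph-disk activate /dev/sdc1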