On Tue, Aug 2, 2016 at 1:05 AM, Alex Gorbachev wrote:
> Hi Ilya,
>
> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
>> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
>> wrote:
>>> RBD illustration showing RBD ignoring discard until a certain
>>> threshold - why is that? This behavior is u
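(Not part of the original message: a rough way to reproduce this, as a Python sketch run against a throwaway image; the device path, image name and sizes are placeholders.)

#!/usr/bin/env python3
# Sketch: issue discards of increasing size against a mapped RBD device and
# see when space is actually reclaimed. Use a scratch image - this wipes data.
import subprocess

DEV = "/dev/rbd0"         # mapped RBD device (placeholder)
IMAGE = "rbd/test-image"  # pool/image backing DEV (placeholder)

def usage_summary():
    # 'rbd du' reports provisioned vs. actually used space for the image;
    # return its summary (last) line.
    out = subprocess.check_output(["rbd", "du", IMAGE]).decode()
    return out.strip().splitlines()[-1]

for length in (4096, 65536, 1 << 22, 1 << 24):  # 4 KiB .. 16 MiB
    subprocess.check_call(["blkdiscard", "-o", "0", "-l", str(length), DEV])
    print("discarded %d bytes at offset 0 -> %s" % (length, usage_summary()))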
Hello Guys,
this time without the original acting-set osd.4, 16 and 28. The issue
still exists...
[...]
For the record, this ONLY happens with this PG and no others that share
the same OSDs, right?
Yes, right.
[...]
When doing the deep-scrub, monitor (atop, etc.) all 3 nodes and see if a p
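(A rough sketch of that check, not from the original mail; the PG id is a placeholder, and the atop/iostat watching on the OSD nodes still happens by hand.)

#!/usr/bin/env python3
# Sketch: kick off a deep-scrub on the problem PG and poll its state while
# the operator watches atop/iostat on the three OSD nodes.
import json, subprocess, time

PGID = "0.2a"  # placeholder for the affected PG

subprocess.check_call(["ceph", "pg", "deep-scrub", PGID])
time.sleep(5)  # give the primary OSD a moment to start scrubbing

while True:
    out = subprocess.check_output(["ceph", "pg", PGID, "query",
                                   "--format", "json"]).decode()
    state = json.loads(out).get("state", "unknown")
    print("pg %s state: %s" % (PGID, state))
    if "scrubbing" not in state:
        break
    time.sleep(10)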
On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
>> Hi Ilya,
>>
>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
>>> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
>>> wrote:
RBD illustration showing RBD ignoring discard unt
I am actively working through the code and debugging everything. I figure
the issue is with how RGW is listing the parts of a multipart upload when
it completes or aborts the upload (read: it's not getting *all* the parts,
just those that are either most recent or tagged with the upload id). As
s
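(Not from the original message: one way to see what RGW actually returns for an upload is to page through list_parts yourself; the sketch below uses boto3, and the endpoint, credentials, bucket, key and upload id are placeholders.)

#!/usr/bin/env python3
# Sketch: page through all parts of a multipart upload via the S3 API to
# check that every part comes back, not just the first batch.
import boto3

s3 = boto3.client("s3",
                  endpoint_url="http://rgw.example.com:7480",
                  aws_access_key_id="ACCESS_KEY",
                  aws_secret_access_key="SECRET_KEY")

def all_parts(bucket, key, upload_id):
    # Follow the pagination markers so no part is missed.
    parts, marker = [], 0
    while True:
        resp = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id,
                             PartNumberMarker=marker)
        parts.extend(resp.get("Parts", []))
        if not resp.get("IsTruncated"):
            return parts
        marker = resp["NextPartNumberMarker"]

for part in all_parts("mybucket", "mykey", "UPLOAD_ID"):
    print(part["PartNumber"], part["Size"], part["ETag"])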
We're having the same issues. I have a 1200 TB pool at 90% utilization;
however, disk utilization is only 40%.
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
tyler.bis...@beyondhosting.net
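(Not part of Tyler's message: one way to see where the two numbers diverge is to compare the cluster-wide and per-pool figures from 'ceph df'; the JSON field names in this sketch are assumptions and may differ between Ceph releases.)

#!/usr/bin/env python3
# Sketch: compare raw cluster usage with per-pool usage from 'ceph df'.
# Field names are assumptions; per-pool max_avail is bounded by the fullest
# OSD, which is often why a pool looks far fuller than the raw disks.
import json, subprocess

df = json.loads(subprocess.check_output(
    ["ceph", "df", "--format", "json"]).decode())

stats = df["stats"]
print("raw used: %.1f%%" % (100.0 * stats["total_used_bytes"]
                            / stats["total_bytes"]))

for pool in df["pools"]:
    s = pool["stats"]
    denom = s["bytes_used"] + s["max_avail"]
    pct = 100.0 * s["bytes_used"] / denom if denom else 0.0
    print("pool %-20s %5.1f%% used (bytes_used vs. max_avail)"
          % (pool["name"], pct))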
On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev wrote:
> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
>> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
>>> Hi Ilya,
>>>
>>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
>>>
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote:
> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev
> wrote:
>> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
>>> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
Hi Ilya,
On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov w
Hey cephers,
Just a reminder that our Ceph Developer Monthly discussion is
happening tomorrow at 12:30 PM EDT on BlueJeans. If you are currently
working on something in the Ceph code base, please drop a quick note
on the CDM page so that we're able to get it on the agenda.
Thanks!
http://wiki
Hello Jason/Kees,
I am trying to take a snapshot of my instance.
The image was stuck in the Queued state and the instance is stuck in the
Image Pending Upload state.
I had to manually cancel the job, as it had not completed after an hour;
my instance is still in the Image Pending Upload state.
Is it something
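(Not an answer, just a sketch for watching the two states from the command line while waiting; the image and server IDs are placeholders and it assumes a configured 'openstack' client.)

#!/usr/bin/env python3
# Sketch: poll the Glance image status and the instance task state while a
# snapshot is in flight. IDs are placeholders.
import subprocess, time

IMAGE_ID = "IMAGE-UUID"    # placeholder
SERVER_ID = "SERVER-UUID"  # placeholder

def show(kind, obj_id, column):
    return subprocess.check_output(
        ["openstack", kind, "show", obj_id, "-f", "value", "-c", column]
    ).decode().strip()

for _ in range(60):
    image_status = show("image", IMAGE_ID, "status")
    task_state = show("server", SERVER_ID, "OS-EXT-STS:task_state")
    print("image: %s | server task_state: %s" % (image_status, task_state))
    if image_status == "active" and task_state in ("", "None"):
        break
    time.sleep(30)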
Dear Ceph Team,
I need your guidance on this.
Regards
Gaurav Goyal
On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal
wrote:
> Dear Team,
>
> I have ceph storage installed on SAN storage which is connected to
> Openstack Hosts via iSCSI LUNs.
> Now we want to get rid of SAN storage and move over c
Hi David,
Thanks for your comments!
Could you please share the procedure/document, if available?
Regards
Gaurav Goyal
On Tue, Aug 2, 2016 at 11:24 AM, David Turner wrote:
> Just add the new storage and weight the old storage to 0.0 so all data
> will move off of the old storage to the new
Just add the new storage and weight the old storage to 0.0 so all data will
move off of the old storage to the new storage. It's not unique to migrating
from SANs to local disks. You would do the same any time you wanted to migrate
to newer servers and retire old servers. After the backfillin
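(A sketch of the reweighting step described above; the OSD ids are placeholders, and backfill still needs to be watched with 'ceph -s' / 'ceph -w' until it finishes.)

#!/usr/bin/env python3
# Sketch of the migration step: crush-reweight the old SAN-backed OSDs to 0.0
# so data backfills onto the new local-disk OSDs. OSD ids are placeholders;
# reweight a few at a time if client I/O matters.
import subprocess

OLD_OSDS = [0, 1, 2, 3]  # placeholder ids of the OSDs being retired

for osd in OLD_OSDS:
    subprocess.check_call(["ceph", "osd", "crush", "reweight",
                           "osd.%d" % osd, "0.0"])
    print("osd.%d CRUSH weight set to 0.0" % osd)

# Watch 'ceph -s' / 'ceph -w' until backfill finishes, then stop and remove
# the old OSDs following the add-or-rm-osds documentation.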
I'm going to assume you know how to add and remove storage
http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/. The only
other part of this process is reweighting the CRUSH map for the old OSDs to a
new weight of 0.0 http://docs.ceph.com/docs/master/rados/operations/crush-map/.
I
On 2016-08-02 13:30, c wrote:
Hello Guys,
this time without the original acting-set osd.4, 16 and 28. The issue
still exists...
[...]
For the record, this ONLY happens with this PG and no others that share
the same OSDs, right?
Yes, right.
[...]
When doing the deep-scrub, monitor (atop,
Hello David,
Thanks a lot for the detailed information!
This is going to help me.
Regards
Gaurav Goyal
On Tue, Aug 2, 2016 at 11:46 AM, David Turner wrote:
> I'm going to assume you know how to add and remove storage
> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/. The
> onl
Hi David,
There’s a good amount of backstory to our configuration, but I’m happy to
report I found the source of my problem.
We were applying some “optimizations” for our 10GbE via sysctl, including
disabling net.ipv4.tcp_sack. Re-enabling net.ipv4.tcp_sack resolved the issue.
Thanks,
Tom
Fro
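(For anyone hitting something similar, a trivial check of that sysctl; just a sketch, and writing the value back needs root.)

#!/usr/bin/env python3
# Sketch: verify (and, if run as root, re-enable) net.ipv4.tcp_sack, which
# turned out to be the culprit in Tom's case.
SYSCTL = "/proc/sys/net/ipv4/tcp_sack"

with open(SYSCTL) as f:
    value = f.read().strip()
print("net.ipv4.tcp_sack = %s" % value)

if value != "1":
    with open(SYSCTL, "w") as f:  # writing requires root
        f.write("1\n")
    print("re-enabled TCP selective acknowledgements")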
Hi,
I have seen an error when using Ceph RGW v10.2.2 with the S3 API; it's as
follows:
I have three S3 users: A, B, and C. All of them have some buckets and
objects. When I use A or C to PUT or GET an object to RGW, I see
"decode_policy Read AccessControlPolicy2BFULL_CONTROL"
in ceph-cli
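(Not from the original message: to narrow something like this down it can help to dump the ACLs RGW actually stores, as seen by each user; a boto3 sketch, with the endpoint, credentials, bucket and key as placeholders.)

#!/usr/bin/env python3
# Sketch: fetch the bucket and object ACLs through the S3 API to compare the
# grants RGW stores for users A, B and C.
import boto3

def make_client(access_key, secret_key):
    # One client per RGW user; endpoint and keys are placeholders.
    return boto3.client("s3",
                        endpoint_url="http://rgw.example.com:7480",
                        aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key)

s3_a = make_client("A_ACCESS_KEY", "A_SECRET_KEY")

print(s3_a.get_bucket_acl(Bucket="mybucket")["Grants"])
print(s3_a.get_object_acl(Bucket="mybucket", Key="mykey")["Grants"])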
On 08/02/2016 07:26 PM, Ilya Dryomov wrote:
> This seems to reflect the granularity (4194304), which matches the
> 8192 pages (8192 x 512 = 4194304). However, there is no alignment
> value.
>
> Can discard_alignment be specified with RBD?
It's exported as a read-only sysfs attribute, just like
disca
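(A sketch of reading those attributes for a mapped image; the device name is a placeholder.)

#!/usr/bin/env python3
# Sketch: print the discard-related attributes the kernel exports for a
# mapped RBD device. 'rbd0' is a placeholder.
DEV = "rbd0"

for attr in ("discard_alignment",           # read-only, device level
             "queue/discard_granularity",   # 4194304 for 4 MiB-object images
             "queue/discard_max_bytes"):
    path = "/sys/block/%s/%s" % (DEV, attr)
    with open(path) as f:
        print("%s = %s" % (path, f.read().strip()))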
Hello,
not a Ceph-specific issue, but this is probably the largest sample size of
SSD users I'm familiar with. ^o^
This morning I was woken at 4:30 by Nagios: one of our Ceph nodes was having
a religious experience.
It turns out that the SMART check plugin I run mostly to get an early
wearout warni
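(Roughly the kind of check involved, as a sketch; the device path is a placeholder and the attribute names differ between SSD vendors.)

#!/usr/bin/env python3
# Sketch: pull the wear-related SMART attributes for an SSD, the sort of
# values a Nagios wearout check would alert on.
import subprocess

DEV = "/dev/sda"  # placeholder
WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count",
              "Percent_Lifetime_Remain", "Reallocated_Sector_Ct")

out = subprocess.check_output(["smartctl", "-A", DEV]).decode()
for line in out.splitlines():
    if any(attr in line for attr in WEAR_ATTRS):
        print(line.strip())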