On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev wrote:
> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov wrote:
>> On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev wrote:
>>> Hi Ilya,
>>>
>>> On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov wrote:
On Tue, Apr 11, 2017 at 3:10 PM, Alex
Dear David,
On Wednesday, 12.04.2017, 13:46 + David Turner wrote:
> I can almost guarantee what you're seeing is PG subfolder splitting.
Every day there's something new to learn about ceph ;)
> When the subfolders in a PG get X number of objects, it splits into
> 16 subfolders. Every c
On 04/13/17 10:34, Jogi Hofmüller wrote:
> Dear David,
>
> On Wednesday, 12.04.2017, 13:46 + David Turner wrote:
>> I can almost guarantee what you're seeing is PG subfolder splitting.
> Every day there's something new to learn about ceph ;)
>
>> When the subfolders in a PG get X number of
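For anyone hitting this for the first time: a minimal sketch of the filestore split/merge knobs that usually come up in these threads. The option names are the standard filestore ones; the values are only illustrative assumptions, not a recommendation for your cluster.

  [osd]
  # example values only -- tune for your own object counts
  # a PG subfolder splits once it holds roughly
  # filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects
  filestore merge threshold = 40
  filestore split multiple = 8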
Hello Greg,
Thank you for the answer.
I'm still in doubt about "lossy". What does it mean in this context? I can
think of different variants:
1. The designer of the protocol considered the connection to be "lossy" from the
start, so connection errors are handled in a higher layer. So the
Hi,
I am currently working on Ceph with an underlying Clos IP fabric and I
am hitting some issues.
The setup looks as follows: There are 3 Ceph nodes which are running
OSDs and MONs. Each server has one /32 loopback ip, which it announces
via BGP to its uplink switches. Besides the loopback ip ea
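In case it helps frame the question: a rough sketch of how the daemons could be pinned to the loopback /32s in ceph.conf. The addresses and the per-host public addr here are assumptions for illustration, not a description of the actual setup.

  [global]
  # made-up loopback addresses announced via BGP
  mon host = 10.10.10.1, 10.10.10.2, 10.10.10.3
  public network = 10.10.10.0/24

  # on node 1, for example
  [osd]
  public addr = 10.10.10.1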
Hi All,
Is anybody facing a similar issue?
Regards
Prabu GJ
On Sat, 04 Mar 2017 09:50:35 +0530 gjprabu wrote:
Hi Team,
I am installing a new ceph setup (jewel), and while activating the OSD
it's throwing the below error.
Hi Tom,
Yes, it's mounted. I am using CentOS 7 and kernel version
3.10.0-229.el7.x86_64.
/dev/xvda3 xfs 138G 33M 138G 1% /home
Regards
Prabu GJ
On Thu, 13 Apr 2017 17:20:34 +0530 Tom Verhaeg wrote:
Hi,
On Thu, Apr 13, 2017 at 2:17 AM Laszlo Budai wrote:
> Hello Greg,
>
> Thank you for the answer.
> I'm still in doubt about "lossy". What does it mean in this context? I
> can think of different variants:
> 1. The designer of the protocol considered the connection
> to be "lossy"
When our cluster hits a failure (e.g. a node going down or an OSD dying), our VMs
pause all IO for about 10-20 seconds. I'm curious if there is a way to fix or
mitigate this?
Here is my ceph.conf:
[global]
fsid = fb991e48-c425-4f82-a70e-5ce748ae186b
mon_initial_members = mon01, mon02, mon03
mon_ho
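The 10-20 second pause lines up with the default OSD failure detection window (osd_heartbeat_grace defaults to 20 s), so these are the settings usually tuned for faster failover. The values below are illustrative assumptions only; setting them too aggressively can cause OSDs to be flapped down on false positives.

  [global]
  # example values only
  # how long missing heartbeats are tolerated before a peer reports the OSD down
  osd heartbeat grace = 10
  # how often OSDs heartbeat their peers (default 6)
  osd heartbeat interval = 3
  # how many distinct reporters the monitors require before marking an OSD down
  mon osd min down reporters = 2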
Hi,
On 13/04/2017 at 10:51, Peter Maloney wrote:
> [...]
> Also more things to consider...
>
> Ceph snapshots really slow things down.
We use rbd snapshots on Firefly (and Hammer now) and I didn't see any
measurable impact on performance... until we tried to remove them. We
usually have at l
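For what it's worth, the knob that usually comes up for snapshot removal load is osd_snap_trim_sleep. Whether it can be injected at runtime and how well it behaves differs between releases, so treat this as an assumption to verify on your version; the value is only an example.

  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'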
I wouldn't set the default for osd_heartbeat_grace to 5 minutes, but inject
it when you see this happening. It's good to know what your cluster is
up to. The fact that you aren't seeing the blocked requests any more tells
me that this was your issue. It will go through, split everything, go a
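For completeness, runtime injection (rather than changing the default in ceph.conf) would look roughly like this; 300 is just the "5 minutes" mentioned above, and 20 is the stock default to return to once the splitting has finished.

  ceph tell osd.* injectargs '--osd_heartbeat_grace 300'
  # and afterwards:
  ceph tell osd.* injectargs '--osd_heartbeat_grace 20'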
Hi,
On 04/13/2017 04:53 PM, Lionel Bouton wrote:
We use rbd snapshots on Firefly (and Hammer now) and I didn't see any
measurable impact on performance... until we tried to remove them.
What exactly do you mean by that?
MJ
On 13/04/2017 at 17:47, mj wrote:
> Hi,
>
> On 04/13/2017 04:53 PM, Lionel Bouton wrote:
>> We use rbd snapshots on Firefly (and Hammer now) and I didn't see any
>> measurable impact on performance... until we tried to remove them.
>
> What exactly do you mean by that?
Just what I said: havin
Hello Greg,
Thank you for the clarification. One last thing: can you point me to some
documents that describe these? I would like to better understand what's going
on behind the curtains ...
Kind regards,
Laszlo
On 13.04.2017 16:22, Gregory Farnum wrote:
On Thu, Apr 13, 2017 at 2:17 AM Las
Hello Frédéric,
Thank you very much for the input. I would like to ask for some feedback
from you, as well as the ceph-users list at large.
The PGCalc tool was created to help steer new Ceph users in the right
direction, but it's certainly difficult to account for every possible
scenario. I'm
Hi Richard, thank you for the answer.
It seems I can't go back.
I removed all Luminous packages from all 3 nodes and installed the Jewel ones; on
the monitor node the mon won't start due to an on-disk feature incompatibility:
root@ceph-node01:~# /usr/bin/ceph-mon -f --cluster ceph --id ceph-node01
--setuser ceph --se
I think what fits the need of Frédéric while not impacting the complexity
of the tool for new users would be a list of known "gotchas" in PG counts.
For example, not having a Base2 count of PGs will cause PGs to be variably
sized (for each PG past the last Base2, you have 2 PGs that are half the
size o
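A quick worked example of that gotcha: with pg_num = 12 the last power of two is 8, so 4 of the original 8 PGs have split, leaving 8 half-sized PGs and 4 full-sized ones; the data distribution stays uneven until pg_num reaches 16.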
On Thu, Apr 13, 2017 at 9:27 AM, Laszlo Budai wrote:
> Hello Greg,
>
> Thank you for the clarification. One last thing: can you point me to some
> documents that describe these? I would like to better understand what's
> going on behind the curtains ...
Unfortunately I don't think anything like
Hey Trey.
Sounds great, we were discussing the same kind of requirements and couldn't
agree on/find something "useful"... so THANK YOU for sharing!!!
It would be great if you could provide some more details or an example of how you
configure the "bucket user" and sub-users and all that stuff.
Even
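Not Trey's exact setup, but a rough sketch of the radosgw-admin side of a "bucket user plus per-person subusers" scheme; the uid and subuser names below are made up for illustration.

  # names are placeholders, not from Trey's mail
  radosgw-admin user create --uid=media-bucket --display-name="Owner of the media bucket"
  radosgw-admin subuser create --uid=media-bucket --subuser=media-bucket:alice --access=full
  radosgw-admin key create --subuser=media-bucket:alice --key-type=s3 --gen-access-key --gen-secret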
Dear ceph-*,
A couple weeks ago I wrote this simple tool to measure the round-trip
latency of a shared filesystem.
https://github.com/dvanders/fsping
In our case, the tool is to be run from two clients who mount the same
CephFS.
First, start the server (a.k.a. the ping reflector) on one mach
On 04/12/2017 09:26 AM, Gerald Spencer wrote:
Ah I'm running Jewel. Is there any information online about python3-rados
with Kraken? I'm having difficulties finding more than I initially posted.
What info are you looking for?
The interface for the python bindings is the same for python 2 and 3
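For reference, a minimal python3-rados sketch (pool and object names are placeholders); the same code should run unchanged under python 2 as well.

  import rados

  # connect using the local ceph.conf and the default keyring
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx('rbd')  # any existing pool
      ioctx.write_full('hello-object', b'hello from python3')
      print(ioctx.read('hello-object'))
      ioctx.close()
  finally:
      cluster.shutdown()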
I initiated a manual lifecycle cleanup with:
radosgw-admin lc process
It took over a day working on my bucket called 'bucket1' (with 2 million
objects) and it seems like it eventually got stuck with about 1.7 million objects
left, with uninformative errors like the following (notice the timestamps):
2017-04-12 18:5
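If it helps to see where processing stands, the per-bucket lifecycle state can be listed; the exact output format varies by release.

  radosgw-admin lc list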
Anton,
It turns out that Adam Emerson is trying to get bucket policies and roles
merged in time for Luminous:
https://github.com/ceph/ceph/pull/14307
Given this, I think we will only be using subusers temporarily as a method
to track which human or service did what in which bucket. This seems t
Thanks a lot, Trey.
I'll try that stuff next week, once back from Easter holidays.
Some "multi site" and "metasearch" testing is also still on my to-be-tested list.
I badly need to free up some time for all the interesting "future of storage"
things.
BTW, we are on Kraken and I'd hope to see more of
Hi Dan,
I don't have a solution to the problem, I can only second that we've
also been seeing strange problems when more than one node accesses the
same file in ceph and at least one of them opens it for writing. I've
tried verbose logging on the client (fuse), and it seems that the fuse
cli
Oops... thanks for your efforts, Ben!
This could explain some bits. Still, I have lots of questions, as it seems different S3 tools/clients behave differently. We need to stick with CyberDuck on Windows and s3cmd and boto on Linux, and many things are not the same with RadosGW :|
And more on my
OK, I tried strace to check why vi slows down or pauses. It seems to stall on fsync(3).
I didn't see the issue with the nano editor.
--
Deepak
From: Deepak Naidu
Sent: Wednesday, April 12, 2017 2:18 PM
To: 'ceph-users'
Subject: saving file on cephFS mount using vi takes pause/time
Folks,
This is a bit wei
Is it related to the recovery behaviour of vim creating a swap file,
which I think nano does not do?
http://vimdoc.sourceforge.net/htmldoc/recover.html
A sync into cephfs, I think, needs the write to get confirmed all the way
down by the OSDs performing the write before it returns the confir
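If the swap file plus fsync theory holds, it can be tested from the vim side with plain vim options (nothing cephfs-specific); for example, from within vim:

  " keep swap files off the cephfs mount, or skip the fsync/swap for the test
  :set directory=/tmp//
  :set nofsync
  :set noswapfile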
Based on past LTS release dates I would predict Luminous much sooner than
that, possibly even in May... http://docs.ceph.com/docs/master/releases/
The docs also say "Spring" http://docs.ceph.com/docs/master/release-notes/
-Ben
On Thu, Apr 13, 2017 at 12:11 PM, wrote:
> Thanks a lot, Trey.
>
> I
Yes, vi creates a swap file and nano doesn't. But when I try fio to write, I
don’t see this happening.
--
Deepak
From: Chris Sarginson [mailto:csarg...@gmail.com]
Sent: Thursday, April 13, 2017 2:26 PM
To: Deepak Naidu; ceph-users
Subject: Re: [ceph-users] saving file on cephFS mount using vi ta
We've written our own python3 bindings for rados, but would rather use a
community-supported version. I'm looking to import rados in python3...
On Thu, Apr 13, 2017 at 11:01 AM, Josh Durgin wrote:
> On 04/12/2017 09:26 AM, Gerald Spencer wrote:
>
>> Ah I'm running Jewel. Is there any information on
Hi Tom,
Is there any solution for this issue?
Regards
Prabu GJ
On Thu, 13 Apr 2017 18:31:36 +0530 gjprabu wrote:
Hi Tom,
Yes, it's mounted. I am using CentOS 7 and kernel version
3.10.0-229.el7.x86_64.
/dev/xvda3