Hello,
This is going to have some level of ranting; bear with me, as the points
are all valid and poignant.
Backstory:
I currently run 3 Ceph clusters, all on Debian Jessie but with SysV init,
as they all predate any systemd-supporting Ceph packages.
- A crappy test one running Hammer, manually
Hi all,
I was asked whether Ceph supports the Storage Management Initiative
Specification (SMI-S). This is in the context of monitoring our Ceph
clusters/environments.
I've tried looking and found no references to it being supported. But does it?
thanks,
Hi Guys
I am testing the performance of Jewel (10.2.2) with fio, but found that
performance drops dramatically when two processes write to the same image.
My environment:
1. Server:
One mon and four OSDs running on the same server.
Intel P3700 400GB SSD which has 4 partitions, and each
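(For reference, the kind of fio invocation that reproduces the two-writer
case through librbd; the pool and image names and the numbers here are
placeholders, not taken from the original setup:)

  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
      --rw=randwrite --bs=4k --iodepth=16 --numjobs=2 --name=twowriters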
Hi,
I think this is because of the exclusive-lock feature, which is enabled by
default on rbd images since Jewel.
----- Original Message -----
From: "Zhiyuan Wang"
To: "ceph-users"
Sent: Thursday, 4 August 2016 11:37:04
Subject: [ceph-users] Bad performance when two fio write to the same image
Hi Guys
I am testing
I will read your cache thread, thanks.
Now we have the following setup in mind:
X10SRH-CLN4F
E5-2620v4
32GB/64GB RAM
6x 2TB (start with 4 drives)
1x S3710 200GB for Journaling
In the future we would add 2 SSDs for caching, or is it an option to use a P3700
400GB (or two) for both journaling and caching?
Kin
With exclusive-lock, only a single client can have write access to the
image at a time. Therefore, if you are using multiple fio processes
against the same image, they will be passing the lock back and forth
between each other and you can expect bad performance.
If you have a use-case where you re
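(If a workload genuinely needs concurrent writers to one image, one option
is to turn the feature off for that image. A sketch, with a placeholder
pool/image name; dependent features such as fast-diff, object-map and
journaling must be disabled first if they are enabled:)

  rbd feature disable rbd/testimg fast-diff
  rbd feature disable rbd/testimg object-map
  rbd feature disable rbd/testimg exclusive-lock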
If the client is no longer running, the watch should expire within 30
seconds. If you are still experiencing this issue, you can blacklist
the mystery client via "ceph osd blacklist add".
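(The usual sequence looks roughly like this; the pool/image name and the
client address are placeholders:)

  rbd status rbd/myimage        # lists the current watchers and their addresses
  ceph osd blacklist add 192.168.0.10:0/3234567890
  rbd rm rbd/myimage
  ceph osd blacklist rm 192.168.0.10:0/3234567890   # optional cleanup afterwards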
On Wed, Aug 3, 2016 at 6:06 PM, K.C. Wong wrote:
> I'm having a hard time removing an RBD that I no longer nee
If you are attempting to use RBD "fancy" striping (e.g. stripe unit !=
object size and stripe count != 1) with krbd, the answer is that it is
still unsupported.
On Wed, Aug 3, 2016 at 8:41 AM, w...@globe.de wrote:
> Hi List,
> i am using Ceph Infernalis and Ubuntu 14.04 Kernel 3.13.
> 18 Data Ser
Hi All,
With ceph jewel,
I'm pretty stuck with
ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
Because when I specify a journal path like this:
ceph-deploy osd prepare ceph-osd1:sdd:sdf7
And then:
ceph-deploy osd activate ceph-osd1:sdd:sdf7
I end up with "wrong permission" on the
Hi all,
FYI, a few days ago, we released openATTIC 2.0.13 beta. On the Ceph
management side, we've made some progress with the cluster and pool
monitoring backend, which lays the foundation for the dashboard that
will display graphs generated from this data. We also added some more
RBD management
Hello,
I am thinking about setting up a second Ceph cluster in the near future,
and I was wondering about the current status of rbd-mirror.
1) Is it production ready at this point?
2) Can it be used when you have a cluster with existing data, in order to
replicate onto a new cluster?
3) We hav
3)we hav
Can you run "rbd info vm-208-disk-2@initial.20160729-220225"? You most
likely need to rebuild the object map for that specific snapshot via
"rbd object-map rebuild vm-208-disk-2@initial.20160729-220225".
On Sat, Jul 30, 2016 at 7:17 AM, Christoph Adomeit
wrote:
> Hi there,
>
> I upgraded my clust
Thank you, Jason.
While I can't find the culprit for the watcher (the watcher never expired,
and survived a reboot. udev, maybe?), blacklisting the host did allow me
to remove the device.
Much appreciated,
-kc
> On Aug 4, 2016, at 4:50 AM, Jason Dillaman wrote:
>
> If the client is no longer
Wow, thanks. I think that's the tidbit of info I needed to explain why
increasing numjobs no longer scales performance as expected.
Warren Wang
On 8/4/16, 7:49 AM, "ceph-users on behalf of Jason Dillaman"
wrote:
>With exclusive-lock, only a single client can have write access to the
>i
If you search through the archives, there have been a couple of other
people who have run into this as well with Jewel. With the librbd
engine, you are much better off using iodepth and/or multiple fio processes
rather than numjobs. Even pre-Jewel, there were gotchas that might not be
immediately apparent.
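(In other words, something along these lines rather than bumping numjobs;
the pool and image names and the queue depth are only illustrative:)

  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --name=singlewriter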
Hello,
you need to work on your google skills. ^_-
I wrote about this just yesterday, and if you search for "ceph-deploy wrong
permission" the second link is the issue description:
http://tracker.ceph.com/issues/13833
So I assume your journal partitions are either pre-made or non-GPT.
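(If the journal partitions are pre-made, a quick but non-persistent
workaround is to fix the ownership by hand; on GPT partitions tagged with
the Ceph journal partition type GUID, the udev rules shipped with the
packages handle this automatically:)

  chown ceph:ceph /dev/sdf7   # journal partition from the example above; does not survive a reboot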
Christian
Hello!
I would like some guidance on how to proceed with a problem inside a
snapshot which is used to clone images. My sincere apologies if what I am
asking isn't possible.
I have a snapshot which is used to create clones for guest virtual
machines. It is a raw object with an NTFS OS contain
Yeah, you are right.
From what I understand, using the ceph user is a good idea,
but the fact is that it doesn't work,
so I circumvented that by configuring ceph-deploy to use root.
Was that the main goal? I don't think so.
Thanks for your answer
On 5 August 2016 at 02:01, "Christian Balzer" wrote:
>
> He
I am reading half of your answer.
Do you mean that ceph will create by itself the partitions for the journal?
If so, it's cool and weird...
On 5 August 2016 at 02:01, "Christian Balzer" wrote:
>
> Hello,
>
> you need to work on your google skills. ^_-
>
> I wrote about his just yesterday and if you se
Hello,
On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
> I am reading half your answer
>
> Do you mean that ceph will create by itself the partitions for the journal?
>
Yes, "man ceph-disk".
> If so its cool and weird...
>
It can be very weird indeed.
If sdc is your data (OSD) disk a
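(Roughly along these lines, letting ceph-disk carve the journal partition
out of the journal device itself; the device names are just placeholders:)

  ceph-disk prepare /dev/sdc /dev/sdf   # creates a correctly typed and owned journal partition on sdf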
Maybe you are misspelling, but in the docs they don't use white space but ":";
this is quite misleading if it works
On 5 August 2016 at 02:30, "Christian Balzer" wrote:
>
> Hello,
>
> On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
>
> > I am reading half your answer
> >
> > Do you mean that c
Hello,
On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote:
> Maybe you are misspelling, but in the docs they don't use white space but ":";
> this is quite misleading if it works
>
I'm quoting/showing "ceph-disk", which is called by ceph-deploy, which
indeed uses a ":".
Christian
> On 5 August 20
OK, I will try without creating the partitions myself.
Nevertheless, thanks a lot Christian for your patience; I will try to ask more
clever questions when I'm ready for them.
On 5 August 2016 at 02:44, "Christian Balzer" wrote:
Hello,
On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote:
> Maybe you are mispell
Hi Jason
Thanks for your information
----- Original Message -----
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: 4 August 2016 19:49
To: Alexandre DERUMIER
Cc: Zhiyuan Wang ; ceph-users
Subject: Re: [ceph-users] Bad performance when two fio write to the same image
With exclusive-lock, only a single client
On Wed, Aug 3, 2016 at 10:54 AM, Alex Gorbachev
wrote:
> On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev
> wrote:
>> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote:
>>> Alex Gorbachev wrote on 08/02/2016 07:56 AM:
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote:
> On
Dear cephers...
I am looking for some advice on migrating from legacy tunables to Jewel
tunables.
What would be the best strategy?
1) A step-by-step approach?
- starting with the transition from bobtail to firefly (and, in
this particular step, by starting to set chooseleaf_vary_
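(One possible shape of the step-by-step route, as a sketch; each step
changes the CRUSH mappings and triggers data movement, so let the cluster
settle before taking the next one:)

  ceph osd crush show-tunables      # check the current profile first
  ceph osd crush tunables firefly   # includes chooseleaf_vary_r; expect significant rebalancing
  ceph osd crush tunables hammer
  ceph osd crush tunables jewel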
I have a cluster and I want a radosgw user to have access to a bucket's
objects only, like /*, but the user should not be able to create new
or remove this bucket.
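(One way to sketch this with plain S3 ACLs via s3cmd, assuming the bucket
already exists and is owned by a different RGW user, so the grantee cannot
delete a bucket it does not own; the bucket and user IDs are placeholders,
and the user's own bucket creation can additionally be restricted through
radosgw-admin's max-buckets setting:)

  s3cmd setacl s3://mybucket --acl-grant=read:other_user_uid
  s3cmd setacl s3://mybucket --acl-grant=write:other_user_uid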
-
Parveen Kumar Sharma
Added appropriate subject
On Fri, Aug 5, 2016 at 10:23 AM, Parveen Sharma
wrote:
> Have a cluster and I want a radosGW user to have access on a bucket
> objects only like /* but user should not be able to create new
> or remove this bucket
>
>
>
> -
> Parveen Kumar Sharma
>