Hi everybody,
We have 3 monitors in our ceph cluster: 2 in one local site (2 data centers a
few km away from each other), and the 3rd one on a remote site, with a maximum
round-trip time (RTT) of 30ms between the local site and the remote site. All
OSDs run on the local site. The reason for the re
Hi list,
I meet a strange problem:
Sometimes I cannot see a file/directory created by another ceph-fuse
client. It becomes visible after I touch/mkdir the same name.
Any thoughts?
Thanks!
Hey Patrick,
Looks like the GMT+8 time for the 1st day is wrong; shouldn't it be 10:00 pm -
7:30 am?
On Wed, Jul 1, 2015 at 9:02 AM, flisky wrote:
> Hi list,
>
> I meet a strange problem:
>
> Sometimes I cannot see a file/directory created by another ceph-fuse
> client. It becomes visible after I touch/mkdir the same name.
>
> Any thoughts?
What version are you running? We've seen a few thi
Hi,
I've been playing around with the rados gateway and RBD and have some
questions about user access restrictions. I'd like to be able to set up
a cluster that would be shared among different clients without any
conflicts...
Is there a way to limit S3/Swift clients to be able to write data
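For the RBD side of that, the usual approach is per-pool cephx capabilities, so
each tenant only gets a key that can touch its own pool. A minimal sketch, with
hypothetical pool and client names:

  ceph osd pool create tenant-a-rbd 128
  ceph auth get-or-create client.tenant-a mon 'allow r' \
      osd 'allow rwx pool=tenant-a-rbd'

S3/Swift restrictions are a separate matter, handled per radosgw user.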
On 06/30/2015 10:42 PM, Tuomas Juntunen wrote:
Hi
For seq reads, here are the latencies:
lat (usec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.02%, 100=0.03%
lat (usec) : 250=1.02%, 500=87.09%, 750=7.47%, 1000=1.50%
lat (msec) : 2=0.76%, 4=1.72%, 10=0.19%, 20=0.19%
Random reads:
lat
Hi,
I am new to the Ceph project. I am trying to benchmark erasure code on Intel
and I am getting the following error.
[root@nitin ceph]#
CEPH_ERASURE_CODE_BENCHMARK=src/ceph_erasure_code_benchmark
PLUGIN_DIRECTORY=src/.libs qa/workunits/erasure-code/bench.sh
seconds KB plugin k m work.
Hi Nitin,
Have you installed the YASM compiler?
David
Greg, Wido. Thank you to both of you. Every DC has a local NTP server, and the
NTP config for all servers designates their local NTP server as preferred, and
then the remote one. Thank you for the pointer to the 'mon clock drift allowed'
setting. The default is 50 ms. Funny enough, the 1 second leap whi
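For reference, a minimal sketch of that setting (0.1 s here is just an example
value chosen with the 30 ms WAN RTT in mind; the default is 0.05 s):

  [mon]
  # warn about monitor clock skew only above 100 ms
  mon clock drift allowed = 0.1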
radosgw-agent => 1.2.1 (trusty)
Ubuntu 14.04
English version :
Hello,
According to the documentation here:
http://ceph.com/docs/master/radosgw/admin/
I followed the documentation to the letter, but the result is totally
different:
root@ih-prd-rgw01:~# radosgw-admin user create --uid=johndoe
Hi,
I think it's because the secret key for the swift subuser has not been generated:
radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret
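For completeness, the sequence in the admin guide is roughly to create the
swift subuser first and then generate its key; with the example user from this
thread:

  radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
  radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret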
Mikaël
hi,
Thank you for your reply, but I just regenerated the user completely and I
confirm that I still have the problem :(
radosgw-admin user create --uid=johndoe --display-name="John Doe"
--email=m...@email.com
"subusers": [],
"keys": [
{ "user": "johndoe",
"access_key": "91KC4
ok, I think I found the answer to the second question:
http://wiki.ceph.com/Planning/Blueprints/Giant/Add_QoS_capacity_to_librbd
..librbd doesn't support any QoS for now..
Can anyone shed some light on the namespaces and limiting S3 users to
one bucket?
J
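On the one-bucket question: a commonly used knob is the per-user bucket limit
in radosgw. A rough sketch, assuming the user already exists:

  # allow the user to own at most one bucket
  radosgw-admin user modify --uid=johndoe --max-buckets=1

radosgw also has per-user and per-bucket quotas (radosgw-admin quota set /
quota enable) if you need to cap capacity as well.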
It's not really a problem; the swift johndoe user works if he has a record
in swift_keys.
The S3 secret key of the johndoe user is here:
"keys": [
{ "user": "johndoe",
"access_key": "91KC4JI5BRO39A22JY9I",
"secret_key": "Z5kLaBtg870xBhYtb4RKY82qGsbiqRpGs\/KQUXKF"},
I tested
Yes, it also works... It's just that I did not expect to have a
johndoe:swift element in "keys".
Thank you for providing answers.
Hi Cephers,
On Sunday evening we upgraded Ceph from 0.87 to 0.94. After the upgrade, VMs
running on Proxmox freeze for 3-4 s in 10 min periods (applications stop
responding on Windows). Before the upgrade everything was working fine. In
/proc/diskstats, field 7 (time spent reading (ms)) and 11 (t
On Tue, 16 Jun 2015 10:04:26 +0200,
Marcus Forness wrote:
> hi! anyone able to provide some tips on a performance issue on a newly
> installed all-flash ceph cluster? When we do write tests we get 900MB/s
> write, but read tests are only 200MB/s. All servers are on 10GBit
> connections.
>
> [global]
Hi community,
Do you know if there is a page listing all the official Ceph clusters deployed,
with number of nodes, capacity, and protocol (block / file / object)?
If not, would you agree to create this list on the Ceph site?
Thanks
Sent from my iPhone
On Wed, Jul 1, 2015 at 3:10 PM, Jacek Jarosiewicz wrote:
> ok, I think I found the answer to the second question:
>
> http://wiki.ceph.com/Planning/Blueprints/Giant/Add_QoS_capacity_to_librbd
>
> ..librbd doesn't support any QoS for now..
But libvirt/qemu can do QoS: see iotune in https://libvirt
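For reference, the iotune element sits inside the disk definition of the
libvirt domain XML; a minimal sketch with made-up limits (60 MB/s, 200 IOPS):

  <disk type='network' device='disk'>
    <!-- usual rbd source/auth/target elements go here -->
    <iotune>
      <total_bytes_sec>62914560</total_bytes_sec>
      <total_iops_sec>200</total_iops_sec>
    </iotune>
  </disk>

qemu then enforces the limits per virtual disk, independently of librbd.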
Hello all,
I've got a coworker who put "filestore_xattr_use_omap = true" in the
ceph.conf when we first started building the cluster. Now he can't
remember why. He thinks it may be a holdover from our first Ceph
cluster (running dumpling on ext4, iirc).
In the newly built cluster, we are using XF
Hi,
Like David said: the most probable cause is that there is no recent yasm
installed. You can run ./install-deps.sh to ensure the necessary dependencies
are installed.
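A quick way to check and retry, reusing the commands already posted in this
thread:

  # from the top of the ceph source tree
  yasm --version || ./install-deps.sh
  CEPH_ERASURE_CODE_BENCHMARK=src/ceph_erasure_code_benchmark \
      PLUGIN_DIRECTORY=src/.libs qa/workunits/erasure-code/bench.sh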
Cheers
It doesn't matter; I think filestore_xattr_use_omap is a 'noop' and not used
in Hammer.
Thanks & Regards
Somnath
On Tue, Jun 30, 2015 at 10:36 AM, Daniel Schneller wrote:
> Hi!
>
> We are seeing a strange - and problematic - behavior in our 0.94.1
> cluster on Ubuntu 14.04.1. We have 5 nodes, 4 OSDs each.
>
> When rebooting one of the nodes (e. g. for a kernel upgrade) the OSDs
> do not seem to shut down cor
On Mon, Jun 29, 2015 at 1:44 PM, Burkhard Linke wrote:
> Hi,
>
> I've noticed that a number of placement groups in our setup contain objects,
> but no actual data
> (ceph pg dump | grep remapped during a hard disk replace operation):
>
> 7.616 26360 0 52720 4194304 3003
Hi
Yes, the OSDs are on spinning disks and we have 18 SSDs for journals, one
SSD for two OSDs.
The OSDs are:
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1CH164
From what I've understood, the journals are not used as a read cache at all,
just for writing. Would SSD ba
Hi Valery,
With the old account, did you try to give FULL access to the new user ID?
The process should be:
From the OLD account, add FULL access to the NEW account (S3 ACL, with
CloudBerry for example).
With radosgw-admin, update the link from the OLD account to the NEW account
(the link allows the user to see the bucket with
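A rough sketch of that process with hypothetical bucket/user names (the exact
radosgw-admin arguments vary a bit between releases, so check --help on your
version):

  # 1) from the OLD account, grant the NEW user full control on the bucket,
  #    e.g. with s3cmd:
  s3cmd setacl s3://mybucket --acl-grant=full_control:new-user-id
  # 2) then move bucket ownership over to the NEW user:
  radosgw-admin bucket unlink --bucket=mybucket --uid=old-user-id
  radosgw-admin bucket link --bucket=mybucket --uid=new-user-id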
Thanks Mark
Are there any plans for a ZFS-like L2ARC for Ceph, or is cache tiering what
should work like this in the future?
I have tested cache tier + EC pool, and that created too much load on our
servers, so it was not viable to use.
I was also wondering if EnhanceIO would be a good solut
Hi
I'll look into the possibility of testing EnhanceIO and report back on this.
Thanks
Br, T
Hi cephers,
Is anyone out there using EnhanceIO in a production environment? Any
recommendations? Any perf output to share showing the difference between
using it and not?
Thanks in advance,
*German*
Hello Ceph lovers
You will have noticed that Red Hat recently released Red Hat Ceph Storage 1.3:
http://redhatstorage.redhat.com/2015/06/25/announcing-red-hat-ceph-storage-1-3/
My questions are:
- What is the exact version number of open-source Ceph provided with this
product?
- RHCS 1.3 Feature
On 07/01/2015 03:02 PM, Vickey Singh wrote:
> - What's the exact version number of OpenSource Ceph is provided with
> this Product
It is Hammer, specifically 0.94.1 with several critical bugfixes on top
as the product went through QE. All of the bugfixes have been proposed
or merged to Hammer ups
Hi
One of our nodes has OSD logs that say "wrongly marked me down" for every OSD
at some point. What could be the reason for this? Does anyone have similar
experiences?
Other nodes work totally fine and they are all identical.
Br,T
Hi,
I asked the same question a week or so ago (just search the mailing list
archives for EnhanceIO :) and got some interesting answers.
It looks like the project has been pretty much dead since it was bought out by
HGST. Even their website has some broken links regarding EnhanceIO.
I'm keen to try
This can happen if your OSDs are flapping.. Hope your network is stable.
Thanks & Regards
Somnath
Hi,
The details of the differences between the Hammer point releases and Red Hat
Ceph Storage 1.3 can be listed as described at
http://www.spinics.net/lists/ceph-devel/msg24489.html (reconciliation between
hammer and v0.94.1.2).
The same analysis should be done for
https://github.com/ceph/cep
I would like to get some clarification on the size of the journal disks that I
should get for the new Ceph cluster I am planning. I read about the journal
settings at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
but that didn't really clarify it for me, or I
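For what it's worth, the rule of thumb on that page boils down to:
journal size >= 2 * expected throughput (MB/s) * filestore max sync interval (s).
A sketch with made-up numbers:

  [osd]
  # e.g. 2 * 120 MB/s (one spinner) * 5 s default sync interval = 1200 MB;
  # 10 GB per journal leaves plenty of headroom
  osd journal size = 10240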
I've been wrestling with IO performance in my cluster and one area I have
not yet explored thoroughly is whether or not performance constraints on
mon hosts would be likely to have any impact on OSDs. My mons are quite
small, and one in particular has rather high IO waits (frequently 30% or
more) d
I would probably go with smaller OSD disks; 4TB is too much to lose in case of
a broken disk, so maybe more OSD daemons with smaller disks, maybe 1TB or 2TB
in size. A 4:1 ratio is good enough. Also, I think that a 200G disk for the
journals would be OK, so you can save some money there; the OSDs of c
4TB is too much to lose? Why would it matter if you lost one 4TB disk, given
the redundancy? Won't it auto-recover from the disk failure?
Nate Curry
Like most disk redundancy systems, the concern usually is the amount of
time it takes to recover, wherein you are vulnerable to another failure. I
would assume that is also the concern here.
Ask the other guys on the list, but for me, losing 4TB of data is too much.
The cluster will still run fine, but at some point you need to recover that
disk. Also, if you lose one server with all the 4TB disks, in that case, yeah,
it will hurt the cluster. Also take into account that with that ki
It also depends a lot on the size of your cluster ... I have a test cluster I'm
standing up right now with 60 nodes - a total of 600 OSDs each at 4 TB ... If I
lose 4 TB - that's a very small fraction of the data. My replicas are going to
be spread out across a lot of spindles, and replicating
I'm interested in such a configuration; can you share some performance
tests/numbers?
Thanks in advance,
Best regards,
*German*
Hello,
On Wed, 1 Jul 2015 15:24:13 +0000, Somnath Roy wrote:
> It doesn't matter, I think filestore_xattr_use_omap is a 'noop' and not
> used in the Hammer.
>
Then what was this functionality replaced with, esp. considering EXT4
based OSDs?
Chibi
It is replaced with the following config option..
// Use omap for xattrs for attrs over
// filestore_max_inline_xattr_size or
OPTION(filestore_max_inline_xattr_size, OPT_U32, 0) //Override
OPTION(filestore_max_inline_xattr_size_xfs, OPT_U32, 65536)
OPTION(filestore_max_inline_xattr_size_btrfs,
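To spell that out: xattrs above the inline threshold now spill into omap
automatically, so there is no global on/off switch any more. A sketch of the
generic overrides in ceph.conf (the values shown are what I believe the Hammer
defaults to be for "other" filesystems such as ext4):

  [osd]
  # xattrs larger than this many bytes go to omap (ext4/"other" default: 512)
  filestore max inline xattr size = 512
  # at most this many xattrs are kept inline (ext4/"other" default: 2)
  filestore max inline xattrs = 2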
I've now read all the messages relating to EnhanceIO, and from what I can tell
it would not help here, at least not in the way I would want it to.
Thanks Christian for pointing this out.
Br, T
I've checked the network; we use IPoIB and all nodes are connected to the same
switch, and there are no breaks in connectivity while this happens. My constant
ping shows 0.03-0.1 ms. I would say this is OK.
This happens almost every time deep scrubbing is running. Our loads on this
particular
Yeah, this can happen during deep scrub and also during rebalancing.. I forgot
to mention that..
Generally, it is a good idea to throttle those. For deep scrub, you can try
using (got it from an old post, I never used it myself):
osd_scrub_chunk_min = 1
osd_scrub_chunk_max = 1
osd_scrub_sleep = 0.1
Fo
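If you want to apply those without restarting the OSDs, something like the
following should work, with the same values going into the [osd] section of
ceph.conf so they survive a restart:

  ceph tell osd.* injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 1 --osd_scrub_sleep 0.1'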
On 2015年07月02日 00:16, Gregory Farnum wrote:
How reproducible is this issue for you? Ideally I'd like to get logs
from both clients and the MDS server while this is happening, with mds
and client debug set to 20. And also to know if dropping kernel caches
and re-listing the directory resolves the
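For anyone wanting to capture that, the debug settings and cache drop would
look roughly like this (the mount path is just an example):

  # ceph.conf on the MDS host and on the ceph-fuse clients
  [mds]
      debug mds = 20
  [client]
      debug client = 20

  # on the client (as root): drop dentry/inode caches, then re-list
  echo 2 > /proc/sys/vm/drop_caches
  ls -la /mnt/cephfs/path/to/dir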