I'm testing multi-realm features following the official multisite doc
(http://docs.ceph.com/docs/jewel/radosgw/multisite/), but after setting up a
zonegroup and zone, every radosgw-admin command I run prints an error
message:
> 2016-12-05 16:19:21.117371 7f85beec59c0 0 error in read_id f
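A minimal sketch of the commands that often clear this, assuming the newly
created realm simply has not been marked as default or committed yet; the
realm name below is a placeholder:
  # mark the new realm as the cluster default (realm name is an assumption)
  radosgw-admin realm default --rgw-realm=myrealm
  # commit the zonegroup/zone changes into the current period
  radosgw-admin period update --commit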
Le 05/12/2016 à 05:14, Alex Gorbachev a écrit :
> Referencing
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
>
> When using --dmcrypt with ceph-deploy/ceph-disk, the journal device is
> not allowed to be an existing partition. You have to specify the entire
> block dev
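A hedged sketch of the invocation this implies, with placeholder host and
device names; the journal argument is a whole device, not a pre-made
partition:
  # data disk and journal disk passed as whole block devices
  ceph-deploy osd create --dmcrypt node1:/dev/sdb:/dev/sdc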
I’m guessing that whatever is causing your hangs is also blocking the threads;
by bumping up the limits you are probably just making sure that there are
always threads available to handle new requests.
From: Thomas Danan [mailto:thomas.da...@mycom-osi.com]
Sent: 04 December 2016 20:20
To: n.
Hi Ceph Users,
Could anyone reply to my questions below? It would be a great help; I appreciate it.
--Rakesh Parkiti
From: ceph-users on behalf of Rakesh
Parkiti
Sent: 03 December 2016 13:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] RBD Image Features
OSDs of different sizes are used for different tasks, such as cache. My concern is
the 4TB OSDs used as a storage pool: the space used on them is not the same.
/dev/sdf1 4.0T 1.7T 2.4T 42% /var/lib/ceph/osd/ceph-4
/dev/sdd1 4.0T 1.7T 2.4T 41% /var/lib/ceph/osd/ceph-2
/dev/sdb1 4.0T 1.9
On Mon, Dec 5, 2016 at 3:27 AM, Goncalo Borges
wrote:
> Hi Again...
>
> Once more, my environment:
>
> - ceph/cephfs in 10.2.2.
> - All infrastructure is in the same version (rados cluster, mons, mds and
> cephfs clients).
> - We mount cephfs using ceph-fuse.
>
> I want to set up quotas to limit u
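For reference, a sketch of how directory quotas are usually set through
ceph-fuse, assuming a mount at /mnt/cephfs and a hypothetical user directory;
the virtual xattr names are the standard ceph.quota ones:
  # limit the directory to ~500 GB and 100k files (example values)
  setfattr -n ceph.quota.max_bytes -v 500000000000 /mnt/cephfs/users/alice
  setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/users/alice
  # read a quota back
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/users/alice
Depending on the Jewel client build, quota enforcement may also need
client_quota = true in the client's ceph.conf.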
On Fri, Dec 2, 2016 at 7:47 PM, Steve Jankowski wrote:
> Anyone using rrdtool with Ceph via rados or cephfs ?
>
>
> If so, how many rrd files and how many rrd file updates per minute.
>
>
> We have a large population of rrd files that's growing beyond a single
> machine. We're already using SSD a
Hello,
is it possible to prevent a cephfs client from mounting the root of a cephfs
filesystem and browsing through it?
We want to restrict cephfs clients to a particular directory, but when
we define a specific cephx auth key for a client we need to add the
following caps: "mds 'allow r'" which then gives t
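A sketch of the path-restricted cap that is typically used here, assuming a
client named client.restricted, a directory /shared and the default data pool
cephfs_data (all placeholder names):
  ceph auth get-or-create client.restricted \
      mon 'allow r' \
      mds 'allow r, allow rw path=/shared' \
      osd 'allow rw pool=cephfs_data'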
Ok, just discovered that with the fuse client, we have to add the '-r
/path' option, to treat that as root. So I assume the caps 'mds allow
r' is only needed if we also want to be able to mount the directory
with the kernel client. Right?
Best,
Martin
On Mon, Dec 5, 2016 at 1:20 PM, Martin Palma
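For completeness, a sketch of the ceph-fuse invocation being described, with
placeholder client id, directory and mountpoint:
  # present /shared as the root of the mounted filesystem
  ceph-fuse --id restricted -r /shared /mnt/shared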
Hi Martin,
On Mon, 5 Dec 2016 13:27:01 +0100, Martin Palma wrote:
> Ok, just discovered that with the fuse client, we have to add the '-r
> /path' option, to treat that as root. So I assume the caps 'mds allow
> r' is only needed if we also want to be able to mount the directory
> with the kernel
On Mon, Dec 5, 2016 at 12:35 PM, David Disseldorp wrote:
> Hi Martin,
>
> On Mon, 5 Dec 2016 13:27:01 +0100, Martin Palma wrote:
>
>> Ok, just discovered that with the fuse client, we have to add the '-r
>> /path' option, to treat that as root. So I assume the caps 'mds allow
>> r' is only needed
On Sun, Dec 4, 2016 at 11:51 PM, Goncalo Borges
wrote:
> Dear CephFSers.
>
> We are running ceph/cephfs in 10.2.2. All infrastructure is in the same
> version (rados cluster, mons, mds and cephfs clients). We mount cephfs using
> ceph-fuse.
>
> Last week I triggered some of my heavy users to delet
On Fri, Dec 2, 2016 at 8:23 AM, Xusangdi wrote:
> Hi John,
>
> In our environment we want to deploy MDS and cephfs client on the same node
> (users actually use cifs/nfs to access ceph storage). However,
> it takes a long time to recover if the node with the active MDS fails, during
> which a large
Hi Nick,
thanks for sharing your results. Would you be able to share the fio args
you used for benchmarking (especially the ones for the screenshot you
shared in the write latency post)?
What I found is that when I do some 4k write benchmarks my lat stdev is
much higher than the average (also wid
Hi,
we’re currently expanding our cluster to grow the number of IOPS we can provide
to clients. We’re still on Hammer but in the process of upgrading to Jewel. We
started adding pure-SSD OSDs in the last days (based on MICRON S610DC-3840) and
the slow requests we’ve seen in the past have starte
Hi Pierre,
On Mon, Dec 5, 2016 at 3:41 AM, Pierre BLONDEAU
wrote:
> Le 05/12/2016 à 05:14, Alex Gorbachev a écrit :
>> Referencing
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
>>
>> When using --dmcrypt with ceph-deploy/ceph-disk, the journal device is
>> not allow
Ceph's data distribution is handled by the CRUSH algorithm. While your use case is
simple, the algorithm is very complex so that it can handle complex scenarios. The
variable you have access to is the crush weight for each OSD. If you have an
OSD, like ceph-3, that has more data than the rest and ceph-2 that has les
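A sketch of the usual way to inspect and adjust this; osd.3 and the weight
value are just examples (the crush weight is normally the disk size in TiB):
  # show per-OSD utilisation and current crush weights
  ceph osd df tree
  # lower the crush weight of the over-full OSD a little
  ceph osd crush reweight osd.3 3.5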
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex
> Gorbachev
> Sent: 05 December 2016 15:39
> To: Pierre BLONDEAU
> Cc: ceph-users
> Subject: Re: [ceph-users] Reusing journal partitions when using
> ceph-deploy/ceph-disk --dmcrypt
>
>
Hi Sascha,
Here is what I used
[global]
ioengine=rbd
randrepeat=0
clientname=admin
pool=
rbdname=test
invalidate=0    # mandatory
rw=write
bs=64k
direct=1
time_based=1
runtime=360
numjobs=1
[rbd_iodepth1]
iodepth=1
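Assuming the job file above is saved as, say, rbd-write.fio (a made-up name)
and fio was built with rbd ioengine support, it is run with:
  fio rbd-write.fio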
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists
Hi,
I had recently re-added some old OSDs by zapping them and reintroducing them
into the cluster as new OSDs. I'm using Ansible to add
the OSDs, and because there was an outstanding config change, it restarted all
OSDs on the host where I was adding the OSDs at the
end of the play.
I noticed s
When an OSD restarts, all of its PGs that had any data modified need to recover
when it comes back up. This will make sure that all new objects created and
existing objects that were modified while it was down get replicated. Both of
those types of objects count as undersized objects.
Recovery
From: David Turner [mailto:david.tur...@storagecraft.com]
Sent: 05 December 2016 16:58
To: n...@fisk.me.uk; 'ceph-users'
Subject: RE: [ceph-users] PG's become undersize+degraded if OSD's restart
during backfill
When an OSD restarts, all of its PGs that had any data modified need to recover
w
That's a fair point. I misunderstood the behaviour you were seeing. Based on
what you're seeing, it looks like the actual behaviour is that the PGs that
were down need to know whether any or all of their files are up to date.
They can only find out if they are through the Recove
Hello,
I have a question regarding whether Ceph is suitable for small scale
deployments.
Let's say I have two machines, connected with gbit LAN.
I want to share data between them, like an ordinary NFS
share, but with Ceph instead.
My idea is that with Ceph I would have redundancy with two machines
ha
On Sat, Dec 3, 2016 at 2:34 AM, Rakesh Parkiti
wrote:
> Hi All,
>
>
> I. Firstly, as per my understanding, RBD image features (exclusive-lock,
> object-map, fast-diff, deep-flatten, journaling) are not yet ready in the Ceph
> Jewel release?
Incorrect -- these features are the default enabled feature
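For illustration, a hedged sketch of checking and adjusting features on a
Jewel client; pool and image names are placeholders:
  # a Jewel client creates images with the default feature set
  rbd create rbd/test-img --size 10G
  rbd info rbd/test-img
  # features can be disabled per image, e.g. for older kernel clients,
  # in dependency order (fast-diff/object-map before exclusive-lock)
  rbd feature disable rbd/test-img deep-flatten fast-diff object-map exclusive-lock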
On Mon, Dec 5, 2016 at 3:57 AM, John Spray wrote:
> On Mon, Dec 5, 2016 at 3:27 AM, Goncalo Borges
> wrote:
>> Hi Again...
>>
>> Once more, my environment:
>>
>> - ceph/cephfs in 10.2.2.
>> - All infrastructure is in the same version (rados cluster, mons, mds and
>> cephfs clients).
>> - We mount
Hello,
On Mon, 05 Dec 2016 19:33:22 +0100 joa...@verona.se wrote:
> Hello,
>
> I have a question regarding whether Ceph is suitable for small scale
> deployments.
>
Depends on your use case, in general Ceph wants to be scaled out.
> Let's say I have two machines, connected with gbit LAN.
>
Unless
Hello,
On Mon, 5 Dec 2016 15:25:37 +0100 Christian Theune wrote:
> Hi,
>
> we’re currently expanding our cluster to grow the number of IOPS we can
> provide to clients. We’re still on Hammer but in the process of upgrading to
> Jewel.
You might want to wait until the next Jewel release, giv
Hi Greg, John...
To John: Nothing is done in the background between two consecutive df commands.
I have opened the following tracker issue: http://tracker.ceph.com/issues/18151
(sorry, all the issue headers are empty apart from the title. I've hit enter
before actually filling all the appropr
Hi John...
>> We are running ceph/cephfs in 10.2.2. All infrastructure is in the same
>> version (rados cluster, mons, mds and cephfs clients). We mount cephfs using
>> ceph-fuse.
>>
>> Last week I triggered some of my heavy users to delete data. In the
>> following example, the user in question
Hi Christian (heh),
thanks for picking this up. :)
This has become a rather long post as I added more details and gave our
history, but if we make progress then maybe this can help others in the future.
I find slow requests extremely hard to debug, and as I said: aside from
scratching my own
Hi,
a quick addition as I kept poking around, mulling over the CPU theory. I see
that OSD 49 (Micron, Host Type 2) does max out close to the theoretical
bandwidth limit (~450MiB written at 4k IOPS). It happily uses up to 200% CPU
time and has somewhat lower “await” in iostat than in the load
Hello,
On Tue, 6 Dec 2016 03:37:32 +0100 Christian Theune wrote:
> Hi Christian (heh),
>
> thanks for picking this up. :)
>
> This has become a rather long post as I added more details and gave
> our history, but if we make progress then maybe this can help others in
> the future. I find slo
Hi Joakim,
On Mon, Dec 5, 2016 at 1:35 PM wrote:
> Hello,
>
> I have a question regarding whether Ceph is suitable for small scale
> deployments.
>
> Let's say I have two machines, connected with gbit LAN.
>
> I want to share data between them, like an ordinary NFS
> share, but with Ceph instead.
>
>
Hi John, Greg, Zheng
And now a much more relevant problem. Once again, my environment:
- ceph/cephfs in 10.2.2 but patched for
  - client: add missing client_lock for get_root
    (https://github.com/ceph/ceph/pull/10027)
  - Jewel: segfault in ObjectCacher::FlusherThread
    (http://tracker.ceph.com/
Hi
I was configuring two realms in one cluster. After setting up the second realm,
I found a problem: *the master zonegroup is set to a zonegroup "default",
which is not what I want*. Here is the current period of the second realm, rb:
# radosgw-admin period get --rgw-realm=rb
{
"id": "18a3c0f8-c852-4