On 01/16/2018 06:51 AM, Leonardo Vaz wrote:
Hey Cephers!
We are proud to announce our first Ceph Day of 2018, which takes place on
February 7 at the Deutsche Telekom AG office in Darmstadt (25 km south
of Frankfurt Airport).
The conference schedule[1] is being finalized and registration is
Hey Cephers!
We are proud to announce our first Ceph Day of 2018, which takes place on
February 7 at the Deutsche Telekom AG office in Darmstadt (25 km south of
Frankfurt Airport).
The conference schedule[1] is being finalized and registration is
already in progress[2].
If you're in Europe, join
Hi Massimiliano,
On Thu, Jan 11, 2018 at 6:15 AM, Massimiliano Cuttini
wrote:
> Hi everybody,
>
> I'm always looking at Ceph for the future.
> But I do see several issues that are left unresolved and block adoption
> in the near future.
> I would like to know if there are some answers already:
>
>
Hi Wido,
On Wed, Jan 10, 2018 at 11:09 AM, Wido den Hollander wrote:
> Hi,
>
> Is there a way to easily modify the device-class of devices on an offline
> CRUSHMap?
>
> I know I can decompile the CRUSHMap and do it, but that's a lot of work in a
> large environment.
>
> In larger environments I'm
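For reference, one possible offline workflow is sketched below (file names are placeholders; it assumes crushtool is available on the admin node). On Luminous the same change can also be made online with "ceph osd crush set-device-class".

  # export and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit the "class" field of the relevant device lines in crushmap.txt, then:
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new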
Thanks John, I removed these pools on Friday and as you suspected
there was no impact.
Regards,
Rich
On 8 January 2018 at 23:15, John Spray wrote:
> On Mon, Jan 8, 2018 at 2:55 AM, Richard Bade wrote:
>> Hi Everyone,
>> I've got a couple of pools that I don't believe are being used but
>> have
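For anyone following along, a rough sketch of how an unused pool can be checked and then removed (the pool name is a placeholder; on Luminous and later, deletion additionally requires mon_allow_pool_delete to be enabled):

  # confirm the pool really holds no data and sees no I/O
  rados df
  ceph df detail
  # delete it; the name must be given twice as a safety measure
  ceph osd pool delete mypool mypool --yes-i-really-really-mean-it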
On Tue, Jan 16, 2018 at 1:35 AM, Alexander Peters wrote:
> I created the dump output but it looks very cryptic to me, so I can't really
> make much sense of it. Is there anything to look for in particular?
Yes, basically we are looking for any line that ends in "= 34". You
might also find piping
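A minimal way to narrow the dump down, assuming the trace was written to a file (the file name here is hypothetical):

  grep ' = 34$' /tmp/ltrace.out
  # 34 is ERANGE in the Linux errno table; whether that is the relevant
  # meaning of these return values here is only a guess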
On Mon, Jan 8, 2018 at 6:08 AM, Jens-U. Mozdzen wrote:
> Hi *,
>
> trying to remove a caching tier from a pool used for RBD / Openstack, we
> followed the procedure from
> http://docs.ceph.com/docs/master/rados/operations/cache-tiering/#removing-a-writeback-cache
> and ran
> into problems.
>
> The
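For context, the documented removal sequence for a writeback cache roughly boils down to the following (pool names are placeholders):

  ceph osd tier cache-mode cachepool forward --yes-i-really-mean-it
  rados -p cachepool cache-flush-evict-all
  ceph osd tier remove-overlay basepool
  ceph osd tier remove basepool cachepool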
Maybe for the future:
rpm {-V|--verify} [select-options] [verify-options]
Verifying a package compares information about the installed
files in the package with information about the files taken
from the package metadata stored in the rpm database. Among
other things, v
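In other words, something along these lines could be used to spot locally modified files from a Ceph package (the package name is just an example):

  rpm -V ceph-common
  # each output line flags what differs, e.g. S (size), 5 (digest), T (mtime)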
Finally, the issue that has haunted me for quite some time turned out to be
a ceph.conf issue:
I had
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
once I changed them to
osd_pool_default_pg_num = 32
osd_pool_default_pgp_num = 32
the second rgw process started without issue.
No idea w
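For completeness, the working settings would sit in ceph.conf like this, and the pg_num a pool actually received can be checked afterwards (the pool name below is just an example of a default rgw pool):

  [global]
  osd_pool_default_pg_num = 32
  osd_pool_default_pgp_num = 32

  # verify what a given pool ended up with
  ceph osd pool get default.rgw.log pg_num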
Hi Wes,
On 15-1-2018 20:57, Wes Dillingham wrote:
My understanding is that the exact same objects would move back to the
OSD if weight went 1 -> 0 -> 1 given the same Cluster state and same
object names, CRUSH is deterministic so that would be the almost certain
result.
Ok, thanks! So this
My understanding is that the exact same objects would move back to the OSD
if weight went 1 -> 0 -> 1 given the same Cluster state and same object
names, CRUSH is deterministic so that would be the almost certain result.
On Mon, Jan 15, 2018 at 2:46 PM, lists wrote:
> Hi Wes,
>
> On 15-1-2018 20
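As a concrete illustration, assuming the discussion is about the override weight rather than the CRUSH weight (the OSD id is hypothetical):

  ceph osd reweight osd.12 0     # data moves off osd.12
  ceph osd reweight osd.12 1     # the same PGs map back, since CRUSH is deterministic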
Hi Wes,
On 15-1-2018 20:32, Wes Dillingham wrote:
I don't hear a lot of people discuss using xfs_fsr on OSDs, and going over
the mailing list history it seems to have been brought up very
infrequently and never as a suggestion for regular maintenance. Perhaps
it's not needed.
True, it's just some
I don't hear a lot of people discuss using xfs_fsr on OSDs, and going over
the mailing list history it seems to have been brought up very infrequently
and never as a suggestion for regular maintenance. Perhaps it's not needed.
One thing to consider trying, and to rule out something funky with the XFS
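For reference, a bounded manual run against a single OSD's filesystem might look like this (assuming the default data path for osd.10):

  # defragment for at most two hours, verbosely
  xfs_fsr -v -t 7200 /var/lib/ceph/osd/ceph-10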
You would need to create a new pool and migrate the data to that new pool.
A replicated pool fronting an EC pool for RBD is a known-bad workload:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/#a-word-of-caution
but others' mileage may vary, I suppose.
In order to migrate you could d
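One possible migration sketch (pool and image names are placeholders; rados cppool does not preserve snapshots, and per-image export/import is usually the safer route for RBD):

  ceph osd pool create newpool 64 64 replicated
  # simple object copy, no snapshots:
  rados cppool oldpool newpool
  # or, per RBD image:
  rbd export oldpool/image1 - | rbd import - newpool/image1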
Hi!
After having a completely broken radosgw setup due to damaged buckets, I
completely deleted all rgw pools, and started from scratch.
But my problem is reproducible. After pushing ca. 10 objects into a
bucket, the resharding process appears to start, and the bucket is now
unresponsive
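For what it's worth, on 12.2.x the resharding state can be inspected and driven manually (the bucket name is a placeholder), and dynamic resharding can be switched off while debugging:

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket=mybucket
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=64
  # in ceph.conf, in the rgw client section:
  # rgw_dynamic_resharding = false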
Good day,
I'm having an issue re-deploying a host back into my production ceph
cluster.
Due to some bad memory (picked up by a scrub), which has since been replaced,
I felt the need to re-install the host to be sure no host files were damaged.
Prior to decommissioning the host I set the crush weights on
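For the record, draining the host before a re-install usually looks something like this (OSD ids are examples):

  # push data off the host's OSDs first
  ceph osd crush reweight osd.20 0
  ceph osd crush reweight osd.21 0
  # once the cluster is clean, the OSDs can be marked out for the re-install
  ceph osd out 20 21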
I created the dump output but it looks very cryptic to me, so I can't really
make much sense of it. Is there anything to look for in particular?
I think I am going to read up on how to interpret ltrace output...
BR
Alex
- Original Message -
From: "Brad Hubbard"
To: "Alexander Peters"
CC:
Hello,
We have a radosgw cluster (version 12.2.2) in multisite mode. Our cluster
is formed by one master realm, with one master zonegroup and two
zones (one of which is the master zone).
We've followed the instructions of Ceph documentation to install and
configure our cluster.
The cluster works as
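For context, the documented multisite bootstrap on the master zone side boils down to something like this (names are placeholders; endpoints and the system-user keys are omitted here):

  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup create --rgw-zonegroup=myzg --master --default
  radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=zone1 --master --default
  radosgw-admin period update --commit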
Hi,
On our three-node, 24-OSD ceph 10.2.10 cluster, we have started seeing
slow requests on a specific OSD during the two-hour nightly xfs_fsr
run from 05:00 - 07:00. This started after we applied the Meltdown patches.
The specific osd.10 also has the highest space utilization of all OSD
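To pin down what those slow requests are waiting on, the admin socket of the affected OSD can be queried during the fsr window (run on the node hosting osd.10):

  ceph daemon osd.10 dump_ops_in_flight
  ceph daemon osd.10 dump_historic_ops
  ceph health detail          # lists the OSDs currently reporting slow requests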