Hello,

How do I configure placement_targets? Which step in my procedure below is
wrong?

I want to use different pools to hold users' buckets. Two pools have been
created: one is '.bj-dz.rgw.buckets', the other is '.bj-dz.rgw.buckets.hot'.
1. Two placement targets are added to the region map. Targets
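For context, the usual pre-Jewel workflow for adding placement targets looks
roughly like this; the file names are placeholders and the JSON edits are only
sketched in comments:

    radosgw-admin region get > region.json
    # edit region.json: add the new target under "placement_targets" and,
    # if desired, point "default_placement" at it
    radosgw-admin region set < region.json
    radosgw-admin zone get > zone.json
    # edit zone.json: under "placement_pools", map the target to the
    # .bj-dz.rgw.buckets / .bj-dz.rgw.buckets.hot pools
    radosgw-admin zone set < zone.json
    radosgw-admin regionmap update
    # then restart radosgw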
I commented out partprobe and everything seems to work just fine.
If someone has experience with why this is a very bad idea, please advise.
Make sure you also know about http://tracker.ceph.com/issues/13833.
PS: we are running btrfs in the test jig and had to add "-f" to the
btrfs_args for ceph-dis
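If the intent is just to pass -f to mkfs.btrfs through stock ceph-disk, the
equivalent ceph.conf settings would look roughly like this (a sketch;
"btrfs_args" itself appears to be a variable in the poster's own deployment
tooling):

    [osd]
    osd mkfs type = btrfs
    osd mkfs options btrfs = -f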
If I’m not mistaken, marking an osd out will remap its placement groups
temporarily, while removing it from the crush map will remap the placement
groups permanently. Additionally, other placement groups from other osds could
get remapped permanently when an osd is removed from the crush map. I
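In command terms (osd 12 is just an example id), the contrast being described
is roughly:

    ceph osd out 12               # temporary: PGs are remapped, but the osd
                                  # keeps its crush weight and position
    ceph osd crush remove osd.12  # permanent: the crush map changes, so data
                                  # is reshuffled across the remaining osds
    ceph auth del osd.12          # usual follow-up steps when removing for good
    ceph osd rm 12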
On 01/07/2016 05:08 PM, Steve Taylor wrote:
> If I’m not mistaken, marking an osd out will remap its placement groups
> temporarily, while removing it from the crush map will remap the
> placement groups permanently. Additionally, other placement groups from
> other osds could get remapped permanen
What is the file system on the OSDs? Anything interesting in
iostat/atop? What are the drives backing the OSDs? A few more details
would be helpful.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Jan 7, 2016 at 5:56 AM, Jose M wrote:
> Hi,
>
> Following Yan's feeling that something could be wrong with ceph
> configuration, I started again from scratch, this time configuring ceph with
> three nodes (one mon, two osds).
>
> After starting hbase, it seems it moves forward a few more
Hi,
I'm looking to create a Ceph storage architecture to store files. I'm
particularly interested in the metadata segregation, so I would implement
the Ceph Storage Cluster first, with CephFS added once that is done. I'm
wondering what the best approach to storing data would be; for
example, conside
On Thu, Jan 7, 2016 at 9:28 PM, James Gallagher wrote:
> Hi,
>
> I'm looking to create a Ceph storage architecture to store files. I'm
> particularly interested in the metadata segregation, so I would implement
> the Ceph Storage Cluster first, with CephFS added once that is done. I'm
> wondering wh
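One common way to get that metadata segregation is simply to give CephFS its
own metadata pool; a minimal sketch, with placeholder pool names and PG counts:

    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 128
    ceph fs new cephfs cephfs_metadata cephfs_data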
Sometimes my ceph osd tree output is wrong, i.e. the wrong osds show up
under the wrong hosts.
Anyone else have this issue?
I have seen this on Infernalis and Jewel.
Thanks
Wade
Can you share the output with us?
Rgds,
Shinobu
- Original Message -
From: "Wade Holler"
To: "ceph-users"
Sent: Friday, January 8, 2016 7:29:07 AM
Subject: [ceph-users] ceph osd tree output
Sometimes my ceph osd tree output is wrong, i.e. the wrong osds show up under the wrong hosts.
Anyone else ha
http://sudomakeinstall.com/uncategorized/ceph-make-configuration-changes-in-realtime-without-restart
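That link presumably covers the injectargs approach; a minimal sketch, where
the option names are only examples and injected values do not survive a
daemon restart:

    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # or per daemon, via the admin socket:
    ceph daemon osd.0 config set osd_max_backfills 1
    ceph daemon osd.0 config show | grep osd_max_backfills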
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
tyler.bis...@beyondhosting.net
Sure, apologies for all the text. We have 12 nodes for OSDs, 15 OSDs per
node, but I will only include a sample:
ceph osd tree | head -35
ID WEIGHT    TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 130.98450 root default
-2   5.82153     host cpn1
 4   0.72769         osd.4
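A quick way to cross-check where the cluster thinks an osd lives against what
the daemon itself reports (using osd.4 from the sample above) would be:

    ceph osd find 4        # ip and crush location the cluster has for osd.4
    ceph osd metadata 4    # includes the hostname reported by the daemon itself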
Hi Robert,
Thank you for the prompt response.
The OSDs are built on XFS and the drives are Intel SSDs. Each SSD is
split into two partitions: one for the journal, the other for data.
There is no alignment issue with the partitions.
When the slow request messages are logged, the workload is quite
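For anyone who wants to double-check that, partition alignment can be verified
with parted (the device name here is just an example):

    parted /dev/sdb align-check optimal 1   # journal partition
    parted /dev/sdb align-check optimal 2   # data partition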
Hello,
On Fri, 8 Jan 2016 12:22:04 +0800 Jevon Qiao wrote:
> Hi Robert,
>
> Thank you for the prompt response.
>
> The OSDs are built on XFS and the drives are Intel SSDs. Each SSD is
> split into two partitions: one for the journal, the other for data.
> There is no alignment issue for
Hi,
How did you benchmark?
I would recommend running a lot of MySQL instances with a lot of heavily
utilised InnoDB tables. During a recovery you should at least see the
latency rise. Maybe use one of the tools here:
https://dev.mysql.com/downloads/benchmarks.html
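For example, with sysbench (not necessarily one of the tools on that page; the
host, credentials and sizes are placeholders, and the syntax assumes
sysbench 1.0+):

    sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
        --mysql-password=secret --mysql-db=sbtest --tables=16 --table-size=1000000 prepare
    sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
        --mysql-password=secret --mysql-db=sbtest --tables=16 --table-size=1000000 \
        --threads=16 --time=600 --report-interval=10 run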
Regards,
Josef
On 7 Jan 2016 16:
Hi,
Have you by any chance disabled automatic crushmap updates in your
ceph config?
osd crush update on start = false
If so, and you move disks between hosts, the osds won't update their
position/host in the crushmap, even when the crushmap no longer
reflects reality.
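If that is what happened, a rough sketch of the two possible fixes (the weight
and names are taken from the sample output earlier in the thread):

    # either re-enable automatic placement in ceph.conf ...
    [osd]
    osd crush update on start = true

    # ... or put the osd where it belongs by hand:
    ceph osd crush create-or-move osd.4 0.72769 root=default host=cpn1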
Regards,
Mart
Hello,
just in case I'm missing something obvious, there is no reason a pool
called aptly "ssd" can't be used simultaneously as a regular RBD pool and
for cache tiering, right?
Regards,
Christian
--
Christian Balzer                Network/Systems Engineer
ch...@gol.com Global
Hi,
On 01/08/2016 08:07 AM, Christian Balzer wrote:
Hello,
just in case I'm missing something obvious, there is no reason a pool
called aptly "ssd" can't be used simultaneously as a regular RBD pool and
for cache tiering, right?
AFAIK the cache configuration is stored in the pool entry itself (
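For reference, the tiering relationship itself is created with commands like
these ('rbd' as the backing pool is only an example), and the resulting
settings are visible in the pool entries of ceph osd dump:

    ceph osd tier add rbd ssd
    ceph osd tier cache-mode ssd writeback
    ceph osd tier set-overlay rbd ssd
    ceph osd dump | grep pool   # the 'ssd' entry now carries tier_of, cache_mode, etc.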