Good morning folks,
As a newbie to Ceph, yesterday was the first time I configured my
CRUSH map, added a CRUSH rule, and created my first pool using this rule.
Since then the cluster has reported HEALTH_WARN with the following output:
~~~
$ sudo ceph status
  cluster:
    id: 47c108bd-db66-4197-
~~~
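For reference, the general shape of those steps looks something like this (a sketch only; the rule name "by-host", pool name "mypool", and PG counts are made-up placeholders, not the poster's actual values):
~~~
# Create a replicated CRUSH rule with a host failure domain
ceph osd crush rule create-replicated by-host default host

# Create a pool that uses the new rule (PG counts are illustrative)
ceph osd pool create mypool 64 64 replicated by-host

# Verify which rule the pool uses
ceph osd pool get mypool crush_rule
~~~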
Is downgrading from 12.2.7 to 12.2.5 an option? I'm still suffering
from very frequent OSD crashes.
My hopes are with 12.2.9, but hope hasn't always been my best strategy.
br
wolfgang
On 2018-08-30 19:18, Alfredo Deza wrote:
> On Thu, Aug 30, 2018 at 5:24 AM, Wolfgang Lendl
> wrote:
>> Hi Alfredo,
Hi, guys.
I ran a few tests and I see that performance is better with
osd_journal_aio=false for LV journals.
Setup:
2 servers x 4 OSD (SATA HDD + journal on SSD LV)
12.2.5, filestore
~~~
  cluster:
    id:     ce305aae-4c56-41ec-be54-529b05eb45ed
    health: HEALTH_OK

  services:
    mon: 2 daemons
~~~
On 09/04/2018 09:47 AM, Jörg Kastning wrote:
> My questions are:
>
> 1. What does active+undersized actually mean? I did not find anything
> about it in the documentation on docs.ceph.com.
http://docs.ceph.com/docs/master/rados/operations/pg-states/
active: Ceph will process requests to the placement group.
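A couple of commands that help inspect the PGs behind such a warning (the pool name is a placeholder):
~~~
# Show exactly which PGs are degraded/undersized and why
ceph health detail

# List PGs stuck in the undersized state
ceph pg dump_stuck undersized

# Check how many replicas the pool expects
ceph osd pool get <poolname> size
~~~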
It's mds_beacon_grace. Set that on the monitor to control the
replacement of laggy MDS daemons, and usually also set it to the same
value on the MDS daemon, where it is used by the daemon to hold off
on certain tasks if it hasn't seen a mon beacon recently.
John
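A minimal sketch of applying this at runtime on a Luminous-era cluster (the value 60 is only an example; also persist it in ceph.conf so it survives restarts):
~~~
# Raise mds_beacon_grace on the monitors (runtime change)
ceph tell mon.\* injectargs '--mds_beacon_grace 60'

# Set the same value on each MDS daemon (repeat per MDS; <name> is a placeholder)
ceph tell mds.<name> injectargs '--mds_beacon_grace 60'
~~~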
On Mon, Sep 3, 2018 at 9:26 AM W
On Tue, Sep 4, 2018 at 3:59 AM, Wolfgang Lendl
wrote:
> Is downgrading from 12.2.7 to 12.2.5 an option? I'm still suffering
> from very frequent OSD crashes.
> My hopes are with 12.2.9, but hope hasn't always been my best strategy.
12.2.8 just went out. I think that Adam or Radoslaw might have some
On Sun, Sep 2, 2018 at 3:01 PM, David Wahler wrote:
> On Sun, Sep 2, 2018 at 1:31 PM Alfredo Deza wrote:
>>
>> On Sun, Sep 2, 2018 at 12:00 PM, David Wahler wrote:
>> > Ah, ceph-volume.log pointed out the actual problem:
>> >
>> > RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv
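For reference, a sketch of the invocation that avoids that error, assuming /dev/storage/bluestore corresponds to a volume group "storage" with a logical volume "bluestore" (ceph-volume expects the vg/lv notation rather than the /dev path):
~~~
# Pass the logical volume as vg/lv instead of its /dev path
sudo ceph-volume lvm create --bluestore --data storage/bluestore
~~~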
Hello Lothar,
Thanks for your reply.
On 04.09.2018 at 11:20, Lothar Gesslein wrote:
By pure chance, 15 PGs are now actually replicated to all 3 OSDs, so they
have enough copies (clean). But the placement is "wrong"; Ceph would like
to move the data to different OSDs (remapped) if possible.
That
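To compare where a remapped PG currently lives with where CRUSH wants to put it, you can look at its up and acting sets (a sketch; 1.0 is a placeholder PG id):
~~~
# List the PGs that CRUSH would like to move
ceph pg ls remapped

# Show the up (target) and acting (current) OSD sets for one PG
ceph pg map 1.0
~~~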
On 09/03/2018 10:07 PM, Nhat Ngo wrote:
Hi all,
I am new to Ceph and we are setting up a new RadosGW and Ceph storage
cluster on Luminous. We are using only EC for our `buckets.data` pool
at the moment.
However, I just read the Red Hat Ceph Object Gateway for Production
article, and it
Hello
We are trying to use CephFS as storage for web graphics, such as
thumbnails and so on.
Is there any way to reduce the overhead on storage? On a test cluster we have
1 fs and 2 pools (meta and data) with replica size = 2:
objects: 1.02 M objects, 1.1 GiB
usage: 144 GiB used, 27 GiB / 172 GiB
Are you planning on using bluestore or filestore? The settings for
filestore haven't changed. If you're planning to use bluestore, there is a
lot of documentation in the Ceph docs, as well as a long history of
questions like this on the ML.
On Mon, Sep 3, 2018 at 5:24 AM M Ranga Swami Reddy
wrote:
I was confused about what could be causing this until Janne's email. I think
they're correct that the cluster is preventing pool creation due to too
many PGs per OSD. Double-check how many PGs you have in each pool and what
your defaults are for that.
On Mon, Sep 3, 2018 at 7:19 AM Janne Johansson wr
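A quick way to verify this (a sketch; in Luminous the default limit is mon_max_pg_per_osd = 200, and the daemon command assumes it is run on a monitor host whose mon id matches the short hostname):
~~~
# The PGS column shows how many PGs each OSD holds
ceph osd df

# pg_num / pgp_num and size for every pool
ceph osd pool ls detail

# The per-OSD limit enforced since Luminous
ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd
~~~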
Instead of manually weighting the OSDs, you can use the mgr module to
slowly add the OSDs and balance your cluster at the same time. I believe
you can control the module by telling it a maximum percent of misplaced
objects, or other similar metrics, to control adding in the OSD, while also
prevent
This was the issue: the pool could not be created because it would have
exceeded the new (Luminous) limitation on PGs per OSD.
On Tue, Sep 4, 2018 at 10:35 AM David Turner wrote:
> I was confused what could be causing this until Janne's email. I think
> they're correct that the cluster is preventing
We're glad to announce the next point release in the Luminous v12.2.X
stable release series. This release contains a range of bugfixes and
stability improvements across all the components of ceph. For detailed
release notes with links to tracker issues and pull requests, refer to
the blog post at
You could probably cut the overhead in half with the inline data
feature:
http://docs.ceph.com/docs/master/cephfs/experimental-features/#inline-data
However, that is an experimental feature.
CephFS is unfortunately not very good at storing lots of small files
in a storage-efficient manner :(
Pau
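For what it's worth, a sketch of enabling it on a filesystem named "cephfs" (a placeholder name; because the feature is experimental, some releases also require an explicit confirmation flag):
~~~
# Store small file data inline in the inode (metadata pool) instead of
# allocating objects in the data pool; experimental, so some releases
# require an additional confirmation flag.
ceph fs set cephfs inline_data true
~~~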
You need to re-deploy OSDs for bluestore_min_alloc_size to take effect.
> On 4.09.2018, at 18:31, andrew w goussakovski wrote:
>
> Hello
>
> We are trying to use cephfs as storage for web graphics, such as
> thumbnails and so on.
> Is there any way to reduce overhead on storage? On a test cluster
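A rough sketch of such a redeploy (assumptions: the option must be in ceph.conf before the OSD is recreated, because it only applies at creation time; /dev/sdX is a placeholder device; and 4096 is merely an illustrative value):
~~~
# In ceph.conf, before recreating the OSD:
#   [osd]
#   bluestore_min_alloc_size_hdd = 4096

# After marking the OSD out and purging it from the cluster,
# wipe and recreate it so the new value takes effect
sudo ceph-volume lvm zap /dev/sdX
sudo ceph-volume lvm create --bluestore --data /dev/sdX
~~~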
Hi,
I created a Ceph cluster manually (not using ceph-deploy). When I reboot
a node, the OSDs don't come back up because the OS doesn't know that it
needs to bring up the OSDs. I am running this on Ubuntu 16.04. Is there a
standardized way to start the Ceph OSDs on node reboot?
"sudo start c
Hi Eugen.
Just tried everything again here by removing the /sda4 partitions and
leaving it so that either salt-run proposal-populate or salt-run state.orch
ceph.stage.configure could try to find the free space on the partitions to
work with: without success again. :(
Just to make things clear: are
Hi Martin,
hope this is still useful, despite the lag.
On Fri, Jun 29, 2018 at 01:04:09PM +0200, Martin Palma wrote:
Since Prometheus uses a pull model over HTTP for collecting metrics,
what are the best practices to secure these HTTP endpoints?
- With a reverse proxy with authentication?
This
The prometheus plugin currently skips histogram perf counters. The
representation in Ceph is not compatible with Prometheus' approach (IIRC).
However, I believe most, if not all, of the perf counters should be exported as
long-running averages. Look for metric pairs that are named some_metric_name
I'm not an expert when it comes to cephmetrics, but I think (at least until very
recently) cephmetrics relies on other exporters besides the mgr module and the
node_exporter.
On Mon, Aug 27, 2018 at 01:11:29PM -0400, Steven Vacaroaia wrote:
Hi
has anyone been able to use Mimic + cephmetric