Will do, thanks!
On Wed, Jul 22, 2020 at 12:27 steven prothero <
ste...@marimo-tech.com> wrote:
> Hello,
>
> Yes, make sure Docker & NTP are set up on the new node first.
> Also, make sure the public key is added on the new node and the firewall
> is allowing it through.
>
No. You need to recreate the OSD.
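For reference, a rough sketch of the two pieces (untested; osd.111 is just a placeholder id):
ceph config set osd bluestore_prefer_deferred_size_hdd 65536            # persist the new value
ceph config show osd.111 | grep bluestore_prefer_deferred_size_hdd      # what the running OSD sees
Per the above, the new value only actually applies once the OSD is recreated.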
On Wed, Jul 22, 2020 at 2:52 Frank Ritchie wrote:
> Hi all,
>
> Is it safe to change bluestore_prefer_deferred_size_hdd for an OSD at
> runtime?
>
> thx
> Frank
Hello,
Yes, make sure Docker & NTP are set up on the new node first.
Also, make sure the public key is added on the new node and the firewall
is allowing it through.
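Roughly, the prep looks like this (untested sketch; assumes root SSH, chrony for time sync, and the hostname ceph2 from your mail):
# from the admin node: push the cluster's SSH key to the new host
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
# on the new host: container runtime and time sync running
ssh root@ceph2 'systemctl enable --now docker chronyd'
# then add it
ceph orch host add ceph2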
Hello,
I use Docker; I will check NTP.
Does the new node need to be installed?
Hello,
Is podman installed on the new node? Also make sure NTP time sync
is on for the new node. ceph orch checks those on the new node and
then fails with an error like the one you see if the host is not ready.
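A quick way to check both by hand (assuming systemd, and either podman or docker):
ssh root@ceph2 'podman --version || docker --version'
ssh root@ceph2 'timedatectl | grep -i synchronized'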
Dear Support,
I need help adding a node when installing Ceph with cephadm.
When I run ceph orch add host ceph2 I get:
Error ENOENT: New host ceph2 (ceph2) failed check: ['Traceback (most
recent call last):',
Please help me fix it.
Thanks & Best Regards
David
Hi Bobby,
You can use a SystemTap script to get this; use statistical aggregates.
My SystemTap GitHub: https://github.com/gmayyyha/stap-tools
e.g.
func-latency.stp
-
@define BIN_MDS %( "/tiger/source/ceph/build/bin/ceph-mds" %)
global ms
global count
global latency
# FUNC_TO_TRACE is a placeholder; substitute the function you want to profile
probe process(@BIN_MDS).function("FUNC_TO_TRACE").call
{ ms[tid()] = gettimeofday_us(); count++ }
probe process(@BIN_MDS).function("FUNC_TO_TRACE").return
{ latency <<< gettimeofday_us() - ms[tid()]; delete ms[tid()] }
probe end
{ printf("calls: %d\n", count); print(@hist_log(latency)) }
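To run it against a live ceph-mds, something like (untested):
sudo stap -v func-latency.stp
Ctrl-C stops it; the end probe then prints the call count and latency histogram.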
Hi all,
Is it safe to change bluestore_prefer_deferred_size_hdd for an OSD at runtime?
thx
Frank
Marcel;
Yep, you're right. I focused in on the last op, and missed the ones above it.
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
dhils...@performair.com
www.PerformAir.com
Hi!
I have, or actually had, a similar problem.
I found the solution on this page:
https://segmentfault.com/a/119023292938
I used the commands:
> ceph auth add client.crash.nodeX.xxx.com mgr "profile crash" mon "profile
> crash"
Hi Dominic,
I must say that I inherited this cluster and did not develop the crush
rule used. The rule reads:
"rule_id": 1,
"rule_name": "hdd",
"ruleset": 1,
"type": 1,
"min_size": 2,
"max_size": 3,
"steps": [
{
"o
Marcel;
To answer your question, I don't see anything that would be keeping these PGs
on the same node. Someone with more knowledge of how the Crush rules are
applied, and the code around these operations, would need to weigh in.
I am somewhat curious though; you define racks, and even rooms i
Dominic
The crush rule dump and tree are attached (hope that works). All pools use
crush_rule 1
Marcel
> Marcel;
>
> Sorry, could also send the output of:
> ceph osd tree
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International, Inc.
> dhils...@p
Marcel;
Sorry, could also send the output of:
ceph osd tree
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
dhils...@performair.com
www.PerformAir.com
Marcel;
Thank you for the information.
Could you send the output of:
ceph osd crush rule dump
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
dhils...@performair.com
www.PerformAir.com
Hi Dominic,
This cluster is running 14.2.8 (Nautilus).
There are 172 OSDs divided over 19 nodes.
There are currently 10 pools.
All pools have 3 replicas of the data.
There are 3968 PGs (the cluster is not yet fully in use; the number of
PGs is expected to grow).
Marcel
> Marcel;
>
> Short answer; yes
Hi Ben,
we are not using EC pools on that cluster.
The OSD-out behavior almost stopped when we solved memory issues (less memory
allocated to the OSDs).
We are not working on that cluster anymore, so we have no other info about
that problem.
Jan
On 20/07/2020 07.59, Benoît Knecht wrote:
Hi Jan,
Hi everyone, we have a Ceph cluster for object storage only; the RGWs are
accessible from the internet, and everything is OK.
Now, one of our teams/clients requires that their data never be
accessible from the internet.
In case of any security bug/breach/whatever, they want to limit the
Marcel;
Short answer: yes, it might be expected behavior.
PG placement is highly dependent on the cluster layout, and CRUSH rules. So...
Some clarifying questions.
What version of Ceph are you running?
How many nodes do you have?
How many pools do you have, and what are their failure domains?
Quick question: Is there a way to change the frequency of heap dumps? On this
page http://goog-perftools.sourceforge.net/doc/heap_profiler.html a function
HeapProfilerSetAllocationInterval() is mentioned, but no other way of
configuring this. Is there a config parameter or a ceph daemon call to
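For what it's worth, gperftools itself seems to honor an environment variable for this (an assumption worth verifying; it must be set in the daemon's environment before start):
HEAP_PROFILE_ALLOCATION_INTERVAL=1073741824   # bytes allocated between dumps
On-demand dumps work regardless of the interval:
ceph tell osd.0 heap dump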
Just sharing my experience:
- storing photos for Photoways, now Photobox, in the early 2000s. A bug in
the HP storage enclosure erased the whole RAID group. 3 weeks to
recalculate all the thumbnails with a dedicated server specialized in
resizing images.
- a little EMC storage with something like 10 disks. 3 for the
Thanks so much to the Ceph teams and community; all your efforts are amazing.
Hi
I'm using the Ceph Dashboard in Firefox on a MacBook, and lately it has been hanging
with "A web page is loading slowly - stop it or wait". Some pages do load,
but some show that warning and stop loading.
I'm on "Firefox Extended Support release 68.10.0esr (64-bit)"
Anyone else seen this issue?
Hi list,
I ran a test with marking an OSD out versus setting its crush weight to 0,
and compared which PGs were sent to which OSDs. The crush map has 3 rooms. This
is what happened.
On ceph osd out 111 (first room; this node has OSDs 108 - 116), PGs were
sent to the following OSDs:
NR PGs  OSD
2
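For reference, the two operations compared were:
ceph osd out 111                    # sets the override (reweight) to 0; crush weight unchanged
ceph osd crush reweight osd.111 0   # zeroes the crush weight itself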
>> I'm a happy user since 2014 and I never lost any data. When I remember
>> how painful the firmware upgrades of EMC, NetApp, HP storage were, and the
>> time spent recovering lost data. Ceph is just amazing!
Interesting, I always wondered how Ceph compares to proprietary
solutions. I am g
Hello!
I've run into a bit of an issue with one of our radosgw production clusters.
The setup is two radosgw nodes behind haproxy load balancing, which in turn are
connected to the Ceph cluster. Everything is running 14.2.2, so Nautilus. It's tied
to an OpenStack cluster, so Keystone as authentication
And to put it more precisely: I would like to figure out how many times
this particular function is called during the execution of the program.
Bobby !
On Tue, Jul 21, 2020 at 1:24 PM Bobby wrote:
>
> Hi,
>
> I am trying to profile the number of invocations to a particular function
> in Cep
Hi Steven,
IMO your statement about "not supporting higher block sizes" is too
strong. In my experience, excessive space usage for EC pools tends to
depend on the write access pattern, input block sizes and/or object sizes.
Hence I'm pretty sure this issue isn't present/visible for every cluster
Hi,
I am trying to profile the number of invocations of a particular function
in the Ceph source code. I have instrumented the code with timing functions.
Can someone please share the script for compiling and running the Ceph
source code? I am struggling with it. That would be a great help!
BR
Bobby !
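For reference, the usual sequence from the Ceph README is roughly (untested here; run from a ceph.git checkout):
./install-deps.sh        # install build dependencies
./do_cmake.sh            # configure a debug build under ./build
cd build && ninja        # compile
../src/vstart.sh -d -n   # start a throwaway dev cluster (-n new, -d debug)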
Hi,
I have created an issue to track this: https://tracker.ceph.com/issues/46653
Could you please tell me which version of Ceph you are using?
Thanks
Hi,
On a previously installed machine I get :
# rpm -qi ceph-selinux-14.2.10-0.el7.x86_64 |grep Build
Build Date : Thu 25 Jun 2020 08:08:52 PM CEST
Build Host : braggi01.front.sepia.ceph.com
# rpm -q --requires ceph-selinux-14.2.10-0.el7.x86_64 |grep selinux
libselinux-utils
selinux-policy-bas
Hello,
Ceph is the definitive solution for storage. That's all.
I'm a happy user since 2014 and I never lost any data. When I remember
how painful the firmware upgrades of EMC, NetApp, HP storage were, and
the time spent recovering lost data... Ceph is just amazing!
So many thanks to you guys. Th