On 5/17/20 4:49 PM, Harald Staub wrote:
> tl;dr: this cluster is up again, thank you all (Mark, Wout, Paul
> Emmerich off-list)!
>
Awesome!
> First we tried to lower max- and min_pg_log_entries on a single running
> OSD, without and with restarting it. There was no effect. Maybe because
> of the unclean state of the cluster.
I'm experiencing a similar thing with Cephadm when attempting to deploy rgw.
Following the instructions at
https://ceph.readthedocs.io/en/latest/cephadm/install/#deploy-rgws results in
the container failing to start up immediately. Digging through the logs, I found it is
because it can't bind to port 8
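For reference, roughly how one might track down and work around a port conflict like this, assuming the clash is on the RGW frontend port (the port number, config section and service name below are placeholders, not taken from the report):
  # see which process already holds the port radosgw is trying to bind
  ss -tlnp | grep ':80'
  # move the rgw frontend to a free port; the config section must match how
  # your rgw daemon reads its config, then restart the service
  ceph config set client.rgw.<realm>.<zone> rgw_frontends "beast port=8080"
  ceph orch restart rgw.<realm>.<zone>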
Hi,
Can anyone help with this?
Regs,
Icy
On Tue, 12 May 2020 at 10:44, icy chan wrote:
> Hi,
>
> I had configured a cache tier with below parameters:
> cache_target_dirty_ratio: 0.1
> cache_target_dirty_high_ratio: 0.7
> cache_target_full_ratio: 0.9
>
> The cache tier did improve the performa
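For context, those thresholds are set per cache pool; a minimal sketch of how they are applied (the pool name "hot-pool" is only a placeholder):
  ceph osd pool set hot-pool cache_target_dirty_ratio 0.1
  ceph osd pool set hot-pool cache_target_dirty_high_ratio 0.7
  ceph osd pool set hot-pool cache_target_full_ratio 0.9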
Clarification:
On Sun, May 17, 2020 at 10:36:41PM +0200, Martin Millnert wrote:
> Hi Jeff,
>
> Ran into the very same issue. Filed a bug-report at
> https://tracker.ceph.com/issues/45574
> ceph 15.2.1 on up-to-date Debian Buster.
>
> TL;DR: The way ceph-mgr-rook's RookOrchestrator class interacts with
> the python3-numpy package is borked.
Hi Jeff,
Ran into the very same issue. Filed a bug-report at
https://tracker.ceph.com/issues/45574
ceph 15.2.1 on up-to-date Debian Buster.
TL;DR: The way ceph-mgr-rook's RookOrchestrator class interacts with
the python3-numpy package is borked.
Result: Cluster cannot start, since 'devicehealth' pl
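For anyone hitting the same failure, a minimal sketch of how one might confirm which mgr module is failing, assuming a package-based install where the mgr systemd unit is named ceph-mgr@<id> (the unit name is an assumption, not from the report):
  # list mgr modules and check which ones report errors
  ceph mgr module ls
  # health detail usually names the failing module
  ceph health detail
  # the full Python traceback ends up in the active mgr's log
  journalctl -u ceph-mgr@<id> -n 200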
Hi Martin,
The Samsung Evo is just my system drive on my test host. This is my server setup:
3 nodes (originally these were render workstations that are not in use right
now)
Each node runs MON, MGR and OSD
Mainboard: ASRock TRX40 Creator
CPU: AMD Ryzen Threadripper 3960X, 24 cores, 3.8 GHz
RAM: 2 x Sa
Hi,
The CRUSH rule is "replicated" and min_size is 2. I am trying to test
multiple volume configs in a single filesystem
using file layouts.
I have created the metadata pool with rep 3 (min_size 2 and a replicated CRUSH
rule) and the data pool with rep 3 (min_size 2 and a replicated CRUSH rule), and
also I
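A minimal sketch of the file-layout approach being described here, with placeholder filesystem, pool and mount names:
  # add an extra data pool to the filesystem
  ceph fs add_data_pool cephfs cephfs_data_rep3
  # pin a directory to that pool via its layout attribute
  setfattr -n ceph.dir.layout.pool -v cephfs_data_rep3 /mnt/cephfs/projects
  # verify the layout
  getfattr -n ceph.dir.layout /mnt/cephfs/projects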
Hello Moritz,
drop the EVO disk and use an SSD that works well with Ceph. For example, just
use a PM883 / PM983 from the same vendor and you will get a huge performance
increase.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerge
I have OSDs on the brain … that line should have read:
systemctl restart ceph-{fsid}@mon.{host}.service
> On May 17, 2020, at 10:08 AM, Sean Johnson wrote:
>
> In case that doesn’t work, there’s also a systemd service that contains the
> fsid of the cluster.
>
> So, in the case of a mon service you can also run:
In case that doesn’t work, there’s also a systemd service that contains the
fsid of the cluster.
So, in the case of a mon service you can also run:
systemctl restart ceph-{fsid}@osd.{host}.service
Logs are correspondingly available via journalctl:
journalctl -u ceph-{fsid}@mon.{host}.service
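In case it helps, the {fsid} placeholder in those unit names can be looked up directly; a quick sketch, assuming a cephadm-managed host:
  # print the cluster fsid
  ceph fsid
  # or list the locally deployed daemons together with their systemd units
  cephadm ls | grep systemd_unit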
tl;dr: this cluster is up again, thank you all (Mark, Wout, Paul
Emmerich off-list)!
First we tried to lower max- and min_pg_log_entries on a single running
OSD, without and with restarting it. There was no effect. Maybe because
of the unclean state of the cluster.
Then we tried ceph-objects
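For reference, this is roughly how those pg log settings are changed at runtime (the values below are only examples; as noted above, in this cluster it had no visible effect, probably because of the unclean state):
  # cluster-wide, via the config database
  ceph config set osd osd_max_pg_log_entries 3000
  ceph config set osd osd_min_pg_log_entries 3000
  # or injected into a single running OSD for a quick test
  ceph tell osd.0 injectargs '--osd_max_pg_log_entries=3000 --osd_min_pg_log_entries=3000'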
Hi Marc,
thank you very much for your feedback; that is actually what I am looking for
(design advice and feedback). I also wanted to get in touch with the community
because right now I am on my own with this project, with no experience at all.
But I also wanted to get into it first and learn
I was just reading your post, and started wondering why you posted it. I
do not see a clear question, and you also do not share test results (from
your NAS vs CephFS SMB). So maybe you would like some attention in this covid
social distancing time? ;)
Anyway, I have been 'testing' with ceph for 2.5
Hi,
my name is Moritz and I am working for a 3D production company. Because of the
coronavirus I have too much time left and also too much unused hardware. That
is why I started playing around with Ceph as a fileserver for us. Here I want
to share my experience for all those who are interested.
Hello Samuel,
why do you want to do that?
Just remove the RBD image - as long as your image is distributed over random OSDs, there
is no way to recover a deleted image.
...2 cents
Mehmet
On 12 May 2020 12:55:22 MESZ, "huxia...@horebdata.cn"
wrote:
>Hi, Ceph folks,
>
>Is there a rbd command, or any ot
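A minimal sketch of the plain removal Mehmet suggests, plus the RBD trash feature as a softer alternative that still allows a restore before the image is purged (pool and image names are placeholders):
  # delete the image outright
  rbd rm mypool/myimage
  # or move it to the trash first, and restore it by id if needed
  rbd trash mv mypool/myimage
  rbd trash ls mypool
  rbd trash restore mypool/<image-id>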