+1 for Proxmox. (I'm a contributor and I can say that the Ceph support is very good.)
----- Original Message -----
From: jes...@krogh.cc
To: "ceph-users"
Sent: Friday, 5 April 2019 21:34:02
Subject: [ceph-users] VM management setup
Hi. I know this is a bit off-topic, but I'm seeking recommendations
and advice
Ah, never mind, I found ceph mon set addrs and I'm good to go.
Aaron
> On Apr 24, 2019, at 4:36 PM, Aaron Bassett
> wrote:
>
Yeah, ok, that's what I guessed. I'm struggling to get my mons to listen on both
ports. On startup they report:
2019-04-24 19:58:43.652 7fcf9cd3c040 -1 WARNING: 'mon addr' config option
[v2:172.17.40.143:3300/0,v1:172.17.40.143:6789/0] does not match monmap file
continuing with monmap configuration
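For anyone hitting the same thing, a minimal sketch of the fix via the ceph mon set-addrs command mentioned above (the mon name "a" is an assumption, and the address is taken from the log; adjust for your monmap):

    # rewrite the monmap entry so the mon binds both the msgr2 and the legacy port
    ceph mon set-addrs a [v2:172.17.40.143:3300,v1:172.17.40.143:6789]

    # and point clients at both addresses in ceph.conf
    [global]
    mon_host = [v2:172.17.40.143:3300,v1:172.17.40.143:6789]

The mon may need a restart afterwards so it actually binds 6789 again.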
AFAIK, the kernel clients for CephFS and RBD do not support msgr2 yet.
On Wed, Apr 24, 2019 at 4:19 PM Aaron Bassett
wrote:
Hi,
I'm standing up a new cluster on nautilus to play with some of the new
features, and I've somehow got my monitors only listening on msgrv2 port (3300)
and not the legacy port (6789). I'm running kernel 4.15 on my clients. Can I
mount cephfs via port 3300 or do I have to figure out how to get
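As noted above, the 4.15 kernel client only speaks the legacy protocol, so the mount has to go through port 6789 once the mons listen there again. A rough sketch (the mon address is taken from the log earlier in the thread; the client name and secret file path are placeholders):

    # kernel CephFS mount over the legacy v1 port
    mount -t ceph 172.17.40.143:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret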
Hello,
I would also recommend Proxmox.
It is very easy to install and to manage your kvm/lxc, with support for a huge
number of possible storage backends.
Just my 2 cents
Hth
- Mehmet
On 6 April 2019 at 17:48:32 CEST, Marc Roos wrote:
>
>We also have a hybrid ceph/libvirt-kvm setup, using some scripts to d
My cluster hit a big error this morning.
Many OSDs committed suicide because of a heartbeat_map timeout.
When I start all the OSDs manually, it looks fine,
but when I run rbd info on an image from rbd ls, it says "No such file or
directory".
And then I used the approach in
https://fnordahl.com/2017/04/17/ceph-rbd-volume-he
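When rbd info returns "No such file or directory" for an image that rbd ls still shows, it is usually the image's header object that is missing. A rough sketch of how to check, assuming a format-2 image in a pool named rbd (<image> and <id> are placeholders):

    # look up the image id recorded in the pool's rbd_directory object
    rados -p rbd getomapval rbd_directory name_<image>
    # then see whether the matching header object still exists
    rados -p rbd stat rbd_header.<id>

If the header object is gone, the recovery steps in the linked post are the way to rebuild it.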
Hi,
we're having an issue on one of our clusters: while trying
to remove a cache tier, manually flushing the cache always
ends up with an error:
rados -p ssd-cache cache-flush-evict-all
.
.
.
failed to flush /rb.0.965780.238e1f29.1641: (2) No such file or
directory
rb.0.965780.238e1f2
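A sketch of narrowing this down object by object (the pool name is taken from the command above; the object name is truncated here, so <objname> stands in for it):

    # is the object still listed in the cache pool?
    rados -p ssd-cache ls | grep <objname>
    # leftover snapshot clones can keep an object from flushing
    rados -p ssd-cache listsnaps <objname>
    # try flushing and evicting just that one object
    rados -p ssd-cache cache-flush <objname>
    rados -p ssd-cache cache-evict <objname>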
Hi,
I remember that there was some bug with CephFS when upgrading from 12.2.5.
Is it safe to upgrade the cluster now?
Thanks
Janne Johansson wrote on Wednesday, 24 April 2019 at 16:06:
On Wed, 24 Apr 2019 at 08:46, Zhenshi Zhou wrote:
> Hi,
>
> I've been running a cluster for a period of time, and I find that the cluster
> has been running into an unhealthy state often recently.
>
> With 'ceph health detail', one or two PGs are inconsistent. What's
> more, the PGs in the wrong state each day are not placed on the sa
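The usual way to chase inconsistent PGs down, as a sketch (pool and PG ids below are placeholders):

    # list which PGs are inconsistent
    ceph health detail
    rados list-inconsistent-pg <pool>
    # show which objects and shards disagree within a given PG
    rados list-inconsistent-obj <pgid> --format=json-pretty
    # once the bad copy is understood, ask the OSDs to repair it
    ceph pg repair <pgid>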