g, so am having
trouble searching for answers.
Have a great weekend, thank you for your time either way,
~Joshua West
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hey Folks,
- For background, I am working with a small (home-lab) ceph cluster
on a proxmox cluster.
- Proxmox uses a shared cluster storage to pass configuration files
around, including ceph.conf
- All nodes are connected with Mellanox connectx-3 (mlx4_core) 56GbE
cards connected via qsfp switc
game plan from here.
Joshua West
~Small Cluster Hobby User
Thanks Patrick,
Similar to Robert, when trying that, I simply receive "Error EINVAL:
adding a feature requires a feature string" 10 times.
I attempted to downgrade, but wasn't able to get my mons to come back
up, as they had Quincy-specific "mon data structure changes" or
something
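For anyone hitting the same EINVAL later: the error means the command was invoked without a feature name argument. A minimal sketch of the invocation, assuming the command under discussion was `ceph fs required_client_features` (the fs name `cephfs` and the feature name `reply_encoding` here are illustrative placeholders, not from this thread):

```
# Add a required client feature by name -- the "feature string"
# the error message is asking for:
ceph fs required_client_features cephfs add reply_encoding

# Remove it again:
ceph fs required_client_features cephfs rm reply_encoding
```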
Hey Gregory, Thank you for your response.
Understood! This tells me I am approaching the issue from the wrong
angle, I suppose.
Thank you!
Josh
On Wed, Sep 1, 2021 at 8:26 AM Gregory Farnum wrote:
>
> On Wed, Sep 1, 2021 at 5:40 AM Joshua West wrote:
> >
> > Hello,
>
Hello,
5 node cluster, with co-located mons, mgrs, mds, and osds
Each node has a:
- 192.168.xx.xx 1Gb/s connection
- 10.xx.yy.xx 10Gb/s connection
- 10.aa.yy.xx 50Gb/s connection
- and a couple of unused ethernet ports
192 and 10.aa both have a switch dedicated to the network, and 10.xx
uses
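In case it helps anyone reading along: the usual way to pin Ceph to specific links in a multi-network layout like this is the `public_network` / `cluster_network` pair in ceph.conf. A minimal sketch, assuming the 10.aa 56GbE network is meant for replication and the 10.xx 10GbE network for client/mon traffic (the subnets below keep the thread's placeholders and are not real values):

```
[global]
# client and MON traffic on the 10GbE network
public_network = 10.xx.yy.0/24
# OSD replication and recovery on the 56GbE network
cluster_network = 10.aa.yy.0/24
```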
Related: Where can I find MDS numeric state references for ceph mds
set_state GID ?
Like a dummy I accidentally upgraded to the ceph dev branch (quincy?),
and have been having nothing but trouble since. This wasn't actually
intentional; I was trying to implement a PR which was expected to
bring
Anyone know how best to get confirmation from the Ceph team if they
would have any issue with a user forum being set up?
--> I am toying with the idea of setting one up.
Josh
On Thu, Aug 5, 2021 at 1:34 AM Janne Johansson wrote:
>
> Den mån 26 juli 2021 kl 16:56 skrev :
> > and there's an irc c
g but awesome, but because,
frankly, I wasn't even aware they still existed.
Josh
Joshua West
President
403-456-0072
CAYK.ca
On Mon, Jul 26, 2021 at 1:12 PM Yosh de Vos wrote:
>
> Hi Marc, seems like you had a bad night's sleep right?
> There is just so much wrong with that rep
ceph pool size 1 (for temporary and expendable data) still using 2X storage?
Hey Ceph Users!
With all the buzz around chia coin, I want to dedicate a few TB to
storage mining, really just to play with the chia CLI tool, and learn
how it all works.
As the whole concept is about dedicating disk sp
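On the size-1 question itself: a pool can keep reporting 2X raw usage if its replica count never actually changed. A quick way to check, sketched with real Ceph commands but a placeholder pool name `chia`:

```
# Confirm what the pool's replication factor actually is:
ceph osd pool get chia size

# Force a single replica; recent releases require the override
# flag because size 1 means no redundancy at all:
ceph osd pool set chia size 1 --yes-i-really-mean-it
ceph osd pool set chia min_size 1

# Then compare logical (STORED) vs raw (USED) consumption:
ceph df detail
```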
ading I go! haha
Michael, Thank you for your help earlier. Hopefully this little saga
is useful to someone in future too!
Joshua
On Wed, Apr 14, 2021 at 7:08 AM Joshua West wrote:
>
> Additional to my last note, I should have mentioned, I am exploring
> options to delete the damaged data
t of
filepaths+filenames for cephfs?
My current plan is to get that list, and simply brute force attempting
to copy all files, but with each copy in its own thread + timeout.
Does this make sense?
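The per-file-timeout plan can be sketched roughly like this (an illustration, not from the thread; it shells out to `cp` via `subprocess.run` so that a copy blocking on a damaged cephfs path can actually be killed on timeout, which a plain Python thread cannot be):

```python
import concurrent.futures
import os
import subprocess

def copy_with_timeouts(file_list, dest_dir, timeout=30, workers=4):
    """Brute-force copy: each file gets its own worker and a hard
    per-copy timeout, so one blocking path cannot hang the whole run."""

    def copy_one(src):
        dst = os.path.join(dest_dir, os.path.basename(src))
        try:
            # subprocess.run kills the child on timeout, giving a real
            # hard timeout even if the read blocks inside cephfs.
            subprocess.run(["cp", "-p", src, dst],
                           timeout=timeout, check=True,
                           stderr=subprocess.DEVNULL)
            return True
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            return False

    copied, failed = [], []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for src, ok in zip(file_list, pool.map(copy_one, file_list)):
            (copied if ok else failed).append(src)
    return copied, failed
```

Files that time out or error land in `failed` for a second pass or manual inspection.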
Joshua
On Wed, Apr 14, 2021 at 6:03 AM Joshua West wrote:
>
> Just working this throug
;t revealed the OID
yet.
Joshua
On Fri, Apr 9, 2021 at 12:15 PM Joshua West wrote:
>
> Absolutely!
>
> Attached the files; they're not duplicates, but revised (as I tidied up
> what I could to make things easier)
>
> > Cor
se on another pool, I am
not confident that this approach is safe?
-- cephfs currently blocks when attempting to impact every third file
in the EC directory. Once I delete the pool, how will I remove the
files if even `rm` is blocking?
Thank you for your