Did you mean SSD? 12 x 5TB solid-state disks? Or is that “Spinning Disk Drive?” Do you have any SSDs/NVMe you can use?
From: gagan tiwari
Sent: Wednesday, September 14, 2022 1:54 AM
To: ceph-users@ceph.io
Subject: [ceph-users] ceph deployment best practice
Hi Guys, I am new to Ceph and
On Wed, 14 Sep 2022 at 08:54, gagan tiwari wrote:
> Hi Guys,
> I am new to Ceph and storage. We have a requirement of
> managing around 40T of data which will be accessed by around 100 clients
> all running RockyLinux9.
>
> We have a HP storage server with 12 SDD of 5T each and hav
Sorry, I meant SSD (solid-state disks).
Thanks,
Gagan
On Wed, Sep 14, 2022 at 12:49 PM Janne Johansson wrote:
> On Wed, 14 Sep 2022 at 08:54, gagan tiwari wrote:
> > Hi Guys,
> > I am new to Ceph and storage. We have a requirement of
> > managing around 40T of data which will
Hi,
On 9/13/22 16:33, Wesley Dillingham wrote:
what does "ceph pg ls scrubbing" show? Do you have PGs that have been
stuck in a scrubbing state for a long period of time (many hours, days,
weeks, etc.)? This will show in the "SINCE" column.
the deep scrubs have been running for some minutes to a
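For anyone following along, a minimal sketch of that check (the exact columns can vary a bit between releases):
$ ceph pg ls scrubbing
# look at the STATE and SINCE columns; a PG that has been scrubbing for days
# or weeks rather than minutes is the one to dig into
$ ceph pg ls | grep -c scrubbing    # rough count of PGs currently scrubbing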
On Wed, 14 Sep 2022 at 10:14, gagan tiwari wrote:
>
> Sorry, I meant SSD (solid-state disks).
>> > We have a HP storage server with 12 SDD of 5T each and have set-up hardware
>> > RAID6 on these disks.
>>
>> You have only one single machine?
>> If so, run zfs on it and export storage as NFS.
The
Yes. To start with, we only have one HP server with DAS, which I am planning
to set up Ceph on. We can add one more server later.
But I think you are correct. I will use a ZFS file system on it and NFS-export
all the data to all clients. So, please advise me whether I should use
RAID6 with ZFS / NF
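If it helps, a rough sketch of what that single-box ZFS+NFS setup could look like, assuming the 12 disks are handed to the OS as raw devices (i.e. dropping hardware RAID6 in favour of raidz2); the device names and export network below are placeholders, not from your setup:
$ zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
$ zfs create tank/data
$ zfs set compression=lz4 atime=off tank/data
$ zfs set sharenfs="rw=@192.168.1.0/24" tank/data    # or manage exports via /etc/exports
$ systemctl enable --now nfs-server                  # on RHEL/Rocky-style hosts
raidz2 gives the same two-disk redundancy as RAID6 while letting ZFS see the individual disks, which it needs for checksum-based repair.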
Hello,
We recently built a similar config here, with clustered Samba (CTDB) on
top of CephFS (under Pacific) via LXC containers (Rocky Linux) under
Proxmox (7.2), for 35,000 users authenticated against Active Directory.
It's used for personal home directories and shared directories.
The LXC Proxmox Samba
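For reference, the core of that kind of setup usually boils down to something like the following (a hedged sketch only; the realm, paths and node IPs are made-up placeholders, and exact option names depend on your Samba/CTDB versions):
# /etc/ctdb/nodes -- one private cluster IP per Samba node
192.168.10.11
192.168.10.12
# /etc/ctdb/ctdb.conf -- recovery lock placed on the shared CephFS mount
[cluster]
    recovery lock = /mnt/cephfs/ctdb/.reclock
# /etc/samba/smb.conf (excerpt)
[global]
    clustering = yes
    security = ads
    realm = EXAMPLE.ORG
    workgroup = EXAMPLE
[homes]
    path = /mnt/cephfs/homes/%U
    read only = no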
On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs -
On Wed, 14 Sep 2022 at 11:08, gagan tiwari wrote:
> Yes. To start with, we only have one HP server with DAS, which I am planning
> to set up Ceph on. We can add one more server later.
>
> But I think you are correct. I will use a ZFS file system on it and NFS-export
> all the data to all clien
Hi all,
I think there's an error in the documentation:
https://docs.ceph.com/en/quincy/install/manual-deployment/
I'm currently trying the manual deployment because ceph-deploy
unfortunately doesn't seem to exist anymore, and under step 19 it says
you should run "sudo ceph -s". That doesn't seem t
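A quick way to narrow down why "sudo ceph -s" misbehaves at that step, as a rough sketch (the hostname expansion assumes the mon id matches the short hostname, as in the manual-deployment guide):
$ sudo systemctl status ceph-mon@$(hostname -s)   # is the monitor actually running?
$ ls /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring   # does the CLI have a config and admin keyring?
$ sudo ceph -s --connect-timeout 10   # fail fast instead of hanging if the mon is unreachable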
Hi,
I'm currently trying the manual deployment because ceph-deploy
unfortunately doesn't seem to exist anymore, and under step 19 it says
you should run "sudo ceph -s". That doesn't seem to work. I guess this
is because the manager service isn't yet running, right?
ceph-deploy was deprecated qui
The ceph-volume failure seems valid. I need to investigate.
Thanks
On Wed, 14 Sept 2022 at 11:12, Ilya Dryomov wrote:
> On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/57472#note-1
> > Release
Hi Eugen,
Thanks for your answer. I don't want to use the cephadm tool because it
needs Docker. I don't like it because it's total overkill for our small
3-node cluster. I'd like to avoid the added complexity, added packages,
everything. Just another thing I have to learn in detail about in case
orch suite failures fall under
https://tracker.ceph.com/issues/49287
https://tracker.ceph.com/issues/57290
https://tracker.ceph.com/issues/57268
https://tracker.ceph.com/issues/52321
For rados/cephadm the failures are both
https://tracker.ceph.com/issues/57290
Overall, nothing new/unexpected. orc
Hello,
I am trying to add my first MDS service on any node. I am unable to add the
keyring to start the MDS service.
$ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'
Error EINVAL: key for mds.mynode exists but cap mds does not match
I tried th
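In case it helps while the thread continues: that error means an mds.mynode key already exists with different caps than the ones requested, so get-or-create refuses to hand it back. A rough sketch of the usual ways out (check the caps against your own cluster before applying):
$ sudo ceph auth get mds.mynode    # inspect the caps the existing key actually has
$ sudo ceph auth caps mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'
# or, if the old entry is unwanted, remove and re-create it:
$ sudo ceph auth rm mds.mynode
$ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'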
Hi Yuri,
On Wed, Sep 14, 2022 at 8:02 AM Adam King wrote:
>
> orch suite failures fall under
> https://tracker.ceph.com/issues/49287
> https://tracker.ceph.com/issues/57290
> https://tracker.ceph.com/issues/57268
> https://tracker.ceph.com/issues/52321
>
> For rados/cephadm the failures are both
On 15/09/2022 03:09, Jerry Buburuz wrote:
Hello,
I am trying to add my first MDS service on any node. I am unable to add the
keyring to start the MDS service.
$ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'
Error EINVAL: key for mds.mynode