Hi,
I have not worked with the orchestrator, but I remember reading somewhere that
the NFS implementation is not supported.
Refer to the cephadm documentation; for NFS you have to configure NFS Ganesha.
You can manage NFS through the dashboard, but for that you need some initial
configuration in the dashboard, and in nfs-ganesha you hav
Hello,
Well, yes and no. In the stability section
(https://docs.ceph.com/docs/octopus/cephadm/stability/) it says that it's
still under development.
But the set-up docs describe it without any hint that it's still under development:
https://docs.ceph.com/docs/octopus/cephadm/install/#deploying-nfs-gane
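For reference, the deployment described on that page boils down to something
like the following (the service id, pool and namespace names are only examples,
not taken from the message above):

  ceph osd pool create nfs-ganesha
  ceph orch apply nfs mynfs nfs-ganesha nfs-ns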
Hello,
we are using the same environment, OpenNebula + Ceph.
Our Ceph cluster is composed of 5 Ceph OSD hosts with SSDs plus 10k rpm and
7.2k rpm spinning disks, on a 10 Gb/s fibre network.
Each spinning OSD has its DB and WAL devices on SSD.
Nearly all our Windows VM RBD images are in a 10k rpm pool
Hello,
You just copied the same message.
I'll make a ticket in the tracker.
Regards,
Simon
From: Amudhan P
Sent: Thursday, 11 June 2020 09:32:36
To: Simon Sutter
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Octopus: orchestrator not working cor
Hello,
you could use another deployment and management solution to get NFS and
everything else with ease. Take a look at
https://croit.io/docs/croit/master/gateways/nfs.html#services to see how easy
it would be to deploy NFS.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver..
Hi,
assuming you're running Octopus, the deployment guide [1] explains it
quite well.
To specify RocksDB/WAL devices you have to make use of "drive_groups" [2].
Regards,
Eugen
[1] https://docs.ceph.com/docs/octopus/cephadm/install/#deploy-osds
[2] https://docs.ceph.com/docs/octopus/cephadm/d
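As an illustration of such a drive_groups spec (not taken from the message
above; the service id and rotational filters are placeholders you would adapt
to your hardware), applied with "ceph orch apply osd -i osd_spec.yml":

  service_type: osd
  service_id: osd_spec_example
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1    # spinning disks hold the data
  db_devices:
    rotational: 0    # SSDs/NVMe take the RocksDB (the WAL co-locates with the
                     # DB unless a separate wal_devices section is given)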
If you want to specify vgname/lvname you have to create them manually and run:
ceph-volume lvm create --data /dev/sdc --block.db vgname/db-sdc
--block.wal vgname/wal-sdc
where you can also specify wal-size and block.db-size (among other options).
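As a rough sketch of that manual preparation (the VG name, LV names and sizes
below are only examples):

  vgcreate vgname /dev/nvme0n1
  lvcreate -L 30G -n db-sdc vgname
  lvcreate -L 2G -n wal-sdc vgname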
Or you do it with the 'batch' command:
ceph-v
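As a sketch (device names are placeholders, not from the original message), a
batch invocation typically looks something like:

  ceph-volume lvm batch --bluestore /dev/sdc /dev/sdd --db-devices /dev/nvme0n1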
> What I am curious about is these 2 lines:
> full sync: 0/64 shards
> full sync: 0/128 shards
>
> Is this considered normal? If so, why have those lines present in this
> output?
This appears to be normal. My interpretation is that those numbers show how
many shards are currently syncing, so it wo
That is my experience as well. The full sync will only run after
initiating a data or metadata sync init.
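For reference, the commands being referred to would be along these lines (the
source zone name is a placeholder), run on the zone that should pull the data:

  radosgw-admin metadata sync init
  radosgw-admin data sync init --source-zone=<source-zone>
  # then restart the local radosgw daemons so the full sync actually starts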
On Thu, Jun 11, 2020 at 9:30 AM wrote:
>
> > What I am curious about is these 2 lines:
> > full sync: 0/64 shards
> > full sync: 0/128 shards
> >
> > Is this considered normal? If so, why ha
Can you share which guide and deployment strategy you're following? I
didn't have any issues deploying either completely manually [3] or
with cephadm.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring
I am still having this issue with nfs-ganesha on Nautilus. I assume I
do not have to change the configuration of nfs-ganesha as mentioned here [1],
since I did not have any issues with Luminous.
Does someone with similar symptoms also see this xlock, waiting
message? How can it be resolved?
20
Hi,
Total newbie question - I'm new to Ceph and am setting up a small test cluster.
I've set up five nodes and can see the available drives, but I'm unsure
exactly how to add an OSD and specify the locations for the WAL+DB.
Maybe my Google-fu is weak, but the only guides I can find refer to ce
Hi
My ceph dashboard reports 64% usage for rgw.buckets.data:
[image: cephdashboard.png]
But "ceph df" command shows 56.81% used:
RAW STORAGE:
    CLASS    SIZE       AVAIL      USED       RAW USED    %RAW USED
    hdd      611 TiB    282 TiB    328 TiB    329 TiB     53.81
    TOTAL
Stefan;
I can't find it, but I seem to remember a discussion in this mailing list that
sharded RGW performance is significantly better if the shard count is a power
of 2, so you might try increasing shards to 64.
Also, you might look at the OSD logs while a listing is trying to run, to see if
thi
Yeah, I saw the drive group documentation. It might just be baby-related lack
of sleep to blame but, to me, it wasn't clear whether I could achieve what I
wanted to.
I can set the criteria for which drives to use for data, but can I 'pair' each
data drive up with matching VG/LVs?
I’m trying to get t
Hi Mark,
thanks for your comprehensive response!
Our tests basically match the linked results (we are testing with 2
OSDs/NVMe and fio/librbd too, but with a much smaller setup). Sometimes we
see smaller or larger improvements from Nautilus to Octopus, but it is similar.
Only the rando
Hi David,
Some parts of OpenStack upstream CI consume the download.ceph.com
binaries for nautilus on CentOS 7, and the developers are asking if we
can provide nautilus builds for CentOS 8 as well.
What are the steps to do that for future Nautilus releases?
- Ken
On Thu, Jun 4, 2020 at 7:54 AM
This would be useful for building CentOS 8-based containers of Nautilus on
Docker Hub!
On 6/11/20 6:55 PM, Ken Dreyer wrote:
> Hi David,
>
> Some parts of OpenStack upstream CI consume the download.ceph.com
> binaries for nautilus on CentOS 7, and the developers are asking if we
> can provide nautilus
On 6/11/20 11:30 AM, Stephan wrote:
Hi Mark,
thanks for your comprehensive response!
Our tests are basically matching the linked results (we are testing with 2
OSDs/NVMe and fio/librbd too, but having a much smaller setup). Sometimes we
see smaller or higher improvements from Nautilus to Oct
Hi Richard,
thanks for reporting this. "ceph df" is right:
https://tracker.ceph.com/issues/45185
Lenz
On 6/11/20 6:06 PM, Richard Kearsley wrote:
> My ceph dashboard reports 64% usage for rgw.buckets.data:
> cephdashboard.png
>
> But "ceph df" command shows 56.81% used:
> RAW STORAGE:
>
I can try it right now and see how it goes.
On 6/11/20 12:55 PM, Ken Dreyer wrote:
> Hi David,
>
> Some parts of OpenStack upstream CI consume the download.ceph.com
> binaries for nautilus on CentOS 7, and the developers are asking if we
> can provide nautilus builds for CentOS 8 as well.
>
> Wh
Since you're using cephadm you should rather stick to 'ceph orch device
ls' (to see which devices are available) and then work with
drive_groups. To clean up the disks you can remove all VGs and LVs
whose names start with "ceph". I would do this on the OSD nodes where you
tried to create OSDs:
- cep
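As a rough sketch of such a cleanup (the VG name below is purely an example;
inspect with lvs/vgs before removing anything):

  lvs -o lv_name,vg_name | grep ceph    # inspect what ceph-volume created
  lvremove -y ceph-0a1b2c3d             # remove the LVs in that VG
  vgremove ceph-0a1b2c3d                # then remove the VG itself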
> If you want to specify vgname/lvname you have to create them manually and run:
>
> ceph-volume lvm create --data /dev/sdc --block.db vgname/db-sdc --block.wal
> vgname/wal-sdc
Apt helpfully told me I needed to install ceph-osd, which I did. The ceph-volume
command then told me:
Running comma
We'll need python36-Cython built and served somewhere (preferably the
copr repo?) before this'll work.
https://jenkins.ceph.com/job/ceph-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MACHINE_SIZE=gigantic/444/console
On 6/11/20 1:50 PM, David Galloway wrote:
> I can
Thanks for the tip on power-of-2 shard numbers. I want to say I've read that,
too. We have a small window in which I can re-shard the index before we sync
the other 2 zones.
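For reference, the reshard would be something along these lines (the bucket
name is a placeholder; 64 follows the power-of-2 suggestion above):

  radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=64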
Will check on OSD logs, as well.
As to the SSDs – yes, I've confirmed that the bucket index is placed only on
those two SSD
> On 11 Jun 2020, at 15:21, Eugen Block wrote:
>
> Can you share which guide and deployment strategy you're following? I didn't
> have any issues deploying either completely manually [3] or with cephadm.
I followed the cephadm guide at
https://ceph.readthedocs.io/en/latest/cephadm/install/
I had 5 of 10 OSDs fail on one of my nodes; after a reboot the other 5 OSDs
failed to start.
I have tried running 'ceph-disk activate-all' and get back an error message
about the cluster fsid not matching the one in /etc/ceph/ceph.conf.
Has anyone experienced an issue like this?
Not seeing anything in OSD logs after triggering a listing, just heartbeat
entries.
Hi all,
the secondary zone shows that metadata is caught up with the master, but
comparing "radosgw-admin bucket list | wc" between the master and the
secondary zone, the counts are not equal. How can I force a sync?
If there were some buckets on the secondary zone before you deployed
multisite, they won't be synced to the master zone. In this case I think it's
normal that the bucket counts are not the same.
黄明友 wrote on Fri, 12 Jun 2020 at 10:17:
>
>
> Hi,all:
>
> the slave zone show metadata is caugh