Actually, that is exactly what I was looking for.
Thanks.
Ian
On Thu, Oct 27, 2022 at 3:31 PM Federico Lucifredi
wrote:
> Not exactly what you asked, but just to make sure you are aware, there is
> a project delivering Windows native Ceph drivers. If performance is an
> issue, these are going
Hi Oleksiy,
The Pacific RC has not been declared yet since there have been problems in
our upstream testing lab. There is no ETA yet for v16.2.11 for that reason,
but the full diff of all the patches that were included will be published
to ceph.io when v16.2.11 is released. There will also be a di
Hi,
That is most likely possible, but the difference in performance between
CephFS + Samba and RBD + Ceph iSCSI + Windows SMB would probably be
extremely noticeable, and not in a good way.
As Wyll mentioned, the recommended way is to just share out SMB on top of an
existing CephFS mount (
Hi together,
according to the list of mirror maintainers in the repo at:
https://github.com/ceph/ceph/blob/main/mirroring/MIRRORS
the person to ask is Oliver Dzombic. I have added him in CC.
Cheers and hope that helps,
Oliver
On 27.10.22 at 21:43, Mike Perez wrote:
Hi Christian,
Th
Would it be plausible to have Windows DFS servers mount the Ceph cluster
via iSCSI? And then share the data out in a more Windows native way?
Thanks,
Ian
On Thu, Oct 27, 2022 at 1:50 PM Wyll Ingersoll <
wyllys.ingers...@keepertech.com> wrote:
>
> No - the recommendation is just to mount /cephfs
Thanks, it's fine
> From: "Wyll Ingersoll"
> To: "Christophe BAILLON"
> Cc: "Eugen Block" , "ceph-users"
> Sent: Thursday, October 27, 2022 22:49:18
> Subject: Re: [ceph-users] Re: SMB and ceph question
> No - the recommendation is just to mount /cephfs using the kernel module and
> then share it via
There do exist vfs_ceph and vfs_ceph_snapshots modules for Samba, at least in
theory.
https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html
https://www.samba.org/samba/docs/current/man-html/vfs_ceph_snapshots.8.html
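In principle a share using vfs_ceph would look something like the sketch below
(a minimal example only; the CephX user and the path inside CephFS are
placeholders, and which options are available depends on how your Samba was built):
```
[cephfs]
   vfs objects = ceph
   ; with vfs_ceph the path is interpreted inside CephFS, not on a local mount
   path = /shared
   ceph:config_file = /etc/ceph/ceph.conf
   ; CephX user the module connects as (client.samba here, a placeholder)
   ceph:user_id = samba
   kernel share modes = no
   read only = no
```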
However, they don't exist in, for instance, the version of Samba in
No - the recommendation is just to mount /cephfs using the kernel module and
then share it via standard VFS module from Samba. Pretty simple.
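For what it's worth, a minimal smb.conf share for that setup looks something
like this (share name and path are placeholders; CephFS is assumed to be
kernel-mounted at /cephfs on the gateway host):
```
[projects]
   ; /cephfs is just a normal local filesystem as far as Samba is concerned
   path = /cephfs/projects
   read only = no
   browseable = yes
```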
From: Christophe BAILLON
Sent: Thursday, October 27, 2022 4:08 PM
To: Wyll Ingersoll
Cc: Eugen Block ; ceph-users
Subj
Re
OK, I thought there was a module like Ganesha for NFS to install directly
on the cluster...
----- Original Message -----
> From: "Wyll Ingersoll"
> To: "Eugen Block" , "ceph-users"
> Sent: Thursday, October 27, 2022 15:25:36
> Subject: [ceph-users] Re: SMB and ceph question
> I don't think there
Hi,
I noticed one of my OSDs keeps crashing even when run manually. This is
my homelab and nothing too critical is going on in my cluster, but I'd
like to know what the issue is.
I am running on Arch Linux ARM (aarch64 on an ODROID-HC4) and compiled
everything Ceph-related myself, ceph version 17.2.4
(13
Hi Christian,
Thank you for reporting this.
I did a git blame on the file and saw that Wido added it.
63be401a411ffc7c2f78e450a29c69eee1af02d3
Wido, do you happen to know who is maintaining this mirror?
On Thu, Oct 20, 2022 at 1:06 AM Christian Rohmann
wrote:
>
> Hey ceph-users,
>
> it seems
Hey guys,
Could you please point me to the branch that will be used for the upcoming
16.2.11 release? I'd like to see the diff w/ 16.2.10 to better understand
what was fixed.
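In case it matters, what I plan to run once there is a tag or branch to compare
against (assuming 16.2.11 is cut from the pacific branch of ceph.git) is roughly:
```
git fetch origin
# commits on the pacific branch that are not in v16.2.10
git log --oneline v16.2.10..origin/pacific
# or the full code diff
git diff v16.2.10 origin/pacific
```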
Thank you.
Oleksiy
Hi Alexander,
I'd be suspicious that something is up with pool 25. Which pool is
that? ('ceph osd pool ls detail') Knowing the pool and the CRUSH rule
it's using is a good place to start. Then that can be compared to your
CRUSH map (e.g. 'ceph osd tree') to see why Ceph is struggling to map
that P
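A rough command sequence for gathering that information (pool and rule names
are placeholders):
```
ceph osd pool ls detail                # note the pool's crush_rule
ceph osd crush rule dump <rule-name>   # what the rule requires (root, failure domain, class)
ceph osd tree                          # what the CRUSH hierarchy actually contains
ceph pg dump_stuck                     # which PGs are stuck and how they currently map
```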
Has anyone had success using cephadm to add extra_container_args to the
node-exporter config? For example, changing the collector config.
I am trying and failing with the following:
1. Create ne.yml
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
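For comparison, the layout I'd expect the spec to have is below; treat it as a
sketch only, since (as far as I understand) extra_container_args is handed to
the container runtime (podman/docker) rather than to node-exporter's own
command line, which may be why collector flags don't take effect this way:
```
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
# passed to podman/docker run, not to the node-exporter binary
extra_container_args:
  - "--cpus=1"
```
and then applied with: ceph orch apply -i ne.yml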
Hi Folks,
The weekly performance meeting will be starting in approximately 55
minutes at 8AM PST. Peter Desnoyers from Khoury College of Computer
Sciences, Northeastern University will be speaking today about his work
on local storage for RBD caching. A short architectural overview is
avail
Hi Folks,
The weekly performance meeting will be starting in approximately 70
minutes at 8AM PST. Peter Desnoyers from Khoury College of Computer
Sciences, Northeastern University will be speaking today about his work
on local storage for RBD caching. A short architectural overview is
avail
This prior post
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2QNKWK642LWCNCJEB5THFGMSLR37FLX7/
may help. You can bump up the warning threshold to make the warning go away -
a few releases ago it was reduced to 1/10 of the prior value.
There’s also information about trimming
I don't think there is anything particularly special about exposing /cephfs (or
subdirs thereof) over SMB with Samba. We've done it for years over various
releases of both Ceph and Samba.
Basically, you create a NAS server host that mounts /cephfs and run Samba on
that host. You share whatev
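The mount on that host is just an ordinary kernel CephFS mount, e.g. (monitor
addresses, CephX user and secret file are placeholders):
```
# kernel-mount CephFS on the NAS/Samba host
mount -t ceph mon1:6789,mon2:6789:/ /cephfs \
    -o name=samba,secretfile=/etc/ceph/samba.secret
```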
Hi,
the SUSE docs [1] are not that old, they apply to Ceph Pacific. Have
you tried them yet?
Maybe the upstream docs could adapt the SUSE docs, just an idea if
there aren't any guides yet on docs.ceph.com.
Regards,
Eugen
[1] https://documentation.suse.com/ses/7.1/single-html/ses-admin/#cha-
Great, thanks Ilya.
Regards,
On Thu, Oct 27, 2022 at 2:00 PM Ilya Dryomov wrote:
> On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A wrote:
> >
> > >
> > > lab issues blocking centos container builds and teuthology testing:
> > > * https://tracker.ceph.com/issues/57914
> > > * delays testing for 16
Hello,
For a side project, we need to expose CephFS data to legacy users via SMB, and I
can't find the official way to do that in the Ceph docs.
In the old SUSE docs I found a reference to ceph-samba, but I can't find any
information in the official Ceph docs.
We have a small dedicated cephadm cluster to do that, can yo
On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A wrote:
>
> >
> > lab issues blocking centos container builds and teuthology testing:
> > * https://tracker.ceph.com/issues/57914
> > * delays testing for 16.2.11
>
>
> The quay.ceph.io registry has been down for some days now. Not sure who is actively
> maintai
Hey, I would really appreciate any help I can get on this, as googling has
led me to a dead end.
We have 2 data centers, each with 4 servers running Ceph on Kubernetes in a
multisite config. Everything is working great, but recently the master
cluster changed status to HEALTH_WARN and the issues are la
Hi,
any updates on this?
Best regards
Alexander Fiedler
From: Alexander Fiedler
Sent: Tuesday, October 25, 2022 14:45
To: 'ceph-users@ceph.io'
Subject: 1 pg stale, 1 pg undersized
Hello,
we run a Ceph cluster with the following error, which came up suddenly without
any maintenance/changes
Hey Eugen,
valid points, I first tried to provision OSDs via ceph-ansible (later
excluded), which does run the batch command with all 4 disk devices, but it
often failed with the same issue I mentioned earlier, something like:
```
bluefs _replay 0x0: stop: uuid e2f72ec9-2747-82d7-c7f8-41b7b6d41e1b
Hi,
Thanks for the interesting discussion. Actually, it's a bit
disappointing to see that CephFS with multiple MDS servers is also
not as HA as we would like.
It really depends on what you're trying to achieve, since there are
lots of different scenarios for how to set up and configure one or
Dear list
thanks for the answers, it looks like we have worried about this far too
much ;-)
Cheers
/Simon
On 26/10/2022 22:21, shubjero wrote:
We've done 14.04 -> 16.04 -> 18.04 -> 20.04 all at various stages of our
ceph cluster life.
The latest 18.04 to 20.04 was painless and we ran:
|ap
Hi,
first of all, if you really need to issue ceph-volume manually,
there's a batch command:
cephadm ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde
Second, are you using cephadm? Maybe your manual intervention
conflicts with the automatic osd setup (all available devices). You
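One way to check whether that is happening, and to pause it while experimenting
(assuming the OSDs are managed by cephadm):
```
# see whether an 'osd' service such as all-available-devices is deployed
ceph orch ls osd --export
# stop cephadm from automatically consuming new devices
ceph orch apply osd --all-available-devices --unmanaged=true
```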
>
> lab issues blocking centos container builds and teuthology testing:
> * https://tracker.ceph.com/issues/57914
> * delays testing for 16.2.11
The quay.ceph.io registry has been down for some days now. Not sure who is actively
maintaining the quay repos now.
At least in the ceph-dashboard, we have a fa