Hi.
On Tue, Jan 30, 2024 at 5:24 PM duluxoz wrote:
>
> Hi All,
>
> Quick Q: How easy/hard is it to change the IP networks of:
>
> 1) A Ceph Cluster's "Front-End" Network?
This is hard, but doable. The problem is that the MON database
contains the expected addresses of all MONs, and therefore, yo
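As a rough sketch of the monmap-editing approach implied here (mon name and new address made up, adapt to your cluster), the usual sequence looks like:
ceph mon getmap -o /tmp/monmap                          # dump the current monmap
monmaptool --print /tmp/monmap                          # inspect the recorded mon addresses
monmaptool --rm mon1 /tmp/monmap                        # drop the entry with the old address
monmaptool --add mon1 192.168.50.11:6789 /tmp/monmap    # re-add it with the new address
ceph-mon -i mon1 --inject-monmap /tmp/monmap            # inject the edited map with the mon stopped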
On 1/31/24 18:53, Gregory Farnum wrote:
The docs recommend a fast SSD pool for the CephFS *metadata*, but the
default data pool can be more flexible. The backtraces are relatively
small — it's an encoded version of the path an inode is located at,
plus the RADOS hobject, which is probably more of
We have ceph (currently 18.2.0) log to an rsyslog server with the
following file name format:
template (name="DynaFile" type="string"
string="/tank/syslog/%fromhost-ip%/%hostname%/%programname%.log")
Around May 25th this year something changed so instead of getting the
usual program log nam
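As a quick way to see what filename a given tag produces (server name made up), one can send a tagged test message from a Ceph node and check which file rsyslog creates for it:
logger -n rsyslog.example.com -P 514 -d -t ceph-osd "test message"   # send via UDP to the rsyslog server
ls /tank/syslog/*/*/ceph-osd.log                                     # on the rsyslog server, see where it landed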
On 1/31/24 20:13, Patrick Donnelly wrote:
On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder
wrote:
Hello,
I have a question regarding the default pool of a cephfs.
According to the docs it is recommended to use a fast ssd replicated
pool as default pool for cephfs. I'm asking what are the space
> I’ve heard conflicting assertions on whether the write returns once min_size
> shards have been persisted, or only after all of them.
I think it waits until all replicas have written the data, but in
simplistic tests with a fast network and slow drives, the extra time
taken to write many copies is not linear
I reviewed the rados suite. @Adam King, @Nizamudeen A, I
would appreciate a look from you, as there are some
orchestrator and dashboard trackers that came up.
pacific-release, 16.2.15
Failures:
1. https://tracker.ceph.com/issues/62225
2. https://tracker.ceph.com/issues/64278
3. https:/
In our department we're getting started with Ceph 'reef', using the Ceph FUSE
client for our Ubuntu workstations.
So far so good, except I can't quite figure out one aspect of subvolumes.
When I do the commands:
[root@ceph1 ~]# ceph fs subvolumegroup create cephfs csvg
[root@ceph1 ~]# ceph fs sub
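A hedged sketch of the usual next steps (subvolume name and mountpoint made up): create a subvolume inside the group, ask Ceph for its full path, and point the FUSE client at that path:
ceph fs subvolume create cephfs mysubvol --group_name csvg
ceph fs subvolume getpath cephfs mysubvol --group_name csvg   # prints something like /volumes/csvg/mysubvol/<uuid>
ceph-fuse -r /volumes/csvg/mysubvol/<uuid> /mnt/mysubvol      # mount only that subtree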
I’ve heard conflicting assertions on whether the write returns once min_size
shards have been persisted, or only after all of them.
> On Jan 31, 2024, at 2:58 PM, Can Özyurt wrote:
>
> I never tried this myself but "min_size = 1" should do what you want to
> achieve.
Would you be willing to accept the risk of data loss?
> On Jan 31, 2024, at 2:48 PM, quag...@bol.com.br wrote:
>
> Hello everybody,
> I would like to make a suggestion for improving performance in Ceph
> architecture.
> I don't know if this group would be the best place or if my propos
I never tried this myself but "min_size = 1" should do what you want to achieve.
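For completeness, a hedged sketch of how that would be set (pool name made up), keeping the data-loss concern raised elsewhere in this thread in mind:
ceph osd pool set mypool min_size 1   # allow I/O with a single surviving replica
ceph osd pool get mypool min_size     # verify the change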
On Wed, 31 Jan 2024 at 22:48, quag...@bol.com.br wrote:
>
> Hello everybody,
> I would like to make a suggestion for improving performance in Ceph
> architecture.
> I don't know if this group would be the
Hello everybody,
I would like to make a suggestion for improving performance in Ceph architecture.
I don't know if this group would be the best place or if my proposal is correct.
My suggestion would be in the item https://docs.ceph.com/en/latest/architecture/, at the end of the top
On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
rgw approved, thanks
> fs - Venky
> rbd -
On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder
wrote:
>
> Hello,
>
> I have a question regarding the default pool of a cephfs.
>
> According to the docs it is recommended to use a fast ssd replicated
> pool as default pool for cephfs. I'm asking what are the space
> requirements for storing the in
The docs recommend a fast SSD pool for the CephFS *metadata*, but the
default data pool can be more flexible. The backtraces are relatively
small — it's an encoded version of the path an inode is located at,
plus the RADOS hobject, which is probably more of the space usage. So
it should fit fine in
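A hedged sketch of the layout this implies (pool names made up): a small replicated SSD pool as the default data pool holding the backtraces, with a bulk pool added for the actual file data:
ceph fs new cephfs cephfs_metadata cephfs_default_data                # both replicated, ideally on SSD
ceph osd pool set cephfs_ec_data allow_ec_overwrites true             # required if the bulk pool is EC
ceph fs add_data_pool cephfs cephfs_ec_data                           # attach the bulk pool
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/bulk   # new files under this dir go to the bulk pool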
Just in case anybody is interested: using dm-cache works and boosts
performance -- at least for my use case.
The "challenge" was to get 100 (identical) Linux VMs started on a
three-node hyperconverged cluster. The hardware is nothing special; each
node has a Supermicro server board with a sing
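It is not clear from the preview where in the stack dm-cache sits here; as a generic illustration (VG/LV/device names made up), LVM can layer a cache like this:
vgextend vg_vms /dev/nvme0n1p1                                         # add the fast device to the VG
lvcreate --type cache-pool -n vm_cache -L 100G vg_vms /dev/nvme0n1p1   # carve a cache pool out of it
lvconvert --type cache --cachepool vg_vms/vm_cache vg_vms/vm_images    # attach it to the slow LV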
On 31.01.2024 09:38, garcetto wrote:
good morning,
how can i install latest dev release using cephadm?
Have you looked at this page?
https://docs.ceph.com/en/latest/install/containers/#development-builds
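That page describes the development container builds; a hedged sketch of pointing cephadm at one (image tag made up, check the linked page for the actual registry and tags):
cephadm --image quay.ceph.io/ceph-ci/ceph:main bootstrap --mon-ip 192.0.2.10   # new cluster
ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:main                 # or move an existing cluster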
--
Kai Stian Olstad
Hello,
Recently we got a problem report from an internal customer on our S3. Our setup
consists of roughly 10 servers with 140 OSDs. Our 3 RGWs are collocated with the
monitors on dedicated servers in an HA setup with HAProxy in front. We are
running 16.2.14 on Podman with Cephadm.
Our S3 is constantly having
If you just manually run `ceph orch daemon rm
<daemon name>` does it get removed? I know there's
some logic in host drain that does some ok-to-stop checks that can cause
things to be delayed or stuck if it doesn't think it's safe to remove the
daemon for some reason. I wonder if it's being overly cautious here.
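A hedged sketch of that manual removal (daemon name made up, take the real one from ceph orch ps):
ceph orch ps | grep mgr                          # find the exact daemon name
ceph orch daemon rm mgr.oldhost.abcdef --force   # remove it even if the ok-to-stop check objects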
Hi all,
after performing "ceph orch host drain" on one of our hosts with only the
mgr container left, I see that another mgr daemon is indeed deployed on
another host, but the "old" one does not get removed by the drain
command. The same happens if I edit the mgr service via the UI to
define d
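For context, a hedged sketch of the drain workflow being discussed (hostname made up):
ceph orch host drain host1   # schedule removal of all daemons from the host
ceph orch ps host1           # watch what is still placed there
ceph orch host rm host1      # only once the host is empty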
On Tue, Jan 30, 2024 at 9:24 PM Yuri Weinstein wrote:
>
> Update.
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> rbd - Ilya
Hi Yuri,
rbd looks good overall but we are missing iSCSI coverage due to
https://tracker.ceph.com/issues
On Wed, Jan 31, 2024 at 3:43 AM garcetto wrote:
>
> good morning,
> i was struggling to understand why i cannot find this setting in
> my reef version; is it because it is only in the latest dev ceph version
> and not before?
that's right, this new feature will be part of the squid release. we
On Tue, Jan 30, 2024 at 3:08 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
fs approved. Failures are -
ht
Hi,
I don’t have a fourth machine available, so that’s not an option,
unfortunately.
I did enable a lot of debugging earlier, but that shows no information
as to why things are not working as expected.
Proxmox just deploys the mons, nothing fancy there, no special cases.
Can anyone confi
Hi,
As I read the documentation [1], the "count: 1" handles that, so what I
have is a placement pool from which only one host is selected for
deployment?
you're probably right, when using your example command with multiple
hosts it automatically sets "count:1" (don't mind the hostnames, it's
an
Hi Mark,
as I'm not familiar with proxmox I'm not sure what happens under the
hood. There are a couple of things I would try, not necessarily in
this order:
- Check the troubleshooting guide [1], for example a clock skew could
be one reason, have you verified ntp/chronyd functionality?
-
thank you, but that seems related to quincy; there is nothing on the latest
versions in the doc... maybe the doc is not updated?
On Wed, Jan 31, 2024 at 10:46 AM Christian Rohmann <
christian.rohm...@inovex.de> wrote:
> On 31.01.24 09:38, garcetto wrote:
> > how can i install latest dev release using ceph
On 31.01.24 09:38, garcetto wrote:
how can i install latest dev release using cephadm?
I suppose you found
https://docs.ceph.com/en/quincy/install/get-packages/#ceph-development-packages,
but yes, that only seems to target a package installation.
Would be nice if there were also dev container
Hi,
During an upgrade from pacific to quincy, we needed to recreate the mons
because the mons were pretty old and still using leveldb.
So step one was to destroy one of the mons. After that we recreated the
monitor, and although it starts, it remains in state ‘probing’, as you
can see below.
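Some standard checks for a mon stuck in 'probing' (mon id is a placeholder, socket path assumes the default naming):
chronyc tracking                                                  # rule out clock skew on the node
ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok mon_status   # which peers the probing mon knows about
ceph mon stat                                                     # the view from the rest of the quorum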
On 31/01/2024 09:52, Eugen Block wrote:
I deployed the nfs with ceph version 17.2.7 and then upgraded to 18.2.1
successfully, the ingress service is still present. Can you tell if it
was there while you were on quincy? To fix it I would just apply the
nfs.yaml again and see if the ingress ser
I deployed the nfs with ceph version 17.2.7 and then upgraded to
18.2.1 successfully, the ingress service is still present. Can you
tell if it was there while you were on quincy? To fix it I would just
apply the nfs.yaml again and see if the ingress service is deployed.
To know what happene
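A hedged sketch of inspecting and reapplying those specs (file names made up; the service id is taken from the thread, and the exported YAML should be checked before reapplying):
ceph orch ls --service-name nfs.jumbo --export > nfs.yaml
ceph orch ls --service-name ingress.nfs.jumbo --export > ingress.yaml   # empty if the ingress spec is really gone
ceph orch apply -i nfs.yaml
ceph orch apply -i ingress.yaml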
I believe so, I just deployed it with the same command as you did, but
on a single-node cluster (hence only one host):
ceph:~ # ceph orch ls
NAME          PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
crash                1/1      3m ago     44m  *
ingress.nfs.
On 31/01/2024 09:36, Eugen Block wrote:
Hi,
if I understand this correctly, with the "keepalive-only" option only
one ganesha instance is supposed to be deployed:
If a user additionally supplies --ingress-mode keepalive-only a
partial ingress service will be deployed that still provides a
good morning,
i was struggling to understand why i cannot find this setting in
my reef version; is it because it is only in the latest dev ceph version
and not before?
https://docs.ceph.com/en/latest/radosgw/metrics/#user-bucket-counter-caches
Reef gives 404:
https://docs.ceph.com/en/reef
On 31/01/2024 08:38, Torkil Svensgaard wrote:
Hi
Last week we created an NFS service like this:
"
ceph nfs cluster create jumbo "ceph-flash1,ceph-flash2,ceph-flash3"
--ingress --virtual_ip 172.21.15.74/22 --ingress-mode keepalive-only
"
Worked like a charm. Yesterday we upgraded from 17.2.
good morning,
how can i install latest dev release using cephadm?
thank you.
Hi,
if I understand this correctly, with the "keepalive-only" option only
one ganesha instance is supposed to be deployed:
If a user additionally supplies --ingress-mode keepalive-only a
partial ingress service will be deployed that still provides a
virtual IP, but has nfs directly bindin