[ceph-users] Re: Grafana without presenting data from the first Host

2022-10-20 Thread Marc



> 
> I'm experiencing something strange on a cluster regarding monitoring. In
> Grafana I can't see any data referring to the first Host, I've already
> tried to redeploy Grafana and Prometheus, but the first Host never
> appears,
> if I go to Dashboard -> Hosts -> Performance Detail the first Host
> always
> appears with "no data", but the amount of OSDs and Raw Capacity appears.
> 
> Another detail I noticed is that nothing is showing up in Monitoring,
> even
> though something appears on the start page, I've also tried to redeploy
> alertmanager and it didn't help.
> 
> I don't understand what could be happening.

Did you check if the data is even there, with a query in the Prometheus GUI?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Mirror de.ceph.com broken?

2022-10-20 Thread Christian Rohmann

Hey ceph-users,

it seems that the German ceph mirror http://de.ceph.com/ listed
at https://docs.ceph.com/en/latest/install/mirrors/#locations
does not hold any data.

The index page shows a Plesk default page, and deeper links like
http://de.ceph.com/debian-17.2.4/ return 404.



Regards

Christian


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Status of Quincy 17.2.5 ?

2022-10-20 Thread Christian Rohmann



On 19/10/2022 16:30, Laura Flores wrote:
Dan is correct that 17.2.5 is a hotfix release. There was a flaw in 
the release process for 17.2.4 in which five commits were not included 
in the release. The users mailing list will hear an official 
announcement about this hotfix release later this week.


Thanks for the info.


1) May I bring up again my remarks about the timing:

On 19/10/2022 11:46, Christian Rohmann wrote:

I believe the upload of a new release to the repo prior to the 
announcement happens quite regularly - it might just be due to the 
technical process of releasing.
But I agree it would be nice to have a more "bit flip" approach to new 
releases in the repo, so that the packages do not appear as updates prior 
to the announcement and the final release and update notes.
By my observation there are sometimes packages available on the 
download servers via the "last stable" folders, such as 
https://download.ceph.com/debian-quincy/, quite some time before the 
announcement of a release is out.
I know it's hard to time this right with mirrors requiring some time to 
sync files, but it would be nice not to see the packages, or have people 
install them, before the release notes and potential pointers 
to changes are out.



2) Also, in cases like the 17.2.4 release containing a regression, it 
would be great to have both the N and N-1 releases there, to allow users to 
downgrade to the previous point release quickly in case they run into issues.
Otherwise one needs to configure the N-1 repo manually to still have 
access to the N-1 release.


And since these would just be links in the filesystem, this should not even 
take up extra space on the download servers or their mirrors.
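For illustration, such a manually configured N-1 repo is just an apt line 
pointing at a versioned directory, roughly like the following (the release 
and distro names here are only examples):

  deb https://download.ceph.com/debian-17.2.4/ focal main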




Regards


Christian

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Grafana without presenting data from the first Host

2022-10-20 Thread Ernesto Puerta
Hi Murilo,

Is node-exporter running on that node? Most host page metrics are
node-exporter's. As Marc suggested, you can also confirm it by checking
whether there are "node_*" metrics for that node in the Prometheus web UI.
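For example, a query like the following in the Prometheus expression browser
should return one series per host that node-exporter is being scraped from
(the hostname in the second query is just a placeholder):

  node_uname_info
  node_uname_info{instance=~"first-host.*"}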

Kind Regards,
Ernesto


On Thu, Oct 20, 2022 at 2:52 AM Murilo Morais  wrote:

> Good evening everyone.
>
> I'm experiencing something strange on a cluster regarding monitoring. In
> Grafana I can't see any data referring to the first Host, I've already
> tried to redeploy Grafana and Prometheus, but the first Host never appears,
> if I go to Dashboard -> Hosts -> Performance Detail the first Host always
> appears with "no data", but the amount of OSDs and Raw Capacity appears.
>
> Another detail I noticed is that nothing is showing up in Monitoring, even
> though something appears on the start page, I've also tried to redeploy
> alertmanager and it didn't help.
>
> I don't understand what could be happening.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Getting started with cephfs-top, how to install

2022-10-20 Thread Frank Schilder
Hi all,

I'm stuck with a similar problem with cephfs-shell for octopus 
(https://docs.ceph.com/en/octopus/cephfs/cephfs-shell/#cephfs-shell). I can't 
figure out where to get it from. It's not part of the octopus container and also 
seems not to be available through the octopus repos. Anyone here who can point 
me to an installation procedure?

Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Zach Heise (SSCC) 
Sent: 19 October 2022 21:25:14
To: Neeraj Pratap Singh; ceph-users@ceph.io
Subject: [ceph-users] Re: Getting started with cephfs-top, how to install

Thank you for the reply, Neeraj - solved!

I was just going off of the document at 
https://docs.ceph.com/en/quincy/cephfs/cephfs-top/ - I did not see a specified 
terminal emulator listed, but yes, after switching my Secure Shell program to 
XTerm, that seems to have fixed it. Also, now 'ceph fs perf stats' shows a 
"wall of text" in an unformatted way, which is what I assume cephfs-top is 
formatting so nicely in its python script.

I appreciate the help, and I will make an edit request to that document adding 
the information I have learned from you and Xiubo, so other relative novices 
like myself will perhaps be able to fix these issues on their own in the future!

Best,

Zach Heise


On 2022-10-19 2:16 PM, Neeraj Pratap Singh wrote:
Hi Zach,
Seeing the `fs perf stats` output, it looks like you are not using the latest 
build. Lots of enhancements have been made to cephfs-top recently, so I would 
suggest using the latest build for better results.
And regarding your `use_default_colors()` error, it looks like there is some issue 
with an unsupported terminal emulator.

On Thu, Oct 20, 2022 at 12:20 AM Zach Heise (SSCC) he...@ssc.wisc.edu wrote:

Thank you, Xiubo - yes, checking my ceph.repo file as specified at 
https://docs.ceph.com/en/pacific/install/get-packages/#rhel reminded me that I 
had set the ceph-noarch repo to disabled, because we didn't want ceph trying to 
update itself outside of using cephadm for new builds. Flipping that bit to 1 
then re-running yum update then yum install cephfs-top solved that problem.
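For reference, the stanza in question in /etc/yum.repos.d/ceph.repo looks roughly 
like the one from the get-packages page (the release/distro in the baseurl below 
is only an example):

  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=https://download.ceph.com/rpm-pacific/el8/noarch
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc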

However, that leads to a second question then - after running the required 
command to create the fstop user with $ ceph auth get-or-create client.fstop 
mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' as the instructions 
specify, and adding the file ceph.client.fstop.keyring with the relevant key to 
/etc/ceph on the node in question (which the instructions do not specify and I 
will make a github edit later), I am getting an error:

"exception: use_default_colors() returned ERR"

Beyond that, it seems like the subsystem that cephfs-top needs in order to work, perf 
stats, has no data in its output fields?

ceph01.ssc.wisc.edu> ceph fs perf stats
{"version": 1, "global_counters": ["cap_hit", "read_latency", "write_latency", 
"metadata_latency", "dentry_lease", "opened_files", "pinned_icaps", 
"opened_inodes", "read_io_sizes", "write_io_sizes"], "counters": [], 
"client_metadata": {}, "global_metrics": {}, "metrics": {"delayed_ranks": []}}

I activated ceph fs perf stats yesterday, so by this point I should have data 
in the stats, unless there is a problem elsewhere?

Zach Heise


On 2022-10-18 7:56 PM, Xiubo Li wrote:

Hi Zach,

On 18/10/2022 04:20, Zach Heise (SSCC) wrote:

I'd like to see what CephFS clients are doing the most IO. According to this 
page: https://docs.ceph.com/en/quincy/cephfs/cephfs-top/ - cephfs-top is the 
simplest way to do this? I ran 'ceph mgr module enable stats' today, but 
I'm a bit confused about what the best way is to get the cephfs-top package to 
use these perf stats.

If you are building code from source then you should run it just by:

[ceph/build]$ ./src/tools/cephfs/top/cephfs-top

You can also find the main/quincy/pacific builds for different distro packages 
under [1], and their artifacts are made available under [2].

[1] https://shaman.ceph.com/builds/ceph/
[2] https://shaman.ceph.com/repos/ceph/

Thanks!

- Xiubo


The ceph doc page linked above just mentions "cephfs-top is available as part 
of cephfs-top package", but it does not list which repo is required to access 
it. Is anyone using cephfs-top themselves and able to point out the missing parts 
of this document that should be added?

--
Zach Heise



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to 
ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to 
ceph-users-le...@ceph.io


--
Neeraj Pratap Singh
He/Him/His
Associate Software Engineer, CephFS
neesi...@redhat.com

Red Hat Inc.


[ceph-users] Re: Grafana without presenting data from the first Host

2022-10-20 Thread Murilo Morais
Hi Marc, thanks for replying.

I already checked, nothing appears about the first Host even if I perform a
direct query from the Prometheus GUI.

Ernesto mentioned node-exporter, this service is running on all hosts.

Em qui., 20 de out. de 2022 às 04:21, Marc 
escreveu:

>
>
> >
> > I'm experiencing something strange on a cluster regarding monitoring. In
> > Grafana I can't see any data referring to the first Host, I've already
> > tried to redeploy Grafana and Prometheus, but the first Host never
> > appears,
> > if I go to Dashboard -> Hosts -> Performance Detail the first Host
> > always
> > appears with "no data", but the amount of OSDs and Raw Capacity appears.
> >
> > Another detail I noticed is that nothing is showing up in Monitoring,
> > even
> > though something appears on the start page, I've also tried to redeploy
> > alertmanager and it didn't help.
> >
> > I don't understand what could be happening.
>
> Did you check if the data is even there, with a query in the Prometheus
> GUI?
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Grafana without presenting data from the first Host

2022-10-20 Thread Murilo Morais
Ernesto, thanks for replying.

The service is UP on all hosts. Performing a direct query from Prometheus
also doesn't return anything about the first Host.

Em qui., 20 de out. de 2022 às 06:27, Ernesto Puerta 
escreveu:

> Hi Murilo,
>
> Is node-exporter running on that node? Most host page metrics are
> node-exporter's. As Marc suggested, you can also confirm it by checking
> whether there are "node_*" metrics for that node in the Prometheus web UI.
>
> Kind Regards,
> Ernesto
>
>
> On Thu, Oct 20, 2022 at 2:52 AM Murilo Morais 
> wrote:
>
>> Good evening everyone.
>>
>> I'm experiencing something strange on a cluster regarding monitoring. In
>> Grafana I can't see any data referring to the first Host, I've already
>> tried to redeploy Grafana and Prometheus, but the first Host never
>> appears,
>> if I go to Dashboard -> Hosts -> Performance Detail the first Host always
>> appears with "no data", but the amount of OSDs and Raw Capacity appears.
>>
>> Another detail I noticed is that nothing is showing up in Monitoring, even
>> though something appears on the start page, I've also tried to redeploy
>> alertmanager and it didn't help.
>>
>> I don't understand what could be happening.
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cluster network change

2022-10-20 Thread Andrei Mikhailovsky
Hello cephers, 

I've got a few questions for the community to help us with migrating a ceph 
cluster from Infiniband networking to 10G Ethernet with no or minimal downtime. 
Please find below the details of the cluster as well as info on what we are 
trying to achieve. 

1. Cluster Info: 
Ceph version - 15.2.15 
Four storage servers running mon + osd + mgr + rgw services 
Ubuntu 20.04 server 
Networks: Infiniband (storage network) (ipoib interface and NOT RDMA) 
192.168.168.0/24 ; 10G Ethernet (management network) 192.168.169.0/24 
each server has an IP in each of the networks, i.e. 192.168.168.201 and 
192.168.169.201 and so forth 
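In ceph.conf terms this presumably corresponds to something like the following 
(an assumption based on the addresses above, not taken from the actual config): 

[global] 
public_network = 192.168.168.0/24    # IPoIB storage network today 
cluster_network = 192.168.168.0/24   # or left unset if Ceph only uses one network 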

2. What we would like to do: 
We are decommissioning our Infiniband infrastructure and moving towards 10G 
Ethernet. We would like to move the ceph cluster from the current 
192.168.168.0/24 (IB) onto 192.168.169.0/24 (eth) running on 10G Ethernet. 
Alternatively, we could create a new ceph VLAN on 10G Ethernet and shift the IP 
range 192.168.168.0/24 to the new ceph VLAN running on 10G instead of 
Infiniband. We would like to make the move with no or minimal downtime, as we 
have critical services such as VMs running on top of ceph. 

Could someone suggest the best/safest route to take for such a migration? 

Is it a plausible scenario for one server to be switched to the 192.168.169 
network while the others run in the original 192.168.168 network? From the 
networking viewpoint it does not introduce any difficulties, but would it 
create problems with ceph itself? 

P.S. How would one go about changing the IP of a ceph server, provided the 
network remains the same? 

Many thanks 

Andrei 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Status of Quincy 17.2.5 ?

2022-10-20 Thread Chris Palmer
I do agree with Christian. I would like to see the Ceph repositories 
handled in a similar way to most others:


 * Testing or pre-release packages go into one (or more) testing repos
 * Production-ready packages go into the production repo

I don't care about the minor mirror-synch delay. What I do care about is 
only pulling packages that really are production-ready unless I 
explicitly go looking for a pre-release one...


Thanks, Chris

On 20/10/2022 09:12, Christian Rohmann wrote:


On 19/10/2022 16:30, Laura Flores wrote:
Dan is correct that 17.2.5 is a hotfix release. There was a flaw in 
the release process for 17.2.4 in which five commits were not 
included in the release. The users mailing list will hear an official 
announcement about this hotfix release later this week.


Thanks for the info.


1) May I bring up again my remarks about the timing:

On 19/10/2022 11:46, Christian Rohmann wrote:

I believe the upload of a new release to the repo prior to the 
announcement happens quite regularly - it might just be due to the 
technical process of releasing.
But I agree it would be nice to have a more "bit flip" approach to 
new releases in the repo, so that the packages do not appear as updates 
prior to the announcement and the final release and update notes.
By my observation there are sometimes packages available on the 
download servers via the "last stable" folders, such as 
https://download.ceph.com/debian-quincy/, quite some time before the 
announcement of a release is out.
I know it's hard to time this right with mirrors requiring some time 
to sync files, but it would be nice not to see the packages, or have 
people install them, before the release notes and potential 
pointers to changes are out.



2) Also, in cases like the 17.2.4 release containing a regression, it 
would be great to have both the N and N-1 releases there, to allow users to 
downgrade to the previous point release quickly in case they run into 
issues.
Otherwise one needs to configure the N-1 repo manually to still have 
access to the N-1 release.


And since these would just be links in the filesystem, this should not even 
take up extra space on the download servers or their mirrors.




Regards


Christian

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] What is the use case of your Ceph cluster? Developers want to know!

2022-10-20 Thread Laura Flores
Dear Ceph Users,

Ceph developers and doc writers are looking for responses from people in
the user/dev community who have experience with a Ceph cluster. Our
question: *What is the use case of your Ceph cluster?*

Since the first official Argonaut release in 2012, Ceph has greatly
expanded its features and user base. With the next major release on the
horizon, developers are now more curious than ever to know how people are
using their clusters in the wild.

Our goal is to share these insightful results with the community, as well
as make it easy for beginning developers (e.g. students from Google Summer
of Code, Outreachy, or Grace Hopper) to understand all the ways that Ceph
can be used.

We plan to add interesting use cases to our website [1]
and/or documentation [2].

In completing this survey, you'll have the option of providing your name or
remaining anonymous. If your use case is chosen to include on the website
or documentation, we will be sure to honor your choice of being recognized
or remaining anonymous.

Follow this link [3] to begin the survey. Feel free to reach out to me with any
questions!

- Laura Flores

1. Ceph website: https://ceph.io/
2. Ceph documentation: https://docs.ceph.com/en/latest/
3. Survey link:
https://docs.google.com/forms/d/e/1FAIpQLSceR8i2vmjdL34hbkhqyU5dAJjZKzjVokx2rI4sB2n1Q0fHKA/viewform?usp=sf_link

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage

Red Hat Inc. 

Chicago, IL

lflo...@redhat.com
M: +17087388804


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] radosgw networking

2022-10-20 Thread Wyll Ingersoll


What network does radosgw use when it reads/writes the objects to the cluster?

We have a high-speed cluster_network and want the radosgw to write data over 
that instead of the slower public_network if possible, is it configurable?

thanks!
  Wyllys Ingersoll



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: radosgw networking

2022-10-20 Thread Boris
AFAIK radosgw uses the public network to talk to the OSDs. 

You could ditch the cluster network and have the public network use the high 
speed cluster network connections?

Maybe there is another way, which I don't know. 
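A minimal sketch of that single-network approach in ceph.conf (the subnet is a 
placeholder):

[global]
public_network = 10.10.10.0/24   # put this on the fast links
# no cluster_network set, so replication traffic also uses the public network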

Cheers
 Boris

> Am 20.10.2022 um 18:58 schrieb Wyll Ingersoll 
> :
> 
> 
> What network does radosgw use when it reads/writes the objects to the cluster?
> 
> We have a high-speed cluster_network and want the radosgw to write data over 
> that instead of the slower public_network if possible, is it configurable?
> 
> thanks!
>  Wyllys Ingersoll
> 
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Quincy 22.04/Jammy packages

2022-10-20 Thread Goutham Pacha Ravi
On Tue, Oct 18, 2022 at 8:22 AM Reed Dier  wrote:

> Curious if there is a timeline for when quincy will start getting packages
> for Ubuntu Jammy/22.04.
> It looks like quincy started getting builds for EL9 with 17.2.4, and now
> with the 17.2.5 there are still only bullseye and focal dists available.
>
> Canonical is publishing a 17.2.0 build in jammy-updates, but obviously
> this lags the upstream releases by a decent margin.
>
> Hoping non-cephadm won’t be left in the dark.
>

+1
The OpenStack community is interested in this as well. We're trying to move
all our ubuntu testing to Ubuntu Jammy/22.04 [1]; and we consume packages
from download.ceph.com.

While we're adopting cephadm, a lot of OpenStack and Ceph deployers still
use other installers, and so the OpenStack CI system has had a barebones
install-from-package mechanism [2] that we use for our integration testing
with services like OpenStack Manila, Cinder, Glance and Nova.


[1] https://etherpad.opendev.org/p/migrate-to-jammy
[2] https://opendev.org/openstack/devstack-plugin-ceph


--
Goutham


>
> Reed
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] s3gw v0.7.0 released

2022-10-20 Thread Joao Eduardo Luis

# s3gw v0.7.0

The s3gw team is announcing the release of s3gw v0.7.0. This release 
contains fixes to known bugs and new features. This includes an early 
version of an object explorer via the web-based UI. See the CHANGELOG 
below for more information.


This project is still under early-stage development and is not 
recommended for production systems and upgrades are not guaranteed to 
succeed from one version to another. Additionally, although we strive 
for API parity with RADOSGW, features may still be missing.


Do not hesitate to provide constructive feedback.

## CHANGELOG

Exciting changes include:

- Bucket management features for non-admin users (create/update/delete 
buckets) on the UI.

- Different improvements on the UI.
- Several bug fixes.
- Improved charts.

Full changelog can be found at 
https://github.com/aquarist-labs/s3gw/releases/tag/v0.7.0


## OBTAINING s3gw

Container images can be found on GitHub’s container registry:

ghcr.io/aquarist-labs/s3gw:v0.7.0
ghcr.io/aquarist-labs/s3gw-ui:v0.7.0

Additionally, a helm chart [1] is available at ArtifactHUB:

https://artifacthub.io/packages/helm/s3gw/s3gw

For additional information, see the documentation:

https://s3gw-docs.readthedocs.io/en/latest/

## WHAT IS s3gw

s3gw is an S3-compatible service that focuses on deployment within a 
Kubernetes environment backed by any PVC, including Longhorn [2]. Since 
its inception, the primary focus has been on Cloud Native deployments. 
However, s3gw can be deployed in a myriad of scenarios (including a 
standalone container), provided it has some form of storage attached.


s3gw is based on Ceph’s RADOSGW but runs as a stand-alone service 
without the RADOS cluster and relies on a storage backend still under 
heavy development by the storage team at SUSE. Additionally, the s3gw 
team is developing a web-based UI for management and an object explorer.


More information can be found at https://aquarist-labs.io/s3gw/ or 
https://github.com/aquarist-labs/s3gw/ .


  -Joao and the s3gw team

[1] https://github.com/aquarist-labs/s3gw-charts
[2] https://longhorn.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] How to determine if a filesystem is allow_standby_replay = true

2022-10-20 Thread Wesley Dillingham
I am building some automation for version upgrades of MDS and part of the
process I would like to determine if a filesystem has allow_standby_replay
set to true and if so then disable it. Granted I could just issue: "ceph fs
set MyFS allow_standby_replay false" and be done with it, but it's got me
curious that there is not an equivalent command: "ceph fs get MyFS
allow_standby_replay" to check this information. So where can an operator
determine this?

I tried a diff of "ceph fs get MyFS" with this configurable in both true
and false and found:

diff /tmp/true /tmp/false
3,4c3,4
< epoch 66
< flags 32
---
> epoch 67
> flags 12

and I'm guessing this information is encoded in the "flags" field. I am
working with 16.2.10. Thanks.

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: s3gw v0.7.0 released

2022-10-20 Thread Matt Benjamin
The ability to run as a stand-alone service without a RADOS service comes
from the Zipper API work, which is part of upstream Ceph RGW, obviously.
It should relatively soon be possible to load new Zipper store drivers
(backends) at runtime, so there won't be a need to maintain a fork of Ceph
RGW.

regards,

Matt

On Thu, Oct 20, 2022 at 1:34 PM Joao Eduardo Luis  wrote:

> # s3gw v0.7.0
>
> The s3gw team is announcing the release of s3gw v0.7.0. This release
> contains fixes to known bugs and new features. This includes an early
> version of an object explorer via the web-based UI. See the CHANGELOG
> below for more information.
>
> This project is still under early-stage development and is not
> recommended for production systems and upgrades are not guaranteed to
> succeed from one version to another. Additionally, although we strive
> for API parity with RADOSGW, features may still be missing.
>
> Do not hesitate to provide constructive feedback.
>
> ## CHANGELOG
>
> Exciting changes include:
>
> - Bucket management features for non-admin users (create/update/delete
> buckets) on the UI.
> - Different improvements on the UI.
> - Several bug fixes.
> - Improved charts.
>
> Full changelog can be found at
> https://github.com/aquarist-labs/s3gw/releases/tag/v0.7.0
>
> ## OBTAINING s3gw
>
> Container images can be found on GitHub’s container registry:
>
>  ghcr.io/aquarist-labs/s3gw:v0.7.0
>  ghcr.io/aquarist-labs/s3gw-ui:v0.7.0
>
> Additionally, a helm chart [1] is available at ArtifactHUB:
>
>  https://artifacthub.io/packages/helm/s3gw/s3gw
>
> For additional information, see the documentation:
>
>  https://s3gw-docs.readthedocs.io/en/latest/
>
> ## WHAT IS s3gw
>
> s3gw is an S3-compatible service that focuses on deployment within a
> Kubernetes environment backed by any PVC, including Longhorn [2]. Since
> its inception, the primary focus has been on Cloud Native deployments.
> However, s3gw can be deployed in a myriad of scenarios (including a
> standalone container), provided it has some form of storage attached.
>
> s3gw is based on Ceph’s RADOSGW but runs as a stand-alone service
> without the RADOS cluster and relies on a storage backend still under
> heavy development by the storage team at SUSE. Additionally, the s3gw
> team is developing a web-based UI for management and an object explorer.
>
> More information can be found at https://aquarist-labs.io/s3gw/ or
> https://github.com/aquarist-labs/s3gw/ .
>
>-Joao and the s3gw team
>
> [1] https://github.com/aquarist-labs/s3gw-charts
> [2] https://longhorn.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: s3gw v0.7.0 released

2022-10-20 Thread Joao Eduardo Luis

On 2022-10-20 17:46, Matt Benjamin wrote:
The ability to run as a stand-alone service without a RADOS service comes 
from the Zipper API work, which is part of upstream Ceph RGW, obviously.

It should relatively soon be possible to load new Zipper store drivers
(backends) at runtime, so there won't be a need to maintain a fork of Ceph 
RGW.


Indeed it does. None of this would be possible without Zipper and the 
SAL abstraction work. :)


  -Joao



regards,

Matt

On Thu, Oct 20, 2022 at 1:34 PM Joao Eduardo Luis  
wrote:



# s3gw v0.7.0

The s3gw team is announcing the release of s3gw v0.7.0. This release
contains fixes to known bugs and new features. This includes an early
version of an object explorer via the web-based UI. See the CHANGELOG
below for more information.

This project is still under early-stage development and is not
recommended for production systems and upgrades are not guaranteed to
succeed from one version to another. Additionally, although we strive
for API parity with RADOSGW, features may still be missing.

Do not hesitate to provide constructive feedback.

## CHANGELOG

Exciting changes include:

- Bucket management features for non-admin users (create/update/delete
buckets) on the UI.
- Different improvements on the UI.
- Several bug fixes.
- Improved charts.

Full changelog can be found at
https://github.com/aquarist-labs/s3gw/releases/tag/v0.7.0

## OBTAINING s3gw

Container images can be found on GitHub’s container registry:

 ghcr.io/aquarist-labs/s3gw:v0.7.0
 ghcr.io/aquarist-labs/s3gw-ui:v0.7.0

Additionally, a helm chart [1] is available at ArtifactHUB:

 https://artifacthub.io/packages/helm/s3gw/s3gw

For additional information, see the documentation:

 https://s3gw-docs.readthedocs.io/en/latest/

## WHAT IS s3gw

s3gw is an S3-compatible service that focuses on deployment within a
Kubernetes environment backed by any PVC, including Longhorn [2]. Since
its inception, the primary focus has been on Cloud Native deployments.
However, s3gw can be deployed in a myriad of scenarios (including a
standalone container), provided it has some form of storage attached.

s3gw is based on Ceph’s RADOSGW but runs as a stand-alone service
without the RADOS cluster and relies on a storage backend still under
heavy development by the storage team at SUSE. Additionally, the s3gw
team is developing a web-based UI for management and an object explorer.


More information can be found at https://aquarist-labs.io/s3gw/ or
https://github.com/aquarist-labs/s3gw/ .

   -Joao and the s3gw team

[1] https://github.com/aquarist-labs/s3gw-charts
[2] https://longhorn.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io




--

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: s3gw v0.7.0 released

2022-10-20 Thread Matt Benjamin
And to clarify, too, this Aquarium work is the first attempt by folks to
build a file-backed storage setup; it's great to see innovation around this.

Matt

On Thu, Oct 20, 2022 at 1:50 PM Joao Eduardo Luis  wrote:

> On 2022-10-20 17:46, Matt Benjamin wrote:
> > The ability to run as a stand-alone service without a RADOS service
> > comes
> > from the Zipper API work, which is part of upstream Ceph RGW,
> > obviously.
> > It should relatively soon be possible to load new Zipper store drivers
> > (backends) at runtime, so there won't be a need to maintain a fork of
> > Ceph
> > RGW.
>
> Indeed it does. None of this would be possible without Zipper and the
> SAL abstraction work. :)
>
>-Joao
>
> >
> > regards,
> >
> > Matt
> >
> > On Thu, Oct 20, 2022 at 1:34 PM Joao Eduardo Luis 
> > wrote:
> >
> >> # s3gw v0.7.0
> >>
> >> The s3gw team is announcing the release of s3gw v0.7.0. This release
> >> contains fixes to known bugs and new features. This includes an early
> >> version of an object explorer via the web-based UI. See the CHANGELOG
> >> below for more information.
> >>
> >> This project is still under early-stage development and is not
> >> recommended for production systems and upgrades are not guaranteed to
> >> succeed from one version to another. Additionally, although we strive
> >> for API parity with RADOSGW, features may still be missing.
> >>
> >> Do not hesitate to provide constructive feedback.
> >>
> >> ## CHANGELOG
> >>
> >> Exciting changes include:
> >>
> >> - Bucket management features for non-admin users (create/update/delete
> >> buckets) on the UI.
> >> - Different improvements on the UI.
> >> - Several bug fixes.
> >> - Improved charts.
> >>
> >> Full changelog can be found at
> >> https://github.com/aquarist-labs/s3gw/releases/tag/v0.7.0
> >>
> >> ## OBTAINING s3gw
> >>
> >> Container images can be found on GitHub’s container registry:
> >>
> >>  ghcr.io/aquarist-labs/s3gw:v0.7.0
> >>  ghcr.io/aquarist-labs/s3gw-ui:v0.7.0
> >>
> >> Additionally, a helm chart [1] is available at ArtifactHUB:
> >>
> >>  https://artifacthub.io/packages/helm/s3gw/s3gw
> >>
> >> For additional information, see the documentation:
> >>
> >>  https://s3gw-docs.readthedocs.io/en/latest/
> >>
> >> ## WHAT IS s3gw
> >>
> >> s3gw is an S3-compatible service that focuses on deployment within a
> >> Kubernetes environment backed by any PVC, including Longhorn [2].
> >> Since
> >> its inception, the primary focus has been on Cloud Native deployments.
> >> However, s3gw can be deployed in a myriad of scenarios (including a
> >> standalone container), provided it has some form of storage attached.
> >>
> >> s3gw is based on Ceph’s RADOSGW but runs as a stand-alone service
> >> without the RADOS cluster and relies on a storage backend still under
> >> heavy development by the storage team at SUSE. Additionally, the s3gw
> >> team is developing a web-based UI for management and an object
> >> explorer.
> >>
> >> More information can be found at https://aquarist-labs.io/s3gw/ or
> >> https://github.com/aquarist-labs/s3gw/ .
> >>
> >>-Joao and the s3gw team
> >>
> >> [1] https://github.com/aquarist-labs/s3gw-charts
> >> [2] https://longhorn.io
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> >
> >
> > --
> >
> > Matt Benjamin
> > Red Hat, Inc.
> > 315 West Huron Street, Suite 140A
> > Ann Arbor, Michigan 48103
> >
> > http://www.redhat.com/en/technologies/storage
> >
> > tel.  734-821-5101
> > fax.  734-769-8938
> > cel.  734-216-5309
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
>

-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to determine if a filesystem is allow_standby_replay = true

2022-10-20 Thread Dhairya Parmar
Hi Wesley,

You can find out whether `allow_standby_replay` is turned on or off by looking
at the fs dump.
Run `ceph fs dump | grep allow_standby_replay`, and if it is turned on you
will find something like:

$ ./bin/ceph fs dump | grep allow_standby_replay
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2022-10-21T00:06:14.656+0530 7fbed4fc3640 -1 WARNING: all dangerous and
experimental features are enabled.
2022-10-21T00:06:14.663+0530 7fbed4fc3640 -1 WARNING: all dangerous and
experimental features are enabled.
dumped fsmap epoch 8
flags 32 joinable allow_snaps allow_multimds_snaps *allow_standby_replay*

turn it to false and it will be gone:

$ ./bin/ceph fs set a allow_standby_replay false
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2022-10-21T00:10:38.668+0530 7f68b66f0640 -1 WARNING: all dangerous and
experimental features are enabled.
2022-10-21T00:10:38.675+0530 7f68b66f0640 -1 WARNING: all dangerous and
experimental features are enabled.
$ ./bin/ceph fs dump | grep allow_standby_replay
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2022-10-21T00:10:43.938+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
experimental features are enabled.
2022-10-21T00:10:43.945+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
experimental features are enabled.
dumped fsmap epoch 15

Hope it helps.


On Thu, Oct 20, 2022 at 11:09 PM Wesley Dillingham 
wrote:

> I am building some automation for version upgrades of MDS and part of the
> process I would like to determine if a filesystem has allow_standby_replay
> set to true and if so then disable it. Granted I could just issue: "ceph fs
> set MyFS allow_standby_replay false" and be done with it but Its got me
> curious that there is not the equivalent command: "ceph fs get MyFS
> allow_standby_replay" to check this information. So where can an operator
> determine this?
>
> I tried a diff of "ceph fs get MyFS" with this configurable in both true
> and false and found:
>
> diff /tmp/true /tmp/false
> 3,4c3,4
> < epoch 66
> < flags 32
> ---
> > epoch 67
> > flags 12
>
> and Im guessing this information is encoded  in the "flags" field. I am
> working with 16.2.10. Thanks.
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>

-- 
*Dhairya Parmar*

He/Him/His

Associate Software Engineer, CephFS

Red Hat Inc. 

dpar...@redhat.com

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CephFS performance

2022-10-20 Thread quag...@bol.com.br
Hello everyone,
    I have some considerations and questions to ask...

    I work at an HPC center and my doubts stem from performance in this environment. All clusters here were suffering from NFS performance problems and also from the single point of failure that NFS represents.
    
    At that time, we decided to evaluate some available SDS and the chosen one was Ceph (first for its resilience and later for its performance).
    I deployed CephFS on a small cluster: 6 nodes with 1 HDD per machine and a 1Gbps connection.
    The performance was as good as the large NFS we have on another cluster (while spending much less). In addition, I was able to evaluate all the resiliency benefits that Ceph offers, such as taking an OSD, MDS, MON or MGR server offline and having the objects/services settle on other nodes - all in a way that the users did not even notice.

    Given this information, a new storage cluster was acquired last year with 6 machines and 22 disks (HDDs) per machine. The need was for the amount of available GBs; the amount of IOPS was not so important at that time.

    Right at the beginning, I had a lot of work to do to optimize performance in the cluster (the main deficiency was metadata access/write performance). The problem was not in job execution, but in the users' perception of slowness when running interactive commands (my impression was that the slowness was in Ceph's metadata handling).
    There were a few months of high loads in which storage was the bottleneck of the environment.

    After a lot of research in the documentation, I made several optimizations to the available parameters, and currently CephFS is able to reach around 10k IOPS (using size=2).
    
    Anyway, my boss asked for other solutions to be evaluated to verify the performance issue.
    First of all, it was suggested to put the metadata on SSD disks for a higher amount of IOPS.
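    (For reference, a minimal sketch of one way to pin the metadata pool to SSDs with a device-class CRUSH rule; the rule and pool names below are examples only:)

    $ ceph osd crush rule create-replicated replicated-ssd default host ssd
    $ ceph osd pool set cephfs_metadata crush_rule replicated-ssd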
    In addition, a test environment was set up and the solution that made the most difference in performance was with BeeGFS.
    
    In some situations, BeeGFS is many times faster than Ceph in the same tests and under the same hardware conditions. This happens for both throughput (BW) and IOPS.
    
    We tested it using io500 as follows:
    1-) An individual process
    2-) 8 processes (4 processes on 2 different machines)
    3-) 16 processes (8 processes on 2 different machines)
    
    I did tests configuring CephFS to use:
    * HDD only (for both data and metadata)
    * Metadata on SSD
    * Using Linux FSCache features
    * With some optimizations (increasing MDS memory, client memory, inflight parameters, etc)
    * Cache tier with SSD
    
    Even so, the benchmark score was lower than the BeeGFS installed without any optimization. This difference becomes even more evident as the number of simultaneous accesses increases.
    
    The two best CephFS results were with metadata on SSD and with a cache tier on SSD.
    
    Here is the result of Ceph's performance when compared to BeeGFS:

Bandwidth Test (bw is in GB/s):

| fs             | bw       | process |
|----------------|----------|---------|
| beegfs-metassd | 0.078933 | 01      |
| beegfs-metassd | 0.051855 | 08      |
| beegfs-metassd | 0.039459 | 16      |
| cephmetassd    | 0.022489 | 01      |
| cephmetassd    | 0.009789 | 08      |
| cephmetassd    | 0.002957 | 16      |
| cephcache      | 0.023966 | 01      |
| cephcache      | 0.021131 | 08      |
| cephcache      | 0.007782 | 16      |

IOPS Test:

| fs             | iops     | process |
|----------------|----------|---------|
| beegfs-metassd | 0.740658 | 01      |
| beegfs-metassd | 3.508879 | 08      |
| beegfs-metassd | 6.514768 | 16      |
| cephmetassd    | 1.224963 | 01      |
| cephmetassd    | 3.762794 | 08      |
| cephmetassd    | 3.188686 | 16      |
| cephcache      | 1.829107 | 01      |
| cephcache      | 3.257963 | 08      |
| cephcache      | 3.524081 | 16      |

    I imagine that if I tested with 32 processes, BeeGFS would look even better.
    
    Do you have any recommendations for me to apply to Ceph without reducing resilience?
    
Rafael.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-10-20 Thread Sean Matheny
HI all, 

We've deployed a new cluster on Quincy 17.2.3 with 260x 18TB spinners across 11 
chassis that will be used exclusively in the next year or so as an S3 store: 
100Gb per chassis shared by both cluster and public networks, NVMe DB/WAL, 32 
physical cores @ 2.3GHz base, and 192GB of RAM per chassis (per 24 OSDs).

We're looking to use the clay EC plugin for our rgw (data) pool, as it appears 
to require fewer reads during recovery, which might be beneficial. I'm going to be 
benchmarking recovery scenarios ahead of production, but that of course doesn't 
give a view on longer-term reliability. :)  Has anyone heard of any bad experiences, 
or any reason not to use it over jerasure? Any reason to use cauchy-good instead 
of reed-solomon for the use case above?
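For anyone following along, creating such a profile looks roughly like this (the 
k/m/d values are placeholders, not a recommendation):

  $ ceph osd erasure-code-profile set clay-example plugin=clay k=8 m=3 d=10 crush-failure-domain=host
  $ ceph osd erasure-code-profile get clay-example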


Ngā mihi,

Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)

e: sean.math...@nesi.org.nz



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to determine if a filesystem is allow_standby_replay = true

2022-10-20 Thread Wesley Dillingham
Thanks Dhairya, what version are you using? I am 16.2.10

[root@alma3-4 ~]# ceph fs dump | grep -i replay
dumped fsmap epoch 90
[mds.alma3-6{0:10340349} state up:standby-replay seq 1 addr [v2:
10.0.24.6:6803/937383171,v1:10.0.24.6:6818/937383171] compat
{c=[1],r=[1],i=[7ff]}]

As you can see I have an MDS in standby-replay mode and standby replay is enabled,
but my output is different from yours.

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Thu, Oct 20, 2022 at 2:43 PM Dhairya Parmar  wrote:

> Hi Wesley,
>
> You can find if the `allow_standby_replay` is turned on or off by looking
> at the fs dump,
> run `ceph fs dump | grep allow_standby_replay` and if it is turned on you
> will find something like:
>
> $ ./bin/ceph fs dump | grep allow_standby_replay
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> 2022-10-21T00:06:14.656+0530 7fbed4fc3640 -1 WARNING: all dangerous and
> experimental features are enabled.
> 2022-10-21T00:06:14.663+0530 7fbed4fc3640 -1 WARNING: all dangerous and
> experimental features are enabled.
> dumped fsmap epoch 8
> flags 32 joinable allow_snaps allow_multimds_snaps *allow_standby_replay*
>
> turn it to false and it will be gone:
>
> $ ./bin/ceph fs set a allow_standby_replay false
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> 2022-10-21T00:10:38.668+0530 7f68b66f0640 -1 WARNING: all dangerous and
> experimental features are enabled.
> 2022-10-21T00:10:38.675+0530 7f68b66f0640 -1 WARNING: all dangerous and
> experimental features are enabled.
> $ ./bin/ceph fs dump | grep allow_standby_replay
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> 2022-10-21T00:10:43.938+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
> experimental features are enabled.
> 2022-10-21T00:10:43.945+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
> experimental features are enabled.
> dumped fsmap epoch 15
>
> Hope it helps.
>
>
> On Thu, Oct 20, 2022 at 11:09 PM Wesley Dillingham 
> wrote:
>
>> I am building some automation for version upgrades of MDS and part of the
>> process I would like to determine if a filesystem has allow_standby_replay
>> set to true and if so then disable it. Granted I could just issue: "ceph
>> fs
>> set MyFS allow_standby_replay false" and be done with it but Its got me
>> curious that there is not the equivalent command: "ceph fs get MyFS
>> allow_standby_replay" to check this information. So where can an operator
>> determine this?
>>
>> I tried a diff of "ceph fs get MyFS" with this configurable in both true
>> and false and found:
>>
>> diff /tmp/true /tmp/false
>> 3,4c3,4
>> < epoch 66
>> < flags 32
>> ---
>> > epoch 67
>> > flags 12
>>
>> and Im guessing this information is encoded  in the "flags" field. I am
>> working with 16.2.10. Thanks.
>>
>> Respectfully,
>>
>> *Wes Dillingham*
>> w...@wesdillingham.com
>> LinkedIn 
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>>
>
> --
> *Dhairya Parmar*
>
> He/Him/His
>
> Associate Software Engineer, CephFS
>
> Red Hat Inc. 
>
> dpar...@redhat.com
> 
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to determine if a filesystem is allow_standby_replay = true

2022-10-20 Thread Xiubo Li

Hi Wesley,

You can also just run:

$ ceph fs get MyFS|grep flags
flags    32 joinable allow_snaps allow_multimds_snaps allow_standby_replay

And if you can see the "allow_standby_replay" flag as above, that means it's 
enabled; if not, it is disabled.
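For automation along the lines Wesley described, a minimal sketch (the filesystem 
name is a placeholder):

$ ceph fs get MyFS | grep -q allow_standby_replay && ceph fs set MyFS allow_standby_replay false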


- Xiubo


On 21/10/2022 05:58, Wesley Dillingham wrote:

Thanks Dhairya, what version are you using? I am 16.2.10

[root@alma3-4 ~]# ceph fs dump | grep -i replay
dumped fsmap epoch 90
[mds.alma3-6{0:10340349} state up:standby-replay seq 1 addr [v2:
10.0.24.6:6803/937383171,v1:10.0.24.6:6818/937383171] compat
{c=[1],r=[1],i=[7ff]}]

as you can see i have a MDS in replay mode and standby replay is enabled
but my output is different from yours.

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Thu, Oct 20, 2022 at 2:43 PM Dhairya Parmar  wrote:


Hi Wesley,

You can find if the `allow_standby_replay` is turned on or off by looking
at the fs dump,
run `ceph fs dump | grep allow_standby_replay` and if it is turned on you
will find something like:

$ ./bin/ceph fs dump | grep allow_standby_replay
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2022-10-21T00:06:14.656+0530 7fbed4fc3640 -1 WARNING: all dangerous and
experimental features are enabled.
2022-10-21T00:06:14.663+0530 7fbed4fc3640 -1 WARNING: all dangerous and
experimental features are enabled.
dumped fsmap epoch 8
flags 32 joinable allow_snaps allow_multimds_snaps *allow_standby_replay*

turn it to false and it will be gone:

$ ./bin/ceph fs set a allow_standby_replay false
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2022-10-21T00:10:38.668+0530 7f68b66f0640 -1 WARNING: all dangerous and
experimental features are enabled.
2022-10-21T00:10:38.675+0530 7f68b66f0640 -1 WARNING: all dangerous and
experimental features are enabled.
$ ./bin/ceph fs dump | grep allow_standby_replay
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2022-10-21T00:10:43.938+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
experimental features are enabled.
2022-10-21T00:10:43.945+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
experimental features are enabled.
dumped fsmap epoch 15

Hope it helps.


On Thu, Oct 20, 2022 at 11:09 PM Wesley Dillingham 
wrote:


I am building some automation for version upgrades of MDS and part of the
process I would like to determine if a filesystem has allow_standby_replay
set to true and if so then disable it. Granted I could just issue: "ceph
fs
set MyFS allow_standby_replay false" and be done with it but Its got me
curious that there is not the equivalent command: "ceph fs get MyFS
allow_standby_replay" to check this information. So where can an operator
determine this?

I tried a diff of "ceph fs get MyFS" with this configurable in both true
and false and found:

diff /tmp/true /tmp/false
3,4c3,4
< epoch 66
< flags 32
---

epoch 67
flags 12

and Im guessing this information is encoded  in the "flags" field. I am
working with 16.2.10. Thanks.

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



--
*Dhairya Parmar*

He/Him/His

Associate Software Engineer, CephFS

Red Hat Inc. 

dpar...@redhat.com



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Quincy - Support with NFS Ganesha on Alma

2022-10-20 Thread Lokendra Rathour
Hi, Thanks for the update, it worked.
Thanks once again.

On Wed, Oct 19, 2022 at 11:32 AM Tahder Xunil  wrote:

> Hi Lokendra
>
> It seems Alma doesn't support the quincy version.
> If you do 'rpm -qa | grep ceph', you will see that only nautilus, octopus
> and pacific are supported for AlmaLinux 8.x.
> i.e. dnf install centos-release-ceph-pacific
> that will also install the common repository.
>
> But Rocky Linux does support quincy (except for Rocky 9); no idea whether
> Alma Linux really supports the CentOS Ceph SIG.
> Or, altering the repos manually (instead of using the centos-release-ceph-*
> package as above), create two repo files, namely
> CentOS-NFS-Ganesha-3.repo (change the 3 to 4 if using the latest version) and
> CentOS-Ceph-Quincy.repo
>
>
> cd /etc/yum.repos.d
>
> *cat CentOS-NFS-Ganesha-3.repo *
>>
>> [centos-nfs-ganesha3]
>> name=CentOS-$releasever - NFS Ganesha 3
>> mirrorlist=http://mirrorlist.centos.org?arch=$basearch&release=8-stream&repo=storage-nfsganesha-3
>> #baseurl=https://mirror.centos.org/centos/8-stream//storage/$basearch/nfsganesha-3/
>> gpgcheck=1
>> enabled=1
>> gpgkey=https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage
>
>
>
> *cat CentOS-Ceph-Quincy.repo*
>>
>> [centos-ceph-quincy]
>> name=CentOS-$releasever - Ceph Quincy
>> mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=storage-ceph-quincy
>> #baseurl=http://mirror.centos.org/centos/8-stream/storage/$basearch/ceph-quincy/
>> gpgcheck=1
>> enabled=1
>> gpgkey=https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage
>
>
>
>
> If using Ganesha 4, alter the above ganesha-3 repo as follows:
>
>> [centos-nfs-ganesha4]
>> name=CentOS-$releasever - NFS Ganesha 4
>> mirrorlist=http://mirrorlist.centos.org?arch=$basearch&release=8-stream&repo=storage-nfsganesha-4
>> #baseurl=https://mirror.centos.org/centos/8-stream//storage/$basearch/nfsganesha-4/
>> gpgcheck=1
>> enabled=1
>> gpgkey=https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage
>
>
>
> Then:
>
> *dnf install nfs-ganesha-ceph  *
>
>
> Hope this helps.
>
> On Wed, Oct 19, 2022 at 4:25 PM Lokendra Rathour <
> lokendrarath...@gmail.com> wrote:
>
>> Hi,
>> was trying to get the NFS Ganesha installed on the Alma 8.5 with Ceph
>> Quincy releases. getting errors of some packages not being available. For
>> example: 'librgw'
>>  nothing provides librgw.so.2()(64bit) needed by
>> nfs-ganesha-rgw-3.5-3.el8.x86_64
>>  nothing provides libcephfs.so.2()(64bit) needed by
>> nfs-ganesha-ceph-3.5-3.el8.x86_64
>> checking further we could not see the related packages available.
>> Please advise , we need to get this installed with NFS Ganesha
>>
>> Thanks for your all time support.
>>
>>
>> --
>> ~ Lokendra
>> skype: lokendrarathour
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>

-- 
~ Lokendra
skype: lokendrarathour
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to determine if a filesystem is allow_standby_replay = true

2022-10-20 Thread Dhairya Parmar
Hi Wesley,

It's 17.0.0-14319-ga686eb80799 (a686eb80799dc503a45002f4b9181f4573e8e0b3)
quincy (dev)

On Fri, Oct 21, 2022 at 3:29 AM Wesley Dillingham 
wrote:

> Thanks Dhairya, what version are you using? I am 16.2.10
>
> [root@alma3-4 ~]# ceph fs dump | grep -i replay
> dumped fsmap epoch 90
> [mds.alma3-6{0:10340349} state up:standby-replay seq 1 addr [v2:
> 10.0.24.6:6803/937383171,v1:10.0.24.6:6818/937383171] compat
> {c=[1],r=[1],i=[7ff]}]
>
> as you can see i have a MDS in replay mode and standby replay is enabled
> but my output is different from yours.
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn 
>
>
> On Thu, Oct 20, 2022 at 2:43 PM Dhairya Parmar  wrote:
>
>> Hi Wesley,
>>
>> You can find if the `allow_standby_replay` is turned on or off by looking
>> at the fs dump,
>> run `ceph fs dump | grep allow_standby_replay` and if it is turned on you
>> will find something like:
>>
>> $ ./bin/ceph fs dump | grep allow_standby_replay
>> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>> 2022-10-21T00:06:14.656+0530 7fbed4fc3640 -1 WARNING: all dangerous and
>> experimental features are enabled.
>> 2022-10-21T00:06:14.663+0530 7fbed4fc3640 -1 WARNING: all dangerous and
>> experimental features are enabled.
>> dumped fsmap epoch 8
>> flags 32 joinable allow_snaps allow_multimds_snaps *allow_standby_replay*
>>
>> turn it to false and it will be gone:
>>
>> $ ./bin/ceph fs set a allow_standby_replay false
>> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>> 2022-10-21T00:10:38.668+0530 7f68b66f0640 -1 WARNING: all dangerous and
>> experimental features are enabled.
>> 2022-10-21T00:10:38.675+0530 7f68b66f0640 -1 WARNING: all dangerous and
>> experimental features are enabled.
>> $ ./bin/ceph fs dump | grep allow_standby_replay
>> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>> 2022-10-21T00:10:43.938+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
>> experimental features are enabled.
>> 2022-10-21T00:10:43.945+0530 7fe6b3e7a640 -1 WARNING: all dangerous and
>> experimental features are enabled.
>> dumped fsmap epoch 15
>>
>> Hope it helps.
>>
>>
>> On Thu, Oct 20, 2022 at 11:09 PM Wesley Dillingham 
>> wrote:
>>
>>> I am building some automation for version upgrades of MDS and part of the
>>> process I would like to determine if a filesystem has
>>> allow_standby_replay
>>> set to true and if so then disable it. Granted I could just issue: "ceph
>>> fs
>>> set MyFS allow_standby_replay false" and be done with it but Its got me
>>> curious that there is not the equivalent command: "ceph fs get MyFS
>>> allow_standby_replay" to check this information. So where can an operator
>>> determine this?
>>>
>>> I tried a diff of "ceph fs get MyFS" with this configurable in both true
>>> and false and found:
>>>
>>> diff /tmp/true /tmp/false
>>> 3,4c3,4
>>> < epoch 66
>>> < flags 32
>>> ---
>>> > epoch 67
>>> > flags 12
>>>
>>> and Im guessing this information is encoded  in the "flags" field. I am
>>> working with 16.2.10. Thanks.
>>>
>>> Respectfully,
>>>
>>> *Wes Dillingham*
>>> w...@wesdillingham.com
>>> LinkedIn 
>>> ___
>>> ceph-users mailing list -- ceph-users@ceph.io
>>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>>
>>>
>>
>> --
>> *Dhairya Parmar*
>>
>> He/Him/His
>>
>> Associate Software Engineer, CephFS
>>
>> Red Hat Inc. 
>>
>> dpar...@redhat.com
>> 
>>
>

-- 
*Dhairya Parmar*

He/Him/His

Associate Software Engineer, CephFS

Red Hat Inc. 

dpar...@redhat.com

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io