[ceph-users] Re: Certificate for Dashboard / Grafana

2020-11-25 Thread E Taka
FYI: I've found a solution for the Grafana Certificate. Just run the
following commands:

1.
ceph config-key set mgr/cephadm/grafana_crt -i <certificate file>
ceph config-key set mgr/cephadm/grafana_key -i <key file>

2.
ceph orch redeploy grafana

3.
ceph config set mgr mgr/dashboard/GRAFANA_API_URL
https://ceph01.domain.tld:3000
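
For anyone following along, this is roughly how the certificate can be generated
and loaded; a self-signed sketch where the file names and the CN are
placeholders for your own setup:

```
# self-signed certificate matching the Grafana host name (example CN)
openssl req -new -nodes -x509 -days 720 \
    -keyout grafana.key -out grafana.crt \
    -subj "/CN=ceph01.domain.tld"

# load it into the cephadm config store and verify it was stored
ceph config-key set mgr/cephadm/grafana_crt -i grafana.crt
ceph config-key set mgr/cephadm/grafana_key -i grafana.key
ceph config-key get mgr/cephadm/grafana_crt
```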
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph on ARM ?

2020-11-25 Thread Kevin Thorpe
Indeed it does run very happily on ARM. We have three of the Mars 400
appliances from Ambedded and they work exceedingly well. 8 micro servers
per chassis. 1 is a MON server and the rest are OSD (or other services).
Each micro server is running CentOS 7 and ceph so real easy to maintain.
And each chassis is only 105W IIRC.

-- 
*Kevin Thorpe*

VP of Enterprise Platform



*W* *|* www.predictx.com

*P * *|* +44 (0)20 3005 6750 | +44 (0)808 204 0344
*A* *|* 7th Floor, 52 Grosvenor Gardens, London SW1W 0AU



  

_

This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please notify the system manager.
This message contains confidential information and is intended only for the
individual named. If you are not the named addressee you should not
disseminate, distribute or copy this e-mail. Please notify the sender
immediately by e-mail if you have received this e-mail by mistake and
delete this e-mail from your system. If you are not the intended recipient
you are notified that disclosing, copying, distributing or taking any
action in reliance on the contents of this information is strictly
prohibited.


On Tue, 24 Nov 2020 at 16:23,  wrote:

> Adrian;
>
> I've always considered the advantage of ARM to be the reduction in the
> failure domain.  Instead of one server with 2 processors, and 2 power
> supplies, in 1 case, running 48 disks, you can do  4 cases containing 8
> power supplies, and 32 processors running 32 (or 64...) disks.
>
> The architecture is different with ARM; you pair an ARM SoC up with just
> one or 2 disks, and you only run the OSD software.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
> -Original Message-
> From: Robert Sander [mailto:r.san...@heinlein-support.de]
> Sent: Tuesday, November 24, 2020 5:56 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Ceph on ARM ?
>
> Am 24.11.20 um 13:12 schrieb Adrian Nicolae:
>
> > Has anyone tested Ceph in such scenario ?  Is the Ceph software
> > really optimised for the ARM architecture ?
>
> Personally I have not run Ceph on ARM, but there are companies selling
> such setups:
>
> https://softiron.com/
> https://www.ambedded.com.tw/
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein -- Sitz: Berlin
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] S3 Object Lock - ceph nautilus

2020-11-25 Thread Torsten Ennenbach
We are running our S3 Ceph cluster on Nautilus 14.2.9 and want to use the
object-lock feature.
Creating a lock-enabled bucket works with:

 aws s3api create-bucket --bucket locktest --endpoint http://our-s3 --object-lock-enabled-for-bucket

 aws s3api put-object-lock-configuration --bucket locktest --endpoint http://our-s3 --object-lock-configuration '{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 50 }}}'

But objects in that bucket are still deletable, and we don't know why, since
this feature was backported to 14.2.5, as you can see here:
https://github.com/ceph/ceph/pull/29905
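
For reference, this is how we check the lock configuration; a sketch with the
same example bucket and endpoint (spelling out --endpoint-url here). One thing
we are not sure about: object lock applies to object versions, so, at least
with AWS semantics, a plain DELETE only adds a delete marker, while deleting a
specific locked version-id is what should actually be refused:

```
aws s3api get-object-lock-configuration --bucket locktest --endpoint-url http://our-s3
aws s3api get-bucket-versioning --bucket locktest --endpoint-url http://our-s3
aws s3api list-object-versions --bucket locktest --endpoint-url http://our-s3

# key and version-id are examples; this should be refused while retention is active
aws s3api delete-object --bucket locktest --key testfile \
    --version-id <VersionId> --endpoint-url http://our-s3
```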


Any idea what we are doing wrong? 


--

Beste Grüße aus Köln Ehrenfeld

Torsten Ennenbach
Cloud Architect


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph on ARM ?

2020-11-25 Thread Marc Roos
 
How does ARM compare to Xeon in latency and cluster utilization?





-Original Message-; ceph-users
Subject: [ceph-users] Re: Ceph on ARM ?

Indeed it does run very happily on ARM. We have three of the Mars 400 
appliances from Ambedded and they work exceedingly well. 8 micro servers 
per chassis. 1 is a MON server and the rest are OSD (or other services).
Each micro server is running CentOS 7 and ceph so real easy to maintain.
And each chassis is only 105W IIRC.

--
*Kevin Thorpe*

VP of Enterprise Platform



*W* *|* www.predictx.com

*P * *|* +44 (0)20 3005 6750 | +44 (0)808 204 0344
*A* *|* 7th Floor, 52 Grosvenor Gardens, London SW1W 0AU 



  

_

This email and any files transmitted with it are confidential and 
intended solely for the use of the individual or entity to whom they are 
addressed.
If you have received this email in error please notify the system 
manager.
This message contains confidential information and is intended only for 
the individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail. Please notify the sender 
immediately by e-mail if you have received this e-mail by mistake and 
delete this e-mail from your system. If you are not the intended 
recipient you are notified that disclosing, copying, distributing or 
taking any action in reliance on the contents of this information is 
strictly prohibited.


On Tue, 24 Nov 2020 at 16:23,  wrote:

> Adrian;
>
> I've always considered the advantage of ARM to be the reduction in the 

> failure domain.  Instead of one server with 2 processors, and 2 power 
> supplies, in 1 case, running 48 disks, you can do  4 cases containing 
> 8 power supplies, and 32 processors running 32 (or 64...) disks.
>
> The architecture is different with ARM; you pair an ARM SoC up with 
> just one or 2 disks, and you only run the OSD software.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
> -Original Message-
> From: Robert Sander [mailto:r.san...@heinlein-support.de]
> Sent: Tuesday, November 24, 2020 5:56 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Ceph on ARM ?
>
> Am 24.11.20 um 13:12 schrieb Adrian Nicolae:
>
> > Has anyone tested Ceph in such scenario ?  Is the Ceph software 
> > really optimised for the ARM architecture ?
>
> Personally I have not run Ceph on ARM, but there are companies selling 

> such setups:
>
> https://softiron.com/
> https://www.ambedded.com.tw/
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein -- Sitz: Berlin
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
> email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Misleading error (osd has already bound to class) when starting osd on nautilus?

2020-11-25 Thread David Caro

Hi!

I have a Nautilus Ceph cluster, and today I restarted one of the OSD daemons
and spent some time trying to debug an error I was seeing in the log, though it
seems the OSD is actually working.


The error I was seeing is:
```
Nov 25 09:07:43 osd15 systemd[1]: Starting Ceph object storage daemon osd.44...
Nov 25 09:07:43 osd15 systemd[1]: Started Ceph object storage daemon osd.44.
Nov 25 09:07:47 osd15 ceph-osd[12230]: 2020-11-25 09:07:47.846 7f55395fbc80 -1 
osd.44 106947 log_to_monitors {default=true}
Nov 25 09:07:47 osd15 ceph-osd[12230]: 2020-11-25 09:07:47.850 7f55395fbc80 -1 
osd.44 106947 mon_cmd_maybe_osd_create fail: 'osd.44 has already bound to class 
'ssd', can not reset class to 'hdd'; use 'ceph osd crush rm-device-class ' 
to remove old class first': (16) Device or resource busy
```

There are no other messages in the journal, so at first I thought that the OSD
had failed to start.
But it seems to be up and working correctly anyhow.

There's no "hdd" class in my crush map:
```
# ceph osd crush class ls
[
"ssd"
]
```

And that osd is actually of the correct class:
```
# ceph osd crush get-device-class osd.44
ssd
```

```
# uname -a
Linux osd15 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07) x86_64 
GNU/Linux

# ceph --version
ceph version 14.2.5-1-g23e76c7aa6 (23e76c7aa6e15817ffb6741aafbc95ca99f24cbb) 
nautilus (stable)
```

The osd shows up in the cluster and it's receiving load, so there seems to be
no problem, but does anyone know what that error is about?


Thanks!


-- 
David Caro
SRE - Cloud Services
Wikimedia Foundation 
PGP Signature: 7180 83A2 AC8B 314F B4CE  1171 4071 C7E1 D262 69C3

"Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment."


signature.asc
Description: PGP signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: replace osd with Octopus

2020-11-25 Thread Eugen Block

Hi,

assuming you deployed with cephadm since you're mentioning Octopus,
there's a brief section in [1]. The basis for the OSD deployment is
the drive_group configuration. If nothing has changed in your setup
and you replace an OSD, cephadm will detect the available disk and
match it with the drive_group config. If there's enough space on the
SSD too, it will redeploy the OSD.


The same goes for your second case: you'll need to remove all OSDs
from that host, zap the devices, replace the SSD and then cephadm will
deploy the entire host. That's the simple case. If redeploying all
OSDs on that host is not an option, you'll probably have to pause the
orchestrator in order to migrate devices yourself and prevent too much
data movement.


Regards,
Eugen


[1] https://docs.ceph.com/en/latest/mgr/orchestrator/#replace-an-osd
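
For reference, a minimal sketch of that flow with cephadm (OSD id, hostname
and device path below are placeholders; the exact `ceph orch` syntax moved
around a bit between Octopus releases, so check the help on your version):

```
# drain the OSD and keep its id reserved for the replacement disk
# (osd.12, ceph-host1 and /dev/sdd are examples)
ceph orch osd rm 12 --replace

# after swapping the physical disk, wipe it so cephadm can reuse it
ceph orch device zap ceph-host1 /dev/sdd --force

# cephadm then matches the empty disk against the OSD service spec
# (drive_group) and redeploys osd.12 automatically
ceph orch ls osd --export
```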


Zitat von Tony Liu :


Hi,

I did some search about replacing osd, and found some different
steps, probably for different release?
Is there recommended process to replace an osd with Octopus?
Two cases here:
1) replace HDD whose WAL and DB are on a SSD.
1-1) failed disk is replaced by the same model.
1-2) working disk is replaced by bigger one.
2) replace the SSD holding WAL and DB for multiple HDDs.


Thanks!
Tony
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph on ARM ?

2020-11-25 Thread Kevin Thorpe
Sorry, I can't compare. Not used Ceph in anger anywhere else. In our case
we were looking for Kubernetes on-premises storage and several points led
us to the Ambedded solution. I wouldn't expect them to have the sort of
throughput of a full size Xeon server but for our immediate purposes that
is not really an issue.


   - It was a turnkey solution with support. We knew very little about
   either Kubernetes or Ceph and this was the least risk for us.
   - Size and power. We only have single racks in two datacentres, so space
    is a serious consideration. Ceph is very machine hungry, and the
    alternatives like SoftIron ran to at least 7U. We get 24 micro servers in
    3U. Power is only 105 W per unit, with little heat.
   - Cost. These appliances are incredibly inexpensive for the amount of
   storage they provide. Even the smallest offerings from people like Dell/EMC
    were both a lot larger and an astronomical amount more expensive. The
   licencing is purely related to the physical appliances. Hard drives and M.2
   cache can be upgraded. You can even run on a single appliance albeit at
   much reduced resilience and lost capacity. Makes evaluation really
   inexpensive.
   - Open source standard. Anything we learn from running these appliances
   is directly translatable to any Ceph install. Anything we learned on
   Dell/EMC would be yet more lock in to Dell/EMC.

We intend to experiment with Rook in the near future, but our inexperience
with both Kubernetes and Ceph made this option too risky for the initial
stages. If we run Rook properly, we think we will be able to co-locate things
like databases with their OSDs and storage on one server, so that performance
is optimal while we keep the management control of Ceph. But the bulk of our
data storage is exactly that and doesn't require massive performance.

On Wed, 25 Nov 2020 at 09:35, Marc Roos  wrote:

>
> How does ARM compare to Xeon in latency and cluster utilization?
>
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Misleading error (osd has already bound to class) when starting osd on nautilus?

2020-11-25 Thread David Caro

Forwarding here in case anyone is seeing the same/similar issue, Amit gave
really good pointers and a workaround :)


Thanks Amit!
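
For the archives, a sketch of how that hint could be pinned persistently: the
echo alone does not survive a reboot, so something like a udev rule would be
needed (file name and device match below are hypothetical, adjust to your
setup):

```
# /etc/udev/rules.d/99-force-ssd-rotational.rules  (hypothetical file)
# pin the rotational hint to 0 (SSD) for the affected device
ACTION=="add|change", KERNEL=="sdd", ATTR{queue/rotational}="0"
```

followed by `udevadm control --reload-rules` and a retrigger or reboot.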


On 11/25 16:08, Amit Ghadge wrote:
> Yes, and if you want to avoid it in the future, update this flag to 0 with:
> $ echo 0 > /sys/block/sdx/queue/rotational
> 
> Thanks
> 
> On Wed, Nov 25, 2020 at 4:03 PM David Caro  wrote:
> 
> >
> > Yep, you are right:
> >
> > ```
> > # cat /sys/block/sdd/queue/rotational
> > 1
> > ```
> >
> > I was looking to the code too but you got there before me :)
> >
> > https://github.com/ceph/ceph/blob/25ac1528419371686740412616145703810a561f/src/common/blkdev.cc#L222
> >
> >
> > It might be an issue with the driver then reporting the wrong data. I'll
> > look
> > into it.
> >
> > Do you mind if I reply on the list with this info? (or if you want you
> > reply)
> > I think this might help others too (and myself in the future xd)
> >
> > Thanks Amit!
> >
> > On 11/25 15:50, Amit Ghadge wrote:
> > > This might happen when the disk defaults to 1
> > > in /sys/block/sdx/queue/rotational (1 for HDD and 0 for SSD), but we have
> > > not seen any problem from it till now.
> > >
> > > -AmitG
> > >
> > > On Wed, Nov 25, 2020 at 3:08 PM David Caro  wrote:
> > >
> > > >
> > > > Hi!
> > > >
> > > > I have a nautilus ceph cluster, and today I restarted one of the osd
> > > > daemons
> > > > and spend some time trying to debug an error I was seeing in the log,
> > > > though it
> > > > seems the osd is actually working.
> > > >
> > > >
> > > > The error I was seeing is:
> > > > ```
> > > > Nov 25 09:07:43 osd15 systemd[1]: Starting Ceph object storage daemon
> > > > osd.44...
> > > > Nov 25 09:07:43 osd15 systemd[1]: Started Ceph object storage daemon
> > > > osd.44.
> > > > Nov 25 09:07:47 osd15 ceph-osd[12230]: 2020-11-25 09:07:47.846
> > > > 7f55395fbc80 -1 osd.44 106947 log_to_monitors {default=true}
> > > > Nov 25 09:07:47 osd15 ceph-osd[12230]: 2020-11-25 09:07:47.850
> > > > 7f55395fbc80 -1 osd.44 106947 mon_cmd_maybe_osd_create fail: 'osd.44
> > has
> > > > already bound to class 'ssd', can not reset class to 'hdd'; use 'ceph
> > osd
> > > > crush rm-device-class ' to remove old class first': (16) Device or
> > > > resource busy
> > > > ```
> > > >
> > > > There's no other messages in the journal so at first I thought that
> > the osd
> > > > failed to start.
> > > > But it seems to be up and working correctly anyhow.
> > > >
> > > > There's no "hdd" class in my crush map:
> > > > ```
> > > > # ceph osd crush class ls
> > > > [
> > > > "ssd"
> > > > ]
> > > > ```
> > > >
> > > > And that osd is actually of the correct class:
> > > > ```
> > > > # ceph osd crush get-device-class osd.44
> > > > ssd
> > > > ```
> > > >
> > > > ```
> > > > # uname -a
> > > > Linux osd15 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2+deb10u1
> > (2020-06-07)
> > > > x86_64 GNU/Linux
> > > >
> > > > # ceph --version
> > > > ceph version 14.2.5-1-g23e76c7aa6
> > > > (23e76c7aa6e15817ffb6741aafbc95ca99f24cbb) nautilus (stable)
> > > > ```
> > > >
> > > > The osd shows up in the cluster and it's receiving load, so there
> > seems to
> > > > be
> > > > no problem, but does anyone know what that error is about?
> > > >
> > > >
> > > > Thanks!
> > > >
> > > >
> > > > --
> > > > David Caro
> > > > SRE - Cloud Services
> > > > Wikimedia Foundation 
> > > > PGP Signature: 7180 83A2 AC8B 314F B4CE  1171 4071 C7E1 D262 69C3
> > > >
> > > > "Imagine a world in which every single human being can freely share in
> > the
> > > > sum of all knowledge. That's our commitment."
> > > > ___
> > > > ceph-users mailing list -- ceph-users@ceph.io
> > > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > > >
> >
> > --
> > David Caro
> > SRE - Cloud Services
> > Wikimedia Foundation 
> > PGP Signature: 7180 83A2 AC8B 314F B4CE  1171 4071 C7E1 D262 69C3
> >
> > "Imagine a world in which every single human being can freely share in the
> > sum of all knowledge. That's our commitment."
> >

-- 
David Caro
SRE - Cloud Services
Wikimedia Foundation 
PGP Signature: 7180 83A2 AC8B 314F B4CE  1171 4071 C7E1 D262 69C3

"Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment."


signature.asc
Description: PGP signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Documentation of older Ceph version not accessible anymore on docs.ceph.com

2020-11-25 Thread John Zachary Dover
I'll carry this message to the Leadership Team this week and, if
Thanksgiving proves an impediment to addressing it, next week.

Thanks for raising this concern.

Zac Dover
Upstream Docs
Ceph

On Wed, Nov 25, 2020 at 1:43 AM Marc Roos  wrote:

>
>
> 2nd that. Why even remove old documentation before it is migrated to the
> new environment? It should be left online until the migration has
> successfully completed.
>
>
>
> -Original Message-
> Sent: Tuesday, November 24, 2020 4:23 PM
> To: Frank Schilder
> Cc: ceph-users
> Subject: [ceph-users] Re: Documentation of older Ceph version not
> accessible anymore on docs.ceph.com
>
> I want to just echo this sentiment. I thought the lack of older docs
> would be a very temporary issue, but they are still not available. It is
> especially frustrating when half the google searches also return a page
> not found error. The migration has been very badly done.
>
> Sincerely,
>
> On Tue, Nov 24, 2020 at 2:52 AM Frank Schilder  wrote:
>
> > Older versions are available here:
> >
> >
> > https://web.archive.org/web/20191226012841/https://docs.ceph.com/docs/
> > mimic/
> >
> > I'm actually also a bit unhappy about older versions missing. Mimic is
>
> > not end of life and a lot of people still use luminous. Since there
> > are such dramatic differences between interfaces, the old docs should
> > not just disappear.
> >
> > Best regards,
> > =
> > Frank Schilder
> > AIT Risø Campus
> > Bygning 109, rum S14
> >
> > 
> > From: Dan Mick 
> > Sent: 24 November 2020 01:53:29
> > To: Martin Palma
> > Cc: ceph-users
> > Subject: [ceph-users] Re: Documentation of older Ceph version not
> > accessible anymore on docs.ceph.com
> >
> > I don't know the answer to that.
> >
> > On 11/23/2020 6:59 AM, Martin Palma wrote:
> > > Hi Dan,
> > >
> > > yes I noticed but now only "latest", "octopus" and "nautilus" are
> > > offered to be viewed. For older versions I had to go directly to
> > > github.
> > >
> > > Also simply switching the URL from
> > > "https://docs.ceph.com/en/nautilus/"; to
> > > "https://docs.ceph.com/en/luminous/"; will not work any more.
> > >
> > > Is it planned to make the documentation of the older version
> > > available again through doc.ceph.com?
> > >
> > > Best,
> > > Martin
> > >
> > > On Sat, Nov 21, 2020 at 2:11 AM Dan Mick  wrote:
> > >>
> > >> On 11/14/2020 10:56 AM, Martin Palma wrote:
> > >>> Hello,
> > >>>
> > >>> maybe I missed the announcement but why is the documentation of
> > >>> the older ceph version not accessible anymore on docs.ceph.com
> > >>
> > >> It's changed UI because we're hosting them on readthedocs.com now.
>
> > >> See the dropdown in the lower right corner.
> > >>
> > >
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> > email to ceph-users-le...@ceph.io
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> > email to ceph-users-le...@ceph.io
> >
>
>
> --
> Steven Pine
>
> *E * steven.p...@webair.com  |  *P * 516.938.4100 x
> *Webair* | 501 Franklin Avenue Suite 200, Garden City NY, 11530
> webair.com
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph on ARM ?

2020-11-25 Thread Adrian Nicolae

Hi guys,

Thank you all for your input. I will try to get my hands on a few Huawei 
servers for testing before deciding.


The specs are really interesting; it looks like they have some power in
them besides the high core counts (2.6 GHz):


https://en.wikichip.org/wiki/hisilicon/kunpeng/920-6426

https://e.huawei.com/en/products/servers/taishan-server/taishan-5280-v2

I'll let you know how it goes.



On 11/24/2020 6:36 PM, Peter Woodman wrote:
I've been running ceph on a heterogeneous mix of rock64 and rpi4 SBCs. 
i've had to do my own builds, as the upstream ones started off with 
thunked-out checksumming due to (afaict) different arm feature sets 
between upstream's build targets and my SBCs, but other than that one, 
haven't run into any arm-specific issues. i've had to clamp down cache 
size to avoid memory exhaustion on the smaller boards, and ran into 
some corruption due to seemingly a bad interaction between o_direct 
and zram swap on a particular kernel version, but those aren't 
problems unique to ceph on arm.


i should also mention that i'm really very tolerant of latency with 
this cluster, as this is just some homelab garbage running on the 
cheapest slowest spinning disk available, so there's some bounds on 
what i'm asserting.


On Tue, Nov 24, 2020 at 11:24 AM > wrote:


Adrian;

I've always considered the advantage of ARM to be the reduction in
the failure domain.  Instead of one server with 2 processors, and
2 power supplies, in 1 case, running 48 disks, you can do  4 cases
containing 8 power supplies, and 32 processors running 32 (or
64...) disks.

The architecture is different with ARM; you pair an ARM SoC up
with just one or 2 disks, and you only run the OSD software.

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com 


-Original Message-
From: Robert Sander [mailto:r.san...@heinlein-support.de
]
Sent: Tuesday, November 24, 2020 5:56 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] Re: Ceph on ARM ?

Am 24.11.20 um 13:12 schrieb Adrian Nicolae:

>     Has anyone tested Ceph in such scenario ?  Is the Ceph software
> really optimised for the ARM architecture ?

Personally I have not run Ceph on ARM, but there are companies
selling such setups:

https://softiron.com/ 
https://www.ambedded.com.tw/ 

Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de 

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

___
ceph-users mailing list -- ceph-users@ceph.io

To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] KeyError: 'targets' when adding second gateway on ceph-iscsi - BUG

2020-11-25 Thread Hamidreza Hosseini
Hi,
I have installed ceph-iscsi on Ubuntu 20 manually,
but when I try to add the second gateway it shows me an error:

```

OS: Ubuntu 20 LTS
ceph version : octopus
I install cluster with ceph-ansible but I install ceph-iscsi manually via 
following link:
https://docs.ceph.com/en/latest/rbd/iscsi-target-cli/
(instead of 'yum install ceph-iscsi' I ran 'apt install ceph-iscsi')

root@dev13:~# gwcli -v
gwcli - 2.7


```
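
For reference, our /etc/ceph/iscsi-gateway.cfg looks roughly like this (the
values below are examples only); as far as we understand, the file has to be
identical on both gateways and trusted_ip_list must contain every gateway IP:

```
# example values -- adjust to your environment
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
api_user = admin
api_password = admin
api_port = 5000
trusted_ip_list = 192.168.200.23,192.168.200.33
```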

```

/iscsi-target...-igw/gateways> create ceph-gateway1 192.168.200.33
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
KeyError: 'targets'

```
and it drops out of gwcli entirely!
This is my gwcli log:

```
root@dev13:~# cat gwcli.log
2020-11-25 14:24:56,440 DEBUG[ceph.py:32:__init__()] Adding ceph cluster 
'ceph' to the UI
2020-11-25 14:24:57,049 DEBUG[ceph.py:241:populate()] Fetching ceph osd 
information
2020-11-25 14:24:57,086 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:24:57,197 DEBUG[storage.py:105:refresh()] Refreshing disk 
information from the config object
2020-11-25 14:24:57,197 DEBUG[storage.py:108:refresh()] - Scanning will use 
8 scan threads
2020-11-25 14:24:57,254 DEBUG[storage.py:135:refresh()] - rbd image scan 
complete: 0s
2020-11-25 14:24:57,254 DEBUG[gateway.py:378:refresh()] Refreshing gateway 
& client information
2020-11-25 14:24:57,254 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:24:57,294 DEBUG[ceph.py:260:refresh()] Gathering pool stats 
for cluster 'ceph'
2020-11-25 14:25:02,319 DEBUG[ceph.py:32:__init__()] Adding ceph cluster 
'ceph' to the UI
2020-11-25 14:25:03,076 DEBUG[ceph.py:241:populate()] Fetching ceph osd 
information
2020-11-25 14:25:03,168 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:25:03,219 DEBUG[storage.py:105:refresh()] Refreshing disk 
information from the config object
2020-11-25 14:25:03,219 DEBUG[storage.py:108:refresh()] - Scanning will use 
8 scan threads
2020-11-25 14:25:03,273 DEBUG[storage.py:135:refresh()] - rbd image scan 
complete: 0s
2020-11-25 14:25:03,274 DEBUG[gateway.py:378:refresh()] Refreshing gateway 
& client information
2020-11-25 14:25:03,274 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:25:03,314 DEBUG[ceph.py:260:refresh()] Gathering pool stats 
for cluster 'ceph'
2020-11-25 14:25:32,950 DEBUG[ceph.py:32:__init__()] Adding ceph cluster 
'ceph' to the UI
2020-11-25 14:25:33,614 DEBUG[ceph.py:241:populate()] Fetching ceph osd 
information
2020-11-25 14:25:33,652 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:25:33,714 DEBUG[storage.py:105:refresh()] Refreshing disk 
information from the config object
2020-11-25 14:25:33,714 DEBUG[storage.py:108:refresh()] - Scanning will use 
8 scan threads
2020-11-25 14:25:33,811 DEBUG[storage.py:135:refresh()] - rbd image scan 
complete: 0s
2020-11-25 14:25:33,811 DEBUG[gateway.py:378:refresh()] Refreshing gateway 
& client information
2020-11-25 14:25:33,811 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:25:33,864 DEBUG[ceph.py:260:refresh()] Gathering pool stats 
for cluster 'ceph'
2020-11-25 14:26:02,665 DEBUG[gateway.py:174:ui_command_create()] CMD: 
/iscsi create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
2020-11-25 14:26:02,666 DEBUG[gateway.py:185:ui_command_create()] Create an 
iscsi target definition in the UI
2020-11-25 14:26:03,191 INFO [gateway.py:196:ui_command_create()] ok
2020-11-25 14:26:45,455 DEBUG[gateway.py:793:ui_command_create()] CMD: 
../gateways/ create dev13 ['192.168.200.23'] nosync=False skipchecks=false
2020-11-25 14:26:45,467 INFO [gateway.py:836:ui_command_create()] Adding 
gateway, sync'ing 0 disk(s) and 0 client(s)
2020-11-25 14:26:45,949 DEBUG[gateway.py:854:ui_command_create()] Gateway 
creation successful
2020-11-25 14:26:45,949 DEBUG[gateway.py:855:ui_command_create()] Adding gw 
to UI
2020-11-25 14:26:45,968 DEBUG[gateway.py:934:refresh()] - checking 
iSCSI/API ports on dev13
2020-11-25 14:26:45,979 INFO [gateway.py:874:ui_command_create()] ok
2020-11-25 14:28:41,572 DEBUG[ceph.py:32:__init__()] Adding ceph cluster 
'ceph' to the UI
2020-11-25 14:28:42,269 DEBUG[ceph.py:241:populate()] Fetching ceph osd 
information
2020-11-25 14:28:42,297 DEBUG[ceph.py:150:update_state()] Querying ceph for 
state information
2020-11-25 14:28:42,334 DEBUG[storage.py:105:refresh()] Refreshing disk 
information from the config object
2020-11-25 14:28:42,334 DEBUG[storage.py:108:refresh()] - Scanning will use 
8 scan threads
2020-11-25 14:28:42,378 DEBUG[storage.py:135:refresh()] - rbd image scan 
complete: 0s
2020-11-25 14:28:42,378 DEBUG[gateway.py:378:refresh()] Refreshing gateway 
& client information
2020-11-25 14:28:42,449 DEBUG[gateway.py:934:refresh()] - checking 

[ceph-users] uniform and list crush bucket algorithm usage in data centers

2020-11-25 Thread Bobby
Hi all,

For placement purposes Ceph uses the straw2 bucket algorithm by default. I am
curious whether the other bucket algorithms, such as uniform and list, are also
being used in data centers today. Are there any use cases where straw2 is not
used at all?
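
For context, the bucket algorithm is declared per bucket in the decompiled
CRUSH map; a sketch of where to look (host and OSD names are made up):

```
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# illustrative excerpt from crushmap.txt -- each bucket names its algorithm
host example-host {
        id -2
        alg straw2      # could also be uniform, list, tree or straw
        hash 0          # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
```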


BR
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph on ARM ?

2020-11-25 Thread Danny Abukalam
Just to add our perspective here, I’d like to second Darren’s message here that 
Ceph is great on ARM. We’re intensely familiar with this as we’ve been running 
it smoothly for years. We have some very credible customer references that have 
been running our product at scale for a while. I know there are others on this 
list that also run ARM at scale as well.

There’s a strong argument to be made that it’s preferable to do things this way 
- power efficiency (which can have an impact on rack density as well as cost) 
is one of those arguments but it doesn’t tell the whole story - especially when 
you start asking how running storage nodes hot impacts disk MTBF as just one 
example.

While we’d fix them if they existed, we’ve never hit the “crazy bugs” on ARM 
mentioned above - I believe that Ceph’s CI builds against ARM, and we do this 
in our lab as well.

I can’t speak to your specific configuration and silicon, but if you’d like to 
hear more about some of our customer stories I’m sure we can facilitate :)

Danny

> On 24 Nov 2020, at 13:58, Darren Soothill  wrote:
> 
> So yes you can get the servers for a considerably lower price than Intel.
> 
> It's not just about the CPU cost: many ARM servers are based on a SoC that
> includes networking, so the overall cost of the
> motherboard/processor/networking is a lot lower.
> 
> It doesn't reduce the price of the storage or memory though which are a large 
> part of the cost.
> 
> Power consumption on ARM processors is considerably lower than on Intel. 
> Think in the order of 100W per server which adds up over the course of a year.
> 
> ARM has done work to extend ISA-L, so they have hardware-accelerated
> things like erasure coding in the same way that Intel has.
> 
> Having done a lot of testing on ARM, including IO500 work, I can certainly
> say it works. We didn't hit any problems that were ARM-specific, and we
> certainly pushed things performance-wise.
> 
> Yes, you have to consider the clock speeds of the processors, as quite a few
> things in Ceph are single-threaded, but having all the extra cores means you
> can run many processes for things like MDSs and OSDs.
> 
> Darren
> 
> 
> 
> From: Martin Verges 
> Date: Tuesday, 24 November 2020 at 13:09
> To: Robert Sander 
> Cc: ceph-users 
> Subject: [ceph-users] Re: Ceph on ARM ?
> Hello,
> 
>> I'm curious however if the ARM servers are better or not for this use case 
>> (object-storage only).  For example, instead of using 2xSilver/Gold server, 
>> I can use a Taishan 5280 server with 2x Kungpen 920 ARM CPUs with up to 128 
>> cores in total .  So I can have twice as many CPU cores (or even more) per 
>> server comparing with x86.  Probably the price is lower for the ARM servers 
>> as well.
> 
> Even if they would be cheaper, which I strongly doubt, you will get
> less performance out of them. More cores won't give you any benefit in
> Ceph, but having much faster cores is somewhat of a game changer. Just
> use a good AMD Epyc for best price/performance/power ratio.
> 
>> Has anyone tested Ceph in such scenario?  Is the Ceph software really 
>> optimised for the ARM architecture ?  What do you think about this ?
> 
> If you choose ARM for your Ceph, you are one of very, very few people
> and will most probably hit some crazy bugs that will cause trouble. A
> high price to pay in my opinion just for an "imaginary" performance or
> power reduction benefit. Storage has to run 24*7 all year long without
> a single incident. Everything else in my world is unacceptable.
> 
> --
> Martin Verges
> Managing director
> 
> Mobile: +49 174 9335695
> E-Mail: martin.ver...@croit.io
> Chat: https://t.me/MartinVerges
> 
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> 
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
> 
> Am Di., 24. Nov. 2020 um 13:57 Uhr schrieb Robert Sander
> :
>> 
>> Am 24.11.20 um 13:12 schrieb Adrian Nicolae:
>> 
>>>Has anyone tested Ceph in such scenario ?  Is the Ceph software
>>> really optimised for the ARM architecture ?
>> 
>> Personally I have not run Ceph on ARM, but there are companies selling
>> such setups:
>> 
>> https://softiron.com/
>> https://www.ambedded.com.tw/
>> 
>> Regards
>> --
>> Robert Sander
>> Heinlein Support GmbH
>> Schwedter Str. 8/9b, 10119 Berlin
>> 
>> http://www.heinlein-support.de
>> 
>> Tel: 030 / 405051-43
>> Fax: 030 / 405051-19
>> 
>> Zwangsangaben lt. §35a GmbHG:
>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>> Geschäftsführer: Peer Heinlein -- Sitz: Berlin
>> 
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] DocuBetter Meeting Today

2020-11-25 Thread John Zachary Dover
There is a general documentation meeting called the "DocuBetter Meeting",
and it is held every two weeks. The next DocuBetter Meeting will be on Nov
26, 2020 at 0200 UTC, and will run for thirty minutes. Everyone with a
documentation-related request or complaint is invited. The meeting will be
held here: https://bluejeans.com/908675367

(Since this particular instance of this meeting will be held on the
Wednesday before United States Thanksgiving, I expect a light turnout.)

Send documentation-related requests and complaints to me by replying to
this email and CCing me at zac.do...@gmail.com.

The next DocuBetter meeting is scheduled for:

26 Nov 2020  0200 UTC



Etherpad: https://pad.ceph.com/p/Ceph_Documentation
Meeting: https://bluejeans.com/908675367

Thanks, everyone.

Zac Dover
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Planning: Ceph User Survey 2020

2020-11-25 Thread Yuval Lifshitz
Hi Mike,
Could we add more questions on RGW use cases and functionality adoption?

For instance:

bucket notifications:
* do you use "bucket notifications"?
* if so, which endpoint do you use: kafka, amqp, http?
* which other endpoints would you like to see there?

sync modules:
* do you use the cloud sync module? if so, with which cloud provider?
* do you use an archive zone?
* do you use the elasticsearch module?

multisite:
* do you have more than one realm in your setup? if so, how many?
* do you have more than one zone group in your setup?
* do you have more than one zone in your setup? if so, how many in the
largest zone group?
* is the syncing policy between zones global or per bucket?

On Tue, Nov 24, 2020 at 8:06 PM Mike Perez  wrote:

> Hi everyone,
>
> The Ceph User Survey 2020 is being planned by our working group. Please
> review the draft survey pdf, and let's discuss any changes. You may also
> join us in the next meeting *on November 25th at 12pm *PT
>
> https://tracker.ceph.com/projects/ceph/wiki/User_Survey_Working_Group
>
>
> https://tracker.ceph.com/attachments/download/5260/Ceph%20User%20Survey%202020.pdf
>
> We're aiming to have something ready by mid-December.
>
> --
>
> Mike Perez
>
> he/him
>
> Ceph / Rook / RDO / Gluster Community Architect
>
> Open-Source Program Office (OSPO)
>
>
> M: +1-951-572-2633
>
> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
> @Thingee   Thingee
>  
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Planning: Ceph User Survey 2020

2020-11-25 Thread Alexandre Marangone
Hi Mike,

For some of the multiple answer questions like "Which resources do you
check when you need help?" could these be ranked answers instead? It
would allow to see which resources are more useful to the community

On Tue, Nov 24, 2020 at 10:06 AM Mike Perez  wrote:
>
> Hi everyone,
>
> The Ceph User Survey 2020 is being planned by our working group. Please
> review the draft survey pdf, and let's discuss any changes. You may also
> join us in the next meeting *on November 25th at 12pm *PT
>
> https://tracker.ceph.com/projects/ceph/wiki/User_Survey_Working_Group
>
> https://tracker.ceph.com/attachments/download/5260/Ceph%20User%20Survey%202020.pdf
>
> We're aiming to have something ready by mid-December.
>
> --
>
> Mike Perez
>
> he/him
>
> Ceph / Rook / RDO / Gluster Community Architect
>
> Open-Source Program Office (OSPO)
>
>
> M: +1-951-572-2633
>
> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
> @Thingee   Thingee
>  
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Misleading error (osd has already bound to class) when starting osd on nautilus?

2020-11-25 Thread Anthony D'Atri

This was my first thought too.  Is it just this one drive, all drives on this 
host, or all drives in the cluster?

I’m curious if stupid HBA tricks are afoot, if this is a SAS / SATA drive.  
Especially if it’s a RAID-capable HBA vs passthrough.


>>> It might be an issue with the driver then reporting the wrong data. I'll
>>> look into it.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: replace osd with Octopus

2020-11-25 Thread Tony Liu
Thank you Eugen for pointing it out. Yes, the OSDs are deployed by
cephadm with a drive_group. It seems the orch module simplifies the
process considerably for users.

When replacing an OSD, there will be no PG remapping, and backfill
will restore the data onto the new disk, right?

In the case of rebuilding a host with multiple OSDs, e.g. when the WAL/DB
SSD needs to be replaced, I see two options.
1) Keep the cluster in a degraded state and rebuild all OSDs.
2) Mark those OSDs out so PGs are rebalanced, rebuild the OSDs, and
   bring them back in to rebalance PGs again.
The key here is how much time backfilling and rebalancing will take.
The intention is not to keep the cluster in a degraded state for too long.
I assume they are similar, because either way the same amount of data
has to be copied?
If that's true, then option #2 is pointless.
Could anyone share such experiences, e.g. how long it takes to recover
how much data on what kind of networking/computing environment?
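
For context, this is roughly how I plan to throttle and watch the recovery in
either case (a sketch; option defaults differ between releases):

```
# keep the affected OSDs from being marked out while working on the host
ceph osd set noout

# throttle backfill so client I/O is not starved
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# watch progress and the amount of degraded/misplaced data
ceph -s
ceph pg stat
```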


Thanks!
Tony
> -Original Message-
> From: Eugen Block 
> Sent: Wednesday, November 25, 2020 1:49 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: replace osd with Octopus
> 
> Hi,
> 
> assuming you deployed with cephadm since you're mentioning Octopus
> there's a brief section in [1]. The basis for the OSD deployment is the
> drive_group configuration. If nothing has changed in your setup and you
> replace an OSD cephadm will detect the available disk and match it with
> the drive_group config. If there's enough space on the SSD too, it will
> redeploy the OSD.
> 
> The same goes for your second case: you'll need to remove all OSDs from
> that host, zap the devices, replace the SSD and then cephadm will deploy
> the entire host. That's the simple case. If redeploying all OSDs on that
> host is not an option you'll probably have to pause the orchestrator in
> order to migrate devices yourself and prevent too much data movement.
> 
> Regards,
> Eugen
> 
> 
> [1] https://docs.ceph.com/en/latest/mgr/orchestrator/#replace-an-osd
> 
> 
> Zitat von Tony Liu :
> 
> > Hi,
> >
> > I did some search about replacing osd, and found some different steps,
> > probably for different release?
> > Is there recommended process to replace an osd with Octopus?
> > Two cases here:
> > 1) replace HDD whose WAL and DB are on a SSD.
> > 1-1) failed disk is replaced by the same model.
> > 1-2) working disk is replaced by bigger one.
> > 2) replace the SSD holding WAL and DB for multiple HDDs.
> >
> >
> > Thanks!
> > Tony
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> > email to ceph-users-le...@ceph.io
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] high memory usage in osd_pglog

2020-11-25 Thread Robert Brooks
We are seeing very high osd_pglog usage in mempools for ceph osds. For
example...

"mempool": {
"bloom_filter_bytes": 0,
"bloom_filter_items": 0,
"bluestore_alloc_bytes": 41857200,
"bluestore_alloc_items": 523215,
"bluestore_cache_data_bytes": 50876416,
"bluestore_cache_data_items": 1326,
"bluestore_cache_onode_bytes": 6814080,
"bluestore_cache_onode_items": 13104,
"bluestore_cache_other_bytes": 57793850,
"bluestore_cache_other_items": 2599669,
"bluestore_fsck_bytes": 0,
"bluestore_fsck_items": 0,
"bluestore_txc_bytes": 29904,
"bluestore_txc_items": 42,
"bluestore_writing_deferred_bytes": 733191,
"bluestore_writing_deferred_items": 96,
"bluestore_writing_bytes": 0,
"bluestore_writing_items": 0,
"bluefs_bytes": 101400,
"bluefs_items": 1885,
"buffer_anon_bytes": 21505818,
"buffer_anon_items": 14949,
"buffer_meta_bytes": 1161512,
"buffer_meta_items": 13199,
"osd_bytes": 1962920,
"osd_items": 167,
"osd_mapbl_bytes": 825079,
"osd_mapbl_items": 17,
"osd_pglog_bytes": 14099381936,
"osd_pglog_items": 134285429,
"osdmap_bytes": 734616,
"osdmap_items": 26508,
"osdmap_mapping_bytes": 0,
"osdmap_mapping_items": 0,
"pgmap_bytes": 0,
"pgmap_items": 0,
"mds_co_bytes": 0,
"mds_co_items": 0,
"unittest_1_bytes": 0,
"unittest_1_items": 0,
"unittest_2_bytes": 0,
"unittest_2_items": 0
},

Roughly 14 GB is taken up by pg_logs here. The cluster has 106 OSDs and 2432
placement groups.

The pg log counts per placement group are far lower than the 134285429
osd_pglog_items reported above.

Top counts are...

1486 1.41c
883 7.3
834 7.f
683 7.13
669 7.a
623 7.5
565 7.8
560 7.1c
546 7.16
544 7.19

Summing these gives 21594 pg logs.

Overall the performance of the cluster is poor, OSD memory usage is high
(20-30G resident), and with a moderate workload we are seeing iowait on OSD
hosts. The memory allocated to caches appears to be low, I believe because
osd_pglog is taking most of the available memory.
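
In case it helps, these are the knobs and checks I'd look at (a sketch; option
names as in Nautilus, please double-check on your version):

```
# per-daemon mempool snapshot (same numbers as quoted above)
ceph daemon osd.0 dump_mempools

# how many log entries each PG is allowed to keep, and dup entries tracked
ceph daemon osd.0 config get osd_min_pg_log_entries
ceph daemon osd.0 config get osd_max_pg_log_entries
ceph daemon osd.0 config get osd_pg_log_dups_tracked

# per-PG log sizes cluster-wide: see the LOG column
ceph pg dump pgs

# offline trim of a single oversized PG log, with the OSD stopped
# (7.3 taken from the list above as an example)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 7.3 --op trim-pg-log
```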

Regards,

Rob

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: November Ceph Science User Group Virtual Meeting

2020-11-25 Thread Mike Perez
Here's the recording in case you missed it:

https://www.youtube.com/watch?v=WXtptiLtTCg

On Fri, Nov 20, 2020 at 5:44 AM Kevin Hrpcek 
wrote:

> Hey all,
>
> We will be having a Ceph science/research/big cluster call on Wednesday
> November 25th. If anyone wants to discuss something specific they can
> add it to the pad linked below. If you have questions or comments you
> can contact me.
>
> This is an informal open call of community members mostly from
> hpc/htc/research environments where we discuss whatever is on our minds
> regarding ceph. Updates, outages, features, maintenance, etc...there is
> no set presenter but I do attempt to keep the conversation lively.
>
> https://pad.ceph.com/p/Ceph_Science_User_Group_20201125
> 
> 
> We try to keep it to an hour or less.
>
> Ceph calendar event details:
>
> November 25th, 2020
> 15:00 UTC
> 4pm Central European
> 9am Central US
>
> Description: Main pad for discussions:
> https://pad.ceph.com/p/Ceph_Science_User_Group_Index
> Meetings will be recorded and posted to the Ceph Youtube channel.
> To join the meeting on a computer or mobile phone:
> https://bluejeans.com/908675367?src=calendarLink
> To join from a Red Hat Deskphone or Softphone, dial: 84336.
> Connecting directly from a room system?
>  1.) Dial: 199.48.152.152 or bjn.vc
>  2.) Enter Meeting ID: 908675367
> Just want to dial in on your phone?
>  1.) Dial one of the following numbers: 408-915-6466 (US)
>  See all numbers: https://www.redhat.com/en/conference-numbers
>  2.) Enter Meeting ID: 908675367
>  3.) Press #
> Want to test your video connection? https://bluejeans.com/111
>
>
> Kevin
>
> --
> Kevin Hrpcek
> NASA VIIRS Atmosphere SIPS
> Space Science & Engineering Center
> University of Wisconsin-Madison
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 

Mike Perez

he/him

Ceph / Rook / RDO / Gluster Community Architect

Open-Source Program Office (OSPO)


M: +1-951-572-2633

494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
@Thingee   Thingee
 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Public Swift yielding errors since 14.2.12

2020-11-25 Thread Jukka Nousiainen
Hi all,

In reference to:

https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Y2KTC7RXQYWRA54PVBAMEXSNNBRZUXP7/

We are seeing similar behavior with public Swift bucket access being broken.

In this case RadosGW Nautilus integrated to OpenStack Queens Keystone.

Public Swift containers have worked fine from Luminous era up to Nautilus
14.2.11, and started to break when upgrading RadosGW to 14.2.12 or newer.

Unsure if this is related to the backport of "rgw: Swift API anonymous access
should 401" (pr#37438), or some other rgw change within 14.2.12.

I believe the following ceph.conf we use is relevant:

rgw_swift_account_in_url = true
rgw_keystone_implicit_tenants = false

As well as the configured endpoint format:

https://fqdn:443/swift/v1/AUTH_%(tenant_id)s

Steps to reproduce:

Horizon:


1) Public container access

- Create a container with "Container Access" set to Public
- Click on the Horizon provided Link which is of the format 
https://fqdn/swift/v1/AUTH_projectUUID/public-test-container/

Expected result: Empty bucket listing
Actual result: "AccessDenied"

2) Public object access

- Upload an object to the public container
- Try to access the object via unauthenticated browser session

Expected result: Object downloaded or loaded into browser
Actual result: "NoSuchBucket"

Also getting similar behavior with Swift CLI tools (ACL '.r:*') from what I
can see.
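
For completeness, the CLI-side reproduction looks roughly like this (container
and object names are just examples, and the AUTH_ project id is elided):

```
# mark the container public via a read ACL for anonymous users
# (public-test-container and testfile are example names)
swift post -r '.r:*,.rlistings' public-test-container
swift upload public-test-container testfile

# unauthenticated requests, no X-Auth-Token header
curl -i https://fqdn/swift/v1/AUTH_<project-id>/public-test-container/
curl -i https://fqdn/swift/v1/AUTH_<project-id>/public-test-container/testfile
```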

Any suggestions how to troubleshoot further?

Happy to provide more debug log and configuration details if need be, as well
as pointers if something might be actually wrong in our configuration.



Also, apologies for the possible double post - I tried to first submit via the
hyperkitty web form but that post seems to have gone into a black hole.


BR,
Jukka
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io