[ceph-users] Re: How to verify the use of wire encryption?

2022-08-18 Thread Ilya Dryomov
On Tue, Aug 16, 2022 at 12:44 PM Martin Traxl  wrote:
>
> Hi,
>
> I am running a Ceph 16.2.9 cluster with wire encryption. From my ceph.conf:
> _
>   ms client mode = secure
>   ms cluster mode = secure
>   ms mon client mode = secure
>   ms mon cluster mode = secure
>   ms mon service mode = secure
>   ms service mode = secure
> _
>
> My cluster is running both messenger v1 and messenger v2 listening on the 
> default ports 6789 and 3300. Now I have Nautilus clients (krbd) mounting 
> rados block devices from this cluster.

Hi Martin,

For obscure backwards compatibility reasons, the kernel client defaults
to messenger v1.  You would need to specify "ms_mode=secure" option when
mapping your block devices to enable messenger v2 secure mode [1].
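For example (pool/image names here are placeholders; an already-mapped device
has to be unmapped and mapped again for the option to take effect):

  rbd device map mypool/myimage -o ms_mode=secure

The same ms_mode option applies to kernel CephFS mounts.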

> When looking at the current sessions (ceph daemon  sessions) for my 
> rbd clients I see something like this:
> _
> {
> "name": "client.*",
> "entity_name": "client.fe-*",
> "addrs": {
> "addrvec": [
> {
> "type": "v1",
> "addr": "10.238.194.4:0",
> "nonce": 2819469832
> }
> ]
> },
> "socket_addr": {
> "type": "v1",
> "addr": "10.238.194.4:0",
> "nonce": 2819469832
> },
> "con_type": "client",
> "con_features": 3387146417253690110,
> "con_features_hex": "2f018fb87aa4aafe",
> "con_features_release": "luminous",
> "open": true,
> "caps": {
> "text": "profile rbd"
> },
> "authenticated": true,
> "global_id": 256359885,
> "global_id_status": "reclaim_ok",
> "osd_epoch": 13120,
> "remote_host": ""
> },
> _
>
> As I understand, "type": "v1" means messenger v1 is used and therefore no 
> secure wire encryption, which comes with messenger v2. Is this understanding 
> correct? How can I enable wire encryption here? Nautilus should be able to use 
> msgr2. In general, how can I verify whether a client is using wire encryption or not?

Your understanding is correct.  Your ceph.conf options plus the
"ms_mode=secure" option for the kernel client (whether krbd or kcephfs)
are all that is needed.  Note that mainline kernel 5.11 or CentOS 8.4
is required.

As for the verification, you would need to either check monitor and
OSD logs or resort to wireshark/tcpdump.  There is a proposed change
from Radek to make this more ergonomic [2], but it is not merged yet.
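In the meantime, a rough check that at least confirms msgr2 is in use (this
only distinguishes v1 from v2, not crc from secure mode; adjust the mon ID
and host to your environment):

  # on a monitor: "type": "v2" entries mean the client connected over msgr2
  ceph daemon mon.<id> sessions | grep '"type"'

  # on the client: 3300 is the msgr2 monitor port, 6789 is the legacy v1 port
  ss -tnp | grep -E ':3300|:6789'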

[1] https://docs.ceph.com/en/nautilus/man/8/rbd/#kernel-rbd-krbd-options
[2] https://github.com/ceph/ceph/pull/43791

Thanks,

Ilya


[ceph-users] Re: Issue adding host with cephadm - nothing is deployed

2022-08-18 Thread Adam King
If you try shuffling a daemon around on some of the working hosts (e.g.
changing the placement of the node-exporter spec so that one of the working
hosts is excluded and the node-exporter there should be removed), is
cephadm able to actually complete that? Also, does device info for any or
all of these hosts show up in `ceph orch device ls`? I know there's been an
issue people have run into occasionally where ceph-volume inventory (which
cephadm uses to gather device info) was hanging on one of the hosts and it
was causing things to get stuck until the host was removed or whatever
caused the hang was fixed. There could also be something interesting/useful
in the output of `ceph log last 200 cephadm`, `ceph orch host ls` and `ceph
orch ls --format yaml`. Some traceback or even just seeing what the last
thing logged was could be useful.
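For reference, the checks mentioned above collected in one place (run from a
node with an admin keyring):

  ceph orch device ls
  ceph log last 200 cephadm
  ceph orch host ls
  ceph orch ls --format yaml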

On Thu, Aug 18, 2022 at 9:31 AM  wrote:

> Hi again all,
>
> I have a new issue with ceph/cephadm/quincy and hopefully someone can
> assist.
>
> I have a cluster of four hosts, that I (finally) managed to bootstrap. I'm
> now trying to add several additional hosts. Whether I add the hosts from
> the dashboard or CLI, I get the same result - the host is added but no
> services are deployed.
>
> I have tried
>
>   *   confirming that the ssh connection works
>   *   enabled debug logging of cephadm and I'm watching the output
>   *   when adding the host, cephadm logs in to the host and runs check-host
>  *   the debug output shows an error on podman 3.0.1 but it's the same
> version as on all the working hosts
>  *   it confirms systemctl, lvcreate, chrony.service, hostname and
> concludes Host looks OK
>  *   it outputs Added host
>   *   The host is visible in the list of hosts in the dashboard and ceph
> orch host ls, but no services are deployed so there is no data under model,
> CPUs etc
>   *   I have tried to edit and save the node-exporter and crash services
> (both have * for placement)
>   *   I have tried to redeploy the node-exporter service, they just get
> redeployed to the existing four hosts
>   *   I have tried ceph orch pause and then ceph orch resume
>   *   I have tried to put the host in maintenance mode and exit the
> maintenance mode
>   *   I tried to restart all the mons and mgrs - when restarting the
> active mgr, cephadm finally did the inventory of the host I added. When I
> added the additional hosts, nothing happened until I restarted the mgr
> again. The service size for crash and node-exporter increased to 8, but
> still no services are deployed and the running number remains at 4.
>
> I could see no error anywhere (except the note about podman 3.0.1) and I
> didn't have this problem when adding the first three hosts after the
> bootstrap.
>
> Ideas?
>
> Thanks
>


[ceph-users] Re: Issue adding host with cephadm - nothing is deployed

2022-08-18 Thread Adam King
Okay, the fact that the removal is also not working means the idea of it
being "stuck" in some way is likely correct. The most likely culprit in
these scenarios in the past, as mentioned previously, is a hanging
ceph-volume command. Maybe go to each of these new hosts and run
something like `ps aux | grep ceph-volume` to see if there are any processes
that have been around a while, especially any that are in
D state. If you see something like that, it means there's probably something
going on with the devices or mount points on that host that is
causing ceph-volume to hang when checking them. Often a reboot fixes it in
this case.
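A quick way to check for that (the [c] in the grep pattern just keeps grep
from matching itself):

  # STAT containing "D" = uninterruptible sleep, usually stuck on I/O
  ps -eo pid,stat,etime,cmd | grep '[c]eph-volume'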

On Thu, Aug 18, 2022 at 11:12 AM  wrote:

> Hi Adam,
>
> Thanks for the ideas! I tried all the things you mentioned but in short -
> cephadm isn't removing services either, but everything else works.
>
> I changed the node-exporter specification (on the dashboard) to only be
> placed on three of the old, working hosts. I see in the cephadm logs:
> [INF] Saving service node-exporter spec with placement host01, host02,
> host03
> [DBG] _kick_serve_loop
>
> But nothing else happens. I restarted the active mgr and I can see the
> hosts being re-inventoried, then again nothing. The node-exporter size is
> 3, and running remains 4.
>
> For the other checks
>
>   *   ceph orch host ls - all 8 hosts show
>   *   ceph orch device ls - all disks on all hosts show, including the
> four new hosts.
>   *   I ran cephadm shell -- ceph-volume inventory on all four new hosts,
> no errors or hangings
>   *   ceph orch ls --format yaml -> matches the dashboard view (i.e. shows
> node-exporter with placement on hosts 01, 02, 03 with size: 3 and running:
> 3)
>
> I'm running ceph -W cephadm with log_to_cluster_level set to debug, but
> except for the walls of text with the inventories, nothing (except
> _kick_service_loop) shows up in the logs after the INF level messages that
> host has been added or service specification has been saved.
>
> Best

[ceph-users] Looking for Companies who are using Ceph as EBS alternative

2022-08-18 Thread Abhishek Maloo
Hey Folks,
 I have recently joined the Ceph user group. I work for Twitter in their
Storage Infrastructure group. We run our infrastructure on-prem. We are
looking at credible alternatives to AWS EBS(Elastic Block Storage) on-prem.
We want to run our OLTP databases with remotely mounted drives. My research
led me to this project - Ceph.

I am looking for advice on your experiences of running Ceph as a block
device / POSIX provider. Do we have companies in this community who are
using Ceph as an AWS EBS (Elastic Block Storage) replacement? I would deeply
appreciate any contact with such people.

Thanks in advance,
Abhishek Maloo,
TL, Realtime Storage
Twitter Inc.


[ceph-users] Request for Info: What has been your experience with bluestore_compression_mode?

2022-08-18 Thread Laura Flores
Hi everyone,

We sent an earlier inquiry on this topic asking how many people are using
bluestore_compression_mode, but now, we would like to know about users'
experience in a more general sense. *Do you currently have
bluestore_compression_mode enabled? Have you tried enabling it in the past?
Have you chosen not to enable it? We would like to know about your
experience!*

The purpose of this inquiry is that we are trying to get a sense of
people's experiences -- positive, negative, or anything in between -- with
bluestore_compression_mode or the per-pool compression_mode options (these
were introduced early in bluestore's life but, as far as we know, may not be
widely used).  We might be able to reduce complexity in bluestore's blob
code if we could do compression in some other fashion, so we are trying to
get a sense of whether or not it's something worth looking into more.
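For anyone wanting to check where they stand before replying, the settings in
question can be inspected and changed like this (pool name is a placeholder):

  # cluster-wide default for all OSDs
  ceph config get osd bluestore_compression_mode

  # per-pool overrides
  ceph osd pool get mypool compression_mode
  ceph osd pool set mypool compression_mode aggressive
  ceph osd pool set mypool compression_algorithm snappy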

It would help immensely to know:

   1. What has been *your* experience with bluestore_compression_mode?
   Whether you currently have it enabled, tried enabling it in the past, or
   have chosen not to enable it, we would like to know about your experience.
   2. If you currently have it enabled (or had it enabled in the past),
   what is/was your use case?


Thanks,
Laura Flores

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage

Red Hat Inc. 

La Grange Park, IL

lflo...@redhat.com
M: +17087388804




[ceph-users] cephfs and samba

2022-08-18 Thread Stolte, Felix
Hello there,

Is anybody sharing their Ceph filesystem via Samba to Windows clients and
willing to share their experience, as well as settings in smb.conf and
ceph.conf which have performance impacts?

We have been running this setup for years now, but I think there is still room
for improvement and to learn from fellow Ceph users.

Our current setup involves a Pacific Ceph cluster with CephFS kernel mounts on
two Samba servers (HA via ctdb). We tried using the vfs module, but it didn't
fit our performance needs. We have around 3, Windows clients accessing the
filesystem. Current use is about 600 TB of data and 300 million objects in the
data pool.
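For context, stripped of all tuning, the setup boils down to something like
this minimal sketch (paths and share names are invented; the
performance-relevant smb.conf and ceph.conf settings are exactly what this
mail is asking about):

  # smb.conf on both samba servers; CephFS is kernel-mounted at /mnt/cephfs
  [global]
     clustering = yes            # ctdb provides the HA mentioned above

  [projects]
     path = /mnt/cephfs/projects
     read only = no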

best regards
Felix

-
Forschungszentrum Juelich GmbH
52425 Juelich
Registered office: Juelich
Registered in the Commercial Register of the Local Court of Dueren, No. HR B 3498
Chairman of the Supervisory Board: MinDir Volker Rieke
Management Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman),
Karsten Beneke (Deputy Chairman), Prof. Dr.-Ing. Harald Bolt,
Dr. Astrid Lambrecht, Prof. Dr. Frauke Melchior
-



[ceph-users] Re: Request for Info: What has been your experience with bluestore_compression_mode?

2022-08-18 Thread Richard Bade
Hi Laura,
We have used pool compression in the past and found it to work well.
We had it on a 4/2 EC pool and found data ended up near 1:1 pool:raw.
We were storing backup data in this cephfs pool; however, we changed
the backup product, and as the data is now encrypted at rest by the
application, the bluestore compression is unnecessary overhead for zero
gain, so we are no longer using it.
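For anyone wanting to gauge the same thing on their own cluster, the per-pool
compression savings show up in the USED COMPR / UNDER COMPR columns of:

  ceph df detail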

Regards,
Rich



[ceph-users] Re: Request for Info: bluestore_compression_mode?

2022-08-18 Thread Mark Nelson

Hi Frank,


Thank you for the incredibly detailed reply!  Will respond inline.


On 8/17/22 7:06 AM, Frank Schilder wrote:

> Hi Mark,
>
> please find below a detailed report with data and observations from our
> production system. The ceph version is mimic-latest and some ways of
> configuring compression or interpreting settings may or may not have
> changed already. As far as I know it's still pretty much the same.


Ok, this is good to know.  While the compression settings haven't 
changed much, there definitely have been changes regarding min_alloc 
size and allocator code.





> First thing I would like to mention is that you will not be able to get rid
> of osd blob compression. This is the only way to preserve reasonable
> performance on applications like ceph FS and RBD. If you are thinking about
> application-level full file- or object compression, this would probably be
> acceptable for upload-/download-only applications. For anything else,
> something like this would degrade performance to unacceptable levels.


I'm curious how much we actually gain by compressing at the blob level 
vs object level in practice.  Obviously it could mean a lot more work 
when doing small IOs, but I'm curious how much latency it actually adds 
when only compressing the blob vs the whole object.  Also for something 
like RBD I wonder if a simplified blob structure with smaller 
(compressed) objects might be a bigger win?  CephFS is a good point 
though.  Right now there is likely some advantage by compressing at the 
blob level, especially if we are talking about small writes to huge objects.





> Second thing is that the current bluestore compression could be much more
> effective if the problem of small objects could be addressed. This might
> happen on application level, for example, implementing tail-merging for
> ceph fs. This would come with dramatic improvements, because the largest
> amount of over-allocation does not come from uncompressed but from many
> small objects (even if compressed). I mentioned this already in our earlier
> conversation and I will include you in a new thread specifically about my
> observations in this direction.


Indeed.  No argument on the impact here.




> As one of the important indicators of ineffective compression and huge
> over-allocation due to small objects on ceph-fs you asked for, please see
> the output of ceph df below. The pool con-fs2-meta2 is the primary data
> pool of an FS where the root of the file system is assigned to another data
> pool con-fs2-data2 - the so-called 3-pool FS layout.
>
> As you can see, con-fs2-meta2 contains 50% of all FS objects, yet they are
> all of size 0. One could say "perfectly compressed" but they all require a
> min_alloc_size*replication_factor allocation on disk (in our case, 16K*4 on
> the meta2-pool and 64K*11 on the data2-pool!). Together with having hundreds
> of millions of small files on the file system, which also require such a
> minimum allocation each, a huge waste of raw capacity results. I'm just
> lucky I don't have the con-fs2-meta2 in the main pool. It's also a huge pain
> for recovery.


I take it those size-0 objects are just metadata?  It's pretty 
unfortunate if we end up allocating min_alloc just for the header/footer 
on all EC shards.  At least in more modern versions of ceph the 
min_alloc size is 4K in all cases, so this gets better but doesn't 
totally go away.
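To make the over-allocation concrete, a back-of-the-envelope sketch using the
numbers quoted above (per tiny object, taking the allocation claim at face
value and ignoring onode/RocksDB overhead):

  16 KiB min_alloc x  4 replicas =  64 KiB raw  (the meta2-pool case)
  64 KiB min_alloc x 11 shards   = 704 KiB raw  (the data2-pool case)
   4 KiB min_alloc x  4 replicas =  16 KiB raw  (modern default)

So even at a 4K min_alloc size, 100 million tiny objects would still pin
roughly 1.5 TiB of raw space before storing any payload.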





> I'm pretty sure the same holds for RGW with small objects. The only
> application that does not have this problem is RBD with its fixed uniform
> object size.
>
> This will not change with application-level compression. This requires
> merging small objects into large ones. I consider this to be currently the
> major factor for excess raw usage, and any improvement of some percent with
> compression will have only very small effects on a global scale. Looking at
> the stat numbers below from a real-life HPC system, you can simulate how
> much one could at best get out of more/better compression.


Sounds like you have a lot of small objects?  Back when EC was 
implemented I recall we were really thinking about it in terms of large 
object use cases (and primarily for RGW).  Over time we've gotten a lot 
more interest from people wanting to use EC with CephFS and RBD, and 
also with smaller object sizes.  It's definitely a lot trickier getting 
those right imho.





> For example, on our main bulk data pool, compressed allocated is only 13%.
> Even if compression could compress this to size 0, the overall gain would at
> best be 13%. On the other hand, the actual compression rate of 2.9 is
> actually quite good. If *all* data was merged into blobs of a minimum size
> that allowed saving this amount of allocation by compression, one could
> improve storage capacity by a factor of about 2.5 (250%!) with the current
> implementation of compression.

> Consequently, my personal opinion is that it is not interesting to spend
> much time on better compression if the small-object min_allocation problem
> is not addressed.

[ceph-users] Questions about the QA process and the data format of both OSD and MON

2022-08-18 Thread Satoru Takeuchi
Hi,

As I described in another mail (*1), my development Ceph cluster was
corrupted when using a problematic binary.
When I upgraded to v16.2.7 + some patches (*2) + the PR#45963 patch,
unfound PGs and inconsistent PGs appeared. In the end, I deleted this cluster.

  pacific: bluestore: set upper and lower bounds on rocksdb omap iterators
  https://github.com/ceph/ceph/pull/45963

This problem happened because PR#45963 causes data corruption on OSDs
that were created in Octopus or older.

This patch was reverted, and the correct version (PR#46096) was applied later.

  pacific: revival and backport of fix for RocksDB optimized iterators
  https://github.com/ceph/ceph/pull/46096

It's mainly because I applied the not-well-tested patch carelessly. To
prevent the same mistake from happening again, let me ask some questions.

a. About the QA process
   a.1 In my understanding, the test cases differ between the QA for merging
       a PR and the QA for a release. For example, the upgrade test was run
       only in the release QA process. Is my understanding correct?
       I thought so because the bug in PR#45963 was not detected in the QA
       for merging but was detected in the QA for the release.
   a.2 If a.1 is correct, is it possible to run all test cases in both QA
       runs? I guess that some time-consuming tests are skipped to keep
       development efficient.
   a.3 Is there any detailed document about how to run Teuthology in the
       user's local environment? Once I tried this by reading the official
       document, it didn't work well.

       https://docs.ceph.com/en/quincy/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro/#how-to-run-integration-tests

       At that time, Teuthology failed to connect to
       paddles.front.sepia.ceph.com, which wasn't mentioned in this document.

 ```
 requests.exceptions.ConnectionError:
HTTPConnectionPool(host='paddles.front.sepia.ceph.com', port=80): Max
retries exceeded with url: /nodes/?machine_type=vps&count=1 (Caused by
NewConnectionError(': Failed to establish a new connection: [Errno 110]
Connection timed out'))
 ```
b. To minimize the risk, I'd like to use the newest data format for both
   OSDs and MONs as much as possible.
   More precisely, I'd like to re-create all OSDs and MONs whose default
   data format has changed.
   Please let me know if there is a convenient way to know the data format
   of each OSD and MON.

   As an example, when I re-created some OSDs created in Octopus or older
   in my Pacific cluster, I assumed that OSDs older than the
   upgrade-to-Pacific date were created in Octopus or older.
   It seemed to work, but a more straightforward way would be better.
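Not a full answer to b., but two places that may help depending on the
release (a sketch; the exact fields reported vary between versions):

  # per-OSD metadata; look for objectstore- and allocation-related keys
  ceph osd metadata 0 | grep -i -e objectstore -e alloc -e created

  # the monmap carries a min_mon_release field tracking the mon feature level
  ceph mon dump | grep min_mon_release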

*1) 
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/message/TT6ZQ5LUS54ZK4NNXSDJIOBS5A2ZFAGT/
*2) PR#43581, 44413, 45502, 45654, these patches don't relate to the
topic of this mail

Best,
Satoru


[ceph-users] Re: Looking for Companies who are using Ceph as EBS alternative

2022-08-18 Thread Linh Vu
I've used RBD for Openstack clouds from small to large scale since 2015.
Been through many upgrades and done many stupid things, and it's still rock
solid. It's the most reliable part of Ceph, I'd say.



[ceph-users] Re: Looking for Companies who are using Ceph as EBS alternative

2022-08-18 Thread Anthony D'Atri
I agree with others who have described RBD as rock solid.

Lots of people use RBD, especially for virtualization.  DigitalOcean's and 
Vultr's block storage services are Ceph, for example, as are lots of OpenStack 
Cinder deployments.  Not an EBS replacement as such, because AWS isn't being 
used in the first place, but rather an EBS analogue.

https://www.digitalocean.com/blog/why-we-chose-ceph-to-build-block-storage

RBD is resizeable, can be thin-provisioned, compressed, cloned, snapshotted, 
imported, exported, mirrored, etc.
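Day to day that looks like the following (pool/image names invented):

  rbd create mypool/vol1 --size 100G     # thin-provisioned by default
  rbd resize mypool/vol1 --size 200G
  rbd snap create mypool/vol1@snap1
  rbd export mypool/vol1 vol1.img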

Anthony-Bob sez check it out.
