[ceph-users] Thick provisioning

2017-10-16 Thread sinan
Hi, I have deployed a Ceph cluster (Jewel). By default, all block devices that are created are thin provisioned. Is it possible to change this setting? I would like all created block devices to be thick provisioned. In front of the Ceph cluster I am running OpenStack. Thanks! Sinan
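RBD in Jewel has no thick-provisioning flag; a common workaround is to write the image full once after creating it, so every object gets allocated up front. A minimal sketch, assuming a hypothetical pool "volumes" and image "vol01" (for OpenStack/Cinder volumes this would have to be done from a node that can map RBD images):

$ rbd create volumes/vol01 --size 100G              # image is still thin at this point
$ rbd map volumes/vol01                             # returns a block device, e.g. /dev/rbd0
$ dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct   # filling the image allocates every object
$ rbd unmap /dev/rbd0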

Re: [ceph-users] Thick provisioning

2017-10-18 Thread sinan
When you monitor the real usage closely this should not be a problem; but from experience, when there is no hard limit, overprovisioning will happen at some point. Sinan > I can only speak for some environments, but sometimes, you would want to > make sure that a cluster cannot fill up until y

[ceph-users] Slow requests during OSD maintenance

2018-07-17 Thread sinan
not the way to go, since no OSDs are out during the yum update and the node is still part of the cluster and will handle I/O. I think the best way is the combination of "ceph osd set noout" + stopping the OSD services, so the OSD node does not receive any traffic anymore. Any thoughts?
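A rough sketch of that maintenance flow on a systemd-based node (assuming the standard ceph-osd.target unit; host-specific details omitted):

$ ceph osd set noout                    # keep OSDs from being marked out during the window
$ systemctl stop ceph-osd.target        # stop all OSD daemons on this node
$ yum update -y                         # patch the node, reboot if required
$ systemctl start ceph-osd.target       # if the OSDs did not come back automatically
$ ceph osd unset noout                  # restore normal behaviour once the cluster is healthy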

[ceph-users] Pool size (capacity)

2018-07-20 Thread sinan
'capacity size'. Thanks! Sinan

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread sinan
am looking for. Thanks. Sinan > # for a specific pool: > > ceph osd pool get your_pool_name size > > >> On 20 Jul 2018 at 10:32, Sébastien VIGNERON wrote: >> >> # for all pools: >> ceph osd pool ls detail >> >> >>> On 20

Re: [ceph-users] Read/write statistics per RBD image

2018-07-24 Thread sinan
writable by QEMU and allowed by SELinux or AppArmor. Permissions on the folder: 0 drwxrwx---. 2 ceph ceph 40 23 Jul 11:27 ceph But /var/run/ceph is empty. Thanks! Sinan > Hello again, > > How can I determine $cctid for a specific rbd name? Or is there any good way > to map
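For reference, librbd clients only create an admin socket when one is configured for them; a minimal sketch of a [client] section in ceph.conf on the hypervisor, using the commonly suggested path template (the directory must be writable by QEMU and permitted by SELinux/AppArmor, as noted above):

[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

Each running QEMU process then exposes one .asok file there, which can be queried with ceph --admin-daemon <path> perf dump.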

[ceph-users] rbd ls operation not permitted

2018-10-08 Thread sinan
irt-daemon-driver-storage-rbd-3.9.0-14.el7_5.5.x86_64 librbd1-12.2.4-10.el7cp.x86_64 python-rbd-12.2.4-10.el7cp.x86_64 $ Both clusters are using the same Ceph client key, same Ceph configuration file. The only difference is the version of rbd. Is this expected

Re: [ceph-users] rbd ls operation not permitted

2018-10-08 Thread sinan
The result of your command: $ rbd ls --debug-rbd=20 -p ssdvolumes --id openstack 2018-10-08 13:42:17.386505 7f604933fd40 20 librbd: list 0x7fff5b25cc30 rbd: list: (1) Operation not permitted $ Thanks! Sinan On 08-10-2018 15:37, Jason Dillaman wrote: On Mon, Oct 8, 2018 at 9:24 AM wrote: Hi
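When rbd ls returns EPERM for one client while another works, the usual suspect is the caps on the client key rather than the rbd binary itself. A hedged check, reusing the client name and pool from this thread (the caps shown are the form the Jewel-era OpenStack integration docs use; adjust to your own policy):

$ ceph auth get client.openstack        # inspect the current mon/osd caps
$ ceph auth caps client.openstack \
      mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=ssdvolumes'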

Re: [ceph-users] rbd ls operation not permitted

2018-10-08 Thread sinan
librbd: list 0x7fff5b25cc30 rbd: list: (1) Operation not permitted $ Thanks! Sinan On 08-10-2018 15:37, Jason Dillaman wrote: > On Mon, Oct 8, 2018 at 9:24 AM wrote: >> >> Hi, >> >> I am running a Ceph cluster (Jewel, ceph version 10.2.10-17.el7cp). >> >> >

Re: [ceph-users] rbd ls operation not permitted

2018-10-08 Thread sinan
>> 2018-10-08 13:42:17.386505 7f604933fd40 20 librbd: list 0x7fff5b25cc30 >> rbd: list: (1) Operation not permitted >> $ >> >> Thanks! >> Sinan >> >> On 08-10-2018 15:37, Jason Dillaman wrote: >> > On Mon, Oct 8, 2018 at 9:24 AM wrote: >>

[ceph-users] Decommissioning cluster - rebalance questions

2018-12-03 Thread sinan
Why does this happen? The cluster should already be balanced after marking the OSD out. I didn't expect another rebalance when removing the OSD from the CRUSH map. Thanks! Sinan Polat
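Marking an OSD out and removing it from the CRUSH map change the placement input in two different ways, which is why a second rebalance follows. A commonly used sequence to drain an OSD with only one data movement (osd.12 is just an example ID):

$ ceph osd crush reweight osd.12 0      # drain via CRUSH weight; data moves once
$ # wait until all PGs are active+clean, then:
$ ceph osd out 12
$ systemctl stop ceph-osd@12
$ ceph osd crush remove osd.12          # weight is already 0, so no further movement
$ ceph auth del osd.12
$ ceph osd rm 12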

[ceph-users] Book & questions

2017-08-13 Thread Sinan Polat
Hi all, I am quite new with Ceph Storage. Currently we have a Ceph environment running, but in a few months we will be setting up a new Ceph storage environment. I have read a lot of information on the Ceph website, but the more information the better for me. What book(s) would you suggest?

Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread Sinan Polat
What does DWPD have to do with performance / IOPS? The SSD will just fail earlier, but it should not have any effect on the performance, right? Correct me if I am wrong, I just want to learn. > On 20 Aug 2017 at 06:03, Christian Balzer wrote the following: > > DWPD _

Re: [ceph-users] mon osd down out subtree limit default

2017-08-21 Thread Sinan Polat
The docs provide the following information: The smallest CRUSH unit type that Ceph will not automatically mark out. For instance, if set to host and if all OSDs of a host are down, Ceph will not automatically mark out these OSDs. But what does it exactly mean? Anyone who can explain it? Thanks
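In ceph.conf terms this is the mon osd down out subtree limit option; a minimal sketch, using host (the example value from the docs quote above):

[mon]
    mon osd down out subtree limit = host

With this set, a single dead OSD is still marked out and re-replicated after the usual timeout, but if every OSD of a host goes down at once, Ceph assumes the whole host will return and does not start a mass rebalance on its own.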

[ceph-users] Ruleset vs replica count

2017-08-24 Thread Sinan Polat
replicas in ams5-ssd and 3 replicas in ams6-ssd? Thanks! Sinan
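A replica count is a pool property; a CRUSH rule only decides where those copies land, but it can split them across two roots. A hedged sketch of such a rule, using the bucket names from this thread and assuming a pool size of 5 (2 copies from ams5-ssd, the remaining 3 from ams6-ssd):

rule ams5-ams6-ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ams5-ssd
    step chooseleaf firstn 2 type host
    step emit
    step take ams6-ssd
    step chooseleaf firstn -2 type host    # "pool size minus 2", i.e. 3 when size is 5
    step emit
}

The pool would then be set to size 5 and pointed at this rule (crush_ruleset 1 on Jewel-era releases).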

Re: [ceph-users] Ruleset vs replica count

2017-08-24 Thread Sinan Polat
Hi David, Thank you for your reply, I will read about the min_size 1 value. What about my initial question, anyone? Thanks! From: David Turner [mailto:drakonst...@gmail.com] Sent: Thursday, 24 August 2017 19:45 To: Sinan Polat; ceph-us...@ceph.com Subject: Re: [ceph-users

[ceph-users] Ceph df incorrect pool size (MAX AVAIL)

2017-08-27 Thread Sinan Polat
Hi, I am quite new to Ceph, so forgive me for any stupid questions. Setup: - 2 datacenters, each datacenter has 26 OSDs, which makes 52 OSDs in total. - ceph osd df shows that every disk is 1484GB. - I have 2 rulesets and 4 pools, 1 ruleset + 2 pools per datacenter

[ceph-users] MAX AVAIL in ceph df

2017-09-09 Thread Sinan Polat
Hi, How is the MAX AVAIL calculated in 'ceph df'? I seem to be missing some space. I have 26 OSDs, each is 1484GB (according to df). I have 3 replicas. Shouldn't the MAX AVAIL be: (26*1484)/3 = 12,861GB? Instead 'ceph df' is showing 7545G for the pool that is using the 26 OSDs. What i
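For what it's worth, MAX AVAIL is not simply raw capacity divided by replicas: it projects how much more data the pool can take before the fullest OSD reaches the full ratio, so both the full ratio and uneven PG distribution shave capacity off the naive number. A rough back-of-the-envelope, assuming the default 95% full ratio:

naive capacity : 26 * 1484 GB / 3   ~= 12,861 GB
with 95% ratio : 12,861 GB * 0.95   ~= 12,218 GB
reported       :  7,545 GB          -> the remaining gap points at the most-utilized OSD

ceph osd df (the %USE and VAR columns) shows how far the fullest OSD deviates from the average, which is what drags MAX AVAIL down.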

Re: [ceph-users] What's 'failsafe full'

2017-09-13 Thread Sinan Polat
Hi, According to http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-July/003140.html you can set it with: on the OSDs you may (not) want to change "osd failsafe full ratio" and "osd failsafe nearfull ratio". From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of dE V
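A minimal sketch of how those two options would be set in the [osd] section (values shown are only illustrative; the linked thread discusses whether touching them is wise at all):

[osd]
    osd failsafe full ratio = 0.97
    osd failsafe nearfull ratio = 0.90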

[ceph-users] Usage not balanced over OSDs

2017-09-13 Thread Sinan Polat
ce on each disk is more or less the same? - What will happen if I hit the MAX AVAIL, while most of the disks still have space? Thanks! Sinan
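On a Jewel/Luminous-era cluster the usual knob for evening out OSD usage is reweight-by-utilization; a hedged sketch (the 110 threshold means only OSDs more than 10% above average utilization are touched):

$ ceph osd df                                   # check the %USE / VAR spread first
$ ceph osd test-reweight-by-utilization 110     # dry run: shows which OSDs would be reweighted
$ ceph osd reweight-by-utilization 110          # apply; triggers a limited rebalance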

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Sinan Polat
You are talking about the min_size, which should be 2 according to your text. Please be aware that the min_size in your CRUSH rule is _not_ the replica size. The replica size is set on your pools. > On 7 Oct 2017 at 19:39, Peter Linder wrote the following: > >> On 10/7/2017 7:36 PM, Дроб
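To make the distinction concrete: the min_size/max_size lines inside a CRUSH rule only bound which pool sizes the rule applies to, while the actual replica count and write quorum live on the pool. A quick sketch (pool name hypothetical):

$ ceph osd pool get hybrid size          # number of copies CRUSH has to place
$ ceph osd pool get hybrid min_size      # copies required before a PG accepts I/O
$ ceph osd pool set hybrid size 3
$ ceph osd pool set hybrid min_size 2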

Re: [ceph-users] Configuring Ceph using multiple networks

2017-10-07 Thread Sinan Polat
Why do you put your MONs inside your cluster network? Shouldn't they reside within the public network? The cluster network is only for replication data / traffic between your OSDs. > On 7 Oct 2017 at 14:32, Kashif Mumtaz wrote the following: > > > > I have successfully installed Lu
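A minimal ceph.conf sketch of that split (subnets are made-up examples): MONs and clients live on the public network, OSD replication and recovery traffic uses the cluster network:

[global]
    public network  = 192.168.10.0/24     # MONs, clients, OSD front side
    cluster network = 192.168.20.0/24     # OSD-to-OSD replication/recovery only
    mon host = 192.168.10.11,192.168.10.12,192.168.10.13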

Re: [ceph-users] Degraded data redundancy (low space): 1 pg backfill_toofull

2018-07-28 Thread Sinan Polat
Ceph has tried to (re)balance your data; backfill_toofull means there is no space available to move data to, yet you have plenty of space. Why do you have so few PGs? I would increase the number of PGs, but before doing so let's see what others say. Sinan > On 28 Jul 2018 at 11:50, Sebast
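If raising the PG count turns out to be the fix, on a pre-Nautilus cluster it is a manual two-step change (pool name and target count are placeholders; increase in modest steps and let backfill settle in between):

$ ceph osd pool get volumes pg_num
$ ceph osd pool set volumes pg_num 256
$ ceph osd pool set volumes pgp_num 256    # data only starts moving once pgp_num follows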

Re: [ceph-users] IO rate-limiting with Ceph RBD (and libvirt)

2018-03-22 Thread Sinan Polat
FYI: I/O limiting in combination with OpenStack 10/12 + Ceph doesn't work properly. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1476830 > On 22 Mar 2018 at 07:59, Wido den Hollander wrote the following: > > > >> On 03/21/2018 06:48 PM, Andre Goree wrote: >> I'm trying to det

Re: [ceph-users] Memory configurations

2018-11-21 Thread Sinan Polat
I don't know about the memory, but your CPUs would be overkill. For what would you need 20 cores (40 threads)? When using 2 sockets I would go for 2 memory modules. Does it even work with just 1 module? Regards, Sinan > On 21 Nov 2018 at 22:30, Georgios Dimitrakakis wrote the following

[ceph-users] utilization of rbd volume

2018-12-28 Thread Sinan Polat
Hi all, We have a couple of hundred RBD volumes/disks in our Ceph cluster; each RBD disk is mounted by a different client. Currently we see quite high IOPS on the cluster, but we don't know which client/RBD is causing it. Is there an easy way to see the utilization per RBD disk? Thanks

Re: [ceph-users] utilization of rbd volume

2018-12-28 Thread Sinan Polat
Hi Jason, Thanks for your reply. Unfortunately we do not have access to the clients. We are running Red Hat Ceph 2.x, which is based on Jewel; that means we cannot pinpoint who or what is causing the load on the cluster, am I right? Thanks! Sinan > On 28 Dec 2018 at 15:14, Ja

[ceph-users] Explanation of perf dump of rbd

2019-01-31 Thread Sinan Polat
Hi, I finally figured out how to measure the statistics of a specific RBD volume: $ ceph --admin-daemon perf dump It outputs a lot, but I don't know what it means; is there any documentation about the output? For now the most important values are: - bytes read - bytes written I think I n
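A hedged example of pulling just the byte counters out of such a dump (the .asok path is a made-up illustration; in librbd's output the relevant counters are typically named rd_bytes and wr_bytes, grouped under a per-image section):

$ ceph --admin-daemon /var/run/ceph/ceph-client.openstack.12345.140234.asok perf dump \
      | python -m json.tool | grep -E '"(rd|wr)_bytes"'

The section header is usually of the form librbd-<image id>-<pool>-<image name>, which is also one way to tell which volume a given socket belongs to.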

Re: [ceph-users] Explanation of perf dump of rbd

2019-01-31 Thread Sinan Polat
, can we include the volume name in the path? Sinan > On 1 Feb 2019 at 00:44, Jason Dillaman wrote the following: > >> On Thu, Jan 31, 2019 at 12:16 PM Paul Emmerich >> wrote: >> >> "perf schema" has a description field that may o

Re: [ceph-users] BlueStore / OpenStack Rocky performance issues

2019-02-21 Thread Sinan Polat
Hi Eric, 40% slower performance compared to what? Could you please share the current performance figures. How many OSD nodes do you have? Regards, Sinan > On 21 February 2019 at 14:19, "Smith, Eric" wrote: > > > Hey folks – I recently deployed Luminous / BlueStore on SSDs

Re: [ceph-users] 'Missing' capacity

2019-04-15 Thread Sinan Polat
Probably an imbalance of data across your OSDs. Could you show ceph osd df? From there, take the disk with the lowest available space. Multiply that number by the number of OSDs. How much is it? Kind regards, Sinan Polat > On 16 Apr 2019 at 05:21, Igor Podlesny wrote the following

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Sinan Polat
I have deployed, expanded and upgraded multiple Ceph clusters using ceph-ansible. Works great. What information are you looking for? -- Sinan > On 17 Apr 2019 at 16:24, Francois Lafont wrote the following: > > Hi, > > +1 for ceph-ansible too. ;) >

Re: [ceph-users] ceph-ansible as non-root user

2019-04-22 Thread Sinan Polat
Hi, Does your ansible user have sudo rights? Without password prompt? Kind regards, Sinan Polat > On 23 Apr 2019 at 05:00, ST Wong (ITSC) wrote the following: > > Hi all, > > We tried to deploy a new CEPH cluster using latest ceph-ansible, run as an
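For reference, ceph-ansible's non-root deployment expects passwordless sudo for the deploy user on every node; a minimal sketch of a sudoers drop-in (the user name is just an example):

# /etc/sudoers.d/ceph-ansible
cephdeploy ALL=(ALL) NOPASSWD: ALL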

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Sinan Polat
Hi Felix, I can run your commands inside an OpenStack VM. The storage cluster consists of 12 OSD servers, each holding 8x 960GB SSDs. Luminous FileStore. Replicated 3. Would it help you to run your command on my cluster? Sinan > On 7 Jun 2019 at 08:52, Stolte, Felix wrote the following

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Sinan Polat
): 9.9894/0.00 # Kind regards, Sinan Polat > On 7 June 2019 at 12:47, "Stolte, Felix" wrote: > > > Hi Sinan, > > that would be great. The numbers should differ a lot, since you have an all > flash pool, but it would be interesting what we could exp

Re: [ceph-users] Questions regarding backing up Ceph

2019-07-24 Thread Sinan Polat
Hi, Why not use backup tools that can do native OpenStack backups? We are also using Ceph as the Cinder backend on our OpenStack platform. We use CommVault to make our backups. - Sinan > On 24 Jul 2019 at 17:48, Wido den Hollander wrote the following: > >

[ceph-users] Pool statistics via API

2019-10-10 Thread Sinan Polat
using the ceph command line and without upgrading my python-rbd package? Kind regards, Sinan Polat https://docs.ceph.com/docs/master/rados/api/python/

Re: [ceph-users] Pool statistics via API

2019-10-11 Thread Sinan Polat
Hi Ernesto, Thanks for the information! I didn't know about the existence of the REST Dashboard API. I will check that out. Thanks again! Sinan > On 11 Oct 2019 at 21:06, Ernesto Puerta wrote the following: > > Hi Sinan, > > If it's in the Dashboard,

Re: [ceph-users] Pool statistics via API

2019-10-13 Thread Sinan Polat
Hi Ernesto, I just opened the Dashboard and there is no menu at the top-right. Also no "?". I have a menu at the top-left which has the following items: Cluster health, Cluster, Block and Filesystems. Running Ceph version 12.2.8-89. Kind regards, Sinan Polat > On 11 October

[ceph-users] Consumer-grade SSD in Ceph

2019-12-18 Thread Sinan Polat
Hi, I am aware that https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ holds a list with benchmarks of quite a few different SSD models. Unfortunately it doesn't have benchmarks for recent SSD models. A client is planning to expand a running c
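The benchmark behind that list is a single-threaded O_DSYNC 4k write test, roughly along these lines (device path is an example and the test overwrites data on it), so candidate drives can be compared against the published numbers:

$ fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=ssd-journal-test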

Re: [ceph-users] Consumer-grade SSD in Ceph

2019-12-19 Thread Sinan Polat
Hi all, Thanks for the replies. I am not worried about their lifetime. We will be adding only 1 SSD disk per physical server. All SSDs are enterprise drives. If the added consumer-grade disk fails, no problem. I am more curious regarding their I/O performance. I do want to have 50% drop i

[ceph-users] Restarting firewall causes slow requests

2019-12-24 Thread Sinan Polat
Hi, Restarting the firewall (systemctl restart firewalld) on an OSD node causes slow requests. Is this expected behavior? Cluster is running Ceph 12.2. Thanks! Sinan
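A full firewalld restart tears down and rebuilds the ruleset, which briefly interrupts established OSD connections and can easily show up as slow requests on a busy cluster. For rule changes alone, a reload is usually enough; a sketch, assuming the stock firewalld service definitions for Ceph:

$ firewall-cmd --permanent --add-service=ceph        # OSD/MGR port range
$ firewall-cmd --permanent --add-service=ceph-mon    # on MON nodes
$ firewall-cmd --reload                              # apply rules without restarting the daemon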

Re: [ceph-users] Consumer-grade SSD in Ceph

2019-12-27 Thread Sinan Polat
Thanks for all the replies. In summary: consumer-grade SSDs are a no-go. What is an alternative to the SM863a? It is quite hard to get these since they are not in stock. Thanks! Sinan > On 23 Dec 2019 at 08:50, Eneko Lacunza wrote the following: > > Hi Sinan, > > Just t

Re: [ceph-users] ceph luminous bluestore poor random write performances

2020-01-02 Thread Sinan Polat
=138650-138650msec This is OpenStack Queens with Ceph FileStore (Luminous). Kind regards, Sinan Polat > On 2 January 2020 at 10:59, Stefan Kooman wrote: > > > Quoting Ignazio Cassano (ignaziocass...@gmail.com): > > Hello All, > > I installed ceph luminous with open

[ceph-users] Log format in Ceph

2020-01-08 Thread Sinan Polat
Hi, I couldn't find any documentation or information regarding the log format in Ceph. For example, I have 2 log lines (see below). For each 'word' I would like to know what it is/means. As far as I know, I can break the log lines into: [date] [timestamp] [unknown] [unknown] [unknown] [pthread]

Re: [ceph-users] Log format in Ceph

2020-01-08 Thread Sinan Polat
Hi Stefan, I do not want to know the reason. I want to parse Ceph logs (and use them in Elastic). But without knowing the log format I can't parse them. I know that the first and second 'words' are the date + timestamp, but what about the 3rd-5th words of a log line? Sinan > On 8 Jan 2020 at 09
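For what it's worth, a hedged reading of a typical daemon log line (the example line is invented; exact fields vary a bit per subsystem and release):

2020-01-08 09:15:02.123456 7f3a5d8c9700  0 log_channel(cluster) log [INF] : pgmap ...
# words 1-2 : date and timestamp
# word 3    : thread id of the logging thread (hex)
# word 4    : verbosity/debug level of the message
# word 5+   : subsystem prefix (here log_channel(cluster)) followed by the message itself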