Hi,
I have deployed a Ceph cluster (Jewel). By default all block devices that
are created are thin provisioned.
Is it possible to change this setting? I would like all newly created block
devices to be thick provisioned.
In front of the Ceph cluster, I am running OpenStack.
Thanks!
Sinan
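(As far as I know there is no cluster-wide option in Jewel that makes RBD
images thick provisioned; a possible workaround is to pre-allocate an image by
writing it full once after creation. A rough sketch, outside of OpenStack, with
a hypothetical pool/image name:)
$ rbd create --size 10240 volumes/vol-thick        # hypothetical 10 GB image
$ rbd map volumes/vol-thick                        # may require disabling newer image features on Jewel kernels
$ dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct  # assuming it mapped to /dev/rbd0; writing the whole device allocates all objects
$ rbd unmap /dev/rbd0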
When you closely monitor the real usage, this should not be a problem; but
from experience, when there is no hard limit, overprovisioning will happen at
some point.
Sinan
> I can only speak for some environments, but sometimes, you would want to
> make sure that a cluster cannot fill up until y
not the way to go, since no OSDs are
out during yum update and the node is still part of the cluster and will
handle I/O.
I think the best way is the combination of "ceph osd set noout" + stopping
the OSD services, so the OSD node does not handle any traffic anymore.
Any thoughts?
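(For reference, a sketch of that flow, assuming systemd-managed OSDs on the
node being patched:)
$ ceph osd set noout
$ systemctl stop ceph-osd.target      # on the OSD node: stop all OSD daemons
$ yum update -y                       # patch, reboot if needed
$ systemctl start ceph-osd.target     # or let the OSDs come back at boot
$ ceph osd unset noout                # once all OSDs are back up/in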
'capacity size'.
Thanks!
Sinan
am looking for.
Thanks.
Sinan
> # for a specific pool:
>
> ceph osd pool get your_pool_name size
>
>
>> On 20 Jul 2018 at 10:32, Sébastien VIGNERON wrote:
>>
>> #for all pools:
>> ceph osd pool ls detail
>>
>>
>>> On 20
writable by QEMU and allowed by SELinux or AppArmor
Permissions on the folder:
0 drwxrwx---. 2 ceph ceph 40 23 jul 11:27 ceph
But /var/run/ceph is empty.
Thanks!
Sinan
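(For context: the admin socket location is configured on the client side; a
common client-side ceph.conf snippet for this looks roughly like the lines
below. The exact paths are only an example and must be writable by the QEMU
process:)
[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log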
> Hello again,
>
> How can I determine $cctid for a specific rbd name? Or is there any good way
> to map
irt-daemon-driver-storage-rbd-3.9.0-14.el7_5.5.x86_64
librbd1-12.2.4-10.el7cp.x86_64
python-rbd-12.2.4-10.el7cp.x86_64
$
Both clusters are using the same Ceph client key, same Ceph
configuration file.
The only difference is the version of rbd.
Is this expected
The result of your command:
$ rbd ls --debug-rbd=20 -p ssdvolumes --id openstack
2018-10-08 13:42:17.386505 7f604933fd40 20 librbd: list 0x7fff5b25cc30
rbd: list: (1) Operation not permitted
$
Thanks!
Sinan
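(One thing worth comparing is how each cluster reports the caps of that client
key; for example, with purely hypothetical caps shown:)
$ ceph auth get client.openstack
[client.openstack]
        key = <redacted>
        caps mon = "allow r"
        caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=ssdvolumes"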
On 08-10-2018 15:37, Jason Dillaman wrote:
On Mon, Oct 8, 2018 at 9:24 AM wrote:
>>
>> Hi,
>>
>> I am running a Ceph cluster (Jewel, ceph version 10.2.10-17.el7cp).
>>
>>
>
>> 2018-10-08 13:42:17.386505 7f604933fd40 20 librbd: list 0x7fff5b25cc30
>> rbd: list: (1) Operation not permitted
>> $
>>
>> Thanks!
>> Sinan
>>
>> On 08-10-2018 15:37, Jason Dillaman wrote:
>> > On Mon, Oct 8, 2018 at 9:24 AM wrote:
Why does this happen? The cluster should already be balanced after the OSD
was marked out; I didn't expect another rebalance when removing the OSD from
the CRUSH map.
Thanks!
Sinan Polat
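(For what it's worth, the usual way to avoid that second data movement is to
drop the CRUSH weight to zero first and only then remove the OSD; a sketch
with a hypothetical OSD id:)
$ ceph osd crush reweight osd.12 0    # hypothetical id; this triggers the (only) rebalance
$ ceph -s                             # wait until all PGs are active+clean again
$ ceph osd out osd.12
$ systemctl stop ceph-osd@12          # on the OSD host
$ ceph osd crush remove osd.12        # no further data movement expected now
$ ceph auth del osd.12
$ ceph osd rm osd.12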
Hi all,
I am quite new with Ceph Storage. Currently we have a Ceph environment
running, but in a few months we will be setting up a new Ceph storage
environment.
I have read a lot of information on the Ceph website, but the more
information the better for me. What book(s) would you suggest?
What does DWPD have to do with performance / IOPS? The SSD will just fail earlier,
but it should not have any effect on the performance, right?
Correct me if I am wrong, just want to learn.
> On 20 Aug 2017 at 06:03, Christian Balzer wrote:
>
> DWPD
The docs provide the following information:
The smallest CRUSH unit type that Ceph will not automatically mark out. For
instance, if set to host and if all OSDs of a host are down, Ceph will not
automatically mark out these OSDs.
But what does it mean exactly? Anyone who can explain it? Thanks!
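(The option being described here looks like mon_osd_down_out_subtree_limit.
With the ceph.conf sketch below, a single failed OSD is still marked out and
re-replicated automatically, but if an entire host goes down its OSDs are left
alone, on the assumption that a whole host disappearing is more likely
maintenance or a transient failure:)
[mon]
    mon osd down out subtree limit = host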
licas in ams5-ssd and 3 replicas in ams6-ssd?
Thanks!
Sinan
Hi David,
Thank you for your reply, I will read about the min_size 1 value.
What about my initial question, anyone?
Thanks!
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Thursday, 24 August 2017 19:45
To: Sinan Polat; ceph-us...@ceph.com
Subject: Re: [ceph-users
Hi,
I am quite new to Ceph, so forgive me for any stupid questions.
Setup:
- 2 datacenters, each datacenter has 26 OSDs, which makes 52 OSDs in total.
- ceph osd df shows that every disk is 1484GB.
- I have 2 rulesets and 4 pools, 1 ruleset + 2 pools per
datacenter
Hi,
How is the MAX AVAIL calculated in 'ceph df'? It seems I am missing some space.
I have 26 OSDs, each is 1484GB (according to df), and I have 3 replicas.
Shouldn't the MAX AVAIL be: (26*1484)/3 = 12,861GB?
Instead 'ceph df' is showing 7545G for the pool that is using the 26 OSDs.
What i
Hi
According to:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-July/003140.html
You can set it with:
on the OSDs you may (not) want to change "osd failsafe full ratio" and "osd
failsafe nearfull ratio".
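(As a ceph.conf sketch; the values below are only examples, roughly the
historical defaults:)
[osd]
    osd failsafe full ratio = 0.97
    osd failsafe nearfull ratio = 0.90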
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of dE
ce on each disk is more or less the same?
- What will happen if I hit the MAX AVAIL, while most of the disks
still have space?
Thanks!
Sinan
You are talking about the min_size, which should be 2 according to your text.
Please be aware that the min_size in your CRUSH rule is _not_ the replica size;
the replica size is set on your pools.
> On 7 Oct 2017 at 19:39, Peter Linder wrote:
>
>> On 10/7/2017 7:36 PM, Дроб
Why do you put your mons inside your cluster network? Shouldn't they reside
within the public network?
The cluster network is only for replica data / traffic between your OSDs.
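(In ceph.conf terms, something like the following, with example subnets:)
[global]
    public network  = 192.168.10.0/24   # mons, clients and OSD front traffic
    cluster network = 192.168.20.0/24   # OSD replication / backfill traffic only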
> On 7 Oct 2017 at 14:32, Kashif Mumtaz wrote:
>
>
>
> I have successfully installed Lu
Ceph has tried to (re)balance your data; backfill_toofull means there is no
available space to move data to, yet you have plenty of space.
Why do you have so few PGs? I would increase the number of PGs, but before
doing so let's see what others say.
Sinan
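(If increasing the PG count turns out to be the answer, it would be along
these lines, with a hypothetical pool name and target value, raised in small
steps:)
$ ceph osd pool set your_pool pg_num 256
$ ceph osd pool set your_pool pgp_num 256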
> On 28 Jul 2018 at 11:50, Sebast
FYI: I/O limiting in combination with OpenStack 10/12 + Ceph doesn’t work
properly. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1476830
> On 22 Mar 2018 at 07:59, Wido den Hollander wrote:
>
>
>
>> On 03/21/2018 06:48 PM, Andre Goree wrote:
>> I'm trying to det
I don’t know about the memory, but your CPUs would be overkill. What would you
need 20 cores (40 threads) for?
When using 2 sockets I would go for 2 memory modules. Does it even work with
just 1 module?
Regards,
Sinan
> On 21 Nov 2018 at 22:30, Georgios Dimitrakakis wrote:
Hi all,
We have a couple of hundred RBD volumes/disks in our Ceph cluster; each RBD
disk is mounted by a different client. Currently we see quite high IOPS
happening on the cluster, but we don't know which client/RBD is causing it.
Is there an easy way to see the utilization per RBD disk?
Thanks!
Hi Jason,
Thanks for your reply.
Unfortunately we do not have access to the clients.
We are running Red Hat Ceph 2.x, which is based on Jewel; that means we cannot
pinpoint who or what is causing the load on the cluster, am I right?
Thanks!
Sinan
> On 28 Dec 2018 at 15:14, Ja
Hi,
I finally figured out how to measure the statistics of a specific RBD volume;
$ ceph --admin-daemon <path-to-asok> perf dump
It outputs a lot, but I don't know what it means; is there any documentation
about the output?
For now the most important values are:
- bytes read
- bytes written
I think I n
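(A sketch of pulling just those two counters out, assuming a jq-capable host
and a hypothetical socket path; as far as I know the relevant keys in the
librbd perf dump are rd_bytes and wr_bytes under the image's librbd-... section:)
$ ceph --admin-daemon /var/run/ceph/ceph-client.openstack.12345.140000000000.asok perf dump \
    | jq '.[] | select(has("rd_bytes")) | {rd_bytes, wr_bytes}'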
, can we include the volume name in the path?
Sinan
> On 1 Feb 2019 at 00:44, Jason Dillaman wrote:
>
>> On Thu, Jan 31, 2019 at 12:16 PM Paul Emmerich
>> wrote:
>>
>> "perf schema" has a description field that may o
Hi Eric,
40% slower performance compared to what? Could you please share the current
performance numbers? How many OSD nodes do you have?
Regards,
Sinan
> On 21 February 2019 at 14:19, "Smith, Eric" wrote:
>
>
> Hey folks – I recently deployed Luminous / BlueStore on SSDs
Probably an imbalance of data across your OSDs.
Could you show ceph osd df?
From there, take the disk with the lowest available space. Multiply that number
by the number of OSDs. How much is it?
Kind regards,
Sinan Polat
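(A worked example with made-up numbers: if ceph osd df shows the fullest OSD
has only 300 GB free, then with 26 OSDs that is roughly 300 GB * 26 = 7800 GB
of raw capacity left before the fullest OSD hits its full ratio, no matter how
much free space the emptier OSDs still have; divide by the replica count for
the net usable space.)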
> On 16 Apr 2019 at 05:21, Igor Podlesny wrote:
I have deployed, expanded and upgraded multiple Ceph clusters using
ceph-ansible. Works great.
What information are you looking for?
--
Sinan
> On 17 Apr 2019 at 16:24, Francois Lafont wrote:
>
> Hi,
>
> +1 for ceph-ansible too. ;)
>
Hi,
Does your ansible user have sudo rights, without a password prompt?
Kind regards,
Sinan Polat
> On 23 Apr 2019 at 05:00, ST Wong (ITSC) wrote:
>
> Hi all,
>
> We tried to deploy a new CEPH cluster using latest ceph-ansible, run as an
Hi Felix,
I can run your commands inside an OpenStack VM. The storage cluster consists
of 12 OSD servers, each holding 8x 960GB SSDs. Luminous FileStore, replica 3.
Would it help you if I ran your command on my cluster?
Sinan
> On 7 Jun 2019 at 08:52, Stolte, Felix wrote:
): 9.9894/0.00
#
Kind regards,
Sinan Polat
> On 7 June 2019 at 12:47, "Stolte, Felix" wrote:
>
>
> Hi Sinan,
>
> that would be great. The numbers should differ a lot, since you have an all
> flash pool, but it would be interesting, what we could exp
Hi,
Why not use backup tools that can do native OpenStack backups?
We are also using Ceph as the cinder backend on our OpenStack platform. We use
CommVault to make our backups.
- Sinan
> On 24 Jul 2019 at 17:48, Wido den Hollander wrote:
>
>
>
ng the ceph command line and without upgrading my python-rbd package?
Kind regards
Sinan Polat
https://docs.ceph.com/docs/master/rados/api/python/
Hi Ernesto,
Thanks for the information! I didn’t know about the existence of the REST
Dashboard API. I will check that out. Thanks again!
Sinan
> On 11 Oct 2019 at 21:06, Ernesto Puerta wrote:
>
> Hi Sinan,
>
> If it's in the Dashboard,
Hi Ernesto,
I just opened the Dashboard and there is no menu at the top-right. Also no "?".
I have a menu at the top-left which has the following items: Cluster health,
Cluster, Block and Filesystems.
Running Ceph version 12.2.8-89.
Kind regards,
Sinan Polat
> On 11 October
Hi,
I am aware that
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
holds a list with benchmarks of quite a few different SSD models. Unfortunately,
it doesn't have benchmarks for recent SSD models.
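(For reference, the test used for that list is, as far as I remember, a
single-threaded 4k sync-write fio run directly against the device, roughly:)
$ fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
      --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
# WARNING: destructive, writes to the raw device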
A client is planning to expand a running c
Hi all,
Thanks for the replies. I am not worried about their lifetime. We will be
adding only 1 SSD disk per physical server. All SSDs are enterprise drives. If
the added consumer-grade disk fails, no problem.
I am more curious regarding their I/O performance. I do want to have 50% drop
i
Hi,
Restarting the firewall (systemctl restart firewalld) on an OSD node causes slow
requests. Is this expected behavior?
Cluster is running Ceph 12.2.
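(A possible way to reduce the impact is to reload the rules instead of
restarting the daemon, which should be less disruptive to established
connections; might be worth testing:)
$ firewall-cmd --reload      # instead of systemctl restart firewalld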
Thanks!
Sinan
Thanks for all the replies. In summary: consumer-grade SSDs are a no-go.
What is an alternative to the SM863a? It is quite hard to get these since they
are not in stock.
Thanks!
Sinan
> On 23 Dec 2019 at 08:50, Eneko Lacunza wrote:
>
> Hi Sinan,
>
> Just t
=138650-138650msec
This is OpenStack Queens with Ceph FileStore (Luminous).
Kind regards,
Sinan Polat
> On 2 January 2020 at 10:59, Stefan Kooman wrote:
>
>
> Quoting Ignazio Cassano (ignaziocass...@gmail.com):
> > Hello All,
> > I installed ceph luminous with open
Hi,
I couldn't find any documentation or information regarding the log format in
Ceph. For example, I have 2 log lines (see below). For each 'word' I would like
to know what it is/means.
As far as I know, I can break the log lines into:
[date] [timestamp] [unknown] [unknown] [unknown] [pthread]
Hi Stefan,
I do not want to know the reason. I want to parse Ceph logs (and use it in
Elastic). But without knowing the log format I can’t parse them. I know that the
first and second ‘words’ are the date + timestamp, but what about the 3rd-5th words
of a log line?
Sinan
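(For what it's worth, the default daemon log lines seem to follow
"<date> <time> <thread-id> <debug-level> <message>", so a starting point for a
parser could be a pattern like the one below; treat the field meanings as an
assumption, not documentation. The log file path is just an example:)
$ grep -P '^(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}\.\d+) ([0-9a-f]+) +(-?\d+) (.*)$' /var/log/ceph/ceph-osd.0.log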
> On 8 Jan 2020 at 09