Hi,
mon: /var/lib/ceph/mon/*
mds: inside the cephfs_data and cephfs_metadata rados pools
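For reference, a quick way to see both (paths and pool names are the usual
defaults; adjust to your deployment):

  # mon data is a local RocksDB store on the mon host
  ls /var/lib/ceph/mon/ceph-$(hostname -s)/
  # mds data is not on local disk, it lives in the CephFS RADOS pools
  rados df | egrep 'cephfs_(data|metadata)'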
On 07/22/2019 09:25 PM, dhils...@performair.com wrote:
> All;
>
> Where, in the filesystem, do MONs and MDSs store their data?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technolo
18 24.2GiB 0.29 8.17TiB 6622
> kube 21 1.82GiB 0.03 5.45TiB 550
> .log 22 0B 0 5.45TiB 176
>
>
> The stuff in the data pool and the rgw pools is old data that we used
Hi,
I see that you are using rgw
RGW comes with many pools, but most of them are used for metadata and
configuration; those do not store much data
Such pools do not need more than a couple of PGs each (I use pg_num = 8)
You need to allocate your PGs to the pools that actually store the data
(see the sketch below)
Please do th
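As a rough sketch, assuming the usual default RGW pool names (check yours
with "ceph osd pool ls"), the idea is:

  # metadata/configuration pools: a tiny pg_num is enough
  ceph osd pool create default.rgw.meta 8
  ceph osd pool create default.rgw.log 8
  # the bucket data pool is where the PGs should actually go
  ceph osd pool create default.rgw.buckets.data 256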
From what I understand of your needs, you should create multiple Ceph
clusters, and hide them behind a proxy
You will have a large cluster, which is the default storage
If a customer owns their machines, you will have an additional smaller
cluster on them
The single entry point will default to your
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/rbd0 8.0T 4.8T 3.3T 60% /mnt/nfsroot/rbd0
> /dev/rbd1 9.8T 34M 9.8T 1% /mnt/nfsroot/rbd1
>
>
> only 5T is taken up
>
>
> On Thu, Feb 28, 2019 at 2:26 PM Jack wrote:
>
>> Aren't you using a 3-replica pool?
Aren't you using a 3-replica pool?
(15745 GB + 955 GB + 1595 MB) * 3 ~= 51157 GB (there is overhead involved)
Best regards,
On 02/28/2019 11:09 PM, solarflow99 wrote:
> thanks, I still can't understand what's taking up all the space 27.75
>
> On Thu, Feb 28, 2019 at 7:18 AM Mohamad Gebai wrote:
>
>
Hi,
There is an admin API for RGW:
http://docs.ceph.com/docs/master/radosgw/adminops/
You can check out rgwadmin [1] to see how to use it
Best regards,
[1] https://github.com/UMIACS/rgwadmin
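If you prefer scripting the CLI rather than the REST API, radosgw-admin
exposes the same operations; a rough sketch (uid and display name are
placeholders):

  radosgw-admin user create --uid=openstack-user --display-name="OpenStack user"
  # add an extra S3 key pair to an existing user
  radosgw-admin key create --uid=openstack-user --key-type=s3 --gen-access-key --gen-secret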
On 01/31/2019 06:11 PM, shubjero wrote:
> Has anyone automated the ability to generate S3 keys for OpenSt
AFAIK, the only AAA available with librados works at pool granularity
So, if you create a Ceph user with access to your pool, it will get
access to all the content stored in that pool
If you want to use librados for your use case, you will need to
implement, in your code, the application logic r
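For reference, pool-level access with cephx looks roughly like this (client
and pool names are placeholders):

  ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rw pool=app1-pool'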
Hi,
AFAIK, there is no encryption on the wire, either between daemons or
between a daemon and a client
The only encryption available in Ceph is at rest, using dmcrypt (i.e.
your data is encrypted before being written to disk)
Regards,
On 01/10/2019 07:59 PM, Sergio A. de Carvalho Jr. wrote:
> Hi
We are using qemu storage migration regularly via proxmox
It works fine, you can go ahead
On 12/11/2018 05:39 PM, Lionel Bouton wrote:
>
> I believe OP is trying to use the storage migration feature of QEMU.
> I've never tried it and I wouldn't recommend it (probably not very
> tested and there is a
With / mounted somewhere, you can simply "mv" directories around
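A minimal sketch, assuming the CephFS root is mounted on /mnt/cephfs
(monitor address, secret file and paths are placeholders):

  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  mv /mnt/cephfs/parent/a /mnt/cephfs/a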
On 12/10/2018 02:59 PM, Zhenshi Zhou wrote:
> Hi,
>
> Is there a way I can move sub-directories outside the directory.
> For instance, a directory /parent contains 3 sub-directories
> /parent/a, /parent/b, /parent/c. All these
There is only a simple iptables conntrack setup there
Could it be something related to a timeout ?
/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established is
currently set to 7875
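If that timeout turns out to be the culprit, checking and raising it is
straightforward (the value below is only an example):

  sysctl net.netfilter.nf_conntrack_tcp_timeout_established
  sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400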
Best regards,
On 12/05/2018 02:47 AM, Yan, Zheng wrote:
>
> This is more like network issue. check if there is firewall bet
Why is the client frozen in the first place ?
Is this because it somehow lost the connection to the mon (I have not
found anything about this yet) ?
How can I prevent this ?
Can I make the client reconnect in less than 15 minutes, to lessen the
impact ?
Best regards,
On 12/04/2018 07:41 PM, Gregory
Thanks
However, I do not think this tip is related to my issue
Best regards,
On 12/04/2018 12:00 PM, NingLi wrote:
>
> Hi, maybe this reference can help you
>
> http://docs.ceph.com/docs/master/cephfs/troubleshooting/#disconnected-remounted-fs
>
>
>> On Dec 4, 2018, at 18:55, c...@jack.fr.eu.
We simply use the "noop" scheduler on our nand-based ceph cluster
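For reference, a sketch of checking and switching the scheduler (sdX is a
placeholder; persist it via udev or kernel parameters):

  cat /sys/block/sdX/queue/scheduler
  echo noop > /sys/block/sdX/queue/scheduler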
On 11/05/2018 09:33 PM, solarflow99 wrote:
> I'm interested to know about this too.
>
>
> On Mon, Nov 5, 2018 at 10:45 AM Bastiaan Visser wrote:
>
>>
>> There are lots of rumors around about the benefit of changing
>> io-schedu
On 09/19/2018 06:26 PM, ST Wong (ITSC) wrote:
> Hi, thanks for your help.
>
>> Snapshots are exported remotely, thus they are really backups
>> One or more snapshots are kept on the live cluster, for faster recovery: if
>> a user broke his disk, you can restore it really fast
> -> Backups can b
20
> 30 ssd 0.48000 1.0 447G 149G 297G 33.50 0.79 17
>
> rbd.ssd: pg_num 8 pgp_num 8
>
> I will look into the balancer, but I am still curious why these 8 pg
> (8x8=64? + 8? = 72) are still not spread evenly. Why not 18 on every
> osd?
>
ceph osd df will get you more information: variation & PG count for
each OSD
Ceph does not spread objects on a per-object basis, but on a per-PG basis
The data distribution is thus not perfect
You may increase your pg_num, and/or use the mgr balancer module
(http://docs.ceph.com/docs/mimic/mgr/balance
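A sketch of the relevant commands (upmap mode assumes all clients are
luminous or newer):

  ceph osd df
  ceph mgr module enable balancer
  ceph balancer mode upmap
  ceph balancer on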
On the same network, all hosts shall have the same MTU
Packet truncation can only happen on routers
Let's say you have an OSD with MTU 9000, and a mon with MTU 1500
Communication from mon to osd will work, because the mon will send
1500-byte packets, and this is < 9000
However, communica
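A quick way to spot an MTU mismatch between two hosts: a 9000-byte MTU
leaves 8972 bytes of ICMP payload once headers are accounted for (interface
name and address are placeholders):

  ip link show dev eth0 | grep mtu
  ping -c 3 -M do -s 8972 192.0.2.10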
On 08/09/2018 03:01 PM, Piotr Dałek wrote:
> This introduces one big issue: it enforces COW snapshot on image,
> meaning that original image access latencies and consumed space
> increases. "Lightweight" snapshots would remove these inefficiencies -
> no COW performance and storage overhead.
Do yo
Fuse
On 07/22/2018 10:02 PM, Bryan Henderson wrote:
> Is there some better place to get a filesystem driver for the longterm
> stable Linux kernel (3.16) than the regular kernel.org source distribution?
>
> The reason I ask is that I have been trying to get some clients running
> Linux kernel 3.
My reaction when I read that there will be no Mimic soon on Stretch:
https://pix.milkywan.fr/JDjOJWnx.png
Anyway, thank you for the kind explanation, as well as for getting in
touch with the Debian team about this issue
On 06/04/2018 08:39 PM, Sage Weil wrote:
> [adding ceph-maintainers]
>
> On
dmcrypt is part of the whole device-mapper infrastructure:
https://en.wikipedia.org/wiki/Device_mapper
LVM is nothing but a tool to manipulate device-mapper easily
I do not see any reason for using dmcrypt directly without LVM
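For what it's worth, ceph-volume already wires the two together; a hedged
sketch (the device is a placeholder):

  ceph-volume lvm create --bluestore --data /dev/sdX --dmcrypt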
On 06/02/2018 11:24 AM, Marc Roos wrote:
>
>>> I would like to try
On 06/02/2018 11:09 AM, Marc Roos wrote:
> I would like to try this disk encryption without lvm.
> And then have ceph use this device dmcrypt device
You know that dmcrypt and LVM are tightly linked, right ?
The latter is a tool to handle the former
On 05/30/2018 09:20 PM, Simon Ironside wrote:
> * What's the recommendation for what to deploy?
>
> I have a feeling the answer is going to be Luminous (as that's current
> LTS) and Bluestore (since that's the default in Luminous) but several
> recent threads and comments on this list make me doub
On 05/24/2018 11:40 PM, Stefan Kooman wrote:
>> What are your thoughts, would you run 2x replication factor in
>> Production and in what scenarios?
Me neither, mostly because I have yet to read a technical point of view
from someone who has read and understood the code
I do not buy Janne's "trust me,
Hi,
About Bluestore, sure there are checksums, but are they fully used ?
Rumor has it that on a replicated pool, during recovery, they are not
> My thoughts on the subject are that even though checksums do allow to find
> which replica is corrupt without having to figure which 2 out of 3 copies a
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>
>
> On Wed, May 16, 2018 at 4:19 PM Jack wrote:
>
>> Hi,
>>
>> Many (most ?) filesystems does not store multiple files on the same block
Hi,
Many (most?) filesystems do not store multiple files in the same block
Thus, with sdbox, every single mail (you know, that kind of mail with 10
lines in it) will eat an inode, and a block (4k here)
mdbox is more compact in this regard
Another difference: sdbox removes the message, mdbox does
For what it's worth, Yahoo published their setup some years ago:
https://yahooeng.tumblr.com/post/116391291701/yahoo-cloud-object-store-object-storage-at
54 nodes per cluster for 3.2PB of raw storage; I guess this leads to 16
* 4TB HDDs per node, thus 864 per cluster
(they may have used ssd as journ
Well
I currently manage 27 nodes, over 9 clusters
There are some burdens that you should consider
The easiest is: "what do we do when two small clusters, which grow
slowly, need more space?"
With one cluster: buy a node, add it, done
With two clusters: buy two nodes, add them, done
This can be
On 03/31/2018 03:24 PM, Mark Nelson wrote:
>> 1. Completely new users may think that bluestore defaults are fine and
>> waste all that RAM in their machines.
>
> What does "wasting" RAM mean in the context of a node running ceph? Are
> you upset that other applications can't come in and evict blue
Yes, this is what object-map does, it tracks used objects
For your new 50TB image:
- Without object-map, rbd rm must iterate over every object, find out
that the object does not exist, move on to the next object, etc.
- With object-map, rbd rm gets the used-object list, finds it empty, and
job is d
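For reference, a sketch of enabling it on an existing image (object-map
requires exclusive-lock; pool and image names are placeholders):

  rbd feature enable rbd/myimage exclusive-lock object-map fast-diff
  rbd object-map rebuild rbd/myimage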
Kernel 4.4 is more than 2 years old
If your distribution has not made any release since then, and does not
plan to do so in the near future, change it
On 02/25/2018 01:18 PM, Massimiliano Cuttini wrote:
> Hi everybody,
>
> Just a simple question.
> In order to deploy Ceph...
>
>/Do you'll use
block until both osds are back up. If I were you, I would use
>> min_size=2 and change it to 1 temporarily if needed to do maintenance
>> or troubleshooting where down time is not an option.
>>
>> On Thu, Feb 22, 2018, 5:31 PM Georgios Dimitrakakis wrote:
>>
>>>
If min_size == size, a single OSD failure will place your pool read-only
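A quick sketch of checking and adjusting it (the pool name is a placeholder):

  ceph osd pool get mypool size
  ceph osd pool get mypool min_size
  ceph osd pool set mypool min_size 2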
On 02/22/2018 11:06 PM, Georgios Dimitrakakis wrote:
> Dear all,
>
> I would like to know if there are additional risks when running CEPH
> with "Min Size" equal to "Replicated Size" for a given pool.
>
> What are the drawb
On 01/22/2018 08:38 PM, Massimiliano Cuttini wrote:
> The web interface is needed because: *cmd-lines are prone to typos.*
And you never misclick, indeed.
> SMART is widely used.
SMART has never been, and will never be, of any use for failure prediction.
> My opinion is pretty simple: the more a softwar
My cluster (55 OSDs) has been running 12.2.x since the release, with
bluestore too
All good so far
On 16/11/2017 15:14, Konstantin Shalygin wrote:
> Hi cephers.
> Some thoughts...
> At this time my cluster on Kraken 11.2.0 - works smooth with FileStore
> and RBD only.
> I want upgrade to Luminous 12.2.1 and go
Online does that on C14 (https://www.online.net/en/c14)
IIRC, 52 spinning disks per RU, with only 2 disks usable at a time
There is some custom hardware, though, and it is really designed for cold
storage (as an IO must wait for an idle slot, power-on the device, do
the IO, power-off the device and r
My conf (may not be optimal):
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name FQDN;
    ssl_certificate /etc/ssl/certs/FQDN.crt;
    ssl_certificate_key /etc/ssl/private/FQDN.key;
    add_header Strict-Transport-Security 'max-age=31536000;
Or maybe you reach that IPv4 address directly, and that IPv6 address via a router, somehow
Check your routing table and neighbor table
On 27/10/2017 16:02, Wido den Hollander wrote:
>
>> Op 27 oktober 2017 om 14:22 schreef Félix Barbeira :
>>
>>
>> Hi,
>>
>> I'm trying to configure a ceph cluster using IPv6 onl
Hi,
I would like some information about the following
Let's say I have a running cluster, with 4 OSDs: 2 SSDs and 2 HDDs
My single pool has size=3, min_size=2
For a write-only pattern, I thought I would get SSD-level performance,
because the write would be acked as soon as min_size OSDs acked
B
You cannot.
On 02/10/2017 21:43, Andrei Mikhailovsky wrote:
> Hello everyone,
>
> what is the safest way to decrease the number of PGs in the cluster.
> Currently, I have too many per osd.
>
> Thanks
>
>
>
Yes, use the tips from here:
http://ceph.com/geen-categorie/incremental-snapshots-with-rbd/
Basically (see the sketch below):
- create a snapshot
- export the diff between your new snap and the previous one
- on the backup cluster, import the diff
This way, you can keep a few snaps on the production cluster, for quick
reco
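A minimal sketch of one backup iteration, assuming an image rbd/vm-disk and
a previous snapshot snap1 already shipped to the backup cluster (all names
are placeholders):

  rbd snap create rbd/vm-disk@snap2
  rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 - \
    | ssh backup-host rbd import-diff - rbd/vm-disk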
How many PGs? How many pools (and how much data; please post rados df)?
On 13/09/2017 22:30, Sinan Polat wrote:
> Hi,
>
>
>
> I have 52 OSD's in my cluster, all with the same disk size and same weight.
>
>
>
> When I perform a:
>
> ceph osd df
>
>
>
> The disk with the least available
> command to upload a dummy object and then remove it properly
> # rados -c /etc/ceph/ceph.conf -p ceph.rgw.buckets.data put
> be8fa19b-ad79-4cd8-ac7b-1e14fdc882f6.2384280.20_$OBJECT dummy.file
>
> Hope it helps,
> Andreas
>
> On 10 Sep 2017 1:20 a.m., "Jack" wrot
Hi,
I face a wild issue: I cannot remove an object from rgw (via the S3 API)
My steps:
s3cmd ls s3://bucket/object -> it exists
s3cmd rm s3://bucket/object -> success
s3cmd ls s3://bucket/object -> it still exists
At this point, I can curl and get the object (thus, it does exist)
Doing the same vi
Hi Sage,
The one option I do not want for Ceph is the last one: supporting
upgrades across multiple LTS versions
I'd rather wait 3 months for a better release (both in terms of
functions and quality) than seeing the Ceph team exhausted, having to
maintain for years a lot more releases and code
Othe
Hi,
I have no numbers to share, nor did I test with Ceph specifically
However, I saturated a 4*10G NIC, with and without jumbo frames
In terms of CPU consumption, jumbo frames used 2 to 3 times less CPU
If you can use jumbo frames, just do it - there is no drawback, and the
gains are appreciable
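Enabling them is a one-liner per interface, as long as every device on the
path supports the larger MTU (interface name is a placeholder; make it
persistent in your network configuration):

  ip link set dev eth0 mtu 9000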
On 11/08/2017 2
You may just upgrade to Luminous, then replace filestore with bluestore
Don't be scared, as Sage said:
> The only good(ish) news is that we aren't touching FileStore if we can
> help it, so it less likely to regress than other things. And we'll
> continue testing filestore+btrfs on jewel for some
Yes, I have this problem too. Actually the newest kernels don't support
these features either. It's a strange problem.
On Aug 16, 2016 6:36 AM, "Chengwei Yang" wrote:
> On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> > On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
> > wrote:
> > > H
Thanks Christian, and all ceph-users
Your guidance was very helpful, much appreciated!
Regards
Jack Makenz
On Mon, May 30, 2016 at 11:08 AM, Christian Balzer wrote:
>
> Hello,
>
> you may want to read up on the various high-density node threads and
> conversations here.
>
&g
Forwarded conversation
Subject: Wasting the Storage capacity when using Ceph based On high-end
storage systems
From: *Jack Makenz*
Date: Sun, May 29, 2016 at 6:52 PM
To: ceph-commun...@lists.ceph.com
Hello All,
There are some serious problems with Ceph that may waste