We are trying to use CephFS as storage for web graphics, such as
thumbnails and so on.
Is there any way to reduce the storage overhead? On the test cluster we have
1 fs and 2 pools (meta and data) with replica size = 2:
objects: 1.02 M objects, 1.1 GiB
usage: 144 GiB used, 27 GiB / 172 GiB ava
Are you using filestore or bluestore on the OSDs? If filestore, what is
the underlying filesystem?
You could try setting debug_osd and debug_filestore to 20 and see if
that gives some more info?
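If it helps, one way to bump those levels at runtime is via injectargs (a
sketch; level 20 is very verbose, so remember to turn it back down afterwards):

# raise debug levels on all OSDs without restarting them
ceph tell osd.* injectargs '--debug_osd 20 --debug_filestore 20'
# reproduce the issue, then check /var/log/ceph/ceph-osd.*.log
# drop the levels back down (or to whatever your previous values were)
ceph tell osd.* injectargs '--debug_osd 1/5 --debug_filestore 1/5'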
On Wed, Sep 19, 2018 at 12:36 PM fatkun chan wrote:
>
>
> ceph version 12.2.5 (cad919881333ac9227417158
For cephfs & rgw, it all depends on your needs, as with rbd.
You may want to trust Ceph blindly,
or you may back up all your data, just in case (better safe than sorry,
as he said).
To my knowledge, there is little or no impact from keeping a large number
of snapshots on a cluster.
With rbd, you can indee
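For reference, a minimal sketch of the snapshot-then-export approach described
above (pool, image, snapshot and host names are made up for illustration):

# take a point-in-time snapshot of an image
rbd snap create rbd/vm-disk-1@backup-2018-09-19
# ship it off-cluster; export-diff writes to stdout when given '-'
rbd export-diff rbd/vm-disk-1@backup-2018-09-19 - | ssh backuphost 'cat > /backups/vm-disk-1.diff'
# on the backup side it can later be replayed with 'rbd import-diff'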
Hi,
On 19.09.18 at 03:24, ST Wong (ITSC) wrote:
> Hi,
>
> Thanks for your information.
> May I know more about the backup destination to use? As the size of the
> cluster will be a bit large (~70TB to start with), we're looking for some
> efficient method to do that backup. Seems RBD mirror
Hi John,
I'm not 100% sure of that. It could be that there's a path through
the code that's healthy, but just wasn't anticipated at the point that
warning message was added. I wish I had a more unambiguous response
to give!
then I guess we'll just keep ignoring these warnings from the replay
Hello everyone,
I am currently working on the design of a Ceph cluster, and I was
asking myself some questions regarding the security of the cluster.
(Cluster should be deployed using Luminous on Ubuntu 16.04)
Technically, we would have HVs exploiting the block storage, but we
are in a position wh
On Wed, Sep 19, 2018 at 10:37 AM Eugen Block wrote:
>
> Hi John,
>
> > I'm not 100% sure of that. It could be that there's a path through
> > the code that's healthy, but just wasn't anticipated at the point that
> > warning message was added. I wish I had a more unambiguous response
> > to give
On Wed, 19 Sep 2018, KEVIN MICHAEL HRPCEK wrote:
> Sage,
>
> Unfortunately the mon election problem came back yesterday and it makes
> it really hard to get a cluster to stay healthy. A brief unexpected
> network outage occurred and sent the cluster into a frenzy and when I
> had it 95% healthy
Yeah, since we haven't knowingly done anything about it, it would be a
(pleasant) surprise if it was accidentally resolved in mimic ;-)
Too bad ;-)
Thanks for your help!
Eugen
Quoting John Spray:
On Wed, Sep 19, 2018 at 10:37 AM Eugen Block wrote:
Hi John,
> I'm not 100% sure of that.
The cluster needs time to remove those objects from the previous pools. What
you can do is wait.
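If you want to watch the space being reclaimed while you wait, the standard
status commands are enough (nothing pool-specific assumed here):

# overall used/available, refreshed every 10 seconds
watch -n 10 ceph df
# cluster health and PG states while the deletes drain
ceph -s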
From: Mike Cave
To: ceph-users
Date: 2018/09/19 06:24
Subject: [ceph-users] total_used statistic incorrect
From: "ceph-users"
Greetings,
I’ve recently run into an issue with
Hi, I've recently deployed fresh cluster via ceph-ansible. I've not yet
created pools, but storage is used anyway.
[root@ceph01 ~]# ceph version
ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
(stable)
[root@ceph01 ~]# ceph df
GLOBAL:
SIZE   AVAIL   RAW USED   %
Hi Cephers,
Any plans for Ceph Mimic packages for Ubuntu Trusty? I found only
ceph-deploy.
https://download.ceph.com/debian-mimic/dists/trusty/main/binary-amd64/
Thanks
Jakub
The 'used' data is the WAL+DB size on each OSD.
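A quick way to confirm that the reported usage is per-OSD BlueStore overhead
rather than pool data (osd.0 is just an example id):

# per-OSD raw usage on the still-empty cluster
ceph osd df
# BlueFS/RocksDB allocation on one OSD via its admin socket
# (or run a plain 'perf dump' and look for the bluefs section)
ceph daemon osd.0 perf dump bluefs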
On Wed, Sep 19, 2018 at 3:50 PM Jakub Jaszewski
wrote:
>
> Hi, I've recently deployed fresh cluster via ceph-ansible. I've not yet
> created pools, but storage is used anyway.
>
> [root@ceph01 ~]# ceph version
> ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0
Hello,
We have Mimic version 13.2.1 using Bluestore. OSDs are using NVMe disks for
data storage (in AWS).
Four OSDs are active in replicated mode.
Further information is available on request; since there are so many config options I am
not sure where to focus my attention yet. Assume we have default options.
On Mon, Sep 17, 2018 at 5:39 AM, Jeffrey Zhang wrote:
> In one env, which is deployed through containers, I found that ceph-osd always
> commits suicide due to "error (24) Too many open files"
>
> Then I increased the LimitNOFILE for the container from 65k to 655k, which
> fixed the issue.
> But the
I doubt it - Mimic needs gcc v7 I believe, and Trusty's a bit old for that.
Even the Xenial releases aren't straightforward and rely on some backported
packages.
Sean, missing Mimic on debian stretch
On Wed, 19 Sep 2018, Jakub Jaszewski said:
> Hi Cephers,
>
> Any plans for Ceph Mimic packag
You're going to need to tell us *exactly* what you're doing. I presume this
uses CephFS somehow? Are you accessing via NFS or something? Using what
client versions?
CephFS certainly isn't supposed to allow this, and I don't think there are
any currently known bugs which could leak it. But there ar
No, it doesn't. In fact, I'm not aware of any client that sets this
flag, I think it's more for custom applications.
Paul
2018-09-18 21:41 GMT+02:00 Kevin Olbrich :
> Hi!
>
> is the compressible hint / incompressible hint supported on qemu+kvm?
>
> http://docs.ceph.com/docs/mimic/rados/configura
I set mon lease = 30 yesterday and it had no effect on the quorum election. To
give you an idea of how much cpu ms_dispatch is using, from the last mon
restart about 7.5 hours ago, the ms_dispatch thread has 5h 40m of cpu time.
Below are 2 snippets from perf top. I took them while ms_dispatch wa
Hi Gregory,
Thanks for your reply.
Yes, the file is stored on CephFS.
Accessed using ceph client
Everything is a basic install following the ceph-deploy guide
Not sure what details would be helpful…
The file is written to by a webserver (apache)
The file is accessed by the webserver on request
Okay, so you’re using the kernel client. What kernel version is it? I think
this was one of a few known bugs there a while ago that have since been
fixed.
On Wed, Sep 19, 2018 at 7:24 AM Thomas Sumpter
wrote:
> Hi Gregory,
>
>
>
> Thanks for your reply.
>
>
>
> Yes, the file is stored on CephFS.
No, Ceph Mimic will not be available for Ubuntu Trusty 14.04. That release
is almost 4.5 years old now, you should start planning towards an OS
upgrade.
On Wed, Sep 19, 2018 at 8:54 AM Jakub Jaszewski
wrote:
> Hi Cephers,
>
> Any plans for Ceph Mimic packages for Ubuntu Trusty? I found only
> ce
Linux version 4.18.4-1.el7.elrepo.x86_64 (mockbuild@Build64R7) (gcc version
4.8.5 20150623 (Red Hat 4.8.5-28) (GCC))
CentOS 7
From: Gregory Farnum
Sent: Wednesday, September 19, 2018 4:27 PM
To: Thomas Sumpter
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Delay Between Writing Data an
It's hard to tell exactly from the below, but it looks to me like there is
still a lot of OSDMap reencoding going on. Take a look at 'ceph features'
output and see who in the cluster is using pre-luminous features. I'm
guessing all of the clients? For any of those sessions, fetching OSDMaps
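For anyone following along, these are the commands being referred to (the mon
name is just a placeholder):

# summary of the feature/release level reported by each group of daemons and clients
ceph features
# per-session detail on one monitor, including client addresses and their features
ceph daemon mon.<id> sessions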
Tried 4.17 with the same problem.
Just downgraded to 4.8. Let's see if 0x67... no longer appears.
On 18/09/18 16:28, Alfredo Daniel Rezinovsky wrote:
I started seeing this after the upgrade to Bionic. I had Xenial with LTS
kernels (4.13) without problems.
I will try to change to the Ubuntu 4.13 kernel and wait fo
I have been trying to do this on a Sierra VM, with Xcode 9.2 installed.
I had to modify this ceph-fuse.rb and copy it to the folder
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/ (it was
not there, is that correct?)
But now I get the error:
make: *** No rule to make target `rados'.
I thought maybe that the cleanup process hadn't occurred yet, but I've been in
this state for over a week now.
I’m just about to go live with this system (in the next couple of weeks), so
I'm trying to start out as clean as possible.
If anyone has any insights I'd appreciate it.
There shoul
Hi, thanks for your help.
> Snapshots are exported remotely, thus they are really backups
> One or more snapshots are kept on the live cluster, for faster recovery: if a
> user broke his disk, you can restore it really fast
-> Backups can be inspected on the backup cluster
For "Snapshots are e
Hi,
Thanks for your help.
> For the moment, we use Benji to backup to a classic RAID 6.
Will the RAID 6 be mirrored to other storage at a remote site for DR purposes?
> For RBD mirroring, you do indeed need another running Ceph Cluster, but we
> plan to use that in the long run (on separate hard
> On 08/30/2018 11:00 AM, Joao Eduardo Luis wrote:
> > On 08/30/2018 09:28 AM, Dan van der Ster wrote:
> > Hi,
> > Is anyone else seeing rocksdb mon stores slowly growing to >15GB,
> > eventually triggering the 'mon is using a lot of disk space' warning?
> > Since upgrading to luminous, we've seen
Done. :)
On Tue, Sep 18, 2018 at 12:15 PM Alfredo Daniel Rezinovsky <
alfredo.rezinov...@ingenieria.uncuyo.edu.ar> wrote:
> Can anyone add me to this slack?
>
> with my email alfrenov...@gmail.com
>
> Thanks.
>
> --
> Alfredo Daniel Rezinovsky
> Director de Tecnologías de Información y Comunicaci
On 09/19/2018 06:26 PM, ST Wong (ITSC) wrote:
> Hi, thanks for your help.
>
>> Snapshots are exported remotely, thus they are really backups
>> One or more snapshots are kept on the live cluster, for faster recovery: if
>> a user broke his disk, you can restore it really fast
> -> Backups can b
The majority of the clients are luminous, with a few kraken stragglers. I
looked at ceph features and 'ceph daemon mon.sephmon1 sessions'. Nothing
is reporting mimic features; all mon, mgr, and osd daemons are running
13.2.1 but are reporting luminous features, and the majority of the luminous
clients are r
Hi,
On 19.09.18 at 18:32, ST Wong (ITSC) wrote:
> Thanks for your help.
You're welcome!
I should also add we don't have very long-term experience with this yet - Benji
is pretty modern.
>> For the moment, we use Benji to backup to a classic RAID 6.
> Will the RAID 6 be mirrored to another st
Looks like you are running on CentOS, fwiw. We’ve successfully run the
conversion commands on Jewel, Ubuntu 16.04.
I have a feeling it's expecting compression to be enabled; can you try removing
“compression=kNoCompression” from the filestore_rocksdb_options? And/or you
might want to check if ro
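For illustration only, the suggested change to ceph.conf might look like this
(the other RocksDB options shown here are placeholders, not recommended values):

[osd]
# before (hypothetical): compression explicitly disabled
# filestore_rocksdb_options = max_background_compactions=4,compression=kNoCompression
# after: drop the compression=kNoCompression token, then restart the OSD
filestore_rocksdb_options = max_background_compactions=4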
Hi there,
With the default cluster name "ceph" I can map via rbd-nbd without any issue.
But for a different cluster name, I'm not able to map an image using rbd-nbd;
I get:
root@vtier-P-node1:/etc/ceph# rbd-nbd --cluster cephdr map test-pool/testvol
rbd-nbd: unknown command: --cluster
I looked at the
Thanks for reporting this -- it looks like we broke the part of the argument
parsing that extracts command-line config overrides. I've
opened a tracker ticket against the issue [1].
On Wed, Sep 19, 2018 at 2:49 PM Vikas Rana wrote:
>
> Hi there,
>
> With default cluster name "ceph" I can map rbd
Hi Zheng,
It looks like the memory growth happens even with the simple messenger:
[root@worker1032 ~]# ceph daemon /var/run/ceph/ceph-client.admin.asok
config get ms_type
{
"ms_type": "simple"
}
[root@worker1032 ~]# ps -auxwww | grep ceph-fuse
root 179133 82.2 13.5 77281896 71644120 ?
Thanks Gregory for the explanation.
>
> files open (which mostly only applies to FileStore and there's a
> config, defaults to 1024 I think).
>
After searching the ceph docs, I do not think there is such an option [0].
I found a similar `filestore flusher max fds=512` option, but it is
already depreca
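As a rough way to see how close a running OSD actually gets to its descriptor
limit (paths and the single-OSD assumption are just for illustration):

# the limit the ceph-osd process was started with
grep 'Max open files' /proc/$(pidof -s ceph-osd)/limits
# how many descriptors it currently holds
ls /proc/$(pidof -s ceph-osd)/fd | wc -l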
Thank you Shalygin for sharing.
I now know the reason: in L, fastcgi is disabled by default. I have
re-enabled fastcgi and it works well now.
By the way, I use keepalive+lvs for load balancing and HA.
Thanks again!
At 2018-09-18 18:36:46, "Konstantin Shalygin" wrote:
>>
On 09/20/2018 10:09 AM, linghucongsong wrote:
By the way, I use keepalive+lvs for load balancing and HA.
This is good. But in that case I wonder why fastcgi+nginx instead of
civetweb or beast?
k
I am setting up RadosGW and a Ceph cluster on Luminous. I am using EC
for the `buckets.data` pool on HDD OSDs; is it okay to put the
`buckets.non-ec` pool with a replicated ruleset for multipart uploads
on the same HDD OSDs? Will there be issues with mixing EC and
replicated pools on the same disk types?
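Not from the thread, just a sketch of how both pools can be pinned to the same
HDD device class on Luminous (profile, rule and pool names plus PG counts are
made up for illustration):

# EC profile restricted to HDDs, and the data pool that uses it
ceph osd erasure-code-profile set ec42-hdd k=4 m=2 crush-device-class=hdd
ceph osd pool create default.rgw.buckets.data 128 128 erasure ec42-hdd
# replicated CRUSH rule on the same HDD class, and the non-ec pool that uses it
ceph osd crush rule create-replicated replicated-hdd default host hdd
ceph osd pool create default.rgw.buckets.non-ec 32 32 replicated replicated-hdd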
1. It is for performance: nginx is faster than civetweb, based on the
cosbench test.
2. I want to use some extra functions of nginx, such as RTMP streaming and
adding watermarks to pictures, and so on. nginx has more free, open-source
and powerful plug-in modules.
At 2018-
Hi,
if you want to isolate your HVs from Ceph's public network, a gateway would do
that (like the iSCSI gateway). Note however that this will also add an extra network
hop and a potential bottleneck since all client traffic has to pass through the
gateway node(s).
HTH,
Jan
On Wed, Sep 19, 2018 at