Re: [ceph-users] Best version and OS for CephFS

2018-10-10 Thread Daniel Carrasco
is incurred > which can take somewhat longer. > > > On 10.10.2018, at 13:57, Daniel Carrasco wrote: > > Thanks for your response. > > I'll point in that direction. > I also need fast recovery in case the MDS dies, so are standby MDS > recommended, or is recovery fa
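
For reference, a minimal sketch of how a standby can be made to follow the active MDS journal for faster takeover. The fs name "cephfs" and daemon name "b" are assumptions; the per-daemon options apply to Luminous/Mimic-era releases, the per-filesystem flag to later ones:

    # later releases: per-filesystem flag
    ceph fs set cephfs allow_standby_replay true
    # Luminous/Mimic: per-daemon settings in ceph.conf
    [mds.b]
    mds standby replay = true
    mds standby for rank = 0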

Re: [ceph-users] Best version and OS for CephFS

2018-10-10 Thread Daniel Carrasco
10:49, Daniel Carrasco wrote: > > >- Which is the best configuration to avoid those MDS problems? > > Single active MDS with lots of RAM. > > -- Daniel Carrasco Marín Ingeniería para la Innovación i2TIC, S.L.
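
A hedged sketch of that advice (single active rank, large MDS cache); "cephfs" and the 16 GB value are assumptions to adapt:

    ceph fs set cephfs max_mds 1
    # ceph.conf, [mds] section:
    mds cache memory limit = 17179869184   # 16 GB, in bytes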

[ceph-users] Best version and OS for CephFS

2018-10-10 Thread Daniel Carrasco
s that uses about 4 GB. Thanks! -- Daniel Carrasco Marín Ingeniería para la Innovación i2TIC, S.L. Tlf: +34 911 12 32 84 Ext: 223 www.i2tic.com

Re: [ceph-users] Don't upgrade to 13.2.2 if you use cephfs

2018-10-08 Thread Daniel Carrasco
On Mon, Oct 8, 2018 at 5:44, Yan, Zheng wrote: > On Mon, Oct 8, 2018 at 11:34 AM Daniel Carrasco > wrote: > > > > I've got several problems on 12.2.8 too. All my standby MDS use a lot > of memory (while the active uses normal memory), and I'm receiving a lot of

Re: [ceph-users] Don't upgrade to 13.2.2 if you use cephfs

2018-10-07 Thread Daniel Carrasco
.1, then run 'ceph mds repaired fs_name:damaged_rank'. > > Sorry for all the trouble I caused. > > Yan, Zheng
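
The recovery sequence Zheng describes, as a hedged sketch (assuming a filesystem named "cephfs" whose rank 0 was marked damaged after the 13.2.2 upgrade):

    # after downgrading the MDS back to 13.2.1:
    ceph mds repaired cephfs:0
    ceph -s    # confirm the rank comes back up:active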

[ceph-users] Connect client to cluster on other subnet

2018-08-23 Thread Daniel Carrasco
d to be sure. Has someone tried it? Thanks!! -- Daniel Carrasco Marín Ingeniería para la Innovación i2TIC, S.L. Tlf: +34 911 12 32 84 Ext: 223 www.i2tic.com

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-26 Thread Daniel Carrasco
daemon overhead and the memory fragmentation. At least it is not 13-15 GB like before. Greetings!! 2018-07-25 23:16 GMT+02:00 Daniel Carrasco : > I've changed the configuration adding your line and changing the MDS > memory limit to 512 MB, and for now it looks stable (it's at about 3-6% and
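
That 512 MB cache limit as a ceph.conf sketch (the option exists from Luminous on; the value is the one quoted above):

    [mds]
    mds cache memory limit = 536870912   # bytes = 512 MB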

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Daniel Carrasco
now looks acceptable: 1264 ceph 20 0 12,543g 737952 16188 S 1,0 4,6% 0:41.05 ceph-mds Anyway, I need time to test it, because 15 minutes is too little. Greetings!! 2018-07-25 17:16 GMT+02:00 Daniel Carrasco : > Hello, > > Thanks for all your help. > > The dd is an option of

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Daniel Carrasco
"error", "failed", "message" or something similar, so it looks like there are no messages of that kind. Greetings!! 2018-07-25 14:48 GMT+02:00 Yan, Zheng : > On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng wrote: > > > > On Wed, Jul 25, 2018 at 5:04 PM Daniel Ca

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-24 Thread Daniel Carrasco
Greetings!! 2018-07-24 12:07 GMT+02:00 Yan, Zheng : > On Tue, Jul 24, 2018 at 4:59 PM Daniel Carrasco > wrote: > > > > Hello, > > > > How much time is necessary?, because it is a production environment and > memory profiler + low cache size because

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-24 Thread Daniel Carrasco
r/log/ceph/ceph-mds.x.profile..heap > > > On Tue, Jul 24, 2018 at 3:18 PM Daniel Carrasco > wrote: > > > > This is what I get:
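
Those .heap files come from the tcmalloc heap profiler built into the daemons. A hedged sketch of the usual workflow (MDS id "x" as in the thread; the pprof binary may be named google-pprof on some distros):

    ceph tell mds.x heap start_profiler   # dumps ceph-mds.x.profile.NNNN.heap
    # ... reproduce the memory growth ...
    ceph tell mds.x heap dump
    ceph tell mds.x heap stop_profiler
    pprof --text /usr/bin/ceph-mds /var/log/ceph/ceph-mds.x.profile.*.heap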

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-24 Thread Daniel Carrasco
Greetings! > > On Tue, Jul 24, 2018 at 1:00, Gregory Farnum wrote: > >> On Mon, Jul 23, 2018 at 11:08 AM Patrick Donnelly > wrote: > >>> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco > wrote:

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-23 Thread Daniel Carrasco
at 11:08 AM Patrick Donnelly > wrote: > >> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco >> wrote: >> > Hi, thanks for your response. >> > >> > There are about 6 clients, and 4 of them are on standby most of the time. >> Only two >> > are

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-23 Thread Daniel Carrasco
y is using less than 1 GB of RAM just now. Of course I've not rebooted the machine, but maybe if the daemon was killed for high memory usage then the new configuration is loaded now. Greetings! 2018-07-23 21:07 GMT+02:00 Daniel Carrasco : > Thanks!, > > It's true that I'

Re: [ceph-users] Fwd: MDS memory usage is very high

2018-07-23 Thread Daniel Carrasco
y is using less than 1 GB of RAM just now. Of course I've not rebooted the machine, but maybe if the daemon was killed for high memory usage then the new configuration is loaded now. Greetings! 2018-07-19 11:35 GMT+02:00 Daniel Carrasco : > Hello again, > > It is still early to say

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-23 Thread Daniel Carrasco
, "wait": { "avgcount": 0, "sum": 0.0, "avgtime": 0.0 } }, "throttle-objecter_bytes": { "val": 0, "max": 104857600, "get_started": 0, "get": 0, "get_sum": 0
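
Counters like "throttle-objecter_bytes" above normally come from a daemon's admin socket. A hedged sketch of how to pull them (daemon id "x" assumed):

    ceph daemon mds.x perf dump                        # all counters, as JSON
    ceph daemon mds.x perf dump | python -m json.tool  # pretty-printed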

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-23 Thread Daniel Carrasco
ely small cache size on the MDS? > > > Paul > > 2018-07-23 13:16 GMT+02:00 Daniel Carrasco : > >> Hello, >> >> I've created a Ceph cluster of 3 nodes (3 mons, 3 osd, 3 mgr and 3 mds >> with two active). This cluster is mainly for serving a webpage (
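
If a too-small MDS cache is the suspicion, the limit can be changed at runtime without a restart. A hedged sketch for Luminous-era clusters (the MDS id "x" and the 1 GB value are examples):

    ceph tell mds.x injectargs '--mds_cache_memory_limit=1073741824'
    ceph daemon mds.x config get mds_cache_memory_limit   # verify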

[ceph-users] Insane CPU utilization in ceph.fuse

2018-07-23 Thread Daniel Carrasco
fuse module but I have the above problem. My OS is Ubuntu 16.04 x64 with kernel version 4.13.0-45-generic and the ceph server/client version is 12.2.7. How can I debug that CPU usage? Thanks! -- Daniel Carrasco Marín Ingeniería para

Re: [ceph-users] Fwd: MDS memory usage is very high

2018-07-19 Thread Daniel Carrasco
ALLOC: 8192 Tcmalloc page size Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()). Bytes released to the OS take up virtual address space but no physical memory. Greetings!! 2018-07-19 10:24 GMT+02:00 Daniel Car

Re: [ceph-users] Fwd: MDS memory usage is very high

2018-07-19 Thread Daniel Carrasco
07-19 1:07 GMT+02:00 Daniel Carrasco : > Thanks again, > > I was trying to use the fuse client instead of the Ubuntu 16.04 kernel module to see > if maybe it is a client-side problem, but CPU usage on the fuse client is very > high (100% and even more on a two-core machine), so I had to revert to

Re: [ceph-users] Fwd: MDS memory usage is very high

2018-07-18 Thread Daniel Carrasco
ed. > > On Wed, Jul 18, 2018 at 3:48 PM Daniel Carrasco > wrote: > >> Hello, thanks for your response. >> >> This is what I get: >> >> # ceph tell mds.kavehome-mgto-pro-fs01 heap stats >> 2018-07-19 00:43:46.142560 7f5a7a7fc700 0 client.1318388

Re: [ceph-users] Fwd: MDS memory usage is very high

2018-07-18 Thread Daniel Carrasco
have one of the slightly-broken > base systems and find that running the "heap release" (or similar > wording) command will free up a lot of RAM back to the OS! > -Greg > > On Wed, Jul 18, 2018 at 1:53 PM, Daniel Carrasco > wrote: > > Hello, > > > >
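
The command Greg refers to, as a hedged sketch (MDS id "x" assumed); it asks tcmalloc to hand its freelist pages back to the OS:

    ceph tell mds.x heap stats     # tcmalloc's view of the daemon's memory
    ceph tell mds.x heap release   # return unused freelist memory to the OS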

[ceph-users] Fwd: MDS memory usage is very high

2018-07-18 Thread Daniel Carrasco
2.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable). Thanks!! -- Daniel Carrasco Marín Ingeniería para la Innovación i2TIC, S.L. Tlf: +34 911 12 32 84 Ext: 223 www.i2tic.com

Re: [ceph-users] Slow clients after git pull

2018-03-01 Thread Daniel Carrasco
is a commit to the > repository from elsewhere. This would be on local storage and remove a lot > of complexity. All front-end servers would update automatically via git. > > If something like that doesn't work, it would seem you have a workaround > that works for you. >
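
A hedged sketch of that git-based deployment: a bare repo on each front-end with a post-receive hook that checks the site out to local storage (the paths and branch name are assumptions):

    #!/bin/sh
    # hooks/post-receive in each front-end's bare repository:
    # check the pushed branch out into the local web root
    GIT_WORK_TREE=/var/www/site git checkout -f master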

Re: [ceph-users] Slow clients after git pull

2018-03-01 Thread Daniel Carrasco
HA website with an LB in front of them. > > I'm biased here a bit, but I don't like to use networked filesystems > unless nothing else can be worked out or the software using it is 3rd party > and just doesn't support anything else. > > On Thu, Mar 1, 2018 at 9:05 AM

Re: [ceph-users] Slow clients after git pull

2018-03-01 Thread Daniel Carrasco
ts I can do... Greetings!! 2018-02-28 17:11 GMT+01:00 Daniel Carrasco : > Hello, > > I've created a Ceph cluster with 3 nodes and a FS to serve a webpage. The > webpage speed is good enough (near to NFS speed), and it has HA if one FS dies. > My problem comes when I deploy a git

[ceph-users] Slow clients after git pull

2018-02-28 Thread Daniel Carrasco
just deploy the git repository and everything starts to work very slowly. Thanks!! -- Daniel Carrasco Marín Ingeniería para la Innovación i2TIC, S.L. Tlf: +34 911 12 32 84 Ext: 223

Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-23 Thread Daniel Carrasco
bled the quota check because there is no quota on this cluster. This will lower the requests to the MDS and the CPU usage, right? Greetings!! 2018-02-22 19:34 GMT+01:00 Patrick Donnelly : > On Wed, Feb 21, 2018 at 11:17 PM, Daniel Carrasco > wrote: > > I also want to find out if there is any way to
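
The quota check being disabled here was, in Luminous-era ceph-fuse, a client-side option (removed in later releases, where quota handling is always on). A hedged sketch, assuming no quotas are in use:

    # ceph.conf on the clients
    [client]
    client quota = false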

Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Daniel Carrasco
ached for a while. Greetings!! On Feb 22, 2018 at 3:59, "Patrick Donnelly" wrote: > Hello Daniel, > > On Wed, Feb 21, 2018 at 10:26 AM, Daniel Carrasco > wrote: > > Is it possible to better distribute the MDS load between both nodes? > > We are aware of

Re: [ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Daniel Carrasco
2018-02-21 19:26 GMT+01:00 Daniel Carrasco : > Hello, > > I've created a Ceph cluster with 3 nodes to serve files to a high-traffic > webpage. I've configured two MDS as active and one as standby, but after > adding the new system to production I've noticed that

[ceph-users] Balanced MDS, all as active and recommended client settings.

2018-02-21 Thread Daniel Carrasco
own for example, or if there are other side effects. My last question is if someone can recommend a good client configuration, like cache size, and maybe something to lower the metadata server load. Thanks!! -- Daniel Carrasco Marín Ing
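
A hedged starting point for ceph-fuse client tuning of that kind; all values are examples, not recommendations from the thread:

    [client]
    client cache size = 16384              # inodes in the client metadata cache
    client oc size = 209715200             # 200 MB object (data) cache
    client readahead max bytes = 4194304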

Re: [ceph-users] OSDs are marked as down after jewel -> luminous upgrade

2017-10-18 Thread Daniel Carrasco
Finally I've disabled the mon_osd_report_timeout option and it seems to work fine. Greetings! 2017-10-17 19:02 GMT+02:00 Daniel Carrasco : > Thanks!! > > I'll take a look later. > > Anyway, all my Ceph daemons are on the same version on all nodes (I've > upgrad
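
"Disabled" here presumably means dropping the custom (too low) value so the default applies again. A hedged ceph.conf sketch; 900 seconds is the shipped default:

    [mon]
    # how long mons wait for OSD beacons before marking OSDs down;
    # a too-low value flaps OSDs that are actually healthy
    mon osd report timeout = 900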

Re: [ceph-users] OSDs are marked as down after jewel -> luminous upgrade

2017-10-17 Thread Daniel Carrasco
ph.com/msg39886.html -Original Message- From: Daniel Carrasco [mailto:d.carra...@i2tic.com] Sent: Tuesday, October 17, 2017 17:49 To: ceph-us...@ceph.com Subject: [ceph-users] OSDs are marked as down after jewel -> luminous upgrade Hello, Today I've decided to upgrade my Ceph cluster to

[ceph-users] OSDs are marked as down after jewel -> luminous upgrade

2017-10-17 Thread Daniel Carrasco
-- For now I've added the nodown flag to keep all OSDs online, and all is working fine, but this is not the best way to do it. Does anyone know how to fix this problem? Maybe this release needs to open new ports on the firewall? Th
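
The workaround mentioned above, as a sketch (and note that mons need port 6789 and the other daemons the default 6800-7300 range open between nodes):

    ceph osd set nodown     # stop mons from marking OSDs down
    # ... fix the root cause, then clear the flag:
    ceph osd unset nodown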

Re: [ceph-users] Connections between services secure?

2017-06-30 Thread Daniel Carrasco
RGW, multi-site, multi-datacenter crush maps, etc? On Fri, Jun 30, 2017 at 2:28 PM Daniel Carrasco wrote: > Hello, > > My question is about the stream security of connections between ceph services. > I've read that the connection is verified by private keys and signed packets, > but

[ceph-users] Connections between services secure?

2017-06-30 Thread Daniel Carrasco
Hello, My question is about the stream security of connections between ceph services. I've read that the connection is verified by private keys and signed packets, but my question is whether those packets are encrypted in any way to avoid packet sniffers, because I want to know if it can be used through the internet wi
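
At the time of this thread, cephx authenticates peers and can sign messages but does not encrypt payloads, so a VPN/tunnel was the usual answer. For reference only, much later releases (the msgr2 protocol in Nautilus) added on-wire encryption; a hedged sketch that does not apply to 2017-era clusters:

    # ceph.conf, Nautilus 14.x or newer
    [global]
    ms_cluster_mode = secure
    ms_service_mode = secure
    ms_client_mode = secure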

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-27 Thread Daniel Carrasco
!! 2017-06-15 19:04 GMT+02:00 Daniel Carrasco : > Hello, thanks for the info. > > I'll give it a try tomorrow. On one of my tests I got the messages that you say > (wrongfully marked), but I've lowered other options and now it's fine. For > now the OSDs are not reporting down m

Re: [ceph-users] What is "up:standby"? in ceph mds stat => e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby

2017-06-16 Thread Daniel Carrasco
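
Reading the status in the subject line: "1/1/1 up" means 1 rank up, 1 in, and a max_mds of 1; daemons shown as up:standby hold no state and are promoted automatically if the active MDS fails. A sketch of the same check:

    $ ceph mds stat
    e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby
    # one active rank (held by ceph-test-3), two idle spares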

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-15 Thread Daniel Carrasco
ant > to monitor your cluster for OSDs being marked down for a few seconds before > marking themselves back up. You can see this in the OSD logs where the OSD > says it was wrongfully marked down in one line and then the next is where > it tells the mons it is actually up. > > On Th

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-15 Thread Daniel Carrasco
I forgot to say that after upgrading the machine RAM to 4 GB, the OSD daemons have started to use only 5% (about 200 MB). It's like magic, and now I have about 3.2 GB of free RAM. Greetings!! 2017-06-15 15:08 GMT+02:00 Daniel Carrasco : > Finally, the problem was W3 Total Cache, which seems to b

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-15 Thread Daniel Carrasco
caused more peering and backfilling, ... which caused more OSDs to be > killed by the OOM killer. > > On Wed, Jun 14, 2017 at 5:01 PM Daniel Carrasco > wrote: > >> It's strange because on my test cluster (three nodes) with two nodes with >> OSDs, and all with MON and MDS, I'

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-14 Thread Daniel Carrasco
g these restarts. What is your full ceph configuration? There must be something not quite right in there. On Wed, Jun 14, 2017 at 4:26 PM Daniel Carrasco wrote: > > > On Jun 14, 2017 at 10:08 PM, "David Turner" > wrote: > > Not just the min_size of your cephfs data

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-14 Thread Daniel Carrasco
I'd still say that 2 GB is low. The Ceph OSD daemon using 1 GB of RAM is not > surprising, even at that size. > > When you say you increased the size of the pools to 3, what did you do to > the min_size? Is that still set to 2? > > On Wed, Jun 14, 2017 at 3:17 PM Daniel Car
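
The size/min_size relationship being asked about, as a sketch (the pool name "cephfs_data" is an assumption):

    ceph osd pool set cephfs_data size 3       # keep 3 replicas
    ceph osd pool set cephfs_data min_size 2   # serve I/O with at least 2 up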

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-14 Thread Daniel Carrasco
Case / NFS / stale file handles up > the wazoo background > > On Mon, Jun 12, 2017 at 10:41 AM, Daniel Carrasco > wrote: > >> 2017-06-12 16:10 GMT+02:00 David Turner : >>> I have an incredibly light-weight cephfs configuration. I set up an MDS

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Daniel Carrasco
created the other MDS after mount, because I've done some tests just before sending this email and now it looks very fast (I've not noticed the downtime). Greetings!! -- Daniel Carrasco Marín Ingeniería para la Innovación i2TIC, S.L. Tlf: +34 911 12 32 84 Ext:

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Daniel Carrasco
2017-06-12 10:49 GMT+02:00 Burkhard Linke < burkhard.li...@computational.bio.uni-giessen.de>: > Hi, > > On 06/12/2017 10:31 AM, Daniel Carrasco wrote: > >> Hello, >> >> I'm very new to Ceph, so maybe this question is a noob question.

[ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Daniel Carrasco
Is a multi-MDS environment stable? Because if I have multiple FS nodes to avoid a SPOF but can only deploy one MDS, then we have a new SPOF... This is to know if maybe I need to use Block Device pools instead of File Server pools. Thanks!!! and greetings!!
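
Multiple active MDS daemons per filesystem were declared stable in Luminous, so the MDS need not remain a SPOF. A hedged sketch ("cephfs" is an assumed fs name):

    ceph fs set cephfs max_mds 2   # two active ranks (Luminous and later)
    # any extra running MDS daemons stay as standbys for failover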