is incurred
> which can take somewhat longer.
>
>
> On 10.10.2018, at 13:57, Daniel Carrasco wrote:
>
> Thanks for your response.
>
> I'll point in that direction.
> I also need fast recovery in case the MDS dies, so are standby MDS
> recommended, or is recovery fa
10:49, Daniel Carrasco wrote:
>
>
>- Which is the best configuration to avoid those MDS problems?
>
> Single active MDS with lots of RAM.
>
>
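A minimal sketch of what that advice translates to in practice, assuming a
Luminous cluster and an example filesystem named "cephfs" (the names and the
8 GB figure are only illustrative):

  # keep a single active MDS rank; any extra MDS daemons become standbys
  ceph fs set cephfs max_mds 1

  # in ceph.conf on the MDS hosts, raise the cache limit (value is in bytes)
  [mds]
  mds_cache_memory_limit = 8589934592

Note that the MDS resident memory is usually noticeably higher than
mds_cache_memory_limit, so size the host RAM with some headroom.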
s that uses
about 4GB.
Thanks!
--
_________
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
On Mon., Oct 8, 2018, 5:44, Yan, Zheng wrote:
> On Mon, Oct 8, 2018 at 11:34 AM Daniel Carrasco
> wrote:
> >
> > I've got several problems on 12.2.8 too. All my standby MDS use a lot
> of memory (while the active one uses a normal amount), and I'm receiving a lot of
.1, then run 'ceph mds repaired fs_name:damaged_rank' .
> >
> > Sorry for all the trouble I caused.
> > Yan, Zheng
> >
>
>
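A hedged example of that repair step, assuming a filesystem named "cephfs"
whose rank 0 was marked damaged (substitute your own fs name and rank):

  # confirm which rank is reported as damaged
  ceph health detail
  ceph fs status

  # clear the damaged flag so a standby MDS can take the rank over again
  ceph mds repaired cephfs:0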
d to be sure.
Has anyone tried it?
Thanks!!
daemon overhead and the memory fragmentation. At least it is not 13-15 GB like
before.
Greetings!!
2018-07-25 23:16 GMT+02:00 Daniel Carrasco :
> I've changed the configuration, adding your line and changing the MDS
> memory limit to 512 MB, and for now it looks stable (it's at about 3-6% and
now looks acceptable:
1264 ceph 20 0 12,543g 737952 16188 S 1,0 4,6% 0:41.05 ceph-mds
Anyway, I need more time to test it, because 15 minutes is not enough.
Greetings!!
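The kind of change being described (a 512 MB MDS cache limit) would look
roughly like this; the value and daemon id are only examples:

  [mds]
  mds_cache_memory_limit = 536870912    # 512 MB, in bytes

  # or injected at runtime through the admin socket on the MDS host
  ceph daemon mds.<id> config set mds_cache_memory_limit 536870912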
2018-07-25 17:16 GMT+02:00 Daniel Carrasco :
> Hello,
>
> Thanks for all your help.
>
> The dd is an option of
t;, "error", "failed",
"message" or something similar, so looks like there are no messages of that
kind.
Greetings!!
2018-07-25 14:48 GMT+02:00 Yan, Zheng :
> On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng wrote:
> >
> > On Wed, Jul 25, 2018 at 5:04 PM Daniel Ca
---
Greetings!!
2018-07-24 12:07 GMT+02:00 Yan, Zheng :
> On Tue, Jul 24, 2018 at 4:59 PM Daniel Carrasco
> wrote:
> >
> > Hello,
> >
> > How much time is necessary? Because it is a production environment and
> memory profiler + low cache size because
r/log/ceph/ceph-mds.x.profile..heap
>
>
>
>
> On Tue, Jul 24, 2018 at 3:18 PM Daniel Carrasco
> wrote:
> >
> > This is what I get:
> >
> >
> >
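For context, the usual tcmalloc heap-profiling workflow on an MDS looks
roughly like this; the daemon id "x" and the dump path follow the pattern
mentioned above and are only examples:

  ceph tell mds.x heap start_profiler
  # ...wait while the memory usage grows...
  ceph tell mds.x heap dump
  ceph tell mds.x heap stop_profiler

  # analyse the dumps with google-perftools' pprof on the MDS host
  pprof --text /usr/bin/ceph-mds /var/log/ceph/ceph-mds.x.profile.*.heap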
> > Greetings!
> >
> > On Tue., Jul 24, 2018, 1:00, Gregory Farnum
> wrote:
> >>
> >> On Mon, Jul 23, 2018 at 11:08 AM Patrick Donnelly
> wrote:
> >>>
> >>> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco
> wrote:
> >>>
at 11:08 AM Patrick Donnelly
> wrote:
>
>> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco
>> wrote:
>> > Hi, thanks for your response.
>> >
>> > There are about 6 clients, and 4 of them are on standby most of the time.
>> Only two
>> > are
y is using
less than 1 GB of RAM just now. Of course I haven't rebooted the machine, but
maybe, if the daemon was killed for high memory usage, the new
configuration has been loaded now.
Greetings!
2018-07-23 21:07 GMT+02:00 Daniel Carrasco :
> Thanks!,
>
> It's true that I'
2018-07-19 11:35 GMT+02:00 Daniel Carrasco :
> Hello again,
>
> It is still early to say
        "wait": {
            "avgcount": 0,
            "sum": 0.0,
            "avgtime": 0.0
        }
    },
    "throttle-objecter_bytes": {
        "val": 0,
        "max": 104857600,
        "get_started": 0,
        "get": 0,
        "get_sum": 0
ely small cache size on the MDS?
>
>
> Paul
>
> 2018-07-23 13:16 GMT+02:00 Daniel Carrasco :
>
>> Hello,
>>
>> I've created a Ceph cluster of 3 nodes (3 mons, 3 osd, 3 mgr and 3 mds
>> with two active). This cluster is mainly for serving a webpage (
fuse module but I have the above problem.
My OS is Ubuntu 16.04 x64 with kernel version 4.13.0-45-generic, and the ceph
server/client version is 12.2.7.
How can I debug that CPU usage?
Thanks!
ALLOC: 8192 Tcmalloc page size
Call ReleaseFreeMemory() to release freelist memory to the OS (via
madvise()).
Bytes released to the OS take up virtual address space but no physical
memory.
Greetings!!
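The ReleaseFreeMemory() call mentioned in that output can be triggered from
the ceph CLI; a hedged example, with "x" standing in for the MDS id:

  ceph tell mds.x heap release

This only returns tcmalloc's freelist pages to the OS, so it lowers RSS but
does not change what the MDS cache itself is holding.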
2018-07-19 10:24 GMT+02:00 Daniel Car
07-19 1:07 GMT+02:00 Daniel Carrasco :
> Thanks again,
>
> I was trying to use the fuse client instead of the Ubuntu 16.04 kernel module to see
> if maybe it is a client-side problem, but CPU usage with the fuse client is very
> high (100% and even more on a two-core machine), so I had to revert to
ed.
>
> On Wed, Jul 18, 2018 at 3:48 PM Daniel Carrasco
> wrote:
>
>> Hello, thanks for your response.
>>
>> This is what I get:
>>
>> # ceph tell mds.kavehome-mgto-pro-fs01 heap stats
>> 2018-07-19 00:43:46.142560 7f5a7a7fc700 0 client.1318388
have one of the slightly-broken
> base systems and find that running the "heap release" (or similar
> wording) command will free up a lot of RAM back to the OS!
> -Greg
>
> On Wed, Jul 18, 2018 at 1:53 PM, Daniel Carrasco
> wrote:
> > Hello,
> >
> >
2.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous
(stable).
Thanks!!
is a commit to the
> repository from elsewhere. This would be on local storage and remove a lot
> of complexity. All front-end servers would update automatically via git.
>
> If something like that doesn't work, it would seem you have a workaround
> that works for you.
>
HA website with an LB in front of them.
>
> I'm biased here a bit, but I don't like to use networked filesystems
> unless nothing else can be worked out or the software using it is 3rd party
> and just doesn't support anything else.
>
> On Thu, Mar 1, 2018 at 9:05 AM
ts I can do...
Greetings!!
2018-02-28 17:11 GMT+01:00 Daniel Carrasco :
> Hello,
>
> I've created a Ceph cluster with 3 nodes and a FS to serve a webpage. The
> webpage speed is good enough (close to NFS speed), and it has HA if one FS dies.
> My problem comes when I deploy a git
just deploy the git repository and
everything starts to work very slowly.
Thanks!!
bled the quota check because there
is no quota on this cluster.
This will lower the requests to the MDS and the CPU usage, right?
Greetings!!
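If I remember right, the pre-Mimic option being referred to is the ceph-fuse
client setting below (removed in later releases, where quota enforcement is
always on), so treat this as an assumption:

  [client]
  client_quota = false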
2018-02-22 19:34 GMT+01:00 Patrick Donnelly :
> On Wed, Feb 21, 2018 at 11:17 PM, Daniel Carrasco
> wrote:
> > I want to search also if there is any way to
ached for a while.
Greetings!!
On Feb 22, 2018, 3:59, "Patrick Donnelly" wrote:
> Hello Daniel,
>
> On Wed, Feb 21, 2018 at 10:26 AM, Daniel Carrasco
> wrote:
> > Is it possible to better distribute the MDS load across both nodes?
>
> We are aware of
2018-02-21 19:26 GMT+01:00 Daniel Carrasco :
> Hello,
>
> I've created a Ceph cluster with 3 nodes to serve files to a high-traffic
> webpage. I've configured two MDS as active and one as standby, but after
> adding the new system to production I've noticed that
own for example, or if there are
any other side effects.
My last question is whether someone can recommend a good client configuration,
like cache size, and maybe something to lower the metadata servers' load.
Thanks!!
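For reference, the ceph-fuse client cache knobs that usually come up in this
context are roughly the following; the values shown are just the defaults as
far as I know, not a recommendation:

  [client]
  client_cache_size = 16384        # inodes/dentries kept in the client metadata cache
  client_oc_size = 209715200       # object (data) cache size in bytes, ~200 MB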
Finally I've disabled the mon_osd_report_timeout option and it seems to work
fine.
Greetings!
2017-10-17 19:02 GMT+02:00 Daniel Carrasco :
> Thanks!!
>
> I'll take a look later.
>
> Anyway, all my Ceph daemons are in same version on all nodes (I've
> upgrad
ph.com/msg39886.html
-----Original Message-----
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: Tuesday, 17 October 2017 17:49
To: ceph-us...@ceph.com
Subject: [ceph-users] OSD are marked as down after jewel -> luminous
upgrade
Hello,
Today I've decided to upgrade my Ceph cluster to
--
For now I've added the nodown flag to keep all OSDs online, and everything is
working fine, but this is not the best way to do it.
Does someone know how to fix this problem? Maybe this release needs new ports
opened on the firewall?
Th
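For completeness, the flag being referred to is set and cleared like this:

  ceph osd set nodown      # prevent OSDs from being marked down
  ceph osd unset nodown    # remove the flag once the real issue is fixed

  # the currently set flags show up in
  ceph osd dump | grep flags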
RGW,
multi-site, multi-datacenter crush maps, etc?
On Fri, Jun 30, 2017 at 2:28 PM Daniel Carrasco
wrote:
> Hello,
>
> My question is about the stream security of connections between ceph services.
> I've read that the connection is verified by private keys and signed packets,
> but
Hello,
My question is about the stream security of connections between ceph services.
I've read that the connection is verified by private keys and signed packets,
but my question is whether those packets are encrypted in any way to avoid packet
sniffers, because I want to know if it can be used over the internet wi
!!
2017-06-15 19:04 GMT+02:00 Daniel Carrasco :
> Hello, thanks for the info.
>
> I'll give it a try tomorrow. On one of my tests I got the messages you mention
> (wrongfully marked), but I've lowered other options and now it's fine. For
> now the OSDs are not reporting down m
ant
> to monitor your cluster for OSDs being marked down for a few seconds before
> marking themselves back up. You can see this in the OSD logs where the OSD
> says it was wrongfully marked down in one line and then the next is where
> it tells the mons it is actually up.
>
> On Th
I forgot to say that after upgrading the machine's RAM to 4 GB, the OSD daemons
have started to use only about 5% (about 200 MB). It's like magic, and now I have
about 3.2 GB of free RAM.
Greetings!!
2017-06-15 15:08 GMT+02:00 Daniel Carrasco :
> Finally, the problem was W3Total Cache, that seems to b
caused more peering and backfilling, ... which caused more OSDs to be
> killed by OOM killer.
>
> On Wed, Jun 14, 2017 at 5:01 PM Daniel Carrasco
> wrote:
>
>> It's strange because on my test cluster (three nodes) with two nodes with
>> OSD, and all with MON and MDS, I'
g
these restarts. What is your full ceph configuration? There must be
something not quite right in there.
On Wed, Jun 14, 2017 at 4:26 PM Daniel Carrasco
wrote:
>
>
> On Jun 14, 2017, 10:08 PM, "David Turner"
> wrote:
>
> Not just the min_size of your cephfs data
'd
> still say that 2GB is low. The Ceph OSD daemon using 1GB of RAM is not
> surprising, even at that size.
>
> When you say you increased the size of the pools to 3, what did you do to
> the min_size? Is that still set to 2?
>
> On Wed, Jun 14, 2017 at 3:17 PM Daniel Car
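For reference, checking and changing those values looks like this; the pool
name "cephfs_data" is only an example:

  ceph osd pool get cephfs_data size
  ceph osd pool get cephfs_data min_size
  ceph osd pool set cephfs_data size 3
  ceph osd pool set cephfs_data min_size 2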
Case / NFS / stale file handles up
> the wazoo background
>
>
>
> On Mon, Jun 12, 2017 at 10:41 AM, Daniel Carrasco
> wrote:
>
>> 2017-06-12 16:10 GMT+02:00 David Turner :
>>
>>> I have an incredibly light-weight cephfs configuration. I set up an MDS
>
created the
other MDS after mount, because I've done some tests just before sending this
email and now it looks very fast (I've not noticed the downtime).
Greetings!!
2017-06-12 10:49 GMT+02:00 Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de>:
> Hi,
>
>
> On 06/12/2017 10:31 AM, Daniel Carrasco wrote:
>
>> Hello,
>>
>> I'm very new to Ceph, so maybe this is a noob question.
>>
>
Is a multi-MDS environment stable? Because if I have multiple FS to avoid a
SPOF and I can only deploy one MDS, then we have a new SPOF...
This is to know whether I maybe need to use block device pools instead of file
server pools.
Thanks!!! and greetings!!