Hi
Maybe the easiest way would be to just create files on the SSD and use those
as journals. I don't know if this creates too much overhead, but at least it
would be simple.
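For what it's worth, moving an existing filestore journal to a file on the SSD is a fairly small operation. A rough sketch for a single OSD (hedged: the OSD id and the /ssd/journals path are only examples, and the OSD has to be stopped first):

# stop the OSD, then flush and replace its journal
ceph-osd -i 12 --flush-journal                      # write out the old journal
fallocate -l 10G /ssd/journals/osd-12.journal       # pre-create a journal file on the SSD
rm /var/lib/ceph/osd/ceph-12/journal
ln -s /ssd/journals/osd-12.journal /var/lib/ceph/osd/ceph-12/journal
ceph-osd -i 12 --mkjournal                          # initialize the new journal
# start the OSD again afterwards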
Br,
T
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
George Shuklin
Se
something else. (Ubuntu 14.04 3.13.0-32-generic)
Br,
Tuomas
From: Константин Сахинов [mailto:sakhi...@gmail.com]
Sent: 7 August 2015 21:15
To: Tuomas Juntunen; Quentin Hartman
Cc: ceph-users
Subject: Re: [ceph-users] Flapping OSD's when scrubbing
Hi!
One time I faced such a
Thanks
We play with the values a bit and see what happens.
Br,
Tuomas
From: Quentin Hartman [mailto:qhart...@direwolfdigital.com]
Sent: 7 August 2015 20:32
To: Tuomas Juntunen
Cc: ceph-users
Subject: Re: [ceph-users] Flapping OSD's when scrubbing
That kind of behavi
Hi
We are experiencing an annoying problem where scrubs make OSD's flap down
and make the Ceph cluster unusable for a couple of minutes.
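For reference, one common way to soften the impact is to throttle scrubbing at runtime, along these lines (the values are only an example to illustrate the knobs, not a recommendation):

# one scrub at a time per OSD, with a small sleep between scrub chunks
ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'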
Our cluster consists of three nodes connected with 40gbit infiniband using
IPoIB, with 2x 6 core X5670 CPU's and 64GB of memory
Each node has 6 SSD's fo
I would say this is a problem with the NTFS mount. I found another way to do
this, so that's that.
Thanks for noticing.
Br,T
-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 5 July 2015 20:45
To: Tuomas Juntunen
Cc: ceph-users
Subject: Re: [ceph-users] RBD
Hi
Is there any other kernel that would work? Anyone else had this kind of problem
with rbd map?
Br, T
-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 5 July 2015 19:42
To: Tuomas Juntunen
Cc: ceph-users
Subject: Re: [ceph-users] RBD mounted image on
Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 5 July 2015 19:30
To: Tuomas Juntunen
Cc: ceph-users
Subject: Re: [ceph-users] RBD mounted image on linux server kernel error and
hangs the device
On Sun, Jul 5, 2015 at 6:58 PM, Tuomas Juntunen
wrote:
> Hi
>
> That's the on
Hi
That's the only error that comes from this; there's nothing else.
Br,
T
-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 5 July 2015 18:30
To: Tuomas Juntunen
Cc: ceph-users
Subject: Re: [ceph-users] RBD mounted image on linux server kernel
Hi
We are experiencing the following
- Hammer 0.94.2
- Ubuntu 14.04.1
- Kernel 3.16.0-37-generic
- 40TB NTFS disk mounted through RBD
First 50GB goes fine, but then this happens
Jul 5 16:56:01 cephclient kernel: [110581.046141] kworker/u65:
-boun...@lists.ceph.com] On Behalf Of
Tuomas Juntunen
Sent: 2 July 2015 16:23
To: 'Somnath Roy'; 'ceph-users'
Subject: Re: [ceph-users] One of our nodes has logs saying: wrongly marked
me down
Thanks
I'll test these values, and also set the osd heartbeat grace to 6.
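For anyone following along, values like that can be tried per OSD over the admin socket before putting them in ceph.conf; a sketch (osd.0 is just an example id, and 6 is simply the number mentioned above, the default grace being 20 seconds):

# run on the node that hosts the OSD
ceph daemon osd.0 config set osd_heartbeat_grace 6
ceph daemon osd.0 config get osd_heartbeat_grace    # verify the change took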
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: 2 July 2015 6:29
To: Tuomas Juntunen; 'ceph-users'
Subject: RE: [ceph-users] One of our nodes has logs saying: wrongly marked
me down
Yeah, this can happen during deep_scrub and also during rebalancing..I
forgot to me
osd_client_message_cap = 0
osd_enable_op_tracker = false
Br, T
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: 2 July 2015 0:30
To: Tuomas Juntunen; 'ceph-users'
Subject: RE: [ceph-users] One of our nodes has logs saying: wrongly marked
me dow
30
To: ceph-users@lists.ceph.com
Cc: Mark Nelson; Tuomas Juntunen
Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops
On Wed, 01 Jul 2015 13:50:39 -0500 Mark Nelson wrote:
>
>
> On 07/01/2015 01:39 PM, Tuomas Juntunen wrote:
> > Thanks Mark
> >
> > Are th
Hi
One of our nodes has OSD logs that say "wrongly marked me down" for every OSD
at some point. What could be the reason for this? Does anyone have similar
experiences?
The other nodes work totally fine, and they are all identical.
Br,T
Hi
I'll check the possibility of testing EnhanceIO and will report back on this.
Thanks
Br,T
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: 1 July 2015 21:51
To: Tuomas Juntunen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very low 4
solution for getting more
random iops. I've read some of Sébastien's writings.
Br,
Tuomas
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: 1 July 2015 20:29
To: Tuomas Juntunen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very low 4k randread p
st
for writing. Would an SSD-based cache pool be a viable solution here?
Br, T
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: 1 July 2015 13:58
To: Tuomas Juntunen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops
Hi
Our ceph is running the following hardware:
3 nodes with 36 OSDs and 18 SSDs (one SSD per two OSDs); each node has 64GB mem
& 2x 6-core CPUs
4 monitors running on other servers
40gbit infiniband with IPoIB
Here are my cephfs fio test results using the following file, changing the rw
parameter
[test
=0.01%, 10=0.02%, 20=0.03%, 50=0.55%
lat (msec) : 100=99.31%, 250=0.08%
100 msecs seems like a lot to me.
Br,T
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: 30 June 2015 22:01
To: Tuomas Juntunen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very l
I have already set readahead on the OSDs before; it is now 2048. This didn't
affect the random reads, but gave a lot more sequential performance.
Br, T
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: 30 June 2015 21:00
To: Tuomas Juntunen; 'Stephen Mercier'
Cc: &
20:55
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops
Hi Tuomas,
Can you paste the command you ran to do the test?
Thanks,
Mark
On 06/30/2015 12:18 PM, Tuomas Juntunen wrote:
> Hi
>
> It's probably not hitting the disks, but that reall
]
Sent: 30 June 2015 20:32
To: Tuomas Juntunen
Cc: 'Somnath Roy'; 'ceph-users'
Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops
I ran into the same problem. What we did, and have been using since, was to
increase the read-ahead buffer in the VMs to 16M.
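For reference, 16M of read-ahead inside a guest is set roughly like this (the device name is only an example, and the setting is not persistent across reboots without a udev rule or similar):

# 16M = 16384 KB; adjust the device name to the guest's disk
echo 16384 | sudo tee /sys/block/vda/queue/read_ahead_kb
cat /sys/block/vda/queue/read_ahead_kb              # verify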
get it up? Or is there some tunable
which would enhance it? I would assume Linux caches reads in memory and
serves them from there, but at least now we don't see it.
Br,
Tuomas
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: 30 June 2015 19:24
To: Tuomas Juntunen; '
Hi
I have been trying to figure out why our 4k random reads in VMs are so bad.
I am using fio to test this; a sketch of the kind of run I mean is below the numbers.
Write : 170k iops
Random write : 109k iops
Read : 64k iops
Random read : 1k iops
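For comparison, the random read case above can be reproduced with a plain fio run along these lines (a sketch only, not the exact job file I used; the file name, size and iodepth are arbitrary):

fio --name=randread-test --filename=/mnt/test/fio.bin --size=4G \
    --direct=1 --ioengine=libaio --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting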
Our setup is:
3 nodes with 36 OSDs and 18 SSDs (one SSD per two OSDs); each node has 64GB mem
&
Hi
Thanks for your comments
I'll indeed put the OS Controller on when we get our replacement CPUs, and try
what you described here.
If there isn't any guide for this yet, should there be?
Br,
Tuomas
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Beh
Hi
I wanted to share my findings of running ceph on HP servers.
We had a lot of problems with CPU load, which was sometimes even 800. We
were trying to figure out why this happens even while not doing anything
special.
Our OSD nodes are running DL380 G6 with Dual Quad core cpu's and 32g
Thanks!
I'll give it a try, and report back my findings.
Br,
Tuomas
-Original Message-
From: Pavel V. Kaygorodov [mailto:pa...@inasan.ru]
Sent: 16 May 2015 13:51
To: Tuomas Juntunen
Cc: Jason Dillaman; ceph-users
Subject: Re: [ceph-users] RBD images -- parent snapshot mi
m
> http://www.redhat.com
>
>
> - Original Message -
> From: "Pavel V. Kaygorodov"
> To: "Tuomas Juntunen"
> Cc: "ceph-users"
> Sent: Tuesday, May 12, 2015 3:55:21 PM
> Subject: Re: [ceph-users] RBD images -- parent snapshot mis
Hi
I have been having this exact same problem for more than a week. I have not
found a way to do this either.
Any help would be appreciated.
Basically all of our guests are now down. Even though they are not in
production, we still need to get the data out of them.
Br,
Tuomas
-Original Me
-boun...@lists.ceph.com] On Behalf Of
Tuomas Juntunen
Sent: 5 May 2015 16:24
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Change pool id
Hi
Previously I had to delete one pool because of a mistake I made. Now I need to
create the pool again and give it the same id. How would one do that?
I assume my root problem is that, since I had to delete the images pool, the
base images the VMs use are missing. I have the images available i
I upgraded Ceph from 0.87 Giant to 0.94.1 Hammer, then created new pools and
deleted some old ones. I also created one pool as a tier, to be able to move
data without an outage.
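For context, attaching such a tier normally comes down to the cache-tiering commands, roughly like this (the pool names here are placeholders, not the ones from our cluster):

ceph osd tier add basepool cachepool             # attach cachepool as a tier of basepool
ceph osd tier cache-mode cachepool writeback     # let the tier absorb and serve I/O
ceph osd tier set-overlay basepool cachepool     # redirect client traffic through the tier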
After these operations all but 10 OSD's are down and writing this kind of
message to the logs; I get more than 100gb of these i
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org
Subject: RE: [ceph-users] Upgrade from Giant to Hammer and after some basic
operations most of the OSD's went down
On Mon, 4 May 2015, Tuomas Juntunen wrote:
> 5827504:10.20.0.11:6800/3382530 'ceph1' m
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: 4 May 2015 18:29
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org
Subject: RE: [ceph-users] Upgrade from Giant to Hammer and after some basic
operations most of the OSD's went down
s
>>
>>
>> > Thanks man. I'll try it tomorrow. Have a good one.
>> >
>> > Br,T
>> >
>> > Original message
>> > From: Sage Weil
>> > Date: 30/04/2015 18:23 (GMT+02:00)
>> > To: Tuomas Juntun
Hey
Yes I can drop the images data, you think this will fix it?
Br,
Tuomas
> On Wed, 29 Apr 2015, Tuomas Juntunen wrote:
>> Hi
>>
>> I updated that version and it seems that something did happen, the osd's
>> stayed up for a while and 'ceph statu
one of the osd's with osd debug = 20,
http://beta.xaasbox.com/ceph/ceph-osd.15.log
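For anyone reproducing this: the debug level can be raised on a running OSD without a restart, or made permanent in ceph.conf. A sketch, using the same OSD id as the log above:

ceph tell osd.15 injectargs '--debug-osd 20/20'     # raise OSD debug logging on the fly
# or put "debug osd = 20" under [osd.15] in ceph.conf and restart that OSD,
# then collect /var/log/ceph/ceph-osd.15.log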
Thank you!
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: 28 April 2015 23:57
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org
Subject:
cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
public_network = 10.20.0.0/16
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 28 April 2015 21:57
To: Tuomas Juntunen
Cc: ceph-
Here it is
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 28 April 2015 21:57
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Upgrade from Giant to Hammer and after some basic
operations most of the OSD's went dow
s will still show it as up.
Here's a log for that osd http://beta.xaasbox.com/ceph/ceph-osd.37.log
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 28 April 2015 20:02
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Upg
eplay}, 1 up:standby
osdmap e18132: 37 osds: 11 up, 11 in
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 27 April 2015 22:22
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Upgrade from Giant to Hammer and after some basic
op
Hi
Updated the logfile, same place http://beta.xaasbox.com/ceph/ceph-osd.15.log
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 27 April 2015 22:22
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Upgrade from Giant to
Hey
Got the log, you can get it from
http://beta.xaasbox.com/ceph/ceph-osd.15.log
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 27 April 2015 20:45
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upgrade from Giant to
.com]
Sent: 27 April 2015 20:45
To: Tuomas Juntunen
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upgrade from Giant to Hammer and after some basic
operations most of the OSD's went down
Yeah, no snaps:
images:
"snap_mode": "self
Hi
Here you go
Br,
Tuomas
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: 27 April 2015 19:23
To: Tuomas Juntunen
Cc: 'Samuel Just'; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upgrade from Giant to Hammer and after some basic
operations m
I can sacrifice the images and img pools if that is necessary. I just need to
get the thing going again.
Tuomas
-Original Message-
From: Samuel Just [mailto:sj...@redhat.com]
Sent: 27 April 2015 15:50
To: tuomas juntunen
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users
to:sj...@redhat.com]
Sent: 27 April 2015 15:50
To: tuomas juntunen
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upgrade from Giant to Hammer and after some basic
operations most of the OSD's went down
So, the base tier is what determines the snapshots for the cache/base pool
ama
ean by:
>
> "Also I created one pool for tier to be able to move data without outage."
>
> -Sam
> - Original Message -----
> From: "tuomas juntunen"
> To: "Ian Colle"
> Cc: ceph-users@lists.ceph.com
> Sent: Monday, April 27, 2015 4:23:44
itter.com/ircolle
> Cell: +1.303.601.7713
> Email: ico...@redhat.com
>
> ----- Original Message -
> From: "tuomas juntunen"
> To: ceph-users@lists.ceph.com
> Sent: Monday, April 27, 2015 1:56:29 PM
> Subject: [ceph-users] Upgrade from Giant to Hammer and after som