Good news! When will this release be published in the Debian Wheezy package list?
Thanks for your good work.
2014-07-30 8:21 GMT+07:00 Sage Weil :
> Another Ceph development release! This has been a longer cycle, so there
> has been quite a bit of bug fixing and stabilization in this round.
> There is also a bu
On Mon, 4 Aug 2014 15:11:39 -0400 Chris Kitzmiller wrote:
> On Aug 2, 2014, at 12:03 AM, Christian Balzer wrote:
> > On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote:
> >
> >> I have 3 nodes each running a MON and 30 OSDs.
> >
> > Given the HW you list below, that might be a tall order,
Thanks for the response, Yehuda.
[ more text below ]
On 2014-08-05 05:33, Yehuda Sadeh wrote:
On Fri, Aug 1, 2014 at 9:49 AM, Osier Yang wrote:
[ correct the URL ]
On 2014-08-02 00:42, Osier Yang wrote:
Hi, list,
I managed to set up radosgw in a testing environment to see if it's
stable/mature enough
This is going to sound odd, and if I hadn't been issuing all commands on the
monitor I would swear I issued 'rm -rf' from the shell of the OSD in the
/var/lib/osd/ceph-s/ directory. After creating the pool/rbd and getting an
error from 'rbd info' I saw an OSD down/out, so I went to its shell and
I couldn't find the ceph-mon stack dump in the log; all greps for 'ceph version'
weren't followed by a stack trace.
Executed ceph-deploy purge/purgedata on the monitor and OSDs.
NOTE: had to manually go to the individual osd shells and remove /var/lib/ceph
after umount of the ceph/xfs device.
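(A hedged sketch of that teardown with ceph-deploy, using hypothetical hostnames; the manual cleanup matches what is described above:)
ceph-deploy purge mon1 osd1 osd2
ceph-deploy purgedata mon1 osd1 osd2
# on each OSD host, if data directories were left behind:
umount /var/lib/ceph/osd/ceph-0   # hypothetical mount point
rm -rf /var/lib/ceph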
On Fri, Aug 1, 2014 at 9:49 AM, Osier Yang wrote:
> [ correct the URL ]
>
>
> On 2014-08-02 00:42, Osier Yang wrote:
>>
>> Hi, list,
>>
>> I managed to setup radosgw in testing environment to see if it's
>> stable/mature enough
>> for production use these several days. In the meanwhile, I tried t
Hi to all, does anybody have a step-by-step procedure to install Ceph
from a tar.gz file? I would like to test version 0.82
Thanks in advance,
Best regards,
German Anders
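(A rough build-from-source sketch for the 0.82 tarball, assuming the autotools build Ceph used at the time and that build dependencies are already installed; paths and the -j value are illustrative:)
tar xzf ceph-0.82.tar.gz
cd ceph-0.82
./autogen.sh   # only needed for a git checkout; release tarballs ship configure
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make -j4
sudo make install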
--- Original message ---
Subject: Re: [ceph-users] Ceph writes stall for long periods with
no disk/network activity
From: Chris Kitzmiller
To: Mariusz Gronczewski
Cc:
Date: Monday, 04/08/2014 17:28
On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote:
I got weird stalling during writes
On 08/04/2014 03:28 PM, Chris Kitzmiller wrote:
On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote:
I got weird stalling during writes; sometimes I get the same write speed
for a few minutes and after some time it starts stalling at 0 MB/s for
minutes
I'm getting very similar behavior on my cluster.
On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote:
> I got weird stalling during writes; sometimes I get the same write speed
> for a few minutes and after some time it starts stalling at 0 MB/s for
> minutes
I'm getting very similar behavior on my cluster. My writes start well but then
just kind
Sage Weil writes:
>
> On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote:
> > Hi all,
> >
> > We recently added many OSDs to our production cluster.
> > This brought us to a point where the number of PGs we had assigned to our
> > main (heavily used) pool was well below the recommended value.
On Mon, 4 Aug 2014, Bruce McFarland wrote:
> Is there a header or first line that appears in all ceph-mon stack dumps
> I can search for? The couple of ceph-mon stack dumps I've seen in web
> searches appear to all begin with "ceph version 0.xx", but those are
> from over a year ago. Is that st
On Aug 2, 2014, at 12:03 AM, Christian Balzer wrote:
> On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote:
>
>> I have 3 nodes each running a MON and 30 OSDs.
>
> Given the HW you list below, that might be a tall order, particularly
> CPU-wise in certain situations.
I'm not seeing any drama
Is there a header or first line that appears in all ceph-mon stack dumps I can
search for? The couple of ceph-mon stack dumps I've seen in web searches
appear to all begin with "ceph version 0.xx", but those are from over a year
ago. Is that still the case with 0.81 firefly code?
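(One hedged way to search: crash backtraces in ceph daemon logs are normally preceded by a 'FAILED assert' line or a '*** Caught signal' line, so a grep along these lines usually finds them:)
grep -A 10 -e 'Caught signal' -e 'FAILED assert' /var/log/ceph/ceph-mon.*.log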
This is probably a question best asked on the ceph-users list. I have
added it here.
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
On Mon, Aug 4, 2014 at 2:17 AM, Santhosh Fernandes
wrote:
> Hi all,
>
Okay, looks like the mon went down then.
Was there a stack trace in the log after the daemon crashed? (Or did the
daemon stay up but go unresponsive or something?)
Thanks!
sage
On Mon, 4 Aug 2014, Bruce McFarland wrote:
> 2014-08-04 09:57:37.144649 7f42171c8700 0 -- 209.243.160.35:0/1032499
2014-08-04 09:57:37.144649 7f42171c8700 0 -- 209.243.160.35:0/1032499 >>
209.243.160.35:6789/0 pipe(0x7f4204007dd0 sd=3 :0 s=1 pgs=0 cs=0 l=1
c=0x7f4204001a90).fault
2014-08-04 09:58:07.145097 7f4215ac3700 0 -- 209.243.160.35:0/1032499 >>
209.243.160.35:6789/0 pipe(0x7f4204001530 sd=3 :0 s=1 p
Does anyone have any insight on how we can tune librbd to perform closer
to the level of the rbd kernel module?
In our lab we have a four node cluster with 1GbE public network and
10GbE cluster network. A client node connects to the public network
with 10GbE.
When doing benchmarks on the clien
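(A hedged sketch of client-side librbd settings that often narrow the gap, placed in the [client] section of ceph.conf on the client node; the values are illustrative, not recommendations:)
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd cache size = 67108864        # 64 MB, illustrative
rbd cache max dirty = 50331648   # 48 MB, illustrative
Note that librbd caching only takes effect if the QEMU drive cache mode permits it (e.g. cache=writeback).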
This is pretty straightforward to fix. CRUSH defaults to placing replicas on
OSDs on separate hosts, so a single-node cluster can't satisfy the default rule.
If you are setting up a 1 node cluster, modify the setting:
osd crush chooseleaf type = 0
Add that to your ceph.conf file and restart your cluster.
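(For a fresh single-node setup, a minimal ceph.conf sketch; as far as I know this option only influences the default CRUSH rule generated at cluster creation, so an existing cluster may instead need its rule edited with crushtool, changing 'step chooseleaf firstn 0 type host' to 'type osd':)
[global]
osd crush chooseleaf type = 0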
On Mon, Aug 4, 2014 at 5:51 AM, Pratik Rupala
wrote:
On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote:
> Hi all,
>
> We recently added many OSDs to our production cluster.
> This brought us to a point where the number of PGs we had assigned to our
> main (heavily used) pool was well below the recommended value.
>
> We increased the PG number (in
On Mon, 4 Aug 2014, Kenneth Waegeman wrote:
> Hi,
>
> I have been doing some tests with rados bench write on an EC storage pool with
> a writeback cache pool (replicated, size 3), and have some questions:
>
> * I had set target_max_bytes to 280G, and after some time of writing, the
> cache pool sta
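(Hedged aside: the flush/evict behaviour of a firefly cache tier is driven by per-pool settings; for example, with a hypothetical cache pool named 'cachepool':)
ceph osd pool set cachepool target_max_bytes 300647710720    # ~280G
ceph osd pool set cachepool cache_target_dirty_ratio 0.4     # start flushing dirty objects at 40%
ceph osd pool set cachepool cache_target_full_ratio 0.8      # start evicting clean objects at 80%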
Hi all,
We recently added many OSDs to our production cluster.
This brought us to a point where the number of PGs we had assigned to our
main (heavily used) pool was well below the recommended value.
We increased the PG number (incrementally to avoid huge degradation ratios)
to the recommended
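(For reference, the increase described here is done per pool; a sketch with a hypothetical pool name and target, noting that pgp_num must follow pg_num before data actually rebalances onto the new PGs:)
ceph osd pool set mainpool pg_num 4096
ceph osd pool set mainpool pgp_num 4096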
On Mon, 4 Aug 2014, Sarang G wrote:
> Hi,
>
> I am configuring Ceph Cluster using teuthology. I want to use Valgrind.
>
> My yaml File contains:
>
> check-locks: false
>
> roles:
> - [mon.0, osd.0]
> - [mon.1, osd.1]
> - [mon.2, osd.2, client.0]
BTW I would use mon.[abc] instead of [012] as th
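(For reference, a roles stanza with the conventional monitor names might look like this; the daemon IDs are otherwise unchanged from the yaml above:)
roles:
- [mon.a, osd.0]
- [mon.b, osd.1]
- [mon.c, osd.2, client.0]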
On Sun, Aug 3, 2014 at 2:04 AM, Christopher O'Connell
wrote:
> To be more clear on my question: we currently use ELRepo for those rare
> occasions when we need a 3.x kernel on CentOS. Are you aware of anyone
> maintaining a 3.14 kernel?
The fix is not in stable yet, and won't be for the next two-
Hi,
You mentioned that you have 3 hosts which are VMs. Are you using simple
directories as OSDs or virtual disks as OSDs?
I had the same problem a few days back, where not enough space was
available from the OSDs for the cluster.
Try to increase the size of disks if you are using virtual disks and if
Hi Tijn,
I am not an expert, but having only one "mon" could be part of the
problem.
regards, I
2014-08-04 12:41 GMT+02:00 Tijn Buijs :
> Hi Everybody,
>
> My idea was that maybe I was impatient or something, so I left my Ceph
> cluster running over the weekend. So from Friday 15:00 u
Hi Kapil,
The crush map is below
# begin crush map
# devices
device 0 osd.0
device 1 osd.1
# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root
# buckets
root default {
id -1 # do not change unnecessarily
# weight 1.000
a
Hi Everybody,
My idea was that maybe I was impatient or something, so I left my Ceph
cluster running over the weekend. So from Friday 15:00 until now (it is
Monday morning 11:30 here now) it kept on running. And it didn't help
:). It still needs to create 192 PGs.
I've reinstalled my entire cluster
I think ceph osd tree should list your OSDs under the node bucket.
Could you also check your OSD crush map with this:
ceph osd getcrushmap -o filename
crushtool -d filename -o filename.txt
you should see your OSDs in the #devices section and you
should see your three servers in the #buckets sec
Hi Kapil
Thanks for responding :)
My mon server and two OSDs are running on three separate servers, one
per node. All are SLES SP3.
Below is "ceph osd tree" output from my mon server box
slesceph1: # ceph osd tree
# id  weight  type name  up/down
Hi,
I have been doing some tests with rados bench write on an EC storage
pool with a writeback cache pool (replicated, size 3), and have some
questions:
* I had set target_max_bytes to 280G, and after some time of writing,
the cache pool stays filled at around 250G of data; rados df output:
Hi Yogesh,
Are your two OSDs on the same node? Could you check the osd tree output
with the command "ceph osd tree"?
Regards,
Kapil.
On Mon, 2014-08-04 at 09:22 +0000, yogesh_d...@dell.com wrote:
>
> Matt
>
> I am using Suse Enterprise Linux 11 – SP3 ( S
Matt
I am using Suse Enterprise Linux 11 - SP3 (SLES SP3).
I don't think I have enabled SELinux.
Yogesh Devi,
Architect, Dell Cloud Clinical Archive
Dell
Land Phone +91 80 28413000 Extension - 2781
Hand Phone +91 99014 71082
From: Matt Harlum [mail
Hello guys,
I was hoping to get some answers on how Ceph would behave when I install SSDs
at the hypervisor level and use them as a cache pool. Let's say I've got 10 KVM
hypervisors and I install one 512GB SSD on each server. I then create a cache
pool for my storage cluster using these SSDs. M
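(Not an answer on behaviour, but for concreteness: the firefly-era commands to attach such an SSD pool as a writeback cache tier look roughly like this, with hypothetical pool names:)
ceph osd tier add rbdpool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay rbdpool ssd-cache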
Hi,
I am configuring Ceph Cluster using teuthology. I want to use Valgrind.
My yaml File contains:
check-locks: false
roles:
- [mon.0, osd.0]
- [mon.1, osd.1]
- [mon.2, osd.2, client.0]
tasks:
- install:
    branch: firefly
    flavor: notcmalloc
- ceph:
    valgrind:
      osd.0: --tool=memcheck
Hi
What distributions are your machines using, and is SELinux enabled on them?
I ran into the same issue once; I had to disable SELinux on all the machines
and then reinstall
On 4 Aug 2014, at 5:25 pm, yogesh_d...@dell.com wrote:
>
> Matt
> Thanks for res
Matt
Thanks for responding
As suggested I tried to set replication to 2x by using the commands you provided:
$ ceph osd pool set data size 2
$ ceph osd pool set data min_size 2
$ ceph osd pool set rbd size 2
$ ceph osd pool set rbd min_size 2
$ ceph osd pool set metadata si
Hi,
The auth caps were as follows:
caps: [mon] allow r
caps: [osd] allow rwx pool=hosting_windows_sharedweb, allow rwx
pool=infra_systems, allow rwx pool=hosting_linux_sharedweb
I changed them (just adding a pool to the list) to:
caps: [mon] allow r
caps: [osd] allow rwx pool=hosting_windows_s