I just got my small Ceph cluster running. I run 6 OSDs on the same server to
basically replace mdraid.
I have tried to simulate a hard drive (OSD) failure: removed the OSD
(out+stop), zapped it, and then
prepared and activated it. It worked, but I ended up with one extra OSD (and
the old one s
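A minimal sketch of fully retiring the failed OSD's ID before re-creating it,
so the cluster does not grow an extra entry (osd.N is a placeholder for the
old OSD, already stopped and marked out):

# drop the old OSD from CRUSH, delete its key, and free its ID for reuse
ceph osd crush remove osd.N
ceph auth del osd.N
ceph osd rm osd.N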
On Mon, 12 Aug 2013, Jeppesen, Nelson wrote:
> Joao,
>
> (log file uploaded to http://pastebin.com/Ufrxn6fZ)
>
> I had some good luck and some bad luck. I copied the store.db to a new
> monitor, injected a modified monmap and started it up (This is all on the
> same host.) Very quickly it reac
Joao,
(log file uploaded to http://pastebin.com/Ufrxn6fZ)
I had some good luck and some bad luck. I copied the store.db to a new monitor,
injected a modified monmap and started it up (This is all on the same host.)
Very quickly it reached quorum (as far as I can tell) but didn't respond.
Runn
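For reference, a hedged sketch of the extract/modify/inject cycle described
above; the mon IDs and paths are placeholders, not values from this thread,
and the monitor must be stopped while its map is extracted or injected:

ceph-mon -i a --extract-monmap /tmp/monmap   # dump the current monmap
monmaptool --rm b /tmp/monmap                # e.g. drop a dead monitor
ceph-mon -i a --inject-monmap /tmp/monmap    # write the edited map back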
> On 08/12/2013 06:49 PM, Dmitry Postrigan wrote:
>> Hello community,
>>
>> I am currently installing some backup servers with 6x3TB drives in them. I
>> played with RAID-10 but I was not
>> impressed at all with how it performs during a recovery.
>>
>> Anyway, I thought what if instead of RAID-1
[re-adding ceph-users so others can benefit from the archives]
On 08/12/2013 07:18 PM, PJ wrote:
2013/8/13 Josh Durgin :
On 08/12/2013 10:19 AM, PJ wrote:
Hi All,
Before going into the issue description, here are our hardware configurations:
- Physical machine * 3: each has quad-core CPU * 2, 64+
On 08/12/2013 06:49 PM, Dmitry Postrigan wrote:
Hello community,
I am currently installing some backup servers with 6x3TB drives in them. I
played with RAID-10 but I was not
impressed at all with how it performs during a recovery.
Anyway, I thought what if instead of RAID-10 I use ceph? All
I have PGs that have been stuck for a long time and I do not know how to fix
it. Can someone help me check?
Environment: Debian 7 + ceph 0.61.7
root@ceph-admin:~# ceph -s
health HEALTH_WARN 6 pgs stuck unclean
monmap e2: 2 mons at {a=192.168.250.15:6789/0,b=192.168.250.8:6789/0},
election epoch 8, q
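A few stock commands that usually narrow down which PGs are stuck and why
(a generic sketch, not specific to this cluster; <pgid> is a placeholder):

ceph health detail          # lists the individual stuck PGs
ceph pg dump_stuck unclean  # stuck PGs with their acting OSD sets
ceph pg <pgid> query        # detailed state of one PG from the list above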
Hello community,
I am currently installing some backup servers with 6x3TB drives in them. I
played with RAID-10 but I was not
impressed at all with how it performs during a recovery.
Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be
local, so I could simply create
6 l
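If all six OSDs end up on one host, the default CRUSH behaviour of placing
replicas on different hosts has to be relaxed; a sketch of the relevant
ceph.conf lines, assuming a single-node layout like the one described:

[global]
    osd pool default size = 2
    osd crush chooseleaf type = 0   # allow replicas on the same host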
Do I understand you to mean, James, that you bounce spam messages back
to the sender, even if the sender is a listserv? That seems like a
really bad idea, punishing the innocent at best, and causing problems
like this at worst.
IMO the best spam strategy is to drop it as early as possible at
On 08/12/2013 10:19 AM, PJ wrote:
Hi All,
Before going into the issue description, here are our hardware configurations:
- Physical machine * 3: each has quad-core CPU * 2, 64+ GB RAM, HDD * 12
(500GB ~ 1TB per drive; 1 for system, 11 for OSD). The Ceph OSDs run on the
physical machines.
- Each physical machin
You saved me a bunch of time; I was planning to test my backup and
restore later today. Thanks!
It occurred to me that the backups won't be as useful as I thought. I'd
need to make sure that the PGs hadn't moved around after the backup was
made. If they had, I'd spend a lot of time tracki
Following a discussion we had today on #ceph, I've added some extra
functionality to 'ceph-monstore-tool' to allow copying the data out of a
store into a new mon store, and can be found on branch wip-monstore-copy.
Using it as
ceph-monstore-tool --mon-store-path <old store path> --out <new store path>
--command store-copy
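A hypothetical invocation with made-up paths, following the usage line above:

ceph-monstore-tool --mon-store-path /var/lib/ceph/mon/ceph-a \
    --out /tmp/mon-a-copy --command store-copy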
Ok, your best bet is to remove osds 3,14,16:
ceph auth del osd.3
ceph osd crush rm osd.3
ceph osd rm osd.3
for each of them. Each osd you remove may cause
some data rebalancing, so you should be ready for
that.
-Sam
On Mon, Aug 12, 2013 at 3:01 PM, Jeff Moskow wrote:
> Sam,
>
> 3, 14
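A sketch of applying those three removal steps to each of the OSDs in
question; the loop is illustrative, and as noted above each removal may
trigger rebalancing:

for id in 3 14 16; do
    ceph auth del osd.$id
    ceph osd crush rm osd.$id
    ceph osd rm osd.$id
done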
Sam,
3, 14 and 16 have been down for a while and I'll eventually replace
those drives (I could do it now)
but didn't want to introduce more variables.
We are using RBD with Proxmox, so I think the answer about kernel
clients is yes
Jeff
On Mon, Aug 12, 2013 at 02:41:11PM -
Can you give a step by step account of what you did prior to the error?
-Sam
On Tue, Aug 6, 2013 at 10:52 PM, 於秀珠 wrote:
> Using ceph-deploy to manage an existing cluster, I followed the steps in the
> document, but there are some errors and I cannot gather the keys.
> When I run the command "ce
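For context, gathering keys from an existing cluster is normally a single
ceph-deploy call (a sketch; the monitor hostname is a placeholder):

ceph-deploy gatherkeys <mon-hostname>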
Can you attach the output of:
ceph -s
ceph pg dump
ceph osd dump
and run
ceph osd getmap -o /tmp/osdmap
and attach /tmp/osdmap/
-Sam
On Wed, Aug 7, 2013 at 1:58 AM, Howarth, Chris wrote:
> Hi,
>
> One of our OSD disks failed on a cluster and I replaced it, but when it
> failed it did not
On 08/12/2013 04:49 AM, Joshua Young wrote:
I have 2 issues that I can not find a solution to.
First: I am unable to stop / start any osd by command. I have deployed
with ceph-deploy on Ubuntu 13.04 and everything seems to be working
fine. I have 5 hosts, 5 mons and 20 osds.
Using initctl list
I think the docs you are looking for are
http://ceph.com/docs/master/man/8/cephfs/ (specifically the set_layout
command).
-Sam
On Thu, Aug 8, 2013 at 7:48 AM, Da Chun wrote:
> Hi list,
> I saw the info about data striping in
> http://ceph.com/docs/master/architecture/#data-striping .
> But couldn
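A hedged sketch of a set_layout call from that man page; the path and values
are invented, and the exact flag names should be checked against the local
cephfs(8) man page:

# 4 MB stripe unit, 4-way striping, 4 MB objects on a directory
cephfs /mnt/ceph/mydir set_layout -u 4194304 -c 4 -s 4194304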
Are you using any kernel clients? Will osds 3,14,16 be coming back?
-Sam
On Mon, Aug 12, 2013 at 2:26 PM, Jeff Moskow wrote:
> Sam,
>
> I've attached both files.
>
> Thanks!
> Jeff
>
> On Mon, Aug 12, 2013 at 01:46:57PM -0700, Samuel Just wrote:
>> Can you attach the output of ce
On 08/08/13 15:21, Craig Lewis wrote:
I've seen a couple posts here about broken clusters that had to repair
by modifing the monmap, osdmap, or the crush rules.
The old school sysadmin in me says it would be a good idea to make
backups of these 3 databases. So far though, it seems like everybod
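A quick sketch of dumping those three maps for safekeeping with standard
commands (whether a later restore is actually useful is exactly the question
raised above):

ceph mon getmap -o monmap.$(date +%F)
ceph osd getmap -o osdmap.$(date +%F)
ceph osd getcrushmap -o crushmap.$(date +%F)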
Sam,
I've attached both files.
Thanks!
Jeff
On Mon, Aug 12, 2013 at 01:46:57PM -0700, Samuel Just wrote:
> Can you attach the output of ceph osd tree?
>
> Also, can you run
>
> ceph osd getmap -o /tmp/osdmap
>
> and attach /tmp/osdmap?
> -Sam
>
> On Fri, Aug 9, 2013 at 4:28 A
I have referred you to someone more conversant with the details of
mkcephfs, but for dev purposes, most of us use the vstart.sh script in
src/ (http://ceph.com/docs/master/dev/).
-Sam
On Fri, Aug 9, 2013 at 2:59 AM, Nulik Nol wrote:
> Hi,
> I am configuring a single node for developing purposes,
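A typical throwaway dev cluster with vstart.sh, as a sketch run from a built
source tree (flags per the dev docs linked above):

cd src
./vstart.sh -d -n -x        # -d debug, -n new cluster, -x enable cephx
./ceph -c ceph.conf -s      # talk to the local vstart cluster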
Can you attach the output of ceph osd tree?
Also, can you run
ceph osd getmap -o /tmp/osdmap
and attach /tmp/osdmap?
-Sam
On Fri, Aug 9, 2013 at 4:28 AM, Jeff Moskow wrote:
> Thanks for the suggestion. I had tried stopping each OSD for 30 seconds,
> then restarting it, waiting 2 minutes and t
Can you elaborate on what behavior you are looking for?
-Sam
On Fri, Aug 9, 2013 at 4:37 AM, Georg Höllrigl
wrote:
> Hi,
>
> I'm using ceph 0.61.7.
>
> When using ceph-fuse, I couldn't find a way, to only mount one pool.
>
> Is there a way to mount a pool - or is it simply not supported?
>
>
>
>
Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam
On Mon, Aug 12, 2013 at 6:21 AM, Pavel Timoschenkov
wrote:
> Hi.
>
> I have some problems with creating a journal on a separate disk, using the
> ceph-deploy osd prepare command.
>
> When I try to execute the following command:
>
> ceph-deploy osd prepare
Can you post more of the log? There should be a line towards the bottom
indicating the line with the failed assert. Can you also attach ceph pg
dump, ceph osd dump, ceph osd tree?
-Sam
On Mon, Aug 12, 2013 at 11:54 AM, John Wilkins wrote:
> Stephane,
>
> You should post any crash bugs with sta
Stephane,
You should post any crash bugs with stack trace to ceph-devel
ceph-de...@vger.kernel.org.
On Mon, Aug 12, 2013 at 9:02 AM, Stephane Boisvert <
stephane.boisv...@gameloft.com> wrote:
> Hi,
> It seems my OSD processes keep crashing randomly and I don't know
> why. It seems to happ
Hi All,
Before going into the issue description, here are our hardware configurations:
- Physical machine * 3: each has quad-core CPU * 2, 64+ GB RAM, HDD * 12
(500GB ~ 1TB per drive; 1 for system, 11 for OSD). The Ceph OSDs run on the
physical machines.
- Each physical machine runs 5 virtual machines. One VM as
Hi,
I have been trying to get ceph installed on a single node. But I'm stuck with
the following error.
[host]$ ceph-deploy -v mon create ceph-server-299
Deploying mon, cluster ceph hosts ceph-server-299
Deploying mon to ceph-server-299
Distro RedHatEnterpriseServer codename Santiago, will use sy
Hi Matthew,
I am not quite sure about the POLLRDHUP.
On the server side (ceph-mon), tcp_read_wait does see the
POLLHUP - which should be the indicator that
the other side is shutting down.
I have also taken a brief look at the client side (ceph mon stat).
It initiates a shutdown - but never f
Hi,
It seems my OSD processes keep crashing randomly and I don't
know why. It seems to happen when the cluster is trying to
rebalance... In normal usage I didn't notice any crashes like that.
We are running ceph 0.61.7 on an up-to-date Ubuntu 12.04 (all packages
On 08/12/2013 03:19 PM, Jeff Moskow wrote:
Hi,
The activity on our ceph cluster has gone up a lot. We are using exclusively
RBD
storage right now.
Is there a tool/technique that could be used to find out which rbd images are
receiving the most activity (something like "rbdtop")?
Are you us
On Mon, Aug 12, 2013 at 03:19:04PM +0200, Jeff Moskow wrote:
> Hi,
>
> The activity on our ceph cluster has gone up a lot. We are using exclusively
> RBD
> storage right now.
>
> Is there a tool/technique that could be used to find out which rbd images are
> receiving the most activity (somethi
Hi.
I have some problems with creating a journal on a separate disk, using the
ceph-deploy osd prepare command.
When I try to execute the following command:
ceph-deploy osd prepare ceph001:sdaa:sda1
where:
sdaa - disk for ceph data
sda1 - partition on the SSD drive for the journal
I get the following errors:
===
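A hedged sketch of the zap-then-prepare sequence Sam suggests, reusing the
device names from this thread:

ceph-deploy disk zap ceph001:sdaa               # wipe the data disk first
ceph-deploy osd prepare ceph001:sdaa:sda1       # journal on the SSD partition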
Hi,
The activity on our ceph cluster has gone up a lot. We are using exclusively
RBD
storage right now.
Is there a tool/technique that could be used to find out which rbd images are
receiving the most activity (something like "rbdtop")?
Thanks,
Jeff
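There is no rbdtop as such; a coarse starting point, assuming the installed
ceph CLI supports per-pool stats, is to watch client I/O rates per pool rather
than per image:

ceph osd pool stats                     # read/write rates for every pool
watch -n 2 'ceph osd pool stats rbd'    # follow just the rbd pool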
On Sun, Aug 11, 2013 at 9:31 PM, Harvey Skinner wrote:
> Hello Alfredo,
>
> when do you expect this updated version of ceph-deploy to make it into
> a cuttlefish release? I would like to give this updated version a
> try while I am working on deployment of a Ceph environment using
> ceph-depl
I have 2 issues that I can not find a solution to.
First: I am unable to stop / start any osd by command. I have deployed with
ceph-deploy on Ubuntu 13.04 and everything seems to be working fine. I have 5
hosts, 5 mons and 20 osds.
Using initctl list | grep ceph gives me
ceph-mds-all-starter st
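With the upstart-based packages on Ubuntu 13.04, daemons are usually driven
per id rather than through a single init script; a sketch, with the id chosen
as an example:

sudo stop ceph-osd id=2      # on the host that carries that OSD
sudo start ceph-osd id=2
sudo status ceph-osd id=2
sudo start ceph-osd-all      # or all OSDs on the host at once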
Hi Andreas,
I think we're both working on the same thing; I've just changed the
function calls over to rsockets in the source instead of using the pre-load
library. It explains why we're having the exact same problem!
From what I've been able to tell, the entire problem revolves around
rsockets n