On 13.08.2013 05:05, Dmitry Postrigan wrote:
> This will be a single server configuration, the goal is to replace mdraid,
> hence I tried to use localhost
> (nothing more will be added to the cluster). Are you saying it will be less
> fault tolerant than a RAID-10?
Ceph is a distributed object
On 08/13/2013 03:49 AM, Dmitry Postrigan wrote:
> Hello community,
Hi,
> I am currently installing some backup servers with 6x3TB drives in them. I
> played with RAID-10 but I was not
> impressed at all with how it performs during a recovery.
>
> Anyway, I thought what if instead of RAID-10 I
Hi.
Yes, I zapped all the disks beforehand.
More about my situation:
sdaa - one of the data disks: 3 TB with a GPT partition table.
sda - SSD drive with manually created partitions (10 GB) for the journal, with an
MBR partition table.
===
fdisk -l /dev/sda
Disk /dev/sda: 480.1 GB, 4
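For reference, a minimal sketch of the ceph-deploy call for this kind of layout; the hostname is a placeholder, not from this thread:
# hypothetical host 'node1': data on /dev/sdaa, journal on the pre-made 10 GB SSD partition
ceph-deploy osd create node1:sdaa:/dev/sda1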
On Tue, Aug 13, 2013 at 3:22 PM, Wolfgang Hennerbichler
wrote:
>
>
> On 08/13/2013 03:49 AM, Dmitry Postrigan wrote:
>> Hello community,
>
> Hi,
>
>> I am currently installing some backup servers with 6x3TB drives in them. I
>> played with RAID-10 but I was not
>> impressed at all with how it per
On 08/13/2013 09:23 AM, Jeffrey 'jf' Lim wrote:
>>> Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will
>>> be local, so I could simply create
>>> 6 local OSDs + a monitor, right? Is there anything I need to watch out for
>>> in such configuration?
>>
>> You can do that. A
>> This will be a single server configuration, the goal is to replace mdraid,
>> hence I tried to use localhost
>> (nothing more will be added to the cluster). Are you saying it will be less
>> fault tolerant than a RAID-10?
> Ceph is a distributed object store. If you stay within a single machi
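A side note for a single-host setup like this, sketched here as an assumption rather than taken from the reply: the default CRUSH rule separates replicas across hosts, so with only one host you would tell it to choose across OSDs instead, e.g. in ceph.conf before creating the cluster:
[global]
    # single-node cluster: place replicas on different OSDs rather than different hosts
    osd crush chooseleaf type = 0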
On 08/13/2013 09:47 AM, Dmitry Postrigan wrote:
>> Why would you want to make this switch?
>
> I do not think RAID-10 on 6 3TB disks is going to be reliable at all. I have
> simulated several failures, and
> it looks like a rebuild will take a lot of time. Funnily, during one of these
> exper
>> I am currently installing some backup servers with 6x3TB drives in them. I
>> played with RAID-10 but I was not
>> impressed at all with how it performs during a recovery.
>>
>> Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be
>> local, so I could simply create
>>
Georg Höllrigl wrote:
> I'm using ceph 0.61.7.
>
> When using ceph-fuse, I couldn't find a way, to only mount one pool.
>
> Is there a way to mount a pool - or is it simply not supported?
This mean "mount as fs"?
Same as kernel-level cephfs (fuse & cephfs = same instance). You cannot "mount
poo
Sam,
Thanks that did it :-)
health HEALTH_OK
monmap e17: 5 mons at
{a=172.16.170.1:6789/0,b=172.16.170.2:6789/0,c=172.16.170.3:6789/0,d=172.16.170.4:6789/0,e=172.16.170.5:6789/0},
election epoch 9794, quorum 0,1,2,3,4 a,b,c,d,e
osdmap e23445: 14 osds: 13 up, 13 in
pgmap v1355
Hi all,
my Debian 7 wheezy machine died with the following in the logs:
http://pastebin.ubuntu.com/5981058/
It's using kvm and ceph as an rbd device.
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
Can you please give me some advice?
Thanks,
Giuseppe
We have a cluster with 10 servers, 64 OSDs and 5 Mons on them. The OSDs are 3TB
disk, formatted with btrfs and the servers are either on Ubuntu 12.10 or 13.04.
Recently one of the servers (13.04) stood still (due to problems with btrfs -
something we have seen a few times). I decided to not try
Hi Sam,
Thanks for your reply here. Unfortunately I didn't capture all this
data at the time of the issue. What I do have I've pasted below. FYI the only
way I found to fix this issue was to temporarily reduce the number of replicas
in the pool to 1. The stuck pgs then disappeared and so
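Roughly what that replica-count workaround looks like; the pool name 'data' is a placeholder:
ceph osd pool set data size 1    # drop to a single replica; the stuck pgs clear
ceph osd pool set data size 2    # then restore the original replica count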
On Tue, 13 Aug 2013, Dmitry Postrigan wrote:
>
> I just got my small Ceph cluster running. I run 6 OSDs on the same server to
> basically replace mdraid.
>
> I have tried to simulate a hard drive (OSD) failure: removed the OSD
> (out+stop), zapped it, and then
> prepared and activated it. It wo
On 08/13/2013 02:56 AM, Dmitry Postrigan wrote:
I am currently installing some backup servers with 6x3TB drives in them. I
played with RAID-10 but I was not
impressed at all with how it performs during a recovery.
Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be
loc
On Tue, 13 Aug 2013, Giuseppe 'Gippa' Paterno' wrote:
> Hi all,
> my Debian 7 wheezy machine died with the following in the logs:
> http://pastebin.ubuntu.com/5981058/
>
> It's using kvm and ceph as an rbd device.
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
>
> Can you give m
Interesting,
So if I change 'auth service ticket ttl' to 172800, in theory I could go
without a monitor for 48 hours?
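For illustration, a sketch of where that setting would live, assuming it is applied cluster-wide in ceph.conf:
[global]
    auth service ticket ttl = 172800    # 48 hours, in seconds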
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Monday, August 12, 2013 9:50 PM
To: Jeppesen, Nelson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ce
On Tue, 13 Aug 2013, Jeppesen, Nelson wrote:
> Interesting,
>
> So if I change 'auth service ticket ttl' to 172800, in theory I could go
> without a monitor for 48 hours?
If there are no up/down events, no new clients need to start, no osd
recovery going on, then I *think* so. I may be forge
Thank you for the explanation.
By mounting as a filesystem I'm talking about something similar to this:
http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/
Using the kernel module, I can mount a subdirectory into my directory
tree - a directory, where I have assigned a
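A minimal sketch of that kernel-client subdirectory mount; the monitor address, subdirectory, and secret file below are made up for illustration:
mount -t ceph 192.168.0.1:6789:/backups /mnt/backups \
    -o name=admin,secretfile=/etc/ceph/admin.secret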
On Tue, 13 Aug 2013, Georg Höllrigl wrote:
> Thank you for the explanation.
>
> By mounting as filesystem I'm talking about something similar to this:
> http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/
>
> Using the kernel module, I can mount a subdirectory into my
Thanks Joao,
Is there a doc somewhere on the dependencies? I assume I'll need to set up the
toolchain to compile?
Is there an easy way I can find the age and/or expiration of the service ticket
on a particular osd? Is that a file or just kept in ram?
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Tuesday, August 13, 2013 9:01 AM
To: Jeppesen, Nelson
Cc: ceph-users@lists.ceph.com
On Tue, 13 Aug 2013, Jeppesen, Nelson wrote:
> Is there an easy way I can find the age and/or expiration of the service
> ticket on a particular osd? Is that a file or just kept in ram?
It's just in ram. If you crank up debug auth = 10 you will periodically
see it dump the rotating keys and exp
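A sketch of turning that on at runtime; osd.0 is just a placeholder:
ceph tell osd.0 injectargs '--debug-auth 10'
# then watch /var/log/ceph/ceph-osd.0.log for the rotating-key dumps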
On 13/08/13 09:19, Jeppesen, Nelson wrote:
Thanks Joao,
Is there a doc somewhere on the dependencies? I assume I’ll need to
setup the tool chain to compile?
README on the ceph repo has the dependencies.
You could also try getting it from the gitbuilders [1], but I'm not sure
how you'd go a
>> I have tried to simulate a hard drive (OSD) failure: removed the OSD
>> (out+stop), zapped it, and then
>> prepared and activated it. It worked, but I ended up with one extra OSD (and
>> the old one still showing in the ceph -w output).
>> I guess this is not how I am supposed to do it?
> It
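For completeness, a sketch of the usual removal steps before re-adding a failed OSD, so the old id does not linger in 'ceph -w'; osd.5 and /dev/sdg are placeholders:
ceph osd out 5
# stop the ceph-osd daemon for osd.5 on its host, then remove every trace of it:
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5
# now the disk can be zapped and re-prepared, e.g. with ceph-disk:
ceph-disk zap /dev/sdg
ceph-disk prepare /dev/sdg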
Adding back ceph-users; try not to turn public threads into private ones
when the problem hasn't been resolved.
On 08/13/2013 04:42 AM, Joshua Young wrote:
So I put the journals on their own partitions and they worked just
fine. All night they were up doing normal operations. When running
initc
It really sounds like you're looking for a better RAID system, not a
distributed storage system.
I've been using ZFS on FreeBSD for years. The Linux port meets nearly
all of your needs, while acting more like a conventional software RAID.
BtrFS also has a lot of these features, but I'm not f
Hi,
I'd just like to echo what Wolfgang said about ceph being a complex system.
I initially started out testing ceph with a setup much like yours. And
while it overall performed OK, it was not as good as sw raid on the same
machine.
Also, as Mark said, you'll have at the very best half write speeds b
You can run 'ceph pg 0.cfa mark_unfound_lost revert'. (Revert Lost
section of http://ceph.com/docs/master/rados/operations/placement-groups/).
-Sam
On Tue, Aug 13, 2013 at 6:50 AM, Jens-Christian Fischer
wrote:
> We have a cluster with 10 servers, 64 OSDs and 5 Mons on them. The OSDs are
> 3TB di
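For context, a short sketch of the checks usually done around that revert; 0.cfa is the pg id Sam mentions above:
ceph health detail            # shows which pgs report unfound objects
ceph pg 0.cfa list_missing    # lists the unfound objects in that pg
ceph pg 0.cfa mark_unfound_lost revert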
Version?
-Sam
On Tue, Aug 13, 2013 at 7:52 AM, Howarth, Chris wrote:
> Hi Sam,
> Thanks for your reply here. Unfortunately I didn't capture all this
> data at the time of the issue. What I do have I've pasted below. FYI the only
> way I found to fix this issue was to temporarily reduce
Cool!
-Sam
On Tue, Aug 13, 2013 at 4:49 AM, Jeff Moskow wrote:
> Sam,
>
> Thanks that did it :-)
>
>health HEALTH_OK
>monmap e17: 5 mons at
> {a=172.16.170.1:6789/0,b=172.16.170.2:6789/0,c=172.16.170.3:6789/0,d=172.16.170.4:6789/0,e=172.16.170.5:6789/0},
> election epoch 9794, quorum
Hi Sijo
On Mon, Aug 12, 2013 at 12:26 PM, Mathew, Sijo (KFRM 1) <
sijo.mat...@credit-suisse.com> wrote:
> Hi,
>
> I have been trying to get ceph installed on a single node. But I’m stuck
> with the following error.
>
> [host]$ ceph-deploy -v mon create ceph-server-29
I built the wip-monstore-copy branch with './configure --with-rest-bench
--with-debug' and 'make'. It worked and I get all the usual stuff but
ceph-monstore-tool is missing. I see code in ./src/tools/. Did I miss something?
On Tue, Aug 13, 2013 at 3:21 AM, Pavel Timoschenkov <
pa...@bayonetteas.onmicrosoft.com> wrote:
> Hi.
> Yes, I zapped all the disks beforehand.
>
> More about my situation:
> sdaa - one of the data disks: 3 TB with a GPT partition table.
> sda - SSD drive with manually created partitions (10 GB) for the journal
Hmm. This sounds very similar to the problem I reported (with
debug-mon = 20 and debug ms = 1 logs as of today) on our support site
(ticket #438) - Sage, please take a look.
On Mon, Aug 12, 2013 at 9:49 PM, Sage Weil wrote:
> On Mon, 12 Aug 2013, Jeppesen, Nelson wrote:
>> Joao,
>>
>> (log file
Hi,
Installed ceph-deploy_1.2.1 via rpm but it looks like it needs pushy>=0.5.2,
which I couldn't find in the repository. Please advise.
[host]$ ceph-deploy mon create ceph-server-299
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 21, in
main()
File "/usr/lib/pytho
On 13/08/13 13:09, Jeppesen, Nelson wrote:
I built the wip-monstore-copy branch with ‘./configure --with-rest-bench
--with-debug’ and ‘make’. It worked and I get all the usual stuff but
ceph-monstore-tool is missing. I see code in ./src/tools/. Did I miss something?
That usually builds it fo
Never mind, I removed --with-rest-bench and it worked.
> I built the wip-monstore-copy branch with './configure --with-rest-bench
> --with-debug' and 'make'. It worked and I get all the usual stuff but
> ceph-monstore-tool is missing. I see code in ./src/tools/. Did I miss something?
Hi,
I am planning to use Ceph as database storage for a webmail
client/server application, and I am thinking of storing the data as
key/value pairs instead of using any RDBMS, for speed. The webmail
will manage companies, and each company will have many users; users
will send/receive emails and stor
On Tue, 13 Aug 2013, Mandell Degerness wrote:
> Hmm. This sounds very similar to the problem I reported (with
> debug-mon = 20 and debug ms = 1 logs as of today) on our support site
> (ticket #438) - Sage, please take a look.
Hi Mandell,
It's a different issue. In your case we need a bit more i
On Mon, 5 Aug 2013, Mike Dawson wrote:
> Josh,
>
> Logs are uploaded to cephdrop with the file name mikedawson-rbd-qemu-deadlock.
>
> - At about 2013-08-05 19:46 or 47, we hit the issue, traffic went to 0
> - At about 2013-08-05 19:53:51, ran a 'virsh screenshot'
>
>
> Environment is:
>
> - Ce
Hi Oliver,
(Posted this on the bug too, but:)
Your last log revealed a bug in the librados aio flush. A fix is pushed
to wip-librados-aio-flush (bobtail) and wip-5919 (master); can you retest
please (with caching off again)?
Thanks!
sage
On Fri, 9 Aug 2013, Oliver Francke wrote:
> Hi Josh,
Joao,
ceph-monstore-tool --mon-store-path /var/lib/ceph/mon/ceph-2 --out
/var/lib/ceph/mon/ceph-1 --command store-copy
is running now. It hit 52MB very quickly, then nothing more, just lots of
disk reads, which is what I'd expect. It's reading fast and I expect it to
finish in 35min.
Just to make sure, t
>
> This looks like a different issue than Oliver's. I see one anomaly in the
> log, where an rbd io completion is triggered a second time for no apparent
> reason. I opened a separate bug
>
> http://tracker.ceph.com/issues/5955
>
> and pushed wip-5955 that will hopefully shine some light
On 13/08/13 14:46, Jeppesen, Nelson wrote:
Joao,
ceph-monstore-tool --mon-store-path /var/lib/ceph/mon/ceph-2 --out
/var/lib/ceph/mon/ceph-1 --command store-copy
is running now. It hit 52MB very quickly, then nothing more, just lots of
disk reads, which is what I’d expect. It's reading fast and I expect
2 is certainly an intriguing option. RADOS isn't really a database
engine (even a nosql one), but should be able to serve your needs
here. Have you seen the omap api available in librados? It allows
you to efficiently store key/value pairs attached to a librados object
(uses leveldb on the OSDs
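A minimal sketch of that omap interface using the rados CLI; the pool 'mail' and object 'user.alice' are invented for illustration:
rados -p mail setomapval user.alice msg:0001 "pointer or small blob for message 0001"
rados -p mail listomapkeys user.alice
rados -p mail getomapval user.alice msg:0001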
Success! It was pretty quick too, maybe 20-30min. It’s now at 100MB.
In a matter of minutes I was able to add two monitors and now I’m back to three
monitors.
Thank you again, Joao and Sage! I can sleep at night now knowing that a single
node won't take down the cluster anymore ☺
On 13/08/13 16:13, Jeppesen, Nelson wrote:
Success! It was pretty quick too, maybe 20-30min. It’s now at 100MB.
In a matter of minutes I was able to add two monitors and now I’m back to three
monitors.
Thank you again, Joao and Sage! I can sleep at night now knowing that a single
node won't take
On Aug 12, 2013, at 7:41 PM, Josh Durgin wrote:
> On 08/12/2013 07:18 PM, PJ wrote:
>>
>> If the target rbd device is only mapped on one virtual machine, format it as
>> ext4 and mount it in two places:
>> mount /dev/rbd0 /nfs --> for nfs server usage
>> mount /dev/rbd0 /ftp --> for ftp server usage
On Tue, Aug 13, 2013 at 4:20 PM, Mathew, Sijo (KFRM 1) <
sijo.mat...@credit-suisse.com> wrote:
> Hi,
>
> Installed ceph-deploy_1.2.1 via rpm but it looks like it needs
> pushy>=0.5.2, which I couldn’t find in the repository. Please advise.
>
Can you try again? It seems we left out
Another three months have gone by, and the next stable release of Ceph is
ready: Dumpling! Thank you to everyone who has contributed to this
release!
This release focuses on a few major themes since v0.61 (Cuttlefish):
* rgw: multi-site, multi-datacenter support for S3/Swift object storage
*