Hi,
Today I installed the ceph deploy node on a common PC (AMD Athlon 64 X2
Dual Core Processor 5000+, 2G memory, one SATA Disk).
The OS: Ubuntu 12.04 with kernel 3.8.0-33-generic
I have followed this guide:
http://ceph.com/docs/master/start/quick-start-preflight/
I double checked the apt-ke
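In case it helps, this is how I verified the repository key and package source after following that guide (the grep pattern is only a guess at how the key is labelled):
$ sudo apt-key list | grep -i ceph   # check the Ceph release key was imported
$ apt-cache policy ceph-deploy       # check which repository the package would come from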
Hi Gregory,
Found the solution to my mounting problem today.
You were right, the error message: libceph: mon0 10.100.214.11:6789 feature set
mismatch, my 4008a < server's 4004008a, missing 4000
comes from a wrong hashpspool setting. The correct command to clear this flag is:
ceph osd pool set
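A sketch of the full form, assuming the pool in question is called rbd (the pool name here is only an example):
$ ceph osd pool set rbd hashpspool false   # clear the HASHPSPOOL flag so the older kernel client can connect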
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Monday, November 18, 2013 6:34 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry
>
>I went ahead and created a ticket to track
No, you wouldn’t need to re-replicate the whole disk for a single bad sector.
The way to deal with that if the object is on the primary is to remove the file
manually from the OSD’s filesystem and perform a repair of the PG that holds
that object. This will copy the object back from one of th
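Concretely, that sequence looks roughly like this, with a pool called rbd, an object called myobject and PG 2.30 all standing in as examples:
$ ceph osd map rbd myobject   # find the PG that holds the object and which OSD is primary
  (remove the bad copy of the object file by hand from that OSD's data directory)
$ ceph pg repair 2.30         # the repair then copies the object back from a surviving replica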
I went ahead and created a ticket to track this, if you have any new
input, please make sure you add to
the actual ticket: http://tracker.ceph.com/issues/6793
Thanks for reporting the problem!
On Fri, Nov 15, 2013 at 4:22 PM, Alfredo Deza wrote:
> On Fri, Nov 15, 2013 at 2:53 PM, Gruher, Joseph
Hi David,
On Fri, Nov 15, 2013 at 10:00:37AM -0800, David Zafman wrote:
>
> Replication does not occur until the OSD is “out.” This creates a new
> mapping in the cluster of where the PGs should be and thus data begins to
> move and/or create sufficient copies. This scheme lets you control ho
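The knobs involved look roughly like this (the OSD id is an example, and the interval value is only illustrative):
$ ceph osd out 12   # mark osd.12 out, so its PGs get remapped and re-replicated elsewhere
$ ceph osd in 12    # bring it back in if the outage was only temporary
# in ceph.conf: mon osd down out interval = 300   # seconds a down OSD waits before being marked out automatically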
Hi guys,
in the past we've used intel 520 ssds for ceph journal - this worked
great and our experience was good.
Now they started to replace the 520 series with their new 530.
When we did we were surprised by the poor performance, and I needed some
days to reproduce it.
While O_DIRECT works fine for b
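For anyone who wants to reproduce this, a rough way to mimic the journal's small synchronous writes is a dd with the direct/dsync flags (the device name and count are placeholders, and this overwrites the target device):
$ dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync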
On Sun, Nov 17, 2013 at 9:45 PM, Dnsbed Ops wrote:
> Hi,
>
> Today I installed the ceph deploy node on a common PC (AMD Athlon 64 X2 Dual
> Core Processor 5000+, 2G memory, one SATA Disk).
>
> The OS: Ubuntu 12.04 with kernel 3.8.0-33-generic
>
> I have followed this guide:
> http://ceph.com/docs/
OK, that's good (as far as it goes, being a manual process).
So then, back to what I think was Mihály's original issue:
> pg repair or deep-scrub cannot fix this issue. But if I
> understand correctly, the OSD has to know it cannot retrieve the
> object from osd.0 and it needs to be replicated to another o
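For reference, the commands involved here are roughly the following (the PG id is just an example):
$ ceph health detail        # lists the PGs currently flagged inconsistent
$ ceph pg deep-scrub 2.30   # re-run a deep scrub on one PG
$ ceph pg repair 2.30       # then ask it to repair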
Hi all,
I've uploaded it via github - https://github.com/waipeng/nfsceph. Standard
disclaimer applies. :)
Actually, #3 is a novel idea; I had not thought of it. Thinking about the
difference just off the top of my head though, comparatively, #3 will have
1) more overhead (because of the additio
Hi Dima,
Benchmark FYI.
$ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
Version 1.97        --Sequential Create--         --Random Create--
altair              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
On 09 Nov 2013, at 1:46, Gregory Farnum wrote:
> On Fri, Nov 8, 2013 at 8:49 AM, Listas wrote:
>> Hi !
>>
>> I have clusters (IMAP service) with 2 members configured with Ubuntu + Drbd
>> + Ext4. Intend to migrate to the use of Ceph and begin to allow distributed
>> access to the data.
>>
On Mon, Nov 18, 2013 at 02:38:42PM +0100, Stefan Priebe - Profihost AG wrote:
> Hi guys,
>
> in the past we've used intel 520 ssds for ceph journal - this worked
> great and our experience was good.
>
> Now they started to replace the 520 series with their new 530.
>
> When we did we were supric
I looked at the code. The automatic repair should handle getting an EIO during
read of the object replica. It does NOT require removing the object as I said
before, so it doesn’t matter which copy has bad sectors. It will copy from a
good replica to the primary, if necessary. By default a d
Thanks. I also found the version was incorrect; I changed it and
re-installed, and it was successful.
Regards.
On 2013-11-19 4:54, Alfredo Deza wrote:
Have you edited the output? It looks like you have the URLs that have
the variable `ceph-stable-release` in them:
http://ceph.com/debian-{ceph-stable-re
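After substituting the release name by hand, the line should end up looking something like this (emperor is just an example of the release name):
$ echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list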
Hi David,
Thanks for taking the time to look into this for us.
Isn't the checksum calculated over the data? If so, wouldn't it
then be easy for ceph to tell which copy is good (because the
checksum matches), so that an automatic repair would be possible?
Is the lack of this functionality once aga
All,
We've just discovered an issue that impacts some users running any of
the "ceph osd pool set" family of commands while some of their
monitors are running Dumpling and some are running Emperor. Doing so
can result in the commands being interpreted incorrectly and your
cluster being accidentall
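A quick way to confirm which version each monitor is actually running is the admin socket on each monitor host (the mon id and socket path below are examples):
$ ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok version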
Hi,
Since the /dev/sdX device locations can shuffle around (and that would mess
things up) I'd like to use a more persistent device path.
Since I'd like to be able to replace a disk without adjusting anything (e.g.
just formatting the disk), /dev/disk/by-path seems the best fit.
However,
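Roughly what I have in mind (the hostname and by-path name below are made up):
$ ls -l /dev/disk/by-path/   # see which persistent path currently maps to which /dev/sdX
$ ceph-deploy osd prepare ceph3:/dev/disk/by-path/pci-0000:00:1f.2-ata-1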
Hi,
When an OSD node server restarted, I found the OSD daemon didn't get
started.
I had to run these two commands from the deploy node to restart them:
ceph-deploy osd prepare ceph3.anycast.net:/tmp/osd2
ceph-deploy osd activate ceph3.anycast.net:/tmp/osd2
My questions are,
#1, can it be set
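Related to #1, a sketch of restarting a single OSD daemon directly on the OSD host (the id is an example; on an Ubuntu/ceph-deploy setup the daemon is normally driven by upstart, on other setups by sysvinit):
$ sudo start ceph-osd id=2        # upstart
$ sudo service ceph start osd.2   # sysvinit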
On 19/11/13 18:56, Robert van Leeuwen wrote:
Hi,
Since the /dev/sdX device locations can shuffle around (and that would mess
things up) I'd like to use a more persistent device path.
Since I'd like to be able to replace a disk without adjusting anything (e.g.
just formatting the disk) the
Hello,
I deployed one monitor daemon on a separate server successfully,
but I can't deploy it together with the OSD node.
I ran the deployment command and got:
ceph@172-17-6-65:~/my-cluster$ ceph-deploy mon create ceph3.geocast.net
[ceph_deploy.cli][INFO ] Invoked (1.3.2): /usr/bin/ceph-deploy
On 19/11/2013 03:19, Gregory Farnum wrote:
> All,
>
> We've just discovered an issue that impacts some users running any of
> the "ceph osd pool set" family of commands while some of their
> monitors are running Dumpling and some are running Emperor. Doing so
> can result in the commands being i