Hi Greg,
Thanks.
We need end-to-end (client disk to OSD disk) latency/throughput for READs
and WRITEs. Writes can be made write-through, but we are having difficulties
with reads.
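For reference, the kind of measurement we have in mind is a direct-I/O fio run
against the mapped device, so the client page cache stays out of the numbers; a
rough sketch (the device name /dev/rbd1, block size, and run length are only
placeholders):

# destructive: writes to the raw device, so only run it against a scratch image
fio --name=e2e-lat --filename=/dev/rbd1 --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=50 --bs=4k --iodepth=1 --runtime=60 --time_based \
    --group_reporting

The per-I/O completion latencies (clat) in the output would be what we read as
the end-to-end numbers.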
Thanks
Mudit
On 31-Jan-2015 5:03 AM, "Gregory Farnum" wrote:
> I don't think there's any way to force the OSDs to do
Thanks Lionel, we are using btrfs compression and it's also stable in our
cluster.
Currently another minor problem with btrfs fragmentation is that sometimes the
btrfs-transacti process can pause I/O on the whole OSD node for seconds, impacting
all OSDs on the server, especially when doing recovery / ba
One thing that can cause this is messed-up partition IDs / typecodes. Check
out the ceph-disk script to see how they get applied. I have a few systems
that somehow got messed up -- at boot they don't get started, but if I mounted
them manually on /mnt, checked out the whoami file and remoun
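In case it helps, the typecode can be inspected and re-applied with sgdisk; a
rough sketch, where the disk and partition number are made up and the GUID
should be double-checked against the ceph-disk source mentioned above:

sgdisk --info=1 /dev/sdb
# re-apply the OSD data typecode that the udev rules key off
sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
partprobe /dev/sdb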
I don't think there's any way to force the OSDs to do that. What
exactly are you trying to do?
-Greg
On Fri, Jan 30, 2015 at 4:02 AM, Mudit Verma wrote:
> Hi All,
>
> We are working on a project where we are planning to use Ceph as storage.
> However, for one experiment we are required to disable
Hi,
I have ceph giant installed and have installed/compiled calamari, but I am getting
"calamari server error 503 detail rpc error lost remote after 10s heartbeat".
It seems calamari doesn't have contact with ceph for some reason.
Is there any way to configure calamari manually to get status and fix the 503 error?
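As far as I understand, that heartbeat comes in over the salt minions calamari
relies on, so the checks would presumably be along these lines (stock salt
packaging assumed; ceph.get_heartbeats is the calamari salt module call, if it
is installed):

# on each ceph node: is the salt minion running and pointed at the calamari host?
sudo service salt-minion status
grep -r '^master' /etc/salt/minion /etc/salt/minion.d/ 2>/dev/null
# on the calamari server: can it reach and query the minions?
sudo salt '*' test.ping
sudo salt '*' ceph.get_heartbeats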
Hi Bruce,
you can also look on the mon, like
ceph --admin-daemon /var/run/ceph/ceph-mon.b.asok config show | grep cache
(I guess you have a number instead of the .b.)
Udo
On 30.01.2015 22:02, Bruce McFarland wrote:
>
> The ceph daemon isn’t running on the client with the rbd device so I
> can’t
The ceph daemon isn't running on the client with the rbd device, so I can't
verify whether it's disabled at the librbd level on the client. If you mean on the
storage nodes, I've had some issues dumping the config. Does the rbd caching
occur on the storage nodes, the client, or both?
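For reference, I gather librbd consumers can expose their own admin socket if
ceph.conf on the client has something like the snippet below, though I'm not
sure it applies to a kernel-mapped /dev/rbd device (the path is just an
example):

[client]
    admin socket = /var/run/ceph/$name.$pid.asok

After restarting the librbd user, the same 'config show | grep rbd_cache' trick
should then work against that socket.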
From: Udo Lembke [ma
Hi Bruce,
hmm, that sounds to me like the rbd cache.
Can you check if the cache is really disabled in the running config with
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep cache
Udo
On 30.01.2015 21:51, Bruce McFarland wrote:
>
> I have a cluster and have created a rbd device -
I have a cluster and have created an rbd device - /dev/rbd1. It shows up as
expected with 'rbd --image test info' and 'rbd showmapped'. I have been looking at
cluster performance with the usual Linux block device tools - fio and vdbench.
When I look at writes and large block sequential reads I'm see
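For what it's worth, a representative fio run for the large-block sequential
reads would be something like the line below (direct=1 keeps the client page
cache out of the numbers; the device name and sizes are only assumptions):

fio --name=seqread --filename=/dev/rbd1 --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --iodepth=16 --runtime=60 --time_based --group_reporting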
On 01/30/15 14:24, Luke Kao wrote:
>
> Dear ceph users,
>
> Has anyone tried adding the autodefrag mount option when using btrfs as
> the OSD storage?
>
>
>
> In some previous discussions it was mentioned that btrfs osd startup becomes
> very slow after it has been used for some time; we are just thinking that adding autodefrag will hel
All,
I built up a ceph system on my little development network, then tried to move
it to a different network. I edited the ceph.conf file, and fired it up and...
well, I discovered that I was a bit naive.
I looked through the documentation pretty carefully, and I can't see any list
of places
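From what I can tell so far, the monitor map is one of those places; a rough
sketch of what I plan to try (the mon id 'a' and the new address are
placeholders, and the mon daemon has to be stopped while the map is rewritten):

ceph-mon -i a --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
monmaptool --rm a /tmp/monmap
monmaptool --add a 192.168.1.10:6789 /tmp/monmap
ceph-mon -i a --inject-monmap /tmp/monmap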
Hi Karl,
Sorry that I missed this go by. If you are still hitting this issue,
I'd like to help you and figure this one out, especially since you are
not the only person to have hit it.
Can you pass along your system details (OS, version, etc.)?
I'd also like to know how you installed ceph-depl
About a year ago I was talking to j
On 01/30/2015 07:24 AM, Luke Kao wrote:
Dear ceph users,
Has anyone tried adding the autodefrag mount option when using btrfs as
the OSD storage?
Sort of. About a year ago I was looking into it, but Josef told me not
to use either defrag or autodefrag. (esp
oops, mangled the first part of that reply a bit. Need my morning
coffee. :)
On 01/30/2015 07:56 AM, Mark Nelson wrote:
About a year ago I was talking to j
On 01/30/2015 07:24 AM, Luke Kao wrote:
Dear ceph users,
Has anyone tried adding the autodefrag mount option when using btrfs as
the OSD
Dear ceph users,
Has anyone tried adding the autodefrag mount option when using btrfs as the OSD
storage?
In some previous discussions it was mentioned that btrfs OSD startup becomes very
slow after being used for some time, so we are thinking that adding autodefrag will help.
We will add it on our test cluster first to see
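For reference, what we have in mind is along these lines (the device and OSD
paths are only examples, and the ceph.conf option is how the init scripts mount
btrfs OSDs, if we read them correctly):

# one-off, on a mounted btrfs OSD
mount -o remount,autodefrag /var/lib/ceph/osd/ceph-0
# or persistently, so the option is applied whenever the OSD is mounted
[osd]
    osd mount options btrfs = rw,noatime,autodefrag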
Hi All,
We are working on a project where we are planning to use Ceph as storage.
However, for one experiment we are required to disable the caching on the OSDs
and on the client.
We want any data transaction in the filesystem to be served directly from the
OSDs' disks, without any cache involvement in between
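On the client side, the obvious knob we know of is the librbd cache; a minimal
ceph.conf sketch, assuming librbd is the client in question:

[client]
    rbd cache = false

It is the OSD side where we have not found an equivalent switch.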
I'm running Ubuntu 14.04 servers with Firefly and I don't have a sysvinit
file, but I do have an upstart file.
"touch /var/lib/ceph/osd/ceph-XX/upstart" should be all you need to do.
That way, the OSDs should be mounted automatically on boot.
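If you have several OSDs on the host, a quick sketch (the default data dir
layout is assumed):

for d in /var/lib/ceph/osd/ceph-*; do sudo touch "$d/upstart"; done
sudo start ceph-osd-all    # or just reboot and let upstart mount and start them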
On 30 January 2015 at 10:25, Alexis KOALLA wrote:
>
Hi Lindsay and Daniel,
Thanks for your replies.
Apologies for not specifying my LAB env details.
Here are the details:
OS: Ubuntu 14.04 LTS, Kernel 3.8.0-29-generic
Ceph version: Firefly 0.80.8
env: LAB
@Lindsay: I'm wondering if putting the mount command in fstab is new
to ceph or it is recom
Hi Mike,
Sorry to hear that, I hope this can help you to recover your RBD images:
http://www.sebastien-han.fr/blog/2015/01/29/ceph-recover-a-rbd-image-from-a-dead-cluster/
Since you don’t have your monitors, you can still walk through the OSD data dir
and look for the rbd identifiers.
Something
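Roughly, that walk could look like the line below (default filestore paths are
assumed, and the on-disk object names differ between image format 1 and 2, so
treat the patterns as approximations):

find /var/lib/ceph/osd/ceph-*/current -type f \( -name '*rb.0.*' -o -name '*rbd*data*' \) | head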