Hi Pragya
Let me try to answer these.
1# The decision is based on your use case (performance, reliability). If
you need high performance out of your cluster, the deployer will create a pool
on SSDs and assign this pool to applications which require higher I/O. For example:
if you integrate op
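Not from Karan's mail, just a minimal command-line sketch of that idea; the pool name fast-pool, the ruleset id and the PG count are made up for illustration:
$ ceph osd crush rule create-simple ssd-rule ssd host   # assumes a CRUSH root named "ssd" holding the SSD OSDs
$ ceph osd pool create fast-pool 128 128                # pool for the high-I/O applications
$ ceph osd pool set fast-pool crush_ruleset 1           # firefly-era option name (newer releases call it crush_rule); check the id with "ceph osd crush rule dump"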
(re-cc'ing list without log file)
Did you change anything in the cluster aside from Ubuntu's version?
Were any upgrades performed? If so, from which version to which version
and on which monitors?
-Joao
On 07/15/2014 03:18 AM, Richard Zheng wrote:
Thanks Joao. Not sure if the log contain
We used to have Ubuntu 12.04 with Ceph 0.80.1. The upgrade just moves us to
Ubuntu 14.04. We have three servers which run 1 mon and 10 OSDs
each. The other 2 servers are OK.
Before the upgrade we didn't enable upstart and just manually started the mon on
all three nodes, e.g. start ceph-mon id=storag
Hi Sage,
since this problem is tunables-related, do we need to expect the same behavior
or not when we do regular data rebalancing caused by adding/removing an
OSD? I guess not, but I would like your confirmation.
I'm already on optimal tunables, but I'm afraid to test this by, for example,
shutting down 1 OSD.
I found the problem. dpkg on storage1 shows Ceph running 0.80.1 while the
other 2 are running 0.80.2. The upgrade followed the same method, so I
don't understand why. I have tried apt-get update and apt-get upgrade on
storage1. How do I bring it to 0.80.2?
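Not part of the original exchange, but a rough sketch of how one might check what apt sees on storage1 and pull the newer build, assuming the same Ceph repository is configured on all three servers:
$ apt-cache policy ceph                                  # compare Installed vs. Candidate versions per repository
$ sudo apt-get update
$ sudo apt-get install --only-upgrade ceph ceph-common   # upgrade only the Ceph packages
$ ceph --version                                         # should now report 0.80.2; restart the daemons so they pick it up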
On Tue, Jul 15, 2014 at 12:30 AM, Richa
Hello guys,
I was wondering if there has been any progress on getting multipath iSCSI to play
nicely with Ceph? I've followed the how-to and created a single-path iSCSI target over
a Ceph RBD with XenServer. However, it would be nice to have built-in failover
using iSCSI multipath to another ceph mon or
Can you connect to your Ceph cluster?
You can pass options on the command line like this:
$ qemu-img create -f rbd
rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf 2G
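As a quick connectivity sanity check (not from Sébastien's mail; it just reuses the same client id and conf file shown above):
$ ceph -s --id leseb --conf /etc/ceph/ceph-leseb.conf   # prints cluster status if the client can reach the monitors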
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99
Hi,
to avoid confusion I would name the "host" entries in the crush map
differently. Make sure these host names can be resolved to the correct
boxes though (/etc/hosts on all the nodes). You're also missing a new
rule entry (also shown in the link you mentioned).
Lastly, and this is *extremely* i
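For reference, a sketch of the usual decompile/edit/recompile cycle for renaming the host buckets and adding the missing rule; the file names are arbitrary:
$ ceph osd getcrushmap -o crushmap.bin        # export the compiled CRUSH map
$ crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
$ vi crushmap.txt                             # rename the host entries, add the new rule block
$ crushtool -c crushmap.txt -o crushmap.new   # recompile
$ ceph osd setcrushmap -i crushmap.new        # inject the edited map back into the cluster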
Does anyone have a guide or reproducible method of getting multipath iSCSI
working in front of Ceph? Even if it just means having two front-end iSCSI
targets, each with access to the same underlying Ceph volume?
This seems like a super popular topic.
Thanks,
-Drew
One other question: if you are going to be using Ceph as a storage system for
KVM virtual machines, does it even matter whether you use iSCSI or not?
Meaning that if you are just going to use LVM and have several hypervisors
sharing that same VG, then using iSCSI isn't really a requirement unless you a
hi, all:
I created an erasure-coded pool and then tried to create a 1 GB image named foo
in that pool; however, the operation failed with the following error:
root@mon1:~# rbd create foo --size 1024 --pool ecpool
rbd: create error: (95) Operation not supported
2014-07-13 10:32:55.311
Drew, I would not use iSCSI with KVM; instead, I would use the built-in RBD
support.
However, you would use something like NFS/iSCSI if you were to connect other
hypervisors to the Ceph backend. Having failover capabilities is important here ))
Andrei
--
Andrei Mikhailovsky
Director
Arhont Infor
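A minimal sketch of the built-in RBD route with KVM/QEMU; the pool name rbd, image name vmdisk01 and client id admin are assumptions, not taken from Andrei's mail:
$ qemu-img info rbd:rbd/vmdisk01:id=admin                              # sanity check that QEMU's RBD driver can open the image
$ qemu-system-x86_64 -m 2048 \
    -drive format=raw,file=rbd:rbd/vmdisk01:id=admin,cache=writeback   # attach the RBD image directly, no iSCSI in between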
Hi All,
I am using Ceph 0.80.1 on Ubuntu 14.04 on KVM. However, I cannot connect to
the MON from a client using ceph-fuse.
On the client, I installed ceph-fuse 0.80.1 and added the fuse module. But I
think something is wrong. The result is:
# modprobe fuse
(no output)
# lsmod | grep fuse
(no out
On Tue, 15 Jul 2014, Andrija Panic wrote:
> Hi Sage, since this problem is tunables-related, do we need to expect
> the same behavior or not when we do regular data rebalancing caused by
> adding/removing an OSD? I guess not, but would like your confirmation.
> I'm already on optimal tunables, but
hi,
there may be 2 ways, but note that CephFS is not production-ready.
1. You can use a file stored in CephFS as the target's backing store.
2. There is rbd.ko, which maps an RBD image as a block device that you can
assign to the target (a rough sketch follows below).
I have not tested this yet.
Good luck
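A sketch of option 2, assuming a pool named rbd and an image named iscsi-vol that already exist; the mapped device can then be handed to the iSCSI target as its backstore:
$ sudo modprobe rbd                        # load the kernel RBD driver (rbd.ko)
$ sudo rbd map rbd/iscsi-vol --id admin    # maps the image, typically as /dev/rbd0
$ sudo rbd showmapped                      # confirm which /dev/rbdX to configure as the target's backstore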
At 2014-07-15 09:18:53, "Drew Weaver" wrote:
O
hi, all,
I took a glance at the Ceph code in cls_rbd.cc.
It seems that snap and clone both do not read or write any data; they just add some
keys and values, even for RBDs in different pools.
Am I missing something? Or could you explain the implementation of snap and clone
in more depth?
Thanks very much.
It's generally recommended that you use disks in JBOD mode rather than
involving RAID.
-Greg
On Monday, July 14, 2014, 不坏阿峰 wrote:
> I have installed and tested Ceph on VMs before; I know a bit about
> configuration and installation.
> Now I want to use physical PC servers to install Ceph and do some Test
Are you saturating your network bandwidth? That's what it sounds like. :)
-Greg
On Monday, July 14, 2014, baijia...@126.com wrote:
> hi, everyone!
>
> I am testing RGW GET object ops. When I use 100 threads to get one and the same
> object, I find that performance is very good; meanResponseTime is 0.1 s.
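A rough back-of-the-envelope check of Greg's point (the object size is not stated above, so 1 MB is assumed purely for illustration): 100 threads each completing a request every 0.1 s is about 1000 requests/s, i.e. roughly 1000 MB/s or 8 Gbit/s of payload, which would already saturate a 10GbE link before any protocol overhead.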
You can't use erasure coded pools directly with RBD. They're only suitable
for use with RGW or as the base pool for a replicated cache pool, and you
need to be very careful/specific with the configuration. I believe this is
well-documented, so check it out! :)
-Greg
On Saturday, July 12, 2014, qix
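For reference, a rough sketch of the cache-tier layout Greg describes; the pool names, PG counts and cache mode below are illustrative assumptions:
$ ceph osd pool create ecpool 128 128 erasure     # erasure-coded base pool
$ ceph osd pool create cachepool 128 128          # replicated pool to front it
$ ceph osd tier add ecpool cachepool              # attach cachepool as a tier of ecpool
$ ceph osd tier cache-mode cachepool writeback    # cache absorbs writes and flushes them to the EC pool
$ ceph osd tier set-overlay ecpool cachepool      # route client I/O through the cache
$ rbd create foo --size 1024 --pool ecpool        # RBD ops now hit the replicated cache tier first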
Hi All,
There is a new release of ceph-deploy, the easy deployment tool
for Ceph.
There is a minor cleanup when ceph-deploy disconnects from remote hosts, which was
creating some tracebacks. And there is a new flag for the `new` subcommand that
allows you to specify an fsid for the cluster.
The full l
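The announcement above does not show the exact invocation, but presumably it looks something along these lines (the flag spelling is an assumption):
$ uuidgen                                    # generate an fsid you want to reuse
$ ceph-deploy new --fsid <that-uuid> node1   # assumed flag name; supplies the fsid instead of letting ceph-deploy pick one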
After upgrading from 0.80.1 to 0.80.3 I see many recurring messages in every OSD log:
2014-07-15 19:44:48.292839 7fa5a659f700 0 osd.5 62377 crush map has features
2199057072128, adjusting msgr requires for mons
(the constant part is: "crush map has features 2199057072128, adjusting msgr requires
for mons")
What did ceph-fuse output to its log file or the command line?
On Tuesday, July 15, 2014, Jaemyoun Lee wrote:
> Hi All,
>
> I am using ceph 0.80.1 on Ubuntu 14.04 on KVM. However, I cannot connect
> to the MON from a client using ceph-fuse.
>
> On the client, I installed the ceph-fuse 0.80.1 and
Dzianis Kahanovich writes:
After upgrading from 0.80.1 to 0.80.3 I see many recurring messages in every OSD log:
2014-07-15 19:44:48.292839 7fa5a659f700 0 osd.5 62377 crush map has features
2199057072128, adjusting msgr requires for mons
(the constant part is: "crush map has features 2199057072128, adjusting
Okay, first the basics: cls_rbd.cc operates only on rbd header
objects, so it's doing coordinating activities, not the actual data
handling. When somebody does an operation on an rbd image, they put
some data in the header object so that everybody else can coordinate
(if it's open) or continue (if
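One way to see that coordination metadata directly (a sketch, assuming a format-2 image called myimage in the rbd pool; the image id comes from the block_name_prefix shown by rbd info):
$ rbd info rbd/myimage | grep block_name_prefix       # e.g. "rbd_data.1234567890ab" -> image id 1234567890ab
$ rados -p rbd listomapvals rbd_header.1234567890ab   # snapshot and clone records are just omap key/value pairs here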
There is no output because ceph-fuse fell into an infinite loop, as
I explained below.
Where can I find the log file of ceph-fuse?
Jae.
On 2014. 7. 16. at 1:59 AM, "Gregory Farnum" wrote:
> What did ceph-fuse output to its log file or the command line?
>
> On Tuesday, July 15, 2014, Jaemyoun Lee
Dzianis Kahanovich writes:
Dzianis Kahanovich writes:
After upgrading from 0.80.1 to 0.80.3 I see many recurring messages in every OSD log:
2014-07-15 19:44:48.292839 7fa5a659f700 0 osd.5 62377 crush map has features
2199057072128, adjusting msgr requires for mons
(the constant part is: "crush map has featur
On Tue, Jul 15, 2014 at 10:15 AM, Jaemyoun Lee wrote:
> There is no output because ceph-fuse fell into an infinite loop, as
> I explained below.
>
> Where can I find the log file of ceph-fuse?
It defaults to /var/log/ceph, but it may be empty. I realize the task
may have hung, but I'm prett
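A sketch of running ceph-fuse in the foreground with client debugging turned up, so any output lands on the terminal rather than in a possibly empty log file; the monitor address and mountpoint are placeholders:
$ sudo ceph-fuse -d -m 192.168.0.10:6789 /mnt/cephfs --debug-client 20   # -d keeps it in the foreground with debug output on stderr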
This Firefly point release fixes a potential data corruption problem
when ceph-osd daemons run on top of XFS and service Firefly librbd
clients. A recently added allocation hint that RBD utilizes triggers
an XFS bug on some kernels (Linux 3.2, and likely others) that leads
to data corruption and
Hi Randy,
This is the same kernel we reproduced the issue on as well. Sam traced
this down to the XFS allocation hint ioctl we recently started using for
RBD. We've just pushed out a v0.80.4 firefly release that disables the
hint by default. It should stop the inconsistencies from popping up
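For clusters that cannot be upgraded straight away, the hint can presumably also be switched off by hand in ceph.conf; the option name below is an assumption drawn from the XFS extsize behaviour described above, so verify it against your release before relying on it:
[osd]
# hedged assumption: disable the XFS extsize allocation hint, which v0.80.4 reportedly turns off by default
filestore xfs extsize = false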
Thank you very much, Karan, for your explanation.
Regards
Pragya Jain
On Tuesday, 15 July 2014 1:53 PM, Karan Singh wrote:
>
>
>Hi Pragya
>
>
>Let me try to answer these.
>
>
>1# The decision is based on your use case (performance, reliability). If
>you need high performance out of yo
On 07/16/2014 07:58 AM, lakshmi k s wrote:
Hello Ceph Users -
My Ceph setup consists of 1 admin node, 3 OSDs, 1 radosgw and 1 client.
One of the OSD nodes also hosts a monitor. Ceph health is OK and I have
verified that radosgw is running. I have created S3 and Swift users using
radosgw-admin. But whe