On Jan 24, 2014, at 11:08 AM, "Yan, Zheng" wrote:
> On Fri, Jan 24, 2014 at 5:03 PM, Arne Wiebalck wrote:
>> Hi,
>>
>> We're about to start looking more seriously into CephFS and were wondering
>> about the tradeoffs in our RHEL6.x based environment between using the
>> kernel client (on 3.
Hi all,
I'd like to report some strange behavior...
Context: lab platform
Ceph Emperor
ceph-deploy 1.3.4
Ubuntu 12.04
Issue:
We have 3 OSDs up and running; we encountered no difficulties in creating them.
We tried to create an osd.3 using ceph-deploy on a storage node (r-cephosd301)
from an admin
Hi John,
Thanks for your reply.
Can I use ceph-deploy new to deploy the new cluster? Or will everything
have to be done manually?
Looking forward to your reply, thank you.
Cheers.
On Sat, Jan 25, 2014 at 4:04 AM, John Spray wrote:
> Yes, you can have two different monitor daemons on the sa
Hi all,
I am trying to compile the Ceph source code, version 0.72, on CentOS.
I have compiled it successfully with './autogen.sh', './configure', 'make'.
The ceph programs then end up under the src/ dir. When I run the command
'./ceph -v' under the src/ dir, it shows me the output:
[root@node0 src]# ./ceph
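(For anyone following along, the sequence described above laid out as commands; nothing beyond what the post already mentions is assumed:)

./autogen.sh
./configure
make
cd src
./ceph -v     # run the freshly built client from the src/ dir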
On 27/01/2014 13:17, Tim Zhang wrote:
> Hi all,
> I am trying to compile the Ceph source code, version 0.72, on CentOS.
> I have compiled it successfully with './autogen.sh', './configure', 'make'.
> The ceph programs then end up under the src/ dir. When I run the command
> './ceph -v' under the src/ d
I have just tried this with ceph-deploy and it does indeed seem to
work. You have to do the following:
1. Pass a "--cluster=" argument to ceph-deploy with a new
cluster name (your first cluster will have been called 'ceph' by
default, call the new one something different)
2. After calling "ceph-de
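For reference, a rough sketch of step 1 as a command line (the cluster name 'backup' and the monitor hostnames mon1/mon2/mon3 are made-up placeholders; only ceph-deploy's --cluster option is taken from the advice above):

# create a second cluster alongside the default one, which is called 'ceph'
ceph-deploy --cluster backup new mon1 mon2 mon3
ceph-deploy --cluster backup mon create mon1 mon2 mon3
ceph-deploy --cluster backup gatherkeys mon1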
My impression is that 0.72 ceph-fuse is quite stable, but it's still
slower than the kernel cephfs driver.
Yan, Zheng
On Mon, Jan 27, 2014 at 4:51 PM, Arne Wiebalck wrote:
>
>
> On Jan 24, 2014, at 11:08 AM, "Yan, Zheng" wrote:
>
>> On Fri, Jan 24, 2014 at 5:03 PM, Arne Wiebalck wrote:
>>> Hi,
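For reference, the two client options being compared above look roughly like this on the command line (the monitor address, mount point and credential paths are placeholders):

# kernel CephFS client
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# FUSE client
ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs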
Hi,
Should I take this to mean that this may be a known issue with udev
on RHEL, then? For now we will add them to the fstab.
Thanks,
derek
On 1/25/14, 9:23 PM, Michael J. Kidd wrote:
> While clearly not optimal for long term flexibility, I've found that
> adding my OSDs to fstab allows th
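For reference, a rough sketch of the kind of fstab entry being discussed (the device /dev/sdb1, the OSD id 0 and the xfs filesystem type are placeholders, not taken from the thread; /var/lib/ceph/osd/ceph-<id> is the default OSD mount point):

# append the OSD mount to /etc/fstab
echo '/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0 0' >> /etc/fstab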
Hi Derek,
I'd like to get to the bottom of your problem. Is it that the monitors
don't start after a reboot? Is there an error in
/var/log/ceph/ceph-mon.`hostname`.log?
sage
On Mon, 27 Jan 2014, Derek Yarnell wrote:
> Hi,
>
> Should I take this to mean that this may be a known issue
On Mon, 27 Jan 2014, Derek Yarnell wrote:
> Hi Sage,
>
> Our clusters are slightly different, but no, the monitors start just fine.
> On our test and rgw clusters we run monitors co-located with our OSDs.
> The monitors start just fine. My understanding is that when booting,
> the hosts detect a d
Looks like you got lost over the Christmas holidays; sorry!
I'm not an expert on running rgw but it sounds like garbage collection
isn't running or something. What version are you on, and have you done
anything to set it up?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Su
These aren't really consumerized yet, so you pretty much have to
google and see if somebody's already discussed them or go through the
code. Not sure where they are on the priority list for docs.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Sun, Jan 26, 2014 at 7:34 PM, D
Hi Sage,
Our clusters are slightly different, but no, the monitors start just fine.
On our test and rgw clusters we run monitors co-located with our OSDs.
The monitors start just fine. My understanding is that when booting,
the hosts detect a disk hot-plug event in udev via the
/lib/udev/rules.d/9
Hi Andreas,
I have tried the cloud storage server option in NetBackup and it seems to
be failing at the authentication stage.
# tpconfig -add -storage_server rgw.local.lan -stype amazon_raw -sts_user_id 12345 -password 67890
Failed to open server connection to type amazon_raw server rgw.local.l
>> Our best guess so far is that this line is not matching the underlying
>> disk that is getting hotplugged (95-ceph-osd.rules). Is
>> ID_PART_ENTRY_TYPE just the partition UUID, or are we not understanding
>> the identifier correctly?
>>
>> ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff
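For what it's worth, ID_PART_ENTRY_TYPE is the GPT partition type GUID, not the per-partition unique GUID (that one is ID_PART_ENTRY_UUID). A quick way to check what udev actually sees for a partition (the device name /dev/sdb1 is a placeholder):

# show the udev properties of the partition
udevadm info --query=property --name=/dev/sdb1 | grep ID_PART_ENTRY
# or probe it directly with blkid
blkid -p -o udev /dev/sdb1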
Hi,
Having messed up my last RHEL6 Ceph cluster rather spectacularly, I decided to
build again from scratch using the latest versions of the various packages.
Not having proxy access does make it more of a faff to deploy Ceph, but I will
admit the latest version of ceph-deploy is a vast impro
On Mon, Jan 27, 2014 at 2:15 PM, wrote:
> Hi,
>
>
>
> Having messed up my last RHEL6 Ceph cluster rather spectacularly, I decided
> to build again from scratch using the latest versions of the various
> packages. Not having proxy access does make it more of a faff to deploy
> Ceph, but I will a
This fix has been merged, thanks Derek!
sage
On Mon, 27 Jan 2014, Derek Yarnell wrote:
> >> Our best guess so far is that this line is not matching the underlying
> >> disk that is getting hotplugged (95-ceph-osd.rules). Is
> >> ID_PART_ENTRY_TYPE just the partition UUID or are we not understa
Hi Markus,
The permissions should be as below:
drwxr-xr-x /etc/ceph
-rw-r--r-- /etc/ceph/ceph.conf
-rw-r--r-- /etc/ceph/ceph.client.admin.keyring
drwxr-xr-x /var/lib/ceph
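A minimal sketch of commands that would produce those modes (the listing above does not show ownership, so only the permission bits are set here; adjust owners to your setup):

chmod 755 /etc/ceph /var/lib/ceph
chmod 644 /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring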
Make sure to watch the following ticket as well:
http://tracker.ceph.com/issues/6825
Best Regards
Sherry
On Saturday, Januar
Hi all
I would like to run op and usage logging on a production cluster to aid in some
debugging. The problem is it would have to run for several weeks. Does
anyone have a feel for what the performance hit would be? Are large ops
logs and usage logs expensive?
Also, if I was to run with them on, h
On Mon, Jan 27, 2014 at 3:34 PM, Caius Howcroft wrote:
> Hi all
>
> I would like to run op and usage logging on a production cluster to aid in some
> debugging. The problem is it would have to run for several weeks. Does
> anyone have a feel for what the performance hit would be? Are large ops
> lo
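For reference, a rough sketch of the radosgw settings involved in turning these on (the section name client.radosgw.gateway is a placeholder for whatever your gateway instance is called; the two options are the standard switches for the ops and usage logs):

# append to ceph.conf on the gateway host, then restart radosgw
cat >> /etc/ceph/ceph.conf <<'EOF'
[client.radosgw.gateway]
rgw enable ops log = true
rgw enable usage log = true
EOF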
Has anyone done the work to boot a machine (physical or virtual) from a
Ceph filesystem or RBD?
I'm very interested in this, as I have several systems that don't need a
LOT of disk throughput and have PLENTY of network bandwidth unused, making
them prime candidates for such a setup. I thought a
This isn't a topic I know a ton about, but:
It is not possible to boot from CephFS, but it will be soon (search for
"[PATCH 1/4] init: Add a new root device option, the Ceph file
system").
I think it is possible to boot from rbd (there is native kernel
support for it as a block device, for starters),
On 28/01/14 13:37, Schlacta, Christ wrote:
> Has anyone done the work to boot a machine (physical or virtual) from a
> CEPH filesystem or RBD?
Booting a VM from RBD is doable in modern QEMU.
The QEMU process connects to RBD and presents it to the VM as a standard
block device; the VM doesn't know
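For reference, a rough sketch of what booting a VM straight off an RBD image can look like with plain QEMU (the pool name rbd, the image name vm-disk and the auth id admin are placeholders):

# boot a VM whose system disk is the RBD image rbd/vm-disk
qemu-system-x86_64 -m 2048 \
  -drive format=raw,file=rbd:rbd/vm-disk:id=admin,if=virtio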
On Mon, Jan 27, 2014 at 9:05 PM, Stuart Longland wrote:
> On 25/01/14 16:41, Stuart Longland wrote:
>> Hi Gregory,
>> On 24/01/14 12:20, Gregory Farnum wrote:
>>> Did the cluster actually detect the node as down? (You could check
>>> this by looking at the ceph -w output or similar when running th
Hi,
I have a 3-node Ceph cluster: each Ceph node has one monitor and
2 OSDs running, and exposes a native block device as a Ceph client.
While I/O was running, I took the network down for 100 seconds on one of the
Ceph nodes (so one monitor and two OSDs were down). For a while (~70 sec)
there were no
I'll have to look at the iscsi and zfs initramfs hooks, and see if I can
model it most concisely on what they currently do. Between the two, I
should be able to hack something up.
On Mon, Jan 27, 2014 at 9:46 PM, Stuart Longland wrote:
> On 28/01/14 15:29, Schlacta, Christ wrote:
> > iPXE supp
Is the list misconfigured? Clicking "Reply" in my mail client on nearly
EVERY list sends a reply to the list, but for some reason, this list is one
of the very, extremely, exceedingly few lists where that doesn't work as
expected. Is the list misconfigured? Anyway, if someone could fix this,
it
I'm pasting this in here piecemeal, due to a misconfiguration of the list.
I'm posting this back to the original thread in the hopes of the
conversation being continued. I apologize in advance for the poor
formatting below.
On Mon, Jan 27, 2014 at 12:50 PM, Schlacta, Christ wrote:
> Thanks for