The auth key needs to be copied to all machines in the cluster. It looks
like the key might not be on the 10.81.2.100 machine.
Check /etc/ceph for the key if you are running Debian or Ubuntu.
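For example, something like this might copy the admin keyring over (the
filename below is the usual default, so adjust for your install; the target
is the host mentioned above):

  scp /etc/ceph/ceph.client.admin.keyring root@10.81.2.100:/etc/ceph/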
I am new to all this myself, so I may be totally wrong, but it seems
plausible in my head :)
-Matt
Hi,
I'm experimenting with running hbase using the hadoop-ceph java
filesystem implementation, and I'm having an issue with space usage.
With the HBase daemons running, the amount of data in the 'data' pool
grows continuously, at a much higher rate than expected. Doing a du
or ls -lh on a mounted
Anybody?
On Tue, May 7, 2013 at 1:19 PM, Igor Laskovy wrote:
> I tried to do that and put it behind round-robin DNS, but unfortunately only one
> host can serve requests from clients - the second host does not respond at all. I
> am not too familiar with Apache; nothing helpful shows up in the standard log files.
> Ma
Greetings!
The videos, blueprints, etherpads, and irc logs from the developer
summit this week have been posted on both the original wiki page as
well as in an aggregated blog post:
http://ceph.com/events/ceph-developer-summit-summary-and-session-videos/
Thanks to everyone who came and made this
For high availability RGW you would need a load balancer. HAProxy is
an example of a load balancer that has been used successfully with
rados gateway endpoints.
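As a rough sketch (addresses, names, and ports below are invented), an
haproxy.cfg section balancing two radosgw endpoints could look like:

  frontend rgw
      bind *:80
      default_backend rgw_nodes

  backend rgw_nodes
      balance roundrobin
      server rgw1 10.0.0.1:80 check
      server rgw2 10.0.0.2:80 check

HAProxy only load-balances; pair it with something like keepalived if the
balancer itself must not be a single point of failure.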
On Thu, May 9, 2013 at 5:51 AM, Igor Laskovy wrote:
> Anybody?
>
>
> On Tue, May 7, 2013 at 1:19 PM, Igor Laskovy wrote:
>>
>> I tried
On 05/09/2013 09:57 AM, Tyler Brekke wrote:
> For high availability RGW you would need a load balancer. HAProxy is
> an example of a load balancer that has been used successfully with
> rados gateway endpoints.
Strictly speaking, for HA you need an HA solution, e.g. Heartbeat. The main
difference betw
This release fixes a problem when upgrading a bobtail cluster with
snapshots to cuttlefish. Please use this instead of v0.61 if you are
upgrading, to avoid possible ceph-osd daemon crashes. There is also a fix
for a problem deploying monitors and generating new authentication keys.
Notable c
So I feel like I'm missing something. I just deployed 3 storage nodes
with ceph-deploy, each with a monitor daemon and 6-8 OSDs. All of
them seem to be active with health OK. However, it doesn't seem that
I ended up with a useful ceph.conf.
( running 0.61-113-g61354b2-1raring )
This is all I g
On Thu, 9 May 2013, Greg Chavez wrote:
> So I feel like I'm missing something. I just deployed 3 storage nodes
> with ceph-deploy, each with a monitor daemon and 6-8 OSDs. All of
> them seem to be active with health OK. However, it doesn't seem that
> I ended up with a useful ceph.conf.
>
> ( r
Mike,
I'm guessing that HBase is creating and deleting its blocks, but that the
deletes are delayed:
http://ceph.com/docs/master/dev/delayed-delete/
which would explain the correct reporting at the file system level, but not the
actual 'data' pool. I'm not as familiar with this level of deta
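A quick way to see the difference (a sketch; the mount point and directory
are placeholders) is to compare the filesystem view with the pool view:

  du -sh /mnt/ceph/hbase      # what the filesystem reports
  rados df                    # per-pool objects and bytes as the cluster sees them

If delayed deletion is the cause, the pool numbers should drop back down
some time after the files are removed.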
Hi everyone,
After reading all the research papers and docs over the last few months and
waiting for Cuttlefish, I finally deployed a test cluster of 18 osds across
6 hosts. It's performing better than I expected so far, all on the default
single interface.
I was also surprised by the minimal ce
I don't seem to be able to use the cephfs command via a fuse mount. Is
this expected? I saw no mention of it in the doc. This is on the default
precise kernel (3.2.0-40-generic #64-Ubuntu SMP Mon Mar 25 21:22:10 UTC
2013 x86_64 x86_64 x86_64 GNU/Linux).
danny@ceph:/ceph$ cephfs . show_layout
Er
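If the cephfs tool is relying on ioctls that the FUSE client does not
implement (just a guess on my part), the same check should work on a kernel
mount, e.g. (monitor address and secret file are placeholders):

  mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
  cd /mnt/ceph && cephfs . show_layout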
I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else seen
this?
When creating a new monitor on server node 1, I found that the directory prepended
with the default cluster name "ceph" ( was created:
root@svl-ceph-01:/var/lib/ceph# ll /var/lib/ceph/mon/
total 12
drwxr-xr-x 3 root root 4
On Thu, 9 May 2013, Danny Luhde-Thompson wrote:
> Hi everyone,
> After reading all the research papers and docs over the last few months and
> waiting for Cuttlefish, I finally deployed a test cluster of 18 osds across
> 6 hosts. It's performing better than I expected so far, all on the default
>
On Thu, 9 May 2013, Danny Luhde-Thompson wrote:
> I don't seem to be able to use the cephfs command via a fuse mount. Is this
> expected? I saw no mention of it in the doc. This is on the default
> precise kernel (3.2.0-40-generic #64-Ubuntu SMP Mon Mar 25 21:22:10 UTC 2013
> x86_64 x86_64 x86_6
On Thu, 9 May 2013, w sun wrote:
> I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else
> seen this?
>
> When creating a new monitor on server node 1, I found that the directory
> prepended with the default cluster name "ceph" ( was created:
> root@svl-ceph-01:/var/lib/ceph# ll /var/li
Ah, now I see the only purpose of cluster naming is for adding the same node to
multiple clusters. Thx for the quick pointer. --weiguo
Date: Thu, 9 May 2013 12:52:42 -0700
From: s...@inktank.com
To: ws...@hotmail.com
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy issue with non-d
I am confused about the ceph-deploy use case too.
>Modify the ceph.conf in the ceph-deploy directory to add your
>
>cluster network = 1.2.3.0/24
What about specifying the cluster IP for a concrete OSD, e.g. "cluster addr = 1.2.3.1"?
Does it grab it automatically?
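My reading (treat this as an assumption, not a confirmed answer) is that you
can set both: a cluster-wide network plus an explicit per-OSD address if you
need one, e.g. in ceph.conf:

  [global]
      cluster network = 1.2.3.0/24

  [osd.0]
      cluster addr = 1.2.3.1

With only the network defined, each OSD should pick its own address within
that subnet automatically.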
On Thu, May 9, 2013 at 10:45 PM, Sage Weil w
I am investigating using Ceph as a storage target for virtual servers in
VMware. We have 3 servers packed with hard drives ready for the proof of
concept. I am looking for some direction. Is this a valid use for Ceph?
If so, has anybody accomplished this? Are there any documents on how to
s
RBD is not supported by VMware/vSphere. You will need to build an NFS/iSCSI/FC
gateway to support VMware. Here is a post from someone who has been trying this;
you may have to contact them directly for status:
http://ceph.com/community/ceph-over-fibre-for-vmware/
--weiguo
To: ceph-users@lists.ceph.com
From: jare
Jared,
As Weiguo says, you will need to use a gateway to present a Ceph block
device (RBD) in a format VMware understands. We've contributed the
relevant code to the TGT iSCSI target (see blog:
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/) and though
we haven't done a massive amount of
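For the curious, my understanding of the tgt route (command shape recalled
from that blog post, so the exact flags are an assumption) is roughly:

  tgtadm --lld iscsi --mode target --op new --tid 1 \
      --targetname iqn.2013-05.com.example:rbd
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
      --bstype rbd --backing-store mypool/myimage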
On Thu, May 09, 2013 at 11:51:32PM +0100, Neil Levine wrote:
> Jared,
>
> As Weiguo says you will need to use a gateway to present a Ceph block
> device (RBD) in a format VMware understands. We've contributed the
> relevant code to the TGT iSCSI target (see blog:
> http://ceph.com/dev-notes/adding
Leen,
Do you mean you get LIO working with RBD directly? Or are you just
re-exporting a kernel mounted volume?
Neil
On Thu, May 9, 2013 at 11:58 PM, Leen Besselink wrote:
> On Thu, May 09, 2013 at 11:51:32PM +0100, Neil Levine wrote:
>> Jared,
>>
>> As Weiguo says you will need to use a gateway
On Fri, May 10, 2013 at 12:12:45AM +0100, Neil Levine wrote:
> Leen,
>
> Do you mean you get LIO working with RBD directly? Or are you just
> re-exporting a kernel mounted volume?
>
Yes, re-exporting a kernel-mounted volume on separate gateway machines.
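In other words (image name made up; the export step depends on which iSCSI
target you use), the gateway side is roughly:

  rbd map mypool/vmware-datastore
  # the image then shows up as a block device such as /dev/rbd0, which LIO
  # can export like any other local disk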
> Neil
>
> On Thu, May 9, 2013 at 11:58