On Mon, Mar 11, 2013 at 11:05 AM, Scott Kinder wrote:
> Auth is disabled, John. I tried mounting the ceph FS on a Ubuntu server, and
> I see the following error in /var/log/syslog:
>
> libceph: mon0 10.122.32.21:6789 connection failed
>
> When the mount fails, it outputs "mount error 5 = Input/output error".
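For reference, a minimal kernel-client mount with cephx disabled looks roughly like the following (the mount point is an example; the monitor address is taken from the syslog line above), and a plain TCP check against the monitor port helps rule out network or firewall problems first:

  # nc -vz 10.122.32.21 6789
  # mkdir -p /mnt/ceph
  # mount -t ceph 10.122.32.21:6789:/ /mnt/ceph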
On Fri, Mar 8, 2013 at 9:57 PM, Rustam Aliyev wrote:
> Hi,
>
> We need to store ~500M small files (<1MB each) and we were looking at the
> RadosGW solution. We expect about 20 ops/sec (read+write). I'm trying to
> understand how the monitor nodes store CRUSH maps and what the limitations are.
>
> For ins
On Thu, Mar 14, 2013 at 4:09 AM, Léon Keijser wrote:
> Hi,
>
> Every now and then I'm unable to unmap an RBD device:
>
> root@c2-backup ~ # rbd showmapped
> id pool       image         snap device
> 0  20-kowin-a 20-kowin-a-01 -    /dev/rbd0
> root@c2-backup ~ # rbd unmap /dev/rbd0
>
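A common reason unmap refuses to proceed is that the device is still in use; a quick check before retrying might look like this (device name taken from the showmapped output above):

  root@c2-backup ~ # mount | grep rbd0      # still mounted somewhere?
  root@c2-backup ~ # fuser -vm /dev/rbd0    # processes using a filesystem on the device, if any
  root@c2-backup ~ # rbd unmap /dev/rbd0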
On Fri, Mar 1, 2013 at 1:29 AM, nighteblis li wrote:
> OS:
> # uname -a
> Linux QA-DB-009 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 2010 x86_64
> x86_64 x86_64 GNU/Linux
>
> distro: # cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5.6 (Tikanga)
>
> ceph: 0.56.3
>
> # gcc -v
On Tue, Feb 26, 2013 at 4:35 PM, wrote:
> Hello,
>
> I was wondering whether it would be feasible to manage existing FC-SAN
> storage with ceph. Now this may sound somewhat weird, so let me explain:
>
> as it turns out, you can't actually trust SAN boxes with RAID-6 devices
> to hold your
All,
I mistakenly updated the 'next' branch with a bunch of commits from
master that didn't belong in the next branch. I have force pushed
next as it was before the update, but this was a non-linear change to
the branch, so anyone who fetched in the interim will need to re-fetch
and rebase next.
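If you track next locally, recovery is roughly the following (a sketch, assuming the remote is named origin; use the reset form only if you have no local commits on next, otherwise reapply your own commits with cherry-pick or rebase afterwards):

  git fetch origin
  git checkout next
  git reset --hard origin/next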
Pal,
Are you still seeing this problem? It looks like you have a bad
crushmap. Can you post that to the list if you changed it?
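For reference, the installed crushmap can usually be pulled from the cluster and decompiled to text with something like this (file paths are just examples):

  ceph osd getcrushmap -o /tmp/crushmap.bin
  crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt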
-slang [developer @ http://inktank.com | http://ceph.com]
On Wed, Mar 20, 2013 at 11:41 AM, "Gergely Pál - night[w]" wrote:
> Hello!
>
> I've deployed a test ceph cl
On Mon, Apr 1, 2013 at 5:59 AM, Papaspyrou, Alexander wrote:
> Folks,
>
> we are trying to set up a ceph cluster with about 40 OSDs on our
> hosting provider's infrastructure. Our rollout works with Opscode Chef, and
> I'm driving my people to automate away everything they can.
>
> I've worke
On Thu, Mar 28, 2013 at 6:32 AM, Kai Blin wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 2013-03-28 09:16, Volker Lendecke wrote:
>> On Wed, Mar 27, 2013 at 10:43:36PM -0700, Matthieu Patou wrote:
>>> On 03/27/2013 10:41 AM, Marco Aroldi wrote:
Hi list, I'm trying to create a
Hi Erdem,
This is likely a bug. We've created a ticket to keep track:
http://tracker.ceph.com/issues/4645.
-slang [inktank dev | http://www.inktank.com | http://www.ceph.com]
On Mon, Apr 1, 2013 at 3:18 AM, Erdem Agaoglu wrote:
> In addition, i was able to extract some logs from the last time
On Fri, Apr 12, 2013 at 6:14 PM, Jeremy Allison wrote:
> On Wed, Apr 03, 2013 at 03:53:58PM -0500, Sam Lang wrote:
>> On Thu, Mar 28, 2013 at 6:32 AM, Kai Blin wrote:
>> > -BEGIN PGP SIGNED MESSAGE-
>> > Hash: SHA1
>> >
>> > On 2013-03-28 09
On Mon, Apr 22, 2013 at 9:26 AM, konradwro wrote:
> Hello, I have cephfs mounted on each node; when I put a file on node1 I can
> see it on node2, which works fine. But cephfs + apache + php5 is quite slow,
> with a noticeable delay.
I think your best option is to optimize this setup. Let's first make
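One way to narrow the delay down is to time a small read and a directory listing on the CephFS mount directly, outside of Apache/PHP (paths here are examples):

  time cat /mnt/cephfs/index.php > /dev/null
  time ls -l /mnt/cephfs/

If those come back quickly, the slowdown is more likely in how PHP stats and includes files on every request than in raw CephFS throughput.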