Sage,
I have the same issue with ceph 0.61.3 on Ubuntu 13.04.
ceph@ceph-node4:~/mycluster$ df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu1304--64--vg-root   15G  1.5G   13G  11% /
none                                 4.0K     0  4.0K   0% /sys/fs/
On 06/11/2013 11:59 AM, Guido Winkelmann wrote:
Hi,
I'm having issues with data corruption on RBD volumes again.
I'm using RBD volumes as virtual hard disks for qemu-kvm virtual machines.
Inside these virtual machines I have been running a C++ program (attached)
that fills a mounted filesystem
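The attached program isn't reproduced in this digest. As an illustrative
sketch only, a fill-and-verify tool along those lines might look like the
following; the file names, sizes, and pattern generator are all assumptions,
not the original code:

// fillfs.cc -- illustrative sketch, not the original attachment.
// Fills a directory with deterministic pseudo-random files, then reads
// them back and reports any mismatch.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>

// Deterministic pattern keyed on a seed, so the verify pass can
// regenerate the expected bytes without storing them anywhere.
static void fill_block(std::vector<unsigned char>& buf, unsigned seed) {
    unsigned x = seed;
    for (size_t i = 0; i < buf.size(); ++i) {
        x = x * 1103515245u + 12345u;            // simple LCG
        buf[i] = (unsigned char)(x >> 16);
    }
}

int main(int argc, char** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <directory> [num_files]\n", argv[0]);
        return 1;
    }
    const int nfiles = (argc > 2) ? atoi(argv[2]) : 100;
    const size_t blocksize = 4 * 1024 * 1024;    // 4 MiB per file (assumed)
    std::vector<unsigned char> buf(blocksize), check(blocksize);
    char path[4096];

    // Write pass: create nfiles files full of the deterministic pattern.
    for (int i = 0; i < nfiles; ++i) {
        snprintf(path, sizeof(path), "%s/fill.%d", argv[1], i);
        fill_block(buf, (unsigned)i);
        FILE* f = fopen(path, "wb");
        if (!f || fwrite(buf.data(), 1, buf.size(), f) != buf.size()) {
            perror(path);
            return 1;
        }
        fclose(f);
    }
    // Verify pass: regenerate the pattern and compare byte for byte.
    for (int i = 0; i < nfiles; ++i) {
        snprintf(path, sizeof(path), "%s/fill.%d", argv[1], i);
        fill_block(buf, (unsigned)i);
        FILE* f = fopen(path, "rb");
        if (!f || fread(check.data(), 1, check.size(), f) != check.size()) {
            perror(path);
            return 1;
        }
        fclose(f);
        if (memcmp(buf.data(), check.data(), buf.size()) != 0)
            fprintf(stderr, "CORRUPTION in %s\n", path);
    }
    return 0;
}

Because the pattern is keyed on the file index, the verify pass needs no
stored copy of the data, and repeated runs can show whether reported
corruption is stable or moving.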
On 06/11/2013 08:10 AM, Alvaro Izquierdo Jimeno wrote:
Hi all,
I want to connect an openstack Folsom glance service to ceph.
The first option is setting up the glance-api.conf with 'default_store=rbd' and
the user and pool.
The second option is defined in
https://blueprints.launchpad.net/gla
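For reference, a sketch of the first option. The option names below are the
ones documented for the Folsom-era RBD store, so verify them against your
release; the user and pool values are placeholders:

# glance-api.conf
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8   # object chunk size in MB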
Hi Ceph lovers
I really need some help here. I am trying to set up a test Ceph cluster and do
a case study on Ceph storage, so that I can propose it to customers who need
scalable storage. I started with the documentation provided on your website
but am stuck with an error.
Hi,
Is really no one on the list interested in fixing this? Or am I the only one
having this kind of bug/problem?
On 11.06.2013 16:19, Smart Weblications GmbH - Florian Wiessner wrote:
> Hi List,
>
> I observed that an rbd rm results in some OSDs wrongly marking one OSD
> as down in cuttlefish.
OpenStack doesn't know how to set different caching options for an attached
block device. See the following blueprint:
https://blueprints.launchpad.net/nova/+spec/enable-rbd-tuning-options
This might be implemented for Havana.
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving
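For context, the per-disk cache setting at issue is the one libvirt exposes
on the disk's driver element; a hand-written illustration (pool and image
names assumed) of what nova cannot yet generate for you:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/myvolume'/>
  <target dev='vda' bus='virtio'/>
</disk>

The blueprint above is about letting nova emit such settings itself.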
Ah,
The fix for this is 92a49fb0f79f3300e6e50ddf56238e70678e4202, which first
appeared in the 3.9 kernel. The mainline 3.8 stable kernel is EOL, but
Canonical is still maintaining one for ubuntu. I can send a note to them.
sage
On Thu, 13 Jun 2013, Da Chun wrote:
> Sage,
>
> I have the sa
Hi Florian,
Sorry, I missed this one. Since this is fully reproducible, can you
generate a log of the crash by doing something like
ceph osd tell \* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 20'
(that is a lot of logging, btw), triggering a crash, and then sending us
the log
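As a usage note with assumed values, not taken from the thread: on a default
install the resulting log lands under /var/log/ceph/ on each OSD node (e.g.
/var/log/ceph/ceph-osd.0.log), and after capturing the crash the levels can
be dropped again through the same mechanism, along the lines of

ceph osd tell \* injectargs '--debug-osd 0 --debug-filestore 1 --debug-ms 0'

where the values shown only approximate the defaults of that era; check your
own configuration for the exact ones.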
Hello,
We ran into a problem with our test cluster after adding monitors. It
now seems that our main monitor doesn't want to start anymore. The logs
are flooded with:
2013-06-13 11:41:05.316982 7f7689ca4780 7 mon.a@0(leader).osd e2809
update_from_paxos applying incremental 2810
2013-06-13
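A usage note, assumed rather than taken from the thread: when a mon loops
like this, running it in the foreground with more verbose output usually
shows where it stalls, e.g.

ceph-mon -i a -d --debug-mon 20

with "a" standing in for your monitor's id.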
Both of those errors are "unable to authenticate". The daemons aren't
finding your authentication keys where they expect to (generally in
/var/lib/ceph or an appropriate subdir); if you set these up manually you
need to copy them over (and perhaps generate them). The documentation on the
website sho
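An illustrative sketch only; the paths assume the default cluster name
"ceph" and example daemon ids, and the caps are the ones the old
manual-deployment docs used, so adjust before running anything:

# where the daemons expect their keys:
#   /var/lib/ceph/mon/ceph-a/keyring
#   /var/lib/ceph/osd/ceph-0/keyring
# one way to (re)generate an OSD key and drop it into place:
ceph auth get-or-create osd.0 mon 'allow rwx' osd 'allow *' \
    -o /var/lib/ceph/osd/ceph-0/keyring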
On Thursday, June 13, 2013, wrote:
> Hello,
>
> We ran into a problem with our test cluster after adding monitors. It now
> seems that our main monitor doesn't want to start anymore. The logs are
> flooded with:
>
> 2013-06-13 11:41:05.316982 7f7689ca4780 7 mon.a@0(leader).osd e2809
> update_from
On 2013-06-13 18:06, Gregory Farnum wrote:
On Thursday, June 13, 2013, wrote:
Hello,
We ran into a problem with our test cluster after adding monitors. It
now seems that our main monitor doesn't want to start anymore. The
logs are flooded with:
2013-06-13 11:41:05.316982 7f7689ca4780 7 mon.a
On Thu, Jun 13, 2013 at 6:33 AM, Sławomir Skowron wrote:
> Hi, sorry for late response.
>
> https://docs.google.com/file/d/0B9xDdJXMieKEdHFRYnBfT3lCYm8/view
>
> Logs in attachment, and on google drive, from today.
>
> https://docs.google.com/file/d/0B9xDdJXMieKEQzVNVHJ1RXFXZlU/view
>
> We have suc
On 06/13/2013 05:25 PM, pe...@2force.nl wrote:
On 2013-06-13 18:06, Gregory Farnum wrote:
On Thursday, June 13, 2013, wrote:
Hello,
We ran into a problem with our test cluster after adding monitors. It
now seems that our main monitor doesn't want to start anymore. The
logs are flooded with:
20
On 2013-06-13 18:57, Joao Eduardo Luis wrote:
On 06/13/2013 05:25 PM, pe...@2force.nl wrote:
On 2013-06-13 18:06, Gregory Farnum wrote:
On Thursday, June 13, 2013, wrote:
Hello,
We ran into a problem with our test cluster after adding monitors.
It
now seems that our main monitor doesn't wan
Apologies for interrupting the normal business...
Hi all,
The ICCLab [1] has another new position open that perhaps you or someone
you know might be interested in. Briefly, the position is an Applied
Researcher in the area of Cloud Computing (more IaaS than PaaS) and would
need particular skills
On Jun 12, 2013, at 8:15 PM, Yehuda Sadeh wrote:
> On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote:
>> On Jun 12, 2013, at 2:51 PM, Yehuda Sadeh wrote:
>>
>>> On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote:
On Jun 12, 2013, at 2:02 PM, Yehuda Sadeh wrote:
> On Wed, Ju
On Thu, Jun 13, 2013 at 3:01 PM, John Nielsen wrote:
> On Jun 12, 2013, at 8:15 PM, Yehuda Sadeh wrote:
>
>> On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote:
>>> On Jun 12, 2013, at 2:51 PM, Yehuda Sadeh wrote:
>>>
On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote:
> On Jun 12,
On Jun 13, 2013, at 4:03 PM, Yehuda Sadeh wrote:
> On Thu, Jun 13, 2013 at 3:01 PM, John Nielsen wrote:
>> On Jun 12, 2013, at 8:15 PM, Yehuda Sadeh wrote:
>>
>>> On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote:
With:
caps osd = "allow x, allow pool .pubintent-log rw