There is the same error in Cinder.
On Thu, Aug 15, 2013 at 10:07 PM, Michael Morgan wrote:
> On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote:
> > On 08/14/2013 02:22 PM, Michael Morgan wrote:
> > >Hello Everyone,
> > >
> > > I have a Ceph test cluster doing storage for an OpenStack Gr
OK, no worries. Was just after maximum availability.
On 08/16/2013 08:19 PM, Mikaël Cluseau wrote:
On 08/17/2013 02:06 PM, Dan Mick wrote:
That looks interesting, but I cannot browse without making an account;
can you make your source freely available?
gitlab's policy is the following :
Public access
If checked, this project can be cloned /without any/ authentication. It
will also be list
On 08/17/2013 02:06 PM, Dan Mick wrote:
That looks interesting, but I cannot browse without making an account;
can you make your source freely available?
umm it seems the policy of gitlab is that you can clone but not browse
online... but you can clone so it's freely available :
$ GIT_SSL_N
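Presumably the truncated command is a clone with certificate verification
disabled, roughly like this (the repository URL is a placeholder, not the
real one):

$ GIT_SSL_NO_VERIFY=true git clone https://git.example.com/group/project.git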
On 08/14/2013 10:54 PM, Mikaël Cluseau wrote:
Hi lists,
in this release I see that the ceph command is not compatible with
python 3. The changes were not all trivial so I gave up,
Hello Ceph-users,
Running dumpling (upgraded yesterday), and several hours after the upgrade the
following type of message has been repeating over and over in the logs. It
started about 8 hours ago.
1 mon.1@0(leader).paxos(paxos active c 6005920..6006535) is_readable
now=2013-08-16 14:35:53.351282 lease_expire=
On Fri, 16 Aug 2013, Ian Colle wrote:
> Please note, you do not need to specify the version of deb or rpm if you
> want the latest. Just continue to point to http://ceph.com/debian or
> http://ceph.com/rpm and you'll get the same thing as
> http://ceph.com/debian-dumpling and http://ceph.com/rpm-dumpling
On Fri, Aug 16, 2013 at 12:20 PM, Aquino, BenX O
wrote:
> Thanks for the response;
>
> Here's the version:
>
> root@ubuntuceph700athf1:/etc/ceph# aptitude versions ceph-deploy
> Package ceph-deploy:
> i 1.0-1
>stable
Thanks for the response;
Here's the version:
root@ubuntuceph700athf1:/etc/ceph# aptitude versions ceph-deploy
Package ceph-deploy:
i 1.0-1
stable 500
On Fri, Aug 16, 2013 at 10:38 AM, Bernhard Glomm wrote:
> Hi all,
>
> since ceph-deploy/ceph-create-keys is broken
> (see bug 4924)
>
That may or may not be related to your specific problem.
Have you looked at the mon logs in the hosts where create-keys is not
working?
For example, this is qui
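A minimal sketch of what that check could look like, assuming the default log
and run locations (the hostname is a placeholder):

  # on the host where ceph-create-keys hangs
  less /var/log/ceph/ceph-mon.myhost.log
  # is the monitor daemon actually running?
  sudo initctl list | grep ceph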
Please note, you do not need to specify the version of deb or rpm if you
want the latest. Just continue to point to http://ceph.com/debian or
http://ceph.com/rpm and you'll get the same thing as
http://ceph.com/debian-dumpling and http://ceph.com/rpm-dumpling
Ian R. Colle
Director of Engineering
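For example, a release-agnostic apt setup would be roughly this (a sketch;
adjust the codename detection for your distribution):

  echo "deb http://ceph.com/debian/ $(lsb_release -sc) main" | \
      sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt-get update && sudo apt-get install ceph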
Hi all,
since ceph-deploy/ceph-create-keys is broken
(see bug 4924)
and mkcephfs is deprecated
is there a howto for deploying the system without using
either of these tools? (especially not ceph-create-keys,
since that just keeps running without doing anything ;-)
Since I have only 5 instances I
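For what it's worth, a rough sketch of bootstrapping the first monitor by hand
(the mon name, IP and fsid are placeholders, and OSD/MDS keys still have to be
created and imported separately):

  # ceph.conf needs at least the cluster fsid, mon host and mon initial members
  fsid=$(uuidgen)            # must match the fsid in ceph.conf
  ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
      --gen-key -n mon. --cap mon 'allow *'
  monmaptool --create --add mona 192.168.0.10 --fsid $fsid /tmp/monmap
  mkdir -p /var/lib/ceph/mon/ceph-mona
  ceph-mon --mkfs -i mona --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
  start ceph-mon id=mona     # upstart; on sysvinit: service ceph start mon.mona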
On Fri, Aug 9, 2013 at 12:05 PM, Aquino, BenX O wrote:
> CEPH-DEPLOY EVALUATION ON CEPH VERSION 0.61.7
>
> ADMINNODE:
>
> root@ubuntuceph900athf1:~# ceph -v
>
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
>
> root@ubuntuceph900athf1:~#
>
>
>
> SERVERNODE:
>
> root@ubuntuceph700ath
On Thu, Aug 15, 2013 at 11:45 AM, Alfredo Deza wrote:
>
>
>
> On Thu, Aug 15, 2013 at 11:41 AM, Jim Summers wrote:
>
>>
>> I ran:
>>
>> ceph-deploy mon create chost0 chost1
>>
>> It seemed to be working and then hung at:
>>
>> [chost0][DEBUG ] checking for done path:
>> /var/lib/ceph/mon/ceph-cho
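One quick thing to look at on the node where it hangs is whether the monitor
ever finished its initial setup, e.g. (the hostname is a placeholder):

  ls /var/lib/ceph/mon/ceph-chost0/   # ceph-deploy is waiting for a "done" file here
  sudo ceph --admin-daemon /var/run/ceph/ceph-mon.chost0.asok mon_status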
Hi Georg,
I'm not an expert on the monitors, but that's probably where I would
start. Take a look at your monitor logs and see if you can get a sense
for why one of your monitors is down. Some of the other devs will
probably be around later who might know if there are any known issues
with
So that is after running `disk zap`. What does it say after using
ceph-deploy and failing?
After ceph-disk -v prepare /dev/sdaa /dev/sda1:
root@ceph001:~# parted /dev/sdaa print
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdaa: 3001GB
Sector size (logical/physical): 512B/4096B
Parti
Hi,
> Is this a known bug in this version?
Yes.
> (Do you know some workaround to fix this?)
Upgrade.
Cheers,
Sylvain
Hello,
I'm still evaluating ceph - now a test cluster with the 0.67 dumpling.
I've created the setup with ceph-deploy from GIT.
I've recreated a bunch of OSDs, to give them another journal.
There already was some test data on these OSDs.
I've already recreated the missing PGs with "ceph pg force_
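(The command name is cut off above; presumably it is something like the
following, where the pg id is only an example:)

  ceph pg force_create_pg 2.1f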
Hi,
We noticed some issues on a Ceph/S3 cluster; I think they are related to
scrubbing: large memory leaks.
Logs 09.xx: https://www.dropbox.com/s/4z1fzg239j43igs/ceph-osd.4.log_09xx.tar.gz
From 09:30 to 09:44 (14 minutes) the osd.4 process grows to 28 GB.
I think this is something curious:
2013-08-16 09
Hi all,
There is a new bug-fix release of ceph-deploy, the easy ceph deployment tool.
ceph-deploy can be installed from three different sources depending on
your package manager and distribution, currently available for RPMs,
DEBs and directly as a Python package from the Python Package Index.
D
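For example, installing or upgrading from the Python Package Index is roughly:

  pip install --upgrade ceph-deploy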
Hi,
Thanks for your response.
> It's possible, as deep scrub in particular will add a bit of load (it
> goes through and compares the object contents).
Is it possible that scrubbing blocks access (RW, or only W) to the bucket
index when it checks the .dir... file?
When the rgw index is very large I guess it
<< to explicitly specify the filesystem type or
use wipefs(8) to clean up the device.
mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime',
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']' returned non-zero exit
status 3
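Since mount could not auto-detect the filesystem type, one way to narrow this
down is to try mounting by hand with an explicit type (xfs is only an
assumption here):

  sudo mkdir -p /mnt/test
  sudo mount -t xfs -o noatime /dev/sdaa1 /mnt/test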
On Aug 16, 2013, at 1:26 AM, Sébastien Han wrote:
> Hi,
>
> I have a patch on the way to use the stripes.
>
> The first version is on the operator side (the operator configures it and
> this will take effect for all created volumes)
Cool! If you share it we'd be happy to help with testing.
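For reference, the striping knobs such a patch would presumably map to are the
same ones rbd accepts when creating a format 2 image, e.g. (pool, image name
and values are arbitrary):

  rbd create --image-format 2 --size 10240 \
      --stripe-unit 65536 --stripe-count 4 volumes/testimage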
Hi all,
I am doing some research on the MDS.
There are so many lock types and states, but I couldn't find any document
describing them.
Can anybody tell me what "loner" and "lock_mix" are?
Thanks
Hi list:
After I deployed a cuttlefish (0.61.7) cluster on three nodes (OS Ubuntu
12.04): one ceph-deploy node, one monitor node and one OSD node. No other
daemons were found on the monitor with "sudo initctl list | grep ceph". As the
content below shows, I can only find the monitor daemon process.