Hi!
As the Ceph community grows, it's important that help is available to new
users. Up to this point, Inktank engineers have been monitoring the channel
and mailing list as time permits. That made sense when there wasn't much foot
traffic, but we have a lot now! We'd like to introduce some
Just for posterity, my ultimate solution was to patch nova on each
compute host to always return True in _check_shared_storage_test_file
(nova/virt/libvirt/driver.py).
This did make migration work with "nova live-migration", with one
caveat. Since Nova is assuming that /var/lib/nova/instances is o
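For the archives, that change boils down to short-circuiting the check in nova/virt/libvirt/driver.py, roughly like the sketch below (the method name comes from the post above, but the signature and body here are paraphrased and will differ between Nova releases):

def _check_shared_storage_test_file(self, filename):
    # Patched to always report success, so Nova treats the instance
    # directory as shared storage and lets "nova live-migration" proceed.
    return True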
On Wed, Mar 13, 2013 at 10:15 AM, Gandalf Corvotempesta wrote:
> 2013/3/12 Wolfgang Hennerbichler :
>> Hi,
>>
>> I've a question on cluster-network documented here:
>> http://ceph.com/docs/master/rados/configuration/network-config-ref/
>
> Is the cluster network only needed by OSDs? MDSs and MONs should not need access to that network, right?
On 03/12/2013 12:46 AM, Wolfgang Hennerbichler wrote:
On 03/11/2013 11:56 PM, Josh Durgin wrote:
dd if=/dev/zero of=/bigfile bs=2M &
The serial console gets jerky and the VM becomes unresponsive. It doesn't
crash, but it's not 'healthy' either. CPU load isn't very high; it sits in
the waiting state a lot:
2013/3/12 Wolfgang Hennerbichler :
> Hi,
>
> I've a question on cluster-network documented here:
> http://ceph.com/docs/master/rados/configuration/network-config-ref/
Is the cluster network only needed by OSDs? MDSs and MONs should not need
access to that network, right?
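For context, the two directives from that doc page live in ceph.conf along these lines (the subnets below are placeholders). Per the referenced documentation, only OSDs use the cluster network, for replication and heartbeat traffic between OSDs; monitors, MDSs, and clients talk over the public network only:

[global]
    # mons, MDSs, clients and OSDs all communicate over the public network
    public network = 192.168.1.0/24
    # only OSD-to-OSD replication/heartbeat traffic uses the cluster network
    cluster network = 192.168.2.0/24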
On 3/13/2013 9:31 AM, Greg Farnum wrote:
On Wednesday, March 13, 2013 at 5:52 AM, Ansgar Jazdzewski wrote:
Hi,
I added 10 new OSDs to my cluster; after the growth finished, I got:
##
# ceph -s
health HEALTH_WARN 217 pgs stuck unclean
monmap e4: 2 mons at {a=10.100.217.3:6789/0,b=1
On Wednesday, March 13, 2013 at 5:52 AM, Ansgar Jazdzewski wrote:
> Hi,
>
> I added 10 new OSDs to my cluster; after the growth finished, I got:
>
> ##
> # ceph -s
> health HEALTH_WARN 217 pgs stuck unclean
> monmap e4: 2 mons at {a=10.100.217.3:6789/0,b=10.100.217.4:6789/0
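(For anyone hitting the same symptom: a few commands commonly used to narrow down stuck PGs after adding OSDs; this is general guidance, not quoted from the replies in this thread.)

ceph health detail           # list the individual stuck PGs
ceph pg dump_stuck unclean   # show which OSDs each stuck PG maps to
ceph osd tree                # check the new OSDs are up, in, and carry CRUSH weight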
On Wed, Mar 13, 2013 at 2:46 AM, Jun Jun8 Liu wrote:
>
> Hi all
>
> I want to create a bucket using radosgw-admin, but I got an error -95.
> My environment is Ubuntu 12, ceph 0.56.3.
>
>
>
> root@:~# radosgw-admin bucket link --bucket=testtt --uid=liuj
>
>
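For reference, the -95 in errors like the one above is a negative errno value; a quick way to decode it on the box itself (assuming Python is installed) is:

python -c 'import os; print(os.strerror(95))'
# prints: Operation not supported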
Hi Sage,
Thank you for your reply. Yes, vi indeed was creating new files, thereby
resetting the layout.
Also, the setfattr command works :)
Regards,
Varun
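For anyone finding this thread later, the setfattr approach mentioned above goes through the CephFS virtual xattrs, roughly as below (the file name and values are only an example; check the layout documentation for the attribute names supported by your release):

# set a 4 MB stripe unit on a new, still-empty file
setfattr -n ceph.file.layout.stripe_unit -v 4194304 myfile
# read the layout back
getfattr -n ceph.file.layout myfile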
Hi Ashish,
Yep, that would be the correct way to do it.
If you already have a cluster running, ceph -s will also show usage, e.g.:
>ceph -s
pgmap v1842777: 8064 pgs: 8064 active+clean; 1069 GB data, 2144 GB used,
7930 GB / 10074 GB avail; 3569B/s wr, 0op/s
This is a small test-cluster with
Hi Guys,
Just want to know how I can calculate the capacity of my Ceph cluster. I
don't know whether a simple RAID-style calculation will work or not.
I have 5 servers, each with 2 TB of storage, and there are three copies of
the data; is it OK to calculate the capacity in the following way:
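As a rough rule of thumb (a general sketch, not quoted from the replies): with three-way replication, usable capacity is raw capacity divided by the replica count, so for this example:

raw capacity     = 5 servers x 2 TB  = 10 TB
usable capacity  ≈ 10 TB / 3 copies  ≈ 3.3 TB

minus filesystem and journal overhead, and you normally leave some headroom free so the cluster can re-replicate after a node failure.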
Hi all
I want to create a bucket using radosgw-admin, but I got an error -95.
My environment is Ubuntu 12, ceph 0.56.3.
root@:~# radosgw-admin bucket link --bucket=testtt --uid=liuj
error linking bucket to user: r=-95
Another question.
U
Wolfgang,
On Tue, Mar 12, 2013 at 2:18 AM, Wolfgang Hennerbichler wrote:
> Hi,
>
> I've a question on cluster-network documented here:
> http://ceph.com/docs/master/rados/configuration/network-config-ref/
>
> In the docs, we learn this about the cluster network directive:
> The IP address and netmask of