Thanks for the reply!
I am running NBU 7.5.6, so this tech note does not apply, but you are
probably right: this issue seems to be related to Symantec NetBackup.
Is there a way I can test the user authentication for my radosgw? While my
radosgw is responding properly to s3cmd, I can create buckets and p
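For reference, a rough sketch of how I could exercise the auth path directly,
outside NetBackup, by pulling the user's keys with radosgw-admin and feeding
them to s3cmd (the uid and bucket name below are just placeholders):

radosgw-admin user info --uid=testuser                            # shows the access/secret key pair
s3cmd --access_key=<ACCESS> --secret_key=<SECRET> ls              # list buckets as that user
s3cmd --access_key=<ACCESS> --secret_key=<SECRET> mb s3://auth-test
s3cmd --access_key=<ACCESS> --secret_key=<SECRET> put /etc/hosts s3://auth-test/

If those all succeed, the gateway's authentication itself looks fine and the
problem is more likely on the NetBackup side.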
Thank you, zorg :) In theory it does help; however, I've already got it
installed from a local repository. I'm planning to throw that local repo
into ceph and call it a day here. I did notice that libvirt is noticeably
absent from your repository. What do you use in place of libvirt to
I finally have my first test cluster up and running. No data on it yet. The
config is three mons and three OSD servers. Each OSD server has eight 4TB
SAS drives and two SSD journal drives.
The cluster is healthy, so I started playing with PG and PGP values. By the
provided calculation
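For reference, with 3 OSD servers x 8 drives = 24 OSDs and an assumed replica
count of 3, the usual ~100-PGs-per-OSD rule of thumb works out roughly to:

(24 OSDs x 100) / 3 replicas = 800  ->  rounded up to the next power of two: pg_num = pgp_num = 1024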
Hi Gregory,
On 28/01/14 15:51, Gregory Farnum wrote:
>> I do note ntp doesn't seem to be doing its job, but that's a side issue.
> Actually, that could be it. If you take down one of the monitors and
> the other two have enough of a time gap that they won't talk to each
> other, your cluster won't
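For what it's worth, the skew the monitors will tolerate is governed by a
ceph.conf option; a minimal sketch, assuming the default of 0.05 s (raising it
only hides the symptom, fixing the time sync is the real answer):

[mon]
    mon clock drift allowed = 0.05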
Hi all,
Thank you very much for your input.
I sync the clock on all hosts via ntpdate pool.ntp.org and sync this
with the hwclock on every host.
For some strange reason, one host is out of sync again after a few minutes.
I can't say where this comes from...
Perhaps this is a special Gentoo thing or a "cheap-PC
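What I'm considering is replacing the one-shot ntpdate with a permanently
running daemon, so the clock stays disciplined instead of drifting until the
next run; a rough sketch for an OpenRC/Gentoo box (service names may differ on
other setups):

ntpdate pool.ntp.org && hwclock --systohc    # one-time correction, as done now
rc-update add ntpd default                   # start ntpd at boot
/etc/init.d/ntpd start                       # keep the clock disciplined from now on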
Hi,
I wanted to get some advice on the number of PGs to use for RadosGW. I
know the Ceph docs say 100 per OSD seems about right. But how should I
distribute those over the various pools (.rgw.buckets,
.rgw.buckets.index)? Clearly, as most of the data will go into
.rgw.buckets, that pool should get the li
On 01/28/2014 10:08 AM, Graeme Lambert wrote:
> Hi Karan,
>
> Surely this doesn't apply to all pools though? Several of the pools
> created for the RADOS gateway hold very small numbers of objects, and if I
> set 256 PGs on all pools I would get warnings about the ratio of
> objects to PGs.
Withi
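Since pg_num is set per pool, the PG budget does not have to be spread evenly:
the bulk can go to .rgw.buckets while the small control and index pools stay
tiny. A rough sketch, with placeholder numbers only:

ceph osd pool create .rgw.buckets 1024
ceph osd pool create .rgw.buckets.index 64
ceph osd pool set .rgw.buckets pg_num 1024     # for an existing pool; bump pgp_num to match afterwards
ceph osd pool set .rgw.buckets pgp_num 1024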
If one node, which happens to have a single RAID 0 hard disk, is "slow", would
that impact the whole Ceph cluster? That is, when VMs interact with the rbd
pool to read and write data, would the KVM client "wait" for that slow
hard disk/node to return the requested data, thus making that slow
hard disk
Sage Weil writes:
> On Sun, 26 Jan 2014, Schlacta, Christ wrote:
>> So on Debian wheezy, qemu is built without ceph/rbd support. I don't know
>> about everyone else, but I use backported qemu. Does anyone provide a
>> trusted, or official, build of qemu from Debian backports that supports
>> c
Hello
We have a public repository with a qemu-kvm wheezy-backports build with rbd support:
deb http://deb.probesys.com/debian/ wheezy-backports main
Hope it can help.
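Once it is installed, a quick sanity check that rbd support is really compiled
in might be (the binary path is just an example):

qemu-img --help | grep rbd                        # rbd should show up under "Supported formats"
ldd /usr/bin/qemu-system-x86_64 | grep librbd     # the emulator should link against librbd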
On 26/01/2014 12:43, Schlacta, Christ wrote:
So on Debian wheezy, qemu is built without ceph/rbd support. I don't
know about everyone
Hi,
Thanks for your response.
ceph -v
ceph version 0.67.5 (a60ac9194718083a4b6a225fc17cad6096c69bd1)
grep -i rgw /etc/ceph/ceph.conf | grep -v socket
rgw_cache_enabled = true
rgw_cache_lru_size = 1
rgw_thread_pool_size = 2048
rgw op thread timeout = 6000
On 28/01/14 17:12, Schlacta, Christ wrote:
> Is the list misconfigured? Clicking "Reply" in my mail client on nearly
> EVERY list sends a reply to the list, but for some reason, this list is
> one of the very, extremely, exceedingly few lists where that doesn't
> work as expected. Is the list mis