Hi Sage,
I uploaded the query to http://yadi.sk/d/XoyLElnCDrc6Q.
Last time, after I saw "slow request" on osd.4,
I removed, formatted, and re-added osd.4,
but after I saw this query, I found many "acting 4", and I think that
indicates this PG was on osd.4 before.
Currently pg 4.7d is acting on [0,6].
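For what it's worth, one way to double-check which OSDs a PG maps to now (a
rough sketch, assuming the admin keyring is available; the pg id is just the
one from this thread):

  ceph pg map 4.7d    # prints the up set and acting set for that PG
  ceph pg 4.7d query  # full details, including recovery state and past intervals
  ceph osd tree       # confirm osd.4 rejoined and where it sits in the CRUSH map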
On 12/05/2013 10:44 PM, Chris C wrote:
I've been working on getting this setup working. I have virtual
machines working using rbd based images by editing the domain directly.
Is there any way to make the creation process better? We are hoping to
be able to use a virsh pool using the rbd driver
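For reference, the kind of pool definition we're hoping to use looks roughly
like this (only a sketch, assuming a libvirt build with the rbd storage
backend enabled; the monitor host, pool name and secret UUID are
placeholders), saved as e.g. rbd-pool.xml:

  <pool type='rbd'>
    <name>cephpool</name>
    <source>
      <name>rbd</name>
      <host name='mon1.example.com' port='6789'/>
      <auth username='libvirt' type='ceph'>
        <secret uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
    </source>
  </pool>

and then:

  virsh pool-define rbd-pool.xml
  virsh pool-start cephpool
  virsh vol-list cephpool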
Hi all,
I am working on the ceph radosgw (v0.72.1), and when I call the REST API to
read the bucket policy, I get an internal server error (the request URL is:
/admin/bucket?policy&format=json&bucket=test.).
However, when I call this:
/admin/bucket?policy&format=json&bucket=test&object=obj, I got the
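As a cross-check on the gateway host itself, assuming radosgw-admin is
available there and 'test'/'obj' are the bucket and object in question, the
same information can be read locally with:

  radosgw-admin policy --bucket=test               # bucket policy/ACL
  radosgw-admin policy --bucket=test --object=obj  # object policy/ACL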
See thread a couple days ago "[ceph-users] qemu-kvm packages for centos"
On Thu, Dec 5, 2013 at 10:44 PM, Chris C wrote:
> I've been working on getting this setup working. I have virtual machines
> working using rbd based images by editing the domain directly.
>
> Is there any way to make the cr
Arf, I forgot to mention that I’ll do a software mdadm RAID 1 with both sda1 and
sdb1 and put the OS on this.
The rest (sda2 and sdb2) will go for the journals.
@James: I think that Gandalf’s main idea was to save some costs/space on the
servers so having dedicated disks is not an option. (that wha
Hi all,
What will be the fastest disk setup between these two:
- 1 OSD built from 6 disks in RAID 10 and one SSD for the journal
- 3 OSDs, each with 2 disks in RAID 1 and a common SSD for all
journals (or more SSDs if SSD performance becomes an issue)
Mainly, will 1 OSD raid 10 be faster or slower the
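Whichever layout wins on paper, it is probably worth benchmarking both
directly rather than guessing; a rough comparison (just a sketch, assuming a
throwaway pool named 'bench' exists on each setup) could be:

  rados bench -p bench 60 write -t 16 --no-cleanup  # 60s of writes, 16 concurrent ops
  rados bench -p bench 60 seq -t 16                 # read back the objects just written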
Hi everyone,
I did not get any answer to my basic cephx question last week, so let me
ask it one more time here, before I completely give up on Ceph and move on.
So, my issue is:
When all authentication settings are "none":
* The cluster works fine
* The file "/etc/ceph/ceph.client.admin.key
On 12/06/2013 11:00 AM, Cristian Falcas wrote:
Hi all,
What will be the fastest disk setup between these two:
- 1 OSD built from 6 disks in RAID 10 and one SSD for the journal
- 3 OSDs, each with 2 disks in RAID 1 and a common SSD for all
journals (or more SSDs if SSD performance becomes an issue)
M
Hi,
All of our clusters have this in ceph.conf:
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
keyring = /etc/ceph/keyring
and the client.admin secret in /etc/ceph/keyring:
# cat /etc/ceph/keyring
[client.admin]
key = ...
With t
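A quick way to confirm the key on disk matches what the monitors have
(assuming client.admin and the paths above):

  ceph-authtool -l /etc/ceph/keyring   # print the entries in the local keyring
  ceph auth get client.admin           # what the cluster has stored for client.admin
  ceph -s                              # should succeed once the two agree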
2013/12/6 Sebastien Han :
> @James: I think that Gandalf’s main idea was to save some costs/space on the
> servers so having dedicated disks is not an option. (that what I understand
> from your comment “have the OS somewhere else” but I could be wrong)
You are right. I don't have space for one
Most servers also have internal SD card slots. There are SD cards
advertising >90MB/s, though I haven't tried them as OS boot personally.
On 2013-12-06 11:14, Gandalf Corvotempesta wrote:
2013/12/6 Sebastien Han :
@James: I think that Gandalf’s main idea was to save some
costs/space on the se
On 05/12/2013 14:01, Karan Singh wrote:
Hello Everyone
Trying to boot from a ceph volume using the blog post
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
and http://docs.openstack.org/user-guide/content/boot_from_volume.html
Need help with this error.
=
> Most servers also have internal SD card slots. There are SD cards
> advertising >90MB/s, though I haven't tried them as OS boot personally.
We did this with some servers 2 1/2 years ago with some Blade hardware: did not
work out so well.
High level of failures on the SD cards even with all s
Because of the latest librbd version, the image format has been changed from
“rbd:pool/image” to “pool/image”.
That is why you see the error message “could not open disk image”.
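If it helps, a quick sanity check (just a sketch; 'volumes' and the image
name below are placeholders for the actual pool and volume):

  rbd ls volumes                         # the images libvirt/qemu should see
  rbd info volumes/volume-test           # confirm the image exists as pool/image
  qemu-img info rbd:volumes/volume-test  # the rbd: prefix form qemu-img expects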
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] on behalf of Gilles Mocellin
Sent: 6 December 2013 19:38
To
Out of curiosity I tried the 'ceph' command from Windows too. I had to rename
librados.dll to librados.so.2, install a readline replacement
(https://pypi.python.org/pypi/pyreadline/2.0), and even then it completely
ignored anything I put on the command line, but from the ceph shell I could do
t
Hi,
On a cluster upgraded from bobtail to dumpling, creating the same bucket twice works:
root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
Bucket successfully created.
root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
Bucket successfully created.
I installed new dumpling cluster and:
Wido, we were thinking along that line as well. We're trying to figure out
which path will cause the least amount of pain ;)
/C
On Fri, Dec 6, 2013 at 3:27 AM, Wido den Hollander wrote:
> On 12/05/2013 10:44 PM, Chris C wrote:
>
>> I've been working on getting this setup working. I have virt
Looks like the issue is not caused by the bug I presumed. Could you
please run the following commands and send the output to me.
rados -p data ls >object.list
find /cephmountpoint -printf '%i\t%p\n' >inode.list
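(For anyone following along: the rados command dumps every object name in the
data pool, and find -printf '%i\t%p\n' records each file's inode number and
path; since cephfs names its data objects after the file's inode, the two
lists can then be cross-checked.)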
Regards
Yan, Zheng
Hi,
Here it goes (https://dl.dropboxusercontent.com/u/107865390/c
On Fri, Dec 6, 2013 at 5:13 AM, Wojciech Giel
wrote:
> Hello,
> I am trying to install ceph but can't get it working; the documentation is not clear
> and is confusing about how to do it.
> I have cloned 3 machines with ubuntu 12.04 minimal system. I'm trying to
> follow docs
>
> http://ceph.com/docs/master/start/
> looking at tcpdump all the traffic is going exactly where it is supposed to
> go, in particular an osd on the 192.168.228.x network appears to talk to an
> osd on the 192.168.229.x network without anything strange happening. I was
> just wondering if there was anything about ceph that could ma
Dan,
I found the thread but it looks like another dead end :(
/Chris C
On Fri, Dec 6, 2013 at 4:46 AM, Dan van der Ster wrote:
> See thread a couple days ago "[ceph-users] qemu-kvm packages for centos"
>
> On Thu, Dec 5, 2013 at 10:44 PM, Chris C wrote:
> > I've been working on getting this s
I think the version of Libvirt included with RHEL/CentOS supports RBD storage
(but not pools), so outside of compiling a newer version I'm not sure there's
anything else to be done aside from waiting for repo additions or newer versions of the
distro.
Not sure what your scenario is, but this is the ex
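One rough way to see what the stock EL6 packages actually support (a
heuristic sketch, assuming qemu-kvm and libvirt are installed from the distro
repos):

  qemu-img --help | grep rbd             # rbd listed among supported formats => qemu block driver present
  ldd /usr/sbin/libvirtd | grep librbd   # libvirtd linked against librbd => rbd storage pool backend built in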
[Moving this thread to ceph-devel]
Hi James,
Thank you for this clarification. I am quite aware of that, which is why
the journals are on SAS disks in RAID0 (SSDs out of scope).
I still have trouble believing that fast-but-not-super-fast journals are
the main reason for the poor performance observed. Maybe I am mistaken?
Bes
Hi Dan,
Thank you for the advice and indications. We have the exact same
configuration, except I am only enabling "auth cluster", and I am using
"ceph.client.admin.keyring" instead of simply "keyring".
Both locations "/etc/ceph/ceph.client.admin.keyring" and
"/etc/ceph/keyring" are presented
Hopefully a Ceph developer will be able to clarify how small writes are
journaled?
The write-through 'bug' seems to explain the small-block performance I've
measured in various configurations (I find similar results to you).
I still haven't tested the patch cited, but it would be *very*
interesti
Unfortunately our shop is heavily in bed with the Foreman. OpenStack,
OpenNebula, CloudStack, oVirt-engine/node aren't options for us at this
time.
Ubuntu was dismissed by our team, so that's not an option either.
On Fri, Dec 6, 2013 at 10:54 AM, Campbell, Bill <
bcampb...@axcess-financial.com>
Will throw my US $0.02 in here...
We’re running CentOS 6.4[1] + modern Ceph, VMs are managed by CloudStack. We
use distro packages whenever possible - most of the time when folks suggest building
something from source, I have to be dragged kicking and screaming into agreement.
Ceph support is one of the ver
I'm having trouble reproducing this one. Are you running on latest
dumpling? Does it happen with any newly created bucket, or just with
buckets that existed before?
Yehuda
On Fri, Dec 6, 2013 at 5:07 AM, Dominik Mostowiec
wrote:
> Hi,
> In version dumpling upgraded from bobtail working create th
If I understand correctly, you have one SAS disk as a journal for multiple OSDs.
If you do small synchronous writes it will become an IO bottleneck pretty
quickly:
due to multiple journals on the same disk it will no longer be sequential
writes to one journal but 4k writes to x journals mak
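A quick way to see what the shared journal disk can sustain under that
pattern (only a sketch; point it at a scratch file on the journal disk's
filesystem, never at a live journal partition):

  dd if=/dev/zero of=/mnt/scratch/ddtest bs=4k count=10000 oflag=direct,dsync  # small synchronous writes, like N interleaved journals
  dd if=/dev/zero of=/mnt/scratch/ddtest bs=1M count=1000 oflag=direct         # large sequential writes, for comparison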
Nicolas,
You said: "Just ran a fresh install of version Emperor on an empty cluster,
and I am left clueless, trying to troubleshoot cephx. *After ceph-deploy
created the keys, I used ceph-authtool to generate the client.admin keyring
and the monitor keyring, as indicated in the doc.* The configura
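For comparison, the manual-deployment commands from the docs look roughly
like this (a sketch of the standard examples; paths and caps as in the
documentation):

  ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
      --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
  ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring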
On Fri, Dec 6, 2013 at 1:45 AM, Gao, Wei M wrote:
> Hi all,
>
>
>
> I am working on the ceph radosgw(v0.72.1) and when I call the rest api to
> read the bucket policy, I got an internal server error(request URL is:
> /admin/bucket?policy&format=json&bucket=test.).
>
> However, when I call this:
>
Hello Cephers
I would like to say a BIG THANKS to the ceph community for helping me in setting up
and learning ceph.
I have put together a small piece of documentation, http://karan-mj.blogspot.fi/, about my
experience with ceph so far; I believe it will help beginners with installing
ceph and integrating it with
Hi Chris,
On 06.12.2013 18:28, Chris C wrote:
Unfortunately our shop is heavily in bed with the Foreman. OpenStack,
OpenNebula, CloudStack, oVirt-engine/node aren't options for us at this
time.
Ubuntu was dismissed by our team so thats not an option either.
We use only Fedora servers for eve
On 12/06/2013 04:03 PM, Alek Paunov wrote:
> We use only Fedora servers for everything, so I am curious why you have
> excluded this option from your research? (CentOS is always problematic
> with the new bits of technology).
6 months lifecycle and having to os-upgrade your entire data center 3
t
On 07.12.2013 00:11, Dimitri Maziuk wrote:
On 12/06/2013 04:03 PM, Alek Paunov wrote:
We use only Fedora servers for everything, so I am curious why you have
excluded this option from your research? (CentOS is always problematic
with the new bits of technology).
6 months lifecycle and having
On 12/06/2013 04:28 PM, Alek Paunov wrote:
> On 07.12.2013 00:11, Dimitri Maziuk wrote:
>> 6 months lifecycle and having to os-upgrade your entire data center 3
>> times a year?
>>
>> (OK maybe it's "18 months" and "once every 9 months")
>
> Most servers nowadays are re-provisioned even more ofte
We rely on the stability of rhel/centos as well. We have no patch/upgrade
policy or regulatory directive to do so. Our servers are set and forget.
We circle back for patch/upgrades only for break/fix.
I tried F19 just for the fun of it. We ended up with conflicts trying to
run qemu-kvm with ce
On 07.12.2013 01:03, Chris C wrote:
We rely on the stability of rhel/centos as well. We have no patch/upgrade
policy or regulatory directive to do so. Our servers are set and forget.
We circle back for patch/upgrades only for break/fix.
Stability means keeping the ABIs (and in general all i
Does F19/20 have libvirt compiled to access native rbd images directly or via
pools?
On Fri, Dec 6, 2013 at 7:38 PM, Alek Paunov wrote:
> On 07.12.2013 01:03, Chris C wrote:
>
>> We rely on the stability of rhel/centos as well. We have no patch/upgrade
>> policy or regulatory directive to do so.
On 12/05/2013 02:37 PM, Dmitry Borodaenko wrote:
Josh,
On Tue, Nov 19, 2013 at 4:24 PM, Josh Durgin wrote:
I hope I can release or push commits to this branch containing live-migration,
the incorrect filesystem size fix, and ceph snapshot support in a few days.
Can't wait to see this patch! Are yo