Hi,
Right now I have 5 monitors which share a slow SSD with several OSD
journals. As a result, each data migration operation (reweight, recovery,
etc.) is very slow and the cluster is nearly down.
So I have to change that. I'm looking to replace these 5 monitors with 3
new monitors, which still share (very
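For context, replacing monitors one at a time keeps quorum intact. A rough sketch of the add/remove steps, with the mon IDs, IP and paths below being placeholders:

    # on the new monitor host
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i mon-new --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add mon-new 192.168.0.10:6789
    # start ceph-mon on the new host, wait for it to join the quorum,
    # then retire one of the old monitors:
    ceph mon remove mon-old

Repeating this for each new monitor, and only then removing the remaining old ones, avoids ever dropping below a majority.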
Thanks Wido, I have rectified it. I have created the Ceph cluster and created
the CloudStack OSD.
On the hypervisor (KVM host) side, do I need to install any Ceph packages to
communicate with the Ceph storage cluster that exists on another host?
Regards
Sadhu
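For illustration only: on an Ubuntu KVM host the client side usually just needs the Ceph client libraries, a QEMU/libvirt build with RBD support, and a copy of ceph.conf plus the client keyring. Package names below assume the Ceph apt repository is already configured, and "cephnode" is a placeholder:

    sudo apt-get install ceph-common librbd1
    qemu-img --help | grep rbd        # the build must list rbd among supported formats
    sudo scp cephnode:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    # also copy the keyring for whatever client.* user CloudStack will use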
Hi Josh,
I have a session logged with:
debug_ms=1:debug_rbd=20:debug_objectcacher=30
as you requested from Mike, even if I think we do have another story
here, anyway.
Host-kernel is: 3.10.0-rc7, qemu-client 1.6.0-rc2, client-kernel is
3.2.0-51-amd...
Do you want me to open a ticket
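For anyone wanting to capture the same kind of trace: one way, sketched here with an example log path, is to put the debug settings into the [client] section of ceph.conf on the QEMU host before starting the guest:

    [client]
        debug ms = 1
        debug rbd = 20
        debug objectcacher = 30
        log file = /var/log/ceph/client.$name.$pid.log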
Hi,
recently I had a problem with OpenStack Glance and Ceph.
I used the
http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-glance
documentation and the
http://docs.openstack.org/developer/glance/configuring.html documentation.
I'm using Ubuntu 12.04 LTS with Grizzly from the Ubuntu Cloud Archive
Steffen,
It works for me. I have:
user@node:/etc/ceph# cat /etc/glance/glance-api.conf | grep rbd
default_store = rbd
# glance.store.rbd.Store,
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images
rbd_store_chunk_size = 4
Thanks,
Mike Dawson
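If it still fails with a config like the above, it can be worth double-checking that the images pool and the images user actually exist and that Glance can read the keyring. Roughly, with the caps taken from the rbd-openstack guide (the exact string may differ by version):

    ceph osd lspools                    # should list an "images" pool
    ceph auth get-or-create client.images mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    rbd --id images -p images ls        # should run without permission errors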
Hi,
My storage cluster health is in a warning state; one of the OSDs is down,
and even if I try to start the OSD it fails to start.
sadhu@ubuntu3:~$ ceph osd stat
e22: 2 osds: 1 up, 1 in
sadhu@ubuntu3:~$ ls /var/lib/ceph/osd/
ceph-0 ceph-1
sadhu@ubuntu3:~$ ceph osd tree
# id    weight  type na
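A common first step when an OSD refuses to start is to poke the init system directly and read the OSD log; the commands below assume a ceph-deploy/upstart install on Ubuntu and osd.0 as the failing daemon:

    sudo start ceph-osd id=0                  # upstart (ceph-deploy based installs)
    sudo service ceph start osd.0             # sysvinit, if the OSD is in ceph.conf
    tail -n 50 /var/log/ceph/ceph-osd.0.log   # look for why it exits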
Looks like you didn't get osd.0 deployed properly. Can you show:
- ls /var/lib/ceph/osd/ceph-0
- cat /etc/ceph/ceph.conf
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250
On 8/8/2013 9:13 AM, Suresh Sadhu wrote:
Hi list,
I saw the info about data striping in
http://ceph.com/docs/master/architecture/#data-striping .
But I couldn't find the way to set these values.
Could you please tell me how to do that or give me a link? Thanks!
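For RBD, striping is chosen per image at creation time on format 2 images; a hedged example, with pool/image names as placeholders and flags that may vary slightly between rbd CLI versions:

    rbd create mypool/myimage --size 10240 --image-format 2 \
        --order 22 --stripe-unit 65536 --stripe-count 8
    rbd info mypool/myimage        # shows the resulting object size and stripe settings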
On 08/08/2013 06:01 AM, Steffen Thorhauer wrote:
Hi,
recently I had a problem with OpenStack Glance and Ceph.
I used the
http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-glance
documentation and the
http://docs.openstack.org/developer/glance/configuring.html documentation.
I'm using Ubuntu 1
On Wed, 7 Aug 2013, Tren Blackburn wrote:
> On Tue, Aug 6, 2013 at 11:14 AM, Joao Pedras wrote:
> Greetings all.
> I am installing a test cluster using one SSD (/dev/sdg) to hold the
> journals. Ceph's version is 0.61.7 and I am using ceph-deploy obtained
> from ceph's git yesterday. This is
On Wed, 7 Aug 2013, Nulik Nol wrote:
> Thanks Dan,
> I meant something like a PRIMARY KEY in an RDBMS, or the key of a NoSQL (key-value)
> database used to perform put()/get() operations. Well, if it is a string then
> it's OK; I can print binary keys in hex or uuencode or something like
> that.
> Is there a limit on
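At the librados level the object name is simply a string chosen by the client; the same model is visible from the rados CLI, e.g. with placeholder pool and object names:

    rados -p testpool put my-object-key ./somefile   # put(): name -> data
    rados -p testpool get my-object-key ./out        # get() by the same name
    rados -p testpool ls                             # list object names in the pool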
This can help you.
http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/
On Thu, Aug 8, 2013 at 7:48 AM, Da Chun wrote:
> Hi list,
> I saw the info about data striping in
> http://ceph.com/docs/master/architecture/#data-striping .
> But couldn't find the way to set thes
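Very roughly, the approach in that post is: create a pool, make it available to the MDS, then point a directory's layout at it. On a new enough kernel client this can be done through virtual xattrs; the pool name, mount point and even the xattr route are assumptions here, and older setups used the cephfs tool instead:

    ceph osd pool create ssdpool 128
    ceph mds add_data_pool ssdpool                      # or the pool id, depending on version
    setfattr -n ceph.dir.layout.pool -v ssdpool /mnt/ceph/ssd-dir
    getfattr -n ceph.dir.layout.pool /mnt/ceph/ssd-dir  # verify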
Hello,
I don't know if it's useful, but I can also reproduce this bug with:
rbd kernel 3.10.4
ceph osd 0.61.4
image format 2
With an RBD image formatted as XFS, after some snapshots and mount/umount tests (no
writes to the file system), the XFS mount segfaults and the kernel shows the same log.
Cheers,
Laurent Bar
Thanks Mike. Please find the output of the two commands:
sadhu@ubuntu3:~$ ls /var/lib/ceph/osd/ceph-0
sadhu@ubuntu3:~$ cat /etc/ceph/ceph.conf
[global]
fsid = 593dac9e-ce55-4803-acb4-2d32b4e0d3be
mon_initial_members = ubuntu3
mon_host = 10.147.41.3
#auth_supported = cephx
auth cluster required = cephx
On 8/8/2013 12:30 PM, Suresh Sadhu wrote:
Thanks Mike. Please find the output of the two commands:
sadhu@ubuntu3:~$ ls /var/lib/ceph/osd/ceph-0
^^^ that is a problem. It appears that osd.0 didn't get deployed
properly. To see an example of what structure should be there, do:
ls /var/lib/ceph/osd
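If osd.0 really never got deployed, re-running ceph-deploy for it is usually the simplest fix; the device below is a placeholder:

    ceph-deploy disk list ubuntu3
    ceph-deploy osd prepare ubuntu3:/dev/sdb
    ceph-deploy osd activate ubuntu3:/dev/sdb1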
On Thu, 8 Aug 2013, Olivier Bonvalet wrote:
> Hi,
>
> Right now I have 5 monitors which share a slow SSD with several OSD
> journals. As a result, each data migration operation (reweight, recovery,
> etc.) is very slow and the cluster is nearly down.
>
> So I have to change that. I'm looking to replace
On 08/08/2013 05:40 AM, Oliver Francke wrote:
Hi Josh,
I have a session logged with:
debug_ms=1:debug_rbd=20:debug_objectcacher=30
as you requested from Mike, even if I think we do have another story
here, anyway.
Host-kernel is: 3.10.0-rc7, qemu-client 1.6.0-rc2, client-kernel is
3.2.0
Earlier it was created properly; after rebooting the host the mount points were gone,
which is why the ls command showed nothing earlier. Now that I have mounted it again,
I am able to see the same folder structure:
sadhu@ubuntu3:/var/lib/ceph$ ls /var/lib/ceph/osd/ceph-1
activate.monmap active ceph_fsid current fsi
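If the OSD data partition does not come back automatically at boot, which ceph-disk/udev normally handles for GPT-prepared disks, one workaround is a plain fstab entry; the device, filesystem and options here are only an example:

    # /etc/fstab
    /dev/sdb1   /var/lib/ceph/osd/ceph-1   xfs   noatime   0  2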
Hey all - I just posted the IRC chat logs from the Ceph Developer Summit.
You can find them on the wiki, one log for sessions 1-16 and another for
sessions 17-29:
http://wiki.ceph.com/01Planning/CDS/Emperor/Chat_Log%3A_Sessions_1-16
http://wiki.ceph.com/01Planning/CDS/Emperor/Chat_Log%3A_Session
I've seen a couple of posts here about broken clusters that had to be repaired
by modifying the monmap, osdmap, or the CRUSH rules.
The old-school sysadmin in me says it would be a good idea to make
backups of these three databases. So far, though, it seems like everybody
was able to repair their clusters by d
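For what it's worth, all three maps can be dumped with the regular CLI, so a cron-able backup can be as simple as:

    ceph mon getmap -o monmap.$(date +%F)
    ceph osd getmap -o osdmap.$(date +%F)
    ceph osd getcrushmap -o crushmap.$(date +%F)
    crushtool -d crushmap.$(date +%F) -o crushmap.$(date +%F).txt   # human-readable copy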
On Thursday, 8 August 2013 at 09:43 -0700, Sage Weil wrote:
> On Thu, 8 Aug 2013, Olivier Bonvalet wrote:
> > Hi,
> >
> > Right now I have 5 monitors which share a slow SSD with several OSD
> > journals. As a result, each data migration operation (reweight, recovery,
> > etc.) is very slow and the cluster
Let me just clarify... the prepare process created all 10 partitions on sdg;
the thing is that only 2 (sdg1, sdg2) would be present in /dev. The partx
bit is just a hack, as I am not familiar with the entire sequence.
Initially I was deploying this test cluster on 5 nodes, each with 10
spinners, 1 OS
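For the record, the usual way to make the kernel notice freshly created partitions without rebooting is something along these lines, with sdg as in the report above:

    sudo partprobe /dev/sdg      # or: sudo partx -a /dev/sdg
    sudo udevadm settle
    ls /dev/sdg*                 # the journal partitions should now all appear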
On Fri, 9 Aug 2013, Olivier Bonvalet wrote:
> On Thursday, 8 August 2013 at 09:43 -0700, Sage Weil wrote:
> > On Thu, 8 Aug 2013, Olivier Bonvalet wrote:
> > > Hi,
> > >
> > > Right now I have 5 monitors which share a slow SSD with several OSD
> > > journals. As a result, each data migration operation (r
On Thu, 8 Aug 2013, Joao Pedras wrote:
> Let me just clarify... the prepare process created all 10 partitions on sdg;
> the thing is that only 2 (sdg1, sdg2) would be present in /dev. The partx
> bit is just a hack, as I am not familiar with the entire sequence. Initially
> I was deploying this test
I might be able to give that a shot tomorrow as I will probably reinstall
this set.
On Thu, Aug 8, 2013 at 6:19 PM, Sage Weil wrote:
> On Thu, 8 Aug 2013, Joao Pedras wrote:
> > Let me just clarify... the prepare process created all 10 partitions in
> sdg
> > the thing is that only 2 (sdg1, sdg
All,
we're currently evaluating the use of S3 or Swift as a storage option for our
staff. In particular, we're looking for a process that would allow us to:
1) provision user accounts
2) manage quotas / ACLs / objects
3) outline best practices in how to access the data for
- end users (GUI client)
-
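For items 1 and 2, account provisioning against the RADOS gateway is normally done with radosgw-admin; a minimal sketch, where uid, display name and subuser are placeholders and quota handling depends on the radosgw version:

    radosgw-admin user create --uid=jdoe --display-name="Jane Doe"   # generates an S3 key pair
    radosgw-admin subuser create --uid=jdoe --subuser=jdoe:swift --access=full
    radosgw-admin key create --subuser=jdoe:swift --key-type=swift --gen-secret
    radosgw-admin user info --uid=jdoe                               # shows keys, caps, subusers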
Hi all,
I am not sure if I am the only one having issues with ceph-deploy
behind a firewall or not. I haven't seen any other reports of similar
issues yet. With HTTP proxies I am able to get apt-get working, but
wget is still an issue.
I am working to use the newer ceph-deploy mechanism to deploy
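One workaround that has worked elsewhere is to point both the shell environment and wget at the proxy on every node that ceph-deploy touches; the proxy host and port below are placeholders:

    export http_proxy=http://proxy.example.com:3128
    export https_proxy=$http_proxy
    # and/or in ~/.wgetrc on the target nodes:
    #   use_proxy = yes
    #   http_proxy = http://proxy.example.com:3128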
On Thu, 8 Aug 2013, Harvey Skinner wrote:
> hi all,
>
> I am not sure if I am the only one having issues with ceph-deploy
> behind a firewall or not. I haven't seen any other reports of similar
> issues yet. With HTTP proxies I am able to get apt-get working, but
> wget is still an issue.
Thi