Re: [ceph-users] cache pools on hypervisor servers

2014-08-12 Thread Robert van Leeuwen
> I was hoping to get some answers on how Ceph would behave when I install
> SSDs at the hypervisor level and use them as a cache pool. Let's say I've
> got 10 KVM hypervisors and I install one 512GB SSD on each server. I then
> create a cache pool for my storage cluster using these SSDs. My qu
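For context, putting those hypervisor SSDs in front of an existing pool uses the Firefly-era cache-tiering commands; a rough sketch (the pool names `ssd-cache` and `rbd-data` are hypothetical, and the CRUSH map must separately place the cache pool's PGs on the SSD OSDs):

```shell
# Create an SSD-backed pool to act as the cache tier
ceph osd pool create ssd-cache 512 512

# Put it in front of the backing pool in writeback mode
ceph osd tier add rbd-data ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay rbd-data ssd-cache

# Bound the cache so flushing/eviction starts before the SSDs fill
ceph osd pool set ssd-cache target_max_bytes $((400 * 1024 ** 3))
```

Which hosts actually serve the cache is decided by the CRUSH placement of `ssd-cache`, not by where the commands are run.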

Re: [ceph-users] Power Outage

2014-08-12 Thread hjcho616
I upgraded Ceph to 0.83 (while repairing OSDs I had gone down to 0.80.5) because 0.80.5 was missing that cephfs-journal-tool. I ran cephfs-journal-tool journal inspect and it showed that the journal was damaged, so I did a reset and now I am able to mount CephFS again. Thanks! Regards, Hong On Tuesday, Augu
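The recovery path described here can be sketched as follows (run with the MDS stopped; `backup.bin` is a filename I've chosen for illustration):

```shell
# Inspect the MDS journal for damage
cephfs-journal-tool journal inspect

# Export a backup before any destructive step
cephfs-journal-tool journal export backup.bin

# Reset the damaged journal (discards any unreplayed events)
cephfs-journal-tool journal reset
```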

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
Hi Craig,

# ceph pg 2.587 query
# ceph pg 2.c1 query
# ceph pg 2.92 query
# ceph pg 2.e3 query

Please download the output from here: http://server.riederer.org/ceph-user/

It is not possible to map an rbd:
# rbd map testshareone --pool rbd --name client.admi

[ceph-users] can osd start up if journal is lost and it has not been replayed?

2014-08-12 Thread yuelongguang
hi, all
1. Can the OSD start up if the journal is lost and has not been replayed?
2. How does it catch up to the latest epoch? Taking the OSD as an example, where is the code? It would be better to consider both cases, journal lost and not lost. In my mind the journal only includes meta/read/write operations and does not include data (file data). t

Re: [ceph-users] Issues with installing 2 node system

2014-08-12 Thread Ojwang, Wilson O (Wilson)
Karan, Thanks. The issue is fixed based on your instructions. Regards Wilson From: Karan Singh [mailto:karan.si...@csc.fi] Sent: Tuesday, August 12, 2014 7:08 AM To: Ojwang, Wilson O (Wilson) Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Issues with installing 2 node system Try to add

Re: [ceph-users] cache pools on hypervisor servers

2014-08-12 Thread Andrei Mikhailovsky
Anyone have an idea on how it works? Thanks - Original Message - From: "Andrei Mikhailovsky" To: ceph-users@lists.ceph.com Sent: Monday, 4 August, 2014 10:10:03 AM Subject: [ceph-users] cache pools on hypervisor servers Hello guys, I was hoping to get some answers on how would

Re: [ceph-users] Power Outage

2014-08-12 Thread Craig Lewis
I can't really help with MDS. Hopefully somebody else will chime in here. (Resending, because my last reply was too large.) On Tue, Aug 12, 2014 at 12:44 PM, hjcho616 wrote: > Craig, > > Thanks. It turns out one of my memory stick went bad after that power > outage. While trying to fix the

[ceph-users] v0.67.10 Dumpling released

2014-08-12 Thread Sage Weil
This stable update release for Dumpling includes primarily fixes for RGW, including several issues with bucket listings and a potential data corruption problem when multiple multi-part uploads race. There is also some throttling capability added in the OSD for scrub that can mitigate the perfo
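The scrub throttling mentioned here is set in ceph.conf; a minimal sketch, assuming the relevant knob is the `osd scrub sleep` option (verify the exact option name against the v0.67.10 release notes before relying on it):

```ini
[osd]
; seconds to sleep between scrub work chunks; a value > 0
; throttles scrubbing so it interferes less with client I/O
osd scrub sleep = 0.1
```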

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Craig Lewis
For the incomplete PGs, can you give me the output of ceph pg dump? I'm interested in the recovery_state key of that JSON data. On Tue, Aug 12, 2014 at 5:29 AM, Riederer, Michael wrote: > Sorry, but I think that does not help me. I forgot to mention something about > the operating system: >
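Note that the recovery_state key appears in the per-PG `ceph pg <pgid> query` output; to pull out just that key rather than mailing the whole document, something like this works (pg id 2.587 taken from the thread; uses Python's stdlib json module):

```shell
ceph pg 2.587 query | python -c \
  'import json, sys; print(json.dumps(json.load(sys.stdin)["recovery_state"], indent=2))'
```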

Re: [ceph-users] CRUSH map advice

2014-08-12 Thread Craig Lewis
On Mon, Aug 11, 2014 at 11:26 PM, John Morris wrote: > On 08/11/2014 08:26 PM, Craig Lewis wrote: > >> Your MON nodes are separate hardware from the OSD nodes, right? >> > > Two nodes are OSD + MON, plus a separate MON node. > > > If so, >> with replication=2, you should be able to shut down one

Re: [ceph-users] Issues with installing 2 node system

2014-08-12 Thread Alfredo Deza
On Tue, Aug 12, 2014 at 8:08 AM, Karan Singh wrote: > Try to add proxy settings in wgetrc file (/etc/wgetrc) and rpm macros > (/etc/rpm/macros) > > # cat /etc/wgetrc | grep -i proxy > #https_proxy = http://proxy.yoyodyne.com:18023/ > http_proxy = : > #ftp_proxy = http://proxy.yoyodyne.com:18023/

Re: [ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread debian Only
I just used s3cmd to test. I plan to use S3/Swift with inkScope or for OpenStack, so I need to prepare the Rados Gateway first, but I am hitting this issue now. 2014-08-12 22:05 GMT+07:00 Christopher O'Connell : > I've had tremendous difficulty using s3cmd when using RGW. I've > successfully used an

Re: [ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread Christopher O'Connell
I've had tremendous difficulty using s3cmd when using RGW. I've successfully used an older PHP client, but not s3cmd. For the moment, we're no longer using s3cmd with RGW, because it simply doesn't seem to work other than for listing buckets. On Tue, Aug 12, 2014 at 10:52 AM, debian Only

Re: [ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread debian Only
I have tested and met the same issue on Wheezy and on Ubuntu 12.04 with Ceph 0.80.5. It succeeds when using: radosgw-admin user create --subuser=testuser:swf0001 --display-name="Test User One" --key-type=swift --access=full and it will create the correct Swift user in the pool .users.swift > # rados ls -p
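An equivalent sequence, spelled out step by step (user and subuser names taken from the thread):

```shell
# Parent S3 user, then a Swift subuser with full access
radosgw-admin user create --uid=testuser --display-name="Test User One"
radosgw-admin subuser create --uid=testuser --subuser=testuser:swf0001 --access=full

# Generate the Swift secret key for the subuser
radosgw-admin key create --subuser=testuser:swf0001 --key-type=swift --gen-secret

# The credentials should then show up in the .users.swift pool
rados ls -p .users.swift
```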

Re: [ceph-users] weird output of ceph df : Firefly 0.80.5

2014-08-12 Thread Karan Singh
Forgot to mention, I am observing EB in the ceph -s output; does it mean exabytes? ;-) # ceph -s cluster 009d3518-e60d-4f74-a26d-c08c1976263c health HEALTH_WARN 'cache-pool' at/near target max monmap e3: 3 mons at mdsmap e14: 1/1/1 up {0=storage0101-ib=up:active} osdmap e

[ceph-users] weird output of ceph df : Firefly 0.80.5

2014-08-12 Thread Karan Singh
Hello Developers, I have encountered some weird output from the ceph df command. Suddenly, when I was writing some data to the cache-pool and checked its used %, I found the used value shown as 8E (don’t know what this is) and the used % for cache-pool was 0 # ceph df GLOBAL: SIZE AVAIL RAW USED

Re: [ceph-users] [Ceph-community] working ceph.conf file?

2014-08-12 Thread O'Reilly, Dan
Well, the boxes I’m using for my POC are pretty old but the cost was right (we had ‘em laying around in a storeroom), and I’m certain that’s at the root of the problem. When I move out of that and into a genuine environment, I’ll be moving to all-new hardware that shouldn’t have the problem. I

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
Sorry, but I think that does not help me. I forgot to mention something about the operating system: root@ceph-1-storage:~# dpkg -l | grep libleveldb1 ii libleveldb1 1.12.0-1precise.ceph fast key-value storage library root@ceph-1-storage:~# lsb_release -a No LS

Re: [ceph-users] Issues with installing 2 node system

2014-08-12 Thread Karan Singh
Try to add proxy settings in wgetrc file (/etc/wgetrc) and rpm macros (/etc/rpm/macros) # cat /etc/wgetrc | grep -i proxy #https_proxy = http://proxy.yoyodyne.com:18023/ http_proxy = : #ftp_proxy = http://proxy.yoyodyne.com:18023/ # If you do not want to use proxy at all, set this to off. use_pr
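Written out, the two files would look roughly like this (the proxy host and port are placeholders, not values from the thread):

```shell
# /etc/wgetrc -- proxy for wget (e.g. release key downloads)
use_proxy = on
http_proxy = http://proxy.example.com:3128/
https_proxy = http://proxy.example.com:3128/

# /etc/rpm/macros -- proxy for rpm package fetches
%_httpproxy proxy.example.com
%_httpport 3128
```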

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Karan Singh
I am not sure if this helps , but have a look https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10078.html - Karan - On 12 Aug 2014, at 12:04, Riederer, Michael wrote: > Hi Karan, > > root@ceph-admin-storage:~/ceph-cluster/crush-map-4-ceph-user-list# ceph osd > getcrushmap -o crushm

Re: [ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread debian Only
### my troubleshooting ###
When I tried to use s3cmd to check, using the user johndoe that I created, it can create a bucket.

root@ceph-radosgw:~# more .s3cfg
[default]
access_key = UGM3MB541JI0WG3WJIZ7
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
default_mim
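For comparison, a minimal `.s3cfg` pointed at an RGW endpoint rather than AWS usually needs at least the following (host names and keys below are placeholders; a missing `host_base`/`host_bucket` pair is a common cause of 403s against RGW):

```ini
[default]
access_key = <access-key>
secret_key = <secret-key>
# Point s3cmd at the gateway instead of Amazon
host_base = ceph-radosgw.example.com
host_bucket = %(bucket)s.ceph-radosgw.example.com
use_https = False
```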

Re: [ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread debian Only
root@ceph-radosgw:~# radosgw-admin user create --uid="testuser" --display-name="First User"
{ "user_id": "testuser",
  "display_name": "First User",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
    { "user": "testuser",
      "access_key

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
Hi Karan, root@ceph-admin-storage:~/ceph-cluster/crush-map-4-ceph-user-list# ceph osd getcrushmap -o crushmap.bin got crush map from osdmap epoch 30748 root@ceph-admin-storage:~/ceph-cluster/crush-map-4-ceph-user-list# crushtool -d crushmap.bin -o crushmap.txt root@ceph-admin-storage:~/ceph-clus
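For completeness, the full decompile/edit/recompile/inject round trip the commands above belong to:

```shell
# Dump the in-use CRUSH map and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# ...edit crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```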

Re: [ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread Karan Singh
For your item number 3, can you try: removing the keys for the sub user ( testuser:swf0001 ); once the key is removed for the sub user, try recreating it [ # radosgw-admin key create --subuser=testuser:swf0001 --key-type=swift --gen-secret ] - Karan - On 12 Aug 2014, at 11:26, debian Only wro
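As commands, the suggested remove-then-recreate looks like this (subuser name taken from the thread):

```shell
# Drop the existing Swift key for the subuser
radosgw-admin key rm --subuser=testuser:swf0001 --key-type=swift

# Recreate it with a freshly generated secret
radosgw-admin key create --subuser=testuser:swf0001 --key-type=swift --gen-secret
```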

Re: [ceph-users] "no user info saved" after user creation / can't create buckets

2014-08-12 Thread debian Only
I meet the same problem as you, but I still cannot create buckets even after I created .rgw.buckets, .rgw.buckets.index, .log, .intent-log and .usage; still stuck here. 2014-03-13 7:38 GMT+07:00 Greg Poirier : > And, I figured out the issue. > > The utility I was using to create pools, zones, and regions automatically >

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Karan Singh
Can you provide your cluster’s ceph osd dump | grep -i pool and crush map output. - Karan - On 12 Aug 2014, at 10:40, Riederer, Michael wrote: > Hi all, > > How do I get my Ceph Cluster back to a healthy state? > > root@ceph-admin-storage:~# ceph -v > ceph version 0.80.5 (38b73c67d375a2

[ceph-users] Forbidden 403 and failure to create a subuser key when using radosgw

2014-08-12 Thread debian Only
Dear all, I have met some issues when accessing radosgw: Forbidden 403, and failure to create a subuser key when using radosgw. ceph version 0.80.5 (ceph osd, radosgw), OS Wheezy (1) Reference of installation http://ceph.com/docs/master/radosgw/config/#configuring-print-continue (2) Config File root@c

[ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
Hi all, How do I get my Ceph Cluster back to a healthy state? root@ceph-admin-storage:~# ceph -v ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6) root@ceph-admin-storage:~# ceph -s cluster 6b481875-8be5-4508-b075-e1f660fd7b33 health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck