Re: [ceph-users] OCFS2 on RBD

2014-11-26 Thread Ilya Dryomov
On Wed, Nov 26, 2014 at 6:55 AM, Martijn Dekkers wrote: > > [ ... ] > > Whilst this looks like an OCFS2 issue, I am posting this here as I have seen > some bugs in the Ceph tracker with similar patterns: ceph socket closed, > combined with [TASK] blocked for more than 120 seconds. > > I would appr

[ceph-users] questions about federated gateways and region

2014-11-26 Thread yueliang
Hi, I'd appreciate any help. 1. Metadata syncs between two gateways that are in different regions: when I GET an object that the first gateway finds in the other region, it automatically issues an HTTP redirect. If I PUT bucket/object and the bucket already exists in the other region, does the first gateway hel

[ceph-users] Questions about osd journal configuration

2014-11-26 Thread Yujian Peng
Hi all, I have a ceph cluster in production. Most of the write requests are small. I found that iops is a bottleneck. I want to move all of the journal data to partitions on SSDs. Here is the procedure: 1. Set the noout flag: ceph osd set noout 2. Stop osd.0 3. Copy the journal data to the new partition

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Mark Nelson
On 11/26/2014 04:05 AM, Yujian Peng wrote: Hi all, I have a ceph cluster in production. Most of the write requests are small. I found that iops is a bottleneck. I want to move all of the journal data to partitions on SSDs. Here is the procedure: 1. Set the noout flag: ceph osd set noout 2. Stop osd

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Lindsay Mathieson
On Wed, 26 Nov 2014 05:37:43 AM Mark Nelson wrote: > I don't know if things have changed, but I don't think you want to > outright move the journal like that. Instead, something like: > > ceph-osd -i N --flush-journal > link to the new journal device: ln -s /var/lib/ceph/osd/ceph-N/journal /de
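The flush-journal approach Mark describes can be sketched end to end as a shell sequence. This is only a sketch under assumptions: osd.0, the SSD partition /dev/sdX1, and the sysvinit-style service commands are placeholders to adapt to your cluster, and the steps require a live cluster and root.

```shell
# Hypothetical sketch of moving osd.0's journal to an SSD partition.
# /dev/sdX1 is a placeholder; a /dev/disk/by-partuuid path is safer
# against device renaming across reboots.
ceph osd set noout                                # avoid rebalancing during the restart
service ceph stop osd.0                           # stop the OSD (init syntax varies)
ceph-osd -i 0 --flush-journal                     # flush outstanding journal entries
rm -f /var/lib/ceph/osd/ceph-0/journal            # drop the old journal file/symlink
ln -s /dev/sdX1 /var/lib/ceph/osd/ceph-0/journal  # point at the SSD partition
ceph-osd -i 0 --mkjournal                         # initialize the new journal
service ceph start osd.0
ceph osd unset noout
```

Repeating this one OSD at a time keeps the cluster serving I/O throughout.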

[ceph-users] Compile from source with Kinetic support

2014-11-26 Thread Julien Lutran
Hi all, I am trying to build Ceph from source with kinetic support. Unfortunately, the build is failing: root@host:~/sources/ceph# ./autogen.sh root@host:~/sources/ceph# ./configure --with-kinetic root@host:~/sources/ceph# make [...] CXX os/libos_la-LFNIndex.lo CXX os/libos_la-M

Re: [ceph-users] Compile from source with Kinetic support

2014-11-26 Thread Haomai Wang
Obviously it's a careless bug. I will fix it soon! On Wed, Nov 26, 2014 at 8:19 PM, Julien Lutran wrote: > Hi all, > > I am trying to build Ceph from source with kinetic support. Unfortunately, > the build is failing : > > root@host:~/sources/ceph# ./autogen.sh > root@host:~/sources/ceph# ./conf

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Christian Balzer
On Wed, 26 Nov 2014 05:37:43 -0600 Mark Nelson wrote: > On 11/26/2014 04:05 AM, Yujian Peng wrote: [snip] > > > > > Since the size of journal partitions on SSDs is 10G, I want to set > > filestore max sync interval to 30 minutes. Is 30 minutes reasonable? > > How to set filestore max sync interval
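For readers wondering where the option lives: it belongs in the [osd] section of ceph.conf. A minimal fragment, using the 30-minute figure from the thread purely as an illustration (whether so large an interval is sensible depends on journal size and write rate, as the replies discuss):

```ini
[osd]
; 1800 s = the 30 minutes discussed in the thread; illustrative only
filestore max sync interval = 1800
filestore min sync interval = 10
```

The value can also be changed at runtime, without persisting across restarts, via `ceph tell osd.* injectargs '--filestore_max_sync_interval 1800'`.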

Re: [ceph-users] private network - VLAN vs separate switch

2014-11-26 Thread Sreenath BH
Thanks for all the help. Can the move from VLAN to separate switches be done on a live cluster? Or does there need to be down time? -Sreenath On 11/26/14, Kyle Bader wrote: >> For a large network (say 100 servers and 2500 disks), are there any >> strong advantages to using separate swit

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Dan Van Der Ster
> On 26 Nov 2014, at 13:47, Christian Balzer wrote: > > On Wed, 26 Nov 2014 05:37:43 -0600 Mark Nelson wrote: > >> On 11/26/2014 04:05 AM, Yujian Peng wrote: > [snip] >> >>> >>> Since the size of journal partitions on SSDs is 10G, I want to set >>> filestore max sync interval to 30 minutes. Is

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Yujian Peng
Mark Nelson, thanks for your help! I will set filestore max sync interval to a couple of values to observe the effects. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Yujian Peng
Thanks a lot! IOPS is a bottleneck in my cluster and the object disks are much slower than the SSDs. I don't know whether the SSDs will be used as caches if filestore_max_sync_interval is set to a big value. I will set filestore_max_sync_interval to a couple of values to observe the effect. If filesto

Re: [ceph-users] private network - VLAN vs separate switch

2014-11-26 Thread Kyle Bader
> Thanks for all the help. Can the move from VLAN to separate > switches be done on a live cluster? Or does there need to be down > time? You can do it on a live cluster. The more cavalier approach would be to quickly switch the link over one server at a time, which might cause a short io

[ceph-users] Ceph as backend for 2012 Hyper-v?

2014-11-26 Thread Jay Janardhan
I want to present Ceph as a storage backend for Hyper-V guests. Is anyone running a setup similar to that? Is there any documentation or best-practices guide that anyone can point me to? Appreciate any help! Thanks, -Jay

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Dan Van Der Ster
Hi, > On 26 Nov 2014, at 17:07, Yujian Peng wrote: > > > Thanks a lot! > IOPS is a bottleneck in my cluster and the object disks are much slower than > SSDs. I don't know whether SSDs will be used as caches if > filestore_max_sync_interval is set to a big value. I will set > filestore_max_s

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Dan Van Der Ster
> On 26 Nov 2014, at 17:26, Dan Van Der Ster wrote: > > Hi, > >> On 26 Nov 2014, at 17:07, Yujian Peng wrote: >> >> >> Thanks a lot! >> IOPS is a bottleneck in my cluster and the object disks are much slower than >> SSDs. I don't know whether SSDs will be used as caches if >> filestore_ma

Re: [ceph-users] Ceph as backend for 2012 Hyper-v?

2014-11-26 Thread Nick Fisk
Hi Jay, The way I would do it until Ceph supports HA iSCSI (see blueprint) would be to configure a Ceph cluster as normal and then create RBDs for your block storage. I would then map these RBDs on some "proxy" servers; these would be running in an HA cluster with resource agents for RBD
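Nick's proxy idea can be illustrated with the RBD commands involved. The pool and image names and the 100 GiB size are made up for the example, and the iSCSI re-export itself is left to whatever HA target stack runs on the proxy nodes:

```shell
# Hypothetical sketch: create an RBD image and map it on a proxy node,
# which would then re-export the block device (e.g. via iSCSI) to Hyper-V.
rbd create hyperv-vol1 --pool rbd --size 102400   # 100 GiB image (size in MB)
rbd map rbd/hyperv-vol1                           # maps via the rbd kernel module
# ...export the mapped /dev/rbd* device through the HA iSCSI target here...
rbd unmap /dev/rbd/rbd/hyperv-vol1                # when decommissioning
```

The HA cluster's resource agents would handle mapping/unmapping on failover so only one proxy exports a given image at a time.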

[ceph-users] Several osds per node

2014-11-26 Thread ivan babrou
Hi! I'm using ceph (0.80.5) in docker (http://github.com/ulexus/docker-ceph) with host networking. When I have one osd per host, it works just fine. When I have only one node with several osds on the same host, it works fine too. But when osds are on many nodes and some node has more than one, thi

Re: [ceph-users] Create OSD on ZFS Mount (firefly)

2014-11-26 Thread Eric Eastman
> - Have created ZFS mount: > “/var/lib/ceph/osd/ceph-0” > - followed the instructions at: > http://ceph.com/docs/firefly/rados/operations/add-or-rm-osds/ > failing on step 4, Initialize the OSD data directory: > ceph-osd -i 0 --mkfs --mkkey > 2014-11-25 22:12:26.563666 7ff12b466780 -1 > fil

[ceph-users] Many OSDs on one node and replica distribution

2014-11-26 Thread Rene Hadler
Hi dear list, I have a question about the distribution of replicas on hosts with multiple OSDs. For example, this configuration: 4x nodes, each node has 4 OSDs, replica count set to 3. When I now save an object to the pool, how is it replicated? Is there a chance that the original object and the 2 replicas ar

Re: [ceph-users] Many OSDs on one node and replica distribution

2014-11-26 Thread Michael Kuriger
You'll have to check your crush rule to determine that. ceph osd getcrushmap -o crushmap crushtool -d crushmap -o crushmap.txt vi crushmap.txt check the rules near the end of that file. Rule 0 shows placement by host, and rule 1 shows placement by osd. You can add another rule to your config
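Michael's inspection steps extend naturally into the full edit round-trip. A sketch of the whole cycle; the rule text in the comment is the common default and may differ in your map:

```shell
# Decompile, edit, recompile, and inject the CRUSH map.
ceph osd getcrushmap -o crushmap           # dump the compiled map
crushtool -d crushmap -o crushmap.txt      # decompile to editable text
vi crushmap.txt                            # e.g. "step chooseleaf firstn 0 type host"
                                           # places replicas on distinct hosts
crushtool -c crushmap.txt -o crushmap.new  # recompile
ceph osd setcrushmap -i crushmap.new       # inject; may trigger data movement
```

With a host-level chooseleaf rule, no two replicas of a placement group land on the same node, which answers the original question.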

[ceph-users] Ceph in AWS

2014-11-26 Thread Roman Naumenko
Hi, Are there any interesting papers about running Ceph in AWS, in terms of what to expect for performance, instance sizing, recommended architecture, etc.? We're planning to use it for shared storage on web servers. --Roman

[ceph-users] S3DistCp with Ceph

2014-11-26 Thread Alex Kamil
Can the s3distcp tool be used for sending data from HDFS to a separate Ceph cluster? And what is the recommended way of using Ceph as a backup service for HDFS? Thanks, Alex

Re: [ceph-users] Create OSD on ZFS Mount (firefly)

2014-11-26 Thread Lindsay Mathieson
On Tue, 25 Nov 2014 03:47:08 PM Eric Eastman wrote: > It has been almost a year since I last tried ZFS, but I had to add to the > ceph.conf file: > >filestore zfs_snap = 1 >journal aio = 0 >journal dio = 0 > > Eric Thanks Eric, I figured it out in the end, though I haven't tried
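Collected as a ceph.conf fragment for anyone else hitting the ZFS mkfs failure above; placing the options under [osd] is an assumption (a per-daemon [osd.N] section would also work):

```ini
[osd]
filestore zfs_snap = 1   ; let the filestore use ZFS snapshots
journal aio = 0          ; ZFS lacks the AIO/O_DIRECT support the journal expects
journal dio = 0
```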

[ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread b
I've been deleting a bucket which originally had 60TB of data in it; with our cluster doing only 1 replication, the total usage was 120TB. I've been deleting the objects slowly using S3 Browser, and I can see the bucket usage is now down to around 2.5TB, or 5TB with duplication, but the usage i

[ceph-users] ceph RBD question

2014-11-26 Thread Geoff Galitz
Hi. If I create an RBD instance, and then use fusemount to access it from various locations as a POSIX entity, I assume I'll need to create a filesystem on it. To access it from various remote servers I assume I'd also need a distributed/parallel filesystem? My use case is a docker-registry with

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread Yehuda Sadeh
On Wed, Nov 26, 2014 at 2:32 PM, b wrote: > I've been deleting a bucket which originally had 60TB of data in it, with > our cluster doing only 1 replication, the total usage was 120TB. > > I've been deleting the objects slowly using S3 browser, and I can see the > bucket usage is now down to aroun

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread b
On 2014-11-27 09:38, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 2:32 PM, b wrote: I've been deleting a bucket which originally had 60TB of data in it, with our cluster doing only 1 replication, the total usage was 120TB. I've been deleting the objects slowly using S3 browser, and I can see

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread Yehuda Sadeh
On Wed, Nov 26, 2014 at 3:09 PM, b wrote: > On 2014-11-27 09:38, Yehuda Sadeh wrote: >> >> On Wed, Nov 26, 2014 at 2:32 PM, b wrote: >>> >>> I've been deleting a bucket which originally had 60TB of data in it, with >>> our cluster doing only 1 replication, the total usage was 120TB. >>> >>> I've

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread b
On 2014-11-27 10:21, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 3:09 PM, b wrote: On 2014-11-27 09:38, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 2:32 PM, b wrote: I've been deleting a bucket which originally had 60TB of data in it, with our cluster doing only 1 replication, the total

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread Yehuda Sadeh
On Wed, Nov 26, 2014 at 3:49 PM, b wrote: > On 2014-11-27 10:21, Yehuda Sadeh wrote: >> >> On Wed, Nov 26, 2014 at 3:09 PM, b wrote: >>> >>> On 2014-11-27 09:38, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 2:32 PM, b wrote: > > > I've been deleting a bucket which origi

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-26 Thread b
On 2014-11-27 11:36, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 3:49 PM, b wrote: On 2014-11-27 10:21, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 3:09 PM, b wrote: On 2014-11-27 09:38, Yehuda Sadeh wrote: On Wed, Nov 26, 2014 at 2:32 PM, b wrote: I've been deleting a bucket which

[ceph-users] ERROR: failed to create bucket: XmlParseFailure

2014-11-26 Thread Frank Li
Hi, Can anyone help me resolve the following error? Thanks a lot. rest-bench --api-host=172.20.10.106 --bucket=test --access-key=BXXX --secret=z --protocol=http --uri_style=path --concurrent-ios=3 --block-size=4096 write host=172.20.10.106 ERROR: failed to c

[ceph-users] S3CMD and Ceph

2014-11-26 Thread b
I'm having some issues with a user in Ceph using S3 Browser and s3cmd. It was previously working. I can no longer use s3cmd to list the contents of a bucket; I am getting 403 and 405 errors. When using S3 Browser, I can see the contents of the bucket and I can upload files, but I cannot create addit

[ceph-users] Question about ceph-deploy

2014-11-26 Thread mail list
Hi all, I want to install Ceph using ceph-deploy, following http://docs.ceph.com/docs/master/start/quick-start-preflight/ And I want to use the latest version, giant, so I execute the following commands: {code} louis@louis-Latitude-E5440:~/ceph/my-cluster$ wget -q -O- 'https://ceph.com/git/?p=

Re: [ceph-users] Question about ceph-deploy

2014-11-26 Thread Jean-Charles LOPEZ
Hi Louis, ceph-deploy install --release=giant admin-node Cheers JC > On Nov 26, 2014, at 20:38, mail list wrote: > > ceph-deploy install admin-node

Re: [ceph-users] private network - VLAN vs separate switch

2014-11-26 Thread Sreenath BH
Thanks for all the help. We will follow the more careful approach! -Sreenath On 11/26/14, Kyle Bader wrote: >> Thanks for all the help. Can the move from VLAN to separate >> switches be done on a live cluster? Or does there need to be down >> time? > > You can do it on a live cluster. T

[ceph-users] perf counter reset

2014-11-26 Thread 池信泽
Hi, Cephers: How do I reset a perf counter? For example, I want to reset journal_queue_ops to 0. Is there a command to reset it? Thanks.

Re: [ceph-users] Question about ceph-deploy

2014-11-26 Thread mail list
Thanks JC, it works, and I think the Ceph manual should be updated accordingly. On Nov 27, 2014, at 13:59, Jean-Charles LOPEZ wrote: > Hi Louis, > > ceph-deploy install --release=giant admin-node > > Cheers > JC > > > >> On Nov 26, 2014, at 20:38, mail list wrote: >> >> ceph-deploy install admin-node >