Re: [ceph-users] “ceph pg dump summary –f json” question

2014-05-16 Thread xan.peng
Weird. Maybe you can check the source code (src/mon/PGMonitor.cc,
around L1434).
But it looks like there is another command, "ceph pg dump_json {all |
summary | sum | pools | ...}", which you can try.
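
For example (subcommand names taken from the template above; exact output may
vary by version):

    ceph pg dump_json all
    ceph pg dump_json summary
    ceph pg dump_json pools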

On Fri, May 16, 2014 at 2:56 PM, Cao, Buddy  wrote:
> In my env, "ceph pg dump all -f json" only returns the result below,
>
> {"version":45685,"stamp":"2014-05-15 
> 23:50:27.773608","last_osdmap_epoch":13875,"last_pg_scan":13840,"full_ratio":"0.95","near_full_ratio":"0.85","pg_stats_sum":{"stat_sum":{"num_bytes":151487109145,"num_objects":36186,"num_object_clones":0,"num_object_copies":72372,"num_objects_missing_on_primary":0,"num_objects_degraded":5716,"num_objects_unfound":0,"num_read":8502912,"num_read_kb":611729970,"num_write":2737247,"num_write_kb":340122861,"num_scrub_errors":39,"num_shallow_scrub_errors":39,"num_deep_scrub_errors":0,"num_objects_recovered":267486,"num_bytes_recovered":1120311874505,"num_keys_recovered":94},"stat_cat_sum":{},"log_size":952236,"ondisk_log_size":952236},"osd_stats_sum":{"kb":19626562368,"kb_used":296596996,"kb_avail":19329965372,"hb_in":[],"hb_out":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"op_queue_age_hist":{"histogram":[1,2,0,1,1,2,56,5,20,18],"upper_bound":1024},"fs_perf_stat":{"commit_latency_ms":55456,"apply_latency_ms":408}},"pg_stats_delta":{"stat_sum":{"num_bytes":45867008,"num_objects":10,"num_object_clones":0,"num_object_copies":20,"num_objects_missing_on_primary":0,"num_objects_degraded":0,"num_objects_unfound":0,"num_read":385,"num_read_kb":29360,"num_write":296,"num_write_kb":61320,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0},"stat_cat_sum":{},"log_size":0,"ondisk_log_size":0}}
>
> But "ceph pg dump summary" returns much more info than this and even includes 
> a lot pg summaries likes below.  Any idea? thanks
> 3.3   30   0   0   0   125829120   77   77   active+clean   2014-05-15 23:41:47.950780   13875'77   13875:129   [11,4]   [11,4]   0'0   2014-05-15 20:02:58.937561   0'0   2014-05-13 20:02:53.011326
> 4.4   49   0   0   0   205520896   3001   3001   active+clean   2014-05-15 19:52:41.534026   13875'5784   13875:29766   [41,26]   [41,26]   13244'5438   2014-05-15 09:47:09.501469   0'0   2014-05-14 09:45:29.872229
> 5.5   0   0   0   0   0   0   0   active+clean   2014-05-15 19:52:40.344205   0'0   13875:1702   [25,33]   [25,33]   0'0   2014-05-15 09:48:33.067498   0'0   2014-05-14 09:47:40.406650
> 6.6   0   0   0   0   0   0   0   active+clean   2014-05-15 09:49:45.360999   0'0   13875:1678   [24,9]   [24,9]   0'0   2014-05-15 09:49:45.360951   0'0   2014-05-14 09:47:42.871263
> 7.7   0   0   0   0   0   0   0   active+clean   2014-05-15 20:12:03.812881   0'0   13875:58   [33,25]   [33,25]   0'0   2014-05-15 20:12:02.859031   0'0   2014-05-15 20:12:02.859031
> pool 0   0       0   0      0   0              0        0
> pool 1   21      0   0      0   9470           21       21
> pool 2   0       0   0      0   0              0        0
> pool 3   13006   0   2032   0   54429261968    39905    39905
> pool 4   22725   0   3616   0   95251489157    909458   909458
> pool 5   3       0   0      0   262            4        4
> pool 6   0       0   0      0   0              0        0
> pool 7   0       0   0      0   0              0        0
> pool 8   0       0   0      0   0              0        0
> sum      35755   0   5648   0   149680760857   949388   949388
> osdstat   kbused     kbavail     kb          hb in                                 hb out
> 0   79744      486985484   487065228   [1,8,9,16,17,32,33,34,35,36,37,47]    []
> 1   80068      486985160   487065228   [0,2,8,9,16,17,32,33,34,35,36,37]     []
> 2   9602980    477462248   487065228   [1,3,12,13,18,19,38,39,40,41]         []
> 3   18896156   468169072   487065228   [2,4,11,12,13,18,19,20,38,39,40,41]   []
> 4   15758800   471306428   487065228   [3,5,12,13,18,19,20,38,39,40,41]      []
> 5   13118956   473946272   487065228   [4,6,7,11,12,18,20,38,39,40]          []
> 6   83000      486982228   487065228   [5,7,14,15,22,23,42,43,44,45,46,47]   []
>
>
> Wei Cao (Buddy)
>
> -Original Message-
> From: xan.peng [mailto:xanp...@gmail.com]
> Sent: Friday, May 16, 2014 2:20 PM
> To: Cao, Buddy
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] “ceph pg dump summary –f json” question
>
> Looks like "ceph pg dump all -f json" = "ceph pg dump summary".
>
> On Fri, May 16, 2014 at 1:54 PM, Cao, Buddy  wrote:
>> Hi there,
>>
>> “ceph pg dump summary –f json” does not returns data as much as “ceph
>> pg dump summary”,  are there any ways to get 

Re: [ceph-users] “ceph pg dump summary –f json” question

2014-05-16 Thread Cao, Buddy
Thanks Peng, it works!


Wei Cao (Buddy)

-Original Message-
From: xan.peng [mailto:xanp...@gmail.com] 
Sent: Friday, May 16, 2014 3:34 PM
To: Cao, Buddy
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] “ceph pg dump summary –f json” question

Weird. Maybe you can check the source code (src/mon/PGMonitor.cc, around
L1434).
But it looks like there is another command, "ceph pg dump_json {all | summary |
sum | pools | ...}", which you can try.


Re: [ceph-users] Does CEPH rely on any multicasting?

2014-05-16 Thread David McBride
On 15/05/14 18:07, Dietmar Maurer wrote:

> Besides, it would be great if ceph could use existing cluster stacks like 
> corosync, ...
> Is there any plan to support that?

For clarity: To what end?

Recall that Ceph already incorporates its own cluster-management
framework, and the various Ceph daemons already operate in a clustered
manner.

(The documentation at
http://ceph.com/docs/firefly/architecture/#scalability-and-high-availability
may be helpful if you are not already familiar with this.)

Kind regards,
David
-- 
David McBride 
Unix Specialist, University Information Services
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Does CEPH rely on any multicasting?

2014-05-16 Thread Dietmar Maurer
> Recall that Ceph already incorporates its own cluster-management framework,
> and the various Ceph daemons already operate in a clustered manner.

Sure. But I guess it could reduce the 'ceph' code size if it used an existing
framework.

We (Proxmox VE) run corosync by default on all nodes, so it would also make
configuration easier.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Does CEPH rely on any multicasting?

2014-05-16 Thread Robert Sander
On 16.05.2014 10:49, Dietmar Maurer wrote:
>> Recall that Ceph already incorporates its own cluster-management framework,
>> and the various Ceph daemons already operate in a clustered manner.
> 
> Sure. But it guess it could reduce 'ceph' code size if you use an existing 
> framework.

Ceph has nothing to do with an HA cluster based on Pacemaker.
It has a completely different logic built in.
The only similarity is that both use a quorum algorithm to detect split-brain
situations.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Does CEPH rely on any multicasting?

2014-05-16 Thread Dietmar Maurer
> Ceph has nothing to do with a HA cluster based on pacemaker.
> It has a complete different logic built in.
> The only similarity is that both use a quorum algorithm to detect split brain
> situations.

I am talking about cluster services like 'corosync', which provide membership
and quorum services.

For example, 'sheepdog' simply has plugins for the cluster framework, and
currently supports corosync and ZooKeeper.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] raid levels (Information needed)

2014-05-16 Thread Jerker Nyberg


I would say the levels of redundancy could roughly be translated like 
this.


 RAID0  one replica (size=1)
 RAID1  two replicas (size=2)
 RAID10 two replicas (size=2)
 RAID5  erasure coding (erasure-code-m=1)
 RAID6  erasure coding (erasure-code-m=2)
 RAIDZ3 erasure coding (erasure-code-m=3)

Read more here:

http://ceph.com/docs/master/rados/operations/pools/

A seven-disk RAID6 (4 data, 2 parity and 1 hot spare) would then be
similar to a Ceph erasure-coded pool on seven OSDs with erasure-code-k=4
and erasure-code-m=2.
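
As a rough sketch of that analogy on a Firefly (0.80) or later cluster (the
profile name, pool name and PG counts below are just placeholders):

    # define an erasure-code profile with 4 data chunks and 2 coding chunks
    ceph osd erasure-code-profile set raid6like k=4 m=2
    # create a pool that uses it
    ceph osd pool create ecpool 128 128 erasure raid6like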


Kind regards,
Jerker Nyberg.


On Fri, 16 May 2014, yalla.gnan.ku...@accenture.com wrote:


Hi All,

What kinds of RAID levels of storage are provided by Ceph block devices?

Thanks
Kumar



This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] raid levels (Information needed)

2014-05-16 Thread yalla.gnan.kumar
Hi Jerker,

Thanks for the reply.

The link you posted describes only object storage. I need information on the
RAID-level implementation for block devices.


Thanks
Kumar

-Original Message-
From: Jerker Nyberg [mailto:jer...@update.uu.se] 
Sent: Friday, May 16, 2014 2:43 PM
To: Gnan Kumar, Yalla
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] raid levels (Information needed)


I would say the levels of redundancy could roughly be translated like this.

  RAID0  one replica (size=1)
  RAID1  two replicas (size=2)
  RAID10 two replicas (size=2)
  RAID5  erasure coding (erasure-code-m=1)
  RAID6  erasure coding (erasure-code-m=2)
  RAIDZ3 erasure coding (erasure-code-m=3)

Read more here:

http://ceph.com/docs/master/rados/operations/pools/

A seven disk RAID6 (4 data, 2 parity and 1 hot spare) would then be similar to 
a Ceph erasure coded pool on seven OSDs with erasure-code-k=4 and 
erasure-code-m=2.

Kind regards,
Jerker Nyberg.


On Fri, 16 May 2014, yalla.gnan.ku...@accenture.com wrote:

> Hi All,
>
> What are the kinds of raid levels of  storage provided by Ceph block devices ?
>
> Thanks
> Kumar
>


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] raid levels (Information needed)

2014-05-16 Thread Robert Sander
On 16.05.2014 11:42, yalla.gnan.ku...@accenture.com wrote:
> Hi Jerker,
> 
> Thanks for the reply.
> 
> The link you posted describes only object storage. I need information on the
> RAID-level implementation for block devices.
> 

There is no RAID level for RBDs. These are "virtual" block devices and
are mapped to objects in the Ceph cluster.
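
In other words, the redundancy level is a property of the pool an RBD image
lives in, not of the image itself. A minimal sketch (pool and image names are
placeholders):

    ceph osd pool create rbdpool 128 128
    ceph osd pool set rbdpool size 3                  # three copies of every object
    rbd create myimage --pool rbdpool --size 10240    # 10 GB image, inherits the pool's replication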

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] visualizing a ceph cluster automatically

2014-05-16 Thread Drew Weaver
Does anyone know of any tools that help you visually monitor a ceph cluster 
automatically?

Something that is host-, osd-, and mon-aware and shows the status of various
components, etc.?

Thanks,
-Drew
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal SSD durability

2014-05-16 Thread Simon Ironside

On 13/05/14 13:23, Christian Balzer wrote:

Alas a DC3500 240GB SSD will perform well enough at half the price of
the DC3700 and give me enough breathing room at about 80GB/day writes,
so this is what I will order in the end.

Did you consider DC3700 100G with similar price?


The 3500 is already potentially slower than the actual HDDs when doing
sequential writes; the 100GB 3700 most definitely is.


Hi,

Any thoughts or experience of the Kingston E50 100GB SSD?

The 310TB endurance, power-loss protection and 550/530MBps sequential
read/write rates seem to make it quite suitable for journalling.


Cheers,
Simon
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Not specifically related to ceph but 6tb sata drives on Dell Poweredge servers

2014-05-16 Thread Drew Weaver
Hi there,

I'm sure that the Ceph community was somewhat excited when Seagate released
their enterprise 6TB SAS/SATA hard drives recently; previously, the only other
6TB drives available to enterprises were the HGST helium ones, which are nearly
impossible to find unless you are buying them by the truckload.

I got a few of the 6TB Seagate drives in to test and I'm sad to report that
these don't appear to be supported on any current Dell PERC RAID controllers.

I've contacted our Dell support folks and they've indicated that they are not
going to add support to the existing cards; instead, if we want support for
6TB+ drives, we have to upgrade to 13th-generation servers when they come out.
I also tested these drives on the LSI versions of the same exact cards and they
work flawlessly after a firmware update, so Dell is basically just making a
'marketing choice' not to support these drives; there is no technical reason
why they shouldn't work given that the LSI firmware works just fine.

I just thought I would save you guys some trouble =)

-Drew


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] visualizing a ceph cluster automatically

2014-05-16 Thread Sergey Korolev
Try this
https://github.com/inkscope/inkscope


2014-05-16 16:01 GMT+04:00 Drew Weaver :

>  Does anyone know of any tools that help you visually monitor a ceph
> cluster automatically?
>
>
>
> Something that is host, osd, mon aware and shows various status of
> components, etc?
>
>
>
> Thanks,
>
> -Drew
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Problem with ceph_filestore_dump, possibly stuck in a loop

2014-05-16 Thread Jeff Bachtel
Overnight, I tried to use ceph_filestore_dump to export a pg (one that is
missing from the other osds) from an osd, with the intent of manually copying
the export to the osds in the pg map and importing it.


Unfortunately, what is 59 GB of data on disk had filled 1 TB when I got in
this morning, and the export still hadn't completed. Is it possible for a loop
to develop in a ceph_filestore_dump export?


My C++ isn't the best. Looking at int export_files in ceph_filestore_dump.cc,
I can see how a loop could occur if a broken collection was read. Possibly.
Maybe.


--debug output seems to confirm?

grep '^read' /tmp/ceph_filestore_dump.out | sort | wc -l ; grep '^read' /tmp/ceph_filestore_dump.out | sort | uniq | wc -l

2714
258

(only 258 unique reads are being reported, but each repeated > 10 times 
so far)
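
A quick way to see which reads repeat and how often, using only standard shell
tools on the same debug output:

    grep '^read' /tmp/ceph_filestore_dump.out | sort | uniq -c | sort -rn | head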


From start of debug output

Supported features: compat={},rocompat={},incompat={1=initial feature 
set(~v.18),2=pginfo object,3=object 
locator,4=last_epoch_clean,5=categories,6=hobjectpool,7=biginfo,8=leveldbinfo,9=leveldblog,10=snapmapper,11=sharded 
objects}
On-disk features: compat={},rocompat={},incompat={1=initial feature 
set(~v.18),2=pginfo object,3=object 
locator,4=last_epoch_clean,5=categories,6=hobjectpool,7=biginfo,8=leveldbinfo,9=leveldblog,10=snapmapper}

Exporting 0.2f
read 8210002f/100d228.00019150/head//0
size=4194304
data section offset=1048576 len=1048576
data section offset=2097152 len=1048576
data section offset=3145728 len=1048576
data section offset=4194304 len=1048576
attrs size 2

then at line 1810
read 8210002f/100d228.00019150/head//0
size=4194304
data section offset=1048576 len=1048576
data section offset=2097152 len=1048576
data section offset=3145728 len=1048576
data section offset=4194304 len=1048576
attrs size 2


If this is a loop due to a broken filestore, is there any recourse for
repairing it? The osd I'm trying to dump from isn't in the pg map for
the cluster; I'm trying to save some data by exporting this version of
the pg and importing it on an osd that's mapped. If I'm failing at a
basic premise even trying to do that, please let me know so I can wave
off (in which case, I believe I'd use ceph_filestore_dump to delete all
copies of this pg in the cluster so I can force-create it, which is
failing at this time).


Thanks,

Jeff

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Storage Multi Tenancy

2014-05-16 Thread Sebastien Han
Jeroen,

Actually, this is more of a question for the OpenStack ML.
None of the use cases you described are possible at the moment.

The only thing you can get is shared resources across all the tenants; you
can't really pin any resource to a specific tenant.
This could be done, I guess, but it is not available yet.

Cheers.
 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

On 15 May 2014, at 10:20, Jeroen van Leur  wrote:

> Hello,
> 
> Currently I am integrating my Ceph cluster into OpenStack by using Ceph's
> RBD. I'd like to store my KVM virtual machines on pools that I have made on
> the Ceph cluster.
> I would like to have multiple storage solutions for multiple tenants.
> Currently, when I launch an instance, it will be placed on the Ceph pool
> that has been defined in the cinder.conf file of my OpenStack controller
> node. If you set up a multi-storage backend for Cinder, the scheduler will
> determine which storage backend is used without looking at the tenant.
>
> What I would like to happen is that the instance/VM being launched by a
> specific tenant should have two choices: either a shared Ceph pool or its
> own pool. Another option might even be a tenant having its own Ceph cluster.
> When the instance is launched on either a shared pool, a dedicated pool or
> even another cluster, I would also like the extra volumes that are created
> to have the same options.
>
>
> Data needs to be isolated from other tenants and users, and therefore being
> able to choose other pools/clusters would be nice.
> Is this goal achievable, or is it impossible? If it's achievable, could I
> please have some assistance in doing so? Has anyone ever done this before?
>
> I would like to thank you in advance for reading this lengthy e-mail. If
> there's anything that is unclear, please feel free to ask.
> 
> Best Regards,
> 
> Jeroen van Leur
> 
> — 
> Infitialis
> Jeroen van Leur
> Sent with Airmail
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Alternate pools for RGW

2014-05-16 Thread Ilya Storozhilov
Hello Ceph-community,


Is it possible to somehow configure RGW to use alternate pools other than the
predefined ones? Is it possible to add an additional pool to RGW to store data
in, as can be done in the case of CephFS?


Thank you very much and best regards!
Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Advanced CRUSH map rules

2014-05-16 Thread Fabrizio G. Ventola
Ok, thanks for the suggestions, I will try to achieve this in the next
days and I will share my experience with you.

Cheers,
Fabrizio

On 14 May 2014 20:12, Gregory Farnum  wrote:
> On Wed, May 14, 2014 at 10:52 AM, Pavel V. Kaygorodov  wrote:
>> Hi!
>>
>>> CRUSH can do this. You'd have two choose ...emit sequences;
>>> the first of which would descend down to a host and then choose n-1
>>> devices within the host; the second would descend once. I think
>>> something like this should work:
>>>
>>> step take default
>>> step choose firstn 1 datacenter
>>> step chooseleaf firstn -1 room
>>> step emit
>>> step chooseleaf firstn 1 datacenter
>>> step emit
>>>
>>
>> Maybe I'm wrong, but this will not guarantee the choice of different
>> datacenters for the n-1 replicas and the remaining replica.
>> I have experimented with rules like this, trying to put one replica on a
>> "main host" and the other replicas on some other hosts.
>> Some OSDs were referenced twice in some of the generated PGs.
>
> Argh, I forgot about this, but you're right. :( So you can construct
> these sorts of systems manually (by having different "step take...step
> emit" blocks), but CRUSH won't do it for you in a generic way.
>
> However, for *most* situations that people are interested in, you can
> pull various tricks to accomplish what you're actually after. (I
> haven't done this one myself, but I'm told others have.) For instance,
> if you just want 1 copy segregated from the others, you can do this:
>
> step take default
> step choose firstn 2 datacenter
> step chooseleaf firstn -1 room
> step emit
>
> That will generate an ordered list of 2(n-1) OSDs, but since you only
> want n, you'll take n-1 from the first datacenter and only 1 from the
> second. :) You can extend this to n-2, etc.
>
> If you have the pools associated with particular datacenters, you can
> set up rules which place a certain number of copies in the primary
> datacenter, and then use parallel crush maps to choose one of the
> other datacenters for a given number of replica copies. (That is, you
> can have multiple root buckets; one for each datacenter that includes
> everybody BUT the datacenter it is associated with.)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Berlin MeetUp

2014-05-16 Thread Robert Sander
Hi,

we are currently planning the next Ceph MeetUp in Berlin, Germany, for
May 26 at 6 pm.

If you want to participate please head over to
http://www.meetup.com/Ceph-Berlin/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] attive+degraded cluster

2014-05-16 Thread Ignazio Cassano
Hi all, I successfully installed a Ceph cluster (Firefly version) made up
of 3 OSDs and one monitor host.
After that I created a pool and one RBD image for KVM.
It works fine.
I verified my pool has a replica size of 3, but I read the default should
be 2.
When I shut down an OSD and mark it out, ceph health displays an
active+degraded state and remains in this state until I add one OSD back.
Is this the correct behaviour?
Reading the documentation, I understood that the cluster should repair itself
and go into an active+clean state.
Is it possible it remains in a degraded state because I have a replica size
of 3 and only 2 OSDs?

Sorry for my bad english.

Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph with VMWare / XenServer

2014-05-16 Thread Andrei Mikhailovsky
Uwe, 

Could you please help me a bit with configuring multipathing on two different
storage servers and connecting it to XenServer?

I am looking at the multipathing howto and it tells me that, for multipathing
to work, the iSCSI query from the target server should return two paths.
However, if you have two separate servers with tgt installed, each one would
only return a single path.

I've configured two servers (tgt1 and tgt2) with tgt, each pointing to the same
RBD image. The iSCSI config files are identical. One server is using the IP
192.168.170.200, the second one uses 192.168.171.200. When I query tgt1, it
returns: 


192.168.170.200:3260,1 
iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 

and tgt2 returns: 

192.168.171.200:3260,1 
iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 


According to the documentation, each server should return both paths, like 
this: 
192.168.170.200:3260,1 
iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 
192.168.171.200:3260,1 
iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 


Is there a manual way of configuring multipathing? Or have I not created the 
tgt configs correctly? 

Cheers 

Andrei 

- Original Message -

From: "Uwe Grohnwaldt"  
To: ceph-users@lists.ceph.com 
Sent: Monday, 12 May, 2014 12:57:48 PM 
Subject: Re: [ceph-users] Ceph with VMWare / XenServer 

Hi, 

At the moment we are using tgt with the RBD backend, compiled from source, on
Ubuntu 12.04 and 14.04 LTS. We have two machines in two IP ranges (e.g.
192.168.1.0/24 and 192.168.2.0/24): one machine in 192.168.1.0/24 and one
machine in 192.168.2.0/24. The tgt config is the same on both machines; they
export the same RBD. This works well for XenServer.
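
For reference, a minimal sketch of such a tgt target definition, assuming tgt
was built with the RBD backstore; the IQN, pool and image names below are
placeholders:

    # contents of e.g. /etc/tgt/conf.d/rbd-export.conf
    # (backing-store names the Ceph pool/image to export)
    <target iqn.2014-04.com.example:rbd-export>
        driver iscsi
        bs-type rbd
        backing-store rbd-pool/image-name
    </target>

Reloading tgt afterwards (for example with "tgt-admin --update ALL") should
make the target visible to the initiators.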

For VMware you have to disable VAAI to use it with tgt
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665).
If you don't disable it, ESXi becomes very slow and unresponsive.

I think the problem is the iSCSI WRITE SAME support, but I haven't tested which
of the VAAI settings is responsible for this behavior.

Mit freundlichen Grüßen / Best Regards, 
-- 
Consultant 
Dipl.-Inf. Uwe Grohnwaldt 
Gutleutstr. 351 
60327 Frankfurt a. M. 

eMail: u...@grohnwaldt.eu 
Telefon: +49-69-34878906 
Mobil: +49-172-3209285 
Fax: +49-69-348789069 

- Original Message - 
> From: "Andrei Mikhailovsky"  
> To: ceph-users@lists.ceph.com 
> Sent: Montag, 12. Mai 2014 12:00:48 
> Subject: [ceph-users] Ceph with VMWare / XenServer 
> 
> 
> 
> Hello guys, 
> 
> I am currently running a ceph cluster for running vms with qemu + 
> rbd. It works pretty well and provides a good degree of failover. I 
> am able to run maintenance tasks on the ceph nodes without 
> interrupting vms IO. 
> 
> I would like to do the same with VMWare / XenServer hypervisors, but 
> I am not really sure how to achieve this. Initially I thought of 
> using iscsi multipathing, however, as it turns out, multipathing is 
> more for load balancing and nic/switch failure. It does not allow me 
> to perform maintenance on the iscsi target without interrupting 
> service to vms. 
> 
> Has anyone done either a PoC or better a production environment where 
> they've used ceph as a backend storage with vmware / xenserver? The 
> important element for me is to have the ability of performing 
> maintenance tasks and resilience to failovers without interrupting 
> IO to vms. Are there any recommendations or howtos on how this could 
> be achieved? 
> 
> Many thanks 
> 
> Andrei 
> 
> 
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 
___ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] attive+degraded cluster

2014-05-16 Thread Gregory Farnum
On Friday, May 16, 2014, Ignazio Cassano  wrote:

> Hi all, I successfully installed a Ceph cluster (Firefly version) made up
> of 3 OSDs and one monitor host.
> After that I created a pool and one RBD image for KVM.
> It works fine.
> I verified my pool has a replica size of 3, but I read the default should
> be 2.
> When I shut down an OSD and mark it out, ceph health displays an
> active+degraded state and remains in this state until I add one OSD back.
> Is this the correct behaviour?
> Reading the documentation, I understood that the cluster should repair itself
> and go into an active+clean state.
> Is it possible it remains in a degraded state because I have a replica size
> of 3 and only 2 OSDs?
>
Yep, that's it. You can change the size to 2, if that's really the number
of copies you need:
ceph osd pool set <pool-name> size 2
Iirc.
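
For example, assuming the default "rbd" pool (substitute your own pool name):

    ceph osd pool get rbd size     # check the current replica count
    ceph osd pool set rbd size 2   # drop it to two copies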
-Greg




> Sorry for my bad english.
>
> Ignazio
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] visualizing a ceph cluster automatically

2014-05-16 Thread Christian Eichelmann
I have written a small and lightweight GUI, which can also act as a JSON REST
API (for non-interactive monitoring):

https://github.com/Crapworks/ceph-dash

Maybe that's what you're searching for.

Regards,
Christian

From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Drew Weaver
[drew.wea...@thenap.com]
Sent: Friday, 16 May 2014 14:01
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] visualizing a ceph cluster automatically

Does anyone know of any tools that help you visually monitor a ceph cluster 
automatically?

Something that is host, osd, mon aware and shows various status of components, 
etc?

Thanks,
-Drew
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal SSD durability

2014-05-16 Thread Christian Balzer
On Fri, 16 May 2014 13:51:09 +0100 Simon Ironside wrote:

> On 13/05/14 13:23, Christian Balzer wrote:
> >>> Alas a DC3500 240GB SSD will perform well enough at half the price of
> >>> the DC3700 and give me enough breathing room at about 80GB/day
> >>> writes, so this is what I will order in the end.
> >> Did you consider DC3700 100G with similar price?
> >
> > The 3500 is already potentially slower than the actual HDDs when doing
> > sequential writes, the 100GB 3700 most definitely so.
> 
> Hi,
> 
> Any thoughts or experience of the Kingston E50 100GB SSD?
> 
> The 310TB endurance, power-loss protection and 550/530MBps sequential 
> read/write rates seems to be quite suitable for journalling.
> 
Thanks for bringing that to my attention.
It looks very good until one gets to the Sandforce controller in the specs.

As in, if you're OK with occasional massive spikes in latency, go for it
(same for the Intel 530). 
If you prefer consistent performance, avoid.

Christian

> Cheers,
> Simon
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with radosgw and some file name characters

2014-05-16 Thread Yehuda Sadeh
I was talking about this. There is a different and simpler rule that we
use nowadays; for some reason it's not well documented:

RewriteRule ^/(.*) /s3gw.3.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

I still need to see a more verbose log to make a better educated guess.
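
For reference, one way to raise the gateway's log level on a running instance
is through the admin socket (the socket path below is an assumption based on
the default naming, so adjust it to your setup):

    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config set debug_rgw 20
    # or persistently: add "debug rgw = 20" (and optionally "debug ms = 1")
    # to the gateway's client section in ceph.conf and restart radosgw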

Yehuda

On Thu, May 15, 2014 at 3:01 PM, Andrei Mikhailovsky  wrote:
>
> Yehuda,
>
> What do you mean by the rewrite rule? Is this for Apache? I've used the ceph
> documentation to create it. My rule is:
>
> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>
> Or are you talking about something else?
>
> Cheers
>
> Andrei
> 
> From: "Yehuda Sadeh" 
> To: "Andrei Mikhailovsky" 
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, 15 May, 2014 4:05:06 PM
> Subject: Re: [ceph-users] Problem with radosgw and some file name characters
>
>
> Your rewrite rule might be off a bit. Can you provide log with 'debug rgw =
> 20'?
>
> Yehuda
>
> On Thu, May 15, 2014 at 8:02 AM, Andrei Mikhailovsky 
> wrote:
>> Hello guys,
>>
>>
>> I am trying to figure out what is the problem here.
>>
>>
>> Currently running Ubuntu 12.04 with latest updates and radosgw version
>> 0.72.2-1precise. My ceph.conf file is pretty standard from the radosgw
>> howto.
>>
>>
>>
>> I am testing radosgw as a backup solution to S3 compatible clients. I am
>> planning to copy a large number of files/folders and I am having issues
>> with
>> a large number of files. The client reports the following error on some
>> files:
>>
>>
>> 
>>
>> 
>>
>> AccessDenied
>>
>> 
>>
>>
>> Looking on the server backup I only see the following errors in the
>> radosgw.log file:
>>
>> 2014-05-13 23:50:35.786181 7f09467dc700  1 == starting new request
>> req=0x245d7e0 =
>> 2014-05-13 23:50:35.786470 7f09467dc700  1 == req done req=0x245d7e0
>> http_status=403 ==
>>
>>
>> So, i've done  a small file set comprising of test files including the
>> following names:
>>
>> Testing and Testing.txt
>> Testing ^ Testing.txt
>> Testing = Testing.txt
>> Testing _ Testing.txt
>> Testing - Testing.txt
>> Testing ; Testing.txt
>> Testing ! Testing.txt
>> Testing ? Testing.txt
>> Testing ( Testing.txt
>> Testing ) Testing.txt
>> Testing @ Testing.txt
>> Testing $ Testing.txt
>> Testing * Testing.txt
>> Testing & Testing.txt
>> Testing # Testing.txt
>> Testing % Testing.txt
>> Testing + Testing.txt
>>
>> From the above list the files with the following characters are giving me
>> Access Denied / 403 error:
>>
>> =;()@$*&+
>>
>> The rest of the files are successfully uploaded.
>>
>> Does anyone know what is required to fix the problem?
>>
>> Many thanks
>>
>> Andrei
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Alternate pools for RGW

2014-05-16 Thread Craig Lewis

On 5/16/14 03:12 , Ilya Storozhilov wrote:

Hello Ceph-community,


Is it possible to somehow configure RGW to use alternate pools other than the
predefined ones? Is it possible to add an additional pool to RGW to store data
in, as can be done in the case of CephFS?


Thank you very much and best regards!
Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Yes.  RadosGW has placement_targets and placement_pools.

Yehuda gave a good example of creating a normal and a fast pool, and
setting a user's default to the fast pool:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005272.html

Buckets have a placement target.  The objects in the bucket use the
bucket's placement target.  I know you can set a default placement target
per user.  You can also specify a placement target when creating a bucket.
As far as I know, it can only be set at bucket creation time.
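
A rough outline of the workflow from that thread, for reference; the JSON field
names and the user ID below are placeholders, so check the linked post for the
exact structure:

    radosgw-admin region get > region.json   # add your target under "placement_targets"
    radosgw-admin region set < region.json
    radosgw-admin zone get > zone.json       # add matching pools under "placement_pools"
    radosgw-admin zone set < zone.json
    radosgw-admin regionmap update
    radosgw-admin metadata get user:someuser > user.json   # optionally edit "default_placement"
    radosgw-admin metadata put user:someuser < user.json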





--

*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com 

*Central Desktop. Work together in ways you never thought possible.*
Connect with us: Website | Twitter | Facebook | LinkedIn | Blog



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] visualizing a ceph cluster automatically

2014-05-16 Thread Don Talton (dotalton)
Have to plug Kraken too!

https://github.com/krakendash/krakendash

Here is a screenshot http://i.imgur.com/fDnqpO9.png


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Drew 
Weaver
Sent: Friday, May 16, 2014 5:01 AM
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] visualizing a ceph cluster automatically

Does anyone know of any tools that help you visually monitor a ceph cluster 
automatically?

Something that is host, osd, mon aware and shows various status of components, 
etc?

Thanks,
-Drew
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal SSD durability

2014-05-16 Thread Simon Ironside

On 16/05/14 16:34, Christian Balzer wrote:

Thanks for bringing that to my attention.
It looks very good until one gets to the Sandforce controller in the specs.

As in, if you're OK with occasional massive spikes in latency, go for it
(same for the Intel 530).
If you prefer consistent perfomance, avoid.


Cool, that saves me from burning £100 unnecessarily. Thanks.
I've one more suggestion before I just buy an Intel DC S3500 . . .

Seagate 600 Pro 100GB
520/300 Sequential Read/Write
80k/20k Random 4k Read/Write IOPS
Power Loss Protection
280/650TB endurance (two figures, weird, but both high)
5yr warranty and not a bad price

http://www.seagate.com/www-content/product-content/ssd-fam/600-pro-ssd/en-gb/docs/600-pro-ssd-data-sheet-ds1790-3-1310gb.pdf

It's not a SandForce controller :) It's a LAMD LM87800.

Cheers,
Simon.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with radosgw and some file name characters

2014-05-16 Thread Andrei Mikhailovsky
Yehuda, 

Here is what I get with debug logging. I've sanitised output a bit: 


2014-05-16 21:37:23.565906 7fb9e67fc700 1 == starting new request 
req=0x2243820 = 
2014-05-16 21:37:23.565964 7fb9e67fc700 2 req 14:0.58::HEAD 
/Testing%20=%20Testing.txt::initializing 
2014-05-16 21:37:23.565976 7fb9e67fc700 10 host=file-test. 
rgw_dns_name= 
2014-05-16 21:37:23.566011 7fb9e67fc700 10 s->object=Testing = Testing.txt 
s->bucket=file-test 
2014-05-16 21:37:23.566018 7fb9e67fc700 20 FCGI_ROLE=RESPONDER 
2014-05-16 21:37:23.566020 7fb9e67fc700 20 SCRIPT_URL=/Testing = Testing.txt 
2014-05-16 21:37:23.566020 7fb9e67fc700 20 
SCRIPT_URI=https://file-test./Testing = Testing.txt 
2014-05-16 21:37:23.566021 7fb9e67fc700 20 HTTP_AUTHORIZATION=AWS 
1VBD6566:x6JvIaP666 
2014-05-16 21:37:23.566022 7fb9e67fc700 20 HTTPS=on 
2014-05-16 21:37:23.566047 7fb9e67fc700 20 SSL_TLS_SNI=file-test. 
2014-05-16 21:37:23.566048 7fb9e67fc700 20 HTTP_HOST=file-test. 
2014-05-16 21:37:23.566049 7fb9e67fc700 20 HTTP_DATE=Fri, 16 May 2014 20:37:23 
GMT 
2014-05-16 21:37:23.566050 7fb9e67fc700 20 HTTP_USER_AGENT=DragonDisk 1.05 ( 
http://www.dragondisk.com ) 
2014-05-16 21:37:23.566051 7fb9e67fc700 20 HTTP_X_FORWARDED_FOR=10.1.1.228 
2014-05-16 21:37:23.566052 7fb9e67fc700 20 
HTTP_X_FORWARDED_HOST=file-test. 
2014-05-16 21:37:23.566053 7fb9e67fc700 20 HTTP_X_FORWARDED_SERVER= 
2014-05-16 21:37:23.566054 7fb9e67fc700 20 HTTP_CONNECTION=Keep-Alive 
2014-05-16 21:37:23.566055 7fb9e67fc700 20 PATH=/usr/local/bin:/usr/bin:/bin 
2014-05-16 21:37:23.566055 7fb9e67fc700 20 SERVER_SIGNATURE= 
2014-05-16 21:37:23.566056 7fb9e67fc700 20 SERVER_SOFTWARE=Apache/2.2.22 
(Ubuntu) 
2014-05-16 21:37:23.566056 7fb9e67fc700 20 SERVER_NAME=file-test. 
2014-05-16 21:37:23.566057 7fb9e67fc700 20 SERVER_ADDR=192.168.169.200 
2014-05-16 21:37:23.566057 7fb9e67fc700 20 SERVER_PORT=443 
2014-05-16 21:37:23.566058 7fb9e67fc700 20 REMOTE_ADDR=192.168.169.121 
2014-05-16 21:37:23.566059 7fb9e67fc700 20 DOCUMENT_ROOT=/var/www 
2014-05-16 21:37:23.566059 7fb9e67fc700 20 SERVER_ADMIN= 
2014-05-16 21:37:23.566060 7fb9e67fc700 20 SCRIPT_FILENAME=/var/www/s3gw.fcgi 
2014-05-16 21:37:23.566060 7fb9e67fc700 20 REMOTE_PORT=52750 
2014-05-16 21:37:23.566061 7fb9e67fc700 20 GATEWAY_INTERFACE=CGI/1.1 
2014-05-16 21:37:23.566062 7fb9e67fc700 20 SERVER_PROTOCOL=HTTP/1.1 
2014-05-16 21:37:23.566062 7fb9e67fc700 20 REQUEST_METHOD=HEAD 
2014-05-16 21:37:23.566063 7fb9e67fc700 20 QUERY_STRING=page=Testing&params= = 
Testing.txt 
2014-05-16 21:37:23.566063 7fb9e67fc700 20 
REQUEST_URI=/Testing%20=%20Testing.txt 
2014-05-16 21:37:23.566064 7fb9e67fc700 20 SCRIPT_NAME=/Testing = Testing.txt 
2014-05-16 21:37:23.566066 7fb9e67fc700 2 req 14:0.000160:s3:HEAD 
/Testing%20=%20Testing.txt::getting op 
2014-05-16 21:37:23.566071 7fb9e67fc700 2 req 14:0.000166:s3:HEAD 
/Testing%20=%20Testing.txt:get_obj:authorizing 
2014-05-16 21:37:23.566095 7fb9e67fc700 20 get_obj_state: rctx=0x7fb9a40076a0 
obj=.users:1VBD65 state=0x7fb9a4007768 s->prefetch_data=0 
2014-05-16 21:37:23.566106 7fb9e67fc700 10 moving .users+1VBD6566 to cache 
LRU end 
2014-05-16 21:37:23.566109 7fb9e67fc700 10 cache get: name=.users+1VBD656 : 
hit 
2014-05-16 21:37:23.566117 7fb9e67fc700 20 get_obj_state: s->obj_tag was set 
empty 
2014-05-16 21:37:23.566123 7fb9e67fc700 10 moving .users+1VBD656 to cache 
LRU end 
2014-05-16 21:37:23.566124 7fb9e67fc700 10 cache get: name=.users+1VBD656 : 
hit 
2014-05-16 21:37:23.566151 7fb9e67fc700 20 get_obj_state: rctx=0x7fb9a4007c60 
obj=.users.uid:andrei state=0x7fb9a4007568 s->prefetch_data=0 
2014-05-16 21:37:23.566157 7fb9e67fc700 10 moving .users.uid+andrei to cache 
LRU end 
2014-05-16 21:37:23.566159 7fb9e67fc700 10 cache get: name=.users.uid+andrei : 
hit 
2014-05-16 21:37:23.566163 7fb9e67fc700 20 get_obj_state: s->obj_tag was set 
empty 
2014-05-16 21:37:23.566166 7fb9e67fc700 10 moving .users.uid+andrei to cache 
LRU end 
2014-05-16 21:37:23.566167 7fb9e67fc700 10 cache get: name=.users.uid+andrei : 
hit 
2014-05-16 21:37:23.566214 7fb9e67fc700 10 get_canon_resource(): 
dest=/file-test/Testing%20=%20Testing.txt 
2014-05-16 21:37:23.566217 7fb9e67fc700 10 auth_hdr: 
HEAD 
Fri, 16 May 2014 20:37:23 GMT 
/file-test/Testing%20=%20Testing.txt 
2014-05-16 21:37:23.566292 7fb9e67fc700 15 calculated 
digest=EGQFDk7vQUX5IIb11AgaonaMzng= 
2014-05-16 21:37:23.566295 7fb9e67fc700 15 auth_sign=x6JvIaPCGQv66 
2014-05-16 21:37:23.566296 7fb9e67fc700 15 compare=51 
2014-05-16 21:37:23.566298 7fb9e67fc700 10 failed to authorize request 
2014-05-16 21:37:23.566409 7fb9e67fc700 2 req 14:0.000504:s3:HEAD 
/Testing%20=%20Testing.txt:get_obj:http status=403 
2014-05-16 21:37:23.566544 7fb9e67fc700 1 == req done req=0x2243820 
http_status=403 == 
2014-05-16 21:37:23.600916 7fba57ba3780 20 enqueued request req=0x2241b80 
2014-05-16 21:37:23.600930 7fba57ba3780 20 RGWWQ: 
2014-05-16 21:37:23.600932 7fba57ba3780 20 req: 0x2241b80 
2014-05-16 21:37:23.600936

Re: [ceph-users] Journal SSD durability

2014-05-16 Thread Carlos M. Perez
Unfortunately, the Seagate 600 Pro has been discontinued:
http://comms.seagate.com/servlet/servlet.FileDownload?file=00P300JHLCCEA5.
The replacement is the 1200 series, which is more than 2x the price but has a
SAS 12 Gbps interface.  You can still find the 600s out there at around
$300/drive.  Still a very good price based on specs and backed by the reviews.

The Kingston E100s have a DWPD rating of 11 at the 100/200GB capacities, and
similar specs to the S3700 (400GB), but are more expensive per GB and per PBW
than the Intel S3700, so I'd probably stick with the S3700s.

Carlos M. Perez
CMP Consulting Services
305-669-1515

> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Simon Ironside
> Sent: Friday, May 16, 2014 4:08 PM
> To: Christian Balzer
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Journal SSD durability
> 
> On 16/05/14 16:34, Christian Balzer wrote:
> > Thanks for bringing that to my attention.
> > It looks very good until one gets to the Sandforce controller in the specs.
> >
> > As in, if you're OK with occasional massive spikes in latency, go for
> > it (same for the Intel 530).
> > If you prefer consistent perfomance, avoid.
> 
> Cool, that saves me from burning £100 unnecessarily. Thanks.
> I've one more suggestion before I just buy an Intel DC S3500 . . .
> 
> Seagate 600 Pro 100GB
> 520/300 Sequential Read/Write
> 80k/20k Random 4k Read/Write IOPS
> Power Loss Protection
> 280/650TB endurance (two figures, weird, but both high) 5yr warranty and
> not a bad price
> 
> http://www.seagate.com/www-content/product-content/ssd-fam/600-pro-
> ssd/en-gb/docs/600-pro-ssd-data-sheet-ds1790-3-1310gb.pdf
> 
> It's not a SandForce controller :) It's a LAMD LM87800.
> 
> Cheers,
> Simon.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal SSD durability

2014-05-16 Thread Simon Ironside

On 16/05/14 22:30, Carlos M. Perez wrote:

Unfortunately, the Seagate 600 Pro has been discontinued:
http://comms.seagate.com/servlet/servlet.FileDownload?file=00P300JHLCCEA5.
The replacement is the 1200 series, which is more than 2x the price but has a
SAS 12 Gbps interface.  You can still find the 600s out there at around
$300/drive.  Still a very good price based on specs and backed by the reviews.


Thanks, that's encouraging. What a shame they're being discontinued.
You can certainly still get them in the UK at ~£100 (~$160) a drive. 
Sounds like it might be worth a shot.


Cheers,
Simon
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with radosgw and some file name characters

2014-05-16 Thread Craig Lewis

On 5/15/14 15:01 , Andrei Mikhailovsky wrote:


Yehuda,

what do you mean by the rewrite rule? is this for Apache? I've used 
the ceph documentation to create it. My rule is:



RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]


Or are you talking about something else?

Cheers

Andrei

*From: *"Yehuda Sadeh" 
*To: *"Andrei Mikhailovsky" 
*Cc: *ceph-users@lists.ceph.com
*Sent: *Thursday, 15 May, 2014 4:05:06 PM
*Subject: *Re: [ceph-users] Problem with radosgw and some file name 
characters


Your rewrite rule might be off a bit. Can you provide log with 'debug 
rgw = 20'?


Yehuda



The RewriteRule you're using tells Apache to only send URLs with 
AlphaNumeric characters to the FastCGI gateway.  You'll only see those 
errors in the Apache logs, not the RadosGW logs.  I assume the old rule 
was an attempt at armoring the system against invalid inputs, but caused 
more harm than good.


Yehuda's new rule:

RewriteRule ^/(.*) /s3gw.3.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

Will send all requests to the FastCGI module, regardless of the ASCII 
characters in use.  I can't vouch for Unicode support though.
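
A quick way to re-test after changing the rule, using s3cmd against the same
bucket name that appears in the logs above (just an example bucket):

    s3cmd put 'Testing = Testing.txt' s3://file-test/
    s3cmd ls s3://file-test/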





--

*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com 

*Central Desktop. Work together in ways you never thought possible.*
Connect with us: Website | Twitter | Facebook | LinkedIn | Blog



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com