Weird. Maybe you can check the source code (src/mon/PGMonitor.cc,
around L1434).
But it looks like there is another command, "ceph pg dump_json {all |
summary | sum | pools | ...}", which you can try.
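A rough sketch of what I mean (untested; the exact dumpcontents keywords can
vary by release, so check the built-in help first):
  ceph pg dump_json summary
  ceph pg dump summary --format=json-pretty
  ceph pg dump pools --format=json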
On Fri, May 16, 2014 at 2:56 PM, Cao, Buddy wrote:
> In my env, "ceph pg dump all -f json" only retu
Thanks Peng, it works!
Wei Cao (Buddy)
-Original Message-
From: xan.peng [mailto:xanp...@gmail.com]
Sent: Friday, May 16, 2014 3:34 PM
To: Cao, Buddy
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] "ceph pg dump summary -f json" question
Weird. Maybe you can check the source code (s
On 15/05/14 18:07, Dietmar Maurer wrote:
> Besides, it would be great if ceph could use existing cluster stacks like
> corosync, ...
> Is there any plan to support that?
For clarity: To what end?
Recall that Ceph already incorporates its own cluster-management
framework, and the various Ceph da
> Recall that Ceph already incorporates its own cluster-management framework,
> and the various Ceph daemons already operate in a clustered manner.
Sure. But I guess it could reduce the 'ceph' code size if you use an existing
framework.
We (Proxmox VE) run corosync by default on all nodes, so it wo
On 16.05.2014 10:49, Dietmar Maurer wrote:
>> Recall that Ceph already incorporates its own cluster-management framework,
>> and the various Ceph daemons already operate in a clustered manner.
>
> Sure. But I guess it could reduce the 'ceph' code size if you use an existing
> framework.
Ceph has no
> Ceph has nothing to do with an HA cluster based on pacemaker.
> It has a completely different logic built in.
> The only similarity is that both use a quorum algorithm to detect split-brain
> situations.
I am talking about cluster services like 'corosync', which provide membership
and quorum services.
I would say the levels of redundancy could roughly be translated like
this:
RAID0  -> one replica (size=1)
RAID1  -> two replicas (size=2)
RAID10 -> two replicas (size=2)
RAID5  -> erasure coding (erasure-code-m=1)
RAID6  -> erasure coding (erasure-code-m=2)
RAID
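In pool terms that maps roughly to commands like these (an untested sketch;
pool names, PG counts and the erasure profile's k value are placeholders):
  # RAID1/RAID10 analogue: replicated pool with two copies
  ceph osd pool create rep2pool 128 128 replicated
  ceph osd pool set rep2pool size 2
  # RAID6 analogue: erasure-coded pool that survives losing two OSDs
  ceph osd erasure-code-profile set raid6like k=4 m=2
  ceph osd pool create ecpool 128 128 erasure raid6like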
Hi Jerker,
Thanks for the reply.
The link you posted describes only object storage. I need information on RAID
level implementation for block devices.
Thanks
Kumar
-Original Message-
From: Jerker Nyberg [mailto:jer...@update.uu.se]
Sent: Friday, May 16, 2014 2:43 PM
To: Gnan Kumar,
On 16.05.2014 11:42, yalla.gnan.ku...@accenture.com wrote:
> Hi Jerker,
>
> Thanks for the reply.
>
> The link you posted describes only object storage. I need information on RAID
> level implementation for block devices.
>
There is no RAID level for RBDs. These are "virtual" block devices an
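A quick way to see that the redundancy lives at the pool level, not the RBD
level (a sketch; pool and image names are placeholders):
  rbd create --pool rbd --size 10240 testimage   # 10 GB image backed by objects in pool "rbd"
  rbd info rbd/testimage                         # shows the object prefix backing the image
  ceph osd pool get rbd size                     # the replica count protecting those objects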
Does anyone know of any tools that help you visually monitor a ceph cluster
automatically?
Something that is host, osd, mon aware and shows various status of components,
etc?
Thanks,
-Drew
On 13/05/14 13:23, Christian Balzer wrote:
Alas a DC3500 240GB SSD will perform well enough at half the price of
the DC3700 and give me enough breathing room at about 80GB/day writes,
so this is what I will order in the end.
Did you consider the DC3700 100G at a similar price?
The 3500 is already
Hi there,
I'm sure that the Ceph community was somewhat excited when Seagate released
their enterprise 6TB SAS/SATA hard drives recently; previously, the only other
6TB drives available for enterprises were the HGST helium ones, which are
nearly impossible to find unless you are buying
Try this
https://github.com/inkscope/inkscope
2014-05-16 16:01 GMT+04:00 Drew Weaver :
> Does anyone know of any tools that help you visually monitor a ceph
> cluster automatically?
>
> Something that is host, osd, mon aware and shows various status of
> components, etc?
>
> Thanks,
Overnight, I tried to use ceph_filestore_dump to export a pg that is
missing from other osds from an osd, with the intent of manually copying
the export to the osds in the pg map and importing.
Unfortunately, what is 59 GB of data on disk had filled 1 TB when I got in
this morning, and still had
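For reference, the export invocation is roughly of this shape (option names
from memory and may differ between releases; the OSD paths and pgid are
placeholders, so check the tool's --help before relying on it):
  ceph_filestore_dump --filestore-path /var/lib/ceph/osd/ceph-<id> \
      --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
      --pgid <pgid> --type export --file /tmp/<pgid>.export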
Jeroen,
Actually, this is more a question for the OpenStack ML.
None of the use cases you described are possible at the moment.
The only thing you can get is shared resources across all the tenants; you
can't really pin any resource to a specific tenant.
This could be done, I guess, but not availab
Hello Ceph-community,
is it possible to somehow configure RGW to use alternate pools other than
the predefined ones? Is it possible to add an additional pool to RGW to store
data in, as can be done with CephFS?
Thank you very much and best regards!
Ilya
Ok, thanks for the suggestions, I will try to achieve this in the next
days and I will share my experience with you.
Cheers,
Fabrizio
On 14 May 2014 20:12, Gregory Farnum wrote:
> On Wed, May 14, 2014 at 10:52 AM, Pavel V. Kaygorodov wrote:
>> Hi!
>>
>>> CRUSH can do this. You'd have two choose
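Presumably "two choose" means a rule with two choose steps, along these lines
(a sketch only; the bucket types and counts are placeholders for whatever
hierarchy you actually have, e.g. two hosts in each of two racks):
  rule two-step {
      ruleset 1
      type replicated
      min_size 1
      max_size 10
      step take default
      step choose firstn 2 type rack
      step chooseleaf firstn 2 type host
      step emit
  }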
Hi,
we are currently planning the next Ceph MeetUp in Berlin, Germany, for
May 26 at 6 pm.
If you want to participate please head over to
http://www.meetup.com/Ceph-Berlin/
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030
Hi all, I successfully installed a ceph cluster, firefly version, made up
of 3 osds and one monitor host.
After that I created a pool and 1 rbd image for kvm.
It works fine.
I verified my pool has a replica size = 3, but I read the default should
be = 2.
Trying to shut down an osd and getting
Uwe,
could you please help me a bit with configuring multipathing on two different
storage servers and connecting it to XenServer?
I am looking at the multipathing howto and it tells me that for multipathing to
work the iSCSI query from the target server should return two paths. However,
if
On Friday, May 16, 2014, Ignazio Cassano wrote:
> Hi all, I successfully installed a ceph cluster, firefly version, made up
> of 3 osds and one monitor host.
> After that I created a pool and 1 rbd image for kvm.
> It works fine.
> I verified my pool has a replica size = 3, but I read the de
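The replica count can be checked and changed per pool (the pool name below is
a placeholder), and the default for newly created pools comes from ceph.conf:
  ceph osd pool get mypool size     # current replica count
  ceph osd pool set mypool size 2   # change it
  # in ceph.conf [global], for pools created afterwards:
  # osd pool default size = 2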
I have written a small and lightweight GUI, which can also act as a JSON REST
API (for non-interactive monitoring):
https://github.com/Crapworks/ceph-dash
Maybe that's what you are searching for.
Regards,
Christian
From: ceph-users [ceph-users-boun...@lists.ceph.com]
On Fri, 16 May 2014 13:51:09 +0100 Simon Ironside wrote:
> On 13/05/14 13:23, Christian Balzer wrote:
> >>> Alas a DC3500 240GB SSD will perform well enough at half the price of
> >>> the DC3700 and give me enough breathing room at about 80GB/day
> >>> writes, so this is what I will order in the e
I was talking about this. There is a different and simpler rule that we
use nowadays; for some reason it's not well documented:
RewriteRule ^/(.*) /s3gw.3.fcgi?%{QUERY_STRING}
[E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
I still need to see a more verbose log to make a better educated guess.
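For context, that rule normally sits inside the radosgw VirtualHost, roughly
like this sketch (server name, socket path and fcgi script name are
placeholders, and the FastCgiExternalServer line assumes mod_fastcgi):
  <VirtualHost *:80>
      ServerName rgw.example.com
      DocumentRoot /var/www
      RewriteEngine On
      RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
      FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.sock
  </VirtualHost>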
Ye
On 5/16/14 03:12, Ilya Storozhilov wrote:
Hello Ceph-community,
is it possible to somehow configure RGW to use alternate pools other than
the predefined ones? Is it possible to add an additional pool to RGW to store
data in, as can be done with CephFS?
Thank you very much and best reg
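One possible direction (an untested sketch; placement target and pool names
are placeholders): the zone's placement pools can be inspected and edited with
radosgw-admin so that a placement target points at alternate pools.
  radosgw-admin zone get > zone.json
  # edit placement_pools in zone.json, e.g. point "default-placement" at
  # .rgw.buckets.alt / .rgw.buckets.index.alt, then apply it:
  radosgw-admin zone set < zone.json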
Have to plug Kraken too!
https://github.com/krakendash/krakendash
Here is a screenshot http://i.imgur.com/fDnqpO9.png
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Drew
Weaver
Sent: Friday, May 16, 2014 5:01 AM
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] vi
On 16/05/14 16:34, Christian Balzer wrote:
Thanks for bringing that to my attention.
It looks very good until one gets to the Sandforce controller in the specs.
As in, if you're OK with occasional massive spikes in latency, go for it
(same for the Intel 530).
If you prefer consistent performance,
Yehuda,
Here is what I get with debug logging. I've sanitised output a bit:
2014-05-16 21:37:23.565906 7fb9e67fc700 1 == starting new request
req=0x2243820 =
2014-05-16 21:37:23.565964 7fb9e67fc700 2 req 14:0.58::HEAD
/Testing%20=%20Testing.txt::initializing
2014-05-16 21:37:23
Unfortunately, the Seagate Pro 600 has been discontinued,
http://comms.seagate.com/servlet/servlet.FileDownload?file=00P300JHLCCEA5.
The replacement is the 1200 series, which is more than 2x the price but has a
SAS 12 Gbps interface. You can still find the 600s out there at around
$300/drive.
On 16/05/14 22:30, Carlos M. Perez wrote:
Unfortunately, the Seagate Pro 600 has been discontinued,
http://comms.seagate.com/servlet/servlet.FileDownload?file=00P300JHLCCEA5.
The replacement is the 1200 series, which is more than 2x the price but has a
SAS 12 Gbps interface. You can still fin
On 5/15/14 15:01, Andrei Mikhailovsky wrote:
Yehuda,
What do you mean by the rewrite rule? Is this for Apache? I've used
the ceph documentation to create it. My rule is:
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
/s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
[E=HTTP_AUTHORIZATION:%{HTTP: