I am planning to use the Ceph object gateway to store data in a Ceph
cluster. I need two different RADOS gateway users to store data in
different pools. How can I create and assign different pools to
different Ceph object gateway users?
Thanks in advance
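For reference, the usual way to do this is with rgw placement targets: give each user a default placement target whose data pool differs. A rough sketch, with made-up pool and user names; the JSON editing steps follow the Firefly-era radosgw documentation rather than anything in this thread:

# create a dedicated data pool for the second user
ceph osd pool create .rgw.buckets.special 64
# add a placement target for it to the region and zone
radosgw-admin region get > region.json    # add the target under "placement_targets"
radosgw-admin region set < region.json
radosgw-admin zone get > zone.json        # add it under "placement_pools" with
radosgw-admin zone set < zone.json        #   "data_pool": ".rgw.buckets.special"
radosgw-admin regionmap update
# point the user at the new target
radosgw-admin metadata get user:seconduser > user.json
#   edit user.json: set "default_placement" to the new target's name
radosgw-admin metadata put user:seconduser < user.json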
Hi list,
Can RADOS fulfil the following use case:
I wish to have a radosgw-S3 object store that is "LIVE",
this represents "current" objects of users.
Separated by an air-gap is another radosgw-S3 object store that is "ARCHIVE".
The objects will only be created and manipulated by radosgw.
Peri
On 15.10.2014 22:08, Iban Cabrillo wrote:
> Hi Cephers,
>
> I have another question related to this issue: what would be the
> procedure to recover from a server failure (a whole server, for example
> due to a motherboard problem, with no damage to the disks)?
>
> Regards, I
>
Hi,
- change the server board.
- (the rest of the list is truncated; see the sketch below)
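A rough sketch of the usual sequence, assuming the OSD data disks and journals survived and only the host hardware is replaced:

ceph osd set noout                # stop the cluster from backfilling while the host is down
# swap the board / reinstall the OS, reinstall the ceph packages,
# restore /etc/ceph/ceph.conf and the host's keyrings
ceph-disk activate /dev/sdb1      # once per OSD data partition (or reboot and let udev do it)
ceph osd unset noout              # after all OSDs on the host are back up and in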
Hi John,
Thanks, I will look into it. Is there a release date for Giant yet?
Jasper
From: john.sp...@inktank.com [john.sp...@inktank.com] on behalf of John Spray
[john.sp...@redhat.com]
Sent: Thursday, 16 October 2014 12:23
To: Jasper Siero
CC: Gregory
Hi all,
pool size/min_size does not have any effect on an erasure-coded pool, right?
thanks
Hi Udo,
Thanks a lot! The resync flag has resolved my doubts.
Regards, I
2014-10-16 12:21 GMT+02:00 Udo Lembke:
> On 15.10.2014 22:08, Iban Cabrillo wrote:
> > Hi Cephers,
> >
> > I have another question related to this issue: what would be the
> > procedure to recover from a server failure (a who
Following up: the firefly fix for undump is: https://github.com/ceph/ceph/pull/2734
Jasper: if you still need to try undumping on this existing firefly
cluster, then you can download ceph-mds packages from this
wip-firefly-undump branch from
http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref
Hi everyone, I use replica 3; I have many unfound objects and Ceph is very slow.
pg 6.9d8 is active+recovery_wait+degraded+remapped, acting [22,93], 4
unfound
pg 6.766 is active+recovery_wait+degraded+remapped, acting [21,36], 1
unfound
pg 6.73f is active+recovery_wait+degraded+remapped, acting [19,84],
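For reference, the standard commands for digging into (and, as a last resort, giving up on) unfound objects are roughly:

ceph health detail                 # lists the PGs with unfound objects
ceph pg 6.9d8 query                # shows which OSDs the PG is still probing
ceph pg 6.9d8 list_missing         # shows the unfound objects themselves
# only after every OSD that might hold the data has been ruled out:
ceph pg 6.9d8 mark_unfound_lost revert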
Hi Dan,
Maybe I misunderstand what you are trying to do, but I think you are trying to
add your Ceph RBD pool into libvirt as a storage pool?
If so, it's relatively straightforward - here's an example from my setup:
Related libvirt
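For reference, a minimal libvirt RBD storage pool definition looks roughly like the following; the monitor address, secret UUID and names are placeholders, not values from the poster's actual setup:

cat > rbd-pool.xml <<'EOF'
<pool type="rbd">
  <name>ceph-rbd</name>
  <source>
    <name>rbd</name>                                  <!-- the Ceph pool to expose -->
    <host name="10.0.0.21" port="6789"/>              <!-- a monitor -->
    <auth username="libvirt" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>
EOF
virsh pool-define rbd-pool.xml && virsh pool-start ceph-rbd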
Tuan,
I had similar behaviour when I connected the cache pool tier. I resolved
the issue by restarting all my OSDs. If your case is the same, try it and see
if it works. If not, I guess the people here and on the Ceph IRC channel might
be able to help you.
Cheers
Andrei
- Original Message
Thanks Dan (Doctor, doctor...)
Correct. I'd like to abstract the details of the rbd storage from the VM
definitions as much as possible (like not having the monitor IPs/ports
defined). I plan on experimenting with monitors and so forth on ceph and would
like to not have to touch every single VM
Hi,
does anyone know whether journals are involved when Ceph rebalances data (after
an OSD crash, for example) or when the replication size is changed?
I've found that the nicest way of handling this is to add all the
mons to your ceph.conf file. The QEMU client will use these if you
don't define any in the libvirt config.
Similarly, define a libvirt 'secret' and you can use that for auth, so
you only have one place to change it. My enti
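A sketch of that arrangement (the client.libvirt user and the secret name are examples, not the poster's actual config):

# list every monitor once in /etc/ceph/ceph.conf on the hypervisor, e.g.
#   mon_host = 10.0.0.21,10.0.0.5,10.0.0.6
# then define a single libvirt secret and reference its UUID from every disk/pool:
cat > ceph-secret.xml <<'EOF'
<secret ephemeral="no" private="no">
  <usage type="ceph">
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
virsh secret-define ceph-secret.xml       # prints the UUID libvirt assigned
virsh secret-set-value --secret <that-uuid> \
      --base64 "$(ceph auth get-key client.libvirt)"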
Bonjour Emmanuel,
The journals are used for all write operations on an OSD. They will be used if
an OSD crashes and others need to create new copies of objects.
Cheers
On 16/10/2014 06:51, Emmanuel Lacour wrote:
>
> Hi,
>
>
> does anyone know if journals are involved when ceph rebalance data
The best way to set up SSH is to use a ~/.ssh/config file. It solves
a lot of issues!
Example:
~/.ssh/config
Host ceph1 cephosd1
    HostName 192.168.1.10
    User ceph
Host ceph2 cephosd2
    HostName 192.168.1.11
    User ceph
With that you can just do a "ssh ceph1" for example... All other S
On 10/15/2014 08:43 AM, Amon Ott wrote:
On 14.10.2014 16:23, Sage Weil wrote:
On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
*
It's ok, I resolved the problem. I made a typo in the hosts file! But thanks
for the response
James
From: JIten Shah [mailto:jshah2...@me.com]
Sent: 16 October 2014 00:16
To: Support - Avantek
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ssh; cannot resolve hostname errors
Please sen
Hi all,
pool size/min_size does not have any effect on an erasure-coded pool, right?
And does an erasure-coded pool support rbd?
thanks
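For what it's worth: an erasure-coded pool's size comes out as k+m from its erasure-code profile rather than from the usual size setting, and in the Firefly/Giant era rbd cannot be used directly on an EC pool - the usual workaround is a replicated cache tier in front of it. A rough sketch (profile name, pool names and PG counts are only examples):

ceph osd erasure-code-profile set myprofile k=2 m=1
ceph osd pool create ecpool 128 128 erasure myprofile    # size ends up as k+m = 3
ceph osd pool create cachepool 128                       # ordinary replicated pool
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
rbd create --size 10240 --pool ecpool testimage          # I/O is redirected through the cache tier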
Hi,
I am deploying a 3-node Ceph storage cluster for my company,
following this webinar: http://www.youtube.com/watch?v=R3gnLrsZSno
I am stuck at formatting the OSDs and making them ready to mount the
directories. Below is the error thrown back:
root@ceph-node1:~# mkcephfs -a -c
Hi,
I am trying to deploy ceph and I am getting a runtime error (please see
attached). Any ideas?
James Rothero
Ceph Deploy error
On 10/15/2014 09:37 AM, Sakhi Hadebe wrote:
>
> Hi,
>
> I am deploying a 3-node Ceph storage cluster for my company,
> following this webinar: http://www.youtube.com/watch?v=R3gnLrsZSno
>
I recommend using ceph-deploy. mkcephfs is deprecated and should not be
used anymore.
Wido
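A minimal ceph-deploy sequence, for comparison (hostnames and the /dev/sdb device are placeholders):

ceph-deploy new ceph-node1 ceph-node2 ceph-node3
ceph-deploy install ceph-node1 ceph-node2 ceph-node3
ceph-deploy mon create-initial
ceph-deploy osd prepare ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb
ceph-deploy osd activate ceph-node1:/dev/sdb1 ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1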
Hi,
Did you get this error on a freshly installed Ubuntu 14.04?
[node1monitor][DEBUG ]
[node1monitor][WARNIN] E: Package 'ceph-mds' has no installation candidate
[node1monitor][WARNIN] E: Unable to locate package ceph-fs-common
[node1monitor][ERROR ] RuntimeError: command returned non-zero exit
On 16/10/2014 16:16, Loic Dachary wrote:
> Bonjour Emmanuel,
>
> The journals are used for all write operations on an OSD. They will be used
> if an OSD crashes and others need to create new copies of objects.
>
>
thanks Loic, I had doubts about this ;)
--
Easter-eggs
See this discussion:
http://comments.gmane.org/gmane.comp.file-systems.ceph.user/4992
Yehuda
On Thu, Oct 16, 2014 at 12:11 AM, Shashank Puntamkar wrote:
> I am planning to use the Ceph object gateway to store data in a Ceph
> cluster. I need two different RADOS gateway users to store data in
> dif
Hello Mark -
Can you please paste your keystone.conf? Also, it seems that the Icehouse install
that I have does not have a keystone.conf. Do we need to create one? Like I said,
adding WSGIChunkedRequest On in keystone.conf did not solve my issue.
Thanks,
Lakshmi.
On Wednesday, October 15, 2014 10:17
Hi Support,
On 16/10/2014 08:11, Support - Avantek wrote:
> Yes that's right. I've set up a small 32-bit ARM processor cluster and am just
> experimenting with Ceph on Ubuntu 14.04
I don't think there are arm repositories on ceph.com. Maybe there is a way to
instruct ceph-deploy to use the nat
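If the goal is simply to use the distribution's own armhf packages, something along these lines should work (hedged - check your ceph-deploy version for the exact flag):

ceph-deploy install --no-adjust-repos node1monitor     # skip adding the ceph.com repos
# or install Ubuntu's packaged ceph by hand on each node:
apt-get install ceph ceph-mds ceph-fs-common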
Hi list,
During my tests, I found Ceph doesn't scale as I expected on a 30-OSD
cluster.
The following is the information of my setup:
HW configuration:
15 Dell R720 servers, and each server has:
Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 20 cores with hyper-threading
enabled.
128GB memo
If you're running a single client to drive these tests, that's your
bottleneck. Try running multiple clients and aggregating their numbers.
-Greg
On Thursday, October 16, 2014, Mark Wu wrote:
> Hi list,
>
> During my test, I found ceph doesn't scale as I expected on a 30 osds
> cluster.
> The fo
Hello cephers,
I've been testing flashcache and enhanceio block device caching for the OSDs
and I've noticed I have started getting slow requests. The caching type
that I use is read-only, so all writes bypass the caching SSDs and go directly
to the OSDs, just like it used to be before I
Loic,
You mean these? http://ceph.com/debian/dists/trusty/main/binary-armhf/
Ian
- Original Message -
From: "Loic Dachary"
To: "Support - Avantek"
Cc: "ceph-users"
Sent: Thursday, October 16, 2014 10:17:29 AM
Subject: Re: [ceph-users] Error deploying Ceph
Hi Support,
On 16/10/2014
Thanks for the detailed information, but I am already using fio with the rbd
engine. Almost 4 volumes can reach the peak.
On 17 October 2014 at 1:03 AM, wud...@gmail.com wrote:
forgot to cc the list
---------- Forwarded message ----------
From: "Mark Wu"
Date: 17 October 2014, 12:51 AM
Subject: Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.
To: "Gregory Farnum"
Cc:
Thanks for the reply. I am not using a single client. Writing to 5 rbd volumes
on 3 hosts can reach the peak.
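For anyone trying to reproduce this, a multi-image fio run with the rbd engine looks roughly like the following (pool and image names are examples, and the images must already exist); the same command would then be launched from several client hosts in parallel and the results summed:

fio --ioengine=rbd --clientname=admin --pool=rbd --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --group_reporting \
    --name=vol1 --rbdname=test1 \
    --name=vol2 --rbdname=test2 \
    --name=vol3 --rbdname=test3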
Hi Dan,
I'd like to decommission a node to reproduce the problem and post enough
information for you (at least) to understand what is going on.
Unfortunately I'm a ceph newbie, so I'm not sure what info would be of
interest before/during the drain.
Probably the crushmap would be of interest
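The usual snapshot of cluster state before and during a drain would be something like:

ceph -s                                # overall health before starting
ceph osd tree                          # OSD/host layout
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt    # human-readable crush map
ceph osd dump | grep pool              # pool sizes and rulesets
ceph health detail                     # repeat while the drain is in progress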
[Re-added the list.]
I assume you added more clients and checked that it didn't scale past
that? You might look through the list archives; there are a number of
discussions about how and how far you can scale SSD-backed cluster
performance.
Just scanning through the config options you set, you mig
Thanks, Brian. That helps a lot. I suspect that wasn't needed if the MON hosts
were defined within ceph.conf, but hadn't tried it previously.
To go with the pools question, I'm able to define a pool for my RBD cluster
(and it obtains storage info, images present, etc). Is there a way to refer t
Mark, please read this:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html
On 16 Oct 2014, at 19:19, Mark Wu wrote:
>
> Thanks for the detailed information. but I am already using fio with rbd
> engine. Almost 4 volumes can reach the peak.
>
> On 17 October 2014 at 1:03 AM, wud.
Hi,
While I certainly can (attached) - if your install has keystone running
it *must* have one. It will be hiding somewhere!
Cheers
Mark
On 17/10/14 05:12, lakshmi k s wrote:
Hello Mark -
Can you please paste your keystone.conf? Also It seems that Icehouse install
that I have does not hav
Thank you Mark. Strangely, the Icehouse install that I have didn't seem to have
one - at least not in the /etc/apache2/ sub-directories. Like I said earlier, I
can make the Keystone-OpenStack integration work seamlessly if I move all the
keystone-related flags under the global section, but not otherwise. I am st
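For context, the flags in question are the rgw keystone options in ceph.conf; they normally live in the gateway's own section rather than [global]. An illustrative fragment (the section name, URL and token are placeholders):

/etc/ceph/ceph.conf:
[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = {admin-token}
rgw keystone accepted roles = admin, Member
rgw keystone token cache size = 500
rgw s3 auth use keystone = true
nss db path = /var/lib/ceph/nss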
We observe the same issue on our 12-SSD setup; disabling all logging may be
helpful.
Cheers,
xinxin
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Wu
Sent: Friday, October 17, 2014 12:18 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Performance doesn't sca
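Disabling all logging here presumably means turning the debug levels down to zero in ceph.conf, for example (only a few of the many debug subsystems shown):

/etc/ceph/ceph.conf:
[global]
debug ms = 0/0
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0
debug auth = 0/0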
Hello (Greg in particular),
On Thu, 16 Oct 2014 10:06:58 -0700 Gregory Farnum wrote:
> [Re-added the list.]
>
> I assume you added more clients and checked that it didn't scale past
> that? You might look through the list archives; there are a number of
> discussions about how and how far you c
Hello,
Consider this rather basic configuration file:
---
[global]
fsid = e6687ef7-54e1-44bd-8072-f9ecab00815
mon_initial_members = ceph-01, comp-01, comp-02
mon_host = 10.0.0.21,10.0.0.5,10.0.0.6
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_x
Hi there rgw folks,
Just wondering if the server-side copy operation ties up the radosgw
host to actually proxy the data or if the copy is handled
transparently by rados and the backend OSDs?
--
Cheers,
~Blairo
thanks zhu for your reply.
Regards
Pragya Jain
On Thursday, 16 October 2014 7:08 AM, zhu qiang wrote:
>
>
>Maybe you can try “ceph df detail”
>and sum the pools’ usage for your end users
>
>GLOBAL:
>SIZE AVAIL RAW USED %RAW USED OBJECTS
>3726G 3570G 3799M
Hi,
>>Thanks for the detailed information. but I am already using fio with rbd
>>engine. Almost 4 volumes can reach the peak.
What is your CPU usage for fio-rbd?
Myself I'm cpu bound on 8cores with around 4iops read 4K.
- Original Message -
From: "Mark Wu"
To: "Daniel Schwager"
Cc: ce