Hi,
On 01/08/2016 03:02 PM, Paweł Sadowski wrote:
Hi,
Quick results for 1/5/10 jobs:
*snipsnap*
Run status group 0 (all jobs):
WRITE: io=21116MB, aggrb=360372KB/s, minb=360372KB/s, maxb=360372KB/s, mint=6msec, maxt=6msec
*snipsnap*
Run status group 0 (all jobs):
WRITE: io=57
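For context, a "Run status group" summary like the one quoted above comes from a sequential-write fio run; the sketch below is only an assumption about the job shape (device path, block size, queue depth and job count are placeholders, not taken from the thread), and it is destructive to the target device:

  # minimal sequential-write job; vary --numjobs for the 1/5/10 cases
  fio --name=seq-write --filename=/dev/rbd0 --rw=write --bs=4M \
      --ioengine=libaio --iodepth=16 --direct=1 --numjobs=5 \
      --runtime=60 --group_reporting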
Hello Community, wishing you a great new year :)
This is the recommended upgrade path; a rough command sketch follows the list below.
http://docs.ceph.com/docs/master/install/upgrading-ceph/
Ceph Deploy
Ceph Monitors
Ceph OSD Daemons
Ceph Metadata Servers
Ceph Object Gateways
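A rough sketch of that order in command form, assuming Ubuntu/upstart hosts and an upgrade to hammer via ceph-deploy; the host names are placeholders and init commands vary by distro:

  ceph-deploy install --release hammer admin-node          # ceph-deploy / admin node first
  ceph-deploy install --release hammer mon1 mon2 mon3      # then monitors...
  sudo restart ceph-mon id=mon1                            # ...restarting each mon after its upgrade
  ceph-deploy install --release hammer osd1 osd2 osd3      # then OSD hosts
  sudo restart ceph-osd-all                                # restart OSDs on each host
  ceph-deploy install --release hammer mds1                # then metadata servers
  sudo restart ceph-mds id=mds1
  ceph-deploy install --release hammer rgw1                # finally object gateways
  sudo service radosgw restart                             # or ceph-radosgw, depending on distro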
How about upgrading Ceph clients (in my case OpenStack compute an
Hi,
I don't recommend setting the weight to zero, because you may see
MAX_AVAIL=0 in `ceph df` due to #13840: http://tracker.ceph.com/issues/13840
Any small, non-zero value is fine.
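For illustration (the OSD id below is a placeholder), the point is a tiny but non-zero CRUSH weight instead of 0:

  ceph osd crush reweight osd.12 0.0001   # small non-zero weight avoids the MAX_AVAIL=0 display issue
  ceph df                                 # MAX_AVAIL should now report a sane value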
Tom
On 01-12 09:01, Rafael Lopez wrote:
> I removed some osds from a host yesterday using the reweight method and it
thanks, is it unique in the ceph cluster? or in the world?
--- Original Message ---
From: "Jason Dillaman"
Sent: January 8, 2016, 23:12:22
To: "min fang"
Cc: "ceph-users"
Subject: Re: [ceph-users] can rbd block_name_prefix be changed?
It's constant for an RBD image and is tied to the image's internal unique ID.
--
Jason
It's unique per-pool.
--
Jason Dillaman
- Original Message -
> From: "louisfang2013"
> To: "Jason Dillaman"
> Cc: "ceph-users"
> Sent: Tuesday, January 12, 2016 5:56:18 AM
> Subject: Re: [ceph-users] can rbd block_name_prefix be changed?
> thanks, is it unique in the ceph cluster
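For reference, the prefix in question is shown by `rbd info`; the pool/image names and the output below are purely illustrative:

  $ rbd -p volumes info myimage
  rbd image 'myimage':
          size 10240 MB in 2560 objects
          order 22 (4096 kB objects)
          block_name_prefix: rbd_data.1029f6b8b4567
          format: 2
          features: layering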
Good day! I am working on a robust backup script for RBD and ran into a
need to reliably determine start and end snapshots for differential exports
(done with rbd export-diff).
I can clearly see these by dumping the ASCII header of the export file,
e.g.:
iss@lab2-b1:/data/volume1$ strings exp-ts
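A minimal sketch of the snapshot bookkeeping this relies on; the pool, image, and snapshot names are placeholders:

  # two point-in-time snapshots, then export only the delta between them
  rbd snap create volumes/vm-disk@backup-2016-01-11
  rbd snap create volumes/vm-disk@backup-2016-01-12
  rbd export-diff --from-snap backup-2016-01-11 \
      volumes/vm-disk@backup-2016-01-12 /data/volume1/exp-test.diff
  # the start/end snapshot names are embedded in the diff header and are
  # visible with strings(1)
  strings /data/volume1/exp-test.diff | head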
Hello,
I have a question about how the cache tier works with rbd volumes.
So I created a pool of SSDs for cache and a pool of HDDs for cold storage
that acts as a backend for Cinder volumes. I create a volume in Cinder from
an image and spawn an instance. The volume is created in the cache pool as e
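For readers following along, a writeback cache tier of this kind is wired up roughly as below; the pool names (ssd-cache, hdd-volumes) and the thresholds are placeholders:

  ceph osd tier add hdd-volumes ssd-cache          # attach the cache pool to the base pool
  ceph osd tier cache-mode ssd-cache writeback     # writes land in the cache, get flushed later
  ceph osd tier set-overlay hdd-volumes ssd-cache  # redirect client I/O through the cache
  # limits the tiering agent uses to decide when to flush/evict
  ceph osd pool set ssd-cache target_max_bytes 1099511627776
  ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
  ceph osd pool set ssd-cache cache_target_full_ratio 0.8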
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mihai Gheorghe
> Sent: 12 January 2016 14:25
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
> replica crush rule!
>
> Hello,
>
> I ha
Hi Robert,
Please do whatever is needed to get it pulled into Hammer.
Nick
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Robert LeBlanc
> Sent: 11 January 2016 20:48
> To: Nick Fisk
> Cc: Ceph-User
> Subject: Re: [ceph-users] using cach
Thank you very much for the quick answer.
I suppose the cache tier works the same way for object storage as well?
How is a delete of a Cinder volume handled? I ask this because after
the volume got flushed to the cold storage, I then deleted it from Cinder.
It got deleted from the cache pool as well
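For completeness, dirty objects can also be flushed and clean ones evicted by hand; the pool name below is a placeholder:

  rados -p ssd-cache cache-flush-evict-all   # flush dirty objects to the base pool, then evict clean ones
  rados -p ssd-cache ls | wc -l              # see how many objects remain in the cache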
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mihai Gheorghe
> Sent: 12 January 2016 14:56
> To: Nick Fisk ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
> replica crush rule!
>
>
2016-01-12 17:08 GMT+02:00 Nick Fisk :
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Mihai Gheorghe
> > Sent: 12 January 2016 14:56
> > To: Nick Fisk ; ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Ceph cache tier and rbd
> -Original Message-
> From: Mihai Gheorghe [mailto:mcaps...@gmail.com]
> Sent: 12 January 2016 15:42
> To: Nick Fisk ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
> replica crush rule!
>
>
> 2016-01-12 17:08 GMT+02:00 Nick Fisk :
On 01/12/2016 06:10 AM, Alex Gorbachev wrote:
Good day! I am working on a robust backup script for RBD and ran into a
need to reliably determine start and end snapshots for differential
exports (done with rbd export-diff).
I can clearly see these if dumping the ASCII header of the export file,
One more question. Seeing that the cache tier holds data on it until it
reaches a % ratio, I suppose I must set replication to 2 or higher on the
cache pool so as not to lose hot data not yet written to the cold storage in case of a
drive failure, right?
Also, will there be any performance penalty if I set the osd
Yes, I would recommend you match the replication levels of the cache and base
pools, although as SSDs can rebuild faster, there is an argument that you
might be able to get away with 2x replication for them.
Yes, it's fine for the journals to sit on the same SSD as the data. There is a
slig
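In command form, assuming the cache pool is called ssd-cache:

  ceph osd pool set ssd-cache size 2       # replication level of the cache pool
  ceph osd pool set ssd-cache min_size 1   # still serve I/O with one copy left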
On 12/01/2016 18:27, Mihai Gheorghe wrote:
> One more question. Seeing that the cache tier holds data on it until it
> reaches a % ratio, I suppose I must set replication to 2 or higher on
> the cache pool so as not to lose hot data not yet written to the cold storage
> in case of a drive failure, right?
>
> A
Hi,
I want to add support for fast-diff and object map to our "old" firefly
v2 rbd images.
The current hammer release can't do this.
Is there any reason not to cherry-pick this one (on my own)?
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515ea1380ee9e4f867504e10
and use it with hamme
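For reference, on releases that do support dynamic feature management (Jewel and later) the features are switched on per image roughly like this; the pool/image name is a placeholder:

  rbd feature enable volumes/old-firefly-image exclusive-lock   # prerequisite for object-map
  rbd feature enable volumes/old-firefly-image object-map fast-diff
  rbd object-map rebuild volumes/old-firefly-image              # populate the map for existing data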
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
We are using cache tiering in two production clusters at the moment.
One cluster is running in forward mode due to
excessive promotion/demotion. I've got Nick's patch backported to
Hammer and am going through the test suite at the m
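For reference, switching an existing tier into forward mode is a single command (the pool name is assumed; newer releases additionally require a --yes-i-really-mean-it safety flag):

  ceph osd tier cache-mode ssd-cache forward   # stop promoting; new I/O is forwarded to the base pool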
On 1/12/2016 4:51 AM, Burkhard Linke wrote:
Hi,
On 01/08/2016 03:02 PM, Paweł Sadowski wrote:
Hi,
Quick results for 1/5/10 jobs:
*snipsnap*
Run status group 0 (all jobs):
WRITE: io=21116MB, aggrb=360372KB/s, minb=360372KB/s, maxb=360372KB/s, mint=6msec, maxt=6msec
*snipsnap*
When my journal disk doesn't have enough space, I want to change to another disk
which has enough space to hold the journal.
This may help you.
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-May/039576.html
Rgds,
Shinobu
- Original Message -
From: "小科" <1103262...@qq.com>
To: "ceph-users"
Sent: Wednesday, January 13, 2016 12:06:33 PM
Subject: [ceph-users] how to change the journal disk
when
Maybe the blog below can help you:
http://cephnotes.ksperis.com/blog/2014/06/29/ceph-journal-migration/
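The procedure from that post, roughly; the OSD id (12), the new journal device (/dev/sdc1) and the upstart-style service commands are assumptions:

  ceph osd set noout                                   # avoid rebalancing while the OSD is down
  stop ceph-osd id=12                                  # or: service ceph stop osd.12
  ceph-osd -i 12 --flush-journal                       # drain the old journal to the data store
  ln -sf /dev/sdc1 /var/lib/ceph/osd/ceph-12/journal   # point the OSD at the new journal device
  ceph-osd -i 12 --mkjournal                           # initialise the new journal
  start ceph-osd id=12
  ceph osd unset noout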
On Wed, 13 Jan 2016 11:06:33 +0800, 小科 <1103262...@qq.com> wrote:
When my journal disk doesn't have enough space, I want to change to another
disk which has enough space to hold the journal.
--
---
Hi,
Is there any way to check the block device space usage under the specified
pool? I need to know the capacity usage. If the block device is used over 80%,
I will send an alert to the user.
Thanks a lot!
Best Regards,
WD
--
On 01/13/2016 06:48 AM, wd_hw...@wistron.com wrote:
> Hi,
>
> Is there any way to check the block device space usage under the
> specified pool? I need to know the capacity usage. If the block device
> is used over 80%, I will send an alert to the user.
>
This can be done in Infernalis / Jewel, bu
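For reference, the Infernalis/Jewel way is `rbd du` (fast when the fast-diff feature is enabled); the pool/image name and the output below are illustrative:

  $ rbd du volumes/myimage
  NAME     PROVISIONED  USED
  myimage      102400M 83968M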
Thanks Wido.
So it seems there is no way to do this under Hammer.
WD
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
den Hollander
Sent: Wednesday, January 13, 2016 2:19 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to che
On 01/09/2016 02:34 AM, Wukongming wrote:
Hi, all
I notice this sentence "Running GFS or OCFS on top of RBD will not work with
caching enabled." on http://docs.ceph.com/docs/master/rbd/rbd-config-ref/. Why? Is
there any way to enable the rbd cache with OCFS2 on top of it? Because I have a fio test w
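For anyone hitting this, the client-side cache is turned off in ceph.conf on the client (a minimal sketch); shared-disk filesystems such as OCFS2/GFS need every client to see writes immediately, which the per-client librbd cache does not provide:

  [client]
      rbd cache = false       # disable client-side caching for shared-disk filesystems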
On 01/13/2016 07:27 AM, wd_hw...@wistron.com wrote:
> Thanks Wido.
> So it seems there is no way to do this under Hammer.
>
Not very easily, no. You'd have to count and stat all objects for an RBD
image to figure this out.
> WD
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-
On 01/12/2016 11:11 AM, Stefan Priebe wrote:
Hi,
I want to add support for fast-diff and object map to our "old" firefly
v2 rbd images.
The current hammer release can't do this.
Is there any reason not to cherry-pick this one? (on my own)
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515
Hi again...
Regarding this issue, I just tried to use partx instead of partprobe. I hit a
different problem...
My layout is 4 partitions on an SSD device serving as journals for 4
different OSDs. Something like:
/dev/sdb1 (journal of /dev/sdd1)
/dev/sdb2 (journal of /dev/sd31)
/dev/sdb3 (jour
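For reference, the usual alternatives for telling the kernel about a new journal partition without re-reading the entire table; /dev/sdb mirrors the (illustrative) layout above:

  partx -a /dev/sdb      # add partitions the kernel does not know about yet
  partx -u /dev/sdb      # or update the kernel's view of existing entries
  partprobe /dev/sdb     # the heavier option: re-read the whole partition table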
On 01/12/2016 10:34 PM, Wido den Hollander wrote:
On 01/13/2016 07:27 AM, wd_hw...@wistron.com wrote:
Thanks Wido.
So it seems there is no way to do this under Hammer.
Not very easily, no. You'd have to count and stat all objects for an RBD
image to figure this out.
For hammer you'd need anot
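A rough hammer-era approximation is to sum the allocated extents reported by `rbd diff`; the pool/image name is a placeholder and the result is approximate:

  rbd diff volumes/myimage | awk '{sum += $2} END {printf "%.1f GiB\n", sum/1024/1024/1024}'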
On 13.01.2016 at 07:37, Josh Durgin wrote:
> On 01/12/2016 11:11 AM, Stefan Priebe wrote:
>> Hi,
>>
>> I want to add support for fast-diff and object map to our "old" firefly
>> v2 rbd images.
>>
>> The current hammer release can't do this.
>>
>> Is there any reason not to cherry-pick this one? (o