Hi all,
Sometimes the cache tier will delete the object data in the base
pool when it starts a flush, and I don't know the reason why it does so.
The code is in the PrimaryLogPG.cc file (Luminous); the comment in the
start_flush function says ("In general, we need to send a delete and a copyfrom
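If it helps to see the behaviour from the outside, here is a minimal sketch
with made-up pool and object names, assuming a writeback tier hot-pool layered
over cold-pool (as far as I understand, rados ls on the base pool only shows
what has actually been written there):
rados -p cold-pool put testobj /etc/hosts   # write is redirected into the cache tier
rados -p cold-pool ls | grep testobj        # nothing in the base pool yet
rados -p hot-pool cache-flush testobj       # force a flush of this one object
rados -p cold-pool ls | grep testobj        # the object now exists in the base pool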
Hi all,
Now I have a problem with the process of the start_flush function; the
following is the detailed information:
the ceph version: 12.1
the problem: I don't understand the annotation in the start_flush
function (from line 8573 to 8590) or the process. Hope someone can
Hello,
On Tue, Jan 10, 2017 at 11:11 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Daznis
>> Sent: 09 January 2017 12:54
>> To: ceph-users
>> Subject: [ceph-users]
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Daznis
> Sent: 09 January 2017 12:54
> To: ceph-users
> Subject: [ceph-users] Ceph cache tier removal.
>
> Hello,
>
>
> I'm running preliminar
Hello,
I'm running preliminary tests of cache tier removal on a live cluster,
before I try to do that on a production one. I'm trying to avoid
downtime, but from what I've noticed it's either impossible or I'm doing
something wrong. My cluster is running CentOS 7.2 and Ceph 0.94.9.
Example 1:
I'm s
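For what it's worth, the sequence I understand to be the usual one on hammer
for removing a writeback tier (pool names here are placeholders, not from your
cluster) is roughly:
ceph osd tier cache-mode hot-pool forward     # stop taking new writes into the cache
rados -p hot-pool cache-flush-evict-all       # flush/evict everything to the base pool
ceph osd tier remove-overlay cold-pool        # stop redirecting client IO
ceph osd tier remove cold-pool hot-pool       # detach the cache pool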
Hello Vincent,
There was indeed a bug in hammer 0.94.6 that caused data corruption, but only
if you were using min_read_recency_for_promote > 1.
That was discussed on the mailing list [0] and fixed in 0.94.7 [1].
AFAIK, the infernalis releases were never affected.
[0] http://www.spinics.net/lists/ceph-u
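For anyone who wants to check where an existing hammer cluster stands, a quick
sketch (the pool name is just an example):
ceph osd pool get hot-pool min_read_recency_for_promote    # see the current value
ceph osd pool set hot-pool min_read_recency_for_promote 1  # 1 avoids the code path in question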
Is there now a stable version of Ceph in Hammer and/or Infernalis with
which we can safely use cache tiering in writeback mode?
I saw a post a few months ago saying that we had to wait for a future release
to use it safely.
You've probably got some issues with the exact commands you're running and
how they interact with read-only caching — that's a less-common cache type.
You'll need to get somebody who's experienced using those cache types or
who has worked with them recently to help out, though.
-Greg
On Tue, Apr 26,
Hi Greg,
yes, the directory is hashed four levels deep and contains files
# ls -l /var/lib/ceph/osd/ceph-1/current/1.0_head/DIR_0/DIR_0/DIR_0/DIR_0/
total 908
-rw-r--r--. 1 root root 601 Mar 15 15:01
1021bdf.__head_E5BD__1
-rw-r--r--. 1 root root 178571 Mar 15 15:06
1026de5.
On Thursday, April 21, 2016, Benoît LORIOT wrote:
> Hello,
>
> we want to disable the readproxy cache tier but before doing so we would like
> to make sure we won't lose data.
>
> Is there a way to confirm that flush actually writes objects to disk?
>
> We're using ceph version 0.94.6.
>
>
> I tried
Hello,
we want to disable the readproxy cache tier but before doing so we would like
to make sure we won't lose data.
Is there a way to confirm that flush actually writes objects to disk?
We're using ceph version 0.94.6.
I tried that, with cephfs_data_ro_cache being the hot storage pool and
cephf
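For what it's worth, the rough check I would do (assuming the base pool is the
usual cephfs_data, which is a guess on my part):
rados df                                          # note the object count per pool
rados -p cephfs_data_ro_cache cache-flush-evict-all
rados df                                          # cache pool count should drop; the objects should sit in the base pool
You can also look for the objects in the base pool's PG directories on an OSD,
as in the directory listing elsewhere in this thread.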
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Josef Johansson
> Sent: 20 April 2016 06:57
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph cache tier clean rate too low
>
> Hi,
> response in lin
Hi,
response inline
On 20 Apr 2016 7:45 a.m., "Christian Balzer" wrote:
>
>
> Hello,
>
> On Wed, 20 Apr 2016 03:42:00 + Stephen Lord wrote:
>
> >
> > OK, you asked ;-)
> >
>
> I certainly did. ^o^
>
> > This is all via RBD, I am running a single filesystem on top of 8 RBD
> > devices in an
Hello,
On Wed, 20 Apr 2016 03:42:00 + Stephen Lord wrote:
>
> OK, you asked ;-)
>
I certainly did. ^o^
> This is all via RBD, I am running a single filesystem on top of 8 RBD
> devices in an effort to get data striping across more OSDs, I had been
> using that setup before adding the cac
OK, you asked ;-)
This is all via RBD, I am running a single filesystem on top of 8 RBD devices
in an effort to get data striping across more OSDs; I had been using that setup
before adding the cache tier.
3 nodes with 11 x 6 TB SATA drives each for a base RBD pool, this is set up with
replica
Hello,
On Tue, 19 Apr 2016 20:21:39 + Stephen Lord wrote:
>
>
> I Have a setup using some Intel P3700 devices as a cache tier, and 33
> sata drives hosting the pool behind them.
A bit more details about the setup would be nice, as in how many nodes,
interconnect, replication size of the
I have a setup using some Intel P3700 devices as a cache tier, and 33 SATA
drives hosting the pool behind them. I set up the cache tier with writeback,
gave it a size and max object count, etc.:
ceph osd pool set target_max_bytes 5000
ceph osd pool set nvme target_max_bytes 5000
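In case it is useful for comparison, the other knobs the tiering agent
typically works from look roughly like this (the values below are illustrative,
not a recommendation):
ceph osd pool set nvme target_max_objects 1000000    # absolute object cap for the cache
ceph osd pool set nvme cache_target_dirty_ratio 0.4  # start flushing at 40% of the target
ceph osd pool set nvme cache_target_full_ratio 0.8   # start evicting at 80% of the target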
January 2016 11:48
To: Robert LeBlanc
Cc: ceph-users@lists.ceph.com; Nick Fisk
Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
replica crush rule!
What are the recommended specs of an SSD for journaling? It's a little bit
tricky now to move the journals for spi
case of mechanical drive this is a problem!
> >
> > And thank you for clearing these things up for me.
> >
> > 2016-01-12 18:03 GMT+02:00 Nick Fisk :
> >>
> >> > -Original Message-
> >> > From: Mihai Gheorghe [mailto:mcaps...@gmail.
is a problem!
>
> And thank you for clearing these things up for me.
>
> 2016-01-12 18:03 GMT+02:00 Nick Fisk :
>>
>> > -Original Message-
>> > From: Mihai Gheorghe [mailto:mcaps...@gmail.com]
>> > Sent: 12 January 2016 15:42
>> > To: Nick
On 12/01/2016 18:27, Mihai Gheorghe wrote:
> One more question. Seeing that the cache tier holds data on it until it
> reaches the % ratio, I suppose I must set replication to 2 or higher on
> the cache pool to not lose hot data not yet written to the cold storage in
> case of a drive failure, right?
>
> A
-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
replica crush rule!
One more question. Seeing that the cache tier holds data on it until it reaches the %
ratio, I suppose I must set replication to 2 or higher on the cache pool to not
lose hot data not
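A minimal sketch of checking and raising the replication of a cache pool (the
pool name here is a placeholder):
ceph osd pool get ssd-cache size        # current replica count
ceph osd pool set ssd-cache size 3      # e.g. three copies of the hot data
ceph osd pool set ssd-cache min_size 2  # keep serving IO with one replica down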
From: Mihai Gheorghe [mailto:mcaps...@gmail.com]
> > Sent: 12 January 2016 15:42
> > To: Nick Fisk ; ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary,
> HDD
> > replica crush rule!
> >
> >
> > 2016-01-1
> -Original Message-
> From: Mihai Gheorghe [mailto:mcaps...@gmail.com]
> Sent: 12 January 2016 15:42
> To: Nick Fisk ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
> replica crush rule!
>
>
> 2016-01-1
2016-01-12 17:08 GMT+02:00 Nick Fisk :
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Mihai Gheorghe
> > Sent: 12 January 2016 14:56
> > To: Nick Fisk ; ceph-users@lists.ceph.com
> > Subject: Re:
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mihai Gheorghe
> Sent: 12 January 2016 14:56
> To: Nick Fisk ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
>
ph-users@lists.ceph.com
> > Subject: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
> > replica crush rule!
> >
> > Hello,
> >
> > I have a question about how cache tier works with rbd volumes!?
> >
> > So i created a pool of SSD
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mihai Gheorghe
> Sent: 12 January 2016 14:25
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph cache tier and rbd volumes/SSD primary, HDD
> replica crush rul
Hello,
I have a question about how the cache tier works with RBD volumes.
So I created a pool of SSDs for cache and a pool of HDDs for cold storage
that acts as the backend for Cinder volumes. I create a volume in Cinder from
an image and spawn an instance. The volume is created in the cache pool as
e
Hi all,
I'm testing the ceph cache tier (0.80.9). The IOPS are really very good with
the cache tier, but it's very slow to delete an RBD image (even an empty one).
It seems as if the cache pool marks all the objects in the RBD image for
deletion, even if the objects do not exist.
Is this a problem with RBD?
How c
Hi all,
I have a ceph cluster (0.80.7) in production.
Now I am hitting an IOPS bottleneck, so I want to add a cache
tier with SSDs to provide better I/O performance. Here is the procedure:
1. Create a cache pool
2. Set up a cache tier:
ceph osd tier add cold-storage hot-storage
3. Set cach
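For reference, a sketch of how the remaining steps usually look with the pool
names above (the cache mode and threshold values are illustrative):
ceph osd tier cache-mode hot-storage writeback          # step 3: set the cache mode
ceph osd tier set-overlay cold-storage hot-storage      # redirect client IO through the cache
ceph osd pool set hot-storage hit_set_type bloom        # hit sets are needed for promotion decisions
ceph osd pool set hot-storage target_max_bytes 100000000000  # illustrative 100 GB cap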
Thanks a lot to Be-El from #ceph (irc://irc.oftc.net/ceph).
The problem was resolved after setting 'target_max_bytes' for the cache pool:
$ ceph osd pool set cache target_max_bytes 1840
Because setting only 'cache_target_full_ratio' to 0.7 is not
sufficient for the cache tiering agent, i
Hi,
you need to set the max dirty bytes and/or max dirty objects, as these two
parameters default to 0 for your cache pool:
ceph osd pool set cache target_max_objects x
ceph osd pool set cache target_max_bytes x
The ratios you already set (dirty_ratio = 0.4 and full_ratio = 0.7) will be
applie
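To make the interaction concrete (numbers below are only an example): the
ratios are applied relative to target_max_bytes / target_max_objects, so with
ceph osd pool set cache target_max_bytes 200000000000   # 200 GB cap, illustrative
a dirty_ratio of 0.4 means flushing starts around 80 GB of dirty data, and a
full_ratio of 0.7 means eviction starts around 140 GB of data in the cache pool.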
Hi, folks! I'm testing a cache tier for an erasure coded pool with an
RBD image on it. And now I'm facing a problem: the cache pool is full
and objects are not evicted automatically, only if I run manually
rados -p cache cache-flush-evict-all
The client side is:
superuser@share:~$ uname -a