You can still use EC for CephFS without a cache tier since you are using
Luminous. This is new functionality introduced with Luminous, while the
majority of guides you will see are for setups on Jewel and older versions
of Ceph. Here are the docs on this, including how to do it.
http://docs.
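(For reference, the usual Luminous-era recipe looks roughly like the
following; this is only a sketch with placeholder pool, filesystem and
directory names, not the exact steps from the docs above. Note that
allow_ec_overwrites requires BlueStore OSDs.)
# ceph osd pool create cephfs_ec_data 128 128 erasure
# ceph osd pool set cephfs_ec_data allow_ec_overwrites true
# ceph osd pool application enable cephfs_ec_data cephfs
# ceph fs add_data_pool <fs_name> cephfs_ec_data
# setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/ecdir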
Here is a quick update. I found that a CephFS client process was accessing
the big 1TB file and, I suspect, held a lock on it, preventing objects from
being flushed to the underlying data pool. Once I killed that process,
objects started to flush to the data pool automatically (with
target_max_byt
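(If it helps anyone following along: a quick way to confirm the flush is
actually progressing, using the pool names from this thread, is to watch
the underlying data pool and the pool stats. Just a suggestion, not from
the docs:)
# rados -p cephfs_data ls | head
# ceph df detail
# ceph osd pool stats cephfs_cache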
Hello Chad,
On Fri, Oct 6, 2017 at 10:01 AM, Chad William Seys
wrote:
> Thanks John! I see that a pool can have more than one "application". Should
> I feel free to combine uses (e.g. cephfs, rbd) or is this contraindicated?
That's not currently possible but we are thinking about changes which
All of this data is test data, yeah? I would start by removing the
cache tier and pool, recreating and reattaching them, configuring all of
the settings including the maximums (sketched below), and then start
testing again. I would avoid doing the 1.3TB file test until after
you've confirmed that the
smaller files are
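Roughly, that teardown and rebuild could look like the sketch below. Pool
names follow this thread, the sizes and ratios are examples only, and
deleting a pool also requires mon_allow_pool_delete to be set to true.
# ceph osd tier cache-mode cephfs_cache none
# ceph osd tier remove-overlay cephfs_data
# ceph osd tier remove cephfs_data cephfs_cache
# ceph osd pool delete cephfs_cache cephfs_cache --yes-i-really-really-mean-it
# ceph osd pool create cephfs_cache 128
# ceph osd tier add cephfs_data cephfs_cache
# ceph osd tier cache-mode cephfs_cache writeback
# ceph osd tier set-overlay cephfs_data cephfs_cache
# ceph osd pool set cephfs_cache hit_set_type bloom
# ceph osd pool set cephfs_cache target_max_bytes 1099511627776
# ceph osd pool set cephfs_cache target_max_objects 1000000
# ceph osd pool set cephfs_cache cache_target_dirty_ratio 0.4
# ceph osd pool set cephfs_cache cache_target_full_ratio 0.8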
Thanks John! I see that a pool can have more than one "application".
Should I feel free to combine uses (e.g. cephfs, rbd) or is this
contraindicated?
Thanks!
Chad.
Just to firm this up a bit...
In the future, you may find that things stop working if you remove the
application tags.
For e
This point release brings a number of important bugfixes in all major
components of Ceph; we recommend that all Jewel 10.2.x users upgrade.
For more details, check out the release notes entry on the Ceph blog:
http://ceph.com/releases/v10-2-10-jewel-released/
Notable Changes
-
* build/
Hi,
Now that ceph-volume is part of the Luminous release, we've been able
to provide filestore support for LVM-based OSDs. We are making use of
LVM's powerful mechanisms to store metadata, which allows the process
to no longer rely on UDEV and GPT labels (unlike ceph-disk).
Bluestore support shoul
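As a rough example of what an LVM-based filestore OSD looks like with
ceph-volume (the volume group, logical volume and journal device below
are placeholders, and this is a sketch rather than the full documented
workflow):
# ceph-volume lvm create --filestore --data vg_osd/lv_data --journal /dev/sdj1
# ceph-volume lvm list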
On Fri, Oct 6, 2017 at 5:01 PM, Richard Hesketh
wrote:
> When I try to run the command "ceph osd status" on my cluster, I just get an
> error. Luckily, unlike the last issue I had with ceph fs commands, it doesn't
> seem to be crashing any of the daemons.
>
> root@vm-ds-01:/var/log/ceph# ceph osd
Curiously, it has been quite a while, but there is still no object in the
underlying data pool:
# rados -p cephfs_data ls
Any advice?
On Fri, Oct 6, 2017 at 9:45 AM, David Turner wrote:
> Notice in the URL for the documentation the use of "luminous". When you
> looked a few weeks ago, you migh
On Fri, Oct 6, 2017 at 5:28 PM, Chad William Seys
wrote:
> Scrolled down a bit and found this blog post:
> https://ceph.com/community/new-luminous-pool-tags/
>
> If things haven't changed:
>
>>Could someone tell me / link to what associating a ceph pool to an
>> application does?
>
>
> ATM it'
Notice in the URL for the documentation the use of "luminous". When you
looked a few weeks ago, you might have been looking at the documentation
for a different version of Ceph. You can change that to jewel, hammer,
kraken, master, etc., depending on which version of Ceph you are running or
reading
Hi Christian,
I set those via CLI:
# ceph osd pool set cephfs_cache target_max_bytes 1099511627776
# ceph osd pool set cephfs_cache target_max_objects 100
but manual flushing doesn't appear to work:
# rados -p cephfs_cache cache-flush-evict-all
100046a.0ca6
it just gets stuck
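When cache-flush-evict-all stalls like this, one extra thing worth
checking (just a suggestion, not from the docs) is whether a CephFS
client still holds caps on the files backing those objects, e.g. on the
MDS host, and whether any flush/evict activity shows up at all:
# ceph daemon mds.<name> session ls
# ceph osd pool stats cephfs_cache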
On Fri, 6 Oct 2017 09:14:40 -0700 Shawfeng Dong wrote:
> I found the command: rados -p cephfs_cache cache-flush-evict-all
>
That's not what you want/need.
Though it will fix your current "full" issue.
> The documentation (
> http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/) has
Scrolled down a bit and found this blog post:
https://ceph.com/community/new-luminous-pool-tags/
If things haven't changed:
Could someone tell me / link to what associating a ceph pool to an
application does?
ATM it's a tag and does nothing to the pool/PG/etc structure
I hope this info
Hi All,
Could someone tell me / link to what associating a ceph pool to an
application does?
I hope this info includes why "Disabling an application within a pool
might result in loss of application functionality" when running 'ceph
osd application disable <pool> <app>'
Thanks!
Chad.
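For concreteness, the Luminous pool-application commands look roughly
like this (the pool name is a placeholder, and 'ceph osd pool ls detail'
shows which tags are currently set):
# ceph osd pool application enable <pool> cephfs
# ceph osd pool ls detail
# ceph osd pool application disable <pool> cephfs --yes-i-really-mean-it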
I found the command: rados -p cephfs_cache cache-flush-evict-all
The documentation (
http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/) has
been improved a lot since I last checked it a few weeks ago!
-Shaw
On Fri, Oct 6, 2017 at 9:10 AM, Shawfeng Dong wrote:
> Thanks, Luis.
>
On Fri, 6 Oct 2017 16:55:31 +0100 Luis Periquito wrote:
> Not looking at anything else, you didn't set the max_bytes or
> max_objects for it to start flushing...
>
Precisely!
He says, cackling, as he goes to cash in his bet. ^o^
> On Fri, Oct 6, 2017 at 4:49 PM, Shawfeng Dong wrote:
> > Dear a
Thanks, Luis.
I've just set max_bytes and max_objects:
target_max_objects: 100 (1M)
target_max_bytes: 1099511627776 (1TB)
but nothing appears to be happening. Is there a way to force flushing?
Thanks,
Shaw
On Fri, Oct 6, 2017 at 8:55 AM, Luis Periquito wrote:
> Not looking at anything els
When I try to run the command "ceph osd status" on my cluster, I just get an
error. Luckily, unlike the last issue I had with ceph fs commands, it doesn't
seem to be crashing any of the daemons.
root@vm-ds-01:/var/log/ceph# ceph osd status
Error EINVAL: Traceback (most recent call last):
File "/
Not looking at anything else, you didn't set the max_bytes or
max_objects for it to start flushing...
On Fri, Oct 6, 2017 at 4:49 PM, Shawfeng Dong wrote:
> Dear all,
>
> Thanks a lot for the very insightful comments/suggestions!
>
> There are 3 OSD servers in our pilot Ceph cluster, each with 2x
Dear all,
Thanks a lot for the very insightful comments/suggestions!
There are 3 OSD servers in our pilot Ceph cluster, each with 2x 1TB SSDs
(boot disks), 12x 8TB SATA HDDs and 2x 1.2TB NVMe SSDs. We use the
bluestore backend, with the first NVMe as the WAL and DB devices for OSDs
on the HDDs. A
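(For reference, an OSD laid out like that is typically created with
something like the line below; the volume and device names are
placeholders, and depending on the exact 12.2.x version --data may need
to be an existing LV rather than a raw disk:)
# ceph-volume lvm create --bluestore --data vg_hdd/osd0 --block.db /dev/nvme0n1p1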
On Thursday, 05 October 2017 at 21:52 +0200, Ilya Dryomov wrote:
> On Thu, Oct 5, 2017 at 6:05 PM, Olivier Bonvalet wrote:
> > On Thursday, 05 October 2017 at 17:03 +0200, Ilya Dryomov wrote:
> > > When did you start seeing these errors? Can you correlate that to
> > > a ceph or kernel upgrad
On Fri, Oct 6, 2017, 1:05 AM Christian Balzer wrote:
>
> Hello,
>
> On Fri, 06 Oct 2017 03:30:41 + David Turner wrote:
>
> > You're missing almost all of the important bits: what the OSDs in your
> > cluster look like, your tree, and your cache pool settings.
> >
> > ceph df
> > ceph osd df
> >
On 17-10-06 11:25, ulem...@polarzone.de wrote:
Hi,
again an update is available without release notes...
http://ceph.com/releases/v10-2-10-jewel-released/ isn't found.
No announcement on the mailing list (perhaps I missed something).
While I do not see a v10.2.10 tag in the repo yet, it looks like packag
On Thu, Oct 5, 2017 at 11:55 PM, Stefan Kooman wrote:
> Hi,
>
> The influx module made its appearance in Mimic. I'm eager to try it out on
> Luminous. So I went ahead and put the code [1] in /usr/lib/ceph/mgr/influx.
>
> I installed "python-influxdb" because it's a dependency (Ubuntu 16.0.
Hi,
again an update is available without release notes...
http://ceph.com/releases/v10-2-10-jewel-released/ isn't found.
No announcement on the mailing list (perhaps I missed something).
I know, normally it's safe to update Ceph, but two releases ago it
wasn't.
Udo
Quoting Wido den Hollander (w...@42on.com):
> > I want to suggest that we keep the older packages in the repo list.
> > They are on the mirrors anyway (../debian/pool/main/{c,r}/ceph/).
> They still are, aren't they?
> http://eu.ceph.com/debian-luminous/pool/main/c/ceph/
Well, yes. But the repos