Thanks Jeff.
If I set Attr_Expiration_Time to zero in the config, does that mean the timeout
is zero? If so, every client will see changes immediately. Will that hurt
performance badly?
It seems that the GlusterFS FSAL uses UPCALL to invalidate the cache. How
about the CephFS FSAL?
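For reference, a minimal sketch of the setting in question (the block name is an
assumption; it has been CACHEINODE in older Ganesha releases and MDCACHE in newer
ones, so verify it for your version):

    # ganesha.conf -- sketch only; check the block name for your Ganesha version
    MDCACHE {
        # 0 means cached attributes expire immediately, so clients revalidate
        # against the FSAL on every access
        Attr_Expiration_Time = 0;
    }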
On Thu, Feb 14, 2019
On 2/14/19 2:08 PM, Dan van der Ster wrote:
> On Thu, Feb 14, 2019 at 12:07 PM Wido den Hollander wrote:
>>
>>
>>
>> On 2/14/19 11:26 AM, Dan van der Ster wrote:
>>> On Thu, Feb 14, 2019 at 11:13 AM Wido den Hollander wrote:
On 2/14/19 10:20 AM, Dan van der Ster wrote:
> On Thu.,
The metadata paste of osd.73 in my previous message was wrong; here is the correct one.
{
"id": 73,
"arch": "x86_64",
"back_addr": "10.10.10.6:6804/175338",
"back_iface": "vlan3",
"bluefs": "1",
"bluefs_db_access_mode": "blk",
"bluefs_db_block_size": "4096",
"bluefs_db_dev": "259:22",
"bluefs_d
Hi,
Most of my OSDs use part of the slow storage for RocksDB, but some do not.
I investigated this and think it is because they are the oldest BlueStore
OSDs in this cluster.
I figured this out from the /var/lib/osd/ creation date; I don't know whether
it is possible to determine the real OSD creation date from osd
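A quick way to check which OSDs actually report a separate DB device is to grep
the OSD metadata for the bluefs_db fields shown in the paste above (a sketch;
osd.73 is just the example id):

    # dump the metadata of one OSD and look at the bluefs DB fields
    ceph osd metadata 73 | grep bluefs_db

    # or scan all OSDs at once
    ceph osd metadata | grep -E '"id"|bluefs_db_dev'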
Thanks. I read your reply in
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg48717.html
so using indep will cause less data remapping when an OSD fails.
Using firstn: 1, 2, 3, 4, 5 -> 1, 2, 4, 5, 6, i.e. 60% of the data is remapped.
Using indep: 1, 2, 3, 4, 5 -> 1, 2, 6, 4, 5, i.e. 20% of the data is remapped.
Am
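For reference, the difference being compared here is just the choose step in the
CRUSH rule (a sketch in decompiled crushmap syntax; rule names, ids and sizes are
placeholders):

    # replicated pools normally use firstn
    rule replicated_firstn {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

    # erasure-coded pools normally use indep, which keeps surviving shards
    # in their positions and only remaps the failed one
    rule ec_indep {
        id 2
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step take default
        step chooseleaf indep 0 type host
        step emit
    }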
Hi Marc,
You can see previous designs on the Ceph store:
https://www.proforma.com/sdscommunitystore
--
Mike Perez (thingee)
On Fri, Jan 18, 2019 at 12:39 AM Marc Roos wrote:
>
>
> Is there an overview of previous tshirts?
>
>
> -Original Message-
> From: Anthony D'Atri [mailto:a...@dre
Reminder that the early bird rate ends tomorrow. If you are proposing a
talk, please still register and we can issue you a refund if your talk
is accepted. We will coordinate the early bird and CFP acceptance dates
better for future events.
https://ceph.com/cephalocon/barcelona-2019/
--
Mike Perez (thing
Do you see anything in the kernel logs for the disk in question around the
same time as the error?
Are you getting this randomly or just on this particular OSD?
There was a bug with newer kernels and high memory pressure causing read
errors; however, it was mostly fixed in 12.2.10 but making a re
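A sketch of the kind of kernel-log check meant here (sdX and the time window are
placeholders):

    # look for I/O or medium errors on the suspect device
    dmesg -T | grep -iE 'sdX|i/o error|medium error'
    # or via the journal, limited to kernel messages around the incident
    journalctl -k --since "1 hour ago" | grep -i sdX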
On Thu, 14 Feb 2019, John Petrini wrote:
> Cost and available disk slots are also worth considering since you'll
> burn a lot more by going RAID-1, which again really isn't necessary.
> This may be the most convincing reason not to bother.
Generally speaking, if the choice is between a 2 RAID-1 SS
Sure, will do this. For now I have increased the size to 30G (from 15G).
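Assuming the threshold being raised here is mon_data_size_warn (its default is
15 GiB), the change would look roughly like this in ceph.conf:

    [mon]
        # warn at ~30 GiB instead of the 15 GiB default (value is in bytes)
        mon data size warn = 32212254720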
On Thu, Feb 14, 2019 at 7:39 PM Sage Weil wrote:
>
> On Thu, 7 Feb 2019, Dan van der Ster wrote:
> > On Thu, Feb 7, 2019 at 12:17 PM M Ranga Swami Reddy
> > wrote:
> > >
> > > Hi Dan,
> > > >During backfilling scenarios, the mon
You can but it's usually not recommended. When you replace a failed
disk the RAID rebuild is going to drag down the performance of the
remaining disk and subsequently all OSDs that are backed by it. This
can hamper the performance of the entire cluster. You could probably
tune rebuild priority in
On Thu, 7 Feb 2019, Dan van der Ster wrote:
> On Thu, Feb 7, 2019 at 12:17 PM M Ranga Swami Reddy
> wrote:
> >
> > Hi Dan,
> > >During backfilling scenarios, the mons keep old maps and grow quite
> > >quickly. So if you have balancing, pg splitting, etc. ongoing for
> > >awhile, the mon stores wil
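For anyone following along, a quick way to watch how large a mon store actually
gets (assuming the default mon data path):

    # run on a monitor host; substitute the mon id if it is not the short hostname
    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db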
On Thu, Feb 14, 2019 at 12:07 PM Wido den Hollander wrote:
>
>
>
> On 2/14/19 11:26 AM, Dan van der Ster wrote:
> > On Thu, Feb 14, 2019 at 11:13 AM Wido den Hollander wrote:
> >>
> >> On 2/14/19 10:20 AM, Dan van der Ster wrote:
> >>> On Thu., Feb. 14, 2019, 6:17 a.m. Wido den Hollander
>
Hello - Can we use the Ceph OSD journal disks in RAID-1 to achieve HA for
the journal disks?
Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On Thu, 2019-02-14 at 20:57 +0800, Marvin Zhang wrote:
> Here is the copy from https://tools.ietf.org/html/rfc7530#page-40
> Will the client query the 'change' attribute every time before reading, to
> know whether the data has changed?
>
> +--------+----+------------+------+-----
Here is the copy from https://tools.ietf.org/html/rfc7530#page-40
Will the client query the 'change' attribute every time before reading, to
know whether the data has changed?
+--------+----+------------+------+-------------------+
| Name   | ID | Data Type  | Acc  | Defined in        |
+--------+----+------------+------+-------------------+
Hi,
I am quite new to Ceph and am just trying to set up a Ceph cluster. Initially
I used ceph-deploy for this, but when I tried to create a BlueStore OSD,
ceph-deploy failed. Next I tried the direct way on one of the OSD nodes,
using ceph-volume to create the OSD, but this also fails. Below you can
see what
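For comparison, the usual direct ceph-volume invocation for a BlueStore OSD looks
roughly like this (a sketch; /dev/sdb is a placeholder device):

    # run on the OSD node as root
    ceph-volume lvm create --bluestore --data /dev/sdb

    # or split into two steps
    ceph-volume lvm prepare --bluestore --data /dev/sdb
    ceph-volume lvm activate --all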
On Thu, 2019-02-14 at 19:49 +0800, Marvin Zhang wrote:
> Hi Jeff,
> Another question is about client caching when delegation is disabled.
> I set a breakpoint on nfs4_op_read, which is the OP_READ processing function
> in nfs-ganesha. Then I read a file and found that it is hit only once, on
> the first read,
Yes and no... BlueStore does not seem to work really optimally. For example,
it has no FileStore-like journal watermarking and flushes the deferred
write queue only every 32 writes (deferred_batch_ops). And when it does
that, it basically waits for the HDD to commit, slowing down all
further writes.
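The knobs mentioned above can be checked on a running OSD via its admin socket
(a sketch; osd.0 is a placeholder id):

    # show the deferred-write settings currently in effect
    ceph daemon osd.0 config show | grep bluestore_deferred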
Hi Jeff,
Another question is about client caching when delegation is disabled.
I set a breakpoint on nfs4_op_read, which is the OP_READ processing function in
nfs-ganesha. Then I read a file and found that it is hit only once, on
the first read, which means later read operations on this file will
not trigg
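For context, the breakpoint described above can be set roughly like this (a
sketch; assumes ganesha.nfsd was built with debug symbols):

    # attach to the running Ganesha daemon and break on the NFSv4 READ handler
    gdb -p $(pidof ganesha.nfsd)
    (gdb) break nfs4_op_read
    (gdb) continue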
On Thu, 2019-02-14 at 10:35 +0800, Marvin Zhang wrote:
> On Thu, Feb 14, 2019 at 8:09 AM Jeff Layton wrote:
> > > Hi,
> > > As http://docs.ceph.com/docs/master/cephfs/nfs/ says, it's OK to
> > > configure active/passive NFS-Ganesha to use CephFS. My question is whether
> > > we can use active/active nfs-g
On 2/14/19 11:26 AM, Dan van der Ster wrote:
> On Thu, Feb 14, 2019 at 11:13 AM Wido den Hollander wrote:
>>
>> On 2/14/19 10:20 AM, Dan van der Ster wrote:
>>> On Thu., Feb. 14, 2019, 6:17 a.m. Wido den Hollander wrote:
Hi,
On a cluster running RGW only I'm running into BlueStore 1
On Thu, Feb 14, 2019 at 11:13 AM Wido den Hollander wrote:
>
> On 2/14/19 10:20 AM, Dan van der Ster wrote:
> > On Thu., Feb. 14, 2019, 6:17 a.m. Wido den Hollander wrote:
> >> Hi,
> >>
> >> On a cluster running RGW only I'm running into BlueStore 12.2.11 OSDs
> >> being 100% busy sometimes.
> >>
> >
On 2/14/19 10:20 AM, Dan van der Ster wrote:
> On Thu., Feb. 14, 2019, 6:17 a.m. Wido den Hollander wrote:
>> Hi,
>>
>> On a cluster running RGW only I'm running into BlueStore 12.2.11 OSDs
>> being 100% busy sometimes.
>>
>> This cluster has 85k stale indexes (stale-instances list) and I've been
>>
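The stale-instances handling referred to here is the radosgw-admin subcommand
that appeared in 12.2.11 (a sketch; review the list before removing anything,
and check the release notes first if you run multisite):

    # list bucket index instances left behind by resharding
    radosgw-admin reshard stale-instances list
    # remove them once you are satisfied the list is correct
    radosgw-admin reshard stale-instances rm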
While we're at it, it would be nice to have a way to know what one can
remove from the default.rgw...non-ec pool. We have tons of old zero-size
objects there which are probably useless and just take up (meta)space.
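In the meantime the pools can at least be inspected to see what is accumulating
(a sketch; default zone pool names assumed):

    # per-pool object counts and sizes
    rados df | grep rgw
    # sample what is actually sitting in the log pool
    rados -p default.rgw.log ls | head -20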
On Thu, 14 Feb 2019 at 09:26, Charles Alva wrote:
> Hi All,
>
> Is there a way to trim Ceph default.rgw.lo
Hi All,
Is there a way to trim the Ceph default.rgw.log pool so it won't take up huge
space? Or perhaps is there a logrotate-like mechanism in place?
Kind regards,
Charles Alva
Sent from Gmail Mobile