On Mon, 12 Nov 2018 at 06:19, 대무무 wrote:
>
> Hello.
> I installed the Ceph framework on 6 servers and I want to manage the user access
> log, so I configured ceph.conf on the server which is running the RGW.
>
> ceph.conf
> [client.rgw.~~~]
> ...
> rgw enable usage log = True
>
> However, I c
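For reference, once "rgw enable usage log = true" is active and the RGW is
restarted, the per-user records can be read back with radosgw-admin; a minimal
sketch (the uid and dates below are placeholders):

  # show usage for one user within a time window
  radosgw-admin usage show --uid=testuser --start-date=2018-11-01 --end-date=2018-11-12
  # summary for all users
  radosgw-admin usage show --show-log-entries=false
  # trim old entries once they are archived elsewhere
  radosgw-admin usage trim --uid=testuser --end-date=2018-11-01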
Hi list,
Having finished our adventures with Infernalis we're now finally running
Jewel (10.2.11) on all Ceph nodes. Woohoo!
However, there are still KVM production boxes with block-rbd.so being
linked to librados 0.94.10, which is Hammer.
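For reference, which librados a given box actually uses can be checked with
ldd; the block-rbd.so path below is just an example and differs per
distribution:

  ldd /usr/lib/x86_64-linux-gnu/qemu/block-rbd.so | grep librados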
Current relevant status parts:
health HEALTH_WA
On 10/11/2018 06:35, Gregory Farnum wrote:
Yes, do that; don't try to back up your monitor. If you restore a
monitor from backup then the monitor — your authoritative data source —
will warp back in time on what the OSD peering intervals look like,
which snapshots have been deleted and created
Hi,
We are planning to build a NAS solution which will primarily be used via NFS
and CIFS, with workloads ranging from various archival applications to more
“real-time processing”. The NAS will not be used as block storage for
virtual machines, so the access really will always be file oriented.
We a
Hi,
I'm trying to set up the Influx plugin
(http://docs.ceph.com/docs/mimic/mgr/influx/). The docs say that it will be
available in the Mimic release, but I can see (and enable) it in current
Luminous. It seems that someone else actually used it in Luminous
(http://lists.ceph.com/pipermail/ceph-us
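For reference, getting it running on Luminous seems to boil down to enabling
the mgr module and feeding it the connection settings via config-key; a rough
sketch (hostname, database and credentials are placeholders, and the exact key
names are from memory):

  ceph mgr module enable influx
  ceph config-key set mgr/influx/hostname influxdb.example.com
  ceph config-key set mgr/influx/database ceph
  ceph config-key set mgr/influx/username ceph
  ceph config-key set mgr/influx/password secret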
On 11/12/18 12:54 PM, mart.v wrote:
> Hi,
>
> I'm trying to set up the Influx plugin
> (http://docs.ceph.com/docs/mimic/mgr/influx/). The docs say that it
> will be available in the Mimic release, but I can see (and enable) it in
> current Luminous. It seems that someone else actually used it in
> Lu
Hi!
ZFS won't play nicely on Ceph. Best would be to mount CephFS directly with
the ceph-fuse driver on the endpoint.
If you definitely want to put a storage gateway between the data and the
compute nodes, then go with nfs-ganesha which can export CephFS directly
without a local ("proxy") mount.
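A minimal nfs-ganesha export for that looks roughly like the following
(assuming the CephFS FSAL is installed; the export id and pseudo path are
arbitrary):

  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;
      }
  }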
I had
We've done ZFS on RBD in a VM, exported via NFS, for a couple of years.
It's very stable and if your use-case permits you can set zfs
sync=disabled to get very fast write performance that's tough to beat.
But if you're building something new today and have *only* the NAS
use-case then it would make b
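The knob itself is just a dataset property, e.g. (the pool/dataset name is a
placeholder):

  # trade write safety for latency on the exported dataset
  zfs set sync=disabled tank/nfs-export
  # verify
  zfs get sync tank/nfs-export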
Hi Dan,
ZFS without sync would be pretty much identical to ext2/ext4 without journals
or XFS with barriers disabled.
The ARC cache in ZFS is awesome, but disabling sync on ZFS is a very high
risk (using ext4 with the KVM cache mode "unsafe" would be similar, I think).
Also, ZFS only works as expected with schedu
On Mon, Nov 12, 2018 at 3:53 PM Felix Stolte wrote:
>
> Hi folks,
>
> is anybody using cephfs with snapshots on luminous? Cephfs snapshots are
> declared stable in mimic, but I'd like to know about the risks using
> them on luminous. Do I risk a complete cephfs failure or just some not
> working s
>>
>> is anybody using cephfs with snapshots on luminous? Cephfs snapshots
>> are declared stable in mimic, but I'd like to know about the risks
>> using them on luminous. Do I risk a complete cephfs failure or just
>> some not working snapshots? It is one namespace, one fs, one data and
>>
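For what it's worth, on Luminous snapshots still have to be switched on
explicitly per filesystem before they can be created; roughly (the fs name and
paths are placeholders, and if I remember correctly older releases also want an
extra confirmation flag):

  ceph fs set cephfs allow_new_snaps true
  # snapshots are then created/removed through the .snap directory
  mkdir /mnt/cephfs/somedir/.snap/before-cleanup
  rmdir /mnt/cephfs/somedir/.snap/before-cleanup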
Hi Kevin,
I should have also said that we are internally inclined towards the
"monster VM" approach due to its seemingly simpler architecture (data
distribution on the block layer rather than on the file system layer). So my
original question is more about comparing the two approaches (distribution
on block
My 2 cents: it depends on how much HA you need.
Going with the monster VM you have a single point of failure and a single
point of network congestion.
If you go the CephFS route you remove that single point of failure by
mounting on the clients directly, and you also remove that single point of networ
Some kind of single point will always be there, I guess, because even if we
go with the distributed filesystem, it will be mounted to the access VM and
this access VM will be providing NFS/CIFS protocol access. So this machine
is a single point of failure (indeed we would be running two of them for
ac
Does your use case mean you need something like NFS/CIFS and can’t use a
CephFS mount directly?
There have been quite a few advances in that area with quotas and user
management in recent versions.
But obviously it all depends on your use case at the client end.
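The quota side, for instance, is now just extended attributes on a directory
of the mounted filesystem; a quick sketch (path and limits are placeholders):

  # limit a directory tree to ~100 GB and 1M files
  setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/tenant-a
  setfattr -n ceph.quota.max_files -v 1000000 /mnt/cephfs/tenant-a
  # check the current setting
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/tenant-a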
On Mon, 12 Nov 2018 at 10:51 PM, Premysl Kouril
Yes, the access VM layer is there because of multi-tenancy - we need to
provide parts of the storage to different private environments (potentially
on private IP addresses). And we need both - NFS as well as CIFS.
On Mon, Nov 12, 2018 at 3:54 PM Ashley Merrick
wrote:
> Does your use cas
Hi,
Is it identical?
In the places we use sync=disabled (e.g. analysis scratch areas),
we're totally content with losing the last X seconds/minutes of writes,
with the understanding that on-disk consistency is not impacted.
Cheers, Dan
On Mon, Nov 12, 2018 at 3:16 PM Kevin Olbrich wrote:
>
> Hi Dan,
>
> ZFS w
Is anyone else seeing this?
I have just set up another cluster to check on completely different hardware,
and everything is still running EC.
And I am still getting inconsistent PGs flagged after an automatic deep scrub,
which can be fixed by just running another deep-scrub.
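For reference, the commands involved are roughly (the pg id is a placeholder):

  # list the affected PGs
  ceph health detail | grep inconsistent
  # show what the scrub actually found
  rados list-inconsistent-obj 2.1f --format=json-pretty
  # re-run the deep scrub on that PG
  ceph pg deep-scrub 2.1f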
On Thu, 8 Nov 2018 at 4:23 PM, Ashley Mer
Maybe you are hitting the kernel bug worked around by
https://github.com/ceph/ceph/pull/23273
-- Jonas
On 12/11/2018 16.39, Ashley Merrick wrote:
> Is anyone else seeing this?
>
> I have just set up another cluster to check on completely different hardware,
> and everything is still running EC.
>
Thanks, that does look like it ticks all the boxes.
As it’s been merged I’ll hold off till the next release rather than rebuilding
from source. From what I can see it won’t cause an issue beyond just
re-running the deep-scrub manually, which is what the fix is basically doing
(but isolated to just the fai
Hello,
The documentation mentions that in order to integrate RGW with Keystone, we
need to supply an admin user.
We are using the S3 API only and don't require OpenStack integration, except
for Keystone.
We can make authentication requests to Keystone without requiring an admin
token (POST v3/s3tokens
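For context, the documented setup is along these lines (URL, credentials and
domain/project names are placeholders), and it is the admin user/password pair
that we would like to avoid having to configure:

  rgw keystone url = https://keystone.example.com:5000
  rgw keystone api version = 3
  rgw keystone admin user = rgw
  rgw keystone admin password = secret
  rgw keystone admin domain = Default
  rgw keystone admin project = service
  rgw s3 auth use keystone = true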
Hi again,
I just read (and reread, and again) the chapter of the Ceph Cookbook on
upgrades and
http://docs.ceph.com/docs/jewel/rados/operations/crush-map/#tunables and
figured there's a way back if needed.
The sortbitwise flag is set (re-peering was almost instant) and the tunables
are set to "hammer".
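For the record, the commands behind those two steps are just:

  ceph osd set sortbitwise
  ceph osd crush tunables hammer
  # and to inspect the result
  ceph osd crush show-tunables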
There's a
Is it possible to search the mailing list archives?
http://lists.ceph.com/pipermail/ceph-users-ceph.com/
seems to have a search function, but in my experience it never finds anything.
--
Bryan Henderson San Jose, California
Hi ceph-users,
If you’re in Dallas for SC18, please join us for the Ceph Community BoF, Ceph
Applications in HPC Environments.
It’s tomorrow night, from 5:15-6:45 PM Central. See below for all the details!
https://sc18.supercomputing.org/presentation/?id=bof103&sess=sess364
Cheers,
—Doug
This one I am using:
https://www.mail-archive.com/ceph-users@lists.ceph.com/
On Nov 12, 2018 10:32 PM, Bryan Henderson wrote:
>
> Is it possible to search the mailing list archives?
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/
>
> seems to have a search function, but in my experien
Hi,
I have been reading up on this a bit, and found one particularly useful mailing
list thread [1].
The fact that there is such a large jump when your DB fits into 3 levels (30GB)
vs 4 levels (300GB) makes it hard to choose SSDs of an appropriate size. My
workload is all RBD, so objects shoul
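For what it's worth, the 30 GB / 300 GB figures fall out of the RocksDB
defaults BlueStore ships with (max_bytes_for_level_base = 256 MB, level
multiplier = 10), if I have those defaults right:

  L1  ~ 256 MB
  L2  ~ 2.56 GB
  L3  ~ 25.6 GB   -> L1+L2+L3 ~ 28 GB, hence the ~30 GB step
  L4  ~ 256 GB    -> L1..L4   ~ 284 GB, hence the ~300 GB step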