Hi Marc,
On 04/21/2018 11:34 AM, Marc Roos wrote:
> I wondered if there are faster ways to copy files to and from a bucket,
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster
> than s3cmd?
I have doubts that putting another layer on top of S3 will make it
faster than
On Sat, 21 Apr 2018, Marc Roos said:
>
> I wondered if there are faster ways to copy files to and from a bucket,
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster
> than s3cmd?
I find the go-based S3 clients, e.g. rclone and minio mc, are a bit faster than the
python-based
I have problems exporting a bucket that really does exist. I have tried
Path = "/test:test3"; Path = "/test3";
This results in ganesha not starting, with the message
ExportId=301 Path=/test:test3 FSAL_ERROR=(Invalid object type,0)
If I use path=/ I can mount something, but that is an empty export, but
cannot
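For reference, a minimal RGW FSAL export block generally looks something
like the sketch below; the Export_Id, bucket path, user and keys are
placeholders, not values from this setup:

EXPORT {
    Export_Id = 301;
    Path = "/test3";                 # bucket path (placeholder)
    Pseudo = "/test3";               # NFS pseudo path clients mount (placeholder)
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;

    FSAL {
        Name = RGW;
        User_Id = "testuser";                # placeholder RGW user
        Access_Key_Id = "ACCESSKEY";         # placeholder
        Secret_Access_Key = "SECRETKEY";     # placeholder
    }
}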
On Fri, Apr 20, 2018 at 9:32 AM, Sean Purdy wrote:
> Just a quick note to say thanks for organising the London Ceph/OpenStack day.
> I got a lot out of it, and it was nice to see the community out in force.
+1, thanks to Wido and the ShapeBlue guys for a great event, and to
everyone for coming
Hello folks,
We are running radosgw(Luminous) with Swift API enabled. We observed
that after updating an object the "hash" and "content_type"
fields were concatenated with "\u000".
Steps to reproduce the issue:
[1] Create a container(Swift nomenclature)
[2] Upload a file with "swift --debug
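A rough sketch of that sequence (the container/object names and auth
variables below are placeholders):

swift post testcontainer                        # [1] create the container
swift upload testcontainer hello.txt            # [2] upload an object
swift post testcontainer hello.txt -H "X-Object-Meta-Color: blue"   # [3] update the object
# the "hash"/"content_type" fields show up in the container listing:
curl -s -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/testcontainer?format=json"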
I'm starting to get a small Ceph cluster running. I'm to the point where
I've created a pool, and stored some test data in it, but I'm having
trouble configuring the level of replication that I want.
The goal is to have two OSD host nodes, each with 20 OSDs. The target
replication will be:
o
Hello everyone,
I am in the process of designing a Ceph cluster that will contain
only SSD OSDs, and I was wondering how I should size my CPUs.
The cluster will only be used for block storage.
The OSDs will be Samsung PM863 (2TB or 4TB; this will be determined
when we decide on the total capacity
Hi,
this doesn't sound like a good idea: two hosts is usually a poor
configuration for Ceph.
Also, fewer disks on more servers is typically better than lots of disks in
few servers.
But to answer your question: you could use a crush rule like this:
min_size 4
max_size 4
step take default
step ch
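Spelled out in full, a rule along those lines would look roughly like this
(the rule name and id are arbitrary); it picks both hosts and then two OSDs
on each, giving 4 copies total, 2 per host:

rule replicated_2x2 {
        id 2                               # any unused rule id
        type replicated
        min_size 4
        max_size 4
        step take default
        step choose firstn 2 type host     # pick 2 hosts
        step chooseleaf firstn 2 type osd  # then 2 OSDs on each host
        step emit
}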
That seems to have worked. Thanks much!
And yes, I realize my setup is less than ideal, but I'm planning on
migrating from another storage system, and this is the hardware I have
to work with. I'll definitely keep your recommendations in mind when I
start to grow the cluster.
On 04/23/2018 1
I forgot to add some information which is critical here:
On 23/04/18 4:50 PM, Syed Armani wrote:
> Hello folks,
>
> We are running radosgw(Luminous) with Swift API enabled. We observed
> that after updating an object the "hash" and "content_type"
> fields were concatenated with "\u000".
>
> Ste
I believe that Luminous has an ability like this, as you can specify how
many objects you anticipate a pool will have when you create it. However, if
you're creating pools in Luminous, you're probably using bluestore. For
Jewel and before, pre-splitting PGs doesn't help as much as you'd think.
As so
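For illustration, the expected object count goes on the end of the pool
create command; the pool name, PG counts and estimate below are made-up
examples:

# last argument is expected_num_objects (example value only)
ceph osd pool create mypool 128 128 replicated replicated_rule 500000000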
If your cluster needs both datacenters to operate, then I wouldn't really
worry about where your active MDS is running. OTOH, if you're set on having
the active MDS be in 1 DC or the other, you could utilize some external
scripting to see if the active MDS is in DC #2 while an MDS for DC #1 is in
s
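A rough sketch of that kind of check (the dc1-* MDS naming convention below
is an assumption, and the text parsing is fragile):

#!/bin/sh
# sketch only: assumes MDS daemons in DC #1 are named dc1-*, a made-up convention
ACTIVE=$(ceph mds stat | sed -n 's/.*{0=\([^=]*\)=up:active}.*/\1/p')
[ -n "$ACTIVE" ] || exit 0               # could not parse an active MDS, do nothing
case "$ACTIVE" in
    dc1-*) ;;                            # active MDS already in DC #1, nothing to do
    *) ceph mds fail "$ACTIVE" ;;        # fail it so a standby (ideally in DC #1) takes over
esac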
When figuring out why space is not freeing up after deleting buckets and
objects in RGW, look towards the RGW Garbage Collection. This has come up
on the ML several times in the past. I am almost finished catching up on a
GC of 200 Million objects that was taking up a substantial amount of space
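To look at it and drive it by hand, something along these lines (the grep is
only a rough count of pending chunks):

radosgw-admin gc list --include-all | grep -c oid   # rough count of objects waiting for GC
radosgw-admin gc process                            # run a GC pass now instead of waiting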
From my experience, Luminous now only uses a .users pool and not the
.users.etc pools. I agree that this could be better documented for
configuring RGW. I don't know the full list. Before you go and delete any
pools, make sure to create users, put data in the cluster, and check that no
objects exist in
Mimic (and higher) contain a new async gc mechanism, which should
handle this workload internally.
Matt
On Mon, Apr 23, 2018 at 2:55 PM, David Turner wrote:
> When figuring out why space is not freeing up after deleting buckets and
> objects in RGW, look towards the RGW Garbage Collection. This
If you can move away from having a non-default cluster name, do that. It's
honestly worth the hassle if it's early enough in your deployment.
Otherwise you'll end up needing to symlink a lot of things to the default
ceph name. Back when it was supported, we still needed to have
/etc/ceph/ceph.con
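For example, the sort of symlinks that end up being needed, with "mycluster"
standing in for the non-default cluster name:

ln -s /etc/ceph/mycluster.conf /etc/ceph/ceph.conf                                  # placeholder name
ln -s /etc/ceph/mycluster.client.admin.keyring /etc/ceph/ceph.client.admin.keyring  # placeholder name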
Thanks, yeah we will move away from it. Sadly, this is one of many
little- (or non-) documented things that have made adapting Ceph for
large-scale use a pain. Hopefully it will be worth it.
On Mon, Apr 23, 2018 at 4:25 PM, David Turner wrote:
> If you can move away from having a non-default
Hi all,
I am building a new cluster that will be using Luminous, Filestore, NVMe
journals and 10k SAS drives.
Is there a way to estimate proper values for:
filestore_queue_max_bytes
filestore_queue_max_ops
journal_max_write_bytes
journal_max_write_entries
or is it a matter of testing and trial
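For reference, those settings live under [osd] in ceph.conf; the numbers
below are placeholders to show the format, not tuning recommendations:

[osd]
# illustrative values only, to be validated by benchmarking
filestore queue max ops = 500
filestore queue max bytes = 104857600      # 100 MB
journal max write entries = 1000
journal max write bytes = 1073741824       # 1 GB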
Hello,
On Mon, 23 Apr 2018 17:43:03 +0200 Florian Florensa wrote:
> Hello everyone,
>
> I am in the process of designing a Ceph cluster, that will contain
> only SSD OSDs, and I was wondering how should I size my cpu.
Several threads about this around here, but first things first.
Any specifics
On 04/23/2018 09:24 PM, Christian Balzer wrote:
>
>> If anyone has some ideas/thoughts/pointers, I would be glad to hear them.
>>
> RAM, you'll need a lot of it, even more with Bluestore given the current
> caching.
> I'd say 1GB per TB storage as usual and 1-2GB extra per OSD.
Does that still
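(As a rough worked example of that rule of thumb: a 4TB SSD OSD would want
about 4GB plus 1-2GB, i.e. 5-6GB per OSD, so a node with 10 such OSDs lands
somewhere around 50-60GB of RAM before OS overhead.)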
On 04/23/2018 12:09 PM, John Spray wrote:
> On Fri, Apr 20, 2018 at 9:32 AM, Sean Purdy wrote:
>> Just a quick note to say thanks for organising the London Ceph/OpenStack
>> day. I got a lot out of it, and it was nice to see the community out in
>> force.
>
> +1, thanks for Wido and the Shap