On Wed, Jun 26, 2024 at 9:04 AM Daniel Gryniewicz <d...@redhat.com> wrote:
On 6/25/24 3:21 PM, Matthew Vernon wrote:
On 24/06/2024 21:18, Matthew Vernon wrote:
2024-06-24T17:33:26.880065+00:00 moss-be2001 ceph-mgr[129346]: [rgw
ERROR root] Non-zero return from ['radosgw-admin', '-k',
'/var/lib/ceph/mgr/ceph-moss-be2001.qvwcaq/keyring', '-n',
'mgr.moss-be2001.qvwcaq'
On 6/12/24 5:43 AM, Szabo, Istvan (Agoda) wrote:
Hi,
I wonder how radosgw knows that a transaction is done and that the
connection between the client and the gateway didn't break?
Let's look at one request:
2024-06-12T16:26:03.386+0700 7fa34c7f0700 1 beast: 0x7fa5bc776750: 1.1.1.1 - -
[202
On 12/13/23 05:27, Janne Johansson wrote:
Den ons 13 dec. 2023 kl 10:57 skrev Rok Jaklič :
Hi,
shouldn't the etag of a "parent" object change when "child" objects are
added in S3?
Example:
1. I add an object to test bucket: "example/" - size 0
"example/" has an etag XYZ1
2. I add an object
Since 1000 is the hard-coded limit in AWS, maybe you need to set
something on the client as well? "client.rgw" should work for setting
the config in RGW.
Daniel
On 5/18/23 03:01, Rok Jaklič wrote:
Thx for the input.
I tried several config sets e.g.:
ceph config set client.radosgw.mon2 rgw_d
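Something along these lines should do it (a sketch; the option name is my
guess at what the truncated "rgw_d..." above refers to, so verify it
against your release):

  # Assumed option: rgw_delete_multi_obj_max_num (my guess, not confirmed above).
  # "client.rgw" applies to all RGW daemons.
  ceph config set client.rgw rgw_delete_multi_obj_max_num 1000
  # Confirm the value the daemons will see:
  ceph config get client.rgw rgw_delete_multi_obj_max_num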
multi delete is inherently limited to 1000 per operation by AWS S3:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
This is a hard-coded limit in RGW as well, currently. You will need to
batch your deletes in groups of 1000. radosgw-admin has a
"--purge-objects" option
On 4/20/23 10:38, Casey Bodley wrote:
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote:
Hi Everyone,
I've been having trouble finding an answer to this question. Basically
I'm wanting to know if stuff in the .log pool is actively used for
anything or if it's just logs that can be deleted.
I
Yes, the POSIXDriver will support that. If you want NFS access, we'd
suggest you use Ganesha's FSAL_RGW to access through RGW (because
multipart uploads are not fun), but it will work.
Daniel
On 3/21/23 15:48, Fox, Kevin M wrote:
Will either the file store or the posix/gpfs filter support th
route?
Thanks,
Kevin
From: Daniel Gryniewicz
Sent: Monday, March 6, 2023 6:21 AM
To: Kai Stian Olstad
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: s3 compatible interface
On 3/3/
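For reference, an FSAL_RGW export of the sort suggested above might look
like this in ganesha.conf (a sketch; IDs, credentials, and names are made
up, so check the Ganesha docs for your version):

  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/rgw";
      Access_Type = RW;
      Protocols = 4;
      FSAL {
          Name = RGW;
          User_Id = "nfsuser";              # hypothetical RGW user
          Access_Key_Id = "ACCESS_KEY";
          Secret_Access_Key = "SECRET_KEY";
      }
  }
  RGW {
      ceph_conf = "/etc/ceph/ceph.conf";
      name = "client.rgw.nfs";              # hypothetical cephx name
  }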
On 3/3/23 13:53, Kai Stian Olstad wrote:
On Wed, Mar 01, 2023 at 08:39:56AM -0500, Daniel Gryniewicz wrote:
We're actually writing this for RGW right now. It'll be a bit before
it's productized, but it's in the works.
Just curious, what are the use cases for this featur
I can't speak for RBD, but for RGW, as long as you upgrade all the RGWs
themselves, clients will be fine, since they speak S3 to the RGWs, not
RADOS.
Daniel
On 3/3/23 04:29, Massimo Sgaravatto wrote:
Dear all
I am going to update a ceph cluster (where I am using only rbd and rgw,
i.e. I didn'
We're actually writing this for RGW right now. It'll be a bit before
it's productized, but it's in the works.
Daniel
On 2/28/23 14:13, Fox, Kevin M wrote:
Minio no longer lets you read / write from the posix side. Only through minio
itself. :(
Haven't found a replacement yet. If you do, ple
Does the mount have the "noexec" option on it?
Daniel
On 8/22/22 21:02, zxcs wrote:
In case someone is missing the picture, just copy the text as below:
ld@***ceph dir**$ ls -lrth
total 13M
-rwxr-xr-x 1 ld ld 13M Nov 29  2021 cmake-3.22
lrwxrwxrwx 1 ld ld  10 Jul 26 10:03 cmake -> cmake-3.22
-rwxrw
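One quick way to check the mount options in effect (the path is made up):

  # Show the options of the filesystem containing that directory:
  findmnt -T /path/to/ceph/dir -o TARGET,FSTYPE,OPTIONS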
It seems like the notification for a multipart upload should look
different from that for a normal upload?
Daniel
On 7/20/22 08:53, Yehuda Sadeh-Weinraub wrote:
Can maybe leverage one of the other calls to check for upload completion:
list multipart uploads and/or list parts. The latter should work if you
h
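Against the S3 API, those two calls look like this (a sketch with made-up
names):

  # In-progress multipart uploads for a bucket:
  aws s3api list-multipart-uploads --bucket mybucket
  # Parts uploaded so far for one upload; this fails once the upload completes:
  aws s3api list-parts --bucket mybucket --key mykey --upload-id "$UPLOAD_ID"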
Lifecycle only runs once per day, so you cannot set times less than a day.
Daniel
On 6/21/22 05:04, farhad kh wrote:
I want to set an LC rule for incomplete multipart uploads, but I cannot
find documentation saying whether minutes or hours can be used for the time.
How can I set an LC time of less than a day?
Abort incomplete
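On a test cluster the LC "day" can be shrunk so rules fire sooner (a
sketch; rgw_lc_debug_interval is meant for testing, not production):

  # Treat every 600 seconds as a full lifecycle "day":
  ceph config set client.rgw rgw_lc_debug_interval 600
  # Kick off a lifecycle pass by hand:
  radosgw-admin lc process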
On 6/15/22 14:06, Casey Bodley wrote:
(oops, i had cc'ed this to the old ceph-users list)
On Wed, Jun 15, 2022 at 1:56 PM Casey Bodley wrote:
On Mon, May 11, 2020 at 10:20 AM Abhishek Lekshmanan wrote:
The basic premise is for an account to be a container for users, and
also related funct
This is caused by an object that does not yet have a bucket associated
with it. It doesn't happen in S3, because S3 doesn't set_atomic() that
early, and it's fixed on main by the objctx removal (which is too
complicated for backport). Can you open a tracker for this, so that we
can get a fix
You can fail from one running Ganesha to another, using something like
ctdb or pacemaker/corosync. This is how some other clustered
filesystems (e.g. Gluster) use Ganesha. This is not how the Ceph
community has decided to implement HA with Ganesha, so it will be a more
manual setup for you, b
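A pacemaker-flavoured sketch of that kind of setup (the address is made
up, and a ctdb setup would look quite different):

  # Floating IP that follows the active Ganesha node, plus the daemon itself:
  pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24 --group nfs
  pcs resource create ganesha systemd:nfs-ganesha --group nfs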
showmount uses the MNT protocol, which is only part of NFSv3. NFSv4
mounts a pseudoroot, under which actual exports are exposed, so the
NFSv4 equivalent is to mount /, and then list it.
In general, NFSv4 should be used in preference to NFSv3 whenever possible.
Daniel
On 10/4/21 9:10 AM, Fyod
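Concretely (the server name is made up):

  # NFSv3: query the exports list over the MNT protocol
  showmount -e nfs.example.com
  # NFSv4 equivalent: mount the pseudoroot and list what is under it
  mount -t nfs4 nfs.example.com:/ /mnt
  ls /mnt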
On 7/20/21 5:23 PM, [AR] Guillaume CephML wrote:
Hello,
On 20 Jul 2021, at 17:48, Daniel Gryniewicz wrote:
That's probably this one: https://tracker.ceph.com/issues/49892 Looks like we
forgot to mark it for backport. I've done that now, so it should be in the
next Pacific.
That's probably this one: https://tracker.ceph.com/issues/49892 Looks
like we forgot to mark it for backport. I've done that now, so it
should be in the next Pacific.
Daniel
On 7/20/21 11:28 AM, [AR] Guillaume CephML wrote:
Hi all,
Context :
We are moving a customer users/buckets/o
That's this one:
https://github.com/ceph/ceph/pull/41893
Daniel
On 6/29/21 5:35 PM, Chu, Vincent wrote:
Hi, I'm running into an issue with RadosGW where multipart uploads crash, but
only on buckets with a hyphen, period or underscore in the bucket name and with
a bucket policy applied. We've
This tracker:
https://tracker.ceph.com/issues/50556
and this PR:
https://github.com/ceph/ceph/pull/41288
Daniel
On 5/12/21 7:00 AM, Daniel Iwan wrote:
Hi
I have started to see segfaults during multiplart upload to one of the
buckets
File is about 60MB in size
Upload of the same file to a brand
In order to enable NFS via Ganesha, you will need either an RGW or a
CephFS. Within the context of a Ceph deployment, Ganesha cannot export
anything of its own; it just exports either RGW or CephFS.
Daniel
On 4/5/21 1:43 PM, Robert Sander wrote:
Hi,
I have a test cluster now running on Pacifi
Hi.
Unfortunately, there isn't a good guide for sizing Ganesha. It's pretty
lightweight, so the machines it needs are generally smaller than what
Ceph needs, and you probably won't have much of a problem.
Ganesha scales along two factors, based on the workload involved:
the CPU us
I'm not sure what to add. NFSv3 uses a random port for the server, and
uses a service named portmapper so that clients can find the port of the
server. Connecting to the portmapper requires a privileged container.
With docker, this is done with the --privileged option. I don't know
how to do
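A sketch of the docker invocation (the image name and config path are
made up):

  # --privileged lets the containerized Ganesha talk to the portmapper;
  # host networking keeps the NFS ports visible to clients.
  docker run -d --privileged --network host \
      -v /etc/ganesha:/etc/ganesha \
      my-nfs-ganesha-image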
At 13:24, Daniel Gryniewicz wrote:
NFSv3 needs privileges to connect to the portmapper. Try running
your docker container in privileged mode, and see if that helps.
Daniel
On 9/23/20 11:42 AM, Gabriel Medve wrote:
Hi,
I have Ceph 15.2.5 running in Docker. I configured NFS Ganesha
wit
The preference for 4.1 and later is because 4.0 has a much less useful
graceful restart (which is used for HA/failover as well). Ganesha
itself supports 4.0 perfectly fine, and it should work fine with Ceph,
but HA setups will be much more difficult, and will be limited in
functionality.
Dan
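Clients can request 4.1 explicitly at mount time (the server name is made
up):

  mount -t nfs -o vers=4.1 nfs.example.com:/ /mnt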
It looks like your radosgw is using a different version of librados. In
the backtrace, the top useful line begins:
librados::v14_2_0
when it should be v15.2.0, like the ceph::buffer in the same line.
Is there an old librados lying around that didn't get cleaned up somehow?
Daniel
On 1/28/
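One way to check which librados the binary actually resolves at runtime
(a sketch):

  ldd "$(command -v radosgw)" | grep librados
  # And the installed package version(s):
  rpm -q librados2 2>/dev/null || dpkg -l 'librados*'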
total_time is calculated from the top of process_request() until the
bottom of process_request(). I know that's not hugely helpful, but it's
accurate.
This means it starts after the front-end passes the request off, and
counts until after a response is sent to the client. I'm not sure if it
NFSv3 needs privileges to connect to the portmapper. Try running your
docker container in privileged mode, and see if that helps.
Daniel
On 9/23/20 11:42 AM, Gabriel Medve wrote:
Hi,
I have Ceph 15.2.5 running in Docker. I configured NFS Ganesha with
NFS version 3 but I cannot mount it
Basically same thing that happens when you overwrite any object. New
data is sent from the client, and a new Head is created pointing at it.
The old head is removed, and the data marked for garbage collection if
it's unused (which it won't be, in this case, since another Head points
at it).
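The garbage-collection queue mentioned here can be inspected directly (a
sketch):

  # Objects currently scheduled for garbage collection:
  radosgw-admin gc list --include-all
  # Trigger a GC pass by hand if needed:
  radosgw-admin gc process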
rados_connect() is used by the recovery and/or grace code. It's
configured separately from CephFS, so its errors are unrelated to
CephFS issues.
Daniel
On 6/3/20 8:54 AM, Simon Sutter wrote:
Hello,
Thank you very much.
I was a bit worried about all the other messages, especially those tw
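The recovery/grace backend carries its own RADOS settings in ganesha.conf,
along these lines (a sketch; the names are made up and option names vary
by Ganesha version):

  RADOS_KV {
      ceph_conf = "/etc/ceph/ceph.conf";
      userid = "ganesha";      # hypothetical cephx user for the grace db
      pool = "nfs-ganesha";    # hypothetical pool holding recovery state
  }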
You need to disable _MSPAC_SUPPORT to get rid of this dep.
Daniel
On 11/17/19 5:55 AM, Marc Roos wrote:
================================================================
 Package                     Arch                Version
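When building Ganesha from source, that is (paths made up):

  cmake -D_MSPAC_SUPPORT=OFF /path/to/nfs-ganesha/src
  make && make install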
It sounds like you're putting the FSAL_CEPH config in another file in
/etc/ganesha. Ganesha only loads one file: /etc/ganesha/ganesha.conf -
other files need to be included in that file with the %include command.
For a simple config like yours, just use the single
/etc/ganesha/ganesha.conf fil
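If the config must stay split across files, the main file has to pull the
others in explicitly, e.g. (the included path is made up):

  # /etc/ganesha/ganesha.conf -- the only file Ganesha reads on its own
  %include "/etc/ganesha/exports/cephfs-export.conf"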
On 3/24/20 1:16 PM, Maged Mokhtar wrote:
On 24/03/2020 16:48, Maged Mokhtar wrote:
On 24/03/2020 15:14, Daniel Gryniewicz wrote:
On 3/24/20 8:19 AM, Maged Mokhtar wrote:
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50
On 3/24/20 8:19 AM, Maged Mokhtar wrote:
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs
write caching on, or should it be configured off for failover ?
You ca
Lifecycle is designed to run once per day. There's a lot of resource
optimization that's done based on this assumption to reduce the overhead
of lifecycle on the cluster. One of these is that it only builds the
list of objects to handle the first time it's run in that day. So, in
this case,
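The per-bucket lifecycle state, including when each bucket was last
processed, can be checked with:

  radosgw-admin lc list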
On 11/17/19 1:42 PM, Marc Roos wrote:
Hi Daniel,
I am able to mount the buckets with your config, however when I try to
write something, my logs get a lot of these errors:
svc_732] nfs4_Errno_verbose :NFS4 :CRIT :Error I/O error in
nfs4_write_cb converted to NFS4ERR_IO but was set non-retry
S3 is not a browser-friendly protocol. There isn't a way to get
user-friendly output via the browser alone; you need some form of
client that speaks the S3 REST protocol. The most commonly used one
by us is s3cmd, which is a command-line utility. A quick Google
search finds some web-based client
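For example, pointing s3cmd at an RGW endpoint (host names are made up;
normally these settings live in ~/.s3cfg via "s3cmd --configure"):

  s3cmd --host=rgw.example.com --host-bucket='%(bucket)s.rgw.example.com' ls
  s3cmd --host=rgw.example.com --host-bucket='%(bucket)s.rgw.example.com' ls s3://mybucket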
Sounds like someone turned on MSPAC support, which is off by default.
It should probably be left off.
Daniel
On 9/26/19 1:19 PM, Marc Roos wrote:
Yes, I think it is this one: libntirpc. In 2.6 this samba dependency was not
there.
-----Original Message-----
From: Daniel Gryniewicz [mailto:d
Ganesha itself has no dependencies on samba (and there aren't any on
my system, when I build). These must be being pulled in by something
else that Ganesha does use.
Daniel
On Thu, Sep 26, 2019 at 11:21 AM Marc Roos wrote:
>
>
> Is it really necessary to have these dependencies in nfs-ganesha 2