>
> I bumped into a very interesting challenge: how do you securely erase the
> data of an RBD image without any encryption?
>
> The motivation is to ensure that there is no information leak on the OSDs
> after deleting a user-specified RBD image, without the extra burden of
> using RBD encryption.
>
> Any ideas?
> I have a very old Ceph cluster running the old Dumpling release 0.67.1. One
> of the three monitors suffered a hardware failure, and I am setting up a new
> server running Ubuntu 22.04 LTS to replace the third monitor (all the other
> monitors are on the old Ubuntu 12.04 LTS).
> - Try to inst
Unfortunately this is impossible to achieve.
Unless you can guarantee that the same physical pieces of disk will always be
mapped to the same parts of the RBD device, you will leave data lying around
on the array. How easy it is to recover is a bit of a question of how valuable t
On Thu, 8 Jun 2023 at 09:43, Marc wrote:
> > I bumped into a very interesting challenge: how do you securely erase the
> > data of an RBD image without any encryption?
As Darren replied while I was typing this, you can't have dangerous
data written all over a cluster which automatically moves data around,
an
Hi,
Can you paste the following output?
# ceph config-key list | grep grafana
Do you have a mgr/cephadm/grafana_key set? I would check the contents
of crt and key and see if they match. A workaround to test the
certificate and key pair would be to use a per-host config [1]. Maybe
it's not
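If the listing shows both a grafana_crt and a grafana_key entry, one way to
check that the stored pair actually belongs together is to compare the public
keys (the key names below are the usual cephadm ones; adjust them to whatever
your listing shows):
# ceph config-key get mgr/cephadm/grafana_crt > /tmp/grafana.crt
# ceph config-key get mgr/cephadm/grafana_key > /tmp/grafana.key
# openssl x509 -noout -pubkey -in /tmp/grafana.crt | openssl md5
# openssl pkey -pubout -in /tmp/grafana.key | openssl md5
The two digests must be identical if the certificate matches the key.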
Hi Ceph users,
We have 3 clusters running Pacific 16.2.9, all set up in a multisite
configuration with no data replication (we wanted to use per-bucket policies
but never got them working to our satisfaction). All of the resharding
documentation I've found regarding multisite is centred around m
Hi,
Sorry for not responding earlier.
Pardon my ignorance; I'm not quite sure what you mean by subtree
pinning. I quickly googled it and saw it was a new feature in Luminous. We
are running Pacific. I would assume this feature was not out yet.
Luminous is older than Pacific, so the feat
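For reference, a static subtree pin in Pacific is just an extended attribute
set on a directory of a mounted CephFS (the mount path and rank below are
placeholders):
# setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/some/dir
This pins that subtree to MDS rank 1; setting the value to -1 removes the pin.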
Hi,
I wonder if a redeploy of the crash service would fix that; did you try that?
Quoting Zakhar Kirpichenko:
I've opened a bug report https://tracker.ceph.com/issues/61589, which
unfortunately received no attention.
I fixed the issue by manually setting directory ownership
for /var/lib/ce
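On a cephadm-managed cluster, the redeploy suggested above would be roughly
(service and daemon names are the cephadm defaults):
# ceph orch redeploy crash
# ceph orch ps --daemon-type crash
The second command just confirms the crash daemons came back up after the
ownership fix.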
Curious if anyone had any guidance on this question...
On 4/29/23 7:47 AM, Brad House wrote:
I'm in the process of exploring if it is worthwhile to add RadosGW to
our existing ceph cluster. We've had a few internal requests for
exposing the S3 API for some of our business units, right now we j
I have a small cluster on Pacific with roughly 600 RBD images. Out of those
600 images I have 2 which are in a somewhat odd state.
root@cephmon:~# rbd info Cloud-Ceph1/vm-134-disk-0
rbd image 'vm-134-disk-0':
        size 1000 GiB in 256000 objects
        order 22 (4 MiB objects)
        snaps
Dear Ceph folks,
In a Ceph cluster there can be multiple points (e.g. librbd clients) capable
of executing rbd commands. My question is: is there a method to reliably
record or keep a full history of every rbd command that has ever been executed?
This would be helpful for auditors as well as for
Thank you for the answer, that's what I was looking for!
On Wed, Jun 7, 2023 at 7:59 AM Kotresh Hiremath Ravishankar <khire...@redhat.com> wrote:
>
>
> On Tue, Jun 6, 2023 at 4:30 PM Dario Graña wrote:
>
>> Hi,
>>
>> I'm installing a new instance (my first) of Ceph. Our cluster runs
>> AlmaLinu
Hi, we have a Ceph 17.2.6 cluster with radosgw and a couple of buckets in it.
We use it for backups with object lock, written directly from Veeam.
After a few backups we got:
HEALTH_WARN 2 large omap objects
ceph df detail:
[root@k8s-1 ~]# ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    600 GiB  600 GiB  157 MiB  157 MiB        0.03
TOTAL  600 GiB  600 GiB  157 MiB  157 MiB        0.03
--- POOLS ---
POOL  ID  PGS  STOR
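With RGW involved, large omap warnings usually point at bucket index shards.
A couple of commands that help narrow it down (the bucket name is a
placeholder):
# ceph health detail
# radosgw-admin bucket limit check
# radosgw-admin bucket stats --bucket=<bucket>
'bucket limit check' reports objects per shard and flags buckets whose index
should be resharded.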
ok, I will try it. Could you show me the archive doc?
I had set it from 0.05 to 1 with "ceph config set mon
target_max_misplaced_ratio 1.0", but it still has no effect.
When I set the user's auth and then list the namespaces, it is OK.
But when I set the user's auth with a namespace, listing the namespaces
returns an error. Why?
Hi Richard,
Thank you, that's what I thought; I've also seen that doc.
So I imagine that log_meta is false on secondary zones because metadata
requests are forwarded to the master zone, so there is no need to sync.
Regards,
--
Gilles
On Thursday, 8 June 2023 at 03:15:56 CEST, Richard Bade wrote:
> Hi Gilles,
Hi,
> On 7 Jun 2023, at 10:02, Louis Koo wrote:
>
> I had set it from 0.05 to 1 with "ceph config set mon
> target_max_misplaced_ratio 1.0", but it still has no effect.
Because this is a setting for the mgr, not for the mon; try `ceph config set mgr
target_max_misplaced_ratio 1`
Cheers,
k
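To confirm the value landed where the balancer reads it, querying the mgr
should show the new ratio afterwards:
# ceph config get mgr target_max_misplaced_ratio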
Hi,
> On 7 Jun 2023, at 14:39, zyz wrote:
>
> When I set the user's auth and then list the namespaces, it is OK.
>
> But when I set the user's auth with a namespace, listing the namespaces
> returns an error. Why?
Because the namespace data itself is stored in the "without namespace"
(default) space.
k
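A minimal way to see this with rados (pool, namespace and user names are
placeholders; the same idea applies to rbd namespaces):
# ceph auth caps client.ns-user mon 'allow r' osd 'allow rw pool=testpool namespace=ns1'
# rados --id ns-user -p testpool --namespace ns1 ls
# rados --id ns-user -p testpool ls
The first listing stays inside the granted namespace; the second one targets
the default ("without namespace") space, which the restricted caps do not
cover, and that is the space the reply above is referring to.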
Sure: https://docs.ceph.com/en/latest/rados/operations/balancer/#throttling
Quoting Louis Koo:
ok, I will try it. Could you show me the archive doc?
On Mon, Jun 5, 2023 at 11:48 AM Janek Bevendorff wrote:
>
> Hi Patrick, hi Dan!
>
> I got the MDS back and I think the issue is connected to the "newly
> corrupt dentry" bug [1]. Even though I couldn't see any particular
> reason for the SIGABRT at first, I then noticed one of these awfully
> fami
Thanks. Another question: how do I know where this option is set, mon or mgr?
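Two commands that help answer that (option name kept from this thread):
# ceph config dump | grep target_max_misplaced_ratio
# ceph config help target_max_misplaced_ratio
'config dump' shows under which section (mon, mgr, global, ...) each stored
value was set, and 'config help' shows which daemon the option belongs to.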
Thanks, I will take a look.
The pgp_num is reduced quickly, but pg_num still goes down slowly.
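That is generally expected: pg_num is only walked down in small steps as PG
merges complete, while pgp_num can drop faster. Progress can be watched with
(the pool name is a placeholder):
# ceph osd pool ls detail | grep <pool>
which shows pg_num and pgp_num and, while a change is in flight, the
corresponding pg_num_target / pgp_num_target values.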