Hi,
I followed the steps to repair the journal and MDS that I found here on
the list. I hit a bug that stopped my MDS from starting, so I took the
long way of reading the data.
Everything went fine and I can even mount one of my CephFS filesystems
now. That's a big relief.
But when I start scrub, I just get retur
Hi, according to this thread [1] it means that the scrub path does not
exist, but you don't specify a path at all. Can you retry with "/"?
Do you see anything with damage ls?
[1] https://www.mail-archive.com/ceph-users@lists.ceph.com/msg56062.html
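For anyone following along, a minimal sketch of the commands being
discussed, assuming the filesystem is named "cephfs" (adjust the name and
rank to your setup):

  # start a recursive scrub at the filesystem root
  ceph tell mds.cephfs:0 scrub start / recursive
  # check scrub progress
  ceph tell mds.cephfs:0 scrub status
  # list any damage the MDS has recorded
  ceph tell mds.cephfs:0 damage ls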
Quoting Thomas Widhalm:
Hi,
I followe
Hi,
OK, I added "/" and now it works. Thanks.
I saw the "/" in the text I used; I just thought it meant "use either this
command or that". Sorry.
On 29.04.23 13:00, Eugen Block wrote:
Hi, according to this thread [1] it means that the scrub path does not
exist, but you don't specify a path at all.
I'm in the process of exploring whether it is worthwhile to add RadosGW to
our existing ceph cluster. We've had a few internal requests for
exposing the S3 API for some of our business units; right now we just
use the ceph cluster for VM disk image storage via RBD.
Everything looks pretty straight
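For reference, a minimal sketch of adding an RGW service on a
cephadm-managed cluster; the service name, hostnames and port below are
illustrative, not taken from the original post:

  # deploy rgw daemons on two hosts (hostnames are placeholders)
  ceph orch apply rgw s3main --placement="host1 host2" --port=8080
  # create an S3 user and print its access/secret keys
  radosgw-admin user create --uid=testuser --display-name="Test User"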
On Fri, 28 Apr 2023 22:46:32 +0530 Milind Changire wrote:
> If a dir doesn't exist at the moment of snapshot creation, then the
> schedule is deactivated for that dir.
Ha! Good catch! As so often, I completely forgot about the /volumes prefix...
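A short sketch of checking and re-activating a schedule on the full
subvolume path; the filesystem and subvolume names here are made up:

  # resolve the real subvolume path, including the /volumes prefix
  ceph fs subvolume getpath cephfs mysubvol
  # check whether the schedule on that path is still active
  ceph fs snap-schedule status /volumes/_nogroup/mysubvol/<uuid-from-getpath>
  # re-activate it if it was deactivated while the dir was missing
  ceph fs snap-schedule activate /volumes/_nogroup/mysubvol/<uuid-from-getpath>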
--
ceterum censeo microsoftem esse delendam.
Hello,
What is your current setup, 1 server per data center with 12 OSDs each?
What are your current CRUSH rule and LRC CRUSH rule?
On Fri, Apr 28, 2023, 12:29 Michel Jouvin wrote:
> Hi,
>
> I think I found a possible cause of my PG down but still don't understand why.
> As explained in a previous mai
Hi,
No... our current setup is 3 data centers with the same configuration,
i.e. 1 mon/mgr + 4 OSD servers with 16 OSDs each, thus the total of 12
OSD servers. As with the LRC plugin k+m must be a multiple of l, I found
that k=9/m=66/l=5 with crush-locality=datacenter was achieving my goal
of bei
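A sketch of the kind of profile being described, using the k/m/l values
quoted above; the profile and pool names are placeholders, and the values
should be adjusted to your own capacity and durability needs:

  # LRC profile: k+m must be a multiple of l (here 9+66=75, l=5)
  ceph osd erasure-code-profile set lrc_dc plugin=lrc k=9 m=66 l=5 \
      crush-locality=datacenter crush-failure-domain=host
  # create an erasure-coded pool using that profile
  ceph osd pool create lrcpool 32 32 erasure lrc_dc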
Bailey,
Thanks for your extensive reply, you sent me down the wormhole of CephFS
and SMB (looking at a lot of 45drives videos and knowledge base articles,
the Houston dashboard, reading up on CTDB, etc.), and this is a really
interesting option as well! Thanks for the write-up.
By the way, are you usi
Thanks Alex, interesting perspectives.
I already thought about Proxmox as well, and that would also work quite
nicely. I think that would be the most performant option to put VMs on
RBD.
But my entire goal was to run SMB servers on top of that hypervisor
layer, to serve SMB shares to Windo
Hi Angelo,
You can always use Samba to serve shares; it works well with AD, if that
is needed. You may want to benchmark your prototypes in as close to a
production setting as possible.
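As an illustration only, a minimal smb.conf share fragment using Samba's
vfs_ceph module; the share name, path and CephX user are assumptions, not
something from this thread:

  # add to /etc/samba/smb.conf
  [cephfs-share]
     # with vfs_ceph the path is interpreted inside CephFS
     path = /shares/smb
     vfs objects = ceph
     ceph:config_file = /etc/ceph/ceph.conf
     ceph:user_id = samba
     read only = no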
--
Alex Gorbachev
ISS Storcium
iss-integration.com
On Sat, Apr 29, 2023 at 10:58 PM Angelo Hongens wrote:
On Fri, 28 Apr 2023 at 14:51, Niklas Hambüchen wrote:
>
> Hi all,
>
> > Scrubs only read data that does exist in ceph as it exists, not every
> > sector of the drive, written or not.
>
> Thanks, this does explain it.
>
> I just discovered:
>
> ZFS had this problem in the past:
>
> *
> https://ut
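As a small aside, scrubbing can be driven and inspected per placement
group, which reflects that it walks existing objects rather than raw disk
sectors; the pg id below is just an example:

  # ask one placement group to deep-scrub now
  ceph pg deep-scrub 2.1f
  # after it finishes, list any objects the scrub found inconsistent
  rados list-inconsistent-obj 2.1f --format=json-pretty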