Hi,
Do you mean freshly deployed OSDs, or existing OSDs that were just restarted?
Thanks,
k
Sent from my iPhone
> On 8 Jun 2021, at 23:30, Jan-Philipp Litza wrote:
>
> recently I'm noticing that starting OSDs for the first time takes ages
> (like, more than an hour) before they are even picked up by the monitors
> as "up" and start backfilling.
Stored==used was resolved for this cluster. The actual problem is the one you
discovered last year: zeros. Filestore lacks the META counter - it is always
zero. When I purged the last drained OSD from the cluster, the statistics
returned to normal immediately.
Thanks,
k
> On 20 May 2021, at 21:22, Dan van
Hi Jan-Philipp,
I've noticed this a couple of times on Nautilus after doing some large
backfill operations. It seems the osdmaps don't get trimmed properly after
the cluster returns to HEALTH_OK, and they build up on the mons. I do a
"du" on the mon folder, e.g. du -shx /var/lib/ceph/mon/, and this shows
seve
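A minimal sketch of that kind of check, plus a manual compaction (the mon id
"foo" below is a placeholder, not taken from this thread):

  # size of each monitor's store.db
  du -shx /var/lib/ceph/mon/*

  # if the store has grown large, a compaction can be triggered per monitor
  ceph tell mon.foo compact

Whether compaction actually shrinks the store depends on whether the old
osdmaps have been trimmed by then.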
I am running the ceph-ansible playbook to install Ceph version stable-6.0
(Pacific).
When running the sample yml file that was supplied by the GitHub repo, it
runs fine up until the "ceph-mon : check if monitor initial keyring already
exists" step. There it hangs for 30-40 minutes before failing.
Hi everyone,
recently I'm noticing that starting OSDs for the first time takes ages
(like, more than an hour) before they are even picked up by the monitors
as "up" and start backfilling. I'm not entirely sure if this is a new
phenomenon or if it was always that way. Either way, I'd like to
understand
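One way to see what a slow-starting OSD is waiting on (a sketch; the OSD id 12
is a made-up placeholder) is to compare the map epochs the daemon has against
the cluster's current epoch:

  # current cluster osdmap epoch
  ceph osd stat

  # what the starting OSD has caught up to (run on the OSD's host, via the admin socket)
  ceph daemon osd.12 status

If newest_map in the daemon status is far behind the cluster's epoch, the OSD
is still working through old osdmaps before it can be marked up.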
When I had issues with the monitors, it was access to the monitor folder under
/var/lib/ceph/<fsid>/mon.<hostname>/store.db; make sure
it is owned by the ceph user.
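As a quick sketch (the <fsid>/<hostname> path is an assumed cephadm-style
layout; adjust it to your deployment):

  # check who owns the mon store
  ls -ld /var/lib/ceph/<fsid>/mon.<hostname>/store.db

  # fix the ownership if it is not the ceph user
  chown -R ceph:ceph /var/lib/ceph/<fsid>/mon.<hostname>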
My issues originated from a hardware issue - the memory needed 1.3 V, but the
motherboard was only reading 1.2 (the memory had the issue, the fir
On Tue, Jun 8, 2021 at 9:20 PM Phil Merricks wrote:
>
> Hey folks,
>
> I have deployed a 3 node dev cluster using cephadm. Deployment went
> smoothly and all seems well.
>
> If I try to mount a CephFS from a client node, 2/3 mons crash however.
> I've begun picking through the logs to see what I
A DocuBetter Meeting will be held on 09 June 2021 at 1730 UTC.
This is the monthly DocuBetter Meeting that is more convenient for
European and North American Ceph contributors than the other meeting,
which is convenient for people in Australia and Asia (and which is very
rarely attended).
Topics:
Hey folks,
I have deployed a 3 node dev cluster using cephadm. Deployment went
smoothly and all seems well.
If I try to mount a CephFS from a client node, 2/3 mons crash however.
I've begun picking through the logs to see what I can see, but so far
other than seeing the crash in the log itself,
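On a cephadm cluster, the crash details can usually be pulled out like this
(a sketch; "mon.host1" and the crash id are placeholders):

  # list recent daemon crashes recorded by the cluster
  ceph crash ls

  # full metadata and backtrace for one crash
  ceph crash info <crash-id>

  # raw container log for the crashed mon
  cephadm logs --name mon.host1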
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.10.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to th
Yes, but with this only the bucket contents will not be synced. The bucket itself
will be available everywhere, it will just be empty.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
--
Some more information:
HKG is the master; ASH and SGP are the secondaries. Let me show 1 shard in all DCs
(FYI, the bucket that this bucket index relates to has been deleted).
HKG and ASH give back empty output for this command:
rados -p <hkg or ash>.rgw.buckets.index listomapvals
.dir.9213182a-
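For reference, a sketch of how one index shard can be compared across the zones
(the pool name prefixes and the bucket marker below are placeholders):

  # index objects (shards) left over for the deleted bucket's marker
  rados -p hkg.rgw.buckets.index ls | grep '.dir.<bucket_marker>'

  # omap key count of one shard, per zone
  rados -p hkg.rgw.buckets.index listomapkeys .dir.<bucket_marker>.0 | wc -l
  rados -p sgp.rgw.buckets.index listomapkeys .dir.<bucket_marker>.0 | wc -l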
Hi Michael,
On 8/6/21 at 11:38, Ml Ml wrote:
Hello List,
I used to build 3-node clusters with spinning rust and later with
(enterprise) SSDs.
All I did was buy a 19" server with 10/12 slots, plug in the disks,
and I was done.
The requirements were just 10-15TB of disk usage (30-45TB raw).
Hello List,
I used to build 3-node clusters with spinning rust and later with
(enterprise) SSDs.
All I did was buy a 19" server with 10/12 slots, plug in the disks,
and I was done.
The requirements were just 10-15TB of disk usage (30-45TB raw).
Now I was asked if I could also build a cheap 200-500TB cluster storage,
which should also scale. Just for data storage such as NextCloud/OwnCloud.
Hi,
In my multisite setup 1 big bucket has been deleted, and it seems it hasn't been
cleaned up on one of the secondary sites.
Is it safe to delete the 11 shard objects from the index pool which are holding the
omaps of that bucket's files?
Also a quick question: is it a problem if we use it like this?
Cre
On Tue, 8 Jun 2021 at 14:31, Rok Jaklič wrote:
> Which mode is that and where can I set it?
> This one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/ ?
Yes, the description says it all there, doesn't it?
>>
>> Apart from that, there is a mode for RGW with tenant/bucketname wh
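As a sketch of that tenant mode (the tenant and user names below are made-up
examples): each user is created under its own tenant, and each tenant then has
its own bucket namespace, so both users can hold a "bucket1".

  # two users in two different tenants
  radosgw-admin user create --tenant=tenant1 --uid=user1 --display-name="User One"
  radosgw-admin user create --tenant=tenant2 --uid=user2 --display-name="User Two"

  # each user can now create bucket1 in its own namespace;
  # S3 clients that support it address the buckets as tenant1:bucket1 / tenant2:bucket1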
Which mode is that and where can I set it?
This one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/
?
On Tue, Jun 8, 2021 at 2:24 PM Janne Johansson wrote:
> On Tue, 8 Jun 2021 at 12:38, Rok Jaklič wrote:
> > Hi,
> > I try to create buckets through rgw in the following order:
>
On Tue, 8 Jun 2021 at 12:38, Rok Jaklič wrote:
> Hi,
> I try to create buckets through rgw in the following order:
> - *bucket1* with *user1* with *access_key1* and *secret_key1*
> - *bucket1* with *user2* with *access_key2* and *secret_key2*
>
> when I try to create a second bucket1 with user2 I get
On 6/8/21 4:59 PM, Szabo, Istvan (Agoda) wrote:
Yes, but with this only the bucket contents will not be synced. The bucket itself
will be available everywhere, it will just be empty.
There is an option to enable sync on the bucket(s), which will then be
synced across all the configured zones (as per the gr
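Assuming the per-bucket sync controls are what is meant here, a sketch (the
bucket name is a placeholder):

  # check the sync status of a single bucket
  radosgw-admin bucket sync status --bucket=bucket1

  # enable or disable sync for just that bucket
  radosgw-admin bucket sync enable --bucket=bucket1
  radosgw-admin bucket sync disable --bucket=bucket1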
Hi Michael,
On 08.06.21 11:38, Ml Ml wrote:
Now I was asked if I could also build a cheap 200-500TB cluster storage,
which should also scale. Just for data storage such as
NextCloud/OwnCloud.
With similar requirements (server primarily for Samba and NextCloud,
some RBD use, very limited budge
Hi,
On 08/06/2021 11:37, Rok Jaklič wrote:
I try to create buckets through rgw in the following order:
- *bucket1* with *user1* with *access_key1* and *secret_key1*
- *bucket1* with *user2* with *access_key2* and *secret_key2*
when I try to create a second bucket1 with user2 I get *Error response
Hi,
I try to create buckets through rgw in the following order:
- *bucket1* with *user1* with *access_key1* and *secret_key1*
- *bucket1* with *user2* with *access_key2* and *secret_key2*
when I try to create a second bucket1 with user2 I get *Error response code
BucketAlreadyExists.*
Why? Should no
Since you mention NextCloud, it will probably be an RGW deployment. Also it's
not clear why 3 nodes. Is rack space at a premium?
Just to compare with your suggestion:
3x24 (I guess 4U?) x 8TB with replication = 576 TB raw storage, 192 TB
usable
Let's go 6x12 (2U) x 4TB with EC 3+2 = 288 TB raw storage, 172
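A quick back-of-the-envelope check of those figures (integer shell arithmetic;
replication size 3 is assumed for the first layout):

  # 3 nodes x 24 slots x 8 TB, 3x replication
  echo "raw: $((3*24*8)) TB, usable: $((3*24*8/3)) TB"

  # 6 nodes x 12 slots x 4 TB, erasure coding k=3 m=2
  echo "raw: $((6*12*4)) TB, usable: $((6*12*4*3/5)) TB"

These usable numbers ignore nearfull ratios and other overhead.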
On Tue, 8 Jun 2021 at 11:39, Ml Ml wrote:
> Maybe combine 3x 10TB HDDs to a 30TB Raid0/striping Disk => which
> would speed up the performance, but have a bigger impact on a dying
> disk.
^^
This sounds like a very bad idea.
When this 30T monster fails, you will have to wait for 30TB to reb
Hi,
client_force_lazyio only works for ceph-fuse and libcephfs:
https://github.com/ceph/ceph/pull/26976/files
You can use the ioctl to enable it per file with the kernel mount, but
you might run into the same problem we did:
https://tracker.ceph.com/issues/44166
Please share if it works for you.
C
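A minimal sketch of the ceph-fuse/libcephfs side of this (it just sets the
client option via ceph.conf; it has no effect on the kernel mount, where the
per-file ioctl mentioned above would be needed):

  # ceph.conf on the client host running ceph-fuse / libcephfs
  [client]
      client_force_lazyio = true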