On Mon, 22 Apr 2019, 22:20 Gregory Farnum wrote:
> On Sat, Apr 20, 2019 at 9:29 AM Igor Podlesny wrote:
> >
> > I remember seeing reports in this regard, but it's been a while now.
> > Can anyone tell?
>
> No, this hasn't changed. It's unlikely it ever will; I think NFS
> resolved the issue but it
I have been running a Ceph cluster on 5 servers, each with a single OSD and
also acting as a (kernel) client, for nearly half a year now, and I haven't
encountered a lockup yet. Total storage is 3.25 TB with about 600 GB of raw
storage used, if that matters.
Dan van der Ster wrote on Tue., 23 Apr 2019, 09:33:
> On
I have only this in the default section; I think it is related to not
having any configuration for some of these OSDs. I 'forgot' to add the
[osd.x] sections for the most recently added node. But in any case there is
nothing, AFAIK, that should make them behave differently.
[osd]
osd journal size = 1024
osd p
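For comparison, a rough sketch of how explicit per-OSD sections usually look
in ceph.conf (the host names and OSD ids below are only illustrative, not
taken from my cluster):

[osd]
osd journal size = 1024

[osd.0]
host = node1

[osd.1]
host = node2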
I am not sure about your background knowledge of Ceph, but if you are just
starting, maybe first try to get Ceph working in a virtual environment;
that should not be too much of a problem. Then try migrating it to your
container. Now you are probably fighting too many issues at the same
time.
Hi,
You probably forgot to recreate the OSD after changing
bluestore_min_alloc_size.
Regards,
Frédéric.
- On 22 Apr 19, at 5:41, 刘 俊 wrote:
> Hi All,
> I still see this issue with the latest Ceph Luminous 12.2.11 and 12.2.12.
> I have set bluestore_min_alloc_size = 4096 before the tes
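As a rough aside: bluestore_min_alloc_size only takes effect when the OSD is
created, so the usual sequence is to set it first and then recreate the OSD,
for example (osd.0 is just a placeholder id):

# in ceph.conf before recreating the OSD
[osd]
bluestore_min_alloc_size = 4096

# after the OSD has been recreated, check the value its daemon is using
ceph daemon osd.0 config get bluestore_min_alloc_size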
Hello,
I have a cluster with 6 OSD nodes, each with 10 SATA 8 TB drives. Node 6 was
just added. All nodes are on a 10 Gbps network with jumbo frames. S3
application access is working as expected, but recovery is extremely slow.
Based on past posts I attempted to do the following:
Alter the osd_r
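For anyone following along, the first things worth checking are the recovery
and backfill settings actually in effect and the recovery rate itself; a rough
sketch (osd.0 is just an example daemon):

# on an OSD host, dump the recovery/backfill options currently in effect
ceph daemon osd.0 config show | grep -E 'osd_recovery|osd_max_backfills'

# from any admin host, watch recovery throughput and progress
ceph -s
ceph -w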
You should be able to see all pools in use in an RGW zone from the
radosgw-admin command. This [1] is probably overkill for most, but I deal
with multi-realm clusters, so I generally think like this when dealing with
RGW. Running this as is will create a file in your current directory for
each zone.
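A minimal sketch of that kind of per-zone dump (this is just the general
shape, not the exact script from [1], and it assumes jq is installed):

for zone in $(radosgw-admin zone list | jq -r '.zones[]'); do
    # each zone's configuration, including its pool names, goes to its own file
    radosgw-admin zone get --rgw-zone="$zone" > "zone-${zone}.json"
done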
Thanks, but maybe this does not work on Luminous? I am on the mon hosts
trying this:
# ceph config set osd osd_recovery_max_active 4
Invalid command: unused arguments: [u'4']
config set <var> <val> [<val>...] :  Set a configuration option at runtime
(not persistent)
Error EINVAL: invalid command
# ceph daemon osd.0
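For reference, on Luminous the mon-side 'ceph config set' store is not there
yet (it arrived with the Mimic config database); the usual runtime
alternatives look roughly like this:

ceph tell 'osd.*' injectargs '--osd_recovery_max_active 4'   # all OSDs, not persistent
ceph daemon osd.0 config set osd_recovery_max_active 4       # one OSD, via its admin socket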
On 04/18/2019 06:24 AM, Matthias Leopold wrote:
> Hi,
>
> the Ceph iSCSI gateway has a problem when receiving discovery auth
> requests when discovery auth is not enabled. Target discovery fails in
> this case (see below). This is especially annoying with oVirt (KVM
> management platform) where yo
Hello,
Recently I have had an issue where BlueStore tries to delete WAL files
(indicated in the OSD log), the IO on the disk (HDD, spinning) reaches 100%,
and this introduces slow requests to the cluster.
Is there any way to throttle this operation down or disable it completely?
Thanks
Regards,
I Ge
On Tue, Apr 23, 2019 at 2:58 PM Marc Roos wrote:
>
>
>
> I am not sure about your background knowledge of Ceph, but if you are just
> starting, maybe first try to get Ceph working in a virtual environment;
> that should not be too much of a problem. Then try migrating it to your
> container. Now you ar
Hi,
I have been running a cluster for some time now, and recently it has often
run into an unhealthy state.
With 'ceph health detail', one or two PGs are inconsistent. What's
more, the PGs in a wrong state are not on the same disk from one day to the
next, so I don't think it's a disk problem.
The cluster
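For reference, a rough sketch of the usual way to inspect and repair an
inconsistent PG (2.5 below is just a placeholder PG id):

ceph health detail                                    # lists the inconsistent PGs
rados list-inconsistent-obj 2.5 --format=json-pretty  # show which objects/shards mismatch
ceph pg repair 2.5                                    # ask the primary OSD to repair the PG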