Hi Matt and Adam,
Thanks a lot for your reply.
Attached are the logs that are generated when I shared the bucket from an
RGW user (ceph-dashboard) to an LDAP user (sonhaiha) and vice versa.
[sonhaiha@DEFR500 ~]$ s3cmd -c .s3cfg-cephdb info s3://shared-bucket
s3://shared-bucket/ (bucket):
Lo
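For reference, cross-user sharing of this sort is done with s3cmd ACL grants
along these lines (just a sketch, not the exact commands from the logs; the
config file names, bucket names and permission levels are only examples):

$ s3cmd -c .s3cfg-cephdb setacl s3://shared-bucket --acl-grant=read:sonhaiha
$ s3cmd -c .s3cfg-ldap setacl s3://ldap-bucket --acl-grant=read:ceph-dashboard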
>> On Sun, Oct 14, 2018 at 8:21 PM wrote:
>> How many CephFS mounts access the file? Is it possible that some
>> program opens that file in RW mode (even if they just read the file)?
>
>
> The nature of the program is that it is "prepped" by one set of commands
> and queried by another, thus the
On 10/15/18 12:02 PM, jes...@krogh.cc wrote:
>>> On Sun, Oct 14, 2018 at 8:21 PM wrote:
>>> How many CephFS mounts access the file? Is it possible that some
>>> program opens that file in RW mode (even if they just read the file)?
>>
>>
>> The nature of the program is that it is "prepped" by one
On 10/15/18 12:41 PM, Dietmar Rieder wrote:
> On 10/15/18 12:02 PM, jes...@krogh.cc wrote:
>>>> On Sun, Oct 14, 2018 at 8:21 PM wrote:
>>>> How many CephFS mounts access the file? Is it possible that some
>>>> program opens that file in RW mode (even if they just read the file)?
>>>
>>>
>>> The
Does a man page exist for ceph-objectstore-tool? If yes, where can I find it?
Thx
Hi all,
We've got some servers with some small SSDs but no hard disks other than the
system disks. While they're not suitable for OSDs, will the SSDs be useful for
running MON/MGR/MDS?
Thanks a lot.
Regards,
/st wong
> On 10/15/18 12:41 PM, Dietmar Rieder wrote:
>> No big difference here.
>> all CentOS 7.5 official kernel 3.10.0-862.11.6.el7.x86_64
>
> ...forgot to mention: all is luminous ceph-12.2.7
Thanks for your time in testing; this is very valuable to me in the
debugging. Two questions:
Did you "sleep 9
Hello,
I am currently running Luminous 12.2.8 on Ubuntu with the 4.15.0-36-generic kernel
from the official Ubuntu repo. The cluster has 4 mon + osd servers. Each OSD
server has a total of 9 spinning OSDs and 1 SSD for the hdd and ssd pools.
The HDDs are backed by the S3710 SSDs for journaling w
On 10/15/18 1:17 PM, jes...@krogh.cc wrote:
>> On 10/15/18 12:41 PM, Dietmar Rieder wrote:
>>> No big difference here.
>>> all CentOS 7.5 official kernel 3.10.0-862.11.6.el7.x86_64
>>
>> ...forgot to mention: all is luminous ceph-12.2.7
>
> Thanks for your time in testing, this is very valuable t
Hi,
On 15/10/18 11:44, Vincent Godin wrote:
> Does a man page exist for ceph-objectstore-tool? If yes, where can I find it?
No, but there is some --help output:
root@sto-1-1:~# ceph-objectstore-tool --help
Allowed options:
--help produce help message
--type arg
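For what it's worth, typical invocations look something like this (a sketch
only; the OSD has to be stopped first, and the OSD id/path are examples):

systemctl stop ceph-osd@0
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list | head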
Hi Andrei,
we have been using the script from [1] to define the number of PGs to
deep-scrub in parallel. We currently use MAXSCRUBS=4; you could start
with 1 to minimize the performance impact.
And these are the scrub settings from our ceph.conf:
ceph:~ # grep scrub /etc/ceph/ceph.conf
osd_sc
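In general the knobs of interest are of this kind (option names as in
upstream Luminous; the values here are only illustrative, not necessarily
ours):

osd_max_scrubs = 1
osd_scrub_sleep = 0.1
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 6
osd_scrub_load_threshold = 0.5
osd_deep_scrub_interval = 2419200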
John,
Thanks for your reply. I am glad you clarified the docs URL mystery for me
as that has confused me many times.
About the Dashboard: Does that mean that, with Mimic 13.2.2, the only
dashboard user management command that works is to create a user? In other
words, no way to check the user l
On Mon, Oct 15, 2018 at 1:47 PM Hayashida, Mami wrote:
>
> John,
>
> Thanks for your reply. I am glad you clarified the docs URL mystery for me
> as that has confused me many times.
>
> About the Dashboard: Does that mean that, with Mimic 13.2.2, the only
> dashboard user management command tha
Mgr and MDS do not use physical space on a disk. Mons do use the disk and
benefit from SSDs, but they write a lot of stuff all the time. Depending on
why the SSDs aren't suitable for OSDs, they might not be suitable for mons
either.
On Mon, Oct 15, 2018, 7:16 AM ST Wong (ITSC) wrote:
> Hi all,
>
>
Ah, ok. Thanks!
On Mon, Oct 15, 2018 at 8:52 AM, John Spray wrote:
> On Mon, Oct 15, 2018 at 1:47 PM Hayashida, Mami
> wrote:
> >
> > John,
> >
> > Thanks for your reply. I am glad you clarified the docs URL mystery for
> me as that has confused me many times.
> >
> > About the Dashboard: Doe
Perhaps this is the same issue as indicated here:
https://tracker.ceph.com/issues/36364
Can you check OSD iostat reports for similarities to this ticket, please?
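Something like the following on an OSD node should be enough to collect that
(device names are just examples); compare the %util and await columns with
what the ticket describes:

iostat -xmt 5 /dev/sdb /dev/sdc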
Thanks,
Igor
On 10/15/2018 2:26 PM, Andrei Mikhailovsky wrote:
Hello,
I am currently running Luminous 12.2.8 on Ubuntu with
4.15
Hi,
Versions 12.2.7 and 12.2.8. I've set up a bucket with versioning enabled and
uploaded a lifecycle configuration. I upload some files and delete them,
inserting delete markers. The configured lifecycle DOES remove the deleted
binaries (non-current versions). The lifecycle DOES NOT remove
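For context, the kind of rule I mean looks roughly like this (standard S3
lifecycle schema; the bucket name is a placeholder, and whether 12.2.x RGW
honors every element here is exactly what I'm unsure about):

cat > lifecycle.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>expire-noncurrent</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>1</NoncurrentDays>
    </NoncurrentVersionExpiration>
    <Expiration>
      <ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
EOF
s3cmd setlifecycle lifecycle.xml s3://my-versioned-bucket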
Hello,
I successfully deployed a Ceph cluster with 16 OSDs and created CephFS before.
But after rebooting due to an mds slow request problem, when creating CephFS,
the Ceph MDS stays in 'creating' status and never changes.
Looking at the Ceph status, I don't see any other problem. Here is the
'ceph -s' result:
csl@hpc1
Hi folks,
Just wanted to announce that, with the help of Kefu, I was able to create a
working tap for the Ceph client libraries and binaries for the OSX platform.
Currently, we only test the tap on High Sierra and Mojave.
This was mostly built so that people can use go-ceph on their Macs without a
VM, b
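Usage is the normal Homebrew flow; the tap and formula names below are
placeholders, the real ones are in the announcement/repo:

brew tap <github-user>/<tap-name>
brew install <formula>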
On Mon, Oct 15, 2018 at 3:34 PM Kisik Jeong wrote:
>
> Hello,
>
> I successfully deployed a Ceph cluster with 16 OSDs and created CephFS before.
> But after rebooting due to an mds slow request problem, when creating CephFS,
> the Ceph MDS stays in 'creating' status and never changes.
> Seeing Ceph status, ther
Yes, but there are a lot of undocumented options!
For example, when we tried to rebuild a mon store, we had to add the
option --no-mon-config (which is not in the help) because
ceph-objectstore-tool tried to join the monitors and never responded.
It would be nice if someone could produce a more com
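For the archives, the kind of invocation in question is roughly the
documented 'recovery using OSDs' shape (paths are examples; see the docs for
the full procedure):

ms=/tmp/mon-store
mkdir -p $ms
# with the OSDs stopped, pull the cluster map out of each OSD's store
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path $osd --no-mon-config \
        --op update-mon-db --mon-store-path $ms
done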
On Mon, Oct 15, 2018 at 4:24 PM Kisik Jeong wrote:
>
> Thank you for your reply, John.
>
> I restarted my Ceph cluster and captured the mds logs.
>
> I found that the MDS shows slow requests because some OSDs are laggy.
>
> I followed the ceph mds troubleshooting with 'mds slow request', but there is
On 10/11/2018 12:08 AM, Wido den Hollander wrote:
> Hi,
>
> On a Luminous cluster running a mix of 12.2.4, 12.2.5 and 12.2.8 I'm
> seeing OSDs writing heavily to their logfiles spitting out these lines:
>
>
> 2018-10-10 21:52:04.019037 7f90c2f0f700 0 stupidalloc 0x0x55828ae047d0
> dump 0x15
I had the same thing happen too when I built a ceph cluster on a single VM
for testing; I wasn't concerned, though, because I knew the slow speed was
likely the problem.
On Mon, Oct 15, 2018 at 7:34 AM Kisik Jeong
wrote:
> Hello,
>
> I successfully deployed Ceph cluster with 16 OSDs and created Cep
I think the answer is yes. I'm pretty sure only the OSDs require very
long-life, enterprise-grade SSDs.
On Mon, Oct 15, 2018 at 4:16 AM ST Wong (ITSC) wrote:
> Hi all,
>
>
>
> We’ve got some servers with some small size SSD but no hard disks other
> than system disks. While they’re not suitable
On 10/15/2018 07:50 PM, solarflow99 wrote:
> I think the answer is yes. I'm pretty sure only the OSDs require very
> long-life, enterprise-grade SSDs
>
Yes and no. Please use reliable datacenter-grade SSDs for your MON
databases.
Something like 200GB is more than enough in your MON servers.
I attached the osd & fs dumps. There are clearly two pools (cephfs_data,
cephfs_metadata) for CephFS. And this system's network is 40Gbps
Ethernet for both public & cluster, so I don't think network speed would be
the problem. Thank you.
On Tue, Oct 16, 2018 at 1:18 AM, John Spray wrote:
> On Mon, Oct 15, 2018
I don't know anything about the BlueStore code, but given the snippets
you've posted, this appears to be a debug thing that isn't expected to be
invoked (or perhaps only in an unexpected case that it's trying hard to
recover from). Have you checked where the dump() function is invoked from?
I'd imag
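A quick way to check that against a Luminous tree (paths assume the usual
source layout):

git clone --branch luminous https://github.com/ceph/ceph.git
grep -rn "dump()" ceph/src/os/bluestore/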
This is really cool! I can mostly parse what the tap is doing, and it's
good to see somebody managed to programmatically define the build
dependencies since that's always been an issue for people on OS X.
On Mon, Oct 15, 2018 at 7:43 AM Christopher Blum
wrote:
> Hi folks,
>
> Just wanted to anno
On Tue, Oct 9, 2018 at 10:57 PM Dylan McCulloch wrote:
> Hi Greg,
>
> Nowhere in your test procedure do you mention syncing or flushing the
> files to disk. That is almost certainly the cause of the slowness
>
> We have tested performing sync after file creation and the delay still
> occurs. (See
On Thu, Oct 11, 2018 at 3:22 PM Graham Allan wrote:
> As the osd crash implies, setting "nobackfill" appears to let all the
> osds keep running and the pg stays active and can apparently serve data.
>
> If I track down the object referenced below in the object store, I can
> download it without e
On 10/15/2018 08:23 PM, Gregory Farnum wrote:
> I don't know anything about the BlueStore code, but given the snippets
> you've posted this appears to be a debug thing that doesn't expect to be
> invoked (or perhaps only in an unexpected case that it's trying hard to
> recover from). Have you che
We turned on all the RBD v2 features while running Jewel; since then, all
clusters have been updated to Luminous 12.2.2, and additional clusters have
been added that have never run Jewel.
Today I find that a few percent of the volumes in each cluster have issues;
examples below.
I'm concerned that these issu
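For reference, this is the kind of per-volume state I'm looking at
(pool/image names are placeholders):

rbd info <pool>/<volume> | grep -E 'features|flags'
# the interesting bits are the "features:" and "flags:" lines per image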
Hi Wido,
once you apply the PR you'll probably see the initial error in the log
that triggers the dump, which is most probably the lack of space
reported by the _balance_bluefs_freespace() function. If so, this means that
BlueFS rebalance is unable to allocate a contiguous 1M chunk at the main
device to
Hi,
On 10/15/2018 10:43 PM, Igor Fedotov wrote:
> Hi Wido,
>
> once you apply the PR you'll probably see the initial error in the log
> that triggers the dump. Which is most probably the lack of space
> reported by _balance_bluefs_freespace() function. If so this means that
> BlueFS rebalance is
On 10/15/2018 11:47 PM, Wido den Hollander wrote:
> Hi,
> On 10/15/2018 10:43 PM, Igor Fedotov wrote:
>> Hi Wido,
>> once you apply the PR you'll probably see the initial error in the log
>> that triggers the dump. Which is most probably the lack of space
>> reported by _balance_bluefs_freespace() function.
Thanks.
Shall I mount /var/lib/ceph/mon on the SSD device, or update the "mon
data" setting? It seems changing the default data location is not recommended.
We're going to try ceph-ansible. Shall we first set up the cluster and then
move /var/lib/ceph/mon to SSD devices on all MONs? Thanks.
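To be concrete about what I have in mind (a sketch only, assuming a dedicated
SSD partition such as /dev/sdb1 on each MON host):

mkfs.xfs /dev/sdb1
mkdir -p /var/lib/ceph/mon
mount /dev/sdb1 /var/lib/ceph/mon
echo '/dev/sdb1 /var/lib/ceph/mon xfs defaults,noatime 0 2' >> /etc/fstab
# then let ceph-ansible create the mon under the default "mon data" path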
/st wong
On 10/16/2018 12:04 AM, Igor Fedotov wrote:
>
> On 10/15/2018 11:47 PM, Wido den Hollander wrote:
>> Hi,
>>
>> On 10/15/2018 10:43 PM, Igor Fedotov wrote:
>>> Hi Wido,
>>>
>>> once you apply the PR you'll probably see the initial error in the log
>>> that triggers the dump. Which is most probabl