Thanks!
I completely missed that; adding name='client.something' did the trick.
/andreas
On 22 December 2017 at 02:22, David Turner wrote:
> You aren't specifying your cluster user, only the keyring. So the
> connection command is still trying to use the default client.admin instead
> of client
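For reference, the user can be passed explicitly alongside the keyring, e.g.
(just a sketch; "client.something" and the keyring path are placeholders):

    ceph --name client.something \
         --keyring /etc/ceph/ceph.client.something.keyring status
    rbd --id something \
        --keyring /etc/ceph/ceph.client.something.keyring ls

Without --name/--id the tools fall back to client.admin, which is why pointing
at the keyring alone wasn't enough.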
Hi,
I'm looking at OCP [0] servers for Ceph, but I haven't yet been able to find
what I'm looking for.
First of all, the geek in me loves OCP and the design :-) Now I'm trying
to match it with Ceph.
Looking at wiwynn [1] they offer a few OCP servers:
- 3 nodes in 2U with a single 3.5" disk [2]
- 2
Hi David,
I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working
now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very
unhappy with that). Which Samba version & Linux distribution are you using?
Are you using quotas on subdirectories and are they applied when you
ex
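(The quotas I mean are the CephFS directory quotas set via extended
attributes; a sketch, with the path and size just examples:

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/share  # 100 GiB
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/share

They are enforced on the client side, hence my question whether they still
apply when the directory is exported through Samba.)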
On Fri, Dec 22, 2017 at 3:20 AM, Yan, Zheng wrote:
> idle client shouldn't hold so many caps.
>
I'll try to make it reproducible for you to test.
> yes. For now, it's better to run "echo 3 >/proc/sys/vm/drop_caches"
> after cronjob finishes
Thanks. I'll adopt that for now.
Regards,
Webert L
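P.S. in case it helps anyone else, I'm simply chaining the cache drop onto the
job for now; a sketch (the script path is a placeholder):

    # root's crontab on the client
    30 2 * * * /usr/local/bin/nightly-job.sh && echo 3 > /proc/sys/vm/drop_caches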
It depends on how you use it. For me, it runs fine on the OSD hosts, but the
MDS server consumes loads of RAM, so be aware of that.
If the system load average goes too high due to OSD disk utilization, the
MDS server might run into trouble too, as a delayed response from the host
could cause the MDS t
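If you do co-locate them, it can be worth capping the MDS cache explicitly; a
sketch for Luminous (the 4 GB value is only an example, size it to the host):

    [mds]
        mds cache memory limit = 4294967296

Older releases only had the inode-count based "mds cache size" instead.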
On Thu, Dec 21, 2017 at 12:52 PM, shadow_lin wrote:
>
> After 18:00 the write throughput suddenly dropped and the osd latency
> increased. TCMalloc started reclaiming the page heap freelist much more
> frequently. All of this happened very fast and every osd had the identical
> pattern.
>
Could that be c
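One thing that may be worth ruling out is the tcmalloc thread cache size the
OSDs are started with; on Debian/Ubuntu it can be set in /etc/default/ceph (on
RPM systems /etc/sysconfig/ceph). A sketch, 128 MB shown, the OSDs need a
restart to pick it up:

    TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728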
On Fri, Dec 22, 2017 at 3:23 PM, nigel davies wrote:
> Right, ok, I'll take a look. Can you do that after the pool / cephfs has been set
> up
>
yes, see http://docs.ceph.com/docs/jewel/rados/operations/pools/
>
> On 21 Dec 2017 12:25 pm, "Yan, Zheng" wrote:
>>
>> On Thu, Dec 21, 2017 at 6:18 PM, nig
Hi Wido,
We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
couple of years.
The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
enclosures.
Other than that, I have nothing particularly interesting to say about
these. Our data centre procurement team have al
On Fri, 22 Dec 2017 12:10:18 +0100, Felix Stolte wrote:
> I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working
> now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very
> unhappy with that).
The ceph:user_id smb.conf functionality was first shipped with
Samba 4.7.
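With 4.7+ the share can then look roughly like this (a sketch; the share name,
path and user id are placeholders):

    [cephfs-share]
        path = /shares/projects
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no

with a matching /etc/ceph/ceph.client.samba.keyring readable by the smbd
process, so the admin keyring no longer needs to be copied around.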
Hi Wido,
what are you trying to optimise? Space? Power? Are you tied to OCP?
I remember Ciara had some interesting designs like this
http://www.ciaratech.com/product.php?id_prod=539&lang=en&id_cat1=1&id_cat2=67
though I don't believe they are OCP.
I also had a look and supermicro has a few that
Quoting Stefan Kooman (ste...@bit.nl):
> Quoting Dan van der Ster (d...@vanderster.com):
> > Hi,
> >
> > We've used double the defaults for around 6 months now and haven't had any
> > behind on trimming errors in that time.
> >
> >mds log max segments = 60
> >mds log max expiring = 40
> >
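For reference, those values go into the [mds] section of ceph.conf, e.g.:

    [mds]
        mds log max segments = 60
        mds log max expiring = 40

They can also be changed at runtime with
"ceph tell mds.<name> injectargs '--mds_log_max_segments 60 --mds_log_max_expiring 40'"
(the MDS name is a placeholder).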
On 12/22/2017 03:27 PM, Luis Periquito wrote:
> Hi Wido,
> what are you trying to optimise? Space? Power? Are you tied to OCP?
A lot of things. I'm not tied to OCP, but OCP has a lot of advantages
over regular 19" servers, and thus I'm investigating Ceph+OCP:
- Less power loss due to only one
On 12/22/2017 02:40 PM, Dan van der Ster wrote:
> Hi Wido,
> We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
> couple of years.
> The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
> enclosures.
Yes, I see. I was looking for a solution without a JBOD and abo
I followed the exact steps on the following page:
http://ceph.com/rgw/new-luminous-rgw-metadata-search/
The "us-east-1" zone is served by host "ceph-rgw1" on port 8000; no issues,
the service runs successfully.
The "us-east-es" zone is served by host "ceph-rgw2" on port 8002; the service
was unable t
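For context, the zone setup from that post is roughly (a sketch; the
elasticsearch endpoint is a placeholder):

    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-es \
        --endpoints=http://ceph-rgw2:8002
    radosgw-admin zone modify --rgw-zone=us-east-es --tier-type=elasticsearch \
        --tier-config=endpoint=http://elastic-host:9200,num_shards=10,num_replicas=1
    radosgw-admin period update --commit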
On 12/20/2017 03:21 PM, Steven Vacaroaia wrote:
> Hi,
>
> I apologize for creating a new thread (I already mentioned my issue in
> another one), but I am hoping someone will be able to
> provide clarification / instructions
> provide clarification / instructions
>
> Looks like the patch for including qfull_time is missing from ker
Thank you!
Karun Josy
On Thu, Dec 21, 2017 at 3:51 PM, Konstantin Shalygin wrote:
> Is this the correct way to remove OSDs, or am I doing something wrong?
>>
> The generic way for maintenance (e.g. disk replacement) is to rebalance by
> changing the osd weight:
>
>
> ceph osd crush reweight osdid 0
>
> cluste
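For the archives, a typical drain-then-remove sequence looks roughly like this
(a sketch; osd.12 is a placeholder, and I'd wait for HEALTH_OK after the
reweight):

    ceph osd crush reweight osd.12 0     # drain, wait for rebalance to finish
    ceph osd out 12
    systemctl stop ceph-osd@12           # on the OSD host
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

On Luminous the last three steps can be replaced by
"ceph osd purge 12 --yes-i-really-mean-it".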
Hello,
I am unable to delete this abandoned image. rbd info shows a watcher IP.
The image is not mapped.
The image has no snapshots.
rbd status cvm/image --id clientuser
Watchers:
watcher=10.255.0.17:0/3495340192 client.390908
cookie=18446462598732841114
How can I evict or black list a watcher cl
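What I'm considering is something along these lines (a sketch; the address is
the one reported by rbd status above):

    ceph osd blacklist add 10.255.0.17:0/3495340192
    rbd rm cvm/image --id clientuser
    ceph osd blacklist rm 10.255.0.17:0/3495340192

but I'd like to confirm this is the right approach before blacklisting a
client.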
I've been looking around the web and I can't find what seems to be a "clean
way" to remove an OSD host from the "ceph osd tree" command output. I am
therefore hesitant to add a server with the same name, but I still see the
removed/failed nodes in the list. Does anyone know how to do that? I found an
art
The hosts got put there because OSDs started for the first time on a server
with that name. If you name the new servers identically to the failed ones,
the new OSDs will just place themselves under the host in the CRUSH map and
everything will be fine. There shouldn't be any problems with that base
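That said, if you'd rather clean the stale entries out first, an empty host
bucket can normally be dropped from the CRUSH map with something like this
(a sketch; "oldhost" is a placeholder, and it only works once no OSDs are left
under it):

    ceph osd crush rm oldhost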
It's already in qemu 2.9
http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d
"
This patch introduces 2 new cmdline parameters. The -m parameter to specify
the number of coroutines running in parallel (defaults to 8). And the -W
parameter to
allow qemu-img to w
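An invocation could then look roughly like this (a sketch; image names and the
coroutine count are placeholders, and it assumes qemu-img was built with rbd
support):

    qemu-img convert -p -m 16 -W -O raw source.qcow2 rbd:rbd/destimage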