Hi cephers,
Could someone tell me the right steps for bringing an OSD server back to life?
The data and journal disks seem to be OK, but the dual SD slot for the OS has
failed.
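For context, the sequence I had in mind is roughly the following (a sketch only,
assuming the OSDs were deployed with ceph-disk and their data/journal partitions
are intact; hostnames and paths are placeholders) -- corrections welcome:

# reinstall the OS plus the same Ceph release, then restore the cluster config and keys
$ scp a-working-mon:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
$ scp a-working-mon:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# let ceph-disk re-detect and mount the untouched OSD partitions and start the daemons
$ ceph-disk activate-all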
Regards, I
--
Iban Cabrillo Bartolome
Instituto de F
Hi cephers,
We only use krbd with Ceph, and it has worked well for nearly two years, but now I
face a capacity problem.
I have 7 nodes with 10 3TB OSDs each. We are running Ceph 0.67.9 on
Ubuntu 12.04. I know it is too old, but upgrading is beyond my control.
We are now at 80% usage, so we have started to delete historic unneeded
data, b
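In case it matters: if the deletion happens inside a filesystem that sits on an RBD
image, my understanding is that the space only comes back to the cluster once the
client issues discards, and krbd only honours fstrim/discard on reasonably recent
kernels. These are the checks I am running (the mount point is a placeholder):

$ ceph df          # cluster-wide and per-pool usage
$ rados df         # per-pool object and space counts
$ fstrim -v /mnt/rbd-volume   # only reclaims space if the kernel rbd driver supports discard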
Hi
I am relieved to have found a solution to this problem.
The Ansible script for generating the key did not pass the key to the following
command line and therefore sent an empty string to this script (see
monitor_secret).
ceph-authtool /var/lib/ceph/tmp/keyring.mon.{{ monitor_name }} --create-k
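For comparison, the complete invocation that ends up being run looks roughly like
the lines below (a sketch; "mon1" stands in for {{ monitor_name }} and the
capability line follows the standard monitor bootstrap docs):

$ ceph-authtool --gen-print-key    # generates the secret that should end up in monitor_secret
$ ceph-authtool /var/lib/ceph/tmp/keyring.mon.mon1 --create-keyring \
      --name=mon. --add-key=<generated-secret> --cap mon 'allow *'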
Hi Christian,
Thanks for your reply; our performance requirements are as below.
It would be very helpful if you could provide details for the scenario below.
As of now we have 6 TB of data in use; in the future it will be 10 TB.
Per Ceph client read and write:
Read: 57726 kB/s
Write:
I'm a lurker here and don't know much about ceph, but:
If fdatasync hardly makes a difference, then either it's not being honoured
(which would be a major problem), or there's something else that is a
bottleneck in your test (more likely).
It's not uncommon for a poor choice of block size (bs) to
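For example, with dd (a sketch; the target path, block size and count are
placeholders), comparing a run that only fills the page cache with one that flushes
to stable storage makes it obvious whether the sync is actually being honoured:

# buffered only -- this mostly measures the page cache
$ dd if=/dev/zero of=/mnt/test/ddfile bs=4M count=256
# force the data to stable storage before dd reports a throughput figure
$ dd if=/dev/zero of=/mnt/test/ddfile bs=4M count=256 conv=fdatasync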
Hi,
I am contemplating using a NVRAM card for OSD journals in place of SSD drives
in our ceph cluster.
Configuration:
* 4 Ceph servers
* Each server has 24 OSDs (each OSD is a 1TB SAS drive)
* 1 PCIe NVRAM card of 16GB capacity per ceph server
* Both Client &
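For sizing context, a 16GB card shared by 24 OSDs leaves roughly 680MB per journal,
so I would expect ceph.conf entries along these lines (a sketch; the partition-label
naming is my own placeholder, not something Ceph imposes):

[osd]
# 16GB split across 24 OSDs is ~680MB; leave a little headroom
osd journal size = 650
# point each OSD's journal at its own partition on the NVRAM card
osd journal = /dev/disk/by-partlabel/journal-$id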
Hi All,
I saw the previous thread on this related to
http://tracker.ceph.com/issues/15597
and Yehuda's fix script
https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
Running this seems to have landed me in a weird state.
I can create and get new buckets and objects
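For anyone comparing notes, this is how I have been inspecting the resulting
zone/zonegroup state (Jewel-era commands; the names below are the defaults and may
differ in your setup):

$ radosgw-admin zonegroup get --rgw-zonegroup=default
$ radosgw-admin zone get --rgw-zone=default
$ radosgw-admin period get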
On Fri, May 20, 2016 at 9:03 AM, Jonathan D. Proulx wrote:
> Hi All,
>
> I saw the previous thread on this related to
> http://tracker.ceph.com/issues/15597
>
> and Yehuda's fix script
> https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
>
> Running this seems to hav
On Fri, May 20, 2016 at 09:21:58AM -0700, Yehuda Sadeh-Weinraub wrote:
:On Fri, May 20, 2016 at 9:03 AM, Jonathan D. Proulx wrote:
:> Hi All,
:>
:> I saw the previous thread on this related to
:> http://tracker.ceph.com/issues/15597
:>
:> and Yehuda's fix script
:>
https://raw.githubusercontent.c
[ too much to quote ]
Dense nodes often work better for object-focused workloads than block-focused ones;
with block, the impact of delayed operations is not merely reduced speed but a
tenant VM crashing.
Re RAID5 volumes to decrease the number of OSDs: this sort of approach is
getting increasing attention in that it br
> Ceph will not acknowledge a client write before all journals (replica
> size, 3 by default) have received the data, so losing one journal SSD
> will NEVER result in an actual data loss.
Some say that all replicas must be written; others say that only min_size, 2 by
default, must be written be
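Either way, the two values being argued about are per-pool settings and easy to
check ("rbd" below is just an example pool name):

$ ceph osd pool get rbd size
$ ceph osd pool get rbd min_size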
Hi All,
I am a newbie here. In the latest Ceph release (Jewel), I am not able to
remove the filesystem; I am getting error messages.
Is this the right way to destroy the cephfs?
$ ceph fs rm cephfs1 --yes-i-really-mean-it
Error EINVAL: all MDS daemons must be inactive before removing filesystem
Thanks,
Ra
Hi Ravi,
well, isn't the answer already written in there?
All MDS daemons need to be off.
Please run ceph -s :)
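i.e. something along these lines (a sketch; the MDS id and filesystem name are
placeholders, and how you stop the daemon depends on whether it runs under
systemd or upstart):

$ ceph -s                                    # check which MDS daemons are still active
$ systemctl stop ceph-mds@<id>               # stop every running MDS daemon
$ ceph mds fail 0                            # mark the now-stopped rank as failed
$ ceph fs rm cephfs1 --yes-i-really-mean-it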
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Anschrift:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93
All,
I have found an issue with ceph OSDs that are on a SAN and Multipathed. It may
not matter that they are multipathed, but that is how our setup is where I
found the issue.
Our setup has an InfiniBand network which uses SRP to present block devices
on a DDN.
Every LUN can be seen by every
So, which is correct, all replicas must be written or only min_size before ack?
But for me the takeaway is that writes are protected - even if the journal
drive crashes, I am covered.
- epk
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
A
You should be protected against single component failures, yes, that's the
point of journals.
It's important to ensure that the on-disk volatile cache -- these days in the
8-128MB range -- remains turned off; otherwise it usually presents an
opportunity for data loss, especially when power drops. Di
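On Linux that usually means something like the following for SATA drives (the
device name is a placeholder; SAS drives need sdparm or vendor tools instead):

$ hdparm -W 0 /dev/sdX        # turn off the drive's volatile write cache
$ hdparm -W /dev/sdX          # verify: should report write-caching = 0 (off)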
Hello,
On Fri, 20 May 2016 21:47:43 + Andrus, Brian Contractor wrote:
> All,
> I have found an issue with ceph OSDs that are on a SAN and Multipathed.
> It may not matter that they are multipathed, but that is how our setup
> is where I found the issue.
>
Your problem/issue is that Ceph is
On Fri, 20 May 2016 14:14:45 -0700 Ravi Nasani wrote:
> Hi All,
>
> I am a newbie here. In the latest Ceph release (Jewel), I am not able to
> remove the filesystem; I am getting error messages.
>
>
> Is this the right way to destroy the cephfs?
>
> $ ceph fs rm cephfs1 --yes-i-really-mean-it
> Error EINV
On Fri, 20 May 2016 17:00:03 +0800 lin zhou wrote:
> Hi cephers,
> We only use krbd with Ceph, and it has worked well for nearly two years, but now I
> face a capacity problem.
>
> I have 7 nodes with 10 3TB OSDs each. We are running Ceph 0.67.9 on
> Ubuntu 12.04. I know it is too old, but upgrading is beyond my control.
>
> an
On Thu, 19 May 2016 10:26:37 -0400 Benjeman Meekhof wrote:
> Hi Christian,
>
> Thanks for your insights. To answer your question the NVMe devices
> appear to be some variety of Samsung:
>
> Model: Dell Express Flash NVMe 400GB
> Manufacturer: SAMSUNG
> Product ID: a820
>
Alright, these appear