Without read-ahead set, sequential IO performance will be lower than random. This is
because the Ceph IO path is serialized on a PG, and in the sequential case there is more
chance of hitting the same PG with consecutive IOs.
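(As an illustration only: on a kernel RBD client, assuming the device shows up as rbd0,
block-layer readahead can be raised with something like
echo 4096 > /sys/block/rbd0/queue/read_ahead_kb
while for librbd-based fio tests the equivalent is keeping rbd readahead enabled via
"rbd readahead disable after bytes = 0" in the client section of ceph.conf.)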
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
Thanks. What I am talking about is IOPS, which also reflects the bandwidth in fio
testing. I did not set the readahead parameter; I will try it. One more question
here is, for a 4k random write or read IO pattern, compared with using
a single bigger rbd image, will performance (IOPS) be improved if I
distribu
Hi,
we're experiencing some strange issues running ceph 0.87 in our, I
think, quite large cluster (taking the number of objects as a measure).
mdsmap e721086: 1/1/1 up {0=storagemds01=up:active}, 2 up:standby
osdmap e143048: 92 osds: 92 up, 92 in
flags noout,noscrub,nodeep-s
Hi Andrei,
Which proxy server solved your problem?
On Tue, Feb 2, 2016 at 6:41 PM, Andrei Mikhailovsky
wrote:
> Hi Eric,
>
> I remember having a very similar issue when I was setting up radosgw. It
> turned out to be an issue on the proxy server side and not with radosgw.
> After trying a differen
Yes, it is monitor;
sorry for the spelling mistake.
-- Original --
From: "Shinobu Kinjo";;
Date: Feb 3, 2016
To: "yang";
Cc: "ceph-users";
Subject: Re: [ceph-users] how to monit ceph bandwidth?
monit?
probably monitor??
Rgds,
Shinobu
- Original
monit?
probably monitor??
Rgds,
Shinobu
- Original Message -
From: "yang"
To: "ceph-users"
Sent: Wednesday, February 3, 2016 2:23:19 PM
Subject: [ceph-users] how to monit ceph bandwidth?
Hello everyone,
I have a ceph cluster (v0.94.5) with CephFS. There are several clients in the
clus
Hello everyone,
I have a ceph cluster (v0.94.5) with CephFS. There are several clients in the
cluster,
and every client uses its own directory in CephFS with ceph-fuse.
I want to monitor the IO bandwidth of the cluster and of each client.
Read/write bandwidth and op/s can be obtained from `ceph -s`,
but I do not kn
Thanks all for the help! I managed to mount the storage again. I'll write
it down below.
For the stuck sessions, I tried the method here to evict them:
http://docs.ceph.com/docs/master/cephfs/eviction/
However, the evict command got stuck on one of the ceph-fuse sessions. I had to
restart the mds to g
On Tue, Feb 2, 2016 at 1:10 PM, Zhao Xu wrote:
> I had no luck with ceph-fuse
>
> [root@igc-head ~]# ceph-fuse -d -m igc-head,is1,i1,i2,i3:6789 /mnt/igcfs/
> 2016-02-03 04:55:08.756420 7fe3f7437780 0 ceph version 0.94.5
> (9764da52395923e0b32908d83a9f7304401fee43), process ceph-fuse, pid 5822
You can try
ceph daemon mds.host session evict
to kill it off.
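For example, taking one of the session ids shown by session ls elsewhere in this
thread (the exact command and argument syntax can vary between releases;
ceph daemon mds.<name> help lists what your version supports):
ceph daemon mds.igc-head session ls
ceph daemon mds.igc-head session evict 274143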
-- Original --
From: "Zhao Xu";;
Date: Feb 3, 2016
To: "Goncalo Borges";
Cc: "ceph-users@lists.ceph.com";
Subject: Re: [ceph-users] Urgent help needed for ceph storage "mount error 5 = Input/ou
Hi X
Like you, I am just a site admin, so you should carefully evaluate the following
suggestions I give. Obviously, the safest thing is to wait for the CephFS
supporters (John, Greg or Yan Zheng). :-)
If you look at the state of the connections, you have some in 'opening', others in
'killing' and othe
I see a lot of sessions. How can I clear these sessions? Since I've already
rebooted the cluster, why are these sessions still there?
[root@igc-head ~]# ceph daemon mds.igc-head session ls
[
{
"id": 274143,
"num_leases": 0,
"num_caps": 0,
"state": "closing",
Hi X
Have you tried to inspect the mds for problematic sessions still connected from
those clients?
To check which sessions are still connected to the mds, run (in ceph 9.2.0; the
command might be different or might not even exist in older versions)
ceph daemon mds.<id> session ls
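Concretely, with the mds name appearing in this thread, that would be for instance
ceph daemon mds.igc-head session ls
run on the machine where that mds daemon lives, since ceph daemon goes through the
local admin socket.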
Cheers
G.
I had no luck with ceph-fuse:
[root@igc-head ~]# ceph-fuse -d -m igc-head,is1,i1,i2,i3:6789 /mnt/igcfs/
2016-02-03 04:55:08.756420 7fe3f7437780 0 ceph version 0.94.5
(9764da52395923e0b32908d83a9f7304401fee43), process ceph-fuse, pid 5822
ceph-fuse[5822]: starting ceph client
2016-02-03 04:55:08
Try mounting with ceph-fuse. It worked for me when I faced the same
sort of issues you are now dealing with.
-Mykola
On Tue, Feb 2, 2016 at 8:42 PM, Zhao Xu wrote:
Thank you Mykola. The issue is that I/we have strongly suggested adding
OSDs many times, but we are not the decision makers.
For
Thank you Mykola. The issue is that I/we have strongly suggested adding OSDs
many times, but we are not the decision makers.
For now, I just want to mount the ceph storage again, even in read-only mode,
so that they can read the data. Any idea how to achieve this?
Thanks,
X
On Tue, Feb 2, 2016 at 9
I would strongly(!) suggest you add a few more OSDs to the cluster before
things get worse / corrupted.
-Mykola
On Tue, Feb 2, 2016 at 6:45 PM, Zhao Xu wrote:
Hi All,
Recently our ceph storage has been running at low performance. Today, we
cannot write to the folder. We tried to unmount the ceph
Hi All,
Recently our ceph storage has been running at low performance. Today, we
cannot write to the folder. We tried to unmount the ceph storage and then
re-mount it; however, we cannot even mount it now:
# mount -v -t ceph igc-head,is1,i1,i2,i3:6789:/ /mnt/igcfs/ -o
name=admin,secretfile=/etc/admi
Thanks for the follow-up.
Let's start with the normal erasure coding plugin. Here we have k data nodes
and m parity nodes.
For RDP, RAID-DP or MSR/MBR kinds of erasure codes the codeword is a
two-dimensional array.
For example, in the case of the RDP erasure code the codeword is a k x (k+m)
array. The codeword has k ro
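(A concrete sketch of my own, just to make the layout explicit: with the RDP prime
p = 5 you get k = p-1 = 4 data columns and m = 2 parity columns, i.e. a 4x6 codeword.
Columns 0-3 hold data, column 4 holds row parity (the XOR of the data cells in its
row), and column 5 holds diagonal parity, where diagonal d collects the cells (r, c)
with (r + c) mod p = d over the first p columns and the diagonal d = p-1 is simply
never stored. Double-failure recovery then alternates between the diagonal and row
equations.)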
On Mon, Feb 1, 2016 at 7:33 AM, Syed Hussain wrote:
> Hi,
>
> I've been working to develop a CEPH EC plugin for an array-type erasure code,
> for example
> RAID-DP (i.e. RDP). Later I realized that I can't continue with the (k+m) format
> in CEPH, as with
> the normal RS code, if I want to save disk IO (or Read
On Tue, Feb 2, 2016 at 3:42 PM, Jose M wrote:
> Hi,
>
>
> One simple question: the ceph docs say that to use Ceph as an HDFS
> replacement, I can use the CephFS Hadoop plugin
> (http://docs.ceph.com/docs/master/cephfs/hadoop/).
>
>
> What I would like to know is whether, instead of using the plugin, I
No, I have not had any issues with 4.3.x.
On Tue, Feb 2, 2016 at 3:28 PM, Yan, Zheng wrote:
On Tue, Feb 2, 2016 at 8:28 PM, Mykola Dvornik
wrote:
No, I've never seen this issue on the Fedora stock kernels.
So either my workflow is not triggering it on the Fedora software
stack or
the is
Hi,
One simple question: the ceph docs say that to use Ceph as an HDFS
replacement, I can use the CephFS Hadoop plugin
(http://docs.ceph.com/docs/master/cephfs/hadoop/).
What I would like to know is whether, instead of using the plugin, I can mount ceph in
fstab and then point the hdfs dirs (namenode
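(For reference only, and with placeholder monitor names and paths rather than
anything from this thread, a kernel-client fstab entry generally looks like
mon1,mon2,mon3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0
whether pointing the HDFS namenode/datanode dirs at such a mount works well is
exactly the question being asked here.)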
Hi Eric,
I remember having a very similar issue when I was setting up radosgw. It turned
out to be an issue on the proxy server side and not with radosgw. After trying
a different proxy server the problem was solved. Perhaps you have the same
issue.
Andrei
> From: "Eric Magutu"
> To: "
If testing with fio and librbd, you may also find that increasing the
thresholds for RBD readahead will help significantly. Specifically, set
"rbd readahead disable after bytes" to 0 so rbd readahead stays enabled.
In most cases with buffered reads on a real client volume, rbd
readahead isn't
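(A minimal sketch of the client-side settings, with an illustrative max-bytes value
rather than a recommendation:
[client]
rbd readahead disable after bytes = 0
rbd readahead max bytes = 4194304
)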
On Tue, Feb 2, 2016 at 8:28 PM, Mykola Dvornik wrote:
> No, I've never seen this issue on the Fedora stock kernels.
>
> So either my workflow is not triggering it on the Fedora software stack or
> the issue is CentOS / RHEL specific.
I mean: did you encounter this problem when using ceph-fuse o
Hi,
I am getting an error when I try to upload files with the + character. I
get a 403 error from the radosgw
< PUT
/sync-0815/Twenty/EPISODE%2075/Amina/picts/rooney/Wayne%2BRooney%2BManchester%2BUnited%2Bv%2BSwansea%2BCity%2BXowlsw0Gl0_x.jpg
HTTP/1.1
< Date: Fri, 22 Jan 2016 08:54:36 GMT
< Expec
Could you share the fio command and your read_ahead_kb setting for the OSD
devices? "performance is better" is a little too general. I understand
that we usually mean higher IOPS or higher aggregate throughput when we say
performance is better. However, application random read performance
"gene
No, I've never seen this issue on the Fedora stock kernels.
So either my workflow is not triggering it on the Fedora software stack
or the issue is CentOS / RHEL specific.
Anyway, I will file the ceph-fuse bug then.
On Tue, Feb 2, 2016 at 12:43 PM, Yan, Zheng wrote:
On Tue, Feb 2, 2016 at
Hi, I did some fio testing on my ceph cluster and found that random read
performance is better than sequential read. Is that the case in your setup as well?
Thanks.
On Tue, Feb 2, 2016 at 5:32 PM, Mykola Dvornik wrote:
> One of my clients is using
>
> 4.3.5-300.fc23.x86_64 (Fedora release 23)
did you encounter this problem on the client using the 4.3.5 kernel? If you
did, this issue should be a ceph-fuse bug.
>
> while all the other clients rely on
>
> 3.10.0-327.4.
Hello,
my situation is as follows: we want to fill an OSD with data here, and we want
to use this prepared OSD in the customer's system. So is there any solution
to reuse this OSD? What do we need for this? The same config and keyring files?
I tried this, but unfortunately it did not work for me.
Rados import/
One of my clients is using
4.3.5-300.fc23.x86_64 (Fedora release 23)
while all the other clients rely on
3.10.0-327.4.4.el7.x86_64 (CentOS Linux release 7.2.1511)
Should I file a bug report on the Red Hat bugzilla?
On Tue, Feb 2, 2016 at 8:57 AM, Yan, Zheng wrote:
On Tue, Feb 2, 2016 at 2:27