On Wed, Aug 12, 2015 at 5:08 AM, Bob Ababurko wrote:
> What is risky about enabling mds_bal_frag on a cluster with data and will
> there be any performance degradation if enabled?
No specific gotchas, just that it is not something that has especially
good coverage in our automated tests. We rece
Hi all,
we are running ceph version 0.94.2 with a cephfs mounted using ceph-fuse on
Ubuntu 14.04 LTS. I think we have found a bug that lets us semi-reprodicibly
crash the ceph-fuse process.
On the file system we have many files that contain non-ASCII characters in
various encodings (ISO8859, UTF
Jörg Henne writes:
> we are running ceph version 0.94.2 with a cephfs mounted using ceph-fuse on
> Ubuntu 14.04 LTS. I think we have found a bug that lets us semi-reprodicibly
> crash the ceph-fuse process.
Reported as http://tracker.ceph.com/issues/12674
___
Hi
Something that's been bugging me for a while is I am trying to diagnose iowait
time within KVM guests. Guests doing reads or writes tend do about 50% to 90%
iowait but the host itself is only doing about 1% to 2% iowait. So the result
is the guests are extremely slow.
I currently run 3x ho
Hi.
Read this thread here:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17360.html
С уважением, Фасихов Ирек Нургаязович
Моб.: +79229045757
2015-08-12 14:52 GMT+03:00 Pieter Koorts :
> Hi
>
> Something that's been bugging me for a while is I am trying to diagnose
> iowait time withi
I tried it, the error propagates to whichever OSD gets the errorred PG.
For the moment, this is my worst problem. I have one PG
incomplete+inactive, and the OSD with the highest priority in it gets
100 blocked requests (I guess that is the maximum), and, although
running, doesn't get other req
Hi,
I would like to hear from people who use cache tier in Ceph about best
practices and things I should avoid.
I remember hearing that it wasn't that stable back then. Has it changed in
Hammer release?
Any tips and tricks are much appreciated!
Thanks
Dominik
__
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Zalewski
> Sent: 12 August 2015 14:40
> To: ceph-us...@ceph.com
> Subject: [ceph-users] Cache tier best practices
>
> Hi,
>
> I would like to hear from people who use cache tier in Ce
Hi Irek,
Thanks for the link. I have removed the SSD's for now and performance is up to
30MB/s on a benchmark now. To be honest, I new the Samsung SSD weren't great
but did not expect them to be worse then just plain hard disks.
Pieter
On Aug 12, 2015, at 01:09 PM, Irek Fasikhov wrote:
Hi.
Здравствуйте!
On Wed, Aug 12, 2015 at 02:30:59PM +, pieter.koorts wrote:
> Hi Irek,
> Thanks for the link. I have removed the SSD's for now and performance is up
> to 30MB/s on a benchmark now. To be honest, I new the Samsung SSD weren't
> great but did not expect them to be worse then ju
Hi,
for mds there is the ability to rename snapshots. But for rbd i can't
see one.
Is there a way to rename a snapshot?
Greets,
Stefan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
4.0.6-300.fc22.x86_64
On Tue, Aug 11, 2015 at 10:24 PM, Yan, Zheng wrote:
> On Wed, Aug 12, 2015 at 5:33 AM, Hadi Montakhabi wrote:
>
>>
>> [sequential read]
>> readwrite=read
>> size=2g
>> directory=/mnt/mycephfs
>> ioengine=libaio
>> direct=1
>> blocksize=${BLOCKSIZE}
>> numjobs=1
>> iodep
Hi all, we have setup CEPH cluster with 60 OSD (2 diff types) (5 nodes, 12
disks on each, 10 HDD, 2 SSD)
Also we cover this with custom crushmap with 2 root leaf
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-100 5.0 root ssd
-102 1.0 host ix-s2-ssd
2 1.
I ran a ceph osd reweight-by-utilization yesterday and partway through
had a network interruption. After the network was restored the cluster
continued to rebalance but this morning the cluster has stopped
rebalance and status will not change from:
# ceph status
cluster af859ff1-c394-4c9a-95e2
An update:
It seems that I am arriving at memory shortage. Even with 32 GB for 20
OSDs and 2 GB swap, ceph-osd uses all available memory.
I created another swap device with 10 GB, and I managed to get the
failed OSD running without crash, but consuming extra 5 GB.
Are there known issues regard
Hi Igor
I suspect you have very much the same problem as me.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg22260.html
Basically Samsung drives (like many SATA SSD's) are very much hit and miss so
you will need to test them like described here to see if they are any good.
http://ww
Hello.
Could you please help me to remove osd from cluster;
# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02998 root default
-2 0.00999 host ceph1
0 0.00999 osd.0 up 1.0 1.0
-3 0.00999 host ceph2
1 0.00999 osd.1
If you are using the default configuration to create the pool (3 replicas),
after losing 1 OSD and having 2 left, CRUSH would not be able to find enough
OSDs (at least 3) to map the PG thus it would stuck at unclean.
Thanks,
Guang
> From: chm...@yandex
There currently is no mechanism to rename snapshots without hex editing the RBD
image header data structure. I created a new Ceph feature request [1] to add
this ability in the future.
[1] http://tracker.ceph.com/issues/12678
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla...@redh
Yeah. You are right. Thank you.
> On Aug 12, 2015, at 19:53, GuangYang wrote:
>
> If you are using the default configuration to create the pool (3 replicas),
> after losing 1 OSD and having 2 left, CRUSH would not be able to find enough
> OSDs (at least 3) to map the PG thus it would stuck at
If I am using a more recent client(kernel OR ceph-fuse), should I still be
worried about the MDS's crashing? I have added RAM to my MDS hosts and its
my understanding this will also help mitigate any issues, in addition to
setting mds_bal_frag = true. Not having used cephfs before, do I always
ne
On Thu, Aug 13, 2015 at 7:05 AM, Bob Ababurko wrote:
>
> If I am using a more recent client(kernel OR ceph-fuse), should I still be
> worried about the MDS's crashing? I have added RAM to my MDS hosts and its
> my understanding this will also help mitigate any issues, in addition to
> setting mds
I also encounter a problem,standby mds can not be altered to active when active
mds service stopped,which bother me for serval days.Maybe MDS cluster can solve
those problem,but ceph team haven't released this feature.
yangyongp...@bwstor.com.cn
From: Yan, Zheng
Date: 2015-08-13 10:21
To: Bo
On Wed, Aug 12, 2015 at 7:21 PM, Yan, Zheng wrote:
> On Thu, Aug 13, 2015 at 7:05 AM, Bob Ababurko wrote:
> >
> > If I am using a more recent client(kernel OR ceph-fuse), should I still
> be
> > worried about the MDS's crashing? I have added RAM to my MDS hosts and
> its
> > my understanding th
24 matches
Mail list logo