Hello cephers,
Does anyone know the planned release date for Giant?
Cheers
Andrei
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Thanks, Sage,
Can't wait to try it out and see if there are any improvements in the cache
tier.
Cheers
Andrei
- Original Message -
> From: "Sage Weil"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Tuesday, 21 Octo
Hello cephers,
I would like to know if it is possible to underprovision the ssd disks when
using them with ceph-deploy?
I would like to leave at least 10% of each ssd as unpartitioned space to make
sure it keeps stable write performance over time. In the past, I've
experienced performance degr
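(One approach I am considering, sketched here with example device names only:
partition the ssd by hand, leaving the tail of the disk untouched, and then
point ceph-deploy at the pre-made journal partitions:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart journal1 0% 45%
parted -s /dev/sdb mkpart journal2 45% 90%
ceph-deploy osd prepare osd-server1:/dev/sdc:/dev/sdb1
That leaves roughly 10% of the ssd unpartitioned as spare area.)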
Hello guys,
Since upgrading my cluster to Giant from the previous stable release I started
having massive problems with client IO. I've done the upgrade 2 weeks ago and
since then the IO on my ceph cluster has been unavailable 3 times already.
Quick info on my storage cluster - 3 mons, 2 osd
14-11-14 13:26, Andrei Mikhailovsky wrote:
> Hello guys,
>
> Since upgrading my cluster to Giant from the previous stable release I
> started having massive problems with client IO. I've done the upgrade 2
> weeks ago and since then the IO on my ceph cluster has
- Original Message -
On 11/14/2014 01:50 PM, Andrei Mikhailovsky wrote:
> Wido, I've not done any changes from the default settings. There are no
> firewalls between the ceph cluster members and I do not see a great deal of
> network related errors either. There is a tin
Any other suggestions why several osds are going down on Giant and causing IO
to stall? This was not happening on Firefly.
Thanks
- Original Message -
On 11/14/2014 01:50 PM, Andrei Mikhailovsky wrote:
> Wido, I've not done any changes from the default settings. Ther
Hello cephers,
I need your help and suggestions on what is going on with my cluster. A few
weeks ago I upgraded from Firefly to Giant. I've previously written about
having issues with Giant, where in a two-week period the cluster's IO froze three
times after ceph marked two osds down. I have in to
Sam, the logs are rather large in size. Where should I post them?
Thanks
- Original Message -
From: "Samuel Just"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 18 November, 2014 7:54:56 PM
Subject: Re: [ceph-users] Giant upgrade - s
ginal Message -
From: "Samuel Just"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 18 November, 2014 8:53:47 PM
Subject: Re: [ceph-users] Giant upgrade - stability issues
pastebin or something, probably.
-Sam
On Tue, Nov 18, 2014 at 12:34 PM, An
not had these problems with Firefly, Emperor or Dumpling
releases on the same hardware and same cluster loads.
Thanks
Andrei
On Tue, Nov 18, 2014 at 3:34 PM, Andrei Mikhailovsky wrote:
> Sam,
>
> Pastebin or similar will not take tens of megabytes worth of logs. If we are
> ta
at 1:39 PM, Andrei Mikhailovsky wrote:
>
>>You indicated that osd 12 and 16 were the ones marked down, but it
>>looks like only 0,1,2,3,7 were marked down in the ceph.log you sent.
>>The logs for 12 and 16 did indicate that they had been partitioned
>>from the oth
ults
from other servers.
Unless you have other tests in mind, I think there are no issues with the
network.
I will fire up another test, for 24 hours this time, to see if it makes a difference.
Thanks
Andrei
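(For reference, the longer run is just plain iperf between the osd hosts; the
hostname below is an example:
iperf -s                        # on the first osd server
iperf -c osd-server1 -t 86400   # on the second server, for 24 hours)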
- Original Message -
From: "Samuel Just"
To: "Andrei
Thanks, I will try that.
Andrei
- Original Message -
From: "Samuel Just"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 20 November, 2014 4:26:00 PM
Subject: Re: [ceph-users] Giant upgrade - stability issues
You can try to capture lo
Hello guys,
Could someone comment on the optimal or recommended values for the various
thread settings in ceph.conf?
At the moment I have the following settings :
filestore_op_threads = 8
osd_disk_threads = 8
osd_op_threads = 8
filestore_merge_threshold = 40
filestore_split_multiple = 8
Are
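(For reference, the values an osd is actually running with can be checked via
its admin socket on the osd host, e.g.:
ceph daemon osd.0 config show | grep -E 'op_threads|disk_threads|filestore'
osd.0 is just an example id.)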
Thanks for the advice!
I've checked a couple of my Intel 520s which I use for the osd journals and
have been using them for almost 2 years now.
I do not have a great deal of load though; only about 60 vms or so with
general usage.
Disk 1:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRE
amount of writes and are still at 95% health level. Looks very odd / incorrect
to me.
Cheers
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3438DE
PGP
OSDs, and I don't want
> any one node able to mark the rest of the cluster as down (it
> happened once).
> On Sat, Nov 22, 2014 at 6:24 AM, Andrei Mikhailovsky <
> and...@arhont.com > wrote:
> > Hello guys,
>
> > Could some one comment on the optimal or
ly, which I did not. Not sure what to think now.
Andrei
- Original Message -
> From: "Andrei Mikhailovsky"
> To: sj...@redhat.com
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, 20 November, 2014 4:50:21 PM
> Subject: Re: [ceph-users] Giant upgrade - sta
Hello guys,
I've got a bunch of hang tasks of the nfsd service running over the cephfs
(kernel) mounted file system. Here is an example of one of them.
[433079.991218] INFO: task nfsd:32625 blocked for more than 120 seconds.
[433080.029685] Not tainted 3.15.10-031510-generic #201408132333
[
One more thing I forgot to mention: all the failures I've seen happen when
there is a deep scrubbing process running.
Andrei
- Original Message -
> From: "Andrei Mikhailovsky"
> To: sj...@redhat.com
> Cc: ceph-users@lists.ceph.com
> Sent: Thursd
back again.
Has anyone experienced similar issues?
Andrei
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "ceph-users"
> Sent: Friday, 28 November, 2014 9:08:17 AM
> Subject: [ceph-users] Giant + nfs over cephfs hang tasks
> Hello guys,
; dd if=/dev/zero
of=/tmp/cephfs/4G3 bs=1M count=4K oflag=direct &
Cheers
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "ceph-users"
> Sent: Friday, 28 November, 2014 11:22:07 AM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tas
Dan, are you setting this on the guest vm side? Did you run some tests to see
if this impacts performance? Like small block size performance, etc?
Cheers
- Original Message -
> From: "Dan Van Der Ster"
> To: "ceph-users"
> Sent: Friday, 28 November, 2014 1:33:20 PM
> Subject: Re: [c
?
Thanks
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "ceph-users"
> Sent: Friday, 28 November, 2014 12:02:57 PM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tasks
> I've just tried the latest ubuntu-vivid kernel and also seeing hang
Ilya, yes I do! Like these, from different osds:
[ 4422.212204] libceph: osd13 192.168.168.201:6819 socket closed (con state
OPEN)
Andrei
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
>
I will give it a go and let you know.
Cheers
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Friday, 28 November, 2014 5:28:28 PM
> Subject: Re: [ceph-users] Giant + nfs over cephfs
and get back to you with more info.
Andrei
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Friday, 28 November, 2014 5:28:28 PM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang
o, it looks like you have nailed the bug!
Do you plan to backport the fix to the 3.16 or 3.17 branches?
Cheers
Andrei
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Friday, 28 November, 20
odule?
Thanks
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Saturday, 29 November, 2014 8:45:54 AM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tasks
> On Sat, Nov 29, 2014 at
Ilya, I will give it a try and get back to you shortly,
Andrei
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Saturday, 29 November, 2014 10:40:48 AM
> Subject: Re: [ceph-users] Giant +
I think I had a similar issue recently when I added a new pool. All pgs that
corresponded to the new pool were shown as degraded/unclean. After doing a bit
of testing I realized that my issue was down to this:
replicated size 2
min_size 2
The replicated size and min_size were the same. In m
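(The usual fix for this is to drop min_size below the replica count, e.g. for a
pool called mypool:
ceph osd pool set mypool min_size 1
so IO can continue with a single healthy copy while the pgs recover; the pool
name is just an example.)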
if this message should be treated as alarming.
Andrei
- Original Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Saturday, 29 November, 2014 10:40:48 AM
> Subject: Re: [ceph-users] Giant + nfs over
time dd if=/dev/zero of=4G77 bs=4M count=5K
oflag=direct &
I've run the same test about 10 times, but with only 4 concurrent dds, and that
didn't cause the issue.
Should I try the 3.18 kernel again to see if 8 dds produce similar output?
Andrei
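(For reference, the 8 concurrent writers were started roughly like this; the
mount point is just an example:
for i in $(seq 1 8); do dd if=/dev/zero of=/tmp/cephfs/4G$i bs=4M count=5K oflag=direct & done; wait)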
- Original Message -
> To: "Ilya Dryomov" , "Andrei Mikhailovsky"
>
> Cc: "ceph-users"
> Sent: Saturday, 29 November, 2014 10:19:32 PM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tasks
> Ilya, do you have a ticket reference for the bug?
> Andrei, we run NFS
nal Message -
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users" , "Gregory Farnum"
>
> Sent: Monday, 1 December, 2014 8:22:08 AM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tasks
> On Mon, Dec 1
Ilya,
I see. My server has 24GB of ram + 3GB of swap. While running the tests,
I've noticed that the server had 14GB of ram shown as cached and only 2MB were
used from the swap. Not sure if this is helpful to your debugging.
Andrei
--
Andrei Mikhailovsky
Director
Arhont Inform
Hello guys,
I am seeing about a dozen or so messages like these on a daily basis. I was
wondering if this is something I should worry about.
2014-12-04 06:51:14.639833 7fb3ea615700 0 -- 192.168.168.200:6815/3326
submit_message osd_op_reply(175402 rbd_data.1bb23193a95f874.2c08
[set-
ding iperf and alike.
Andrei
- Original Message -
> From: "Jake Young"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, 4 December, 2014 4:57:47 PM
> Subject: Re: [ceph-users] Giant osd problems - loss of IO
> On
p_sack = 1
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_adv_win_scale = 1
Jake, thanks for your suggestions.
Andrei
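(I am assuming the usual way of making these settings persistent: put them in a
file such as /etc/sysctl.d/90-ceph-net.conf and load it with
sysctl -p /etc/sysctl.d/90-ceph-net.conf
the file name is only an example.)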
- Original Message -
> From: "Jake Young"
> To: "Andrei Mikhailovsky" ,
> ceph-users@lists.ceph.com
> Sent: Saturday, 6 December, 2014 5:02:1
Hi guys,
Did anyone figure out what could be causing this problem, and is there a workaround?
I've noticed a very annoying behaviour with my vms. It seems to happen randomly
about 5-10 times a day and the pauses last between 2-10 minutes. It happens
across all vms on all host servers in my cluster. I
Jonas,
I've seen this happening on a weekly basis when I was running 0.61 branch as
well, however after switching to 0.67 branch it has stopped. Perhaps you should
try upgrading
Andrei
- Original Message -
From: "Jonas Rottmann (centron GmbH)"
To: "ceph-us...@ceph.com"
Sent:
Mike, I am using 1.5.0:
QEMU emulator version 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5~cloud0), Copyright (c)
2003-2008 Fabrice Bellard
which is installed from the Ubuntu cloud Havana ppa
Thanks
--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com
http
Hi guys,
Could someone explain what the new perf stats show and whether the numbers are
reasonable on my cluster?
I am concerned about the high fs_commit_latency, which seems to be above 150ms
for all osds. I've tried to find the documentation on what this command
actually shows, but couldn't f
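(The command in question is presumably ceph osd perf, which prints
fs_commit_latency and fs_apply_latency in milliseconds for every osd:
ceph osd perf)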
Hello guys,
Was hoping someone could help me with strange read performance problems on
osds. I have a test setup of 4 kvm host servers which are running about 20 test
linux vms between them. The vms' images are stored in the ceph cluster and accessed
via rbd. I also have 2 osd servers with repli
ark Nelson"
To: ceph-users@lists.ceph.com
Sent: Tuesday, 25 March, 2014 12:27:21 PM
Subject: Re: [ceph-users] rbd + qemu osd performance
On 03/25/2014 07:19 AM, Andrei Mikhailovsky wrote:
> Hello guys,
>
> Was hoping someone could help me with strange read performance problems
> o
Guys, sorry for the confusion. I meant to say that I am using XFS, _NOT_ ZFS. The
rest of the information is correct.
Cheers
- Original Message -
From: "Mark Nelson"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, 26 March, 2014 1:39
Cedric,
Sorry for the confusion, these values are from xfs and not zfs, as I incorrectly
mentioned in my email.
Andrei
- Original Message -
From: "Cédric Lemarchand"
To: "Andrei Mikhailovsky"
Cc: "Mark Nelson" , ceph-users@lists.ceph.com
Sent: W
Hello guys,
I would like to offer NFS service to the XenServer and VMWare hypervisors for
storing vm images. I am currently running ceph rbd with kvm, which is working
reasonably well.
What would be the best way of running NFS services over CEPH, so that the
XenServer and VMWare's vm disk im
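(One option I am considering, sketched with example hostnames and paths only:
mount cephfs with the kernel client on a gateway box and re-export it over the
kernel NFS server:
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
echo '/mnt/cephfs *(rw,sync,no_subtree_check,fsid=1)' >> /etc/exports
exportfs -ra)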
ssage -
From: "Wido den Hollander"
To: ceph-users@lists.ceph.com
Sent: Wednesday, 7 May, 2014 11:15:39 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 05/07/2014 11:46 AM, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I would like to offer NFS service to
Vlad, is there a howto somewhere describing the steps for setting up iscsi
multipathing over ceph? It looks like a good alternative to nfs.
Thanks
- Original Message -
From: "Vlad Gorbunov"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Wedne
"Vlad Gorbunov"
To: "Sergey Malinin"
Cc: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com
Sent: Wednesday, 7 May, 2014 2:23:52 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
It's easy to install tgtd with ceph support. Ubuntu 12.04, for example:
Conne
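(From memory, and untested here, defining an rbd-backed LUN with tgt once it is
built with rbd support looks roughly like this; the iqn, pool and image names
are examples:
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2014-05.com.example:rbd
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --bstype rbd --backing-store rbd/image1
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL)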
2014 12:26:17 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 07/05/14 19:46, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I would like to offer NFS service to the XenServer and VMWare
> hypervisors for storing vm images. I am currently running ceph rbd with
Ideally I would like to have a setup with 2+ iscsi servers, so that I can
perform maintenance if necessary without shutting down the vms running on the
servers. I guess multipathing is what I need.
Also I will need to have more than one xenserver/vmware host server, so the
iscsi LUNs will be
possible with iscsi?
Cheers
Andrei
- Original Message -
From: "Leen Besselink"
To: ceph-users@lists.ceph.com
Sent: Saturday, 10 May, 2014 8:31:02 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote:
for all your help
Andrei
- Original Message -
From: "Leen Besselink"
To: ceph-users@lists.ceph.com
Cc: "Andrei Mikhailovsky"
Sent: Sunday, 11 May, 2014 11:41:08 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Sun, May 11, 2014 at 09:2
Hello guys,
I am currently running a ceph cluster for running vms with qemu + rbd. It works
pretty well and provides a good degree of failover. I am able to run
maintenance tasks on the ceph nodes without interrupting the vms' IO.
I would like to do the same with VMWare / XenServer hypervisors, bu
09285
Fax: +49-69-348789069
- Original Message -
> From: "Andrei Mikhailovsky"
> To: ceph-users@lists.ceph.com
> Sent: Montag, 12. Mai 2014 12:00:48
> Subject: [ceph-users] Ceph with VMWare / XenServer
>
>
>
> Hello guys,
>
> I am currently running a c
14 4:52 AM, Andrei Mikhailovsky wrote:
> Leen,
>
> thanks for explaining things. It does make sense now.
>
> Unfortunately, it does look like this technology would not fulfill my
> requirements as I do need to have an ability to perform maintenance
> without shutting down vms
str. 351
60327 Frankfurt a. M.
eMail: u...@grohnwaldt.eu
Telefon: +49-69-34878906
Mobil: +49-172-3209285
Fax: +49-69-348789069
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "Uwe Grohnwaldt"
> Cc: ceph-users@lists.ceph.com
> Sent: Mont
Georg,
I've had similar issues when I had a "+" character in my secret key. Not all
clients support it. You might need to escape it with \ and see if it works.
Andrei
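(Alternatively, one can simply regenerate the S3 secret until it comes out
without awkward characters, e.g.:
radosgw-admin key create --uid=testuser --key-type=s3 --gen-secret
the uid is just an example.)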
- Original Message -
From: "Georg Höllrigl"
To: ceph-users@lists.ceph.com
Sent: Tuesday, 13 May, 2014 1:30:14
Hello guys,
I am trying to figure out what the problem is here.
Currently running Ubuntu 12.04 with latest updates and radosgw version
0.72.2-1precise. My ceph.conf file is pretty standard, from the radosgw howto.
I am testing radosgw as a backup solution for S3-compatible clients. I
ing about something else?
Cheers
Andrei
- Original Message -
From: "Yehuda Sadeh"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 15 May, 2014 4:05:06 PM
Subject: Re: [ceph-users] Problem with radosgw and some file name characters
Your rewrite
72-3209285
Fax: +49-69-348789069
- Original Message -----
> From: "Andrei Mikhailovsky"
> To: ceph-users@lists.ceph.com
> Sent: Montag, 12. Mai 2014 12:00:48
> Subject: [ceph-users] Ceph with VMWare / XenServer
>
>
>
> Hello guys,
>
> I am curr
e reason it's not well documented:
RewriteRule ^/(.*) /s3gw.3.fcgi?%{QUERY_STRING}
[E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
I still need to see a more verbose log to make a better educated guess.
Yehuda
On Thu, May 15, 2014 at 3:01 PM, Andrei Mikhailovsky wrote:
>
> Yehu
s
Andrei
- Original Message -
From: "Yehuda Sadeh"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Friday, 16 May, 2014 5:44:52 PM
Subject: Re: [ceph-users] Problem with radosgw and some file name characters
Was talking about this. There is a diff
with the rule you've suggested.
Any idea how to fix the issue?
Cheers
Andrei
- Original Message -
From: "Andrei Mikhailovsky"
To: "Yehuda Sadeh"
Cc: ceph-users@lists.ceph.com
Sent: Monday, 19 May, 2014 9:30:03 AM
Subject: Re: [ceph-users] Problem with ra
Does anyone have any idea how to fix the problem of getting a 403 when trying to
upload files with non-standard characters? I am sure I am not the only one
with these requirements.
Cheers
- Original Message -
From: "Andrei Mikhailovsky"
To: "Yehuda Sadeh&qu
That looks very interesting indeed. I've tried to use nginx, but from what I
recall it had some SSL-related issues. Have you tried to make SSL work so
that nginx acts as an SSL proxy in front of the radosgw?
Cheers
Andrei
- Original Message -
From: "Brian Rak"
To: ceph-user
l Message -
From: "Brian Rak"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 20 May, 2014 9:29:45 PM
Subject: Re: [ceph-users] nginx (tengine) and radosgw
I haven't tried SSL yet. We currently don't have a wildcard certificate for
thi
Hello
I've recently upgraded my ceph cluster from 0.94.1 to 0.94.3 and noticed that
after about a day I started getting emails from our network/host monitoring
system. The notifications were that there were too many processes on the osd
servers. I've not seen this before and I am running ce
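(To see whether it is simply the ceph-osd thread count that trips the monitoring,
something like this shows the number of threads per process:
ps -eo nlwp,pid,comm | grep ceph-osd)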
Hello everyone
I am planning to upgrade my ceph servers from Ubuntu 12.04 to 14.04 and I am
wondering if you have a recommended process for upgrading the OS version without
causing any issues to the ceph cluster.
Many thanks
Andrei
___
ceph-users
Same here, the upgrade went well. So far so good.
- Original Message -
From: "Francois Lafont"
To: "ceph-users"
Sent: Tuesday, 20 October, 2015 9:14:43 PM
Subject: Re: [ceph-users] v0.94.4 Hammer released
Hi,
On 20/10/2015 20:11, Stefan Eriksson wrote:
> A change like this b
Hello guys,
I've upgraded to the latest Hammer release and I've just noticed a massive
issue after the upgrade (((
I am using ceph for virtual machine rbd storage over cloudstack. I am having
issues with starting virtual routers. The libvirt error message is:
cat r-1407-VM.log
2015-10-21
Any thoughts anyone?
Is it safe to perform an OS version upgrade on the osd and mon servers?
Thanks
Andrei
- Original Message -
From: "Andrei Mikhailovsky"
To: ceph-us...@ceph.com
Sent: Tuesday, 20 October, 2015 8:05:19 PM
Subject: [ceph-users] ceph and upgrading
follow the procedure
on the second osd server.
Do you think this could work?
Performance-wise, I do not have a great IO demand, particularly over a
weekend.
Thanks
Andrei
- Original Message -
From: "Luis Periquito"
To: "Andrei Mikhailovsky"
Cc: "ceph-us
is also OK."? By
clients do you mean the host servers?
Many thanks
Andrei
- Original Message -
> From: "Wido den Hollander"
> To: "ceph-users" , "Andrei Mikhailovsky"
>
> Sent: Tuesday, 26 April, 2016 21:17:59
> Subject: Re: [ceph-users] H
riginal Message -
> From: "Wido den Hollander"
> To: "andrei"
> Cc: "ceph-users"
> Sent: Tuesday, 26 April, 2016 22:18:37
> Subject: Re: [ceph-users] Hammer broke after adding 3rd osd server
>> On 26 April 2016 at 22:31, Andrei Mikhailovsky wrote
hat else I could try, please let me know.
>
> Andrei
>
> - Original Message -
>> From: "Wido den Hollander"
>> To: "andrei"
>> Cc: "ceph-users"
>> Sent: Tuesday, 26 April, 2016 22:18:37
>> Subject: Re: [ceph-users
Hello everyone,
Please excuse me if this topic has been covered already. I've not managed to
find a guide, checklist or even a set of notes on optimising OS-level
settings/configuration/services for running ceph. One of the main reasons for
asking is that I've recently had to troubleshoot a bunch o
Hello,
I am planning to make some changes to our ceph cluster and would like to ask
the community about the best route to take.
Our existing cluster is made of 3 osd servers (two of which are also mon
servers) and the total of 3 mon servers. The cluster is currently running on
Ubuntu 14.04.x LT
Hello
I've recently updated my Hammer ceph cluster running on Ubuntu 14.04 LTS
servers and noticed a few issues during the upgrade. Just wanted to share my
experience.
I've installed the latest Jewel release. In my opinion, some of the issues I
came across relate to poor upgrade documentatio
Hi Anthony,
>
>> 2. Inefficient chown documentation - The documentation states that one should
>> "chown -R ceph:ceph /var/lib/ceph" if one is looking to have ceph-osd ran as
>> user ceph and not as root. Now, this command would run a chown process one
>> osd
>> at a time. I am considering my c
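(One way to speed this up, an untested sketch only: chown the osd directories in
parallel, e.g. 8 at a time, and then the rest of /var/lib/ceph separately:
find /var/lib/ceph/osd/ -mindepth 1 -maxdepth 1 -print0 | xargs -0 -P 8 -n 1 chown -R ceph:ceph)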
Interesting,
I switched to jemalloc about a month ago while running Hammer. After
installing the library and adding it to /etc/ld.so.preload I can see that all
ceph-osd processes are indeed using the library. I upgraded to Jewel a few
days ago and see the same picture:
# time lsof |grep c
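(A lighter-weight check than lsof is to look at the mapped libraries of a single
osd directly:
grep jemalloc /proc/$(pidof -s ceph-osd)/maps)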
"ceph", MODE="660"
So, it looks for all /dev/sd** devices (including partition numbers) that have
the model attribute INTEL SSDSC2BA20 and changes their ownership. You might want
to adjust the model number for your ssd journals.
Andrei
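(For completeness, a rule along these lines is what I mean; the match is from
memory, so treat it only as a sketch:
KERNEL=="sd*", SUBSYSTEM=="block", ATTRS{model}=="INTEL SSDSC2BA20*", OWNER="ceph", GROUP="ceph", MODE="660")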
> From: "Ernst Pijper"
> To: &
Hello ceph users,
I've recently upgraded my ceph cluster from Hammer to Jewel (10.2.1 and then
10.2.2). The cluster was running okay after the upgrade. I've decided to use
the optimal tunables for Jewel as the ceph status was complaining about the
straw version and my cluster settings were not
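(The switch itself is normally just:
ceph osd crush tunables optimal
which is what kicks off the data movement.)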
IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402 beim Amtsgericht Hanau
> Geschäftsführung: Oliver Dzombic
>
> Steuer Nr.: 35 236 3622 1
> UST ID: DE274086107
>
>
> Am 18.06.2016 um 18:04 schrieb Andrei Mikhailovsky:
>
it and it slowly started to increase over the
course of about an hour. At the end, there was 100% iowait on all vms. If this
was the case, wouldn't I see iowait jumping to 100% pretty quickly? Also, I
wasn't able to start any of my vms until I rebooted one of my osd / mon
server
seemed to be OK.
>> > At this stage, I have a strong suspicion that it is the introduction of
>> > "require_feature_tunables5 = 1" in the tunables. This seems to require
>> > all RADOS connections to be re-established.
>> Do you have any evidence of tha
Hi Daniel,
Many thanks for your useful tests and your results.
How much IO wait do you have on your client vms? Has it significantly increased
or not?
Many thanks
Andrei
- Original Message -
> From: "Daniel Swarbrick"
> To: "ceph-users"
> Cc: "ceph-devel"
> Sent: Wednesday, 22 June
warbrick"
> To: "ceph-users"
> Cc: "ceph-devel"
> Sent: Wednesday, 22 June, 2016 17:09:48
> Subject: Re: [ceph-users] cluster down during backfilling, Jewel tunables and
> client IO optimisations
> On 22/06/16 17:54, Andrei Mikhailovsky wrote:
>>
Hi
I am trying to run an osd level benchmark but get the following error:
# ceph tell osd.3 bench
Error EPERM: problem getting command descriptions from osd.3
I am running Jewel 10.2.2 on Ubuntu 16.04 servers. Has the syntax changed, or do
I have an issue?
Cheers
Andrei
__
Hello again
Any thoughts on this issue?
Cheers
Andrei
> From: "Andrei Mikhailovsky"
> To: "ceph-users"
> Sent: Wednesday, 22 June, 2016 18:02:28
> Subject: [ceph-users] Error EPERM when running ceph tell command
> Hi
> I am trying to run an osd
Hello
We are planning to make changes to our IT infrastructure and as a result the
fqdn and IPs of the ceph cluster will change. Could someone suggest the best
way of dealing with this to make sure we have minimal ceph downtime?
Many thanks
Andrei
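(From the docs, changing a monitor's address comes down to editing the monmap; a
rough sketch, with example names/IPs and the mon stopped while injecting:
ceph mon getmap -o /tmp/monmap
monmaptool --rm mon1 /tmp/monmap
monmaptool --add mon1 10.0.0.11:6789 /tmp/monmap
ceph-mon -i mon1 --inject-monmap /tmp/monmap)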
P addresses of cluster
> members
> On 16-07-22 13:33, Andrei Mikhailovsky wrote:
>> Hello
>> We are planning to make changes to our IT infrastructure and as a result the
>> fqdn and IPs of the ceph cluster will change. Could someone suggest the best
>> way of dealing
Gregory,
I've been given a tip by one of the ceph user list members on tuning values,
data migration and cluster IO. I've had issues twice already where my vms would
simply lose IO and crash while the cluster was being optimised for the new
tunables.
The recommendations were to upgrade the
Seb, yeah, it would be nice to have the debs. Is there a ppa that we could all use?
Thanks
Andrei
From: "Sebastien Han"
To: "Alex Bligh"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 28 May, 2013 11:21:28 PM
Subject: Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages
Arf sorry Wolfgang I scratched your
Hello guys,
Was wondering if there is any news on the CentOS 6 qemu-kvm packages with rbd
support? I am very keen to try them out.
Thanks
Andrei
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-cep
Yip, no, I have not tried them, but I certainly will! Do I need a patched
libvirtd as well, or is this working out of the box?
Thanks
Andrei
- Original Message -
From: "YIP Wai Peng"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 4 J
Yavor,
I would highly recommend taking a look at the quick install guide:
http://ceph.com/docs/next/start/quick-start/
As per the guide, you need to precreate the directories prior to starting ceph.
Andrei
- Original Message -
From: "Явор Маринов"
To: ceph-users@lists.ceph.com
S