[ceph-users] Firefly 0.80.9 OSD issues with connect claims to be...wrong node

2015-06-04 Thread Alex Gorbachev
Hello, seeing issues with OSDs stalling and error messages such as: 2015-06-04 06:48:17.119618 7fc932d59700 0 -- 10.80.4.15:6820/3501 >> 10.80.4.30:6811/3003603 pipe(0xb6b4000 sd=19 :33085 s=1 pgs=311 cs=4 l=0 c=0x915c6e0).connect claims to be 10.80.4.30:6811/4106 not 10.80.4.30:6811/3003603 -

Re: [ceph-users] New Ceph cluster - cannot add additional monitor

2015-06-14 Thread Alex Gorbachev
I wonder if your issue is related to: http://tracker.ceph.com/issues/5195 "I had to add the new monitor to the local ceph.conf file and push that with "ceph-deploy --overwrite-conf config push " to all cluster hosts and I had to issue "ceph mon add " on one of the existing cluster monitors" Reg
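A minimal sketch of that workflow, for reference (hostnames and the monitor name/address below are placeholders, not from the thread):
  ceph-deploy --overwrite-conf config push node1 node2 node3   # push the ceph.conf that lists the new mon
  ceph mon add mon2 10.0.0.2:6789                              # register the new monitor with the existing quorum
  ceph-deploy mon create mon2                                  # create and start the new monitor daemon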

Re: [ceph-users] Combining MON & OSD Nodes

2015-06-25 Thread Alex Gorbachev
I would not do this: MONs are very important, and any load or stability issues on OSD nodes would interfere with cluster uptime. I found it acceptable to run MONs on virtual machines with local storage. But since MONs oversee OSD nodes, I believe combining them is a recipe for disaster, FWIW.

[ceph-users] Redundant networks in Ceph

2015-06-27 Thread Alex Gorbachev
ce to faults within the switch core, which is really only detectable at application layer. Am I missing an already existing feature? Please advise. Best regards, Alex Gorbachev Intelligent Systems Services Inc. ___ ceph-users mailing list ceph-users@lists.

Re: [ceph-users] Redundant networks in Ceph

2015-06-27 Thread Alex Gorbachev
s as paths, rather than links, as these are higher level object storage exchanges. Thank you, Alex > > Nick > >> -Original Message----- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Alex Gorbachev >> Sent: 27 June 2015 19:02 >&g

Re: [ceph-users] Redundant networks in Ceph

2015-06-28 Thread Alex Gorbachev
oss > > contaminate in any way. > > Probably implementing something like multipathTCP would be the best bet to > mirror the traditional dual fabric SAN design. > > Assuming http://www.multipath-tcp.org/ and http://lwn.net/Articles/544399/ Looks very interesting. >

[ceph-users] OSD crashes

2015-07-03 Thread Alex Gorbachev
Hello, we are experiencing severe OSD timeouts; the OSDs are not taken out, and we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9. Thank you for any advice. Alex Jul 3 03:42:06 roc-4r-sca020 kernel: [554036.261899] BUG: unable to handle kernel paging request at 0019001c J

Re: [ceph-users] OSD crashes

2015-07-03 Thread Alex Gorbachev
ow to set it > correctly... > > Jan > > > On 03 Jul 2015, at 10:16, Alex Gorbachev wrote: > > Hello, we are experiencing severe OSD timeouts, OSDs are not taken out and > we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9. > > Thank you for any advice.

Re: [ceph-users] Block Storage Image Creation Process

2015-07-11 Thread Alex Gorbachev
Hi Jiwan, On Sat, Jul 11, 2015 at 4:44 PM, Jiwan N wrote: > Hi Ceph-Users, > > I am quite new to Ceph Storage (storage tech in general). I have been > investigating Ceph to understand the precise process clearly. > > *Q: What actually happens When I create a block image of certain size?* > > Th

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-11 Thread Alex Gorbachev
FWIW. Based on the excellent research by Mark Nelson ( http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/) we have dropped SSD journals altogether, and instead went for the battery protected controller writeback cache. Benefits: - No negative force multiplier

Re: [ceph-users] Deadly slow Ceph cluster revisited

2015-07-17 Thread Alex Gorbachev
May I suggest checking also the error counters on your network switch? Check speed and duplex. Is bonding in use? Is flow control on? Can you swap the network cable? Can you swap a NIC with another node and does the problem follow? Hth, Alex On Friday, July 17, 2015, Steve Thompson wrote: >
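A quick sketch of those checks (eth0/bond0 are placeholder interface names):
  ethtool eth0                                 # negotiated speed and duplex
  ethtool -S eth0 | grep -iE 'err|drop|crc'    # per-NIC error and drop counters
  ethtool -a eth0                              # pause/flow control settings
  cat /proc/net/bonding/bond0                  # bonding mode and slave health, if bonding is in use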

Re: [ceph-users] OSD crashes

2015-07-22 Thread Alex Gorbachev
: > What’s the value of /proc/sys/vm/min_free_kbytes on your system? Increase > it to 256M (better do it if there’s lots of free memory) and see if it > helps. > It can also be set too high, hard to find any formula how to set it > correctly... > > Jan > > > On 03 Jul

Re: [ceph-users] How to improve single thread sequential reads?

2015-08-16 Thread Alex Gorbachev
Hi Nick, On Thu, Aug 13, 2015 at 4:37 PM, Nick Fisk wrote: >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Nick Fisk >> Sent: 13 August 2015 18:04 >> To: ceph-users@lists.ceph.com >> Subject: [ceph-users] How to improve single thread se

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-08-17 Thread Alex Gorbachev
What about https://github.com/Frontier314/EnhanceIO? Last commit 2 months ago, but no external contributors :( The nice thing about EnhanceIO is there is no need to change device name, unlike bcache, flashcache etc. Best regards, Alex On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz wrote:

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-08-18 Thread Alex Gorbachev
r's VM, and that customer didn't have a strong > technical background to be able to fiddle with it... > So I haven't tested it heavily. > > Bcache should be the obvious choice if you are in control of the environment. > At least you can cry on LKML's shoulder whe

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-08-18 Thread Alex Gorbachev
> IE, should we be focusing on IOPS? Latency? Finding a way to avoid journal > overhead for large writes? Are there specific use cases where we should > specifically be focusing attention? general iscsi? S3? databases directly > on RBD? etc. There's tons of different areas that we can work on

Re: [ceph-users] Bad performances in recovery

2015-08-20 Thread Alex Gorbachev
> > Just to update the mailing list, we ended up going back to default > ceph.conf without any additional settings than what is mandatory. We are > now reaching speeds we never reached before, both in recovery and in > regular usage. There was definitely something we set in the ceph.conf > bogging

[ceph-users] Slow responding OSDs are not OUTed and cause RBD client IO hangs

2015-08-22 Thread Alex Gorbachev
Hello, this is an issue we have been suffering from and researching along with a good number of other Ceph users, as evidenced by the recent posts. In our specific case, these issues manifest themselves in an RBD -> iSCSI LIO -> ESXi configuration, but the problem is more general. When there is an

Re: [ceph-users] Slow responding OSDs are not OUTed and cause RBD client IO hangs

2015-08-24 Thread Alex Gorbachev
T seem to be pretty stable in testing. >> >> Nick >> >>> -Original Message- >>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >>> Alex Gorbachev >>> Sent: 23 August 2015 02:17 >>> To: ceph-users >>>

Re: [ceph-users] Slow responding OSDs are not OUTed and cause RBD client IO hangs

2015-08-24 Thread Alex Gorbachev
is done to those OSDs. I am thankful this is not production storage, but worried of this situation in production - the OSDs are staying up and in, but their latencies are slowing clusterwide IO to a crawl. I am trying to envision this situation in production and how would one find out what is slo
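One way to narrow down which OSDs are dragging cluster-wide IO, assuming admin-socket access on the OSD nodes (osd.12 is a placeholder):
  ceph health detail                      # lists blocked/slow requests and the OSDs involved
  ceph osd perf                           # per-OSD commit and apply latency
  ceph daemon osd.12 dump_historic_ops    # slowest recent ops on a suspect OSD, run on its host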

Re: [ceph-users] 1 hour until Ceph Tech Talk

2015-08-29 Thread Alex Gorbachev
Hi Patrick, On Thu, Aug 27, 2015 at 12:00 PM, Patrick McGarry wrote: > Just a reminder that our Performance Ceph Tech Talk with Mark Nelson > will be starting in 1 hour. > > If you are unable to attend there will be a recording posted on the > Ceph YouTube channel and linked from the page at: >

[ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-02 Thread Alex Gorbachev
We have experienced a repeatable issue when performing the following: Ceph backend with no issues, we can repeat any time at will in lab and production. Cloning an ESXi VM to another VM on the same datastore on which the original VM resides. Practically instantly, the LIO machine becomes unrespon

Re: [ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-03 Thread Alex Gorbachev
On Thu, Sep 3, 2015 at 6:58 AM, Jan Schermer wrote: > EnhanceIO? I'd say get rid of that first and then try reproducing it. Jan, EnhanceIO has not been used in this case, in fact we have never had a problem with it in read cache mode. Thank you, Alex > > Jan > >> On 03 S

Re: [ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-03 Thread Alex Gorbachev
On Thu, Sep 3, 2015 at 3:20 AM, Nicholas A. Bellinger wrote: > (RESENDING) > > On Wed, 2015-09-02 at 21:14 -0400, Alex Gorbachev wrote: >> e have experienced a repeatable issue when performing the following: >> >> Ceph backend with no issues, we can repeat

Re: [ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-04 Thread Alex Gorbachev
On Thu, Sep 3, 2015 at 3:20 AM, Nicholas A. Bellinger wrote: > (RESENDING) > > On Wed, 2015-09-02 at 21:14 -0400, Alex Gorbachev wrote: >> e have experienced a repeatable issue when performing the following: >> >> Ceph backend with no issues, we can repeat

[ceph-users] OSD crash

2015-09-08 Thread Alex Gorbachev
Hello, We have run into an OSD crash this weekend with the following dump. Please advise what this could be. Best regards, Alex 2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251 l=0 c=0x10d34580).f

Re: [ceph-users] OSD crash

2015-09-22 Thread Alex Gorbachev
Hi Brad, This occurred on a system under moderate load - has not happened since and I do not know how to reproduce. Thank you, Alex On Tue, Sep 22, 2015 at 7:29 PM, Brad Hubbard wrote: > - Original Message - > > > From: "Alex Gorbachev" > > To: "ce

Re: [ceph-users] Diffrent OSD capacity & what is the weight of item

2015-09-24 Thread Alex Gorbachev
Please review http://docs.ceph.com/docs/master/rados/operations/crush-map/ regarding weights Best regards, Alex On Wed, Sep 23, 2015 at 3:08 AM, wikison wrote: > Hi, > I have four storage machines to build a ceph storage cluster as > storage nodes. Each of them is attached a 120 GB HDD
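For illustration of the weight mechanics in that document (osd.3 and the values are placeholders):
  ceph osd tree                           # shows CRUSH weight and reweight per OSD
  ceph osd crush reweight osd.3 1.819     # permanent CRUSH weight, conventionally ~1.0 per TiB of capacity
  ceph osd reweight osd.3 0.95            # temporary 0..1 override used for rebalancing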

Re: [ceph-users] Potential OSD deadlock?

2015-10-04 Thread Alex Gorbachev
We had multiple issues with 4TB drives and delays. Here is the configuration that works for us fairly well on Ubuntu (but we are about to significantly increase the IO load so this may change). NTP: always use NTP and make sure it is working - Ceph is very sensitive to time being precise /etc/de
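A minimal check of the NTP point above on Ubuntu (the pool servers below are illustrative):
  ntpq -pn        # the selected time source should be marked with '*'
  # /etc/ntp.conf - keep the same nearby servers on all MON and OSD nodes
  server 0.ubuntu.pool.ntp.org iburst
  server 1.ubuntu.pool.ntp.org iburst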

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Alex Gorbachev
GbE networking seems to be helping a lot, it could be just the superior switch response on a higher end switch. Using blk_mq scheduler, it's been reported to improve performance on random IO. Good luck! -- Alex Gorbachev Storcium On Sun, Nov 8, 2015 at 5:07 PM, Timofey Titovets wrote: &

Re: [ceph-users] network failover with public/custer network - is that possible

2015-11-28 Thread Alex Gorbachev
e should be helpful as well to add robustness to the Ceph networking backend. Best regards, Alex > > Thanks for feedback and regards . Götz > > > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] rbd merge-diff error

2015-12-07 Thread Alex Gorbachev
have found this link http://tracker.ceph.com/issues/12911 but not sure if the patch should have already been in hammer or how to get it? System: ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) Ubuntu 14.04.3 kernel 4.2.1-040201-generic Thank you -- Alex Gorbachev Sto
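For reference, the merge-diff flow being attempted looks roughly like this (image, snapshot and file names are placeholders):
  rbd export-diff --from-snap snap1 spin1/image@snap2 diff1
  rbd export-diff --from-snap snap2 spin1/image@snap3 diff2
  rbd merge-diff diff1 diff2 merged      # combine the two incremental diffs into one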

Re: [ceph-users] rbd merge-diff error

2015-12-08 Thread Alex Gorbachev
Hi Josh, On Mon, Dec 7, 2015 at 6:50 PM, Josh Durgin wrote: > On 12/07/2015 03:29 PM, Alex Gorbachev wrote: > >> When trying to merge two results of rbd export-diff, the following error >> occurs: >> >> iss@lab2-b1:~$ rbd export-diff --from-snap autosn

Re: [ceph-users] Starting a cluster with one OSD node

2016-05-13 Thread Alex Gorbachev
ling list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Starting a cluster with one OSD node

2016-05-15 Thread Alex Gorbachev
> On Friday, May 13, 2016, Mike Jacobacci wrote: > Hello, > > I have a quick and probably dumb question… We would like to use Ceph > for our storage, I was thinking of a cluster with 3 Monitor and OSD > nodes. I was wondering if it was a bad idea to start a Ceph cluster >>

[ceph-users] Pacemaker Resource Agents for Ceph by Andreas Kurz

2016-05-15 Thread Alex Gorbachev
by clients' IO load. https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTLogicalUnit https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTTarget https://github.com/akurz/resource-agents/blob/SCST/heartbeat/iscsi-scstd -- Alex Gorbachev http://www.iss-integratio

Re: [ceph-users] Must host bucket name be the same with hostname ?

2016-06-11 Thread Alex Gorbachev
ith hostname. > > > > > > Or host bucket name does no matter? > > > > > > > > Best regards, > > > > Xiucai > > -- > Christian Balzer Network/Systems Engineer > ch...@gol.com Global OnLine

Re: [ceph-users] Disaster recovery and backups

2016-06-11 Thread Alex Gorbachev
ful restore. Best regards, Alex > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list

[ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-06-28 Thread Alex Gorbachev
turned off CFQ and blk-mq/scsi-mq and are using just the noop scheduler. Does the ceph kernel code somehow use the fair scheduler code block? Thanks -- Alex Gorbachev Storcium Jun 28 09:46:41 roc04r-sca090 kernel: [137912.684974] CPU: 30 PID: 10403 Comm: ceph-osd Not tainted 4.4.13-040413
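Switching a disk to the noop scheduler at runtime, as a sketch (sdb is a placeholder; persist it via udev or a kernel boot parameter):
  cat /sys/block/sdb/queue/scheduler          # the active scheduler is shown in brackets
  echo noop > /sys/block/sdb/queue/scheduler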

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-06-28 Thread Alex Gorbachev
m Bishop : > > Yes - I noticed this today on Ubuntu 16.04 with the default kernel. No > useful information to add other than it's not just you. > > Tim. > > On Tue, Jun 28, 2016 at 11:05:40AM -0400, Alex Gorbachev wrote: > > After upgrading to kernel 4.4.13 on Ubun

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-02 Thread Alex Gorbachev
's quite a long standing issue that's only just been >>> resolved, another user chimed in on the lkml thread a couple of days >>> ago as well and again his trace had ceph-osd in it as well. >>> >>> https://lkml.org/lkml/headers/2016/6/21/491 >>> >>

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-04 Thread Alex Gorbachev
ooked at the fair.c code in kernel source tree 4.4.14 and it is quite different than Peter's patch (assuming 4.5.x source), so the patch does not apply cleanly. Maybe another 4.4.x kernel will get the update. Thanks, Alex > > On 29 June 2016 at 18:29, Stefan Priebe - Profihost AG > w

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-04 Thread Alex Gorbachev
HI Nick, On Fri, Jul 1, 2016 at 2:11 PM, Nick Fisk wrote: > However, there are a number of pain points with iSCSI + ESXi + RBD and they > all mainly centre on write latency. It seems VMFS was designed around the > fact that Enterprise storage arrays service writes in 10-100us, whereas Ceph

Re: [ceph-users] Backing up RBD snapshots to a different cloud service

2016-07-10 Thread Alex Gorbachev
ent deduplication. HTH, Alex > > > Any advice is greatly appreciated. > > Thanks, > Brendan > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Alex Gorbachev
re several options for those. Currently running 3 VMware clusters with 15 hosts total, and things are quite decent. Regards, Alex Gorbachev Storcium > > Thank you ! > > -- > Mit freundlichen Gruessen / Best regards > > Oliver Dzombic > IP-Interactive > >

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-19 Thread Alex Gorbachev
/lkml/2016/7/12/919 https://lkml.org/lkml/2016/7/12/297 -- Alex Gorbachev Storcium > > 2016-07-05 11:47 GMT+03:00 Nick Fisk : >>> -Original Message- >>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >>> Alex Gorbachev >>

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-27 Thread Alex Gorbachev
output - this means that discard is being sent to the backing (RBD) device, correct? Including the ceph-users list to see if there is a reason RBD is not processing this discard/unmap. Thank you, -- Alex Gorbachev Storcium Jul 26 08:23:38 e1 kernel: [ 858.324715] [20426]: scst: scst_cmd_done_

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-27 Thread Alex Gorbachev
with UNMAP) - blkdiscard does release the space -- Alex Gorbachev Storcium On Wed, Jul 27, 2016 at 11:55 AM, Alex Gorbachev wrote: > Hi Vlad, > > On Mon, Jul 25, 2016 at 10:44 PM, Vladislav Bolkhovitin wrote: >> Hi, >> >> I would suggest to rebuild SCST in the deb

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-30 Thread Alex Gorbachev
Hi Vlad, On Wednesday, July 27, 2016, Vladislav Bolkhovitin wrote: > > Alex Gorbachev wrote on 07/27/2016 10:33 AM: > > One other experiment: just running blkdiscard against the RBD block > > device completely clears it, to the point where the rbd-diff method > > report

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-30 Thread Alex Gorbachev
> > On Wednesday, July 27, 2016, Vladislav Bolkhovitin wrote: >> >> >> Alex Gorbachev wrote on 07/27/2016 10:33 AM: >> > One other experiment: just running blkdiscard against the RBD block >> > device completely clears it, to the point where the rbd-diff

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-01 Thread Alex Gorbachev
# blkdiscard -o 0 -l 4096000 /dev/rbd28 root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END { print SUM/1024 " KB" }' 819200 KB root@e1:/var/log# blkdiscard -o 0 -l 4096 /dev/rbd28 root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END { print
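A cleaner version of that measurement, assuming the image is still mapped as /dev/rbd28:
  rbd diff spin1/testdis | awk '{SUM += $2} END {print SUM/1024/1024 " MB"}'   # allocated extents before
  blkdiscard -o 0 -l 4194304 /dev/rbd28                                        # discard the first 4 MB (one object)
  rbd diff spin1/testdis | awk '{SUM += $2} END {print SUM/1024/1024 " MB"}'   # allocated extents after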

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-01 Thread Alex Gorbachev
Hi Ilya, On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote: > On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev > wrote: >> RBD illustration showing RBD ignoring discard until a certain >> threshold - why is that? This behavior is unfortunately incompatible >> with ESXi

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Alex Gorbachev
On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote: > Alex Gorbachev wrote on 08/01/2016 04:05 PM: >> Hi Ilya, >> >> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote: >>> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev >>> wrote: >>>

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Alex Gorbachev
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote: > On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev > wrote: >> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote: >>> Alex Gorbachev wrote on 08/01/2016 04:05 PM: >>>> Hi Ilya, >>>> >

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-03 Thread Alex Gorbachev
On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote: > Alex Gorbachev wrote on 08/02/2016 07:56 AM: >> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote: >>> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev >>> wrote: >>>> On Mon, Aug 1,

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-03 Thread Alex Gorbachev
On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev wrote: > On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote: >> Alex Gorbachev wrote on 08/02/2016 07:56 AM: >>> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote: >>>> On Tue, Aug 2, 2016 at 3:49 P

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-04 Thread Alex Gorbachev
On Wed, Aug 3, 2016 at 10:54 AM, Alex Gorbachev wrote: > On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev > wrote: >> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote: >>> Alex Gorbachev wrote on 08/02/2016 07:56 AM: >>>> On Tue, Aug 2, 2016 at 9:56

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-05 Thread Alex Gorbachev
On Tuesday, August 2, 2016, Ilya Dryomov wrote: > On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev > wrote: > > On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin > wrote: > >> Alex Gorbachev wrote on 08/01/2016 04:05 PM: > >>> Hi Ilya, > >>

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-07 Thread Alex Gorbachev
> I'm confused. How can a 4M discard not free anything? It's either > going to hit an entire object or two adjacent objects, truncating the > tail of one and zeroing the head of another. Using rbd diff: > > $ rbd diff test | grep -A 1 25165824 > 25165824 4194304 data > 29360128 4194304 data >

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-07 Thread Alex Gorbachev
ing. Thank you for your input, it is very practical and helpful long term. Alex > > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-13 Thread Alex Gorbachev
On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote: > On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev > wrote: >>> I'm confused. How can a 4M discard not free anything? It's either >>> going to hit an entire object or two adjacent objects, truncating the >&

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-13 Thread Alex Gorbachev
On Sat, Aug 13, 2016 at 12:36 PM, Alex Gorbachev wrote: > On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote: >> On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev >> wrote: >>>> I'm confused. How can a 4M discard not free anything? It's either >>>&

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-18 Thread Alex Gorbachev
On Sat, Aug 13, 2016 at 4:51 PM, Alex Gorbachev wrote: > On Sat, Aug 13, 2016 at 12:36 PM, Alex Gorbachev > wrote: >> On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote: >>> On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev >>> wrote: >>>>> I'm

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-08-20 Thread Alex Gorbachev
On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev wrote: > On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов wrote: >> Guys, >> >> This bug is hitting me constantly, may be once per several days. Does >> anyone know is there a solution already? > > > I see ther

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-20 Thread Alex Gorbachev
Hi Nick, On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk wrote: >> -Original Message- >> From: w...@globe.de [mailto:w...@globe.de] >> Sent: 21 July 2016 13:23 >> To: n...@fisk.me.uk; 'Horace Ng' >> Cc: ceph-users@lists.ceph.com >> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Perfo

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Alex Gorbachev
CephFS/Ganesha. Thanks for your very valuable info on analysis and hw build. Alex > > > > Am 21.08.2016 um 09:31 schrieb Nick Fisk >: > > >> -Original Message- > >> From: Alex Gorbachev [mailto:a...@iss-integration.com ] > >> Sent: 21 August

Re: [ceph-users] Kernel mounted RBD's hanging

2017-06-29 Thread Alex Gorbachev
h side? Do you check the ceph.log for any anomalies? Any occurrences on OSD nodes, anything in their OSD logs or syslogs? Aany odd page cache settings on the clients? Alex > > Thanks, > Nick > > ___ > ceph-users mailing list &g

Re: [ceph-users] Multi Tenancy in Ceph RBD Cluster

2017-06-29 Thread Alex Gorbachev
___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Kernel mounted RBD's hanging

2017-06-30 Thread Alex Gorbachev
On Fri, Jun 30, 2017 at 8:12 AM Nick Fisk wrote: > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 30 June 2017 03:54 > *To:* Ceph Users ; n...@fisk.me.uk > > > *Subject:* Re: [ceph-users] Kernel mounted RBD's hanging > > > > > > O

Re: [ceph-users] iSCSI production ready?

2017-07-19 Thread Alex Gorbachev
things, > small people talk ... about other people. > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ce

Re: [ceph-users] RBD encryption options?

2017-08-24 Thread Alex Gorbachev
for dm-crypt as well. Regards, Alex > Any suggestions? > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
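A sketch of the dm-crypt route over a mapped RBD (pool/image and the mapping name are placeholders):
  rbd map pool/image                          # returns e.g. /dev/rbd0
  cryptsetup luksFormat /dev/rbd0             # initialize LUKS on the RBD block device
  cryptsetup luksOpen /dev/rbd0 secure-rbd    # exposes /dev/mapper/secure-rbd for mkfs and mount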

[ceph-users] PCIe journal benefit for SSD OSDs

2017-09-06 Thread Alex Gorbachev
using PCIe journals (e.g. Intel P3700 or even the older 910 series) in front of such SSDs? Thanks for any info you can share. -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Alex Gorbachev
ever run the odd releases as too risky. A good deal of functionality comes in updates, and usually the Ceph team brings them in gently, with the more experimental features off by default. I suspect the 9 month even cycle will also make it easier to perform more incremental upgrades, i.e. small ju

[ceph-users] mon health status gone from display

2017-09-15 Thread Alex Gorbachev
In Jewel and prior there was a health status for MONs in ceph -s JSON output, this seems to be gone now. Is there a place where a status of a given monitor is shown in Luminous? Thank you -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph
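Possible places to look in Luminous, though whether any of them carries the old per-MON health field is an assumption:
  ceph mon stat                        # quorum membership and current leader
  ceph quorum_status -f json-pretty    # per-monitor details and quorum view
  ceph mon dump                        # monmap with each monitor's name and address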

Re: [ceph-users] BlueStore questions about workflow and performance

2017-10-03 Thread Alex Gorbachev
it does seem stable. Hth, Alex >> >> _______ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] BlueStore questions about workflow and performance

2017-10-03 Thread Alex Gorbachev
Hi Mark, great to hear from you! On Tue, Oct 3, 2017 at 9:16 AM Mark Nelson wrote: > > > On 10/03/2017 07:59 AM, Alex Gorbachev wrote: > > Hi Sam, > > > > On Mon, Oct 2, 2017 at 6:01 PM Sam Huracan > <mailto:nowitzki.sa...@gmail.com>> wrote: > > >

[ceph-users] RBD Mirror between two separate clusters named ceph

2017-10-05 Thread Alex Gorbachev
configuration work? Thank you, -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] RBD Mirror between two separate clusters named ceph

2017-10-05 Thread Alex Gorbachev
> > On Thu, Oct 5, 2017 at 7:45 PM, Alex Gorbachev > wrote: >> I am testing rbd mirroring, and have two existing clusters named ceph >> in their ceph.conf. Each cluster has a separate fsid. On one >> cluster, I renamed ceph.conf into remote-mirror.conf and >

Re: [ceph-users] Backup VM (Base image + snapshot)

2017-10-15 Thread Alex Gorbachev
tp://ceph.com/geen-categorie/incremental-snapshots-with-rbd/ http://docs.ceph.com/docs/master/dev/rbd-export/ http://cephnotes.ksperis.com/blog/2014/08/12/rbd-replication -- Alex Gorbachev Storcium > 2.- Is it possible to export BaseImage in qcow2 format and snapshots in > qcow2 format as well a
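In outline, the incremental scheme those links describe (pool, image, snapshot and conf names are placeholders; the destination image must already exist and match in size):
  rbd snap create pool/image@snap1
  rbd export-diff pool/image@snap1 - | rbd -c backup.conf import-diff - pool/image
  rbd snap create pool/image@snap2
  rbd export-diff --from-snap snap1 pool/image@snap2 - | rbd -c backup.conf import-diff - pool/image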

Re: [ceph-users] Changing device-class using crushtool

2018-01-15 Thread Alex Gorbachev
ice-class using crushtool, > is that correct? This is how we do it in Storcium based on http://docs.ceph.com/docs/master/rados/operations/crush-map/ ceph osd crush rm-device-class ceph osd crush set-device-class -- Best regards, Alex Gorbachev Storcium > > Wido > __
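Filled out with placeholder arguments (osd.7 and the nvme class are illustrative, not from the thread):
  ceph osd crush rm-device-class osd.7          # clear the auto-detected class first
  ceph osd crush set-device-class nvme osd.7    # then assign the desired class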

Re: [ceph-users] Ceph Future

2018-01-15 Thread Alex Gorbachev
eed the standard control tools available from their web sites, as well as hardware that supports SGPIO (most enterprise JBODs and drives do). There's likely similar options to other HBAs. Areca: UID on: cli64 curctrl=1 set password= cli64 curctrl= disk identify drv= UID OFF: cli64 cur

[ceph-users] CRUSH map cafe or CRUSH map generator

2018-01-16 Thread Alex Gorbachev
tices and simple use cases, which could be automated this way. -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Two datacenter resilient design with a quorum site

2018-01-16 Thread Alex Gorbachev
in case of a permanent failure of the main site (with two replicas), how to manually force the other site (with one replica and MON) to provide storage? I would think a CRUSH map change and modifying ceph.conf to include just one MON, then build two more MONs locally and add? -- Alex Gorbachev Storc

Re: [ceph-users] Two datacenter resilient design with a quorum site

2018-01-18 Thread Alex Gorbachev
On Tue, Jan 16, 2018 at 2:17 PM, Gregory Farnum wrote: > On Tue, Jan 16, 2018 at 6:07 AM Alex Gorbachev > wrote: >> >> I found a few WAN RBD cluster design discussions, but not a local one, >> so was wonderinng if anyone has experience with a resilience-oriented &

Re: [ceph-users] Ideal Bluestore setup

2018-01-24 Thread Alex Gorbachev
ortant. I would avoid both bcache and tiering to simplify the configuration, and seriously consider larger nodes if possible, and more OSD drives. HTH, -- Alex Gorbachev Storcium > > Thanks in advance for your advice! > > Best, > Ean > > > > > > -- >

Re: [ceph-users] Newbie question: stretch ceph cluster

2018-02-16 Thread Alex Gorbachev
-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > -- > SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB > 21284 (AG Nürnberg) > > > ___ > ceph-users mailing list > ceph-

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Alex Gorbachev
t’s been for a long time and I’m reluctant to fiddle any > further. > > > > But as mentioned above, thick vmdk’s with vaai might be a really good fit. > Any chance thin vs. thick difference could be related to discards? I saw zillions of them in recent testing. > > > Thanks for

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Alex Gorbachev
dahead. I need it to stream to LTO6 tape. >> Depending on what you are doing this may or may not be required. >> > > Ah, yes. I a kind of similar use-case I went for using 64MB objects > underneath a RBD device. We needed high sequential Write and Read performance > on
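A sketch of such a udev rule, matching the large-object/readahead use case above (the 65536 KB value is illustrative):
  # /etc/udev/rules.d/99-rbd-readahead.rules
  KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", ACTION=="add|change", ATTR{queue/read_ahead_kb}="65536"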

[ceph-users] Storcium has been certified by VMWare

2016-08-26 Thread Alex Gorbachev
;page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-03 Thread Alex Gorbachev
HI Nick, On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk wrote: > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 21 August 2016 15:27 > *To:* Wilhelm Redbrake > *Cc:* n...@fisk.me.uk; Horace Ng ; ceph-users < > ceph-users@lists.ceph.com> > *Subject

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-03 Thread Alex Gorbachev
On Saturday, September 3, 2016, Alex Gorbachev wrote: > HI Nick, > > On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk > wrote: > >> *From:* Alex Gorbachev [mailto:a...@iss-integration.com >> ] >> *Sent:* 21 August 2016 15:27 >> *To:* Wilhelm Redbrake >

[ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Alex Gorbachev
-recommends install -o Dpkg::Options::=--force-confnew ceph-osd ceph-mds ceph-mon radosgw -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Alex Gorbachev
Confirmed - older version of ceph-deploy is working fine. Odd as there is a large number of Hammer users out there. Thank you for the explanation and fix. -- Alex Gorbachev Storcium On Fri, Sep 9, 2016 at 12:15 PM, Vasu Kulkarni wrote: > There is a known issue with latest ceph-deploy w

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-10 Thread Alex Gorbachev
-on-nfs-vs.html ) Alex > > From: Alex Gorbachev [mailto:a...@iss-integration.com] > Sent: 04 September 2016 04:45 > To: Nick Fisk > Cc: Wilhelm Redbrake ; Horace Ng ; > ceph-users > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > > &

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Alex Gorbachev
On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk wrote: > > > > > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 04 September 2016 04:45 > *To:* Nick Fisk > *Cc:* Wilhelm Redbrake ; Horace Ng ; > ceph-users > *Subject:* Re: [ceph-users] Ceph + VMwa

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Alex Gorbachev
-- Alex Gorbachev Storcium On Sun, Sep 11, 2016 at 12:54 PM, Nick Fisk wrote: > > > > > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 11 September 2016 16:14 > > *To:* Nick Fisk > *Cc:* Wilhelm Redbrake ; Horace Ng ; > ceph-users >

Re: [ceph-users] Ceph + VMWare

2016-10-06 Thread Alex Gorbachev
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry wrote: > Hey guys, > > Starting to buckle down a bit in looking at how we can better set up > Ceph for VMWare integration, but I need a little info/help from you > folks. > > If you currently are using Ceph+VMWare, or are exploring the option, > I'd

Re: [ceph-users] Ceph + VMWare

2016-10-18 Thread Alex Gorbachev
out optimal workloads under highly varied use cases. I see better results with NVMe journals and write combining HBAs, e.g. Areca. Regards, Alex > Regards, > > Frédéric. > > Le 06/10/2016 à 16:01, Alex Gorbachev a écrit : > > On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry

Re: [ceph-users] Monitoring Overhead

2016-10-25 Thread Alex Gorbachev
> > > > ___ > > > ceph-users mailing list > > > ceph-users@lists.ceph.com > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > > > ___ > > ceph-users mailing list >

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-30 Thread Alex Gorbachev
> ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
