Hello, seeing issues with OSDs stalling and error messages such as:
2015-06-04 06:48:17.119618 7fc932d59700 0 -- 10.80.4.15:6820/3501 >> 10.80.4.30:6811/3003603 pipe(0xb6b4000 sd=19 :33085 s=1 pgs=311 cs=4 l=0 c=0x915c6e0).connect claims to be 10.80.4.30:6811/4106 not 10.80.4.30:6811/3003603 -
I wonder if your issue is related to:
http://tracker.ceph.com/issues/5195
"I had to add the new monitor to the local ceph.conf file and push
that with "ceph-deploy --overwrite-conf config push " to all
cluster hosts and I had to issue "ceph mon add " on one of
the existing cluster monitors"
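For reference, a minimal sketch of that procedure (the host names, the monitor
name "mon3" and its address are examples only, not taken from the tracker issue):

# 1. Add the new monitor to ceph.conf (mon_initial_members / mon_host),
#    then push the updated config to all cluster hosts:
ceph-deploy --overwrite-conf config push node1 node2 node3
# 2. Register the new monitor with the existing quorum (run on an existing MON host):
ceph mon add mon3 10.0.0.40:6789
# 3. Verify that the new monitor has joined:
ceph quorum_status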
Reg
I would not do this: MONs are very important, and any load or stability
issues on OSD nodes would interfere with cluster uptime. I have found
it acceptable to run MONs on virtual machines with local storage. But
since MONs oversee OSD nodes, I believe combining them is a recipe for
disaster, FWIW.
ce to faults within the switch core, which is really only
detectable at application layer.
Am I missing an already existing feature? Please advise.
Best regards,
Alex Gorbachev
Intelligent Systems Services Inc.
s as paths, rather than links, as these are higher
level object storage exchanges.
Thank you,
Alex
>
> Nick
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Alex Gorbachev
>> Sent: 27 June 2015 19:02
>&g
oss
> > contaminate in any way.
>
> Probably implementing something like Multipath TCP would be the best bet to
> mirror the traditional dual-fabric SAN design.
>
> Assuming http://www.multipath-tcp.org/ and http://lwn.net/Articles/544399/
Looks very interesting.
>
Hello, we are experiencing severe OSD timeouts; the OSDs are not taken out, and
we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9.
Thank you for any advice.
Alex
Jul 3 03:42:06 roc-4r-sca020 kernel: [554036.261899] BUG: unable to handle
kernel paging request at 0019001c
J
ow to set it
> correctly...
>
> Jan
>
>
> On 03 Jul 2015, at 10:16, Alex Gorbachev wrote:
>
> Hello, we are experiencing severe OSD timeouts, OSDs are not taken out and
> we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9.
>
> Thank you for any advice.
Hi Jiwan,
On Sat, Jul 11, 2015 at 4:44 PM, Jiwan N wrote:
> Hi Ceph-Users,
>
> I am quite new to Ceph Storage (storage tech in general). I have been
> investigating Ceph to understand the precise process clearly.
>
> *Q: What actually happens when I create a block image of a certain size?*
>
> Th
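A quick way to see what image creation actually does (a sketch; the pool and
image names are just examples): an RBD image is thin-provisioned, so creating it
only writes a little metadata and no data objects until something writes to it.

# Create a 10 GB image in pool "rbd" (names are examples):
rbd create rbd/testimg --size 10240
# The image reports its full provisioned size:
rbd info rbd/testimg
# ...but no data extents are allocated yet:
rbd diff rbd/testimg | awk '{ SUM += $2 } END { print SUM/1024 " KB" }'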
FWIW: based on the excellent research by Mark Nelson (
http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/)
we have dropped SSD journals altogether and instead went with a battery-protected
controller writeback cache.
Benefits:
- No negative force multiplier
May I suggest also checking the error counters on your network switch?
Check speed and duplex. Is bonding in use? Is flow control on? Can you
swap the network cable? Can you swap a NIC with another node, and does the
problem follow?
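As a sketch, these are the kinds of commands I would run on the Linux side
(eth0 and bond0 are example interface names):

# Link speed, duplex and negotiation state:
ethtool eth0
# NIC error and drop counters (look for rx/tx errors and CRC errors):
ethtool -S eth0
ip -s link show eth0
# Flow control (pause frame) settings:
ethtool -a eth0
# Bonding status, if bonding is in use:
cat /proc/net/bonding/bond0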
Hth, Alex
On Friday, July 17, 2015, Steve Thompson wrote:
>
:
> What’s the value of /proc/sys/vm/min_free_kbytes on your system? Increase
> it to 256M (better do it if there’s lots of free memory) and see if it
> helps.
> It can also be set too high, hard to find any formula how to set it
> correctly...
>
> Jan
>
>
> On 03 Jul
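For anyone following along, a sketch of applying Jan's suggestion (262144 KB is
the 256M value he mentions; treat it as a starting point, not a verified tuning):

# Check the current value (in KB):
cat /proc/sys/vm/min_free_kbytes
# Raise it to 256M on the running system:
sysctl -w vm.min_free_kbytes=262144
# Persist it across reboots:
echo "vm.min_free_kbytes = 262144" >> /etc/sysctl.conf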
Hi Nick,
On Thu, Aug 13, 2015 at 4:37 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Nick Fisk
>> Sent: 13 August 2015 18:04
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] How to improve single thread se
What about https://github.com/Frontier314/EnhanceIO? Last commit 2
months ago, but no external contributors :(
The nice thing about EnhanceIO is there is no need to change device
name, unlike bcache, flashcache etc.
Best regards,
Alex
On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz wrote:
r's VM, and that customer didn't have a strong
> technical background to be able to fiddle with it...
> So I haven't tested it heavily.
>
> Bcache should be the obvious choice if you are in control of the environment.
> At least you can cry on LKML's shoulder whe
> IE, should we be focusing on IOPS? Latency? Finding a way to avoid journal
> overhead for large writes? Are there specific use cases where we should
> specifically be focusing attention? general iscsi? S3? databases directly
> on RBD? etc. There's tons of different areas that we can work on
>
> Just to update the mailing list, we ended up going back to the default
> ceph.conf without any settings beyond what is mandatory. We are
> now reaching speeds we never reached before, both in recovery and in
> regular usage. There was definitely something we set in the ceph.conf
> bogging
Hello, this is an issue we have been suffering from and researching,
along with a good number of other Ceph users, as evidenced by the
recent posts. In our specific case, these issues manifest themselves
in an RBD -> iSCSI LIO -> ESXi configuration, but the problem is more
general.
When there is an
T seem to be pretty stable in testing.
>>
>> Nick
>>
>>> -Original Message-
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>> Alex Gorbachev
>>> Sent: 23 August 2015 02:17
>>> To: ceph-users
>>>
is done to those OSDs. I am thankful this is not production
storage, but worried about this situation in production - the OSDs are
staying up and in, but their latencies are slowing cluster-wide IO to a
crawl. I am trying to envision this situation in production and how
one would find out what is slo
Hi Patrick,
On Thu, Aug 27, 2015 at 12:00 PM, Patrick McGarry
wrote:
> Just a reminder that our Performance Ceph Tech Talk with Mark Nelson
> will be starting in 1 hour.
>
> If you are unable to attend there will be a recording posted on the
> Ceph YouTube channel and linked from the page at:
>
We have experienced a repeatable issue when performing the following:
The Ceph backend shows no issues, and we can repeat this at will in lab and
production: cloning an ESXi VM to another VM on the same datastore on
which the original VM resides. Practically instantly, the LIO machine
becomes unrespon
On Thu, Sep 3, 2015 at 6:58 AM, Jan Schermer wrote:
> EnhanceIO? I'd say get rid of that first and then try reproducing it.
Jan, EnhanceIO has not been used in this case; in fact, we have never
had a problem with it in read-cache mode.
Thank you,
Alex
>
> Jan
>
>> On 03 S
On Thu, Sep 3, 2015 at 3:20 AM, Nicholas A. Bellinger
wrote:
> (RESENDING)
>
> On Wed, 2015-09-02 at 21:14 -0400, Alex Gorbachev wrote:
>> e have experienced a repeatable issue when performing the following:
>>
>> Ceph backend with no issues, we can repeat
Hello,
We have run into an OSD crash this weekend with the following dump. Please
advise what this could be.
Best regards,
Alex
2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >>
10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251
l=0 c=0x10d34580).f
Hi Brad,
This occurred on a system under moderate load - it has not happened since, and
I do not know how to reproduce it.
Thank you,
Alex
On Tue, Sep 22, 2015 at 7:29 PM, Brad Hubbard wrote:
> - Original Message -
>
> > From: "Alex Gorbachev"
> > To: "ce
Please review http://docs.ceph.com/docs/master/rados/operations/crush-map/
regarding weights
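As a sketch (osd.3 and the weight value are illustrative; the CRUSH weight is
normally proportional to drive capacity in TB, so roughly 0.12 for a 120 GB disk):

# Show the current CRUSH tree and weights:
ceph osd tree
# Adjust the CRUSH weight of a single OSD:
ceph osd crush reweight osd.3 0.12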
Best regards,
Alex
On Wed, Sep 23, 2015 at 3:08 AM, wikison wrote:
> Hi,
> I have four machines to use as storage nodes in a Ceph storage
> cluster. Each of them has a 120 GB HDD attached
We had multiple issues with 4TB drives and delays. Here is the
configuration that works for us fairly well on Ubuntu (but we are about to
significantly increase the IO load so this may change).
NTP: always use NTP and make sure it is working - Ceph is very sensitive to
time being precise
/etc/de
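On the NTP point above, a quick check that each node is actually synchronized
(a sketch; chrony users would use "chronyc sources" instead):

# Look for a '*' next to the selected time source and a small offset:
ntpq -pn
# Ceph itself will flag clock problems:
ceph health detail | grep -i clock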
GbE networking seems to be helping a lot; it could be just the superior
response of a higher-end switch.
The blk_mq scheduler has been reported to improve performance on random
IO.
Good luck!
--
Alex Gorbachev
Storcium
On Sun, Nov 8, 2015 at 5:07 PM, Timofey Titovets
wrote:
&
e should
be helpful as well to add robustness to the Ceph networking backend.
Best regards, Alex
>
> Thanks for feedback and regards . Götz
>
>
>
--
--
Alex Gorbachev
Storcium
I have found this link
http://tracker.ceph.com/issues/12911 but am not sure whether the patch should
already be in Hammer, or how to get it.
System: ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
Ubuntu 14.04.3 kernel 4.2.1-040201-generic
Thank you
--
Alex Gorbachev
Sto
Hi Josh,
On Mon, Dec 7, 2015 at 6:50 PM, Josh Durgin wrote:
> On 12/07/2015 03:29 PM, Alex Gorbachev wrote:
>
>> When trying to merge two results of rbd export-diff, the following error
>> occurs:
>>
>> iss@lab2-b1:~$ rbd export-diff --from-snap autosn
--
--
Alex Gorbachev
Storcium
> On Friday, May 13, 2016, Mike Jacobacci wrote:
> Hello,
>
> I have a quick and probably dumb question… We would like to use Ceph
> for our storage, I was thinking of a cluster with 3 Monitor and OSD
> nodes. I was wondering if it was a bad idea to start a Ceph cluster
>>
by clients' IO load.
https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTLogicalUnit
https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTTarget
https://github.com/akurz/resource-agents/blob/SCST/heartbeat/iscsi-scstd
--
Alex Gorbachev
http://www.iss-integratio
ith hostname.
> >
> >
> > Or host bucket name does no matter?
> >
> >
> >
> > Best regards,
> >
> > Xiucai
>
> --
> Christian Balzer    Network/Systems Engineer
> ch...@gol.com Global OnLine
ful restore.
Best regards,
Alex
--
--
Alex Gorbachev
Storcium
turned off CFQ and blk-mq/scsi-mq and are using
just the noop scheduler.
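For reference, the scheduler switch was done per device along these lines
(a sketch; sdb is an example device name):

# Show available schedulers; the active one is shown in brackets:
cat /sys/block/sdb/queue/scheduler
# Switch the device to noop:
echo noop > /sys/block/sdb/queue/scheduler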
Does the ceph kernel code somehow use the fair scheduler code block?
Thanks
--
Alex Gorbachev
Storcium
Jun 28 09:46:41 roc04r-sca090 kernel: [137912.684974] CPU: 30 PID:
10403 Comm: ceph-osd Not tainted 4.4.13-040413
m Bishop :
>
> Yes - I noticed this today on Ubuntu 16.04 with the default kernel. No
> useful information to add other than it's not just you.
>
> Tim.
>
> On Tue, Jun 28, 2016 at 11:05:40AM -0400, Alex Gorbachev wrote:
>
> After upgrading to kernel 4.4.13 on Ubun
>>> It's quite a long-standing issue that has only just been
>>> resolved; another user chimed in on the lkml thread a couple of days
>>> ago as well, and again his trace had ceph-osd in it.
>>>
>>> https://lkml.org/lkml/headers/2016/6/21/491
>>>
>>
I looked at the fair.c code in the 4.4.14 kernel source tree, and it is
quite different from Peter's patch (assuming 4.5.x source), so the
patch does not apply cleanly. Maybe another 4.4.x kernel will get the
update.
Thanks,
Alex
>
> On 29 June 2016 at 18:29, Stefan Priebe - Profihost AG
> w
Hi Nick,
On Fri, Jul 1, 2016 at 2:11 PM, Nick Fisk wrote:
> However, there are a number of pain points with iSCSI + ESXi + RBD and they
> all mainly centre on write latency. It seems VMFS was designed around the
> fact that Enterprise storage arrays service writes in 10-100us, whereas Ceph
ent deduplication.
HTH,
Alex
>
>
> Any advice is greatly appreciated.
>
> Thanks,
> Brendan
>
--
--
Alex Gorbachev
Storcium
re several options for those.
Currently running 3 VMware clusters with 15 hosts total, and things are
quite decent.
Regards,
Alex Gorbachev
Storcium
>
> Thank you !
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
>
>
/lkml/2016/7/12/919
https://lkml.org/lkml/2016/7/12/297
--
Alex Gorbachev
Storcium
>
> 2016-07-05 11:47 GMT+03:00 Nick Fisk :
>>> -Original Message-
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>> Alex Gorbachev
>>
output - this means
that discard is being sent to the backing (RBD) device, correct?
Including the ceph-users list to see if there is a reason RBD is not
processing this discard/unmap.
Thank you,
--
Alex Gorbachev
Storcium
Jul 26 08:23:38 e1 kernel: [ 858.324715] [20426]: scst:
scst_cmd_done_
with UNMAP)
- blkdiscard does release the space
--
Alex Gorbachev
Storcium
On Wed, Jul 27, 2016 at 11:55 AM, Alex Gorbachev
wrote:
> Hi Vlad,
>
> On Mon, Jul 25, 2016 at 10:44 PM, Vladislav Bolkhovitin wrote:
>> Hi,
>>
>> I would suggest to rebuild SCST in the deb
Hi Vlad,
On Wednesday, July 27, 2016, Vladislav Bolkhovitin wrote:
>
> Alex Gorbachev wrote on 07/27/2016 10:33 AM:
> > One other experiment: just running blkdiscard against the RBD block
> > device completely clears it, to the point where the rbd-diff method
> > report
>
> On Wednesday, July 27, 2016, Vladislav Bolkhovitin wrote:
>>
>>
>> Alex Gorbachev wrote on 07/27/2016 10:33 AM:
>> > One other experiment: just running blkdiscard against the RBD block
>> > device completely clears it, to the point where the rbd-diff
# blkdiscard -o 0 -l 4096000 /dev/rbd28
root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END { print SUM/1024 " KB" }'
819200 KB
root@e1:/var/log# blkdiscard -o 0 -l 4096 /dev/rbd28
root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END { print
Hi Ilya,
On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
> wrote:
>> RBD illustration showing RBD ignoring discard until a certain
>> threshold - why is that? This behavior is unfortunately incompatible
>> with ESXi
On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
>> Hi Ilya,
>>
>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
>>> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
>>> wrote:
>>>
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote:
> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev
> wrote:
>> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
>>> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
>>>> Hi Ilya,
>>>>
>
On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote:
> Alex Gorbachev wrote on 08/02/2016 07:56 AM:
>> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote:
>>> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev
>>> wrote:
>>>> On Mon, Aug 1,
On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev wrote:
> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote:
>> Alex Gorbachev wrote on 08/02/2016 07:56 AM:
>>> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote:
>>>> On Tue, Aug 2, 2016 at 3:49 P
On Wed, Aug 3, 2016 at 10:54 AM, Alex Gorbachev
wrote:
> On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev
> wrote:
>> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin wrote:
>>> Alex Gorbachev wrote on 08/02/2016 07:56 AM:
>>>> On Tue, Aug 2, 2016 at 9:56
On Tuesday, August 2, 2016, Ilya Dryomov wrote:
> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev > wrote:
> > On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin > wrote:
> >> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
> >>> Hi Ilya,
> >>
> I'm confused. How can a 4M discard not free anything? It's either
> going to hit an entire object or two adjacent objects, truncating the
> tail of one and zeroing the head of another. Using rbd diff:
>
> $ rbd diff test | grep -A 1 25165824
> 25165824 4194304 data
> 29360128 4194304 data
>
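To illustrate the alignment point with the same image (a sketch; rbd28,
spin1/testdis and the default 4 MiB object size are assumptions): space is only
freed when a discard covers a whole object or an object tail, so an
object-aligned 4 MiB discard shows up in rbd diff while a 4 KB one does not.

# Discard one full, object-aligned 4 MiB range (offset and length in bytes):
blkdiscard -o 4194304 -l 4194304 /dev/rbd28
# Allocated extents drop by one object:
rbd diff spin1/testdis | awk '{ SUM += $2 } END { print SUM/1024 " KB" }'
# A 4 KB discard inside an object can only zero data; it cannot free the object:
blkdiscard -o 0 -l 4096 /dev/rbd28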
ing.
Thank you for your input; it is very practical and helpful long term.
Alex
>
>
--
--
Alex Gorbachev
Storcium
On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote:
> On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev
> wrote:
>>> I'm confused. How can a 4M discard not free anything? It's either
>>> going to hit an entire object or two adjacent objects, truncating the
>&
On Sat, Aug 13, 2016 at 12:36 PM, Alex Gorbachev
wrote:
> On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote:
>> On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev
>> wrote:
>>>> I'm confused. How can a 4M discard not free anything? It's either
>>>&
On Sat, Aug 13, 2016 at 4:51 PM, Alex Gorbachev
wrote:
> On Sat, Aug 13, 2016 at 12:36 PM, Alex Gorbachev
> wrote:
>> On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov wrote:
>>> On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev
>>> wrote:
>>>>> I'm
On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev
wrote:
> On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов wrote:
>> Guys,
>>
>> This bug is hitting me constantly, may be once per several days. Does
>> anyone know is there a solution already?
>
>
> I see ther
Hi Nick,
On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk wrote:
>> -Original Message-
>> From: w...@globe.de [mailto:w...@globe.de]
>> Sent: 21 July 2016 13:23
>> To: n...@fisk.me.uk; 'Horace Ng'
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Perfo
CephFS/Ganesha.
Thanks for your very valuable info on analysis and hw build.
Alex
>
>
>
> Am 21.08.2016 um 09:31 schrieb Nick Fisk >:
>
> >> -Original Message-
> >> From: Alex Gorbachev [mailto:a...@iss-integration.com ]
> >> Sent: 21 August
h side?
Do you check the ceph.log for any anomalies?
Any occurrences on OSD nodes, anything in their OSD logs or syslogs?
Any odd page cache settings on the clients?
Alex
>
> Thanks,
> Nick
>
--
--
Alex Gorbachev
Storcium
On Fri, Jun 30, 2017 at 8:12 AM Nick Fisk wrote:
> *From:* Alex Gorbachev [mailto:a...@iss-integration.com]
> *Sent:* 30 June 2017 03:54
> *To:* Ceph Users ; n...@fisk.me.uk
>
>
> *Subject:* Re: [ceph-users] Kernel mounted RBD's hanging
>
>
>
>
>
> O
things,
> small people talk ... about other people.
--
--
Alex Gorbachev
Storcium
for dm-crypt as well.
Regards,
Alex
> Any suggestions?
>
>
--
--
Alex Gorbachev
Storcium
using PCIe journals (e.g. Intel P3700 or even the
older 910 series) in front of such SSDs?
Thanks for any info you can share.
--
Alex Gorbachev
Storcium
ever run the odd releases as too risky.
A good deal of functionality comes in updates, and usually the Ceph team
brings it in gently, with the more experimental features off by default.
I suspect the 9-month even-release cycle will also make it easier to perform more
incremental upgrades, i.e. small ju
In Jewel and prior there was a health status for MONs in the ceph -s JSON
output; this seems to be gone now. Is there a place where the status of
a given monitor is shown in Luminous?
Thank you
--
Alex Gorbachev
Storcium
it does seem stable.
Hth, Alex
--
--
Alex Gorbachev
Storcium
Hi Mark, great to hear from you!
On Tue, Oct 3, 2017 at 9:16 AM Mark Nelson wrote:
>
>
> On 10/03/2017 07:59 AM, Alex Gorbachev wrote:
> > Hi Sam,
> >
> > On Mon, Oct 2, 2017 at 6:01 PM Sam Huracan wrote:
> >
>
configuration work?
Thank you,
--
Alex Gorbachev
Storcium
>
> On Thu, Oct 5, 2017 at 7:45 PM, Alex Gorbachev
> wrote:
>> I am testing rbd mirroring, and have two existing clusters named ceph
>> in their ceph.conf. Each cluster has a separate fsid. On one
>> cluster, I renamed ceph.conf into remote-mirror.conf and
>
tp://ceph.com/geen-categorie/incremental-snapshots-with-rbd/
http://docs.ceph.com/docs/master/dev/rbd-export/
http://cephnotes.ksperis.com/blog/2014/08/12/rbd-replication
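The core pattern from those links, as a sketch (pool, image and snapshot names
are examples): take periodic snapshots and ship only the deltas with
export-diff / import-diff.

# One-time full copy of the image:
rbd export rbd/baseimage baseimage.img
# Later, create a new snapshot and export only the changes since the last one:
rbd snap create rbd/baseimage@snap2
rbd export-diff --from-snap snap1 rbd/baseimage@snap2 snap1-to-snap2.diff
# Apply the delta to the copy on the destination cluster:
rbd import-diff snap1-to-snap2.diff rbd/baseimage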
--
Alex Gorbachev
Storcium
> 2.- Is it possible to export BaseImage in qcow2 format and snapshots in
> qcow2 format as well a
ice-class using crushtool,
> is that correct?
This is how we do it in Storcium based on
http://docs.ceph.com/docs/master/rados/operations/crush-map/
ceph osd crush rm-device-class
ceph osd crush set-device-class
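The angle-bracket arguments were stripped above; as a sketch with example values
(osd.12 and the "ssd" class are illustrative):

# Remove any existing device class from the OSD, then assign the new one:
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12
# Verify the class assignment:
ceph osd tree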
--
Best regards,
Alex Gorbachev
Storcium
>
> Wido
> __
eed the standard control tools available from their
web sites, as well as hardware that supports SGPIO (most enterprise
JBODs and drives do). There are likely similar options for other HBAs.
Areca:
UID on:
cli64 curctrl=1 set password=
cli64 curctrl= disk identify drv=
UID OFF:
cli64 cur
tices and simple
use cases, which could be automated this way.
--
Alex Gorbachev
Storcium
in case of a permanent failure of the main site (with two
replicas), how does one manually force the other site (with one replica and
MON) to provide storage? I would think a CRUSH map change and
modifying ceph.conf to include just the one MON, then building two more MONs
locally and adding them?
--
Alex Gorbachev
Storc
On Tue, Jan 16, 2018 at 2:17 PM, Gregory Farnum wrote:
> On Tue, Jan 16, 2018 at 6:07 AM Alex Gorbachev
> wrote:
>>
>> I found a few WAN RBD cluster design discussions, but not a local one,
>> so was wondering if anyone has experience with a resilience-oriented
&
ortant.
I would avoid both bcache and tiering to simplify the configuration,
and seriously consider larger nodes if possible, and more OSD drives.
HTH,
--
Alex Gorbachev
Storcium
>
> Thanks in advance for your advice!
>
> Best,
> Ean
>
>
>
>
>
> --
>
>
> --
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB
> 21284 (AG Nürnberg)
>
>
t’s been for a long time and I’m reluctant to fiddle any
> further.
>
>
>
> But as mentioned above, thick vmdk’s with vaai might be a really good fit.
>
Any chance the thin vs. thick difference could be related to discards? I saw
zillions of them in recent testing.
>
>
> Thanks for
dahead. I need it to stream to LTO6 tape.
>> Depending on what you are doing this may or may not be required.
>>
>
> Ah, yes. In a kind of similar use case I went for 64 MB objects
> underneath an RBD device. We needed high sequential write and read performance
> on
;page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc
--
Alex Gorbachev
Storcium
Hi Nick,
On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk wrote:
> *From:* Alex Gorbachev [mailto:a...@iss-integration.com]
> *Sent:* 21 August 2016 15:27
> *To:* Wilhelm Redbrake
> *Cc:* n...@fisk.me.uk; Horace Ng ; ceph-users <
> ceph-users@lists.ceph.com>
> *Subject
On Saturday, September 3, 2016, Alex Gorbachev
wrote:
> HI Nick,
>
> On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk > wrote:
>
>> *From:* Alex Gorbachev [mailto:a...@iss-integration.com
>> ]
>> *Sent:* 21 August 2016 15:27
>> *To:* Wilhelm Redbrake >
-recommends install -o
Dpkg::Options::=--force-confnew ceph-osd ceph-mds ceph-mon radosgw
--
Alex Gorbachev
Storcium
Confirmed - older version of ceph-deploy is working fine. Odd as
there is a large number of Hammer users out there. Thank you for the
explanation and fix.
--
Alex Gorbachev
Storcium
On Fri, Sep 9, 2016 at 12:15 PM, Vasu Kulkarni wrote:
> There is a known issue with latest ceph-deploy w
-on-nfs-vs.html )
Alex
>
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 04 September 2016 04:45
> To: Nick Fisk
> Cc: Wilhelm Redbrake ; Horace Ng ;
> ceph-users
> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
>
>
>
>
&
On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk wrote:
>
>
>
>
> *From:* Alex Gorbachev [mailto:a...@iss-integration.com]
> *Sent:* 04 September 2016 04:45
> *To:* Nick Fisk
> *Cc:* Wilhelm Redbrake ; Horace Ng ;
> ceph-users
> *Subject:* Re: [ceph-users] Ceph + VMwa
--
Alex Gorbachev
Storcium
On Sun, Sep 11, 2016 at 12:54 PM, Nick Fisk wrote:
>
>
>
>
> *From:* Alex Gorbachev [mailto:a...@iss-integration.com]
> *Sent:* 11 September 2016 16:14
>
> *To:* Nick Fisk
> *Cc:* Wilhelm Redbrake ; Horace Ng ;
> ceph-users
>
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry wrote:
> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd
out optimal workloads under highly varied use cases. I see better results
with NVMe journals and write combining HBAs, e.g. Areca.
Regards,
Alex
> Regards,
>
> Frédéric.
>
> On 06/10/2016 at 16:01, Alex Gorbachev wrote:
>
> On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry
>
--
--
Alex Gorbachev
Storcium