On Mon, Nov 28, 2016 at 2:59 PM Ilya Dryomov wrote:
> On Mon, Nov 28, 2016 at 6:20 PM, Francois Blondel
> wrote:
> > Hi *,
> >
> > I am currently testing different scenarios to try to optimize sequential
> > read and write speeds using Kernel RBD.
> >
> > I have two block devices created with:
Referencing
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
When using --dmcrypt with ceph-deploy/ceph-disk, the journal device is not
allowed to be an existing partition. You have to specify the entire block
device, on which the tools create a partition equal to the osd journal size.
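As a rough illustration (hostnames and device names below are made up, not
from the original post), passing the whole journal device looks like:

  # the journal argument (sdc) is a whole device; an existing partition such
  # as sdc1 would be rejected when --dmcrypt is in use
  ceph-deploy osd prepare --dmcrypt osd-node1:sdb:sdc
  # roughly equivalent low-level form
  ceph-disk prepare --dmcrypt /dev/sdb /dev/sdc

ceph-disk then carves a journal partition of osd journal size out of the
whole device it was given.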
Hi Pierre,
On Mon, Dec 5, 2016 at 3:41 AM, Pierre BLONDEAU
wrote:
> Le 05/12/2016 à 05:14, Alex Gorbachev a écrit :
>> Referencing
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
>>
>> When using --dmcrypt with ceph-deploy/ceph-disk, t
Hi Joakim,
On Mon, Dec 5, 2016 at 1:35 PM wrote:
> Hello,
>
> I have a question regarding if Ceph is suitable for small scale
> deployments.
>
> > Let's say I have two machines, connected with Gbit LAN.
>
> I want to share data between them, like an ordinary NFS
> share, but with Ceph instead.
>
>
On Sat, Dec 31, 2016 at 5:38 PM Tyler Bishop
wrote:
> Enjoy the leap second guys.. lol your cluster gonna be skewed.
>
> Yep, pager went off right at dinner :)
>
iki/ZFS:_Tips_and_Tricks#Replacing_a_failed_disk_in_the_root_pool
I like the fact that a failed drive will be reported to the OS, which
is not always the case with hardware RAID controllers.
--
Alex Gorbachev
Storcium
in pagecache.
Dropping the caches syncs up the block with what's on "disk" and
everything is fine.
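For anyone following along, the cache drop referred to here is the standard
one (nothing Ceph-specific):

  sync
  echo 3 > /proc/sys/vm/drop_caches   # drop pagecache plus dentries and inodes

After that, reads from the block device again match what the OSDs actually hold.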
Working on simple steps to reproduce - Ceph is Luminous 12.2.7, the RHEL
client is Jewel 10.2.10-17.el7cp.
--
Alex Gorbachev
Storcium
On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>
>
> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
> wrote:
>>
>> I am not sure this is related to RBD, but in case it is, this would be an
>> important bug to fix.
>>
>> Running LVM on top of RBD,
On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
wrote:
> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>>
>>
>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
>> wrote:
>>>
>>> I am not sure this is related to RBD, but in case it
On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
wrote:
> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> wrote:
>> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>>>
>>>
>>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
>>> wrote:
On Thu, Jul 26, 2018 at 9:21 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> > wrote:
On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
>> > wrote:
>>
On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
>> > wrote:
>>
On Fri, Jul 27, 2018 at 9:33 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev
> wrote:
>>
>> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
>> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
>> > wrote:
>> >
-- Forwarded message --
From: Matt.Brown
Can you please add me to the ceph-storage slack channel? Thanks!
Me too, please
--
Alex Gorbachev
Storcium
- Matt Brown | Lead Engineer | Infrastructure Services – Cloud &
Compute | Target | 7000 Target Pkwy N., NCE-0706 | Broo
On Mon, Aug 13, 2018 at 10:25 AM, Ilya Dryomov wrote:
> On Mon, Aug 6, 2018 at 8:17 PM Ilya Dryomov wrote:
>>
>> On Mon, Aug 6, 2018 at 8:13 PM Ilya Dryomov wrote:
>> >
>> > On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
>> > wrote:
>>
rminal windows (showing all
>> CPUs and disks) for all nodes should be very telling, collecting and
>> graphing this data might work, too.
>>
>> My suspects would be deep scrubs and/or high IOPS spikes when this is
>> happening, starving out OSD processes (CPU wis
>> > >>> >> > > > Aside from various potential false positives with regards to
>> > >>> >> > > > spam my bet
>> > >>> >> > > > is that gmail's known dislike for attachments is the cause of
>> > >>> >> > > > th
t see any recent kernel updates with nbd.c
--
Alex Gorbachev
Storcium
32 sec
(REQUEST_SLOW)
2018-02-28 18:09:53.794882 7f6de8551700 0
mon.roc-vm-sc3c234@0(leader) e1 handle_command mon_command({"prefix":
"status", "format": "json"} v 0) v1
--
Alex Gorbachev
Storcium
.
This does not occur when compression is unset.
--
Alex Gorbachev
Storcium
>
> Subhachandra
>
> On Thu, Mar 1, 2018 at 6:18 AM, David Turner wrote:
>>
>> With default memory settings, the general rule is 1GB RAM/1TB OSD. If you
>> have a 4TB OSD, you should pl
tting
on the pool, nothing in OSD logs.
I replied to another compression thread. This makes sense since
compression is new, and in the past all such issues were reflected in
OSD logs and related to either network or OSD hardware.
Regards,
Alex
>
> On Thu, Mar 1, 2018 at 2:23 PM Alex Gorb
On Thu, Mar 1, 2018 at 10:57 PM, David Turner wrote:
> Blocked requests and slow requests are synonyms in ceph. They are 2 names
> for the exact same thing.
>
>
> On Thu, Mar 1, 2018, 10:21 PM Alex Gorbachev wrote:
>>
>> On Thu, Mar 1, 2018 at 2:47 PM, David Turner
On Fri, Mar 2, 2018 at 4:17 AM Maged Mokhtar wrote:
> On 2018-03-02 07:54, Alex Gorbachev wrote:
>
> On Thu, Mar 1, 2018 at 10:57 PM, David Turner
> wrote:
>
> Blocked requests and slow requests are synonyms in ceph. They are 2 names
> for the exact same thing.
>
>
On Fri, Mar 2, 2018 at 9:56 AM, Alex Gorbachev wrote:
>
> On Fri, Mar 2, 2018 at 4:17 AM Maged Mokhtar wrote:
>>
>> On 2018-03-02 07:54, Alex Gorbachev wrote:
>>
>> On Thu, Mar 1, 2018 at 10:57 PM, David Turner
>> wrote:
>>
>> Blocked requests an
On Mon, Mar 5, 2018 at 2:17 PM, Gregory Farnum wrote:
> On Thu, Mar 1, 2018 at 9:21 AM Max Cuttins wrote:
>>
>> I think this is a good question for everybody: how hard should it be to
>> delete a pool?
>>
>> We ask the user to give the pool name twice.
>> We ask to add "--yes-i-really-really-mean-it"
>> We ask to ad
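For reference, the safeguards being discussed look like this on a Luminous
cluster (the pool name is made up):

  # deletion is refused until the monitors explicitly allow it
  ceph tell mon.* injectargs '--mon-allow-pool-delete=true'
  # the pool name must be given twice, plus the confirmation flag
  ceph osd pool delete scratch-pool scratch-pool --yes-i-really-really-mean-it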
lumc1 [ERR] overall HEALTH_ERR 1 scrub
errors; Possible data damage: 1 pg inconsistent
ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)
--
Alex Gorbachev
Storcium
On Wed, Mar 7, 2018 at 8:37 PM, Alex Gorbachev wrote:
> On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius
> wrote:
>> Hi all, this issue already have been discussed in older threads and I've
>> already tried most of the solutions proposed in older threads.
>>
lsblk and /sys/block/nbdX/size, but not
in parted for a mounted filesystem.
Unmapping and remapping the NBD device shows the size in parted.
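A quick way to see the mismatch (pool, image and nbd device names are made up
for illustration):

  rbd resize --size 20480 spin1/nbdtest   # grow the image to 20 GB
  lsblk /dev/nbd0                         # new size is visible here
  cat /sys/block/nbd0/size                # and here (in 512-byte sectors)
  blockdev --getsize64 /dev/nbd0          # kernel agrees, in bytes
  parted /dev/nbd0 print                  # still reports the old size while the fs is mounted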
Thank you for any help
--
Alex Gorbachev
Storcium
On Sun, Mar 11, 2018 at 4:23 AM, Mykola Golub wrote:
> On Sat, Mar 10, 2018 at 08:25:15PM -0500, Alex Gorbachev wrote:
>> I am running into the problem described in
>> https://lkml.org/lkml/2018/2/19/565 and
>> https://tracker.ceph.com/issues/23137
>>
>> I wen
On Mon, Mar 5, 2018 at 11:20 PM, Brad Hubbard wrote:
> On Fri, Mar 2, 2018 at 3:54 PM, Alex Gorbachev
> wrote:
>> On Thu, Mar 1, 2018 at 10:57 PM, David Turner wrote:
>>> Blocked requests and slow requests are synonyms in ceph. They are 2 names
>>> for the exact
I've upgraded Mellanox
> drivers on one host, and just decided to check IB config (and the root cause was
> there: the adapter had switched into datagram mode). But if that hadn't been the
> reason, I would have been really lost.
>
> On 12 Mar 2018 at 9:39, "Alex Gorbachev"
> wrote
On Mon, Mar 12, 2018 at 12:21 PM, Alex Gorbachev
wrote:
> On Mon, Mar 12, 2018 at 7:53 AM, Дробышевский, Владимир
> wrote:
>>
>> I was following this conversation on tracker and got the same question. I've
>> got a situation with slow requests and had no idea
= 0.5 immediately relieved the problem. I
tried the value of 1, but it slowed recovery too much.
This seems like a very important operational parameter to note.
--
Alex Gorbachev
Storcium
On Wed, Mar 21, 2018 at 2:26 PM, Kjetil Joergensen wrote:
> I retract my previous statement(s).
>
> My current suspicion is that this isn't a leak as much as it being
> load-driven, after enough waiting - it generally seems to settle around some
> equilibrium. We do seem to sit on the mempools x 2
ll happening?
>
Thank you Igor, reducing to 3GB now and will advise. I did not
realize there's additional memory on top of the 90GB; the nodes each
have 128 GB.
--
Alex Gorbachev
Storcium
>
> Thanks,
>
> Igor
>
>
>
> On 3/26/2018 5:09 PM, Alex Gorbachev wrote:
eback mode off for the SSDs, as this seems to make the
controller cache a bottleneck.
--
Alex Gorbachev
Storcium
>
> Thanks
>
> 2018-03-26 23:00 GMT+07:00 Sam Huracan :
>
>> Thanks for your information.
>> Here is result when I run atop on 1 Ceph HDD host:
>> http
On the new Luminous 12.2.4 cluster with Bluestore, I see a good deal
of scrub and deep scrub operations. Tried to find a reference, but
nothing obvious out there - was Bluestore not supposed to need scrubbing
any more due to its CRC checks?
Thanks for any clarification.
--
Alex Gorbachev
Storcium
On Thu, Mar 29, 2018 at 5:09 PM, Ronny Aasen wrote:
> On 29.03.2018 20:02, Alex Gorbachev wrote:
>>
>> w Luminous 12.2.4 cluster with Bluestore, I see a good deal
>> of scrub and deep scrub operations. Tried to find a reference, but
>> nothing obvious out there - was
find
from dump historic ops what is the bottleneck.
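For anyone unfamiliar, the dump mentioned here comes from the OSD admin
socket, e.g. (the OSD id is arbitrary):

  ceph daemon osd.12 dump_historic_ops
  # or, pointing at the socket file directly:
  ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok dump_historic_ops

The output lists the slowest recent ops with per-stage timestamps, which is
where the timings come from.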
It seems that integrating the timings into some sort of a debug flag for
rbd bench or fio would help a lot of us locate bottlenecks faster.
Thanks,
Alex
--
--
Alex Gorbachev
Storcium
not sure if it would do similar
things. Any ideas?
https://vendor2.nginfotpdx.net/gitlab/upstream/ceph/commit/8aa159befa58cd9059ad99c752146f3a5dbfcb8b
--
Alex Gorbachev
Storcium
On Sun, Mar 11, 2018 at 3:50 PM, Alex Gorbachev
wrote:
> On Sun, Mar 11, 2018 at 4:23 AM, Mykola Golub wrote:
>> On Sat, Mar 10, 2018 at 08:25:15PM -0500, Alex Gorbachev wrote:
>>> I am running into the problem described in
>>> https://lkml.org/lkml/
On Wed, Apr 11, 2018 at 2:43 AM, Mykola Golub wrote:
> On Tue, Apr 10, 2018 at 11:14:58PM -0400, Alex Gorbachev wrote:
>
>> So Josef fixed the one issue that enables e.g. lsblk and sysfs size to
>> reflect the correct size on change. However, partprobe and parted
>> still
> On Wed, Apr 11, 2018 at 10:27 AM, Alex Gorbachev
> wrote:
>> On Wed, Apr 11, 2018 at 2:43 AM, Mykola Golub
>> wrote:
>>> On Tue, Apr 10, 2018 at 11:14:58PM -0400, Alex Gorbachev wrote:
>>>
>>>> So Josef fixed the one issue that enables e.g.
adc0364943b6352e8994158febcb699c9f9b#diff-bc9273bcb259fef182ae607a1d06a142L180
>>
>> On Wed, Apr 11, 2018 at 11:09 AM, Alex Gorbachev
>> wrote:
>>>> On Wed, Apr 11, 2018 at 10:27 AM, Alex Gorbachev
>>>> wrote:
>>>>> On Wed, Apr 11, 2018 at 2
"nbd_size_update", which seems suspicious to me at first glance.
>>
>> [1]
>> https://github.com/torvalds/linux/commit/29eaadc0364943b6352e8994158febcb699c9f9b#diff-bc9273bcb259fef182ae607a1d06a142L180
>>
>> On Wed, Apr 11, 2018 at 11:09 AM, Alex Gorbachev
512B/512B
Partition Table: loop
Disk Flags:
Number  Start  End     Size    File system  Flags
 1      0.00B  2147MB  2147MB  xfs
>
> On Wed, Apr 11, 2018 at 11:01 PM, Alex Gorbachev
> wrote:
>> On Wed, Apr 11, 2018 at 2:13 PM, Jason Dillaman wrote:
>>> I've teste
On Thu, Apr 12, 2018 at 9:38 AM, Alex Gorbachev
wrote:
> On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman wrote:
>> If you run "partprobe" after you resize in your second example, is the
>> change visible in "parted"?
>
> No, partprobe does not help:
>
> 2018-04-17 15:47:04.611307 mon.osd01 [WRN] Health check update: 2 slow
> requests are blocked > 32 sec (REQUEST_SLOW)
> 2018-04-17 15:47:10.102803 mon.osd01 [WRN] Health check update: 13 slow
> requests are blocked > 32 sec (REQUEST_SLOW)
>
I hav
VM to recognize the new
caps. Sorry, no experience with libvirt on this, but the caps process
seems to work well.
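As a rough sketch of that caps process (client and pool names are made up,
and 'profile rbd' assumes Luminous or later):

  # update the client's capabilities in place
  ceph auth caps client.vmhost mon 'profile rbd' osd 'profile rbd pool=vmpool'
  # confirm what the client now holds
  ceph auth get client.vmhost

The running VM then needs to pick up the new caps, which is the libvirt part
I can't speak to.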
--
Alex Gorbachev
Storcium
>
>
> Kind Regards
>
> Sven
>
>
>
>
> bringe Informationstechnik GmbH
>
> Zur Seeplatte 12
ing object or blob (not sure which case applies),
> so new allocations are still written straight to disk. Can anyone confirm?
>
>
>
> PS. If your spinning disks are connected via a RAID controller with BBWC
> then you are not affected by this.
We saw this behavior even on A
On Thu, May 3, 2018 at 6:54 AM, Nick Fisk wrote:
> -Original Message-
> From: Alex Gorbachev
> Sent: 02 May 2018 22:05
> To: Nick Fisk
> Cc: ceph-users
> Subject: Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences
>
> Hi Nick,
>
> On Tue
h repackaging kRBD
with XFS. I tried rbd-nbd as well, but performance is not good when
running sync.
--
Alex Gorbachev
Storcium
>
> Thanks
> Steven
>
e's small IO is pretty hard
on RBD devices, considering there is also the filesystem overhead that
serves NFS. When taking into account the single or multiple streams
(Ceph is great at multiple streams, but single stream performance will
take a good deal of tuning), and
the
limitations and challenges). WAL/DB on SSD or NVMe is a must. We use
EnhanceIO to overcome some read bottlenecks. Currently running a
petabyte of storage with three ESXi clusters.
Regards,
Alex Gorbachev
Storcium
>
> Thanks
> Steven
>
fail about the time
when journals fail
Any other solutions?
Thank you for sharing.
--
Alex Gorbachev
Storcium
On Mon, Mar 13, 2017 at 6:09 AM, Florian Haas wrote:
> On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster
> wrote:
>>> I'm sorry, I may have worded that in a manner that's easy to
>>> misunderstand. I generally *never* suggest that people use CFQ on
>>> reasonably decent I/O hardware, and thus h
--
--
Alex Gorbachev
Storcium
.
Regards,
Alex
Storcium
--
--
Alex Gorbachev
Storcium
that and therefore would prefer either a vendor-based
solid state design (e.g. Areca), all-SSD OSDs whenever these can be
affordable, or start experimenting with cache pools. It does not seem like
SSDs are getting any cheaper, just new technologies like 3DXP showing up.
>
> On 03/21/17 23:22, Al
urse of troubleshooting here - dump historic ops on
OSD, wireshark the links or anything else?
3. Christian, if you are looking at this, what would be your red flags in atop?
Thank you.
--
Alex Gorbachev
Storcium
On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev
wrote:
> I am trying to understand the cause of a problem we started
> encountering a few weeks ago. There are 30 or so per hour messages on
> OSD nodes of type:
>
> ceph-osd.33.log:2017-04-10 13:42:39.935422 7fd7076d8700 0 ba
Hi Piotr,
On Tue, Apr 11, 2017 at 2:41 AM, Piotr Dałek wrote:
> On 04/10/2017 08:16 PM, Alex Gorbachev wrote:
>>
>> I am trying to understand the cause of a problem we started
>> encountering a few weeks ago. There are 30 or so messages per hour on
>> OSD nodes of
Hi Ilya,
On Tue, Apr 11, 2017 at 4:06 AM, Ilya Dryomov wrote:
> On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev
> wrote:
>> On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev
>> wrote:
>>> I am trying to understand the cause of a problem we started
>>> enco
Hi Ilya,
On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov wrote:
> On Tue, Apr 11, 2017 at 3:10 PM, Alex Gorbachev
> wrote:
> > Hi Ilya,
> >
> > On Tue, Apr 11, 2017 at 4:06 AM, Ilya Dryomov
> wrote:
> >> On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev
> wr
--
--
Alex Gorbachev
Storcium
On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov wrote:
> On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev
> wrote:
>> Hi Ilya,
>>
>> On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov wrote:
>>>
>>> On Tue, Apr 11, 2017 at 3:10 PM, Alex Gorbachev
On Thu, Apr 13, 2017 at 4:24 AM, Ilya Dryomov wrote:
> On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev
> wrote:
>> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov wrote:
>>> On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev
>>> wrote:
>>>> Hi Ilya,
>
--
--
Alex Gorbachev
Storcium
-sca040 kernel: [7126712.363404] mpt2sas_cm0:
log_info(0x30030101): originator(IOP), code(0x03), sub_code(0x0101)
root@roc02r-sca040:/var/log#
--
Alex Gorbachev
Storcium
ers in a Pacemaker/Corosync cluster. We
utilize NFS ACLs to restrict access and consume RBD as XFS
filesystems.
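A minimal sketch of that stack (pool, image, paths and export network are
made up; Pacemaker manages these resources along with the service IP in our
setup):

  rbd map nfspool/share01                          # map the image on the active NFS head
  mkfs.xfs /dev/rbd/nfspool/share01                # first use only
  mount /dev/rbd/nfspool/share01 /srv/nfs/share01
  # /etc/exports entry, access restricted to the client subnet
  /srv/nfs/share01  10.20.0.0/24(rw,sync,no_root_squash)
  exportfs -ra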
Best regards,
Alex Gorbachev
Storcium
>
> Thank you.
>
> Regards,
> Ossi
>
>
>
Has anyone run into such a config, where a single client consumes storage from
several Ceph clusters, unrelated to each other (different MONs, OSDs,
and keys)?
We have a Hammer and a Jewel cluster now, and this may be a way to have
very clean migrations.
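The mechanics would presumably rely on the --cluster option, which selects
/etc/ceph/<cluster>.conf and the matching keyring (the cluster names here are
made up, named after the releases):

  # /etc/ceph/hammer.conf + /etc/ceph/hammer.client.admin.keyring
  # /etc/ceph/jewel.conf  + /etc/ceph/jewel.client.admin.keyring
  ceph --cluster hammer -s
  ceph --cluster jewel -s
  # a migration could then be a straight pipe between the two
  rbd --cluster hammer export rbd/vm01 - | rbd --cluster jewel import - rbd/vm01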
Best regards,
Alex
Storcium
--
--
Alex
We are seeing the same problem as http://tracker.ceph.com/issues/18945
where OSDs are not activating, with the lockbox error as below.
--
Alex Gorbachev
Storcium
Jun 19 17:11:56 roc03r-sca070 ceph-osd[6804]: starting osd.75 at :/0
osd_data /var/lib/ceph/osd/ceph-75 /var/lib/ceph/osd/ceph-75/journal
On Mon, Jun 19, 2017 at 3:12 AM Wido den Hollander wrote:
>
> > On 19 June 2017 at 5:15, Alex Gorbachev wrote:
> >
> >
> > Has anyone run into such config where a single client consumes storage
> from
> > several ceph clusters, unrelated to each other (d
Great, thanks Josh! Using stdin/stdout merge-diff is working. Thank you
for looking into this.
--
Alex Gorbachev
Storcium
On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote:
> This is the problem:
>
> http://tracker.ceph.com/issues/14030
>
> As a workaround, you can pass the fi
ge diff: 13% complete...failed.
rbd: merge-diff error
I am not sure how to run gdb in such a scenario with stdin/stdout
Thanks,
Alex
>
>
> Josh
>
>
> On 12/08/2015 11:11 PM, Josh Durgin wrote:
>
>> On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
>>
More oddity: retrying several times, the merge-diff sometimes works and
sometimes does not, using the same source files.
On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev
wrote:
> Hi Josh, looks like I celebrated too soon:
>
> On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote:
>
e.g.:
> cat snap1.diff | rbd merge-diff - snap2.diff combined.diff
--
Alex Gorbachev
Storcium
On Thu, Dec 10, 2015 at 1:10 AM, Josh Durgin wrote:
> Hmm, perhaps there's a secondary bug.
>
> Can you send the output from strace, i.e. strace.log after running:
>
> cat
h as http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors
?
Our cluster has 8 racks right now, and I would love to place a MON at the
top of the rack (maybe on SDN switches in the future - why not?). Thank
you for helping answer these questions.
--
Alex Gorbachev
Is there any way to obtain a snapshot creation time? rbd snap ls does not
list it.
Thanks!
--
Alex Gorbachev
Storcium
, 479 MB/s
--
Alex Gorbachev
Storcium
Sorry, one last comment on issue #1 (slow with SCST iSCSI but fast qla2xxx
FC with Ceph RBD):
> tly work fine in combination with SCST so I'd recommend to continue
>>> testing with a recent kernel. I'm running myself kernel 4.3.0 since some
>>> time on my laptop and development workstation.
>>
>>
).
Is this the best way to determine snapshots and are letters "s" and "t"
going to stay the same?
Best regards,
--
Alex Gorbachev
Storcium
On Tue, Jan 12, 2016 at 12:09 PM, Josh Durgin wrote:
> On 01/12/2016 06:10 AM, Alex Gorbachev wrote:
>
>> Good day! I am working on a robust backup script for RBD and ran into a
>> need to reliably determine start and end snapshots for differential
>> exports (d
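For context, the differential exports in question follow the usual pattern
(pool, image and snapshot names are made up):

  # take today's snapshot
  rbd snap create rbd/vm01@backup-20160112
  # export only the changes since yesterday's snapshot
  rbd export-diff --from-snap backup-20160111 rbd/vm01@backup-20160112 vm01-20160112.diff

The script needs to determine reliably which snapshot to pass to --from-snap
and which one to export, hence the question about start and end snapshots.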
ng links will
help you make some progress:
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17820.html
https://ceph.com/community/incomplete-pgs-oh-my/
Good luck,
Alex
>
> regards
> Danny
>
>
>
--
ith 4.1+ kernel seems
to work really well for all types of bonding and multiple bonds.
HTH, Alex
--
--
Alex Gorbachev
Storcium
multiple bonds
>
>
> On 26 Jan 2016, at 06:32, Alex Gorbachev wrote:
>
>
>
> On Saturday, January 23, 2016, 名花 wrote:
>
>> Hi, I have a 4-port 10Gb Ethernet card in my OSD storage node. I want to use
>> 2 ports for cluster, the other 2 ports for private
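For what it's worth, the split described here maps onto the standard
ceph.conf options; the subnets are placeholders, with each pair of ports
bonded:

  [global]
  public network  = 10.10.10.0/24    # client/MON traffic, e.g. bond0 over two ports
  cluster network = 10.10.20.0/24    # OSD replication traffic, e.g. bond1 over the other two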
Jan, I believe the block device (vs. filesystem) OSD layout is addressed in
the Newstore/Bluestore:
http://tracker.ceph.com/projects/ceph/wiki/NewStore_(new_osd_backend)
--
Alex Gorbachev
Storcium
On Thu, Jan 28, 2016 at 4:32 PM, Jan Schermer wrote:
> You can't run Ceph OSD without a
Reviving an old thread:
On Sunday, July 12, 2015, Lionel Bouton wrote:
> On 07/12/15 05:55, Alex Gorbachev wrote:
> > FWIW. Based on the excellent research by Mark Nelson
> > (
> http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
> )
hat am I doing wrong? Any help is appreciated.
Hi Uwe,
If these are Proxmox images, would you be able to move them simply
using the Proxmox Move Disk function under the VM's Hardware tab? I have
had good results with that.
--
Alex Gorbachev
Storcium
>
> Thanks,
>
> Uwe
>
>
>
> On 07.11.18
On Thu, Dec 13, 2018 at 10:48 AM Dietmar Rieder
wrote:
>
> Hi Cephers,
>
> one of our OSD nodes is experiencing a Disk controller problem/failure
> (frequent resetting), so the OSDs on this controller are flapping
> (up/down in/out).
>
> I will hopefully get the replacement part soon.
>
> I have s
or Proxmox,
OpenStack etc, whatever you are using) to be aware of data consistency
- Export and import Ceph images
It all depends on what the applications are, what the RTO and RPO
requirements are, how much data, what distance, and what network
bandwidth is available.
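As a rough example of the export/import route (image and host names are made
up):

  # full copy of an image to the remote cluster over ssh
  rbd export rbd/vm01 - | ssh dr-site rbd import - rbd/vm01
  # afterwards, ship only the incremental changes between snapshots
  rbd export-diff --from-snap daily-0101 rbd/vm01@daily-0102 - | ssh dr-site rbd import-diff - rbd/vm01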
--
Alex Gorbachev
Storcium
r of how much notice they would
> need to have the design ready for print on demand through the ceph
> store
>
> https://www.proforma.com/sdscommunitystore
>
> --
> Mike Perez (thingee)
I am nowhere near being an artist, but would the reference to Jules
Verne's trilogy be
)
Proxmox supports Ceph integrated with their clusters (we are liking
that technology as well, more and more due to very good
thought-through design and quality).
If you provide more information on the specific use cases, it would be helpful.
--
Alex Gorbachev
Storcium
Oh you are so close David, but I have to go to Tampa to a client site,
otherwise I'd hop on a flight to Boston to say hi.
Hope you are doing well. Are you going to the Cephalocon in Barcelona?
--
Alex Gorbachev
Storcium
On Sun, Feb 24, 2019 at 10:40 AM David Turner wrote:
>
>
Late question, but I am noticing
ceph-volume: automatic VDO detection
Does this mean that the OSD layer will at some point support
deployment with VDO?
Or that one could build on top of VDO devices and Ceph would detect
this and report somewhere?
Best,
--
Alex Gorbachev
ISS Storcium
On Tue
On Tue, Jun 4, 2019 at 3:32 PM Sage Weil wrote:
>
> [pruning CC list]
>
> On Tue, 4 Jun 2019, Alex Gorbachev wrote:
> > Late question, but I am noticing
> >
> > ceph-volume: automatic VDO detection
> >
> > Does this mean that the OSD layer will at
>
> Cheers,
I get this in a lab sometimes, and
do
ceph osd set noout
and reboot the node with the stuck PG.
In production, we remove OSDs one by one.
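A fuller sketch of that cycle (nothing here beyond stock commands):

  ceph osd set noout     # keep CRUSH from marking the node's OSDs out during the reboot
  # reboot the node that hosts the stuck PG, wait for its OSDs to rejoin
  ceph osd unset noout   # restore normal behaviour
  ceph -s                # confirm the PG went active+clean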
--
Alex Gorbachev
Intelligent Systems Services Inc.
On Wed, Jun 5, 2019 at 2:30 PM Sameh wrote:
>
> On Wed, Jun 05, 2019 at 01:57:52PM -0400, Alex Gorbachev wrote:
> >
> >
> > I get this in a lab sometimes, and
> > do
> >
> > ceph osd set noout
> >
> > and reboot the node wit