I am afraid one would need a udev rule to make it persistent.
Sent from Outlook Mail for Windows 10 phone
From: David Riedl
Sent: Thursday, November 19, 2015 1:42 PM
To: ceph-us...@ceph.com
Subject: Re: [ceph-users] Can't activate osd in infernalis
I fixed the issue and opened a ticket on the
I believe the error message says that there is no space left on the device for
the second partition to be created. Perhaps try to flush the GPT with good old dd.
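For example, assuming the problematic disk is /dev/sdf (substitute your own
device; this destroys all partition data on it), something along these lines
should clear the GPT:

# wipe GPT structures (primary and backup) -- double-check the device name first
sgdisk --zap-all /dev/sdf
# or the old-fashioned way, zeroing the start of the disk
dd if=/dev/zero of=/dev/sdf bs=1M count=10 oflag=direct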
Sent from Outlook Mail for Windows 10 phone
From: German Anders
Sent: Thursday, November 19, 2015 7:25 PM
To: Mykola Dvornik
Cc: ceph
Please run ceph-deploy on your host machine as well.
Sent from Outlook Mail for Windows 10 phone
From: James Gallagher
Sent: Monday, November 23, 2015 5:03 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Cannot Issue Ceph Command
Hi there,
I have managed to complete the storage cluste
I see the same behavior with the threshold of around 20M objects for a 4-node,
16-OSD, 32TB, HDD-based cluster. The issue dates back to hammer.
Sent from my Windows 10 phone
From: Blair Bethwaite
Sent: Thursday, June 16, 2016 2:48 PM
To: Wade Holler
Cc: Ceph Development; ceph-users@lists.ceph.c
te an osd.X number??
The "ceph osd create" could be extended to have OSD ID as a second
optional argument (the first is already used for uuid).
ceph osd create
The command would succeed only if the ID were not in use.
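If implemented, usage might look something like this (hypothetical syntax for
the proposal; the uuid and id below are only placeholders):

# recreate an OSD entry with a specific ID, reusing its original uuid
ceph osd create <uuid> <id>
# e.g.
ceph osd create 9f40d9cb-2b66-4dc7-a6cb-9d5c7e329c55 7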
Ron, would this work for you?
I have a patch as a
On Sun, Feb 15, 2015 at 06:24:45PM -0800, Sage Weil wrote:
> On Sun, 15 Feb 2015, Gregory Farnum wrote:
> > On Sun, Feb 15, 2015 at 5:39 PM, Sage Weil wrote:
> > > On Sun, 15 Feb 2015, Mykola Golub wrote:
> > >> https://github.com/trociny/ceph/compare/wip-osd
On Sun, Feb 15, 2015 at 5:39 PM, Sage Weil wrote:
> On Sun, 15 Feb 2015, Mykola Golub wrote:
>> The "ceph osd create" could be extended to have OSD ID as a second
>> optional argument (the first is already used for uuid).
>>
>> ceph osd create
>>
ceph-objectstore-tool, which adds a mark-complete operation,
as suggested by Sam in http://tracker.ceph.com/issues/10098
https://github.com/ceph/ceph/pull/5031
It has not been reviewed yet and is not well tested, though, because I
don't know a simple way to get an incompl
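For reference, the intended invocation would be along these lines (a sketch;
the data path and pgid are examples, and the OSD must be stopped first):

# on the OSD that holds the most complete copy of the PG
systemctl stop ceph-osd@0       # or the init system's equivalent
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --journal-path /var/lib/ceph/osd/ceph-0/journal \
    --pgid 1.28 --op mark-complete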
osd",
"type_id": 0,
"status": "up",
"reweight": 1.00,
"primary_affinity": 0.75,
"crush_weight": 1.00,
"depth": 2},
{ "id": 2,
"n
.
Done. https://github.com/ceph/ceph/pull/3254
--
Mykola Golub
cation is defined in ceph.conf,
[client] section, "admin socket" parameter.
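For example (a sketch; the socket path template and the client name/pid below
are placeholders, not taken from the original message):

[client]
    admin socket = /var/run/ceph/$cluster-$name.$pid.asok

# then, on the client host:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump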
--
Mykola Golub
Dear ceph experts,
I've built and am administrating a 12-OSD ceph cluster (spanning 3
nodes) with a replication count of 2. The ceph version is
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
The cluster hosts two pools (data and metadata) that are exported over
CephFS.
At some
5350760 0x5602b519e600 mdsbeacon(1614101/000-s-ragnarok
up:active seq 204 v9429) v4
2015-11-17 12:47:42.905260 7ffa8bae8700 10 mon.000-s-ragnarok@0(leader).mds
e9429 e9429: 1/1/0 up {0=000-s-ragnarok=up:active}
Se
--
Mykola
to flush/reset the MDS cache?
On 17 November 2015 at 13:26, John Spray wrote:
> On Tue, Nov 17, 2015 at 12:17 PM, Mykola Dvornik
> wrote:
> > Dear John,
> >
> > Thanks for such a prompt reply!
> >
> > Seems like something happens on the mon side, since t
t if it is, the damage is marginal.
So the question is: is cephfs-data-scan designed to resolve problems with
duplicated inodes?
On 19 November 2015 at 04:17, Yan, Zheng wrote:
> On Wed, Nov 18, 2015 at 5:21 PM, Mykola Dvornik
> wrote:
>
>> Hi John,
>>
>> It turned out
d you do at this stage is
mount your filesystem read-only, back it up, and then create a new
filesystem and restore from backup.
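For reference, a read-only CephFS mount could look like this (a sketch; the
monitor address, mount point and credentials are placeholders):

# kernel client
mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o ro,name=admin,secretfile=/etc/ceph/admin.secret
# or ceph-fuse
ceph-fuse /mnt/cephfs -o ro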
Ok. Is it somehow possible to have multiple FSs on the same ceph cluster?
On 19 November 2015 at 10:43, John Spray wrote:
> On Wed, Nov 18, 2015 at 9:21 AM, Myk
Thanks for the tip.
I will stay on the safe side and wait until it is merged into master.
Many thanks for all your help.
-Mykola
On 19 November 2015 at 11:10, John Spray wrote:
> On Thu, Nov 19, 2015 at 10:07 AM, Mykola Dvornik
> wrote:
> > I'm guessing in this context
cat /etc/udev/rules.d/89-ceph-journal.rules
KERNEL=="sdd?", SUBSYSTEM=="block", OWNER="ceph", GROUP="disk", MODE="0660"
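Note that, if I'm not mistaken, udev won't apply a newly created rules file to
already-present devices until the rules are reloaded and re-triggered, e.g.:

udevadm control --reload-rules
udevadm trigger --subsystem-match=block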
On 19 November 2015 at 13:54, Mykola wrote:
> I am afraid one would need a udev rule to make it persistent.
>
>
he folder
> exists, but all the other udev rules are in /usr/lib/udev/rules.d/.
> Can I just create a new file named "89-ceph-journal.rules" in the
> /usr/lib/udev/rules.d/ folder?
>
>
> Regards
>
> David
>
>
> On 19.11.2015 14:02, Mykola Dvornik wrote:
>
-change-name=2:ceph journal',
> '--partition-guid=2:6a9a83f1-2196-4833-a4c8-8f3a424de54f',
> '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
> '/dev/sdf']' returned non-zero exit status 4
> [cibn05][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare
> --cluster ceph --fs-type btrfs -- /dev/sdf
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>
> any ideas?
>
> Thanks in advance,
>
> *German*
>
--
Mykola
The same thing happens to my setup with CentOS7.x + non-stock kernel
(kernel-ml from elrepo).
I was not happy with the IOPS I got out of the stock CentOS7.x kernel, so I did
the kernel upgrade, and crashes started to happen until some of the OSDs
became non-bootable at all. The funny thing is that I was no
chose the number of PGs per metadata pool to
maintain its performance and reliability?
Regards,
Mykola
ebody did some research in this direction?
On Wed, Dec 9, 2015 at 1:13 PM, Jan Schermer wrote:
Number of PGs doesn't affect the number of replicas, so don't worry
about it.
Jan
On 09 Dec 2015, at 13:03, Mykola Dvornik
wrote:
Hi guys,
I am creating a 4-node/16OSD/3
25 PM, Mykola Dvornik
wrote:
Hi Jan,
Thanks for the reply. I see your point about replicas. However, my
motivation was a bit different.
Consider some given amount of objects stored in the metadata pool.
If I understood ceph's data placement approach correctly, the number of
object
then throws an -EIO to the kernel because 0 != write size. I could be
> wrong, so let's wait for Mykola to chime in - he added that check to
> fix discards.
Sorry for the delay (I missed this thread due to a wrong filter).
I don't recall the details, but I think I had the impression that
`ceph_radostool` (people use such tools only when
facing extraordinary situations so they are more careful and expect
limitations).
--
Mykola Golub
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Any plans to support quotas in CephFS kernel client?
-Mykola
Thanks for a quick reply.
On Mon, 2016-05-23 at 20:08 +0800, Yan, Zheng wrote:
> No plan so far. Current quota design requires client to do
> bottom-to-top path walk, which is unfriendly for kernel client (due
> to
> lock design of kernel).
>
> On Mon, May 23, 2016 at 4:55
Are there any ceph users with pools containing >2 kobjects?
If so, have you noticed any instabilities of the clusters once this
threshold is reached?
-Mykola
ceph's watchdog mechanism. The funny thing is that
CPU and HDDs are not really overloaded during these events. So I am
really puzzled at this moment.
-Mykola
-Original Message-
From: Sven Höper
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rados complexity
Date: Sun, 05 Jun 2016
I have the same issues with a variety of kernel clients running 4.6.3
and 4.4.12, and fuse clients from 10.2.2.
-Mykola
-Original Message-
From: xiaoxi chen
To: João Castro , ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CephFS mds cache pressure
Date: Wed, 29 Jun 2016 01:00:40
question is what OSD_BACKENDSTORAGE_IOPS should stand for: 4K
random or sequential write IOPS?
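If it is the former, a quick way to measure it would be a fio run like the one
below (a sketch; /dev/sdX is a scratch device that may be overwritten):

fio --name=4k-randwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based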
-Mykola
I would also advise people to mind SELinux if it is enabled on the
OSD nodes.
The re-labeling should be done as part of the upgrade, and this is a
rather time-consuming process.
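For instance, the relabeling itself boils down to something like this (a
sketch; on large OSDs it can take hours, which is why it is worth planning for):

# relabel the OSD data directories with the new ceph contexts
restorecon -R -v /var/lib/ceph
# optionally switch to permissive mode while the relabel runs
setenforce 0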
-Original Message-
From: Mart van Santen
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Lessons
running Jewel release (10.2.2)?
Regards,
--
Mykola
> On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik
> wrote:
> > Dear ceph community,
> >
> > One of the OSDs in my cluster cannot start due to the
> >
> > ERROR: osd init failed: (28) No space left on device
> >
> > A while ago it was recommended to man
date immutable features
> >>>
> >>>I read in the guide that I should have set rbd_default_features in
> >>>the config.
> >>>
> >>>What can I do now to enable all the features of jewel on all images?
> >>>Can I enable all the features of jewel, or is there any issue with
> >>>old kernels?
> >>>
> >>>Thanks,
> >>>Max
> >>>
--
Mykola Golub
.conf to
avoid name collisions).
Don't you see other errors?
What is the output of `ps auxww | grep rbd-nbd`?
As a first step you could try to export the images to a file using `rbd
export`, see if it succeeds, and possibly investigate the content.
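A minimal example, assuming a pool named rbd and an image named vm-disk
(adjust the names to your setup):

rbd export rbd/vm-disk /tmp/vm-disk.raw
# then inspect the exported raw image, e.g.
file /tmp/vm-disk.raw
fdisk -l /tmp/vm-disk.raw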
--
Mykola Golub
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote:
>
> Il 25/06/2017 21:52, Mykola Golub ha scritto:
> >On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
> >>I can see the error even if I easily run list-mapped:
> >>
eed logs for that period to understand
what happened.
> >Don't you observe sporadic crashes/restarts of rbd-nbd processes? You
> >can associate a nbd device with rbd-nbd process (and rbd volume)
> >looking at /sys/block/nbd*/pid and ps output.
> I really don't kno
On Tue, Jun 27, 2017 at 07:17:22PM -0400, Daniel K wrote:
> rbd-nbd isn't good as it stops at 16 block devices (/dev/nbd0-15)
modprobe nbd nbds_max=1024
Or, if the nbd module is loaded by rbd-nbd, use the --nbds_max command
line option.
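To make the parameter persistent across reboots, a modprobe config file should
do (the file name below is just a convention):

# /etc/modprobe.d/nbd.conf
options nbd nbds_max=1024

Or pass it at map time, e.g. `rbd-nbd --nbds_max 1024 map rbd/myimage`.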
--
Myko
e.g. with the
help of the lsof utility. Or add something like the below
log file = /tmp/ceph.$name.$pid.log
to ceph.conf before starting qemu and look for /tmp/ceph.*.log
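For completeness, that would look roughly like this in ceph.conf (a sketch;
the debug levels are only an example, not from the original message):

[client]
    log file = /tmp/ceph.$name.$pid.log
    debug rbd = 20
    debug rados = 20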
--
Mykola Golub
1 active+clean+scrubbing+deep
Has anybody experienced this issue so far?
Regards,
--
Mykola
] v68621 at
/users/mykola/mms/NCSHNO/final/120nm-uniform-h8200/j002654.out/m_xrange192-320_yrange192-320_016232.dump,
but inode 10005729a77.head v67464942 already exists at
~mds0/stray1/10005729a77
Those folders within mds.0.cache.dir that got badness report a size of 16EB
on the clients. rm on them
stry approached the ground state.
-Mykola
On 4 October 2016 at 09:16, John Spray wrote:
> (Re-adding list)
>
> The 7.5k stray dentries while idle is probably indicating that clients
> are holding onto references to them (unless you unmount the clients
> and they don't purg
up to 23K. No inconsistent PGs or any
other problems happened to the cluster within this time scale.
-Mykola
On 5 October 2016 at 05:49, Yan, Zheng wrote:
> On Mon, Oct 3, 2016 at 5:48 AM, Mykola Dvornik
> wrote:
> > Hi Johan,
> >
> > Many thanks for your reply. I will
10.2.2
-Mykola
On 7 October 2016 at 15:43, Yan, Zheng wrote:
> On Thu, Oct 6, 2016 at 4:11 PM, wrote:
> > Is there any way to repair pgs/cephfs gracefully?
> >
>
> So far no. We need to write a tool to repair this type of corruption.
>
> Which version of ceph did
>
> [1] http://tracker.ceph.com/issues/36089
>
> --
> Jason
; std::less, std::allocator > const&,
> >> MapCacher::Transaction >> std::char_traits, std::allocator >,
> >> ceph::buffer::list>*)+0x8e9) [0x55eef3894fe9]
> >> 7: (get_attrs(ObjectStore*, coll_t, ghobject_t,
> >> ObjectS
https://github.com/torvalds/linux/commit/29eaadc0364943b6352e8994158febcb699c9f9b
--
Mykola Golub
and remount of the filesystem is required.
Does your rbd-nbd include this fix [1], targeted for v12.2.3?
[1] http://tracker.ceph.com/issues/22172
--
Mykola Golub
ng command: /sbin/restorecon -R
>> /var/lib/ceph/tmp/mnt.FASof5/magic.126649.tmp
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R
>> ceph:ceph /var/lib/ceph/tmp/mnt.FASof5/magic.126649.tmp
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /sbin/restorec
but it
appears to me that the out-of-sync issue started to appear after the
mds_cache_size increase.
The mds log does not have anything suspicious in it.
So is there any way to debug ceph-fuse?
Regards,
--
Mykola
ease 23 (Twenty Three)
you can enable client debug with the '--debug_client=20' option
Thanks. I've already remounted the clients, but once the issue is back
I will do some debugging.
And last but not least, writing a file to the folder, i.e. touch test,
triggers synchronization.
Kind
-fuse?
Regards,
Mykola
nt io 4381 B/s wr, 2 op
In addition on the clients' side I have
cat /etc/fuse.conf
user_allow_other
auto_cache
large_read
max_write = 16777216
max_read = 16777216
-Mykola
On Mon, Feb 1, 2016 at 5:06 PM, Gregory Farnum
wrote:
On Monday, February 1, 2016, Mykola Dvornik
wrote:
Hi guys,
:27 AM, Mykola Dvornik
wrote:
What version are you running on your servers and clients?
Are you using 4.1 or 4.2 kernel?
https://bugzilla.kernel.org/show_bug.cgi?id=104911. Upgrading to a 4.3+
kernel, or to the 4.1.17 or 4.2.8 kernel, can resolve this issue.
On the clients:
ceph-fuse --version
16 at 5:32 PM, Mykola Dvornik
wrote:
One of my clients is using
4.3.5-300.fc23.x86_64 (Fedora release 23)
did you encounter this problem on the client using the 4.3.5 kernel? If
you did, this issue should be a ceph-fuse bug.
while all the other clients rely on
3.10.0-327.4.4.el7.x86_64 (CentOS
No, I have not had any issues with 4.3.x.
On Tue, Feb 2, 2016 at 3:28 PM, Yan, Zheng wrote:
On Tue, Feb 2, 2016 at 8:28 PM, Mykola Dvornik
wrote:
No, I've never seen this issue on the Fedora stock kernels.
So either my workflow is not triggering it on the Fedora software
stack or
I would strongly(!) suggest you add a few more OSDs to the cluster before
things get worse / corrupted.
-Mykola
On Tue, Feb 2, 2016 at 6:45 PM, Zhao Xu wrote:
Hi All,
Recently our ceph storage has been running at low performance. Today, we
cannot write to the folder. We tried to unmount the ceph
Try to mount with ceph-fuse. It worked for me when I faced the same
sort of issues you are now dealing with.
-Mykola
On Tue, Feb 2, 2016 at 8:42 PM, Zhao Xu wrote:
Thank you Mykola. The issue is that I/we have strongly suggested adding
OSDs many times, but we are not the decision
for cls errors? You will probably need to restart an osd to get some
fresh ones on OSD start.
--
Mykola Golub
ist returned by this
command:
ceph-conf --name osd.0 -D | grep osd_class_load_list
contains rbd.
--
Mykola Golub
rn code of 13 for check of
> host 'localhost' was out of bounds.
>
> --
Could you please post the full ceph-osd log somewhere?
/var/log/ceph/ceph-osd.0.log
> but hang at the command: "rbd create libvirt-pool/dimage --size 10240 "
So i
rn mentioned by others about ext4 might want
to flush the journal if it is not clean even when mounting ro. I
expect the mount will just fail in this case because the image is
mapped ro, but you might want to investigate how to improve this.
--
Mykola Golub
On Fri, Jan 18, 2019 at 11:06:54AM -0600, Mark Nelson wrote:
> IE even though you guys set bluestore_cache_size to 1GB, it is being
> overridden by bluestore_cache_size_ssd.
Isn't it vice versa [1]?
[1]
https://github.com/ceph/ceph/blob/luminous/src/os/bluestore/BlueStore.cc#L3976
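One way to check which value actually wins on a running OSD is to query it
over the admin socket (osd.0 is just an example):

ceph daemon osd.0 config get bluestore_cache_size
ceph daemon osd.0 config show | grep bluestore_cache_size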
the metadata?
Yes.
--
Mykola Golub
w "rbd migration prepare" after the the sourse VM closes the
image, but before the destination VM opens it.
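For context, the full migration sequence is roughly (a sketch; the image specs
are placeholders):

rbd migration prepare pool/src-image pool/dst-image
rbd migration execute pool/dst-image
rbd migration commit pool/dst-image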
--
Mykola Golub
On Fri, Feb 22, 2019 at 02:43:36PM +0200, koukou73gr wrote:
> On 2019-02-20 17:38, Mykola Golub wrote:
>
> > Note, even if rbd supported live (without any downtime) migration you
> > would still need to restart the client after the upgrade to a new
> > librb
not support "journaling" feature, which is
necessary for mirroring. You can access those images only with librbd
(e.g. mapping with rbd-nbd driver or via qemu).
--
Mykola Golub
to find all its logs? `lsof |grep 'rbd-mirror.*log'`
may be useful for this.
BTW, what rbd-mirror version are you running?
--
Mykola Golub
For a 50 GB volume, the local image gets created, but it couldn't create
> a mirror image
"Connection timed out" errors suggest you have a connectivity issue
between sites?
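A couple of quick checks from the rbd-mirror host might help confirm that
(the cluster and pool names below are placeholders):

# is the remote cluster reachable with the peer's config/keyring?
ceph --cluster remote -s
# what does mirroring report for the pool?
rbd mirror pool status --verbose mypool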
--
Mykola Golub