On Tuesday, October 30, 2018 at 18:14 +0100, Kevin Olbrich wrote:
> Proxmox has support for rbd, as they ship additional packages as well as
> ceph via their own repo.
>
> I ran your command and got this:
>
> qemu-img version 2.8.1 (Debian 1:2.8+dfsg-6+deb9u4)
> Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers
On 11/6/18 11:02 PM, Steve Taylor wrote:
I intend to put together a pull request to push this upstream. I
haven't reviewed the balancer module code to see how it's doing
things, but assuming it uses osdmaptool or the same upmap code as
osdmaptool this should also improve the balancer module.
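For anyone who wants to reproduce this kind of offline experiment, a rough sketch of the usual osdmaptool upmap workflow (the pool name is a placeholder, and option names vary slightly between releases):
    ceph osd getmap -o om                                        # grab the current osdmap
    osdmaptool om --upmap out.txt --upmap-pool <pool> --upmap-max 100
    less out.txt                                                 # review the generated pg-upmap-items commands
    bash out.txt                                                 # apply them to the cluster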
Yes, we do one-way replication and the 'remote' cluster is the secondary
cluster, so the rbd-mirror daemon is there.
We can confirm the daemon is working because we observed IO workload. And the
remote cluster is actually bigger than the 'local' cluster, so it should be able
to keep up with the I/O.
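For completeness, these are the checks usually run on the secondary to confirm replay is keeping up (pool and image names are placeholders):
    rbd mirror pool status <pool> --verbose     # per-image mirroring state and daemon health
    rbd mirror image status <pool>/<image>      # replay position / lag for a single image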
It is pretty difficult to know what step you are missing if all we are
seeing is the `activate --all` command.
Maybe try the steps one by one, throughout the process, capturing each
command and its output. In the filestore-to-bluestore guides we never
advertise `activate --all`, for example.
Something is mis
If /dev/hdd67/data67 does not exist, try `vgchange -a y` and that should make
it exist, then try again. Not sure why this would ever happen, though, since I
expect lower level stuff to take care of activating LVM LVs.
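A rough sketch of that path, using the LV names from this thread:
    vgchange -a y hdd67                 # activate all LVs in the hdd67 volume group
    lvs hdd67                           # data67 should now be listed
    ls -l /dev/hdd67/data67             # the device node should exist now
    ceph-volume lvm activate --all      # or: ceph-volume lvm activate 67 <osd fsid>, fsid from 'ceph-volume lvm list'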
If it does exist, I get the feeling that your original ceph-volume prepare
co
This is becoming even more confusing. I got rid of those
ceph-disk@6[0-9].service units
(which had been symlinked to /dev/null), moved
/var/lib/ceph/osd/ceph-6[0-9] to /var/./osd_old/, and then ran
`ceph-volume lvm activate --all`. Once again I got:
root@osd1:~# ceph-volume lvm activate --all
-->
On 11/7/18 5:27 AM, Hayashida, Mami wrote:
> 1. Stopped osd.60-69: no problem
> 2. Skipped this and went to #3 to check first
> 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
> nothing. I see in that directory
>
> /etc/systemd/system/ceph-disk@60.service # and 61 - 69.
>
>
1. Stopped osd.60-69: no problem
2. Skipped this and went to #3 to check first
3. Here, `find /etc/systemd/system | grep ceph-volume` returned nothing. I
see in that directory
/etc/systemd/system/ceph-disk@60.service # and 61 - 69.
No ceph-volume entries.
On Tue, Nov 6, 2018 at 11:43 AM, H
Ok. I will go through this this afternoon and let you guys know the
result. Thanks!
On Tue, Nov 6, 2018 at 11:32 AM, Hector Martin wrote:
> On 11/7/18 1:00 AM, Hayashida, Mami wrote:
> > I see. Thank you for clarifying lots of things along the way -- this
> > has been extremely helpful. Neither "df | grep osd" nor "mount | grep
> > osd" shows ceph-60 through 69.
On 11/7/18 1:00 AM, Hayashida, Mami wrote:
> I see. Thank you for clarifying lots of things along the way -- this
> has been extremely helpful. Neither "df | grep osd" nor "mount | grep
> osd" shows ceph-60 through 69.
OK, that isn't right then. I suggest you try this:
1) bring down OSD 60-69
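(For step 1, something like the following should do it; adjust the unit names if your distro differs:)
    for i in $(seq 60 69); do systemctl stop ceph-osd@$i; done
    ceph osd tree | grep -E 'osd\.6[0-9]'    # confirm they are reported down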
On Sat, Oct 6, 2018 at 15:06, Elias Abacioglu wrote:
> I'm bumping this old thread because it's getting annoying. My membership gets
> disabled twice a month.
> Between my two Gmail accounts I'm on more than 25 mailing lists, and I see
> this behavior only here. Why is only ceph-users affected?
But this is correct, isn't it?
root@osd1:~# ceph-volume lvm list --format=json hdd60/data60
{
    "60": [
        {
            "devices": [
                "/dev/sdh"
            ],
            "lv_name": "data60",
            "lv_path": "/dev/hdd60/data60",
            "lv_size": "3.64t",
I ended up balancing my osdmap myself offline to figure out why the balancer
couldn't do better. I had similar issues with osdmaptool, which of course is
what I expected, but it's a lot easier to run osdmaptool in a debugger to see
what's happening. When I dug into the upmap code I discovered th
I see. Thank you for clarifying lots of things along the way -- this has
been extremely helpful. Neither "df | grep osd" nor "mount | grep osd"
shows ceph-60 through 69.
On Tue, Nov 6, 2018 at 10:57 AM, Hector Martin wrote:
>
>
> On 11/7/18 12:48 AM, Hayashida, Mami wrote:
> > All other OSDs
On 11/7/18 12:48 AM, Hayashida, Mami wrote:
> All other OSDs that I converted (#60-69) look basically identical while
> the Filestore OSDs (/var/lib/ceph/osd/ceph-70 etc.) look different
> obviously. When I run "df" it does NOT list those converted osds (only
> the Filestore ones). In other words, /dev/sdh1 where osd.60 should be is not listed.
All other OSDs that I converted (#60-69) look basically identical while the
Filestore OSDs (/var/lib/ceph/osd/ceph-70 etc.) look different obviously.
When I run "df" it does NOT list those converted osds (only the Filestore
ones). In other words, /dev/sdh1 where osd.60 should be is not listed.
(Sh
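Worth noting (an aside, not something the poster stated): with ceph-volume BlueStore OSDs the data directory is a small tmpfs rather than a mounted partition, so /dev/sdh1 would not show up in df even when things work. A couple of checks that may help:
    findmnt /var/lib/ceph/osd/ceph-60    # an activated BlueStore OSD shows a tmpfs mount here
    ceph-volume lvm list /dev/sdh        # shows which OSD id and LVs back that disk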
On 11/7/18 12:30 AM, Hayashida, Mami wrote:
> So, currently this is what /var/lib/ceph/osd/ceph-60 shows. Is it not
> correct? I don't know what I should expect to see.
>
> root@osd1:~# ls -l /var/lib/ceph/osd/ceph-60
> total 86252
> -rw-r--r-- 1 ceph ceph 384 Nov 2 16:20 activate.monmap
So, currently this is what /var/lib/ceph/osd/ceph-60 shows. Is it not
correct? I don't know what I should expect to see.
root@osd1:~# ls -l /var/lib/ceph/osd/ceph-60
total 86252
-rw-r--r-- 1 ceph ceph 384 Nov 2 16:20 activate.monmap
-rw-r--r-- 1 ceph ceph 10737418240 Nov 5 16:32 block
On Tue, Nov 6, 2018 at 8:41 AM Pavan, Krish wrote:
>
> Trying to create an OSD with multipath with dmcrypt and it failed. Any
> suggestions please?
ceph-disk is known to have issues like this. It is already deprecated
in the Mimic release and will no longer be available for the upcoming
release (Nautilus).
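A hedged sketch of the ceph-volume equivalent (same device path as below; I have not verified it against multipath):
    ceph-volume lvm create --bluestore --dmcrypt --data /dev/mapper/mpathr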
I met the same problem. I had to create a GPT table for each disk, create the
first partition over the full space, and then feed these partitions to
ceph-volume (it should be similar for ceph-deploy).
Also, I am not sure you can combine fs-type btrfs with bluestore (AFAIK that
option is for filestore).
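A rough sketch of that partitioning step, using the /dev/mapper/mpathr device from this thread (the resulting partition name depends on the multipath configuration):
    sgdisk --zap-all /dev/mapper/mpathr      # wipe old metadata and write a fresh GPT
    sgdisk --new=1:0:0 /dev/mapper/mpathr    # first partition spanning the whole device
    kpartx -a /dev/mapper/mpathr             # expose the partition mapping (mpathr1 or mpathr-part1)
Then feed the resulting partition to ceph-volume or ceph-deploy as usual.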
Kevin
On Tue, Nov 6, 2018,
Trying to create an OSD with multipath with dmcrypt and it failed. Any
suggestions please?
ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr
--bluestore --dmcrypt -- failed
ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr
--bluestore - worked
the logs
On Tue, Nov 6, 2018 at 1:12 AM Wei Jin wrote:
>
> Thanks.
> I found that both the minimum and active sets are very large in my cluster; is that
> expected?
> By the way, I take a snapshot of each image every half hour, and keep snapshots for
> two days.
>
> Journal status:
>
> minimum_set: 671839
> active_set
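For reference, the commands that produce and complement that output are, as far as I know (pool and image names are placeholders):
    rbd journal status --pool <pool> --image <image>    # minimum_set / active_set / registered clients
    rbd snap ls <pool>/<image>                           # how many snapshots are currently kept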
Hi,
I'm wondering whether CephFS has quota limit options.
I use the kernel client and the Ceph version is 12.2.8.
Thanks
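In case it helps, a sketch of how quotas are set on 12.2.8 (the directory path is a placeholder). Note that, as far as I know, the Luminous-era kernel client does not enforce these quotas (ceph-fuse does); kernel enforcement arrived with 4.17+ kernels against Mimic or newer clusters:
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/somedir   # 100 GiB cap
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/somedir
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir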
Hi all,
I'm trying to test this feature but I did not manage to make it work. In my
simple setup, I have a small Mimic cluster with 3 VMs at work and I have access
to an S3 cloud provider (not Amazon).
Here is my period configuration, with one realm, one zonegroup, and two zones:
---
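For anyone trying to follow along, the period layout can be dumped with (zone and zonegroup names are placeholders):
    radosgw-admin period get                                  # realm, zonegroups and zones in the current period
    radosgw-admin zonegroup get --rgw-zonegroup=<zonegroup>
    radosgw-admin zone get --rgw-zone=<zone>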
Is that correct or have you added more than 1 OSD?
CEPH is never going to work or be able to bring up a pool with only one
OSD. If you really do have more than one OSD and have added them correctly,
then there really is something wrong with your CEPH setup / config, and it may
be worth starting from scratch.
On 2018/11/6 4:29 PM, Ashley Merrick wrote:
What does "ceph osd tree" show?
root@node1:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-2 0 host 0
-1 1.0 root default
-3 1.0 host node1
0 hdd 1.0 osd.0 down 0
What does "ceph osd tree" show?
On Tue, Nov 6, 2018 at 4:27 PM Dengke Du wrote:
>
> On 2018/11/6 4:24 PM, Ashley Merrick wrote:
>
> If I am reading your ceph -s output correctly you only have 1 OSD and 0
> pools created.
>
> So you'll be unable to create an RBD until you at least have a pool set up
On 2018/11/6 4:24 PM, Ashley Merrick wrote:
If I am reading your ceph -s output correctly you only have 1 OSD and
0 pools created.
So you'll be unable to create an RBD until you at least have a pool set up
and configured to create the RBD within.
root@node1:~# ceph osd lspools
1 libvirt-pool
2 te
If I am reading your ceph -s output correctly you only have 1 OSD and 0
pools created.
So you'll be unable to create an RBD until you at least have a pool set up and
configured to create the RBD within.
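A minimal sketch of that prerequisite (pool and image names are placeholders; this still needs enough OSDs up to satisfy the pool's replication size):
    ceph osd pool create rbd 64 64       # 64 PGs; pick a PG count appropriate for the cluster
    rbd pool init rbd
    rbd create rbd/test-image --size 10G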
On Tue, Nov 6, 2018 at 4:21 PM Dengke Du wrote:
>
> On 2018/11/6 4:16 PM, Mykola Golub wrote:
> >
On 2018/11/6 4:16 PM, Mykola Golub wrote:
On Tue, Nov 06, 2018 at 09:45:01AM +0800, Dengke Du wrote:
I reconfigured the OSD service from the start; the journal was:
I am not quite sure I understand what you mean here.
---
On Tue, Nov 06, 2018 at 09:45:01AM +0800, Dengke Du wrote:
> I reconfigured the OSD service from the start; the journal was:
I am not quite sure I understand what you mean here.
> -