Hello,
I have a small EC pool that I am using with RBD to store a bunch of large files
attached to some VMs for personal storage use.
Currently I have the EC pool's metadata pool on some SSDs. I have noticed that
even though the EC pool holds TBs of data, the metadata pool is only in the
2 MB range.
My questi
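For reference, the per-pool numbers behind that comparison can be pulled with
something like (exact columns vary a bit between releases):
ceph df detail
rados df
As far as I understand it, the metadata pool in an RBD-on-EC setup only carries
the image headers, object maps and other omap data, so it staying tiny next to
the EC data pool is expected.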
I wanted to ask for thoughts/guidance on running a newer version of Ceph on a
client than the version running on the server.
E.g., I have a client machine running Ceph 12.2.8, while the server is
running 12.2.4. Is this a terrible idea? My thoughts are to more
thorou
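For what it's worth, the versions and features actually in play on both sides
can be checked with something like (Luminous and later):
ceph versions       # per-daemon versions as reported by the cluster
ceph features       # which releases/features connected clients advertise
ceph --version      # on the client machine itself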
On 10/31/2018 04:48 PM, Steven Vacaroaia wrote:
> On the monitor that works I noticed this
>
> mon.mon01@0(leader) e1 handle_probe ignoring fsid
> d01a0b47-fef0-4ce8-9b8d-80be58861053 != 8e7922c9-8d3b-4a04-9a8a-e0b0934162df
>
> Where is that fsid (8e7922) coming from ?
The monmap. Somehow one
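For reference, the fsid baked into a monitor's monmap can be inspected roughly
like this (mon ID is an example; stop that mon before extracting):
systemctl stop ceph-mon@mon02
ceph-mon -i mon02 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap    # shows the fsid and mon addresses in the map
ceph fsid                         # fsid the rest of the cluster expects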
Hello!
My cluster currently has this health state:
2018-10-31 21:20:13.694633 mon.lol [WRN] Health check update:
39010709/192173470 objects misplaced (20.300%)
(OBJECT_MISPLACED)
2018-10-31 21:20:13.694684 mon.lol [WRN] Health check update: Degraded data
redundancy: 1624786/192173470 objects
de
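The full breakdown behind those counters is easiest to read with:
ceph health detail
ceph -s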
On the monitor that works I noticed this
mon.mon01@0(leader) e1 handle_probe ignoring fsid
d01a0b47-fef0-4ce8-9b8d-80be58861053 != 8e7922c9-8d3b-4a04-9a8a-e0b0934162df
Where is that fsid (8e7922) coming from ?
Steven
On Wed, 31 Oct 2018 at 12:45, Steven Vacaroaia wrote:
> Hi,
>
> I've a
I think I have things pretty well figured out at this point, but I'm not sure
how to proceed.
The config-key settings were not effective because I had not restarted the
active mgr after setting them. Once I restarted the mgr the settings became
effective.
Once I had the config-key settings wor
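For anyone searching later, the pattern was roughly the following (the key is
only an example of a mgr-consumed config-key, and the hostname is a placeholder):
ceph config-key set mgr/balancer/max_misplaced 0.01
systemctl restart ceph-mgr@<hostname>
ceph config-key get mgr/balancer/max_misplaced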
Hi,
I've added 2 more monitors to my cluster but they are not joining the
cluster.
The service is up and ceph.conf is the same.
What am I missing?
ceph-deploy install
ceph-deploy mon create
ceph-deploy add
I then manually change ceph.conf to contain the following,
then I push it to all cluster members.
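For reference, the usual ceph-deploy sequence for pushing an updated conf and
adding monitors looks roughly like this (hostnames are placeholders):
ceph-deploy --overwrite-conf config push mon01 mon02 mon03
ceph-deploy mon add mon02
ceph-deploy mon add mon03
ceph -s    # check that the new mons reach quorum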
Hi,
On 10/26/18 2:55 PM, David Turner wrote:
It is indeed adding a placement target, not removing it or replacing the
pool. The get/put wouldn't be a rados or even a ceph command; you would do
it through an S3 client.
Which is an interesting idea, but presumably there's no way of knowing
which S
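As a rough illustration of that get/put through an S3 client (bucket and object
names are made up, and the new bucket would have been created against the new
placement target):
s3cmd get s3://oldbucket/myobject ./myobject
s3cmd put ./myobject s3://newbucket/myobject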
Never mind ... a bit of reading was enough to point me to
"osd_crush_update_on_start": "true"
Thanks
Steven
On Wed, 31 Oct 2018 at 10:31, Steven Vacaroaia wrote:
> Hi,
> I have created a separate root for my ssd drives
> All works well but a reboot ( or restart of the services) wipes out all my
>
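For reference, pinning that in ceph.conf so the OSDs stop rewriting their own
CRUSH location on start looks like this (set it on the OSD hosts, then restart
the OSDs):
[osd]
osd_crush_update_on_start = false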
That said, I've just noticed this crash this morning:
2018-10-31 14:26:00.522 7f0cf53f5700 -1 /build/ceph-13.2.1/src/mds/CDir.cc: In
function 'void CDir::fetch(MDSInternalContextBase*, std::string_view, bool)'
thread 7f0cf53f5700 time 2018-10-31 14:26:00.485647
/build/ceph-13.2.1/src/mds
Hi,
I have created a separate root for my ssd drives
All works well, but a reboot (or a restart of the services) wipes out all my
changes.
How can I persist changes to crush rules?
Here are some details.
Initial / default - this is what I am getting after a restart / reboot:
If I just do that on o
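A sketch of the other way to make the placement stick, telling each OSD
explicitly where it belongs so the startup update puts it back under the right
root (the OSD id and bucket names are examples):
[osd.12]
crush location = root=ssd host=node1-ssd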
So, moving the entry from [mon] to [global] worked.
This is a bit confusing - I used to put all my configuration settings
starting with mon_ under [mon].
Steven
On Wed, 31 Oct 2018 at 10:13, Steven Vacaroaia wrote:
> I do not think so ... or maybe I did not understand what you are saying
> There is n
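In other words, roughly this layout in /etc/ceph/ceph.conf (only the relevant
lines shown, values as in the message below):
[global]
mon_max_pg_per_osd = 400

[mon]
mon_allow_pool_delete = true
mon_osd_min_down_reporters = 1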
I do not think so ... or maybe I did not understand what you are saying.
There is no such key listed in the mgr config:
ceph config-key list
[
"config-history/1/",
"config-history/2/",
"config-history/2/+mgr/mgr/dashboard/server_addr",
"config-history/3/",
"config-history/3/+mgr/mgr/prometh
Isn't this a mgr variable?
On 10/31/2018 02:49 PM, Steven Vacaroaia wrote:
> Hi,
>
> Any idea why a different value for mon_max_pg_per_osd is not "recognized"?
> I am using Mimic 13.2.2.
>
> Here is what I have in /etc/ceph/ceph.conf
>
>
> [mon]
> mon_allow_pool_delete = true
> mon_osd_min_dow
Hi,
Any idea why a different value for mon_max_pg_per_osd is not "recognized"?
I am using Mimic 13.2.2.
Here is what I have in /etc/ceph/ceph.conf:
[mon]
mon_allow_pool_delete = true
mon_osd_min_down_reporters = 1
mon_max_pg_per_osd = 400
Checking the value with
ceph daemon osd.6 config show | gr
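One way to check which value each daemon actually loaded is to query them over
their admin sockets, on the hosts they run on (daemon names are examples):
ceph daemon mon.mon01 config show | grep mon_max_pg_per_osd
ceph daemon mgr.mon01 config show | grep mon_max_pg_per_osd
ceph daemon osd.6 config show | grep mon_max_pg_per_osd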
Hi Mike,
Thank you for your answer. I thought maybe FC would just be the transport
protocol to LIO and all would be fine, but I forgot the tcmu-runner part, which I
suppose is where some iSCSI specifics were hard-coded.
FC was interesting in the way that (when already set) it would avoid having t
On Wed, Oct 31, 2018 at 8:28 AM Hayashida, Mami wrote:
>
> Thank you for your replies. So, if I use the method Hector suggested (by
> creating PVs, VGs etc. first), can I add the --osd-id parameter to the
> command as in
>
> ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ss
Thank you for your replies. So, if I use the method Hector suggested (by
creating PVs, VGs etc. first), can I add the --osd-id parameter to the
command as in
ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ssd/db0
--osd-id 0
ceph-volume lvm prepare --bluestore --data hdd1/data
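For what it's worth, the full sequence with pre-created PVs/VGs/LVs would look
roughly like this (device paths and the DB size are placeholders; the VG/LV
names follow the ones above):
pvcreate /dev/sdb
vgcreate hdd0 /dev/sdb
lvcreate -l 100%FREE -n data0 hdd0
pvcreate /dev/nvme0n1
vgcreate ssd /dev/nvme0n1
lvcreate -L 30G -n db0 ssd
ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ssd/db0 --osd-id 0
ceph-volume lvm activate --all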
On Wed, Oct 31, 2018 at 5:22 AM Hector Martin wrote:
>
> On 31/10/2018 05:55, Hayashida, Mami wrote:
> > I am relatively new to Ceph and need some advice on Bluestore migration.
> > I tried migrating a few of our test cluster nodes from Filestore to
> > Bluestore by following this
> > (http://docs
You might want to try the --path option instead of the --dev one.
On 10/31/2018 7:29 AM, ST Wong (ITSC) wrote:
Hi all,
We deployed a testing Mimic Ceph cluster using BlueStore. We can't
run ceph-bluestore-tool on an OSD; it fails with the following error:
---
# ceph-bluestore-tool show-label --dev *device
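i.e. pointing it at the mounted OSD directory instead of the raw device,
something like (the OSD id is an example):
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0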
Hi,
I didn't know that auto resharding does not remove the old bucket instances.
I wrote my own script for the cleanup, as I discovered this before reading your
message.
Not very well tested, but here it is:
for bucket in $(radosgw-admin bucket list | jq -r .[]); do
bucket_id=$(radosgw-admin metadata get buck
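In the same spirit, a rough sketch of that kind of cleanup (equally untested;
the jq paths and the bucket.instance key layout may differ between releases, so
verify each instance before removing anything):
instances=$(radosgw-admin metadata list bucket.instance | jq -r '.[]')
for bucket in $(radosgw-admin bucket list | jq -r '.[]'); do
    # id of the bucket's current (live) instance
    current=$(radosgw-admin metadata get bucket:"$bucket" | jq -r '.data.bucket.bucket_id')
    # every instance recorded for this bucket
    for inst in $(echo "$instances" | grep "^${bucket}:"); do
        if [ "${inst#*:}" != "$current" ]; then
            echo "stale instance: $inst"
            # radosgw-admin metadata rm bucket.instance:"$inst"   # uncomment once verified
        fi
    done
done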
On 31/10/2018 05:55, Hayashida, Mami wrote:
I am relatively new to Ceph and need some advice on Bluestore migration.
I tried migrating a few of our test cluster nodes from Filestore to
Bluestore by following this
(http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/)
as the
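For reference, the per-OSD flow in that document is roughly the following (OSD
id and device are placeholders; it destroys the Filestore data on that OSD, so
only do it on a healthy cluster):
ID=0
DEVICE=/dev/sdb
ceph osd out $ID
while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done   # wait for data to drain
systemctl stop ceph-osd@$ID
ceph-volume lvm zap $DEVICE
ceph osd destroy $ID --yes-i-really-mean-it
ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID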