Typically, the number of nodes is 2n+1 to cover n failures.
It's OK to have 4 nodes, but from a failure-coverage point of view it's
the same as 3 nodes: 4 nodes still cover only 1 failure, and if 2 nodes
are down the cluster is down. It works, it just doesn't make much sense.
Thanks!
Tony
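For reference, the quorum arithmetic behind this (quorum = majority = floor(N/2) + 1):
```
nodes  quorum  failures tolerated
  3      2           1
  4      3           1
  5      3           2
```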
> -Original Message-
> From: Marc R
Hi,
AFAIK, the read latency primarily depends on HW latency;
not much can be tuned in SW. Is that right?
I ran a fio random-read test with iodepth 1 inside a VM backed by
Ceph with HDD OSDs, and here is what I got.
=
read: IOPS=282, BW=1130KiB/s (1157kB/s)(33.1MiB/30001msec)
slat (
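For context, a run matching the numbers above would look roughly like this (a sketch; the in-guest device path is an assumption):
```
fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=1 \
    --runtime=30 --time_based \
    --filename=/dev/vdb   # assumed guest block device backed by the Ceph RBD volume
```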
cessor
> cores. In short - tune affinity so that the packet receive queues and
> osds processes run on the same corresponding cores. Disabling process
> power saving features helps a lot. Also watch out for NUMA interference.
> But overall all these tricks will save you less than switchi
> In the end, read-ahead with sequential IOs leads to far fewer real
> physical reads than random read, hence the IOPS difference.
>
> Mon, 2 Nov 2020 at 06:20, Tony Liu :
>
> > Another point of confusion is read vs. random read. My understanding is
> > that, when fio does read, it read
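Regarding the read-ahead point quoted above, a quick way to inspect or raise read-ahead inside the guest (a sketch; the device name is assumed):
```
blockdev --getra /dev/vdb                  # current read-ahead, in 512-byte sectors
cat /sys/block/vdb/queue/read_ahead_kb     # same setting, in KiB
blockdev --setra 8192 /dev/vdb             # e.g. 4 MiB for sequential-heavy workloads
```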
Is it FileStore or BlueStore? With this SSD-HDD solution, is the journal
or WAL/DB on SSD or HDD? My understanding is that there is no
benefit to putting the journal or WAL/DB on SSD with such a solution. It would
also eliminate the single point of failure of having all WAL/DB
on one SSD. Just want to confirm.
--Original Message-
> From: 胡 玮文
> Sent: Sunday, November 8, 2020 5:47 AM
> To: Tony Liu
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: The feasibility of mixed SSD and HDD
> replicated pool
>
>
> > On Nov 8, 2020, at 11:30, Tony Liu wrote:
> >
> >
Hi,
For example, 16 threads at 3.2GHz vs. 32 threads at 3.0GHz:
which gives 11 OSDs (10x12TB HDD and 1x960GB SSD) better
performance?
Thanks!
Tony
Thanks Nathan!
Tony
> -Original Message-
> From: Nathan Fish
> Sent: Thursday, November 12, 2020 7:43 PM
> To: Tony Liu
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] which of cpu frequency and number of threads
> servers osd better?
>
> From what I
If an SSD OSD requires 4T and an HDD OSD only requires 1T, then 8C/16T
3.2GHz would be better, because it provides sufficient threads as well
as stronger per-core performance?
Thanks!
Tony
> -Original Message-
> From: Frank Schilder
> Sent: Thursday, November 12, 2020 10:59 PM
> To: Tony Liu ; Nathan Fish
Thank you Frank for the clarification!
Tony
> -Original Message-
> From: Frank Schilder
> Sent: Friday, November 13, 2020 12:37 AM
> To: Tony Liu ; Nathan Fish
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: which of cpu frequency and number of
> threa
I am not sure any configuration tuning would help here.
The bottleneck is the HDD. In my case, I have an SSD for
WAL/DB and it provides pretty good write performance.
The part I don't quite understand in your case is that
random read is quite fast. Given HDD seek latency,
the random read is
See if this helps.
* Create "ssh-config".
```
Host *
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
LogLevel ERROR
```
* Add it to cephadm.
```
ceph cephadm set-ssh-config -i ssh-config
```
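* Optionally verify what cephadm has stored (quick check):
```
ceph cephadm get-ssh-config
```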
Then try to add host again.
Tony
> -Original Message-
> From: Mika Saari
> Se
Hi,
With Ceph Octopus 15.2.5, here is the output of command
"ceph device get-health-metrics SEAGATE_DL2400MM0159_WBM2WP2S".
===
"20201123-000939": {
"dev": "/dev/sde",
"error": "smartctl failed",
"nvme_smart_health_information_add_log_error":
Hi,
I did some searching about replacing an OSD and found several different
procedures, probably for different releases?
Is there a recommended process to replace an OSD with Octopus?
Two cases here:
1) replace HDD whose WAL and DB are on a SSD.
1-1) failed disk is replaced by the same model.
1-2) working disk is
> That's the simple case. If redeploying all OSDs on that
> host is not an option you'll probably have to pause the orchestrator in
> order to migrate devices yourself to prevent too much data movement.
>
> Regards,
> Eugen
>
>
> [1] https://docs.ceph.com/en/latest/mgr/o
Hi,
> > When replacing an osd, there will be no PG remapping, and backfill
> > will restore the data on the new disk, right?
>
> That depends on how you decide to go through the replacement process.
> Usually without your intervention (e.g. setting the appropriate OSD
> flags) the remapping will
> >> When replacing an osd, there will be no PG remapping, and backfill
> >>> will restore the data on the new disk, right?
> >>
> >> That depends on how you decide to go through the replacement process.
> >> Usually without your intervention (e.g. setting the appropriate OSD
> >> flags) the remapp
are spaces?
Thanks!
Tony
> -Original Message-
> From: Frank Schilder
> Sent: Saturday, November 28, 2020 12:42 AM
> To: Anthony D'Atri ; Tony Liu
>
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: replace osd with Octopus
>
> Hi all,
>
> maybe
half-
> to-half, but more than 50% of objects are usually misplaced and there
> will be movement between the original set of OSDs as well. In any case,
> getting such a large number of disks involved that only need to be
> filled up to 50% of the previous capacity will be much mor
Hi,
With Ceph 15.2.5 Octopus, mon, mgr and rgw dump logs at debug
level to stdout/stderr. This causes huge container log files
(/var/lib/docker/containers//-json.log).
Is there any way to stop dumping logs or change the logging level?
BTW, I tried "ceph config set log_to_stderr false".
It doesn
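For what it's worth, the knobs usually involved here are the stderr logging options plus Docker's own log rotation (a sketch; values are assumptions):
```
ceph config set global log_to_stderr false
ceph config set global mon_cluster_log_to_stderr false
ceph config set global log_to_file true
```
and rotation in /etc/docker/daemon.json (path assumed):
```
{ "log-driver": "json-file", "log-opts": { "max-size": "100m", "max-file": "3" } }
```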
Any comments?
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Tuesday, December 29, 2020 5:22 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] logging to stdout/stderr causes huge container log
> file
>
> Hi,
>
> With ceph 15.2.5 octopus, mon
Is the Swift service endpoint created in OpenStack?
Tony
> -Original Message-
> From: Mika Saari
> Sent: Thursday, January 7, 2021 3:45 AM
> To: Wissem MIMOUNA
> Cc: ceph-users@ceph.io
> Subject: [ceph-users] Re: Ceph RadosGW & OpenStack swift problem
>
> Hi,
>
> Adding below what I test
Hi,
I added a host by "ceph orch host add ceph-osd-5 10.6.10.84 ceph-osd".
I can see the host with "ceph orch host ls", but no devices are listed by
"ceph orch device ls ceph-osd-5". I tried "ceph orch device zap
ceph-osd-5 /dev/sdc --force", which works fine. Wondering why no
devices are listed? What I am
"ceph log last cephadm" shows the host was added without errors.
"ceph orch host ls" shows the host as well.
"python3 -c import sys;exec(...)" is running on the host.
But still no devices on this host is listed.
Where else can I check?
Thanks!
Tony
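A couple of things worth trying (a sketch; not verified against this exact setup):
```
ceph orch device ls ceph-osd-5 --refresh     # force the orchestrator to rescan the host's devices
cephadm shell -- ceph-volume inventory       # run on ceph-osd-5 to see what ceph-volume itself reports
```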
> -----Origi
find out how to trace it. Any idea?
Thanks!
Tony
> -Original Message-
> From: Eugen Block
> Sent: Monday, February 1, 2021 12:33 PM
> To: Tony Liu
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: no device listed after adding host
>
> Hi,
>
> you cou
Hi,
With 3 replicas, a PG has 3 OSDs. If all those 3 OSDs are down,
the PG becomes unknown. Is that right?
If those 3 OSDs are replaced, in and up, is that PG going to
come back to active eventually? Or does anything else have to be done
to fix it?
Thanks!
Tony
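For reference, the commands usually involved in checking and, as a last resort, recreating such a PG (a sketch; <pgid> is a placeholder, and force-create-pg discards whatever data the PG held):
```
ceph pg map <pgid>        # which OSDs the PG currently maps to
ceph pg <pgid> query      # detailed state, if any OSD can still serve it
ceph osd force-create-pg <pgid> --yes-i-really-mean-it   # recreate as empty; data is lost
```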
Hi,
When building the cluster with Octopus 15.2.5 initially, here is the OSD
service spec file that was applied.
```
service_type: osd
service_id: osd-spec
placement:
  host_pattern: ceph-osd-[1-3]
data_devices:
  rotational: 1
db_devices:
  rotational: 0
```
After applying it, all HDDs were added and the DB of each HDD i
> shows those 3 replaced OSDs.
"pg query " can't find it. I did "osd force-create-pg " to
recreate them. PG map remains on those 3 OSDs.
Now, they are active+clean.
Tony
> -Original Message-
> From: Jeremy Austin
> Sent: Tuesday, February 2, 2021 8:58 AM
&g
-Original Message-
> From: Eugen Block
> Sent: Tuesday, February 2, 2021 12:32 AM
> To: Tony Liu
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: no device listed after adding host
>
> Just a note: you don't need to install any additional package to run
>
> service_name: osd.default
> placement:
>   hosts:
>   - host4
>   - host3
>   - host1
>   - host2
> spec:
>   block_db_size: 4G
>   data_devices:
>     rotational: 1
>     size: '20G:'
>   db_devices:
>     size: '10G:'
>   filter_lo
Hi,
There are multiple different procedures to replace an OSD.
What I want is to replace an OSD without PG remapping.
#1
I tried "orch osd rm --replace", which sets OSD reweight 0 and
status "destroyed". "orch osd rm status" shows "draining".
All PGs on this OSD are remapped. Checked "pg dump", c
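One commonly discussed sequence for limiting data movement during a swap (a sketch; not verified to fully avoid remapping, and the OSD id is assumed):
```
ceph osd set noout
ceph osd set norebalance
ceph orch osd rm 12 --replace    # marks the OSD destroyed and keeps its id
# ...swap the physical disk and let the service spec recreate osd.12...
ceph osd unset norebalance
ceph osd unset noout
```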
Hi,
After upgrading from 15.2.5 to 15.2.8, I see this health error.
Has anyone seen this? "ceph log last cephadm" doesn't show anything
about it. How can I trace it?
Thanks!
Tony
is None. "osd" is from the list "to_remove_osds".
Seems like a bug to me.
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Tuesday, February 2, 2021 7:20 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Module 'cephadm' has failed
e your patience!
Tony
> -Original Message-
> From: Frank Schilder
> Sent: Tuesday, February 2, 2021 11:47 PM
> To: Tony Liu ; ceph-users@ceph.io
> Subject: Re: replace OSD without PG remapping
>
> You asked about exactly this before:
> https://lists.ceph.io/hyper
Hi,
With 15.2.8, after running "ceph orch osd rm 12 --replace --force",
PGs on osd.12 are remapped, osd.12 is removed from "ceph osd tree",
the daemon is removed from "ceph orch ps", the device is "available"
in "ceph orch device ls". Everything seems good at this point.
Then dry-run service spec.
```
# ca
3c90-b7d5-4f13-8a58-f72761c1971b
ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64
[2021-02-05 04:03:17,244][ceph_volume.process][INFO ] stderr Volume group
"ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" has insufficient free space (572317
extents): 572318 required.
```
size was passed: 2.18 TB
Here is the issue.
https://tracker.ceph.com/issues/47758
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Thursday, February 4, 2021 8:46 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: replace OSD failed
>
> Here is the log from ceph-volume.
>
sn't show up in exported osd
> service spec
>
> Hi.
>
> I have the same situation. Running 15.2.8 I created a specification that
> looked just like it. With rotational in the data and non-rotational in
> the db.
>
> First use applied fine. Afterwards it only uses
Hi,
With v15.2.8, after zapping a device on an OSD node, it's still not available.
The reason is "locked, LVM detected". If I reboot the whole OSD node,
then the device becomes available. There must be something not being
cleaned up. Any clues?
Thanks!
Tony
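When a zapped device stays "locked, LVM detected", the leftover is often a stale device-mapper entry; a cleanup sketch to try on the OSD host (device and mapper names are placeholders):
```
dmsetup ls                          # look for leftover ceph--*-osd--block--* entries
dmsetup remove <stale-mapper-name>  # placeholder name
wipefs -a /dev/sdX                  # placeholder device
partprobe /dev/sdX
```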
I checked pvscan, vgscan, lvscan and "ceph-volume lvm list" on the OSD node;
the zapped device doesn't show up anywhere.
Am I missing anything?
Thanks!
Tony
____
From: Tony Liu
Sent: February 7, 2021 05:27 PM
To: ceph-users
Subject: [ceph-users
d, is it pushed by something
(mgr?) or pulled by mon?
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Sunday, February 7, 2021 5:32 PM
> To: ceph-users
> Subject: [ceph-users] Re: Device is not available after zap
>
> I checked pvscan, vgscan, lvscan and
.
Thanks!
Tony
From: David Orman
Sent: February 8, 2021 04:06 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Adding ceph-users:
We ran into this same issue, and we used a s
Hi David,
Could you show me an example of an OSD service spec YAML
to work around it by specifying size?
Thanks!
Tony
From: David Orman
Sent: February 8, 2021 04:06 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't
Hi,
I'd like to know how the DB device is expected to be handled by "orch osd rm".
What I see is that the DB device on SSD is untouched when the OSD on HDD is removed
or replaced. "orch device zap" removes the PV, VG and LV of the device.
It doesn't touch the DB LV on SSD.
To remove an OSD permanently, do I nee
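To clean up the DB LV by hand, something along these lines should work (a sketch; the LV path is a placeholder taken from "ceph-volume lvm list"):
```
cephadm shell -- ceph-volume lvm list                              # find the db LV of the removed OSD
cephadm shell -- ceph-volume lvm zap --destroy /dev/<db-vg>/<db-lv>
```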
onal: true
filter_logic: AND
db_devices:
size: ':1TB'
It was usable in my test environment, and seems to work.
Regards
Jens
-----Original Message-
From: Tony Liu
Sent: 9. februar 2021 02:09
To: David Orman
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices does
+
|osd |osd-spec |ceph-osd-1 |/dev/sdd |- |-|
+-+--++--++-+
Thanks!
Tony
From: David Orman
Sent: February 10, 2021 11:02 AM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io
To update, the OSD had data on HDD and DB on SSD.
After "ceph orch osd rm 12 --replace --force" and wait
till rebalancing is done and daemon is stopped,
I ran "ceph orch device zap ceph-osd-2 /dev/sdd" to zap the device.
It cleared PV, VG and LV for data device, but not DB device.
DB device issue i
n
Sent: February 10, 2021 01:19 PM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
It's displaying sdb (what I assume you want to be used as a DB device) as
unavailable. What
ters.
Regards
Jens
-Original Message-
From: Tony Liu
Sent: 10. februar 2021 22:59
To: David Orman
Cc: Jens Hyllegaard (Soft Design A/S) ;
ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Hi David,
Hi,
I've been trying with v15.2 and v15.2.8, no luck.
I wonder if this is actually supported or has ever worked for anyone.
Here is what I've done.
1) Create a cluster with 1 controller (mon and mgr) and 3 OSD nodes,
each of which has 1 SSD for DB and 8 HDDs for data.
2) OSD service spec.
se
You can have BGP-ECMP to multiple HAProxy instances to support
active-active mode, instead of using keepalived for active-backup mode,
if the traffic volume requires multiple HAProxy instances.
Tony
From: Graham Allan
Sent: February 14, 2021 01:31 PM
I followed https://tracker.ceph.com/issues/46691 to bring up the OSD.
"ceph osd tree" shows it's up. "ceph pg dump" shows PGs are remapped.
How can I make cephadm aware of it (so it shows up in "ceph orch ps")?
Because "ceph status" complains "1 stray daemon(s) not managed by cephadm".
Thanks!
T
Never mind, the OSD daemon shows up in "orch ps" after a while.
Thanks!
Tony
____
From: Tony Liu
Sent: February 14, 2021 09:47 PM
To: Kenneth Waegeman; ceph-users
Subject: [ceph-users] Re: reinstalling node with orchestrator/cephadm
I foll
this work.
Thanks!
Tony
________
From: Tony Liu
Sent: February 14, 2021 02:01 PM
To: ceph-users@ceph.io; dev
Subject: [ceph-users] Is replacing OSD whose data is on HDD and DB is on SSD
supported?
Hi,
I've been trying with v15.2 and v15.2.8, no luck.
Wondering if
Hi,
This is with v15.2 and v15.2.8.
Once an OSD service is applied, it can't be removed.
It always shows up in "ceph orch ls".
"ceph orch rm " only marks it "unmanaged",
but doesn't actually remove it.
Is this expected?
Thanks!
Tony
It may help if you could share how you added those OSDs.
This guide works for me.
https://docs.ceph.com/en/latest/cephadm/drivegroups/
Tony
From: Philip Brown
Sent: February 17, 2021 09:30 PM
To: ceph-users
Subject: [ceph-users] ceph orch and mixed SSD/rot
> -Original Message-
> From: Stefan Kooman
> Sent: Tuesday, March 16, 2021 4:10 AM
> To: Dave Hall ; ceph-users
> Subject: [ceph-users] Re: Networking Idea/Question
>
> On 3/15/21 5:34 PM, Dave Hall wrote:
> > Hello,
> >
> > If anybody out there has tried this or thought about it, I'd li
Thanks!
Tony
> -Original Message-
> From: Andrew Walker-Brown
> Sent: Tuesday, March 16, 2021 9:18 AM
> To: Tony Liu ; Stefan Kooman ;
> Dave Hall ; ceph-users
> Subject: RE: [ceph-users] Re: Networking Idea/Question
>
> https://docs.ceph.com/en/latest/rad
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/EC45YMDJZD3T6TQINGM222H2H4RZABJ4/
From: Philip Brown
Sent: March 19, 2021 08:59 AM
To: ceph-users
Subject: [ceph-users] ceph orch daemon add , separate db
I was having difficulty doing this
Are you sure the OSD has its DB/WAL on SSD?
Tony
From: Philip Brown
Sent: March 19, 2021 02:49 PM
To: Eugen Block
Cc: ceph-users
Subject: [ceph-users] Re: [BULK] Re: Re: ceph octopus mysterious OSD crash
Wow.
My expectations have been adjusted. Thank you
Hi,
Do I need to update ceph.conf and restart each OSD after adding more MONs?
This is with 15.2.8 deployed by cephadm.
When adding MON, "mon_host" should be updated accordingly.
Given [1], is that update "the monitor cluster’s centralized configuration
database" or "runtime overrides set by an a
Thanks!
Tony
From: Stefan Kooman
Sent: March 26, 2021 12:22 PM
To: Tony Liu; ceph-users@ceph.io
Subject: Re: [ceph-users] Do I need to update ceph.conf and restart each OSD
after adding more MONs?
On 3/26/21 6:06 PM, Tony Liu wrote:
> Hi,
>
>
restart service.
Tony
From: Tony Liu
Sent: March 27, 2021 12:20 PM
To: Stefan Kooman; ceph-users@ceph.io
Subject: [ceph-users] Re: Do I need to update ceph.conf and restart each OSD
after adding more MONs?
I expanded MON from 1 to 3 by updating orch
That means I still need to restart all services to apply the update, right?
Is this supposed to be part of adding MONs as well, or an additional manual step?
Thanks!
Tony
____
From: Tony Liu
Sent: March 27, 2021 12:53 PM
To: Stefan Kooman; ceph-users@ceph.io
Subject: [
Hi,
Here is a snippet from top on a node with 10 OSDs.
===
MiB Mem : 257280.1 total, 2070.1 free, 31881.7 used, 223328.3 buff/cache
MiB Swap: 128000.0 total, 126754.7 free, 1245.3 used. 221608.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM
cat /proc/meminfo
MemTotal:       263454780 kB
MemFree:          2212484 kB
MemAvailable:   226842848 kB
Buffers:        219061308 kB
Cached:           2066532 kB
SwapCached:           928 kB
Active:         142272648 kB
Inactive:       109641772 kB
..
Thanks!
Tony
Restarting an OSD frees the buff/cache memory.
What kind of data is there?
Is there any configuration to control this memory allocation?
Thanks!
Tony
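The knob usually pointed at for this is osd_memory_target, and the per-daemon mempool dump shows what the memory is holding (a sketch; the 4 GiB value and osd.0 are assumptions):
```
ceph config set osd osd_memory_target 4294967296   # ~4 GiB per OSD
ceph daemon osd.0 dump_mempools                     # run on the OSD host; shows bluestore cache usage etc.
```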
From: Tony Liu
Sent: March 27, 2021 06:10 PM
To: ceph-users
Subject: [ceph-users] Re: memory consumption by osd
able: 226842848 kB
> Buffers:219061308 kB
> Cached: 2066532 kB
> SwapCached: 928 kB
> Active: 142272648 kB
> Inactive: 109641772 kB
> ..
>
>
> Thanks!
> Tony
>
> From: Tony Liu
> Sent
Thank you Stefan and Josh!
Tony
From: Josh Baergen
Sent: March 28, 2021 08:28 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Do I need to update ceph.conf and restart each
OSD after adding more MONs?
As was mentioned in this thread
Hi,
I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus
deployed by cephadm. As far as I know, either Swift (implemented by RADOSGW)
or RBD is supported as the backend of cinder-backup. My intention is to use
one of those options to replicate Cinder volumes from one site to
From: Tony Liu
Sent: July 30, 2021 09:16 PM
To: openstack-discuss; ceph-users
Subject: [ceph-users] [cinder-backup][ceph] replicate volume between sites
Hi,
I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus
deployed by cephadm. As far as I know, either
Hi,
This shows one RBD image is treated as one object, and it's mapped to one PG.
"object" here means a RBD image.
# ceph osd map vm fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk
osdmap e18381 pool 'vm' (4) object 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk'
-> pg 4.c7a78d40 (4.0) -> up ([4,17,6], p4
ssing here?
Thanks!
Tony
From: Konstantin Shalygin
Sent: August 7, 2021 11:35 AM
To: Tony Liu
Cc: ceph-users; d...@ceph.io
Subject: Re: [ceph-users] rbd object mapping
The object map shows where an object with any given name will be placed in
the defined pool with your CRUSH map, an
>> There are two types of "object", RBD-image-object and 8MiB-block-object.
>> When create a RBD image, a RBD-image-object is created and 12800
>> 8MiB-block-objects
>> are allocated. That whole RBD-image-object is mapped to a single PG, which
>> is mapped
>> to 3 OSDs (replica 3). That means, al
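A way to see the per-object layout for yourself (a sketch; pool and image names are placeholders):
```
rbd info vm/<image>                                    # shows object size and block_name_prefix (e.g. rbd_data.<id>)
ceph osd map vm <block_name_prefix>.0000000000000000   # placement of the first data object of the image
```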
Thank you Konstantin!
Tony
From: Konstantin Shalygin
Sent: August 9, 2021 01:20 AM
To: Tony Liu
Cc: ceph-users; d...@ceph.io
Subject: Re: [ceph-users] rbd object mapping
On 8 Aug 2021, at 20:10, Tony Liu
mailto:tonyliu0...@hotmail.com>> wrote:
Hi,
I have OpenStack Ussuri and Ceph Octopus. Sometimes I see timeouts when creating
or deleting volumes. I can see the RBD timeout from cinder-volume. Has anyone seen
such an issue? I'd like to see what happens on Ceph. Which service should I look into?
Is it stuck with the mon or some OSD? Any option to enabl
ence is between successful and failing volumes. Is
it the size or anything else? Which glance stores are enabled? Can you
reproduce it, for example 'rbd create...' with the cinder user? Then
you could increase 'debug_rbd' and see if that reveals anything.
Zitat von Tony Liu :
&g
Shalygin
Sent: September 8, 2021 08:29 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] debug RBD timeout issue
What is the ceph.conf for this rbd client?
k
Sent from my iPhone
> On 7 Sep 2021, at 19:54, Tony Liu wrote:
>
>
> I have OpenStack Ussuri and
Good to know. Thank you Konstantin!
Will test it out.
Is this a known issue? Any tracker or fix?
Thanks!
Tony
From: Konstantin Shalygin
Sent: September 8, 2021 12:47 PM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] debug
Update /usr/lib/python3.6/site-packages/swiftclient/client.py and restart the
horizon container.
This fixes the error message on the dashboard when it tries to retrieve the policy
list.
-parsed = urlparse(urljoin(url, '/info'))
+parsed = urlparse(urljoin(url, '/swift/info'))
Tony
Hi,
I wonder if anyone could share some experience with running etcd on Ceph.
My users build Kubernetes clusters in VMs on OpenStack with Ceph.
With an HDD volume (DB/WAL on SSD), the etcd performance test sometimes fails
because of latency. With an all-SSD volume, it works fine.
I wonder if there is an
For the PR-DR case, I am using RGW multi-site support to replicate backup images.
Tony
From: Manuel Holtgrewe
Sent: October 12, 2021 11:40 AM
To: dhils...@performair.com
Cc: mico...@gmail.com; ceph-users
Subject: [ceph-users] Re: Ceph cluster Sync
To chime in
Hi,
Other than getting all objects of the pool and filtering by image ID,
is there any easier way to get the number of allocated objects for
an RBD image?
What I really want to know is the actual usage of an image.
An allocated object could be used partially, but that's fine;
no need to be 100% accurate.
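For the actual usage, rbd du is the usual shortcut (fast when the fast-diff feature is enabled); the rados listing below is the brute-force method already mentioned above (a sketch; names are placeholders):
```
rbd du <pool>/<image>                                     # provisioned vs. actually used size
rados -p <pool> ls | grep <block_name_prefix> | wc -l     # count of allocated data objects
```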
Hi,
The context is RBD on BlueStore. I did check "extent" on the wiki.
I see "extent" when talking about snapshots and export/import.
For example, when creating a snapshot, we mark extents. When
there is a write to marked extents, we make a copy.
I also know that user data on a block device maps to object
ou're looking for?
Zitat von Tony Liu :
> Hi,
>
> Other than get all objects of the pool and filter by image ID,
> is there any easier way to get the number of allocated objects for
> a RBD image?
>
> What I really want to know is the actual usage of an image.
> An all
Hi,
src-image is 1GB (provisioned size). I did the following 3 tests.
1. rbd export src-image - | rbd import - dst-image
2. rbd export --export-format 2 src-image - | rbd import --export-format 2 -
dst-image
3. rbd export --export-format 2 src-image - | rbd import - dst-image
With #1 and #2, ds
Hi,
I have an image with a snapshot and some changes after snapshot.
```
$ rbd du backup/f0408e1e-06b6-437b-a2b5-70e3751d0a26
NAME
PROVISIONED USED
f0408e1e-06b6-437b-a2b5-70e3751d0a26@snapshot-eb085877-7
Hi Ilya,
That explains it. Thank you for clarification!
Tony
From: Ilya Dryomov
Sent: December 4, 2023 09:40 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] the image used size becomes 0 after export/import
with snapshot
Hi,
Say the source image is being updated and data is mirrored to the destination
continuously. Suddenly, networking at the source goes down, and the destination
is promoted and used to restore the VM. Is that going to cause any FS issues
such that, for example, fsck needs to be invoked to check and repair the FS?
Hi,
The cluster is Pacific 16.2.10 with containerized services, managed by
cephadm.
"config show" shows the running configuration. Which "who" values are supported?
mon, mgr and osd all work, but rgw doesn't. Is this expected?
I tried with client. and without "client";
neither works.
When issue "config show", wh
this by single step.
>
> I haven't played around too much yet, but you seem to be right,
> changing the config isn't applied immediately, but only after a
> service restart ('ceph orch restart rgw.ebl-rgw'). Maybe that's on
> purpose? So you can change your c
Hi,
Based on the docs, Ceph prevents you from writing to a full OSD so that you don't
lose data.
In my case, with v16.2.10, an OSD crashed when it became full. Is this expected or a bug?
I'd expect a write failure instead of an OSD crash. It keeps crashing when I try to
bring it up.
Is there any way to bring
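For reference, the full-ratio thresholds and the commonly discussed temporary mitigation look like this; a sketch only, and it won't necessarily revive an OSD that already crashes at startup:
```
ceph osd dump | grep ratio     # current full_ratio / backfillfull_ratio / nearfull_ratio
ceph osd set-full-ratio 0.97   # temporary bump; revert after freeing space
```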
23: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_strin
g, std::allocator > const&)+0x10c1) [0x56102e
e39c41]
24: (BlueStore::_open_db(bool, bool, bool)+0x8c7) [0x56102ec9de17]
25: (BlueStore::_open_db_and_around(bool, bool
!
Tony
From: Zizon Qiu
Sent: October 31, 2022 08:13 PM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: Is it a bug that OSD crashed when it's full?
15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int,
rocksdb::ColumnFamilyData*, ro
a bug.
Thanks!
Tony
From: Tony Liu
Sent: October 31, 2022 05:46 PM
To: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Is it a bug that OSD crashed when it's full?
Hi,
Based on doc, Ceph prevents you from writing to a full OSD so that you don’t
Thank you Igor!
Tony
From: Igor Fedotov
Sent: November 1, 2022 04:34 PM
To: Tony Liu; ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] Re: Is it a bug that OSD crashed when it's full?
Hi Tony,
first of all let me share my understanding o
Hi,
I want to
1) copy a snapshot to an image,
2) not copy snapshots,
3) have no dependency after the copy,
4) keep everything in image format 2.
In that case, is rbd cp the same as rbd clone + rbd flatten?
I ran some tests and it seems so, but I want to confirm in case I'm missing
anything.
Also, it seems cp is a bit
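For reference, the two flows being compared (a sketch; pool/image/snapshot names are placeholders):
```
# copy: one step, no parent relationship, snapshots are not carried over
rbd cp pool/src@snap pool/dst

# clone + flatten: same end state, via a temporary parent link
rbd snap protect pool/src@snap
rbd clone pool/src@snap pool/dst
rbd flatten pool/dst
```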
Thank you Ilya!
Tony
From: Ilya Dryomov
Sent: March 27, 2023 10:28 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] rbd cp vs. rbd clone + rbd flatten
On Wed, Mar 22, 2023 at 10:51 PM Tony Liu wrote:
>
> Hi,
>
>
Hi,
The cluster is Pacific, deployed by cephadm in containers.
The case is importing OSDs after a host OS reinstallation.
All OSDs are SSDs with DB/WAL and data together.
I did some research but was not able to find a working solution.
Does anyone have experience with this?
What needs to b
h.com/en/pacific/cephadm/services/osd/#activate-existing-osds
[2]
https://github.com/ceph/ceph/blob/0a5b3b373b8a5ba3081f1f110cec24d82299cac8/src/pybind/mgr/cephadm/services/osd.py#L196
Thanks!
Tony
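For the record, the command behind the "activate existing OSDs" doc linked above is (host name assumed):
```
ceph cephadm osd activate <host>
```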
From: Tony Liu
Sent: April 27, 2023 10:20 PM
To: ceph-