Dear Jason
A small update on the setup: the image syncing now shows 8% and
stays at the same status... after 1 day I can see the image got
replicated to the other side.
Please answer a few of my queries:
1. Does the image sync work one by one, one image after another, or do all
images get synced at the s
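One way to watch the per-image mirroring state and sync progress from either side is the following (a sketch; the pool and image names here are hypothetical):
  rbd mirror pool status rbd --verbose     # summary plus per-image state for the pool
  rbd mirror image status rbd/myimage      # state, description and progress for one image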
On Mon, Aug 26, 2019 at 7:54 AM V A Prabha wrote:
>
> Dear Jason
> I shall explain my setup first
> The DR centre is 300 Kms apart from the site
> Site-A - OSD 0 - 1 TB Mon - 10.236.248.XX/24
> Site-B - OSD 0 - 1 TB Mon - 10.236.228.XX/27 - RBD-Mirror daemon
> running
> All por
Dear Jason
I shall explain my setup first
The DR centre is 300 Kms apart from the site
Site-A - OSD 0 - 1 TB Mon - 10.236.248.XX/24
Site-B - OSD 0 - 1 TB Mon - 10.236.228.XX/27 - RBD-Mirror daemon running
All ports are open and there is no firewall. Connectivity is there between
My ini
On Tue, Aug 20, 2019 at 9:23 AM V A Prabha wrote:
> I too face the same problem as mentioned by Sat
> All the images created at the primary site are in the state: down+unknown
> Hence at the secondary site the images stay at 0% up+syncing all the
> time, with no progress
> The only error log th
I too face the same problem as mentioned by Sat
All the images created at the primary site are in the state: down+unknown.
Hence at the secondary site the images stay at 0% up+syncing all the time,
with no progress.
The only error log that is continuously hitting is
2019-08-20 18:04:38.556908 7f7d4
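To verify that the peer relationship is actually configured on the pool, something like this can help (a sketch; the pool name is hypothetical):
  rbd mirror pool info rbd    # should list the remote cluster as a peer
Note that with one-way mirroring the primary side typically reports down+unknown simply because no rbd-mirror daemon runs there to report health; the authoritative status is the one shown on the secondary (DR) cluster.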
Thank you - we were expecting that, but wanted to be sure.
By the way - we are running our clusters on IPv6-BGP, to achieve massive
scalability and load-balancing ;-)
Kind regards
Carsten Buchberger
WiTCOM Wiesbadener Informations-
und Telekommunikations
On 30.01.19 08:55, Buchberger, Carsten wrote:
> So as long as there is IP connectivity between the client and the
> client-network IP addresses of our Ceph cluster, everything is fine?
Yes, client traffic is routable.
Even inter-OSD traffic is routable, there are reports from people
running ro
> -Original Message-
> From: Jason Dillaman
> Sent: Friday, August 24, 2018 12:09 AM
> To: sat
> Cc: ceph-users
> Subject: Re: [ceph-users] [question] one-way RBD mirroring doesn't work
>
> On Thu, Aug 23, 2018 at 10:56 AM sat wrote:
> >
> >
On Thu, Aug 23, 2018 at 10:21 AM Cody wrote:
> So, is it okay to say that, compared to the 'firstn' mode, the 'indep'
> mode may have the least impact on a cluster in the event of an OSD
> failure? Could I use 'indep' for a replicated pool as well?
>
You could, but shouldn't. Imagine if the primary OSD fa
So, is it okay to say that, compared to the 'firstn' mode, the 'indep'
mode may have the least impact on a cluster in the event of an OSD
failure? Could I use 'indep' for a replicated pool as well?
Thank you!
Regards,
Cody
On Wed, Aug 22, 2018 at 7:12 PM Gregory Farnum wrote:
>
> On Wed, Aug 22, 2018 at 1
On Thu, Aug 23, 2018 at 10:56 AM sat wrote:
>
> Hi,
>
>
> I'm trying to set up one-way RBD mirroring between two Ceph clusters.
> But it
> hasn't worked yet. It seems to succeed, but after making an RBD image on
> the local cluster,
> it's reported as "unknown".
>
> ```
> $ sudo rbd --clus
On Wed, Aug 22, 2018 at 12:56 AM Konstantin Shalygin wrote:
> > Hi everyone,
> >
> > I read an earlier thread [1] that gave a good explanation of the 'step
> > choose|chooseleaf' option. Could someone further help me understand
> > the 'firstn|indep' part? Also, what is the relationship betwee
Hi everyone,
I read an earlier thread [1] that gave a good explanation of the 'step
choose|chooseleaf' option. Could someone further help me understand
the 'firstn|indep' part? Also, what is the relationship between 'step
take' and 'step choose|chooseleaf' when it comes to defining a failure
dom
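For context, a rough sketch of how the two modes appear in a decompiled CRUSH map (rule names and numbers are made up; exact syntax varies a bit between releases):
  # 'firstn' is the usual choice for replicated pools: if a chosen OSD fails,
  # everything after it in the result list shifts up one position.
  rule replicated_rule {
          ruleset 0
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type host
          step emit
  }

  # 'indep' is the usual choice for erasure-coded pools: a failed OSD is
  # replaced in place, so the surviving shards keep their positions.
  rule ec_rule {
          ruleset 1
          type erasure
          min_size 3
          max_size 10
          step take default
          step chooseleaf indep 0 type host
          step emit
  }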
Thank you, David.
I found "ceph osd pool set-quota" command.
I think using this command on the SSD pool is useful to avoid the problem
quoted below, isn't it?
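For example (a sketch; the pool name and the 100 GB limit are made up):
  ceph osd pool set-quota cephfs_metadata max_bytes 107374182400   # cap the pool at 100 GB
  ceph osd pool get-quota cephfs_metadata                          # verify the quota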
best regards
On 2018/04/10 5:22, David Turner wrote:
The proper way to prevent this is to set your full ratios safe and monitor
your disk
That's right. I didn't actually use Jewel for very long. I'm glad it worked
for you.
On Fri, May 11, 2018, 4:49 PM Webert de Souza Lima
wrote:
> Thanks David.
> Although you mentioned this was introduced with Luminous, it's working
> with Jewel.
>
> ~# ceph osd pool stats
>
>
Thanks David.
Although you mentioned this was introduced with Luminous, it's working with
Jewel.
~# ceph osd pool stats
Fri May 11 17:41:39 2018
pool rbd id 5
client io 505 kB/s rd, 3801 kB/s wr, 46 op/s rd, 27 op/s wr
pool rbd_cache id 6
client io 2538 kB/s rd,
`ceph osd pool stats` with the option to specify the pool you are
interested in should get you the breakdown of IO per pool. This was
introduced with luminous.
On Fri, May 11, 2018 at 2:39 PM Webert de Souza Lima
wrote:
> I think Ceph doesn't have IO metrics with filters by pool, right? I see IO
I think Ceph doesn't have IO metrics with filters by pool, right? I see IO
metrics from clients only:
ceph_client_io_ops
ceph_client_io_read_bytes
ceph_client_io_read_ops
ceph_client_io_write_bytes
ceph_client_io_write_ops
and pool "byte" metrics, but not "io":
ceph_pool(write/read)_bytes(_total)
Hey Jon!
On Wed, May 9, 2018 at 12:11 PM, John Spray wrote:
> It depends on the metadata intensity of your workload. It might be
> quite interesting to gather some drive stats on how many IOPS are
> currently hitting your metadata pool over a week of normal activity.
>
Any ceph built-in tool f
On Wed, May 9, 2018 at 3:32 PM, Webert de Souza Lima
wrote:
> Hello,
>
> Currently, I run Jewel + Filestore for cephfs, with SSD-only pools used for
> cephfs-metadata, and HDD-only pools for cephfs-data. The current
> metadata/data ratio is something like 0,25% (50GB metadata for 20TB data).
>
> R
I'm sorry, I mixed up some information. The actual ratio I have now
is 0,0005% (*100MB for 20TB data*).
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
On Wed, May 9, 2018 at 11:32 AM, Webert de Souza Lima wrote:
> Hello,
>
> Current
The proper way to prevent this is to set your full ratios safe and monitor
your disk usage. That will allow you to either clean up old data or add
new storage before you get to 95% full on any OSDs. What I mean by setting
your full ratios safe is that if your use case can fill 20% of your disk
spa
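If you are on Luminous or later, the ratios themselves can also be adjusted at runtime (a sketch; the values are only illustrative):
  ceph osd set-nearfull-ratio 0.85   # warn earlier
  ceph osd set-full-ratio 0.95       # hard stop
  ceph osd df                        # watch per-OSD utilisation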
Hi Tim,
With the current setup you can only handle 1 host failure without losing
any data, BUT everything will probably freeze until you bring the failed
node (or the OSDs in it) back up.
Your setup indicates k=6, m=2 and all 8 shards are distributed to 4 hosts
(2 shards/osds per host). Be awar
No, most filesystems can be expanded pretty trivially (shrinking is a more
complex operation but usually also doable). Assuming the likely case of an
ext2/3/4 filesystem, the command "resize2fs /dev/rbd0" should resize the FS to
cover the available space in the block device.
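A minimal sketch of the whole sequence, assuming an ext4 filesystem on /dev/rbd0 (device and size are hypothetical; take a snapshot or backup first):
  rbd resize --size 10240 rbd/test   # grow the image to 10 GB
  resize2fs /dev/rbd0                # grow the filesystem to fill the device
For XFS the equivalent would be xfs_growfs on the mount point.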
Rich
On 03/01/18 1
Hi Jason,
the data won't be lost if I resize the filesystem in the image, right?
Thanks
13605702...@163.com
From: Jason Dillaman
Date: 2018-01-03 20:57
To: 13605702...@163.com
CC: ceph-users
Subject: Re: [ceph-users] question on rbd resize
You need to resize the filesystem within the RBD
You need to resize the filesystem within the RBD block device.
On Wed, Jan 3, 2018 at 7:37 AM, 13605702...@163.com <13605702...@163.com> wrote:
> hi
>
> a rbd image is out of space (old size is 1GB), so i resize it to 10GB
>
> # rbd info rbd/test
> rbd image 'test':
> size 10240 MB in 2560 objects
It's not possible to use multiple threads per disk in QEMU currently (it's on
the QEMU roadmap),
but you can create multiple disks/RBD images and use multiple QEMU iothreads
(1 per disk).
(BTW, I'm able to reach around 70k IOPS max with 4k reads, with a 3.1GHz CPU,
rbd_cache=none, disabling debug and
On Thu, Nov 23, 2017 at 1:55 AM 许雪寒 wrote:
> Hi, everyone.
>
> We also encountered this problem: http://tracker.ceph.com/issues/11332.
> And we found that this seems to be caused by the lack of mutual exclusion
> between applying "trim" and handling subscriptions. Since
> "build_incremental" ope
Next time, please use the "Reply to All" button so that a copy of your
message goes to the ML.
On 11/24/2017 12:45 AM, bernhard glomm wrote:
and is there ANY way to figure out how much space is additionally being
consumed by the snapshots at the moment (either by pool, preferably, or
by cluster)?
The way i
What is the difference between the "usage" and the "raw usage" of a pool?
Usage is your data. Raw is what your data actually uses with all copies
(the pool 'size' option). I.e. if your data is 1000G and size is 3, your raw is 3000G.
Bump! I saw this in the documentation for BlueStore also:
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#spdk-usage
Does anyone have any experience?
On Thu, Jun 8, 2017 at 2:27 AM, Li,Datong wrote:
> Hi all,
>
> I'm new to Ceph, and I'd like to know the performance r
On Wed, Jun 28, 2017 at 11:42 PM, YuShengzuo wrote:
> Hi Jason Dillaman,
>
>
>
> I am using rbd-mirror now (release Jewel).
>
>
>
> 1.
>
> Many web pages and other sources introducing rbd-mirror note that the two
> Ceph clusters should have the 'same fsid'.
>
> But nothing seemed bad or wrong when I
On 06/01/2017 05:35 AM, 许雪寒 wrote:
Hi, everyone.
Recently, I’m reading the source code of Monitor. I found that, in
PGMonitor::preprare_pg_stats() method, a callback C_Stats is put into
PGMonitor::waiting_for_finished_proposal. I wonder, if a previous PGMap
incremental is in PAXOS's propose
Here is a background on Ceph striping [1]. By default, RBD will stripe
data with a stripe unit of 4MB and a stripe count of 1. Decreasing the
default RBD image object size will balloon the number of objects in
your backing Ceph cluster but will also result in less data to copy
during snapshot and c
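For illustration, the striping parameters can be set when the image is created, along these lines (names and values are made up, and the exact flags depend on the release):
  rbd create rbd/striped-img --size 10240 \
      --object-size 4M --stripe-unit 64K --stripe-count 8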
On Fri, Apr 21, 2017 at 12:07 PM, Fabian wrote:
> Hi Everyone,
>
> I'm playing around a bit with Ceph on a test cluster with 3 servers (each MON
> and OSD at the same time).
> I use some self-written Ansible rules to deploy the config and create
> the OSDs with ceph-disk. Because ceph-disk uses the next f
mon.* and osd.* sections are not mandatory in config. So unless you want
to set something per daemon, you can skip them completely.
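For instance, a minimal ceph.conf along these lines is usually enough for the daemons and clients to find the cluster (all values are placeholders):
  [global]
  fsid = <your-cluster-fsid>
  mon_initial_members = node1, node2, node3
  mon_host = 192.0.2.1, 192.0.2.2, 192.0.2.3
  public_network = 192.0.2.0/24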
On 17-04-21 19:07, Fabian wrote:
Hi Everyone,
I'm playing around a bit with Ceph on a test cluster with 3 servers (each MON
and OSD at the same time).
I use some self
>
> Sent: Thursday, 13 April 2017 at 20:15
> From: "Trey Palmer"
> To: ceph.nov...@habmalnefrage.de
> Cc: "Trey Palmer" , ceph-us...@ceph.com
> Subject: Re: [ceph-users] Question about RadosGW subusers
>
> Anton,
>
> It turns out that Adam Emerson is
:15
From: "Trey Palmer"
To: ceph.nov...@habmalnefrage.de
Cc: "Trey Palmer" , ceph-us...@ceph.com
Subject: Re: [ceph-users] Question about RadosGW subusers
Anton,
It turns out that Adam Emerson is trying to get bucket policies and roles
merged in time for Luminous:
https://github
Anton,
It turns out that Adam Emerson is trying to get bucket policies and roles
merged in time for Luminous:
https://github.com/ceph/ceph/pull/14307
Given this, I think we will only be using subusers temporarily as a method
to track which human or service did what in which bucket. This seems t
Hey Trey.
Sounds great, we were discussing the same kind of requirements and couldn't
agree on/find something "useful"... so THANK YOU for sharing!!!
It would be great if you could provide some more details or an example of how
you configure the "bucket user" and sub-users and all that stuff.
Even
> rollback???
From: Steve Taylor [mailto:steve.tay...@storagecraft.com]
Sent: 30 March 2017 20:07
To: n...@fisk.me.uk; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Question about unfound objects
One other thing to note with this experience is that we do a LOT of RBD snap
trimming, l
One other thing to note with this experience is that we do a LOT of RBD snap
trimming, like hundreds of millions of objects per day added to our snap_trimqs
globally. All of the unfound objects in these cases were found on other OSDs in
the cluster with identical contents, but associated with di
Good suggestion, Nick. I actually did that at the time. The "ceph osd map"
wasn't all that interesting because the OSDs had been outed and their PGs had
been mapped to new OSDs. Everything appeared to be in order with the PGs being
mapped to the right number of new OSDs. The PG mappings looked f
Hi Steve,
If you can recreate or if you can remember the object name, it might be worth
trying to run "ceph osd map" on the objects and see
where it thinks they map to. And/or maybe pg query might show something?
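A sketch of both checks (pool name, object name and PG id are hypothetical):
  ceph osd map rbd rbd_data.1234abcd.0000000000000000   # shows the PG and the acting OSD set
  ceph pg 3.1f query                                     # dumps peering/recovery details for that PG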
Nick
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behal
On Mon, Mar 20, 2017 at 6:49 PM, Alejandro Comisario
wrote:
> Jason, thanks for the reply, you really got my question right.
> So, some doubts that might show that I lack some general knowledge.
>
> When I read that someone is testing a Ceph cluster with sequential 4k
> block writes, could that happen inside a VM that is using an RBD
Jason, thanks for the reply, you really got my question right.
So, some doubts that might show that I lack some general knowledge.
When I read that someone is testing a Ceph cluster with sequential 4k
block writes, could that happen inside a VM that is using an
RBD-backed OS?
In that case
It's a very broad question -- are you trying to determine something
more specific?
Notionally, your DB engine will safely journal the changes to disk,
commit the changes to the backing table structures, and prune the
journal. Your mileage may vary depending on the specific DB engine and
its configu
anyone ?
On Fri, Mar 17, 2017 at 5:40 PM, Alejandro Comisario
wrote:
> Hi, it's been a while since I've been using Ceph, and I'm still a little
> ashamed that when a certain situation happens, I don't have the knowledge
> to explain or plan things.
>
> Basically, what I don't know is, and I will do an exercis
On 16/02/17 20:44, girish kenkere wrote:
> Thanks David,
>
> It's not quite what I was looking for. Let me explain my question in more
> detail -
>
> This is an excerpt from the CRUSH paper; it explains how the CRUSH algorithm
> running on each client/OSD maps a PG to an OSD during the write operation
> [let's assu
Thanks David,
It's not quite what I was looking for. Let me explain my question in more
detail -
This is an excerpt from the CRUSH paper; it explains how the CRUSH algorithm
running on each client/OSD maps a PG to an OSD during the write operation
[let's assume].
*"Tree buckets are structured as a weighted binary
As a piece of the puzzle, the client always has an up-to-date OSD map (which
includes the crush map). If it's out of date, then it has to get a new one
before it can request to read or write to the cluster. That way the client
will never have old information and if you add or remove storage, t
On 01/20/2017 03:52 AM, Chen, Wei D wrote:
Hi,
I have read through some documents about authentication and user management
about ceph, everything works fine with me, I can create
a user and play with the keys and caps of that user. But I cannot find where
those keys or capabilities are stored, obv
I don't know exactly where but I'm guessing in the database of the
monitor server which should be located at
"/var/lib/ceph/mon/".
Best,
Martin
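You can also dump them through the monitors without touching the store directly, for example:
  ceph auth list                 # all entities with their keys and caps
  ceph auth get client.admin     # a single entity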
On Fri, Jan 20, 2017 at 8:55 AM, Chen, Wei D wrote:
> Hi Martin,
>
> Thanks for your response!
> Could you pls tell me where it is on the monitor nodes?
Hi,
They are stored on the monitor nodes.
Best,
Martin
On Fri, 20 Jan 2017 at 04:53, Chen, Wei D wrote:
> Hi,
>
>
>
> I have read through some documents about authentication and user
> management about ceph, everything works fine with me, I can create
>
> a user and play with the keys and cap
You might want to have a look at this:
https://github.com/camptocamp/ceph-rbd-backup/blob/master/ceph-rbd-backup.py
I have a bash implementation of this, but it basically boils down to
wrapping what peter said: an export-diff to stdout piped to an
import-diff on a different cluster. The "transfer"
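The core of such a wrapper is essentially this pipe (a sketch; image, snapshot and cluster names are made up):
  rbd export-diff --from-snap snap1 rbd/myimage@snap2 - | \
      rbd --cluster backup import-diff - rbd/myimage
The "-" tells rbd to write the diff to stdout and read it from stdin, so nothing is staged on disk in between.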
On 11/01/16 10:22, Peter Maloney wrote:
> On 11/01/16 06:57, xxhdx1985126 wrote:
>> Hi, everyone.
>>
>> I'm trying to write a program based on the librbd API that transfers
>> snapshot diffs between ceph clusters without the need for a temporary
>> storage which is required if I use the "rbd export
On 11/01/16 06:57, xxhdx1985126 wrote:
> Hi, everyone.
>
> I'm trying to write a program based on the librbd API that transfers
> snapshot diffs between Ceph clusters without the need for the temporary
> storage that is required if I use the "rbd export-diff" and "rbd
> import-diff" pair.
You don't
Sorry, sir. I don't quite follow you. I agree that the osds must get the current map to know who to contact so it can catch up. But it looks to me that the osd is getting the current map through get_map(superblock.current_epoch) in which the variable superblock.current_epoch is read f
The osd needs to know where it thought data was, in particular so it knows what
it has. Then it gets the current map so it knows who to talk to so it can catch
back up.
Sent from my iPhone
On Oct 22, 2016, at 7:12 AM, xxhdx1985126 <xxhdx1985...@163.com> wrote:
Hi, everyone.
I'm trying
Paul
-Original Message-
From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
Sent: Friday, September 23, 2016 10:44 AM
To: Paul Nimbley
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Question on RGW MULTISITE and librados
On Thu, Sep 22, 2016 at 1:52 PM, Paul Nimbley wrote:
> Fa
On Thu, Sep 22, 2016 at 1:52 PM, Paul Nimbley wrote:
> Fairly new to ceph so please excuse any misused terminology. We’re
> currently exploring the use of ceph as a replacement storage backend for an
> existing application. The existing application has 2 requirements which
> seemingly can be met
OK, I will try without creating them by myself.
Nevertheless, thanks a lot Christian for your patience; I will try more
clever questions when I'm ready for them.
On 5 Aug 2016 at 02:44, "Christian Balzer" wrote:
Hello,
On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote:
> Maybe you are mispell
Hello,
On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote:
> Maybe you are misspelling it, but in the docs they don't use white space but ':';
> this is quite misleading if it works
>
I'm quoting/showing "ceph-disk", which is called by ceph-deploy, which
indeed uses a ":".
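For reference, the colon-separated form of that era's ceph-deploy looked roughly like this (host and devices are hypothetical):
  ceph-deploy osd prepare node1:sdb:/dev/sdc   # data on sdb, journal on /dev/sdc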
Christian
> On 5 Aug 20
Maybe you are misspelling it, but in the docs they don't use white space but ':';
this is quite misleading if it works
On 5 Aug 2016 at 02:30, "Christian Balzer" wrote:
>
> Hello,
>
> On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
>
> > I am reading half your answer
> >
> > Do you mean that c
Hello,
On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
> I am reading half your answer
>
> Do you mean that Ceph will create the partitions for the journal by itself?
>
Yes, "man ceph-disk".
> If so it's cool and weird...
>
It can be very weird indeed.
If sdc is your data (OSD) disk a
I am reading half your answer
Do you mean that Ceph will create the partitions for the journal by itself?
If so it's cool and weird...
On 5 Aug 2016 at 02:01, "Christian Balzer" wrote:
>
> Hello,
>
> you need to work on your google skills. ^_-
>
> I wrote about his just yesterday and if you se
Yeah, you are right.
From what I understand, using ceph is a good idea,
but the fact is that it didn't work,
so I circumvented that by configuring ceph-deploy to use root.
Was that the main goal? I don't think so.
Thanks for your answer
On 5 Aug 2016 at 02:01, "Christian Balzer" wrote:
>
> He
Hello,
you need to work on your google skills. ^_-
I wrote about this just yesterday and if you search for "ceph-deploy wrong
permission" the second link is the issue description:
http://tracker.ceph.com/issues/13833
So I assume your journal partitions are either pre-made or non-GPT.
Christian
Hello,
On Wed, 13 Jul 2016 18:15:10 + EP Komarla wrote:
> Hi All,
>
> Have a question on the performance of sequential write @ 4K block sizes.
>
Which version of Ceph?
Any significant ceph.conf modifications?
> Here is my configuration:
>
> Ceph Cluster: 6 Nodes. Each node with :-
> 20x
Hi Manuel,
This is a well-known issue. You are definitely not the first one to hit this
problem. Before Jewel I (and others as well) added the line
ceph-disk activate all
to /etc/rc.local to get the OSDs running at boot. In Jewel, however, this
doesn't work anymore. Now I add these lines to /et
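One common alternative with Jewel's systemd packaging is to enable the per-daemon units, roughly like this (the OSD id is hypothetical):
  systemctl enable ceph-osd@3
  systemctl enable ceph.target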
On 07/08/2016 11:59 AM, Manuel Lausch wrote:
> hi,
>
> In the last few days I have been playing around with Ceph Jewel on Debian
> Jessie and CentOS 7. Now I have a question about systemd on these systems.
>
> I installed ceph jewel (ceph version 10.2.2
> (45107e21c568dd033c2f0a3107dec8f0b0e58374)) on debian
> On 13 June 2016 at 16:07, George Shuklin wrote:
>
>
> Hello.
>
> How are objects handled in RBD? If a user writes 16k into an RBD image
> with a 4MB object size, how much will be written on the OSDs? 16k x
> replication or 4MB x replication (+ journals in both cases)?
>
librbd will write
The notice about image format 1 being deprecated was somewhat hidden in the
release notes. Displaying that message when opening an existing format 1
image is overkill and should be removed (at least until we come up with
some sort of online migration tool in a future Ceph release).
[1] https://gi
One more thing:
I haven't seen anything regarding the following message:
# rbd lock list 25091
2016-04-22 19:39:31.523542 7fd199d57700 -1 librbd::image::OpenRequest: RBD
image format 1 is deprecated. Please copy this image to image format 2.
Is it something that I should worry about?
---
Diego Cast
Yeah, I followed the release notes.
Everything is working; I just hit this issue until I enabled the services
individually.
Tks
---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade
2016-04-22 12:24 GMT-03:00 Vasu Kulkarni :
> Hope you followed the release notes and are on 0.94.4
Hope you followed the release notes and are on 0.94.4 or above
http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer
1) upgrade (ensure you don't have a 'ceph' user beforehand)
2) stop the service
/etc/init.d/ceph stop (since you are on centos/hammer)
3) change ownership
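The ownership change boils down to something like this, run on each node with the daemons stopped (a sketch; paths per the release notes):
  chown -R ceph:ceph /var/lib/ceph
  chown -R ceph:ceph /var/log/ceph   # if the log directory is also root-owned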
Mike,
No. It will simply rebuild the degraded PGs which have nothing to do with
the cache tier.
Bob
On Mar 26, 2016 8:13 AM, "Mike Miller" wrote:
> Christian, Bob,
>
> Recovery would be based on placement groups and those degraded groups
>>> would only exist on the storage pool(s) rather than t
Christian, Bob,
Recovery would be based on placement groups and those degraded groups
would only exist on the storage pool(s) rather than the cache tier in
this scenario.
Precisely.
They are entirely different entities.
There may be partially identical data (clean objects) in them, but that
w
On Fri, 25 Mar 2016 14:14:37 -0700 Bob R wrote:
> Mike,
>
> Recovery would be based on placement groups and those degraded groups
> would only exist on the storage pool(s) rather than the cache tier in
> this scenario.
>
Precisely.
They are entirely different entities.
There may be partially id
Mike,
Recovery would be based on placement groups and those degraded groups would
only exist on the storage pool(s) rather than the cache tier in this
scenario.
Bob
On Fri, Mar 25, 2016 at 8:30 AM, Mike Miller
wrote:
> Hi,
>
> in case of a failure in the storage tier, say single OSD disk failu
expected. All nodes fully functional again. Thank you
very much for the help.
Best, Daniel
---
From: Mihai Gheorghe [mailto:mcaps...@gmail.com]
Sent: Wednesday, 10 February 2016 18:51
To: Ivan Grcic
Cc: Balsiger Daniel, INI-INO-ECO-MXT ; ceph-users
Subject: Re: [ceph-users] Question
As far as I know you can do it in two ways (assuming you have a pool size
of 3 on all 3 nodes with min_size 2 to still have access to data):
1. Set noout so the cluster does not start rebalancing (see the sketch
below). Reinstall the OS on the faulty node and redeploy the node with all
keys and conf files (either manual
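The noout part of option 1 is just (a sketch):
  ceph osd set noout      # before taking the node down
  # ... reinstall and redeploy the node ...
  ceph osd unset noout    # once its OSDs have rejoined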
Hi Daniel,
oops, wrong copy paste, here are the correct commands:
ceph osd pool get pool-name size
ceph osd pool set pool-name size 2
On Wed, Feb 10, 2016 at 6:27 PM, Ivan Grcic wrote:
Hello Daniel,
>
> my first question would be: What's your pool size / min_size?
>
> ceph osd pool get pool-
Hello Daniel,
my first question would be: What's your pool size / min_size?
ceph osd pool get pool-name
It is probably 3 (the default size). If you want to have a healthy state
again with only 2 nodes (all the OSDs on node 3 are down), you have to
set your pool size to 2:
ceph osd pool set pool-name
> Thanks for your reply; why not rebuild the object map when the object-map
> feature is enabled?
>
> Cheers,
> xinxin
>
My initial motivation was to avoid a potentially lengthy rebuild when enabling
the feature. Perhaps that option could warn you to rebuild the object map
after it's been enabled.
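For reference, the manual rebuild after enabling the feature is a single command (the image spec is hypothetical):
  rbd object-map rebuild rbd/myimage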
-
Thanks for your reply; why not rebuild the object map when the object-map
feature is enabled?
Cheers,
xinxin
-Original Message-
From: Jason Dillaman [mailto:dilla...@redhat.com]
Sent: Tuesday, October 27, 2015 9:20 PM
To: Shu, Xinxin
Cc: ceph-users
Subject: Re: Question about rbd flag(RBD_FLAG
> Hi Jason dillaman
> Recently I worked on the feature http://tracker.ceph.com/issues/13500. When
> I read the librbd code, I was confused by the RBD_FLAG_OBJECT_MAP_INVALID
> flag.
> When I create an rbd with "--image-features = 13", we enable the object-map
> feature without setting RBD_FLAG_OBJEC
Hello,
There are of course a number of threads in the ML archives about things
like this.
On Sat, 24 Oct 2015 17:48:35 +0200 Mike Miller wrote:
> Hi,
>
> as I am planning to set up a ceph cluster with 6 OSD nodes with 10
> harddisks in each node, could you please give me some advice about
>
My understanding of growing file systems is the same as yours: they can
only grow at the end, not at the beginning. In addition to that, having
partition 2 before partition 1 just cries to me to have it fixed, but
that is just aesthetic.
Because the weigh
Christian,
Thanks for the feedback.
I guess I'm wondering about step 4, "clobber partition, leaving data
intact, and grow partition and the file system as needed".
My understanding of xfs_growfs is that the free space must be at the end
of the existing file system. In this case the existing part
Hello,
On Wed, 16 Sep 2015 07:21:26 -0500 John-Paul Robinson wrote:
> The move journal, partition resize, grow file system approach would
> work nicely if the spare capacity were at the end of the disk.
>
That shouldn't matter, you can "safely" lose your journal in controlled
circumstances.
T
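A controlled way to do that with FileStore looks roughly like this (the OSD id is hypothetical; stop the OSD first):
  service ceph stop osd.12
  ceph-osd -i 12 --flush-journal    # write out everything still in the journal
  # repartition / move the journal, then recreate it:
  ceph-osd -i 12 --mkjournal
  service ceph start osd.12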
So I just realized I had described the partition error incorrectly in my
initial post. The journal was placed at the 800GB mark leaving the 2TB data
partition at the end of the disk. (See my follow-up to Lionel for details.)
I'm working to correct that so I have a single large partition the siz
The move journal, partition resize, grow file system approach would
work nicely if the spare capacity were at the end of the disk.
Unfortunately, the gdisk (0.8.1) end of disk location bug caused the
journal placement to be at the 800GB mark, leaving the largest remaining
partition at the end of
On 16/09/2015 01:21, John-Paul Robinson wrote:
> Hi,
>
> I'm working to correct a partitioning error from when our cluster was
> first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB
> partitions for our OSDs, instead of the 2.8TB actually available on
> disk, a 29% space hit. (Th
- Original Message -
From: "Goncalo Borges"
To: "Shinobu Kinjo" , "John Spray"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 15, 2015 12:39:57 PM
Subject: Re: [ceph-users] Question on cephfs recovery tools
Hi Shinobu
>>> c./ After recovering the cluster, I th
Hi Shinobu
c./ After recovering the cluster, I thought I was in a cephfs situation where
I had
c.1 files with holes (because of lost PGs and objects in the data pool)
c.2 files without metadata (because of lost PGs and objects in the
metadata pool)
What does "files without metadata" m
Hello John...
Thank you for the replies. I do have some comments inline.
Bear with me a bit while I give you some context. Questions will appear
at the end.
1) I am currently running ceph 9.0.3 and I have install it to test the
cephfs recovery tools.
2) I've created a situation where
> In your procedure, the umount problems have nothing to do with
> corruption. It's (sometimes) hanging because the MDS is offline. If
How did you notice that the MDS was offline?
Is it just because the Ceph client could not unmount the filesystem, or something else?
I would like to see logs from the MDS and OSDs. B
er.
e.g.: broken metadata, data, or whatever you're thinking of now.
3. What exactly you did, briefly, not bla bla bla...
4. What you really want to do (briefly)?
Otherwise there would be a bunch of back-and-forth messages.
Shinobu
- Original Message -
From: "John Spray&qu