Hello Paul,
Thanks for your help. The reason I did this in my test/dev environment is to
prepare for my production cluster.
If I set nodown, what will happen while clients read/write on an OSD that was
previously marked down? How can I avoid problems, or is there any document I
can refer to? Thanks!
Hi Everybody,
I am starting a new lab environment with ceph-ansible, bluestore and the lvm
advanced deployment.
Which sizes are recommended for the data, journal/WAL and DB LVs?
Has anyone already configured this with the lvm advanced deployment?
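For context, this is roughly the lvm_volumes layout I have in mind for the
host_vars (the VG/LV names below are only placeholders, not a recommendation):

lvm_volumes:
  - data: data-lv1
    data_vg: vg-hdd1
    db: db-lv1
    db_vg: vg-ssd
    wal: wal-lv1
    wal_vg: vg-ssd

My uncertainty is mainly how large to make the db and wal LVs relative to the
data device.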
Regards,
Fabio
--
Best regards,
Fabio Abreu Reis
http://f
Hm, according to https://tracker.ceph.com/issues/24025 snappy compression
should be available out of the box at least since luminous. What ceph
version are you running?
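If you're not sure, one quick way to check (assuming a reasonably recent
release) is:

ceph versions

which lists the version that every running daemon reports.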
On Wed, 26 Jun 2019 at 21:51, Rafał Wądołowski
wrote:
> We changed these settings. Our config now is:
>
> bluestore_rocksdb_opt
Hi everyone,
Tomorrow's Ceph Tech Talk will be an updated "Intro to Ceph" talk by Sage
Weil. This will be based on a newly refreshed set of slides and provide a
high-level introduction to the overall Ceph architecture, RGW, RBD, and
CephFS.
Our plan is to follow up later this summer with comp
Please disregard the earlier message. I found the culprit:
`osd_crush_update_on_start` was set to false.
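In case anyone else trips over the same thing, this is the setting I had to
turn back on (shown here in ceph.conf form as a sketch):

[osd]
osd crush update on start = true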
*Mami Hayashida*
*Research Computing Associate*
Univ. of Kentucky ITS Research Computing Infrastructure
On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami
wrote:
> I am trying to build a Ce
I am trying to build a Ceph cluster using ceph-deploy. To add OSDs, I used
the following commands (which I had successfully used before to build
another cluster):
ceph-deploy osd create --block-db=ssd0/db0 --data=/dev/sdh osd0
ceph-deploy osd create --block-db=ssd0/db1 --data=/dev/sdi osd0
etc.
On 2019-06-26T14:45:31, Sage Weil wrote:
Hi Sage,
I think that makes sense. I'd have preferred the Oct/Nov target, but
that'd have made Octopus quite short.
Unsure whether freezing in December with a release in March is too long
though. But given how much people scramble, setting that as a goal
Awesome. I made a ticket and pinged the Bluestore guys about it:
http://tracker.ceph.com/issues/40557
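For anyone else who hits a bloated OSD DB, this is roughly what the manual
compaction looks like (OSD id and path are placeholders; only run the offline
variant with the OSD stopped):

# online, while the OSD is running
ceph tell osd.7 compact

# offline, with the OSD stopped
systemctl stop ceph-osd@7
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-7 compact
systemctl start ceph-osd@7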
On Tue, Jun 25, 2019 at 1:52 AM Thomas Byrne - UKRI STFC
wrote:
>
> I hadn't tried manual compaction, but it did the trick. The db shrunk down to
> 50MB and the OSD booted instantly. Thanks!
>
>
March seems sensible to me for the reasons you stated. If a release gets
delayed, I'd prefer it to be on the spring side of Christmas (again for the
reasons already mentioned).
That aside, I'm now very impatient to install Octopus on my 8-node cluster.
: )
On Wed, 26 Jun 2019 at 15:46, Sage Weil
Hi everyone,
We talked a bit about this during the CLT meeting this morning. How about
the following proposal:
- Target release date of Mar 1 each year.
- Target freeze in Dec. That will allow us to use the holidays to do a
lot of testing when the lab infrastructure tends to be somewhat idl
G'Day everyone.
I'm about to try my first OSDs with a split data drive and journal on an SSD,
using some Intel S3500 600GB SSDs I have spare from a previous project. Now I
would like to make sure that the 300GB journal fits, but my question is whether
that 300GB is 300 * 1000 or 300 * 1024? The
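For reference, the unit arithmetic in question (standard decimal vs binary
definitions, nothing drive-specific):

600 GB (vendor, decimal) = 600 * 10^9 bytes ≈ 558.8 GiB
300 GiB = 300 * 2^30 bytes ≈ 322.1 GB (decimal)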
On Wed, 26 Jun 2019, Alfonso Martinez Hidalgo wrote:
> I think March is a good idea.
Spring had a slight edge over fall in the twitter poll (for whatever
that's worth). I see the appeal for fall when it comes to down time for
retailers, but as a practical matter for Octopus specifically, a tar
On Tue, 25 Jun 2019, Alfredo Deza wrote:
> On Mon, Jun 17, 2019 at 4:09 PM David Turner wrote:
> >
> > This was a little long to respond with on Twitter, so I thought I'd share
> > my thoughts here. I love the idea of a 12 month cadence. I like October
> > because admins aren't upgrading product
Have you tried: ceph osd force-create-pg ?
If that doesn't work: use objectstore-tool on the OSD (while it's not
running) and use it to force mark the PG as complete. (Don't know the exact
command off the top of my head)
Caution: these are obviously really dangerous commands
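Roughly what that looks like, from memory (so double-check the
ceph-objectstore-tool man page first; the OSD id and pgid are placeholders):

systemctl stop ceph-osd@7
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --pgid 1.2a --op mark-complete
systemctl start ceph-osd@7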
Paul
--
Paul E
Looks like it's overloaded and runs into a timeout. For a test/dev
environment, try setting the nodown flag for this experiment if you just
want to ignore these timeouts completely.
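Setting and clearing it is just:

ceph osd set nodown
ceph osd unset nodown

Remember to unset it afterwards, since it also hides OSDs that are genuinely
dead.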
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseni
Device classes are implemented with magic invisible crush trees; you've got
two completely independent trees internally: one for crush rules mapping to
HDDs, one for legacy crush rules that don't specify a device class.
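You can see these shadow trees directly with (assuming Luminous or later):

ceph osd crush tree --show-shadow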
The balancer *should* be aware of this and ignore it, but I'm not sure
about the st
Hi all,
I started a Ceph cluster on my machine in development mode to estimate the
recovery time after increasing pgp_num.
All of the daemons run on one machine.
CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Memory: 377GB
OS: CentOS Linux release 7.6.1810
Ceph version: hammer
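The change I'm testing is simply (pool name and PG count below are
placeholders):

ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256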
Have I missed a step? Diskprediction module is not working for me.
root@cnx-11:/var/log/ceph# ceph device show-prediction-config
no valid command found; 10 closest matches:
root@cnx-11:/var/log/ceph# ceph mgr module ls
{
    "enabled_modules": [
        "dashboard",
        "diskprediction_cloud"
We changed these settings. Our config now is:
bluestore_rocksdb_options =
"compression=kSnappyCompression,max_write_buffer_number=16,min_write_buffer_number_to_merge=3,recycle_log_file_num=16,compaction_style=kCompactionStyleLevel,write_buffer_size=50331648,target_file_size_base=50331648,max_backg
Hi,
I tried to enable the ceph balancer on a 12.2.12 cluster and got this:
mgr[balancer] Some osds belong to multiple subtrees: [0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 4