…bytes.
Am I reading this incorrectly?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chicago.com>
Thanks again Gregory!
One more quick question. If I raise the number of PGs for a pool, will this
REMOVE any data from the full OSD? Or will I have to take the OSD out and put
it back in to realize this benefit? Thanks!
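For reference: as I understand it, raising pg_num only splits PGs in place;
data doesn't move off the full OSD until pgp_num is raised to match. A sketch,
with "rbd" standing in for the pool name:

    ceph osd pool set rbd pg_num 3000     # split existing PGs; no data movement yet
    ceph osd pool set rbd pgp_num 3000    # let the new PGs be placed, triggering the rebalance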
--
Kevin Weiler
IT
Thanks Gregory,
One point that was a bit unclear in the documentation is whether this
equation for PGs applies to a single pool or to the entirety of the pools.
Meaning: if I calculate 3000 PGs, should each pool have 3000 PGs, or should
all the pools ADD UP to 3000 PGs? Thanks!
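For anyone searching later, this is the rule of thumb from the docs under the
cluster-wide reading, with made-up numbers:

    total_pgs = (num_osds * 100) / replica_count
    # e.g. 90 OSDs * 100 / 3 replicas = 3000 PGs, shared across all pools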
--
Kevin Weiler
IT
…8 << 20
I assume this is in bytes.
--
Kevin Weiler
IT
All of the disks in my cluster are identical and therefore all have the same
weight (each drive is 2TB, and the automatically generated weight is 1.82 for
each one).
Would the procedure here be to reduce the weight, let it rebalance, and then
put the weight back to where it was?
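A sketch of the procedure I have in mind, assuming osd.12 is the full one;
note this is the 0-1 override weight, separate from the 1.82 CRUSH weight:

    ceph osd reweight 12 0.8    # temporarily push data off osd.12
    # ...wait for the cluster to finish rebalancing, then:
    ceph osd reweight 12 1.0    # restore the original override weight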
--
Kevin Weiler
IT
Hi guys,
I have an OSD in my cluster that is near full at 90%, but we're using a little
less than half the available storage in the cluster. Shouldn't this be balanced
out?
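Two read-only checks that might localize the imbalance; the comments are my
interpretation of the output:

    ceph osd tree        # confirm every OSD really carries the same ~1.82 weight
    ceph pg dump osds    # per-OSD usage statistics, to spot the outlier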
--
Kevin Weiler
IT
Thanks Kyle,
What's the unit for osd recovery max chunk?
Also, how do I find out what my current values are for these OSD options?
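One way to inspect the running values, assuming osd.0's admin socket on the
local host:

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_recovery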
--
Kevin Weiler
IT
…so that our VMs don't go down when there is a problem with the cluster?
--
Kevin Weiler
IT
Hi Josh,
We did map it directly to the host, and it seems to work just fine. I
think this is a problem with how the container is accessing the rbd module.
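For reference, the host-side mapping that works; the pool/image names are
placeholders:

    modprobe rbd
    rbd map rbd/myimage    # the device then shows up as /dev/rbd<N> on the host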
--
Kevin Weiler
IT
The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did,
however, try a map with an RBD that was format 2. I got the same error.
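One way to double-check an image's format, with the image name as a
placeholder:

    rbd info rbd/myimage    # the "format:" line in the output shows 1 or 2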
--
Kevin Weiler
IT
…messages on either the container or the host box. Any ideas on how to
troubleshoot this?
Thanks!
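A couple of first-pass checks, offered as guesses since the failure is silent:

    dmesg | tail -n 20         # kernel-side rbd errors usually land here
    grep -w rbd /proc/modules  # confirm the rbd module is visible from the container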
--
Kevin Weiler
IT
…: NOKEY
/usr/bin/env
gdisk
or
pushy >= 0.5.3
python(abi) = 2.7
python-argparse
python-distribute
python-pushy >= 0.5.3
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
It seems to require both pushy AND python-pushy.
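A listing like the one above can be reproduced with rpm's requires query; the
filename here is a placeholder:

    rpm -qpR ceph-deploy-*.noarch.rpm    # print the dependencies the package declares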
--
Kevin Weiler
IT
gpgcheck=0
proxy=_none_
metadata_expire=0
--
Kevin Weiler
IT
…the correct version). The spec file looks fine in the ceph-deploy git repo;
maybe you just need to rerun the package/repo generation? Thanks!
--
Kevin Weiler
IT
Hi again Ceph devs,
I'm trying to deploy Ceph using Puppet, and I'm hoping to add my OSDs
non-sequentially. I spoke with dmick on #ceph about this, and we both agreed
it doesn't seem possible given the documentation. However, I have an example
of a Ceph cluster that was deployed using ceph-deploy…
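For illustration only, the sort of static layout I mean, with hypothetical
host names and deliberately non-sequential IDs in ceph.conf:

    [osd.3]
        host = node-a
    [osd.17]
        host = node-b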
…when creating the client.admin key so that it doesn't need capabilities?
Thanks again!
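For context, the documented manual route attaches the caps at key-creation
time; the path and caps below are the usual defaults, not necessarily what we
use:

    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
        --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'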
--
Kevin Weiler
IT
…camelot on camelot...
=== mds.camelot ===
Starting Ceph mds.camelot on camelot...
starting mds.camelot at :/0
[root@camelot ~]# ceph auth get mon.
access denied
If someone could tell me what I'm doing wrong, it would be greatly appreciated.
Thanks!
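One guess at a workaround, assuming the default mon data path: point the
client at the monitor's own keyring.

    ceph -n mon. -k /var/lib/ceph/mon/ceph-camelot/keyring auth get mon.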
--
Kevin Weiler
IT