For the second issue, I found the answer in the docs:
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location.
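For anyone searching the archives later, the command that section describes is of
this form (pool and object names are just placeholders):

    ceph osd map <pool-name> <object-name>

e.g. 'ceph osd map data myobject' prints the PG the object maps to and the set of
OSDs currently serving that PG.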
Thanks,
Guang
On Oct 8, 2013, at 8:43 PM, Guang wrote:
> Hi ceph-users,
> After walking through the operations document, I still have several questions
Hello,
I have a Ceph test cluster with 88 OSDs running well.
In ceph.conf I found multiple OSDs that are using the same SSD block
device (without a file system) for their journal:
[osd.214]
osd journal = /dev/fioa1
...
[osd.314]
osd journal = /dev/fioa1
...
Is this a valid configuration, i.e. can multiple OSDs share the same journal device like this?
Hello to all,
I can't succeed in using the Admin Ops REST API for radosgw.
Where can I find an example, in any language (Perl, Python, Bash) ?
For instance, how to proceed to get info for user xxx ?
Via the CLI I do: radosgw-admin user info --uid=xxx
but how do I do it with the REST API?
Thanks for your answers.
Alexis
Hi
When I compare /etc/ceph.conf for the latest release (dumpling) and
previous releases, I find they are different.
In the older releases we had [osd], [mon] and [mds] sections in ceph.conf;
now I don't see them. Where are these values stored now?
How does Ceph figure out the partitions for the OSDs and journals?
I have noticed this as well when using ceph-deploy to configure Ceph.
From what I can tell it just creates symlinks from the default OSD location
at /var/lib/ceph. Same for the journal: if it is on a different device, a
symlink is created from the data dir.
Then it appears the osds are just defined i
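For what it's worth, on a ceph-deploy/ceph-disk provisioned node this typically
looks something like the following (device paths are illustrative):

    $ ls -l /var/lib/ceph/osd/ceph-0/journal
    lrwxrwxrwx 1 root root 9 Oct  9 12:00 /var/lib/ceph/osd/ceph-0/journal -> /dev/sdb2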
Sent from my iPhone
Hi,
What is the estimated storage usage for a monitor (i.e. the amount of
data stored in /var/lib/ceph/mon/ceph-mon01)?
Currently in my starting test system it's something like 40M (du -s
-h /var/lib/ceph/mon/ceph-mon01), but that will probably grow with the
number of OSDs.
Are there some numbers available?
Hi,
Can someone share their experience with monitoring a Ceph cluster? How is
it going with the work mentioned here:
http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/ceph_stats_and_monitoring_tools
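A minimal polling sketch for anyone in the same boat (assumes the ceph CLI and an
admin keyring are available on the monitoring host; the exact JSON layout varies
between releases):

    import json, subprocess

    # Ask the cluster for its status in machine-readable form.
    raw = subprocess.check_output(["ceph", "status", "--format", "json"])
    status = json.loads(raw.decode("utf-8"))

    # Dumpling-era layout: overall health and PG counts live here.
    print(status["health"]["overall_status"])   # HEALTH_OK / HEALTH_WARN / ...
    print(status["pgmap"]["num_pgs"], "PGs")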
Thanks,
Guang
On 09/10/13 13:38, Kees Bos wrote:
Hi,
What is the estimated storage usage for a monitor (i.e. the amount of
data stored in /var/lib/ceph/mon/ceph-mon01)
Currently in my starting test system it's something like 40M (du -s
-h /var/lib/ceph/mon/ceph-mon01), but that will probably grow with the
nu
help
On Wed, 2013-10-09 at 15:18 +0200, Joao Eduardo Luis wrote:
> On 09/10/13 13:38, Kees Bos wrote:
> > Hi,
> >
> > What is the estimated storage usage for a monitor (i.e. the amount of
> > data stored in /var/lib/ceph/mon/ceph-mon01)
> >
> > Currently in my starting test system it's something like 40
On 09/10/2013 15:24, Erwan Velu wrote:
help
Sorry... was about to send it to ceph-users-requ...@lists.ceph.com
/me hides
> Via the CLI I do: radosgw-admin user info --uid=xxx
> but how do I do it with the REST API?
Hi Alexis,
Here is a simple Python example of how to use the admin API. You will
need to get a few packages from the cheese shop (virtualenv + pip makes
this easy):
pip install requests-aws
You will also need to set the ap
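A minimal sketch along those lines (endpoint, keys and uid are placeholders; the
requesting user needs the relevant admin caps, e.g. users=read):

    import requests
    from awsauth import S3Auth   # provided by the requests-aws package

    host = 'rgw.example.com'            # your radosgw endpoint (placeholder)
    access_key = 'ADMIN_ACCESS_KEY'     # keys of a user with admin caps
    secret_key = 'ADMIN_SECRET_KEY'

    # GET /admin/user is the Admin Ops call behind 'radosgw-admin user info'.
    r = requests.get('http://%s/admin/user' % host,
                     params={'format': 'json', 'uid': 'xxx'},
                     auth=S3Auth(access_key, secret_key, host))
    r.raise_for_status()
    print(r.json())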
Great!
Thanks a lot. It works.
I didn't know about the awsauth module.
Thanks again.
2013/10/9 Derek Yarnell :
>> Via the CLI I do: radosgw-admin user info --uid=xxx
>> but how do I do it with the REST API?
>
> Hi Alexis,
>
> Here is a simple python example on how to use the admin api. You will
> need to get a few packages fr
You can add PGs; the process is called splitting. I don't think PG merging
(the reduction in the number of PGs) is ready yet.
On Oct 8, 2013, at 11:58 PM, Guang wrote:
> Hi ceph-users,
> Ceph recommends the PGs number of a pool is (100 * OSDs) / Replicas, per my
> understanding, the number o
Hi,
to avoid confusion: the configuration did *not* contain
multiple osds referring to the same journal device (or file).
The snippet from ceph.conf suggests osd.214 and osd.314
both use the same journal -
but it doesn't show that these osds run on different hosts.
Regards
Andreas Bluemle
On
All, I have been prototyping an object store and am looking at a way to index
content and metadata. Has anyone looked at doing anything similar? I would be
interested in kicking around some ideas. I'd really like to implement something
with Apache Solr or something similar.
Thanks, Mike
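For the sake of discussion, a hypothetical sketch of pushing per-object metadata
into Solr with pysolr (the core name and the *_s/*_i/*_dt dynamic-field suffixes
are just illustrative conventions, not anything Ceph-specific):

    import pysolr   # pip install pysolr

    solr = pysolr.Solr('http://localhost:8983/solr/objects', timeout=10)

    # Index one object's metadata as a Solr document.
    solr.add([{
        'id': 'bucket1/photo-0001.jpg',     # object key as the document id
        'bucket_s': 'bucket1',
        'content_type_s': 'image/jpeg',
        'size_i': 482113,
        'uploaded_dt': '2013-10-09T12:00:00Z',
    }])

    # Query it back, e.g. everything in one bucket.
    for doc in solr.search('bucket_s:bucket1'):
        print(doc['id'])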
I would also love to see this answered. This is sometimes asked during my Geek
on Duty shift and I don't know a real answer to it; I myself always do it the
old (bobtail) style.
Wolfgang
--
http://www.wogri.at
On Oct 9, 2013, at 13:54 , su kucherova wrote:
> Hi
>
> When I compare the /et
Previously the Ceph startup scripts required an enumeration of the
daemons in the ceph.conf in order to start them. We've been doing a
lot of incremental work since last October or so to make the system do
more self-bootstrapping, and by the time we released Dumpling that got
far enough to be used
OK, thanks for the detailed answer, I had already assumed so.
But how do the OSDs then find their mons? I believe this again has to be in
ceph.conf, right?
wogri
--
http://www.wogri.at
On Oct 9, 2013, at 21:36 , Gregory Farnum wrote:
> Previously the Ceph startup scripts required an enumerati
Yes, the monitors need to be specified in the ceph.conf still.
ceph-deploy and similar systems make sure to do so.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Oct 9, 2013 at 12:42 PM, Wolfgang Hennerbichler wrote:
> Ok, thanks for the detailed answer, I already ass
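For reference, what ceph-deploy writes typically looks roughly like this in
[global] (names and addresses are placeholders):

    [global]
    fsid = <cluster fsid>
    mon_initial_members = mon01, mon02, mon03
    mon_host = 192.168.1.11, 192.168.1.12, 192.168.1.13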
I was poking around on a node, found the following executables:
radosgw-all-starter
ceph-mds-all-starter
ceph-mon-all-starter
ceph-osd-all-starter
A Ceph web page search yielded no results. Does documentation exist? Where?
Tim
A journal on SSD should effectively double your throughput, because data will
not be written to the same device twice to ensure transactional integrity.
Additionally, by placing the OSD journal on an SSD you should see less
latency, since the disk head no longer has to seek back and forth between the
journal and data areas.
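As a rough illustration (numbers only for the sake of argument): a spinning disk
that sustains about 100 MB/s of sequential writes delivers at most about 50 MB/s
of client throughput with a collocated journal, because every write is committed
to the journal and then written again to the data partition on the same disk.
Moving the journal to an SSD removes that second copy (and the associated
seeking) from the spinner entirely.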
You can certainly use a similarly named device to back an OSD journal if
the OSDs are on separate hosts. If you want to take a single SSD device and
use it as a journal for many OSDs on the same machine, then you would
want to partition the SSD device and use a different partition for each OSD
journal.
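For example (partition names are purely illustrative):

    [osd.214]
    osd journal = /dev/fioa1
    [osd.314]
    osd journal = /dev/fioa2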
While in theory this should be true, I'm not finding it to be the case for a
typical enterprise LSI card with 24 drives attached. We tried a variety of
ratios and went back to collocated journals on the spinning drives.
Eagerly awaiting the tiered performance changes to implement a faster tier
Thanks Mike.
Is there any documentation for that?
Thanks,
Guang
On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote:
> You can add PGs, the process is called splitting. I don't think PG merging,
> the reduction in the number of PGs, is ready yet.
>
> On Oct 8, 2013, at 11:58 PM, Guang wrote:
>
>>
There used to be; I can't find it right now. Something like 'ceph osd set pg_num <n>'
then 'ceph osd set pgp_num <n>' to actually move your data into the
new PGs. I successfully did it several months ago, when bobtail was current.
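For what it's worth, the pool-scoped form of those commands (pool name and counts
are placeholders) is roughly:

    ceph osd pool set <pool-name> pg_num <new-pg-count>
    ceph osd pool set <pool-name> pgp_num <new-pg-count>

Setting pgp_num is what actually triggers the data movement into the new PGs.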
Sent from my iPad
> On Oct 9, 2013, at 10:30 PM, Guang wrote:
>
> T
Thanks Mike. I get your point.
There are still a few things confusing me:
1) We expand the Ceph cluster by adding more OSDs, which will trigger re-balancing
of PGs across the old & new OSDs, and will likely break the optimized PG
number for the cluster.
2) We can add more PGs, which will trigger
I had those same questions. I think the answer I got was that it is better to
have too few PGs than to have overloaded OSDs, so add OSDs first, then add PGs.
I don't know the best increments to grow in; it probably depends largely on the
hardware in your OSDs.
Sent from my iPad
> On Oct 9, 2013,
Ceph deployed by ceph-deploy on Ubuntu uses upstart.
On Wed, Oct 9, 2013 at 1:48 PM, Snider, Tim wrote:
> I was poking around on a node, found the following executables:
>
> radosgw-all-starter
>
> ceph-mds-all-starter
>
> ceph-mon-all-starter
>
> ceph-osd-all-starter
>
> A Ceph web page search
Upstart itself could do with better docs :-(
I'd recommend starting with 'man initctl', should help clarify things a bit!
Cheers
Mark
On 10/10/13 17:50, John Wilkins wrote:
Ceph deployed by ceph-deploy on Ubuntu uses upstart.
On Wed, Oct 9, 2013 at 1:48 PM, Snider, Tim wrote:
I was poking
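For anyone poking at this, the ceph upstart jobs can be driven directly (assuming
an Ubuntu/upstart deployment; the ids below are placeholders):

    sudo initctl list | grep ceph     # list the ceph jobs and their state
    sudo start ceph-osd id=2          # start a single OSD
    sudo start ceph-mon id=mon01      # start a monitor (id is usually the hostname)
    sudo start ceph-all               # start every ceph daemon configured on this node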
Hi,
I've managed to get Ceph into an unhealthy state, from which it will not
recover automatically. I've done some 'ceph osd out X' and stopped
ceph-osd processes before the rebalancing was completed. (All in a test
environment :-) )
Now I see:
# ceph -w
cluster 7fac9ad3-455e-4570-ae24-5c431176
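A first round of triage that usually applies to this kind of state (ids are
placeholders; the exact fix depends on what ceph health reports):

    ceph health detail     # shows which PGs are stuck/degraded and why
    ceph osd tree          # shows which OSDs are down or out
    ceph osd in <id>       # put an OSD that was only marked out back in
    # ...then restart any ceph-osd daemons you stopped, e.g.
    # 'service ceph start osd.<id>' or the upstart equivalent.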
Warning information:
common/Preforker.h: In member function ‘void Preforker::daemonize()’:
common/Preforker.h:97:40: warning: ignoring return value of ‘ssize_t
write(int, const void*, size_t)’, declared with attribute
warn_unused_result [-Wunused-result]
test/encoding/ceph_dencoder.cc: In functi