>>https://github.com/rochaporto/collectd-ceph
>>
>>It has a set of collectd plugins pushing metrics which mostly map what
>>the ceph commands return. In the setup we have it pushes them to
>>graphite and the displays rely on grafana (check for a screenshot in
>>the link above).
Thanks for sh
Dear ceph,
I am trying to set up ceph 0.80.1 with the following components:
1 x mon - Debian Wheezy (i386)
3 x osds - Debian Wheezy (i386)
(all are kvm powered)
Status after the standard setup procedure:
root@ceph-node2:~# ceph -s
cluster d079dd72-8454-4b4a-af92-ef4c424d96d8
health H
Hi,
I have four old machines lying around. I would like to set up ceph on
these machines.
Are there any screencasts or tutorials with commands on how to obtain,
install and configure ceph on these machines?
The official documentation page "OS Recommendations" seems to list only
old distros and
Hi Simon,
thanks for your reply.
I already installed OS for my ceph-nodes via Kickstart (via network) from
Redhat Satellite and I don't want to do that again because some other config had
also been done.
xfsprogs is not part of the rhel base repository but of some extra package with
costs per no
Sent from my iPhone
> On 22 May 2014, at 22:26, Gregory Farnum wrote:
>
>> On Thu, May 22, 2014 at 5:04 AM, Geert Lindemulder
>> wrote:
>> Hello All
>>
>> Trying to implement the osd leveldb backend at an existing ceph test
>> cluster.
>> The test cluster was updated from 0.72.1 to 0.80.1. The update was ok.
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf
> of Sankar P
> Sent: Friday, 23 May 2014 11:14
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Screencast/tutorial on setting up Ceph
>
> Hi,
>
> I have four old machines ly
On 22.05.2014 15:36, Yehuda Sadeh wrote:
On Thu, May 22, 2014 at 6:16 AM, Georg Höllrigl
wrote:
Hello List,
Using the radosgw works fine, as long as the amount of data doesn't get too
big.
I have created one bucket that holds many small files, separated into
different "directories". But whene
Try increasing the placement groups for pools
ceph osd pool set data pg_num 128
ceph osd pool set data pgp_num 128
similarly for other 2 pools as well.
- karan -
On 23 May 2014, at 11:50, jan.zel...@id.unibe.ch wrote:
> Dear ceph,
>
> I am trying to setup ceph 0.80.1 with the following com
use my blogs if you like
http://karan-mj.blogspot.fi/2013/12/ceph-storage-part-2.html
- Karan Singh -
On 23 May 2014, at 12:30,
wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf
>> of Sankar P
>> Sent: Friday, 23 Ma
Thank you very much - I think I've solved the whole thing. It wasn't in
radosgw.
The solution was:
- increase the timeout in the Apache conf.
- when using haproxy, also increase the timeouts there!
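For anyone hitting the same thing, the kind of change involved looks roughly like this (a sketch only; the exact directives depend on your Apache/haproxy setup, and the values are illustrative):

# Apache (httpd.conf / apache2.conf)
Timeout 300

# haproxy.cfg, defaults section
timeout client 300s
timeout server 300s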
Georg
On 22.05.2014 15:36, Yehuda Sadeh wrote:
On Thu, May 22, 2014 at 6:16 AM, Georg Höllrigl
wr
64 PGs per pool /shouldn't/ cause any issues while there are only 3
OSDs. It'll be something to pay attention to if a lot more get added
though.
Your replication setup is probably set to something other than host.
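A rough sketch of how to check (output file names are just placeholders):

ceph osd getcrushmap -o crushmap.bin        # dump the compiled crush map
crushtool -d crushmap.bin -o crushmap.txt   # decompile it to text
grep chooseleaf crushmap.txt                # look for e.g. "step chooseleaf firstn 0 type host"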
You'll want to extract your crush map then decompile it and see if your
"step" is set t
Hi Ricardo,
Let me share a few notes on metrics in calamari:
* We're bundling graphite, and using diamond to send home metrics.
The diamond collector used in calamari has always been open source
[1].
* The Calamari UI has its own graphs page that talks directly to the
graphite API (the calamari
On 22.05.2014 17:30, Craig Lewis wrote:
On 5/22/14 06:16 , Georg Höllrigl wrote:
I have created one bucket that holds many small files, separated into
different "directories". But whenever I try to access the bucket, I
only run into some timeout. The timeout is at around 30 - 100 seconds.
This
Hi Mike,
Sorry I missed this message. Are you able to reproduce the problem? Does it
always happen when you logrotate --force or only sometimes?
Cheers
On 13/05/2014 21:23, Gregory Farnum wrote:
> Yeah, I just did so. :(
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
On 13/05/2014 20:10, Gregory Farnum wrote:
> On Tue, May 13, 2014 at 9:06 AM, Mike Dawson wrote:
>> All,
>>
>> I have a recurring issue where the admin sockets
>> (/var/run/ceph/ceph-*.*.asok) may vanish on a running cluster while the
>> daemons keep running
>
> Hmm.
>
>> (or restart without m
Hi,
if you use Debian,
try to use a recent kernel from backports (>3.10)
also check your libleveldb1 version, it should be 1.9.0-1~bpo70+1 (the Debian
wheezy version is too old)
I don't see it in ceph repo:
http://ceph.com/debian-firefly/pool/main/l/leveldb/
(only for squeeze ~bpo60+1)
but you c
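Roughly, on Wheezy that would be something like the following (assuming wheezy-backports is already enabled in sources.list; the kernel metapackage name depends on your architecture):

dpkg -l libleveldb1                                       # check the installed version (looking for 1.9.0-1~bpo70+1)
apt-get -t wheezy-backports install linux-image-686-pae   # example kernel metapackage for i386; pick the one for your arch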
Hello Greg and Haomai,
Thanks for the answers.
I was trying to implement the osd leveldb backend on an existing ceph
test cluster.
At the moment I am removing the OSDs one by one and recreating them with
the objectstore = keyvaluestore-dev option in place in ceph.conf.
This works fine and the back
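For anyone following along, that option goes into ceph.conf roughly like this (a sketch based on the option name mentioned above; placing it under [osd] is an assumption):

[osd]
        osd objectstore = keyvaluestore-dev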
> -Original Message-
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Friday, 23 May 2014 13:20
> To: Zeller, Jan (ID)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck
> unclean
>
> Hi,
>
> if you use debi
>>Thanks Alexandre, due to this I'll try the whole setup on Ubuntu 12.04.
>>Maybe it's going to be a bit easier...
Yes, I think you can use the latest Ubuntu LTS; I think ceph 0.79 is officially
supported, so it should not be a problem for firefly.
- Original Message -
De: "jan zeller"
Hi All,
I would like to create a function for getting the user details by passing a
user id (id) using PHP and curl. I am planning to pass the user id as
'admin' (admin is a user which is already there) and get the details of
that user. Could you please tell me how we can create the authenticat
Best Wishes!
> On 23 May 2014, at 19:27, Geert Lindemulder wrote:
>
> Hello Greg and Haomai,
>
> Thanks for the answers.
> I was trying to implement the osd leveldb backend on an existing ceph
> test cluster.
>
> At the moment I am removing the OSDs one by one and recreating them with
> the objectstore
I want to know/read the object ID assigned by ceph to a file which I
transferred via crossftp.
How can I read the 64-bit object ID?
In Firefly, I added the lines below to the [global] section in ceph.conf; however,
after creating the cluster, the pg num of the default pools metadata/data/rbd is
still over 900, not 375. Any suggestions?
osd pool default pg num = 375
osd pool default pgp num = 375
Hi!
I can't find any information about ceph osd pool snapshots, except for the
commands mksnap and rmsnap.
What features do snapshots enable? Can I do things such as
diff-export/import just like rbd can?
Thanks!
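For context, the pool-level snapshot commands mentioned above look roughly like this (pool, snapshot and object names are placeholders):

ceph osd pool mksnap mypool mysnap
ceph osd pool rmsnap mypool mysnap
rados -p mypool -s mysnap get myobject /tmp/myobject   # read an object back from the snapshot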
Thanks for your tips & tricks.
This setup is now based on Ubuntu 12.04, ceph version 0.80.1.
Still using
1 x mon
3 x osds
root@ceph-node2:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
-2      0
Hi Yehuda
On 23/05/14 02:25, Yehuda Sadeh wrote:
> That looks like a bug; generally the permission checks there are
> broken. I opened issue #8428, and pushed a fix on top of the
> firefly branch to wip-8428.
I cherry picked the fix and tested - L
Those settings are applied when creating new pools with "osd pool
create", but not to the pools that are created automatically during
cluster setup.
We've had the same question before
(http://comments.gmane.org/gmane.comp.file-systems.ceph.user/8150), so
maybe it's worth opening a ticket to do som
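For example, a pool created by hand afterwards can simply be given the intended counts (pool name and numbers are placeholders):

ceph osd pool create mypool 375 375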
Thanks:-) That helped.
Thanks & Regards,
Sharmila
On Thu, May 22, 2014 at 6:41 PM, Alfredo Deza wrote:
> Hopefully I am not late to the party :)
>
> But ceph-deploy recently gained a `osd list` subcommand that does this
> plus a bunch of other interesting metadata:
>
> $ ceph-deploy osd list no
Hello,
I'm running a 3-node cluster with 2 hdd/osd and one mon on each node.
Sadly the fsyncs done by the mon processes eat my hdds.
I was able to avoid this impact by moving the mon data dir to ramfs.
This should work as long as at least 2 nodes are running, but I want to implement
some kind of disaste
Hi,
I think you’re rather brave (sorry, foolish) to store the mon data dir in
ramfs. One power outage and your cluster is dead. Even with good backups of the
data dir I wouldn't want to go through that exercise.
That said, we had a similar disk-io-bound problem with the mon data dirs, and
sol
Hi,
I am trying to do some network control on the storage nodes. For this, I
need to know the ports opened for communication by each OSD process.
I learned from the link
http://ceph.com/docs/master/rados/configuration/network-config-ref/ that
each OSD process requires 3 ports and they from
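A rough sketch of checking which ports the OSDs actually bound to, and of pinning the range in ceph.conf (values are illustrative):

ceph osd dump | grep ^osd          # public/cluster address and port of each OSD
netstat -tlnp | grep ceph-osd      # listening sockets on a storage node

# and in ceph.conf, to restrict the range the daemons may use:
[global]
        ms bind port min = 6800
        ms bind port max = 7100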
On 05/23/2014 04:09 PM, Dan Van Der Ster wrote:
Hi,
I think you’re rather brave (sorry, foolish) to store the mon data dir in
ramfs. One power outage and your cluster is dead. Even with good backups of the
data dir I wouldn't want to go through that exercise.
Agreed. Foolish. I'd never do th
For what it's worth (very little in my case)...
Since the cluster wasn't in production yet and Firefly (0.80.1) hit
Debian Jessie today, I upgraded it.
Big mistake...
I did the recommended upgrade song and dance, MONs first, OSDs after that.
Then applied "ceph osd crush tunables default" as
Hi,
On 23.05.2014 at 16:09, Dan Van Der Ster wrote:
> Hi,
> I think you’re rather brave (sorry, foolish) to store the mon data dir in
> ramfs. One power outage and your cluster is dead. Even with good backups of
> the data dir I wouldn't want to go through that exercise.
>
I know - I’m stil
Hi,
> On 23.05.2014 at 17:31, "Wido den Hollander" wrote:
>
> I wrote a blog about this:
> http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/
So you assume restoring the old data works, or did you prove this?
Fabian
Hey cephers,
Just wanted to let you know that the schedule has been posted for Ceph
Day Boston happening on 10 June at the Sheraton Boston, MA:
http://www.inktank.com/cephdays/boston/
There are still a couple of talk title tweaks that are pending, but I
wanted to get the info out as soon as poss
Yesterday I went through manually configuring a ceph cluster with a
rados gateway on centos 6.5, and I have a question about the
documentation. On this page:
https://ceph.com/docs/master/radosgw/config/
It mentions "On CentOS/RHEL distributions, turn off print continue. If
you have it set to tru
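For reference, that setting is just a line in the gateway's section of ceph.conf, roughly like this (the client section name depends on how your radosgw instance is named):

[client.radosgw.gateway]
        rgw print continue = false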
The other thing to note, too, is that it appears you're trying to decrease the
PG/PGP_num parameters, which is not supported. In order to decrease those
settings, you'll need to delete and recreate the pools. All new pools created
will use the settings defined in the ceph.conf file.
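Very roughly, that looks like the following (pool name and pg counts are placeholders; note that deleting a pool destroys its data):

ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
ceph osd pool create mypool 128 128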
-Orig
Hi !
I have failover clusters for some applications. Generally with 2 members
configured with Ubuntu + Drbd + Ext4. For example, my IMAP cluster works
fine with ~50k email accounts and my HTTP cluster hosts ~2k sites.
See the design here: http://adminlinux.com.br/cluster_design.txt
I would like
On 5/22/14 11:51 , Győrvári Gábor wrote:
Hello,
I got this kind of log on two nodes of a 3-node cluster. Both nodes have 2
OSDs, but only 2 OSDs on two separate nodes are affected, which is why I don't
understand the situation. There wasn't any extra IO on the system at
the given time.
Using radosgw with the S3 API to
If you're not using CephFS, you don't need metadata or data pools. You
can delete them.
If you're not using RBD, you don't need the rbd pool.
If you are using CephFS, and you do delete and recreate the
metadata/data pools, you'll need to tell CephFS. I think the command is
ceph mds add_data_
On 05/23/2014 06:30 PM, Fabian Zimmermann wrote:
Hi,
On 23.05.2014 at 17:31, "Wido den Hollander" wrote:
I wrote a blog about this:
http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/
So you assume restoring the old data works, or did you prove this?
No, that won'
On 5/23/14 09:30 , Fabian Zimmermann wrote:
Hi,
On 23.05.2014 at 17:31, "Wido den Hollander" wrote:
I wrote a blog about this:
http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/
So you assume restoring the old data works, or did you prove this?
I did some of the sam
On 05/23/2014 03:06 PM, Craig Lewis wrote:
> 1: ZFS or Btrfs snapshots could do this, but neither one are recommended
> for production.
Out of curiosity, what's the current beef with zfs? I know what problems
are cited for btrfs, but I haven't heard much about zfs lately.
--
Dimitri Maziuk
Prog
On 5/23/14 03:47 , Georg Höllrigl wrote:
On 22.05.2014 17:30, Craig Lewis wrote:
On 5/22/14 06:16 , Georg Höllrigl wrote:
I have created one bucket that holds many small files, separated into
different "directories". But whenever I try to access the bucket, I
only run into some timeout. The t
On 5/21/14 19:49 , wsnote wrote:
Hi, everyone!
I have 2 ceph clusters, one master zone, another secondary zone.
Now I have some question.
1. Can ceph have two or more secondary zones?
It's supposed to work, but I haven't tested it.
2. Can the role of master zone and secondary zone transform m
Hello Dimitri,
> On 23 May 2014 at 22:33, Dimitri Maziuk wrote:
>
>> On 05/23/2014 03:06 PM, Craig Lewis wrote:
>>
>> 1: ZFS or Btrfs snapshots could do this, but neither one are recommended
>> for production.
>
> Out of curiosity, what's the current beef with zfs? I know what problems
> are
Hi John.
Thanks for the reply, sounds very good.
The extra visualizations from kibana (grafana only seems to pack a
small subset, but the codebase is basically the same) look cool, will
put some more in soon - seems like they can still be useful later.
Looking forward to some calamari.
Cheers,
Hello,
No, I don't see any backfill log in ceph.log during that period. The drives
are WD2000FYYZ-01UL1B1, but I did not find any information in SMART, and
yes, I will check the other drives too.
Could I somehow determine which PG the file is placed in?
Thanks
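If it helps, mapping an object name to its PG (and the OSDs serving it) can be done with something like this (pool and object names are placeholders):

ceph osd map mypool myobject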
On 2014.05.23 20:51, Craig Lewis
Hello,
On Fri, 23 May 2014 15:41:23 -0300 Listas@Adminlinux wrote:
> Hi !
>
> I have failover clusters for some applications. Generally with 2 members
> configured with Ubuntu + Drbd + Ext4. For example, my IMAP cluster works
> fine with ~ 50k email accounts and my HTTP cluster hosts ~2k sites