On 03/05/2015 03:40 AM, Josh Durgin wrote:
It looks like your libvirt rados user doesn't have access to whatever
pool the parent image is in:
librbd::AioRequest: write 0x7f1ec6ad6960 rbd_data.24413d1b58ba.0186 1523712~4096 should_complete: r = -1
-1 is EPERM, for "Operation not permitted"
Hello,
Is there some way for a client (via the RADOS API or something like
that) to get a notification of an event (for example, an OSD going down)
that happened in the cluster?
--
Den
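A minimal sketch of one way to watch for such events from the command line (polling and log watching rather than a push notification through the RADOS API; assumes an admin keyring is available):

# follow the cluster log in real time; OSD up/down events appear here
ceph -w

# or poll the OSD map periodically and look for OSDs marked down
while sleep 10; do
    ceph osd dump | grep ' down '
done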
Hello Ketor,
About a year ago I needed a free distributed file system that could be
used in an AIX environment as a tiered storage solution for a bank data
center; that is why I started the project.
This project ports the CephFS client from the Linux kernel to the AIX
kernel (maybe RBD in the future), so it is a kernel-mode AIX CephFS client.
But I have multiple projects
Thank you all for such wonderful feedback.
Thank you to John Spray for putting me on the right track. I now see
that the CephFS aspect of the project is being de-emphasised, so that
the manual deployment instructions explain how to set up the object store,
and CephFS is then a separate issue tha
Bump...
On 2015-03-03 10:54:13 +, Daniel Schneller said:
Hi!
After realizing the problem with log rotation (see
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/17708)
and fixing it, I now for the first time have some
meaningful (and recent) logs to look at.
While from an applica
David,
You will need to raise the limit of open files in the Linux system. Check
/etc/security/limits.conf. It is explained somewhere in the docs, and the
autostart scripts 'fix' the issue for most people. When I did a manual
deploy for the same reasons you are, I ran into this too.
Robert LeBlanc
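For reference, a minimal sketch of the kind of settings involved (the values are illustrative, not taken from the thread):

# /etc/security/limits.conf -- raise the per-process open-file limit
*    soft    nofile    131072
*    hard    nofile    131072

# The Ceph startup scripts can also raise the limit themselves via ceph.conf:
# [global]
#     max open files = 131072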
On Thu, 5 Mar 2015 07:46:50 -0700 Robert LeBlanc wrote:
> David,
>
> You will need to raise the limit of open files in the Linux system. Check
> /etc/security/limits.conf. It is explained somewhere in the docs, and the
> autostart scripts 'fix' the issue for most people. When I did a manual
> deplo
Thank you all for all the good advice and the much-needed documentation.
I have a lot to digest :)
Adrian
On 03/04/2015 08:17 PM, Stephen Mercier wrote:
> To expand upon this, the very nature and existence of Ceph is to replace
> RAID. The FS itself replicates data and handles the HA functionality
> tha
- Original Message -
> From: "Daniel Schneller"
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, March 3, 2015 2:54:13 AM
> Subject: [ceph-users] Understand RadosGW logs
>
> Hi!
>
> After realizing the problem with log rotation (see
> http://thread.gmane.org/gmane.comp.file-systems.cep
The fix for this should be in 0.93, so this must be something different.
Can you reproduce with
debug osd = 20
debug ms = 1
debug filestore = 20
and post the log to http://tracker.ceph.com/issues/11027?
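A hedged sketch of applying those settings at runtime instead of editing ceph.conf and restarting (assumes admin access to the cluster):

ceph tell osd.* injectargs '--debug-osd 20 --debug-ms 1 --debug-filestore 20'

Alternatively, put the three debug lines above under the [osd] section of ceph.conf and restart the affected OSDs.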
On Wed, 2015-03-04 at 00:04 +0100, Yann Dupont wrote:
> On 03/03/2015 22:03, Italo Santos wrote:
I'm seeing strange queue depth behaviour with a kernel-mapped RBD; librbd
does not show this problem.
The cluster is comprised of 4 nodes with 10Gb networking; I'm not including
OSD details, as the test sample is small so it fits in page cache.
Running fio against a kernel mapped RBD
fio --randrepeat=1 --ioengine=
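The command is cut off above; an illustrative full invocation for this kind of queue-depth comparison might look like the following (the device path and parameters are assumptions, not the original command):

fio --name=krbd-test --filename=/dev/rbd0 --randrepeat=1 \
    --ioengine=libaio --direct=1 --rw=randread --bs=4k \
    --iodepth=32 --numjobs=1 --runtime=60 --time_based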
Hello Loïc,
It does exist ... but maybe not at the scale you are looking for:
http://www.fujitsu.com/global/products/computing/storage/eternus-cd/
I read a paper about their hardware; it seems they work with
Inktank (Red Hat) on this.
--
Thomas Lemarchand
Cloud Solutions SAS - Responsabl
On 03/05/2015 12:46 AM, koukou73gr wrote:
On 03/05/2015 03:40 AM, Josh Durgin wrote:
It looks like your libvirt rados user doesn't have access to whatever
pool the parent image is in:
librbd::AioRequest: write 0x7f1ec6ad6960
rbd_data.24413d1b58ba.0186 1523712~4096 should_complete:
Hi All,
Just a heads up after a day's experimentation.
I believe tgt with its default settings has a small write cache when
exporting a kernel-mapped RBD. Doing some write tests, I saw 4 times the
write throughput when using tgt aio + krbd compared to tgt with the built-in
librbd backend.
After r
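A hedged sketch of the two export styles being compared, in targets.conf form (the iqn, device, and image names are placeholders; check your tgt version's syntax):

# /etc/tgt/conf.d/rbd.conf
# kernel-mapped RBD exported through tgt's aio backing store:
<target iqn.2015-03.com.example:krbd-export>
    bs-type aio
    backing-store /dev/rbd0
</target>

# the same image exported through tgt's built-in librbd (rbd) backing store:
#<target iqn.2015-03.com.example:librbd-export>
#    bs-type rbd
#    backing-store rbd/myimage
#</target>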
Do any of the Ceph repositories run rsync? We generally mirror the
repository locally so we don't encounter any unexpected upgrades.
eu.ceph.com used to run this, but it seems to be down now.
# rsync rsync://eu.ceph.com
rsync: failed to connect to eu.ceph.com: Connection refused (111)
rsync er
I use reposync to keep mine updated when needed.
Something like:
cd ~/ceph/repos
reposync -r Ceph -c /etc/yum.repos.d/ceph.repo
reposync -r Ceph-noarch -c /etc/yum.repos.d/ceph.repo
reposync -r elrepo-kernel -c /etc/yum.repos.d/elrepo.repo
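One follow-up note: reposync only downloads the packages, so the mirror usually needs repo metadata generated before clients can point at it; a sketch (directory names assume reposync's default of one subdirectory per repo id):

createrepo ~/ceph/repos/Ceph
createrepo ~/ceph/repos/Ceph-noarch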
Michael Kuriger
Sr. Unix Systems Engineer
S mk7...@
Hi Blair,
I've updated the script and it now (theoretically) computes optimal
CRUSH weights based on both primary and secondary acting-set OSDs. It
also attempts to show you the efficiency of equal weights vs. using
weights optimized for different pools (or all pools). This is done by
lookin
Mark,
It worked for me earlier this morning but the new rev is throwing a
traceback:
$ ceph pg dump -f json | python ./readpgdump.py > pgdump_analysis.txt
dumped all in format json
Traceback (most recent call last):
File "./readpgdump.py", line 294, in
parse_json(data)
File "./readpgdump
Is anyone else hitting this? Any help on this would be much appreciated.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pavan
Rallabhandi
Sent: Saturday, February 28, 2015 11:42 PM
To: ceph-us...@ceph.com
Subject: [ceph-users] RGW hammer/m
Hi Robert,
it seems I did not follow your advice well - I set the OSD to out
instead of stopping it - and now, instead of some ~3% of degraded objects,
there are 0.000% degraded and around 6% misplaced - and rebalancing
is happening again, but this is a small percentage..
Do you know if l
According to the docs at
http://docs.ceph.com/docs/master/radosgw/adminops/#get-user-info
I should be able to invoke /admin/user without a uid specified, and get a list
of users.
No matter what I try, I get a 403.
After looking at the source at github (ceph/ceph), it appears that there isn’t
an
Hi,
I am a newbie to Ceph and the ceph-users group. Recently I have been working on
a Ceph client. It worked in all the other environments, but when I tested it in
production it is not able to connect to Ceph.
Following are the operating system details and the error. If someone has seen
this problem before,
Setting an OSD out will start the rebalance with the degraded object count.
The OSD is still alive and can participate in the relocation of the
objects. This is preferable so that you don't drop below min_size
because a disk fails during the rebalance, at which point I/O stops on the
cluster.
Bec
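A minimal illustration of the two approaches being discussed (the OSD id and init command are placeholders):

# mark the OSD out but leave the daemon running, so its copies can still
# be read during the rebalance:
ceph osd out 12

# versus stopping the daemon first, which leaves its PGs degraded until
# recovery completes:
# sudo service ceph stop osd.12    # init-system dependent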
On 03/05/2015 07:14 PM, Brian Rak wrote:
> Do any of the Ceph repositories run rsync? We generally mirror the
> repository locally so we don't encounter any unexpected upgrades.
>
> eu.ceph.com used to run this, but it seems to be down now.
>
> # rsync rsync://eu.ceph.com
> rsync: failed to conn
The metadata API can do it:
GET /admin/metadata/user
Yehuda
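A hedged sketch of the equivalent queries (assumes the requesting admin user has metadata read caps):

# CLI equivalent: list all user metadata entries
radosgw-admin metadata list user

# REST form, as noted above:
# GET /admin/metadata/user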
- Original Message -
> From: "Joshua Weaver"
> To: ceph-us...@ceph.com
> Sent: Thursday, March 5, 2015 1:43:33 PM
> Subject: [ceph-users] rgw admin api - users
>
> According to the docs at
> http://docs.ceph.com/docs/master/r
Thanks a lot Robert.
I have actually already tried the following:
a) set one OSD to out (6% of data misplaced, Ceph recovered fine), stop the
OSD, remove the OSD from the CRUSH map (again 36% of data misplaced!!!) - then
inserted the OSD back into the CRUSH map - and those 36% misplaced objects
disappeared, of course -
Hi David,
Mind sending me the output of "ceph pg dump -f json"?
thanks!
Mark
On 03/05/2015 12:52 PM, David Burley wrote:
Mark,
It worked for me earlier this morning but the new rev is throwing a
traceback:
$ ceph pg dump -f json | python ./readpgdump.py > pgdump_analysis.txt
dumped all in fo
Hello everyone,
I'm trying to use the rgw admin API, but every user operation I have tried returns
"HTTP 403 Forbidden":
In [1]: import requests
In [2]: from awsauth import S3Auth
In [3]: access_key = 'ACCESS_KEY'
In [4]: secret_key = 'SECRET_KEY'
In [5]: server = 'rgw.example.com'
In [6]: url = 'htt
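A common cause of a 403 from the admin API is that the user whose keys sign the request has no admin capabilities; a hedged sketch of granting and verifying them (the uid is a placeholder):

radosgw-admin caps add --uid=admin --caps="users=read,write"
radosgw-admin user info --uid=admin    # the "caps" section should now list users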
Hello guys,
In the adminops documentation I saw how to remove a bucket, but I can’t find the
URI to create one; I’d like to know if this is possible.
Regards.
Italo Santos
http://italosantos.com.br/
Hi,
I'm sorry to revive my post, but I can't solve my problems
and I don't see anything in the logs. I have tried with the Hammer version
and I found the same phenomenon.
In fact, I first tried the same installation (i.e. the same
conf via Puppet) of my cluster, but in a VirtualBox environment,
and I hav
hello everyone,
Recently I have a question about the Ceph OSD journal.
I use ceph-deploy (version 1.4.0) to add a new OSD, and my Ceph
version is 0.80.5.
/dev/sdb is a SATA disk and /dev/sdk is an SSD disk; the sdk1
partition size is 50G.
ceph-deploy osd prepare host1:/dev/sdb1:/de
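The command above is cut off; illustratively, the general form for putting the journal on a separate SSD partition is the following (device names are taken from the description above as an assumption, not from the truncated command):

# ceph-deploy osd prepare {host}:{data device}:{journal device}
ceph-deploy osd prepare host1:/dev/sdb1:/dev/sdk1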
What did you mean when you said "ceph client"?
The log snippet that you posted seems to show that the kernel you are
using does not support some features of Ceph. Try updating your kernel if
your 'client' is a RADOS Block Device (kernel rbd) client.
On 06.03.2015 00:48, Sonal Dubey wrote:
Hi,
I am newbie for ceph, an
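If the message in question is the usual kernel-client "feature set mismatch" error, a hedged sketch of how to inspect and, if necessary, relax the cluster-side requirements (relaxing tunables triggers data movement, so this is only an illustration, not a recommendation):

# show which CRUSH tunables/features the cluster currently requires
ceph osd crush show-tunables

# optionally fall back to tunables that old kernels understand
# ceph osd crush tunables legacy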