GET /admin/metadata/user returns no detail, only ids.
For PHP, have a look at http://ceph.com/docs/master/radosgw/s3/php/
Alain
From: Shanil S [mailto:xielessha...@gmail.com]
Sent: Wednesday, May 21, 2014 05:48
To: DECHORGNAT Alain IMT/OLPS
Subject: Re: [ceph-users] Access denied error for
Hi,
I have a lot of space wasted by this problem (about 10GB per OSD, just
for this RBD image).
If OSDs can't detect orphan files, should I manually detect them, then
remove them?
This command can do the job, at least for this image prefix :
find /var/lib/ceph/osd/ -name 'rb.0.14bfb5a.238e1
Thanks! I increased the max processes parameter for all daemons quite
a lot (up to ulimit -u 3802720).
These are the limits for the daemons now:
[root@ ~]# cat /proc/17006/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
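For reference, a rough way to double-check and raise these limits (a sketch only: the pid selection is illustrative, the numbers are placeholders, and the "max open files" option assumes the sysvinit packaging where the init script applies it):

# inspect the effective limits of a running OSD daemon
cat /proc/$(pidof ceph-osd | awk '{print $1}')/limits | grep -E 'processes|open files'
# the open-files limit can be raised via ceph.conf ([global] section):
#     max open files = 131072
# the process limit can be raised persistently in /etc/security/limits.conf:
#     root soft nproc 3802720
#     root hard nproc 3802720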
Hi Alain,
Thanks for your reply.
Do you mean we can't list all users with complete user details using
GET /admin/metadata/user or GET /admin/user?
Yes, I checked http://ceph.com/docs/master/radosgw/s3/php/ and it contains
only the bucket operations and not any admin operations like lis
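For what it's worth, the admin REST API only hands back the ids, so the details have to be fetched per user either way; a rough equivalent on the command line of a gateway node (the uid is just a placeholder) would be:

# list the ids of all users known to the gateway
radosgw-admin metadata list user
# fetch the full record for one of the returned ids ("johndoe" is hypothetical)
radosgw-admin user info --uid=johndoe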
Hello Sage
nodown, noout set on cluster
# ceph status
cluster 009d3518-e60d-4f74-a26d-c08c1976263c
health HEALTH_WARN 1133 pgs degraded; 44 pgs incomplete; 42 pgs stale; 45
pgs stuck inactive; 42 pgs stuck stale; 2602 pgs stuck unclean; recovery
206/2199 objects degraded (9.368%); 40/1
Hi,
I have just started to dabble with Ceph - went through the docs
http://ceph.com/howto/deploying-ceph-with-ceph-deploy/
I have a 3 node setup with 2 nodes for OSD
I use ceph-deploy mechanism.
The ceph init script expects the cluster conf file to be ceph.conf. If I
give any other name the init s
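For illustration, a rough sketch of how a non-default cluster name is usually carried through (the name "mycluster" is made up, and whether your particular init script honours it on its own is a separate question):

# create the cluster under a non-default name with ceph-deploy
ceph-deploy --cluster mycluster new node1
# every later command has to name the cluster too, otherwise ceph.conf is assumed
ceph --cluster mycluster health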
Hi,
When you have just created a cluster, with no OSDs, HEALTH_ERR is "normal".
It means that your storage is damaged, but you don't care since you have no
storage at this point.
About your OSDs, I think you should create a partition on each disk (a
single partition, properly aligned, etc.), instead of
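For example, something along these lines (only a sketch; /dev/sdb is a placeholder, and ceph-disk can create the partition for you instead):

# a single, properly aligned GPT partition made with parted
parted -s /dev/sdb mklabel gpt
parted -s -a optimal /dev/sdb mkpart primary xfs 0% 100%
# or let ceph-disk partition and prepare the disk in one go
ceph-disk prepare --fs-type xfs /dev/sdb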
Hi,
I am new to Ceph. I have a storage node with 2 OSDs. I am trying to figure
out which physical device/partition each of the OSDs is attached to. Is
there a command that can be executed on the storage node to find this
out?
Thanks in Advance,
Sharmila
Perhaps:
# mount | grep ceph
- Mike Dawson
On 5/21/2014 11:00 AM, Sharmila Govind wrote:
Hi,
I am new to Ceph. I have a storage node with 2 OSDs. I am trying to
figure out which physical device/partition each of the OSDs is
attached to. Is there a command that can be executed on the st
Hi Mike,
Thanks for your quick response. When I try mount on the storage node this
is what I get:
root@cephnode4:~# mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse
Hi Olivier,
On Wed, 21 May 2014, Olivier Bonvalet wrote:
> Hi,
>
> I have a lot of space wasted by this problem (about 10GB per OSD, just
> for this RBD image).
> If OSDs can't detect orphan files, should I manually detect them, then
> remove them?
>
> This command can do the job, at least for
Looks like you may not have any OSDs properly set up and mounted. It
should look more like:
user@host:~# mount | grep ceph
/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)
/dev/sdc1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime,inode64)
/dev/sdd1 on /var/lib/ceph/osd/ceph-2
On Tue, May 20, 2014 at 4:13 AM, Andrei Mikhailovsky wrote:
> Anyone have any idea how to fix the problem of getting 403 when trying to
> upload files with non-standard characters? I am sure I am not the only one
> with these requirements.
It might be the specific client that you're using and
You might also try
ceph-disk list
sage
On Wed, 21 May 2014, Mike Dawson wrote:
> Looks like you may not have any OSDs properly set up and mounted. It should
> look more like:
>
> user@host:~# mount | grep ceph
> /dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)
> /dev/sdc1 o
Hi everybody,
I'm reading the docs regarding replication through radosgw. They
talk only about inter-region METAdata replication, nothing about data
replication.
My question is: is it possible to have (everything) geo-replicated
through radosgw? Actually we have 2 Ceph clusters (geographically separated)
i
This would give you a pretty good understanding of where the mounts and
/dev/sd* devices are.
[jlu@gfsnode1 osd]$ ceph-disk list; pwd; ls -lai
/dev/sda :
/dev/sda1 other, mounted on /boot
/dev/sda2 other
/dev/sdb other, unknown, mounted on /ceph/osd120
/dev/sdc other, unknown, mounted on /ceph/osd121
/dev/sd
I am setting up a CephFS cluster and wondering about the MDS setup.
I know you are still hesitant to put the stable label on it, but I have
a few questions about what would be an adequate setup.
I know active-active is not developed yet, so that is pretty much out
of the question right now.
What about active st
On 05/21/2014 09:04 PM, Scottix wrote:
I am setting up a CephFS cluster and wondering about the MDS setup.
I know you are still hesitant to put the stable label on it, but I have
a few questions about what would be an adequate setup.
I know active-active is not developed yet, so that is pretty much out
of the
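For reference, a rough ceph.conf sketch of a single active MDS with a standby(-replay) following it (the daemon names and hosts are placeholders):

[mds.a]
    host = mds-host-1
[mds.b]
    host = mds-host-2
    # follow the active MDS's journal so takeover is faster
    mds standby replay = true
    mds standby for rank = 0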
This Dumpling point release fixes several minor bugs. The most prevalent
in the field is one that occasionally prevents OSDs from starting on
recently created clusters.
We recommend that all v0.67.x Dumpling users upgrade at their convenience.
Notable Changes
---
* ceph-fuse, libce
Hi All,
Experimenting with cache pools for RBD, I created two pools: slowdata-hot
backed by slowdata-cold. I set the maximum data to be stored in the hot pool
to 100GB, with data to be moved to the cold pool above 40% hot usage. I
created a 100GB RBD image, mounted it, tested reading/writing, then dumped
in 80GB of data.
Al
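Roughly, a setup like that is built with commands along these lines (pool names taken from above; the 100GB cap and 40% flush threshold translated into target_max_bytes and cache_target_dirty_ratio, so treat the exact values as placeholders):

# put the hot pool in front of the cold pool as a writeback cache tier
ceph osd tier add slowdata-cold slowdata-hot
ceph osd tier cache-mode slowdata-hot writeback
ceph osd tier set-overlay slowdata-cold slowdata-hot
# cap the hot pool at ~100GB and start flushing dirty objects above 40%
ceph osd pool set slowdata-hot target_max_bytes 107374182400
ceph osd pool set slowdata-hot cache_target_dirty_ratio 0.4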
On Wed, 21 May 2014, Michael wrote:
> Hi All,
>
> Experimenting with cache pools for RBD, created two pools, slowdata-hot backed
> by slowdata-cold. Set up max data to be stored in hot to be 100GB, data to be
> moved to cold above 40% hot usage. Created a 100GB RBD image, mounted it
> tested readi
Thanks Sage, the cache system looks pretty great so far. Combined with
erasure coding it's really adding a lot of options.
-Michael
On 21/05/2014 21:54, Sage Weil wrote:
On Wed, 21 May 2014, Michael wrote:
Hi All,
Experimenting with cache pools for RBD, created two pools, slowdata-hot backe
On Wednesday, May 21, 2014 at 08:20 -0700, Sage Weil wrote:
>
> You should definitely not do this! :)
Of course ;)
>
> You're certain that that is the correct prefix for the rbd image you
> removed? Do you see the objects listed when you do 'rados -p rbd ls - |
> grep '?
I'm pretty sure yes
Hi All,
I want to understand how Ceph users go about quota management when Ceph
is used with OpenStack.
1. Is it recommended to use a common pool, say “volumes”, for creating volumes
which is shared by all tenants? In this case a common keyring
ceph.common.keyring will be shared across
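For context, a shared keyring like that would typically carry a single key whose capabilities are restricted to the pool, something like the following (names are placeholders, not a recommendation):

# one key shared by all tenants, limited to the "volumes" pool
ceph auth get-or-create client.volumes \
    mon 'allow r' \
    osd 'allow rwx pool=volumes'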
On Wed, May 21, 2014 at 3:52 AM, Kenneth Waegeman
wrote:
> Thanks! I increased the max processes parameter for all daemons quite a lot
> (until ulimit -u 3802720)
>
> These are the limits for the daemons now..
> [root@ ~]# cat /proc/17006/limits
> Limit Soft Limit Har
On 5/21/14 09:02, Fabrizio G. Ventola wrote:
Hi everybody,
I'm reading the docs regarding replication through radosgw. They
talk only about inter-region METAdata replication, nothing about data
replication.
My question is: is it possible to have (everything) geo-replicated
through radosgw? Ac
On 05/21/2014 03:03 PM, Olivier Bonvalet wrote:
On Wednesday, May 21, 2014 at 08:20 -0700, Sage Weil wrote:
You're certain that that is the correct prefix for the rbd image you
removed? Do you see the objects listed when you do 'rados -p rbd ls - |
grep '?
I'm pretty sure yes: since I didn't s
On 5/20/14 08:18, Sage Weil wrote:
On Tue, 20 May 2014, Karan Singh wrote:
Hello Cephers, need your suggestion for troubleshooting.
My cluster is terribly struggling, 70+ OSDs are down out of 165.
Problem: OSDs are getting marked out of the cluster and are down. The cluster is
degraded. On checki
On 05/21/2014 03:29 PM, Vilobh Meshram wrote:
Hi All,
I want to understand how Ceph users go about quota management when
Ceph is used with OpenStack.
1. Is it recommended to use a common pool, say “volumes”, for creating
volumes which is shared by all tenants? In this case a common
The times I have seen this message, it has always been because there
are snapshots of the image that haven't been deleted yet. You can see
the snapshots with "rbd snap list ".
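Roughly, the cleanup sequence looks like this (pool and image names are placeholders; protected snapshots need an "rbd snap unprotect" first):

# list any snapshots still attached to the image
rbd snap ls rbd/myimage
# remove them all (or "rbd snap rm" for individual ones), then delete the image
rbd snap purge rbd/myimage
rbd rm rbd/myimage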
On Tue, May 20, 2014 at 4:26 AM, James Eckersall
wrote:
> Hi,
>
>
>
> I'm having some trouble with an rbd image. I want
Hi, everyone!
I have 2 Ceph clusters, one master zone, the other a secondary zone.
Now I have some questions.
1. Can Ceph have two or more secondary zones?
2. Can the roles of the master zone and the secondary zone be swapped?
I mean, can I change the secondary zone to be the master and the master zone to
sec
On Wed, 21 May 2014, Craig Lewis wrote:
> If you do this over IRC, can you please post a summary to the mailing
> list?
>
> I believe I'm having this issue as well.
In the other case, we found that some of the OSDs were behind on processing
maps (by several thousand epochs). The trick here to gi
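One way to see how far a given OSD is lagging is to compare its admin-socket status against the current cluster epoch (a sketch only; osd.0 is just an example, and it assumes your version exposes the "status" admin-socket command):

# current osdmap epoch of the cluster
ceph osd stat
# oldest/newest map epochs this particular OSD has processed
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status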
Hi, Lewis!
With your approach, there will be a contradiction because of the limits of the
secondary zone.
In a secondary zone, one can't do any file operations.
Let me give an example. I define the symbols first.
The instances of cluster 1:
M1: master zone of cluster 1
S2: Slave zone for M2 of cluster2, th