Hi all
We know that radosgw supports a subset of the Amazon S3 functional features. Does
radosgw support a server-side encryption algorithm when creating an
object, the way the Amazon S3 API supports AES256 on the server side? Or can we use
a client-side encryption algorithm?
lixuehui
Hi all!
My Ceph version is now 0.62; yesterday I updated it from 0.56.3.
I found that the init-ceph script differs a lot between Ceph versions.
What confuses me is that I cannot use the 0.62 init-ceph script to
start my cluster.
Hi list,
I'm trying to get the best from my 3-node "low cost" hardware for testing
purposes:
3 Dell PowerEdge 2950.
Cluster/public networks both with 2x1Gb LACP (layer3+4 hash)
No MDS running for now.
SAS disks (no SSD), both 1 & 15000 rpm.
sda = system
sdb, sdc, sdd, sde = OSDs
sdf = journal
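Assuming sdf is split into one journal partition per OSD, the ceph-deploy invocation would look roughly like this (hostname and partition numbers are placeholders):
ceph-deploy osd prepare node1:sdb:/dev/sdf1 node1:sdc:/dev/sdf2
ceph-deploy osd prepare node1:sdd:/dev/sdf3 node1:sde:/dev/sdf4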
Hi,
thank you, it's rebalancing now :)
From: Eric Eastman [eri...@aol.com]
Sent: Wednesday, 23 October 2013 01:19
To: HURTEVENT VINCENT; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Balance data on near full osd warning or error
Hello,
What I
On 10/23/2013 09:27 AM, lixuehui wrote:
Hi all
We know that radosgw supports a subset of the Amazon S3 functional
features. Does radosgw support a server-side encryption algorithm when
creating an object, the way the Amazon S3 API supports AES256 on the server side?
Or can we use a client-side encryption algorithm?
On 22/10/2013 14:38, Damien Churchill wrote:
Yeah, I'd thought of doing it that way, however it would be nice to
avoid that if possible since the machines in the cluster will be
running under QEMU using librbd, so it'd be additional overhead having
to re-export the drives using iSCSI.
Hell
> Hello,
> So, if your cluster nodes are running virtualized with Qemu/KVM, you can
> present them a virtual SCSI drive, from the same RBD image.
> It will be like a shared FC SCSI SAN LUN.
>
You would want to be absolutely sure that neither qemu nor rbd was doing any
sort of caching though for t
I have a newly created cluster with 68 OSDs and the default of 2 replicas. The
default pools are created with 64 placement groups. The documentation at
http://ceph.com/docs/master/rados/operations/pools/ states for osd pool
creation:
"We recommend approximately 50-100 placement groups per OSD
On Tue, Oct 22, 2013 at 8:31 PM, Gruher, Joseph R
wrote:
> This was resolved by setting the curl proxy (which conveniently was
> identified as necessary in another email on this list just earlier today).
>
>
>
> Overall I had to directly configure the proxies for wget, rpm and curl
> before I coul
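For anyone hitting the same thing, one way to point wget, curl and rpm at a proxy looks roughly like this (the proxy host and port are placeholders):
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy                              # honoured by wget and curl
echo 'proxy = http://proxy.example.com:3128' >> ~/.curlrc   # curl-specific alternative
echo '%_httpproxy proxy.example.com' >> ~/.rpmmacros        # used by rpm for http installs
echo '%_httpport 3128' >> ~/.rpmmacros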
Hello to all,
I installed a ceph cluster using ceph-deploy and I am quite happy.
Now I want to add a cluster network to it. According to "ceph report",
public_addr and cluster_addr are set to the same IP. How can I change
this now?
Is it safe to add something like this to my ceph.conf:
#( fo
On 2013-10-22 22:41, Gregory Farnum wrote:
...
Right now, unsurprisingly, the focus of the existing Manila developers
is on Option 1: it's less work than the others and supports the most
common storage protocols very well. But as mentioned, it would be a
pretty poor fit for CephFS
I must be mis
Hi,
I've been taking a look at the repair functionality in Ceph. As I understand it,
the OSDs should try to copy an object from another member of the PG if it is
missing. I have been attempting to test this by manually removing a file from
one of the OSDs; however, each time the repair completes
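For reference, repair is driven per PG; a minimal sketch, assuming a test object and whatever PG id "ceph osd map" reports:
ceph osd map rbd testobject   # shows which PG (and OSDs) hold the object
ceph pg scrub 2.30            # flag inconsistencies in that PG
ceph pg repair 2.30           # ask the primary to repair the PG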
Hi,
Why not just add cluster network and public network to the global section
(http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks)?
The OSDs will pick their address when restarted. I did it once (just once ;-)
and it worked.
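Something along these lines in [global], with the subnets as placeholders for your own ranges:
[global]
  public network = 192.168.1.0/24
  cluster network = 10.10.10.0/24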
My 2cts
On 23/10/2013 16:33, Stef
Hi,
well that was way easier than expected.
Thanks a lot :)
On 10/23/2013 04:57 PM, Loic Dachary wrote:
Hi,
Why not just add cluster network and public network to the global section (
http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks
) ? The OSDs will pick th
On 13/10/22 6:28 PM, "Dan Mick" wrote:
>/etc/ceph should be installed by the package named 'ceph'. Make sure
>you're using ceph-deploy install to install the Ceph packages before
>trying to use the machines for mon create.
I'll admit, I did skip that step a couple times in my testing since I di
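For reference, the usual sequence is to install the packages first and only then create the monitor; a minimal sketch with placeholder hostnames:
ceph-deploy install node1 node2 node3
ceph-deploy mon create node1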
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>
>Did you try working with the `--no-adjust-repos` flag in ceph-deploy? It will
>allow you to tell ceph-deploy to just go and install Ceph without attempting to
>import keys or doing anything with your repos.
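Assuming a ceph-deploy build that already includes the flag, the invocation would look something like this (hostname is a placeholder):
ceph-deploy install --no-adjust-repos node1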
http://tracker.ceph.com/issues/6485
I don't believe it's in a release yet, but yes, that's the problem and it's
fixed in the ceph-deploy source repo. :)
-Greg
On Wednesday, October 23, 2013, LaSalle, Jurvis wrote:
> On 13/10/22 6:28 PM, "Dan Mick" wrote:
>
> >/etc/ceph should be installed by
On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk wrote:
> On 2013-10-22 22:41, Gregory Farnum wrote:
> ...
>
>> Right now, unsurprisingly, the focus of the existing Manila developers
>> is on Option 1: it's less work than the others and supports the most
>> common storage protocols very well. But a
On 22/10/13 08:51, Mike Lowe wrote:
> And a +1 from me as well. It would appear that Ubuntu has picked
> up the 0.67.4 source and included a build of it in their official
> repo, so you may be able to get by until the next point release
> with those
On 10/23/2013 12:53 PM, Gregory Farnum wrote:
> On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk wrote:
>> On 2013-10-22 22:41, Gregory Farnum wrote:
>> ...
>>
>>> Right now, unsurprisingly, the focus of the existing Manila developers
>>> is on Option 1: it's less work than the others and supports
Should osd_pool_default_pg_num and osd_pool_default_pgp_num apply to the
default pools? I put them in ceph.conf before creating any OSDs but after
bringing up the OSDs the default pools are using a value of 64.
Ceph.conf contains these lines in [global]:
osd_pool_default_pgp_num = 800
osd_pool_
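If the defaults were already created at 64, one way to bump them after the fact is per pool ('data' shown here as an example of one of the old default pools; pg_num has to be raised before pgp_num):
ceph osd pool set data pg_num 800
ceph osd pool set data pgp_num 800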
On Wed, Oct 23, 2013 at 11:47 AM, Dimitri Maziuk wrote:
> On 10/23/2013 12:53 PM, Gregory Farnum wrote:
>> On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk
>> wrote:
>>> On 2013-10-22 22:41, Gregory Farnum wrote:
>>> ...
>>>
Right now, unsurprisingly, the focus of the existing Manila develop
Hi all,
I have CentOS 6.4 with 3.11.6 kernel running (built from latest stable on
kernel.org) and I cannot load the rbd client module. Do I have to do
anything to enable or install it? Shouldn't it be present in this kernel?
[ceph@joceph05 /]$ cat /etc/centos-release
CentOS release 6.4 (Fina
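One thing worth checking: rbd is only built when CONFIG_BLK_DEV_RBD is enabled, so a quick look at the build config (path assumes "make install" copied it into /boot) will tell you whether the module exists at all:
grep CONFIG_BLK_DEV_RBD /boot/config-$(uname -r)
modprobe rbd   # should succeed if the option above reports =m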
Alfredo,
Do you know what version of ceph-deploy has this updated functionality?
I just updated to 1.2.7 and it does not appear to include it.
Thanks,
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
On 10/23/2013 02:46 PM, Gregory Farnum wrote:
> Ah, I see. No, each CephFS client needs to communicate with the whole
> cluster. Only the POSIX metadata changes flow through the MDS.
Yeah, I thought you'd say that. Back in February I asked if I could get
a cephfs client to read from a specific os
On Wed, Oct 23, 2013 at 1:28 PM, Dimitri Maziuk wrote:
> On 10/23/2013 02:46 PM, Gregory Farnum wrote:
>
>> Ah, I see. No, each CephFS client needs to communicate with the whole
>> cluster. Only the POSIX metadata changes flow through the MDS.
>
> Yeah, I thought you'd say that. Back in February I
Hi,
So the problem was that the '.usage' pool was not created. I haven't
traversed the code well enough yet to know where this pool is supposed
to get created, but it wasn't, even though the option was on. As soon as
I hand-created the pool, radosgw started logging usage.
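For anyone else who hits this, hand-creating the pool is a one-liner (this assumes 'rgw enable usage log = true' is already set in the radosgw section of ceph.conf, as it was here; the PG count of 8 is an arbitrary small value):
ceph osd pool create .usage 8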
Thanks,
derek
--
OK... I found the help section in 1.2.7 that talks about using paths; however,
I still cannot get this to work:
root@hqceph1:/usr/local/ceph-install-1# ceph-deploy osd prepare
hqosd1:/dev/disk/by-path/pci-:02:00.0-scsi-0:2:1:0
usage: ceph-deploy osd [-h] [--zap-disk] [--fs-type FS_TYPE] [-
> Option 1) The service plugs your filesystem's IP into the VM's network
> and provides direct IP access. For a shared box (like an NFS server)
> this is fairly straightforward and works well (*everything* has a
> working NFS client). It's more troublesome for CephFS, since we'd need
> to include a
Speculating, but it seems possible that the ':' in the path is problematic,
since that is also the separator between disk and journal (HOST:DISK:JOURNAL)?
Perhaps if you enclose it in quotes, or use /dev/disk/by-id?
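A sketch of the by-id variant (the device id below is made up; use whatever "ls /dev/disk/by-id" shows for the disk):
ceph-deploy osd prepare hqosd1:/dev/disk/by-id/scsi-36d4ae520009a9e2b
Since by-id names contain no colons, they cannot be confused with the journal separator.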
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-us
Trying to gather some more info.
CentOS - hanging ls
[root@srv ~]# cat /proc/14614/stack
[] wait_answer_interruptible+0x81/0xc0 [fuse]
[] fuse_request_send+0x1cb/0x290 [fuse]
[] fuse_do_getattr+0x10c/0x2c0 [fuse]
[] fuse_update_attributes+0x75/0x80 [fuse]
[] fuse_getattr+0x53/0x60 [fuse]
[] vfs_ge
Hi all,
Can I request that somebody with list admin rights please fixes the digest
settings for this list - I'm regularly receiving 8+ digest messages within
a 24 hour period, not really a "digest" :-).
--
Cheers,
~Blairo
If you do
ceph mds tell 0 dumpcache /tmp/foo
it will dump the mds cache, and
ceph-post-file /tmp/foo
will send the file to ceph.com so we can get some clue what happened. I
suspect that restarting the ceph-mds process will resolve the hang.
Thanks!
sage
On Wed, 23 Oct 2013, Michael wrot
Joseph,
I suspect the same... I was just wondering if it was supposed to be supported
using ceph-deploy, since CERN had it in their setup.
I was able to use '/dev/disk/by-id', although when I list out the OSD mount
points it still shows sdb, sdc, etc.:
root@hqosd1:/dev/disk/by-id# df -h
Filesystem
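That is expected, since the by-id entries are just symlinks to the kernel device names, so df reports sdb, sdc, and so on. For example:
ls -l /dev/disk/by-id/   # each entry points at ../../sdb, ../../sdc, ...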
On 10/23/2013 01:47 PM, Dimitri Maziuk wrote:
On 10/23/2013 12:53 PM, Gregory Farnum wrote:
On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk wrote:
On 2013-10-22 22:41, Gregory Farnum wrote:
...
Right now, unsurprisingly, the focus of the existing Manila developers
is on Option 1: it's less w
Looks like your journal has some bad events in it, probably due to
bugs in the multi-MDS systems. Did you start out this cluster on 0.67.4,
or has it been upgraded at some point?
Why did you use two active MDS daemons?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Oct 2
[ Adding back the list for archival and general edification. :) ]
On Wed, Oct 23, 2013 at 5:53 PM, Gagandeep Arora wrote:
> Hello Greg,
>
> mds was running fine for more than a month and last week on Thursday, we
> created a snapshot to test the snapshot functionality of cephfs and the
> snapshot
On Wed, 23 Oct 2013, Kyle Bader wrote:
>
> > Option 1) The service plugs your filesystem's IP into the VM's network
> > and provides direct IP access. For a shared box (like an NFS server)
> > this is fairly straightforward and works well (*everything* has a
>
On Thu, Oct 24, 2013 at 6:44 AM, Michael wrote:
> Trying to gather some more info.
>
> CentOS - hanging ls
> [root@srv ~]# cat /proc/14614/stack
> [] wait_answer_interruptible+0x81/0xc0 [fuse]
> [] fuse_request_send+0x1cb/0x290 [fuse]
> [] fuse_do_getattr+0x10c/0x2c0 [fuse]
> [] fuse_update_attribu
>> This is going to get horribly ugly when you add neutron into the mix, so
>> much so I'd consider this option a non-starter. If someone is using
>> openvswitch to create network overlays to isolate each tenant I can't
>> imagine this ever working.
>
> I'm not following here. Is this only needed