Hi!
I think it is impossible to hide crypto keys from an admin who has access to
the host machine where the VM guest is running. The admin can always take a
snapshot of the running VM and extract all the keys straight from memory. Maybe
you can achieve a sufficient level of security by providing a dedicated physical server holding cr
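For what it's worth, dumping a running guest's RAM from the host needs nothing
more than stock libvirt tooling, e.g. (the domain name and output path below
are made up):

  virsh dump --memory-only guest01 /tmp/guest01-mem.core   # guest01 is hypothetical

Anything sitting in the guest's memory, keys included, ends up in that core file.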
On Fri, 07 Mar 2014 18:30:13 +0100, Cedric Lemarchand wrote:
> On 07/03/2014 18:05, Stijn De Weirdt wrote:
> > we tried this with a Dell H200 (also LSI2008 based).
> >
> > however, running some basic benchmarks, we saw no immediate difference
> > between IT and IR firmware.
> >
> > so i'd lik
On Sat, 8 Mar 2014 13:43:37 +0800, Indra Pramana wrote:
> Hi Mariusz,
>
> Good day to you, and thank you for your email.
>
> > You should probably start by hooking up all servers into some kind of
> > statistics gathering software (we use collectd + graphite) and monitor at
> > least disk stat
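As a rough sketch of the collectd side of that (the graphite host name below is
an assumption, and the disk plugin only covers disk stats; add more plugins as
needed):

  # /etc/collectd/collectd.conf (excerpt)
  LoadPlugin disk
  LoadPlugin write_graphite
  <Plugin write_graphite>
    <Node "graphite">
      Host "graphite.example.com"   # assumed hostname of the graphite box
      Port "2003"
      Protocol "tcp"
    </Node>
  </Plugin>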
Hi,
I don't suppose anyone ever managed to look at or fix this issue with
rbd-fuse? Or does anyone know what I'm maybe doing wrong?
Best regards
Graeme
On 07/02/14 12:20, Graeme Lambert wrote:
Hi,
Does anyone know what the issue is with this?
Thanks
Graeme
On 06/02/14 13:21, Graeme
On Mon, Mar 10, 2014 at 11:58 AM, Graeme Lambert wrote:
> Hi,
>
> I don't suppose anyone ever managed to look at or fix this issue with
> rbd-fuse? Or does anyone know what I'm maybe doing wrong?
>
> Best regards
>
> Graeme
>
>
>
> On 07/02/14 12:20, Graeme Lambert wrote:
>
> Hi,
>
> Does anyone
On Mon, Mar 10, 2014 at 12:25 PM, Ilya Dryomov wrote:
> Hi Graeme,
>
> It looks like not enough memory is allocated for image names. On top
> of that, error reporting could have been better. What's the output of
> 'rbd ls | wc -lc' ?
Sorry, the output of 'rbd -p libvirt-pool ls | wc -lc', of co
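For anyone following along, that command just counts the image names and the
total bytes they occupy, which is what matters for the buffer rbd-fuse
allocates for the image list; the output below is made up, not Graeme's:

  $ rbd -p libvirt-pool ls | wc -lc
      517   32768
  # i.e. 517 images whose names add up to roughly 32 KB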
On 10.03.2014 at 07:54, Stefan Priebe - Profihost AG wrote:
On 07.03.2014 16:56, Konrad Gutkowski wrote:
Hi,
If those are journal drives, you could have n+1 SSDs and swap them at
some intervals, though that could introduce more problems.
If it required data to be synchronized one could operate it w
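For reference, a clean journal-device swap usually looks something like this
(a sketch; OSD id 3 and sysvinit are assumptions):

  service ceph stop osd.3
  ceph-osd -i 3 --flush-journal     # write out whatever is still in the journal
  # swap the SSD / recreate the journal partition or symlink here
  ceph-osd -i 3 --mkjournal         # initialize the journal on the new device
  service ceph start osd.3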
On 10.03.2014 11:41, Konrad Gutkowski wrote:
> On 10.03.2014 at 07:54, Stefan Priebe - Profihost AG wrote:
>
>> On 07.03.2014 16:56, Konrad Gutkowski wrote:
>>> Hi,
>>>
>>> If those are journal drives, you could have n+1 SSDs and swap them at
>>> some intervals, though that could introduce more probl
Sorry to chime in so late, but I only just saw this thread.
As Mark said, you should try out collectl, but even more importantly you
might consider installing colmux as well, which is part of collectl-utils.
Whenever I have questions about my disks, I run the command:
colmux -addr filename -command
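Something along these lines, for example (the hostfile name below is made up;
-sD is collectl's detailed disk view):

  colmux -addr osd-hosts.txt -command "-sD"
  # osd-hosts.txt holds one hostname per line; -addr also takes a comma-separated list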
Hi!
I've been checking some of the available information about cache pools, and I've
come up with some questions:
-What do you think is a better approach to improve the performance of
RBD for VMs: Caching OSDs with FlashCache or using SSD Cache Pools?
-As I understand kernel d
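For reference, the cache-pool route with the firefly-era tiering commands looks
roughly like this (pool names and PG counts are made up, and the cache pool
still needs hit-set and target-size tuning on top):

  ceph osd pool create rbd-cache 512 512
  ceph osd tier add rbd rbd-cache
  ceph osd tier cache-mode rbd-cache writeback
  ceph osd tier set-overlay rbd rbd-cache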
On 03/10/2014 08:18 AM, Xavier Trilla wrote:
Hi!
I’ve been checking some of the available information about cache pools
and I’ve come up with some questions:
-What do you think is a better approach to improve the performance of
RBD for VMs: Caching OSDs with FlashCache or using SSD Cache Pools
Hi.
All of a sudden, MDS started crashing, causing havoc on our deployment.
Any help would be greatly appreciated.
ceph.x86_64 0.56.7-0.el6 @ceph
-1> 2014-03-10 19:16:35.956323 7f9681cb3700 1 mds.0.12 rejoin_joint_start
0> 2014-03-10 19:16:35.9
Why the limit of 6 OSDs per SSD?
Where does Ceph tail off in performance when you have too many OSDs in
servers?
When your journal isn't able to keep up. If you use SSDs for
journaling, use at most 6 OSDs per SSD.
I am doing testing with a PCI-e based SSD, and showing that even with
15 OSD di
Hmm, at first glance it looks like you're using multiple active MDSes,
you've created some snapshots, and part of that state got corrupted
somehow. The log files should have a slightly more helpful stack trace
(including line numbers) at the end, and might have more context for
what's gone wrong.
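If it helps, cranking up MDS logging before reproducing the crash usually gives
a much more useful trace (a sketch; set it on the MDS host and restart the
daemon):

  # ceph.conf on the MDS host
  [mds]
      debug mds = 20
      debug ms = 1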
On Mon, 10 Mar 2014, Gregory Farnum wrote:
> Hmm, at first glance it looks like you're using multiple active MDSes
> and you've created some snapshots and part of that state got corrupted
> somehow. The log files should have a slightly more helpful (including
> line numbers) stack trace at the end,
Hi.
On Mon, Mar 10, 2014 at 12:54 PM, Gregory Farnum wrote:
> Hmm, at first glance it looks like you're using multiple active MDSes
> and you've created some snapshots and part of that state got corrupted
> somehow. The log files should have a slightly more helpful (including
> line numbers) sta
> Ceph is seriously badass, but my requirements are to create a cluster in
> which I can host my customer's data in separate areas which are independently
> encrypted, with passphrases which we as cloud admins do not have access to.
>
> My current thoughts are:
> 1. Create an OSD per machine stre
> Why the limit of 6 OSDs per SSD?
SATA/SAS throughput generally.
> I am doing testing with a PCI-e based SSD, and showing that even with 15
> OSD disk drives per SSD the SSD is keeping up.
That will probably be fine performance-wise, but it's worth noting that all
OSDs will fail if the flash
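Rough numbers behind the rule of thumb (all figures are ballpark assumptions):

  6 spinners x ~110 MB/s sequential writes ≈ 660 MB/s of journal traffic,
  which already saturates a single SATA/SAS SSD (~500-550 MB/s).
  A PCIe flash card with well over 1 GB/s of write bandwidth can sit in
  front of 15 spinners, but it also becomes a 15-OSD failure domain.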
Hi,
On 07.03.2014 18:27, Michael J. Kidd wrote:
[...]
> * I've not seen any documentation on each counter, aside from
> occasional mailing list posts about specific counters..
>
[...]
>> One additional question: are these latency values in
>> milli
Hi.
Well, I've screwed up my cluster to the point that nothing works anymore.
The monitors won't start after the version update
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002468.html
I've re-created the monitor fs, and the monitors are running again, but
nothing authenticates t
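In case it's useful, one way to talk to monitors whose auth database has been
wiped is to authenticate with the mon. key from the rebuilt data directory and
re-import the admin keyring (a sketch; the mon name 'a' and the paths are
assumptions):

  ceph -n mon. -k /var/lib/ceph/mon/ceph-a/keyring auth list
  ceph -n mon. -k /var/lib/ceph/mon/ceph-a/keyring auth import -i /etc/ceph/ceph.client.admin.keyring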
Thanks for the suggestion, Seth. It's unfortunately not an option in our model.
We did consider it.
On 2014 Mar 10, at 02:30, Seth Mason (setmason) wrote:
Why not have the application encrypt the data, or encrypt at the compute
server's file-system level? That way you don't have to manage keys.
Seth
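For what it's worth, the compute-side variant of that would look roughly like
this inside the guest, with the tenant holding the passphrase (device name and
mount point are assumptions):

  cryptsetup luksFormat /dev/vdb          # the attached RBD-backed volume
  cryptsetup luksOpen /dev/vdb tenantdata
  mkfs.xfs /dev/mapper/tenantdata
  mount /dev/mapper/tenantdata /srv/tenantdata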
Thanks Kyle.
I've deliberately not provided the entire picture. I'm aware of memory
residency and of in-flight encryption issues. These are less of a problem for
us.
For me, it's a question of finding a reliably encrypted, OSS, at-rest setup
which involves Ceph and preferably ZFS for flexibil
Now I'm getting this. Maybe you have an idea of what can be done to straighten
this out?
-12> 2014-03-10 22:26:23.748783 7fc0397e5700 0 log [INF] : mdsmap e1: 0/0/1 up
-11> 2014-03-10 22:26:23.748793 7fc0397e5700 10 send_log to self
-10> 2014-03-10 22:26:23.748795 7fc0397e5700 10 log_queue is 4 l
Further, here is the logging output (when I set 'debug rgw log = 20/20' in
ceph.conf). I have removed some information. The server replies with a 403.
Any insight into why? When the account submits a non-admin-type request it
works, but not when trying to create a new user. Is there a CAP w
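In case the problem is simply missing admin caps on the calling account,
granting them looks like this (the uid 'admin' is an assumption):

  radosgw-admin caps add --uid=admin --caps="users=*;buckets=*"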
I'd like to stand up another SFBay (Mountain View, CA, USA) meetup for the
week of 3/24 or 4/7 (3/31 is the week of OpenStack, so we might want to avoid
then). We will need to secure a location again as well as some topics to
discuss.
Are the fine folks at Mellanox interested in hosting again?
Andrew
4/7’s the week of ApacheCon and CloudStack Collab in Denver as well, so a good
number of us ASF/ACS folk won’t be able to make that...
On Mar 10, 2014, at 5:35 PM, Andrew Woodward wrote:
> I'd like to stand up another SFBay (Mountain View, CA, USA) meetup for the
> week of 3/24 or 4/7 (3/31 is
Are you expecting the tenant to provide the key? Also how many tenants are you
expecting to have? It seems like you're looking for per-object encryption and
not per OSD.
-Seth
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Beha
OS: CentOS 6.4
version: ceph 0.67.7
Hello, everyone.
With the help of the documentation, I have installed the Ceph gateway,
but I don't know how to configure it. The web page
http://ceph.com/docs/master/radosgw/config/ has many commands that are not found; I
think it's written for Ubuntu.
Can anyone help?
Thanks!
On Mon, Mar 10, 2014 at 4:26 PM, Steve Carter wrote:
> Further, here is the logging output (when I set 'debug rgw log = 20/20' in
> ceph.conf). I have removed some information. The server replies with a 403.
> Any insight into why? When the account submits a non-admin type request it
> work
Hi,
which commands are “not found”?
This page for configuring the RGW works fine as far as I know; I used it as
recently as a week ago.
Can you please give us more details? What is your layout (radosgw installed on
a ceph node, mon node, standalone node)?
Note: In order to get it running, rem
> You must also create an rgw.conf file in the /etc/apache2/sites-enabled
> directory.
There is no /etc/apache2/sites-enabled directory on CentOS, so I didn't
create rgw.conf; I put the content of rgw.conf into httpd.conf.
> sudo a2ensite rgw.conf
> sudo a2dissite default
These 2 commands
Hi,
it looks like this comes from the Apache install. Something is wrong or
different with CentOS.
Replace the first command with
ln -s /etc/httpd/sites-available/rgw.conf /etc/httpd/conf.d/rgw.conf
Replace the second command with
unlink /etc/httpd/conf.d/default
This should do the trick.
JC
On Mar
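After linking the config you will also want to restart the web server and make
sure the gateway daemon is running; on CentOS that is roughly (the init script
name may differ between packages):

  service httpd restart
  service ceph-radosgw start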
Thanks for your reply!
I have tried it, but it didn't help.
I installed Ceph on 3 servers called ceph69, ceph70, ceph71.
All my steps are as follows:
1. vi /etc/ceph/ceph.conf
and add this content:
[client.radosgw.gateway]
host = {host-name}
keyring = /etc/ceph/keyring.radosgw.gateway
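In case the keyring referenced above doesn't exist yet, it is normally created
along these lines (a sketch following the docs of that era; the caps can be
tightened as needed):

  ceph-authtool --create-keyring /etc/ceph/keyring.radosgw.gateway
  ceph-authtool /etc/ceph/keyring.radosgw.gateway -n client.radosgw.gateway --gen-key
  ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/keyring.radosgw.gateway
  ceph auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway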