On 10/01/2013 05:08 PM, Jogi Hofmüller wrote:
> Dear all,
Hi Jogi,
> I am back to managing the cluster before starting to use it even on
> a test host. First of all a question regarding the docs:
>
> Is this [1] outdated? If not, why are the l
Hi.
I am going to create my first Ceph cluster using 3 physical servers and
Ubuntu distribution.
Each server will have three 3 TB hard drives, connected with or without a
physical RAID controller.
I need to be protected against the failure of one of these three servers while
keeping as much usable space as possible,
Hi,
I would not use RAID5 since it would be redundant with what Ceph provides.
My 2cts ;-)
On 02/10/2013 13:50, shacky wrote:
> Hi.
>
> I am going to create my first Ceph cluster using 3 physical servers and
> Ubuntu distribution.
> Each server will have three 3Tb hard drives, connected with o
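For reference, the redundancy Loic mentions comes from Ceph's own replication
rather than RAID. A minimal sketch (pool name and sizes are illustrative, and
assume the default CRUSH rule that keeps replicas on different hosts):

  # two copies on two different servers: survives the loss of one server
  # while using half the raw space; 'size 3' trades space for extra safety
  ceph osd pool set rbd size 2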
Hi,
I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using
libvirt to "patch" the ceph disk directly to the qemu instance.
I'm using SL6 with the patched qemu packages from the Ceph site (which the
latest version is still cuttlefish):
http://www.ceph.com/packages/ceph-extras
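For context, a disk attached that way is usually defined in the libvirt domain
XML roughly as below; this is only a sketch, and the pool/image name, monitor
host and secret UUID are placeholders, not values from this thread:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback'/>
    <auth username='libvirt'>
      <secret type='ceph' uuid='PLACEHOLDER-UUID'/>
    </auth>
    <source protocol='rbd' name='volumes/test-disk'>
      <host name='mon-host' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

The cache='writeback' attribute is what enables the client-side caching
discussed in the replies.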
On Tue, Oct 1, 2013 at 11:08 AM, Jogi Hofmüller wrote:
> Dear all,
>
> I am back to managing the cluster before starting to use it even on a
> test host. First of all a question regarding the docs:
>
> Is this [1] outdated? If not, why are the links to chef-* not working?
> Is chef-* still reco
On Fri, Sep 27, 2013 at 4:59 PM, Piers Dawson-Damer wrote:
> Hi,
>
> I'm trying to setup my first cluster, (have never manually bootstrapped a
> cluster)
>
> Is ceph-deploy osd activate/prepare supposed to write specific entries for
> each OSD to the master ceph.conf file, along the lines of
> h
On 10/1/13 11:37 AM, Fuchs, Andreas (SwissTXT) wrote:
> What we need is a key/secret with read-write permission and one with
> read-only permission to a certain bucket. Is this possible? How?
Hi Andi,
Yes, this is possible. You can either create accounts for the radosgw
through the
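A minimal sketch of the account side, assuming the stock radosgw-admin tool
(uids and display names here are made up for illustration); the per-bucket
read-only restriction itself is then applied as a bucket ACL granting that
user read access, as in the s3cmd example further down:

  radosgw-admin user create --uid=rw-user --display-name="Read-write user"
  radosgw-admin user create --uid=ro-user --display-name="Read-only user"
  # extra S3 key pairs for an existing user, if needed
  radosgw-admin key create --uid=ro-user --key-type=s3 --gen-access-key --gen-secret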
Thank you very much for your answer!
So I could skip hardware RAID controllers on the storage servers.
Good news.
I see in the Ceph documentation that I will have to manually configure the
datastore to be efficient, reliable and fully fault tolerant.
Is there a particular way to configure it,
I successfully installed a new cluster recently following the instructions
here: http://ceph.com/docs/master/rados/deployment/
Cheers
On 02/10/2013 16:32, shacky wrote:
> Thank you very much for your answer!
> So I could skip hardware RAID controllers on the storage servers. Good
> news.
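For anyone following the same page, the deployment boils down to a handful of
ceph-deploy commands; a rough sketch, with hostnames and device names as
placeholders:

  ceph-deploy new server1 server2 server3
  ceph-deploy install server1 server2 server3
  ceph-deploy mon create server1 server2 server3
  ceph-deploy gatherkeys server1
  # one OSD per data disk, repeated for each server
  ceph-deploy osd create server1:sdb server1:sdc server1:sdd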
On 2013-10-02 07:35, Loic Dachary wrote:
Hi,
I would not use RAID5 since it would be redundant with what Ceph provides.
I would not use RAID-5 (or 6) because its safety on modern drives is
questionable and because I haven't seen anyone comment on Ceph's
performance -- e.g. openstack docs exp
What happens when a drive goes bad in Ceph and has to be replaced (at
the physical level)? In the RAID world you pop out the bad disk and
stick a new one in, and the controller takes care of getting it back into
the system. From what I've been reading so far, it's probably going to be a
mess to do t
I need to share buckets created by one user with other users without sharing
the same access_key or secret_key. For example, I have user jmoura with a
bucket named Jeff and I need to share this bucket with user frocha and show
the information in bucket Jeff.
Does anybody know how I can do that?
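One approach that should work (hedged: the exact flags depend on the S3 client,
and "frocha" here stands for the radosgw uid of the second user) is for the
bucket owner to add an ACL grant on the bucket, e.g. with a recent s3cmd:

  # run with jmoura's credentials, who owns the bucket
  s3cmd setacl s3://Jeff --acl-grant=read:frocha

frocha can then list and read the bucket with his own access_key/secret_key.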
I have heard rumblings that there are experiments-in-progress. Would
love to hear from anyone that has put these two together and what
their experience was like. Drop me a line here or via
commun...@inktank.com to let me know what you thought. Thanks!
Best Regards,
Patrick McGarry
Director,
I actually am looking for a similar answer. If 1 OSD = 1 HDD, in dumpling
it will relocate the data for me after the timeout, which is great. If I
just want to replace the OSD with an unformatted new HDD, what is the
procedure?
One method that has worked for me is to remove it from the crush map th
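A rough outline of that procedure, assuming osd.N is the failed disk and the
cluster is otherwise healthy (IDs, hostnames and device names are placeholders):

  # remove the dead OSD from the cluster
  ceph osd out N
  ceph osd crush remove osd.N
  ceph auth del osd.N
  ceph osd rm N
  # after swapping in the new, unformatted disk, create a fresh OSD on it
  ceph-deploy osd create host:sdX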
Along the lines of this thread, if I have OSD(s) on rotational HDD(s), but have
the journal(s) going to an SSD, I am curious about the best procedure for
replacing the SSD should it fail.
-Joe
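A sketch of one way to handle that, assuming the SSD can still be flushed
before it is pulled (IDs and init commands are placeholders and vary by distro):

  # for each OSD whose journal lives on that SSD
  service ceph stop osd.N
  ceph-osd -i N --flush-journal    # write any pending journal entries to the data store
  # replace the SSD and recreate the journal partition/symlink, then
  ceph-osd -i N --mkjournal
  service ceph start osd.N

If the SSD dies outright before the journals can be flushed, the affected OSDs
generally have to be recreated and backfilled instead.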
Hey Robert,
On 02-10-13 14:44, Robert van Leeuwen wrote:
> Hi,
>
> I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using
> libvirt to "patch" the ceph disk directly to the qemu instance.
> I'm using SL6 with the patched qemu packages from the Ceph site (which the
> latest
On 10/02/2013 10:45 AM, Oliver Daudey wrote:
Hey Robert,
On 02-10-13 14:44, Robert van Leeuwen wrote:
Hi,
I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using libvirt to
"patch" the ceph disk directly to the qemu instance.
I'm using SL6 with the patched qemu packages fr
On this web page http://ceph.com/docs/master/start/quick-start-preflight/ where
it says "Modify your ~/.ssh/config file of your admin node so that it defaults
to logging in as the user you created when no username is specified." Which
config file do I change?
I am using Ubuntu server 13.04.
ssh looks for a per-user config file in ~/.ssh/config in addition to the
system-wide config in /etc/ssh/ssh_config. If the file doesn't exist, create it.
More information is available from 'man ssh_config'.
On Wed, Oct 2, 2013 at 1:18 PM, Nimish Patel wrote:
> On this web page
> http://ceph.com/docs
On my system my user is named "ceph", so I modified /home/ceph/.ssh/config.
That seemed to work fine for me. ~/ is shorthand for your user's home folder.
I think SSH will default to the current username, so if you just use the same
username everywhere this may not even be necessary.
My file:
c
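The quoted file is cut off above; for reference, a minimal ~/.ssh/config on the
admin node looks roughly like this (hostnames and the "ceph" username are
illustrative):

  Host node1
      Hostname node1.example.com
      User ceph
  Host node2
      Hostname node2.example.com
      User ceph

With that in place, both plain ssh and ceph-deploy log in as that user when no
username is given on the command line.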
I have three storage servers that provide NFS and iSCSI services to my
network, which serve data to four virtual machine compute hosts (two
ESXi, two libvirt/kvm) with several dozen virtual machines. I decided
to test out a Ceph deployment to see whether it could replace iSCSI as
the primary w
Can anyone provide me a sample ceph.conf with multiple rados gateways? I must
not be configuring it correctly and I can't seem to Google up an example or
find one in the docs. Thanks!
-Joe
On Wed, Oct 2, 2013 at 1:59 PM, Eric Lee Green wrote:
> I have three storage servers that provide NFS and iSCSI services to my
> network, which serve data to four virtual machine compute hosts (two ESXi,
> two libvirt/kvm) with several dozen virtual machines . I decided to test out
> a Ceph deploy
Part of our ceph.conf:
[client.radosgw.gw0]
host =
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /tmp/radosgw.sock
log file = /var/log/radosgw/radosgw.log
rgw dns name = <3l>..
rgw thread pool size = 100
rgw print continue = f
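For a second gateway the usual pattern is simply another client section with
its own host (and its own key of the same name in the keyring); the values
below are placeholders, not taken from the config above:

  [client.radosgw.gw1]
  host = gateway2
  keyring = /etc/ceph/keyring.radosgw.gateway
  rgw socket path = /tmp/radosgw.sock
  log file = /var/log/radosgw/radosgw.log
  rgw dns name = s3.example.com
  rgw thread pool size = 100
  rgw print continue = false

Each gateway host then runs its own radosgw instance and web front end, usually
behind round-robin DNS or a load balancer.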
I agree with Greg that this isn't a great test. You'll need multiple
clients to push the Ceph cluster, and you have to use oflag=direct if
you're using dd.
The OSDs should be individual drives, not part of a RAID set; otherwise
you're just creating extra work, unless you've reduced the number of
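For example (a sketch; sizes, pool and mount point are made up), direct I/O and
concurrency change the numbers considerably compared to a single buffered dd:

  # single client, direct I/O, against a mounted RBD image
  dd if=/dev/zero of=/mnt/rbdtest/testfile bs=4M count=1024 oflag=direct
  # cluster-side baseline with 16 concurrent writers
  rados -p rbd bench 60 write -t 16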
On 10/2/2013 2:24 PM, Gregory Farnum wrote:
There's a couple things here:
1) You aren't accounting for Ceph's journaling. Unlike a system such
as NFS, Ceph provides *very* strong data integrity guarantees under
failure conditions, and in order to do so it does full data
journaling. So, yes, cut y
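A back-of-the-envelope illustration of the journaling point (all numbers are
assumptions, not measurements from this thread):

  per-disk streaming write      ~120 MB/s
  journal on the same disk      ~120 / 2  = ~60 MB/s effective per OSD
  9 OSDs with 2x replication    9 x 60 / 2 = ~270 MB/s aggregate client writes

and a single, unparallelised dd sees only a fraction of that aggregate.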
Hi Josh,
> Message: 3
> Date: Wed, 02 Oct 2013 10:55:04 -0700
> From: Josh Durgin
> To: Oliver Daudey , ceph-users@lists.ceph.com,
> robert.vanleeu...@spilgames.com
> Subject: Re: [ceph-users] Loss of connectivity when using client
> caching with libvirt
> Message-ID: <524c5df8.60
On 10/2/2013 3:13 PM, Warren Wang wrote:
I agree with Greg that this isn't a great test. You'll need multiple
clients to push the Ceph cluster, and you have to use oflag=direct if
you're using dd.
I was not doing a test of overall performance but, rather, doing a
"smoke test" to see whether
On 10/02/2013 05:16 PM, Eric Lee Green wrote:
On 10/2/2013 2:24 PM, Gregory Farnum wrote:
There's a couple things here:
1) You aren't accounting for Ceph's journaling. Unlike a system such
as NFS, Ceph provides *very* strong data integrity guarantees under
failure conditions, and in order to do
On Wed, 2 Oct 2013, Eric Lee Green wrote:
> By contrast, that same dd to an iSCSI volume exported by one of the servers
> wrote at 240 megabytes per second. Order of magnitude difference.
Can you see what 'rados -p rbd bench 60 write' tells you?
I suspect the problem here is an unfortunate combin
On 10/2/2013 3:50 PM, Sage Weil wrote:
On Wed, 2 Oct 2013, Eric Lee Green wrote:
By contrast, that same dd to an iSCSI volume exported by one of the servers
wrote at 240 megabytes per second. Order of magnitude difference.
Can you see what 'rados -p rbd bench 60 write' tells you?
Pretty much
On 10/02/2013 03:16 PM, Blair Bethwaite wrote:
Hi Josh,
Message: 3
Date: Wed, 02 Oct 2013 10:55:04 -0700
From: Josh Durgin
To: Oliver Daudey , ceph-users@lists.ceph.com,
robert.vanleeu...@spilgames.com
Subject: Re: [ceph-users] Loss of connectivity when using client
caching w
Josh,
On 3 October 2013 10:36, Josh Durgin wrote:
> The base version of qemu in precise has the same problem. It only
> affects writeback caching.
>
> You can get qemu 1.5 (which fixes the issue) for precise from ubuntu's
> cloud archive.
Thanks for the pointer! I had not realised there were new
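If it helps anyone else, enabling the cloud archive on precise is roughly the
following (assuming the Havana pocket carries the newer qemu; package names may
vary):

  sudo apt-get install ubuntu-cloud-keyring
  echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main" | \
      sudo tee /etc/apt/sources.list.d/cloud-archive.list
  sudo apt-get update && sudo apt-get install qemu-kvm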
On 10/02/2013 06:26 PM, Blair Bethwaite wrote:
Josh,
On 3 October 2013 10:36, Josh Durgin wrote:
The base version of qemu in precise has the same problem. It only
affects writeback caching.
You can get qemu 1.5 (which fixes the issue) for precise from ubuntu's
cloud archive.
Thanks for the
FWIW: I use a qemu 1.4.2 that I built with a Debian package upgrade script and
the stock libvirt from raring.
> On Oct 2, 2013, at 10:59 PM, Josh Durgin wrote:
>
>> On 10/02/2013 06:26 PM, Blair Bethwaite wrote:
>> Josh,
>>
>>> On 3 October 2013 10:36, Josh Durgin wrote:
>>> The version bas
Thanks guys,
Useful info - we'll see how we go. I expect the main issue blocking a
cloud-wide upgrade will be forwards live-migration of existing instances.
Cheers, ~B
On 3 October 2013 13:04, Michael Lowe wrote:
> FWIW: I use a qemu 1.4.2 that I built with a debian package upgrade script
> a