On Wed, 19 Jun 2013, Derek Yarnell wrote:
> On 6/18/13 5:31 PM, Sage Weil wrote:
> >> 1) Remove the %ghost directive and allow RPM to install the directory.
> >> Potentially leaving orphaned pid/state files after the package is removed.
> >>
> >> 2) Or the directory needs to be created in the %post
On 6/18/13 5:31 PM, Sage Weil wrote:
>> 1) Remove the %ghost directive and allow RPM to install the directory.
>> Potentially leaving orphaned pid/state files after the package is removed.
>>
>> 2) Or the directory needs to be created in the %post section. If it is
>> created in the %post section
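For option 2, a minimal sketch of what the %post addition could look like (the mode here is a guess, not taken from ceph.spec.in):

%post
# create the runtime directory that the %ghost entry only tracks but never installs
mkdir -p /var/run/ceph
chmod 0755 /var/run/ceph

Option 1 would instead mean dropping %ghost and listing the directory with %dir under %files, accepting that stale pid/state files can be left behind after the package is removed.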
Dear all,
I am trying to mount cephfs at 2 different mount points (each with its own
pool and key). While the first mount works (after using set_layout to get it
onto the right pool), the second attempt fails with "mount error 12 =
Cannot allocate memory". Did I miss some steps
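In case it is useful, a rough sketch of the kind of two-pool setup being described; the pool, client, and mount names are made up here, and the exact cephfs/set_layout flags can differ between releases:

# second data pool, made known to the MDS (cuttlefish-era command)
ceph osd pool create data2 64
ceph mds add_data_pool <pool-id>

# a client key restricted to that pool
ceph auth get-or-create client.two mon 'allow r' mds 'allow' osd 'allow rwx pool=data2'
ceph auth get-key client.two > /etc/ceph/client.two.secret

# mount a subdirectory with that identity and point it at the new pool
mount -t ceph mon1:6789:/two /mnt/two -o name=two,secretfile=/etc/ceph/client.two.secret
cephfs /mnt/two set_layout -p <pool-id>   # some versions also want the stripe parameters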
When I run /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway,
I get this error:
2013-06-19 09:19:55.148536 7f120aa0d820 0 librados: client.radosgw.gateway
authentication error (95) Operation not supported
2013-06-19 09:19:55.148923 7f120aa0d820 -1 Couldn't init storage provider
(RADOS)
How t
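A couple of hedged checks for that authentication error (the keyring path below is a guess; only the client name comes from the log):

# does the cluster know the key, and does the local keyring match it?
ceph auth list | grep -A3 client.radosgw.gateway
cat /etc/ceph/keyring.radosgw.gateway

# can that identity reach the cluster at all?
ceph -s --name client.radosgw.gateway --keyring /etc/ceph/keyring.radosgw.gateway

ceph.conf also needs a keyring = line under [client.radosgw.gateway] if the keyring is not in one of the default locations.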
We solved this problem at ParaScale by enabling users to enter any fancy
device names in the device discovery logic, so that HP servers like the
DL185, which use older Compaq RAID controllers, would work. This is common.
Best,
Cameron
--
On Tue, Jun 18, 2013 at 1:43 PM, Sage Weil wrote:
> I reme
On Tue, 18 Jun 2013, Derek Yarnell wrote:
> Hi,
>
> So the first error below is that /var/run/ceph isn't created when
> installing the ceph RPM(s). This is because of line 440 in
> ceph.spec.in using the %ghost directive[1] for the file install. My
> reading of the behavior will mean that the f
I remember seeing a few reports of problems from users with strange block
device names in /dev (sdaa*, c0d1p2* etc.) and have a bug open
(http://tracker.ceph.com/issues/5345), but looking at the code I don't
immediately see the problem, and I don't have any machines that have this
problem. Are
> [ Please stay on the list. :) ]
Doh. Was trying to get Outlook to quote properly, and forgot to hit Reply-all.
:)
> >> The specifics of what data will migrate where will depend on how
> >> you've set up your CRUSH map, when you're updating the CRUSH
> >> locations, etc, but if you move an OS
[ Please stay on the list. :) ]
On Tue, Jun 18, 2013 at 12:54 PM, Edward Huyer wrote:
>> > First questions: Are there obvious flaws or concerns with the
>> > following configuration I should be aware of? Does it even make sense
>> > to try to use ceph here? Anything else I should know, think a
Hi,
So the first error below is that /var/run/ceph isn't created when
installing the ceph RPM(s). This is because of line 440 in
ceph.spec.in using the %ghost directive[1] for the file install. My
reading of the behavior will mean that the file or directory in this
case will be included in the
On Tue, Jun 18, 2013 at 11:21 AM, harri wrote:
> Thanks Greg,
>
> The concern I have is an "all eggs in one basket" approach to storage
> design. Is it feasible, however unlikely, that a single Ceph cluster could
> be brought down (obviously yes)? And what if you wanted to operate different
> stor
Spell check fail, that of course should have read CRUSH map.
Sent from Samsung Mobile
Original message
From: harri
Date: 18/06/2013 19:21 (GMT+00:00)
To: Gregory Farnum
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Single Cluster / Reduced Failure Domains
Thanks
On Tue, Jun 18, 2013 at 10:34 AM, Edward Huyer wrote:
> Hi, I’m an admin for the School of Interactive Games and Media at RIT, and
> looking into using ceph to reorganize/consolidate the storage my department
> is using. I’ve read a lot of documentation and comments/discussion on the
> web, but I
On 6/18/13 10:29 AM, Sage Weil wrote:
> Derek-
>
> Please also try the latest ceph-deploy and cuttlefish branches, which
> fixed several issues with el6 distros. 'git pull' for the latest
> ceph-deploy (clone from github and ./bootstrap if you were using the
> package) and install with
>
> .
Thanks Greg,
The concern I have is an "all eggs in one basket" approach to storage design.
Is it feasible, however unlikely, that a single Ceph cluster could be brought
down (obviously yes)? And what if you wanted to operate different storage
networks?
It feels right to build virtual environme
I would like to make a local mirror of your yum repositories. Do you support
any of the standard methods of syncing, e.g. rsync?
Thanks,
Joe
--
Joe Ryner
Center for the Application of Information Technologies (CAIT)
Production Coordinator
P: (309) 298-1804
F: (309) 298-2806
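One way to mirror without relying on rsync, assuming the ceph repo is already configured in /etc/yum.repos.d (the repo id and paths below are made up):

reposync --repoid=ceph --download_path=/srv/mirror   # pulls every package the repo currently serves
createrepo /srv/mirror/ceph                          # rebuild the metadata for local clients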
Giuseppe,
My apologies for the misunderstanding. I do have some information on
the librados API. There is a Python binding for the Ceph Filesystem
in the repository. For information on cloning the Ceph repository, see
http://ceph.com/docs/master/install/clone-source/
Once you clone the Ceph sour
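For reference, grabbing the tree as that doc describes is just:

git clone --recursive https://github.com/ceph/ceph.git
ls ceph/src/pybind    # rados.py and cephfs.py -- the filesystem binding mentioned above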
Hi, I'm an admin for the School of Interactive Games and Media at RIT, and
looking into using ceph to reorganize/consolidate the storage my department is
using. I've read a lot of documentation and comments/discussion on the web,
but I'm not 100% sure what I'm looking at doing is a good use of
Hi Derek,
Are you sure the package is installed on the target? (Did you ceph-deploy
install ) It is probably caused by /var/lib/ceph/mon not
existing?
sage
On Tue, 18 Jun 2013, Derek Yarnell wrote:
>
> > If you are still having problems with ceph-deploy, please forward the
> > ceph.log f
> If you are still having problems with ceph-deploy, please forward the
> ceph.log file to me, I can start trying to figure out what's gone wrong.
Hi,
Nothing seems fishy in the log; I am going to try the git version now.
-bash-4.1$ cat ceph.log
2013-06-12 13:21:17,937 ceph_deploy.new DEBUG
On Tue, Jun 18, 2013 at 09:02:12AM -0700, Gregory Farnum wrote:
> On Tuesday, June 18, 2013, harri wrote:
>
> > Hi,
> >
> > I wondered what best practice is recommended to reduce failure domains
> > for a virtual server platform. If I wanted to run multiple virtual server
On Tuesday, June 18, 2013, Leen Besselink wrote:
> On Tue, Jun 18, 2013 at 08:13:39PM +0800, Da Chun wrote:
> > Hi List, my ceph cluster has two osds on each node. One has 15g capacity,
> > and the other 10g.
> > It's interesting that, after I took the 15g osd out of the cluster, the
> > cluster starte
On Tuesday, June 18, 2013, harri wrote:
> Hi,
>
> I wondered what best practice is recommended to reduce failure domains
> for a virtual server platform. If I wanted to run multiple virtual server
> clusters then would it be feasible to serve storage from 1 x large Ceph
> clus
I think the bug Sage is talking about was fixed in 3.8.0
On Jun 18, 2013, at 11:38 AM, Guido Winkelmann
wrote:
> On Tuesday, 18 June 2013, 07:58:50, Sage Weil wrote:
> >> On Tue, 18 Jun 2013, Guido Winkelmann wrote:
> >> On Thursday, 13 June 2013, 01:58:08, Josh Durgin wrote:
Which fil
On Tuesday, 18 June 2013, 07:58:50, Sage Weil wrote:
> On Tue, 18 Jun 2013, Guido Winkelmann wrote:
> > On Thursday, 13 June 2013, 01:58:08, Josh Durgin wrote:
> > > Which filesystem are the OSDs using?
> >
> > BTRFS
>
> Which kernel version? There was a recent bug (fixed in 3.9 or 3.8) t
On Tue, 18 Jun 2013, Giuseppe 'Gippa' Paterno' wrote:
> Hi John,
> apologies for the late reply. The librados seems quite interesting ...
> > Actually no. I'll write up an API doc for you soon.
> >
> > sudo apt-get install python-ceph
> >
> > import rados
>
> I wonder if I can make python calls to
On Tue, 18 Jun 2013, Guido Winkelmann wrote:
> On Thursday, 13 June 2013, 01:58:08, Josh Durgin wrote:
> > On 06/11/2013 11:59 AM, Guido Winkelmann wrote:
> >
> > > - Write the data with a very large number of concurrent threads (1000+)
> >
> > Are you using rbd caching? If so, turning it off
Derek-
Please also try the latest ceph-deploy and cuttlefish branches, which
fixed several issues with el6 distros. 'git pull' for the latest
ceph-deploy (clone from github and ./bootstrap if you were using the
package) and install with
./ceph-deploy install --dev=cuttlefish cbcbobj00.umiacs
I built the 3.10-rc rbd module for a 3.8 kernel yesterday, and only
had one thing to add (I know I'm reviving an old thread).
There is one folder missing from the original list of files to use:
include/linux/crush/*
That would bring everything to:
include/keys/ceph-type.h
include/linux/ceph/*
i
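As a rough illustration of the copy step for just the paths named so far (the tree locations are placeholders, and the file list above is cut off):

cp    linux-3.10-rc/include/keys/ceph-type.h  linux-3.8/include/keys/
cp -r linux-3.10-rc/include/linux/ceph        linux-3.8/include/linux/
cp -r linux-3.10-rc/include/linux/crush       linux-3.8/include/linux/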
On Tue, Jun 18, 2013 at 02:38:19PM +0200, Kurt Bauer wrote:
>
>
> Da Chun wrote:
> >
> > Thanks for sharing! Kurt.
> >
> > Yes. I have read the article you mentioned. But I also read another
> > one:
> > http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devic
Thanks! Craig.
umount works.
About the time skew, I saw in the log that the time difference should be less
than 50ms. I set up one of my nodes as the time server, and the others sync
their time with it. I don't know why the system time still changes frequently,
especially after reboot. Maybe it's bec
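For what it's worth, a minimal sketch of keeping every node on the local time server (the hostname is made up; the 50ms figure is the monitor clock-skew limit mentioned above):

# /etc/ntp.conf on each node: sync against the in-house server
server timehost.example.com iburst

service ntpd restart    # 'ntp' on Debian/Ubuntu
ntpq -p                 # offsets should stay well under 50 ms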
Da Chun wrote:
>
> Thanks for sharing! Kurt.
>
> Yes. I have read the article you mentioned. But I also read another
> one:
> http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
> It uses LIO, which is the current standard Linux kernel SCSI target.
T
On Tue, Jun 18, 2013 at 08:13:39PM +0800, Da Chun wrote:
> Hi List, my ceph cluster has two osds on each node. One has 15g capacity, and
> the other 10g.
> It's interesting that, after I took the 15g osd out of the cluster, the
> cluster started to rebalance, and finally the 10g osd on the same no
Hi List, my ceph cluster has two osds on each node. One has 15g capacity, and
the other 10g.
It's interesting that, after I took the 15g osd out of the cluster, the cluster
started to rebalance, and eventually the 10g osd on the same node became full,
was taken off, and failed to start again wit
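Two quick ways to see the weights and usage involved in that kind of rebalance:

ceph osd tree   # CRUSH weights normally track each osd's size (the 15g vs 10g here)
rados df        # per-pool usage, to watch where the data lands after the osd is taken out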
Hi,
I wondered what best practice is recommended to reduce failure domains for a
virtual server platform. If I wanted to run multiple virtual server clusters,
would it be feasible to serve storage from 1 x large Ceph cluster?
I am concerned that, in the unlikely event the whole Ceph c
Hi John,
apologies for the late reply. The librados seems quite interesting ...
> Actually no. I'll write up an API doc for you soon.
>
> sudo apt-get install python-ceph
>
> import rados
I wonder if I can make python calls to interact with the object store
(say: cephfs.open() mkdir() ) direct
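Building on John's snippet, a small sketch of poking the object store from Python; the paths are assumptions, and the cephfs module shipped next to rados is the filesystem binding with the mkdir()/open() style calls being asked about, though its exact constructor varies by release:

sudo apt-get install python-ceph

python <<'EOF'
import rados    # librados binding; cephfs.py ships alongside it for the filesystem calls

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
print cluster.list_pools()   # straight librados, no cephfs involved
cluster.shutdown()
EOF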
On Thursday, 13 June 2013, 01:58:08, Josh Durgin wrote:
> On 06/11/2013 11:59 AM, Guido Winkelmann wrote:
>
> > - Write the data with a very large number of concurrent threads (1000+)
>
> Are you using rbd caching? If so, turning it off may help reproduce
> faster if it's related to the numbe
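For reference, the "off" setting being suggested is normally just a ceph.conf entry on the client side:

[client]
    rbd cache = false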
On Tue, Jun 18, 2013 at 11:13:15AM +0200, Leen Besselink wrote:
> On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote:
> > Hi,
> >
> >
> > Da Chun wrote:
> > > Hi List,
> > >
> > > I want to deploy a ceph cluster with latest cuttlefish, and export it
> > > with iscsi interface to my appl
Thanks for sharing! Kurt.
Yes. I have read the article you mentioned. But I also read another one:
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
It uses LIO, which is the current standard Linux kernel SCSI target.
There is another doc in the ce
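Roughly, the LIO route from that article comes down to mapping the image with the kernel client and exporting the block device; the names below are invented, and older targetcli versions use "iblock" instead of "block":

rbd map rbd/iscsi-img --id admin
targetcli /backstores/block create name=rbd0 dev=/dev/rbd/rbd/iscsi-img
targetcli /iscsi create iqn.2013-06.com.example:rbd0
targetcli /iscsi/iqn.2013-06.com.example:rbd0/tpg1/luns create /backstores/block/rbd0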
On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote:
> Hi,
>
>
> Da Chun wrote:
> > Hi List,
> >
> > I want to deploy a ceph cluster with latest cuttlefish, and export it
> > with iscsi interface to my applications.
> > Some questions here:
> > 1. Which Linux distro and release would you
Hi Alex,

> What versions of Qemu are recommended for this?

I would go with version 1.4.2 (I don't know what the official
recommendation is).

> which is the implementation of using asynchronous flushing
> in Qemu. That's only in 1.4.3 and 1.5 if I use the upstream

As far as I know, it is in 1.4
I'm planning on running Ceph Cuttlefish with Qemu/KVM using Qemu's
inbuilt RBD support (not kernel RBD). I may go beyond Cuttlefish.
What versions of Qemu are recommended for this? Qemu 1.0 is what
ships with Ubuntu Precise LTS, which is the base OS in use, so this
would be the best option in many wa
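Whichever version ends up being used, the built-in RBD support is attached with a -drive line along these lines (pool, image, and id are placeholders):

qemu-system-x86_64 ... \
    -drive format=raw,file=rbd:rbd/vm1:id=admin:conf=/etc/ceph/ceph.conf,cache=writeback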
Hi,
Da Chun wrote:
> Hi List,
>
> I want to deploy a ceph cluster with latest cuttlefish, and export it
> with iscsi interface to my applications.
> Some questions here:
> 1. Which Linux distro and release would you recommend? I used Ubuntu
> 13.04 for testing purpose before.
For the ceph-clust