Hi Sage,
all hosts (ceph servers and clients) are Ubuntu 13.04 server with kernel
3.8.0-23-generic.
Just another question:
Before running 'ceph-deploy -v --overwrite-conf osd prepare
bd-0:sdc:/dev/sda5', the filesystem type of /dev/sda5 (journal on SSD)
was btrfs; after running the command, its filesystem
Can anybody help me?
Many thanks in advance and best regards,
Álvaro.
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On behalf of Alvaro Izquierdo Jimeno
Sent: Tuesday, June 11, 2013 17:11
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Glance
On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote:
> On Jun 12, 2013, at 2:51 PM, Yehuda Sadeh wrote:
>
>> On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote:
>>> On Jun 12, 2013, at 2:02 PM, Yehuda Sadeh wrote:
>>>
On Wed, Jun 12, 2013 at 12:59 PM, John Nielsen wrote:
> After upda
Hi,
I am trying to run ceph-deploy on a very basic one-node configuration,
but it is causing an exception that is confusing me:
ceph-deploy mon create cbcbobj00.umiacs.umd.edu
ceph-mon: mon.noname-a 192.168.7.235:6789/0 is local, renaming to
mon.cbcbobj00
ceph-mon: set fsid to a602d0c8-5c6e-442c-
A new development release of Ceph is out. Notable changes include:
* osd: monitor both front and back interfaces
* osd: verify both front and back network are working before rejoining
cluster
* osd: fix memory/network inefficiency during deep scrub
* osd: fix incorrect mark-down of osds
On Jun 12, 2013, at 2:51 PM, Yehuda Sadeh wrote:
> On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote:
>> On Jun 12, 2013, at 2:02 PM, Yehuda Sadeh wrote:
>>
>>> On Wed, Jun 12, 2013 at 12:59 PM, John Nielsen wrote:
After updating to Cuttlefish I was able to set up two rados gateways us
A large restructuring of the 'ceph' command-line tool has been pushed to
the master branch (and will be present in v0.65 as well). The ceph tool
you execute is now a Python script that talks to the cluster through
rados.py, the Python binding to librados.so (and, of course, then, with
librados
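For illustration, here is a minimal sketch of what talking to the cluster
through the rados.py binding can look like (the conffile path, the 'data'
pool, and the object name are assumptions for the example, not taken from
the announcement):

import rados

# Connect using a local ceph.conf and the default admin keyring (assumed paths).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
print(cluster.get_fsid())        # cluster id, as 'ceph fsid' would report
print(cluster.list_pools())      # pool names, as 'ceph osd lspools' would report

# Object I/O goes through an ioctx bound to a pool ('data' is just an example).
ioctx = cluster.open_ioctx('data')
ioctx.write_full('hello-object', b'hello world')
print(ioctx.read('hello-object'))
ioctx.close()
cluster.shutdown()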
On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote:
> On Jun 12, 2013, at 2:02 PM, Yehuda Sadeh wrote:
>
>> On Wed, Jun 12, 2013 at 12:59 PM, John Nielsen wrote:
>>> After updating to Cuttlefish I was able to set up two rados gateways using
>>> distinct pools and users. (Thanks Yehuda!) Now I'
On Jun 12, 2013, at 2:02 PM, Yehuda Sadeh wrote:
> On Wed, Jun 12, 2013 at 12:59 PM, John Nielsen wrote:
>> After updating to Cuttlefish I was able to set up two rados gateways using
>> distinct pools and users. (Thanks Yehuda!) Now I'd like to make it so the
>> user for each gateway can only
After updating to Cuttlefish I was able to set up two rados gateways using
distinct pools and users. (Thanks Yehuda!) Now I'd like to make it so the user
for each gateway can only access its own pools and nothing else. The reasons
include security and preventing foot-shooting.
Instead of simply
On Wed, Jun 12, 2013 at 12:59 PM, John Nielsen wrote:
> After updating to Cuttlefish I was able to set up two rados gateways using
> distinct pools and users. (Thanks Yehuda!) Now I'd like to make it so the
> user for each gateway can only access its own pools and nothing else. The
> reasons in
Thanks Greg,
I am starting to understand it better.
After doing some searching, I soon realized that I had hit this bug:
http://tracker.ceph.com/issues/5194
which caused the problem upon rebooting.
Thank You,
Scottix
On Wed, Jun 12, 2013 at 10:29 AM, Gregory Farnum wrote:
> On Wed, Jun 12, 2013
On Wed, Jun 12, 2013 at 9:40 AM, Scottix wrote:
> Hi John,
> That makes sense it affects the ceph cluster map, but it actually does a
> little more like partitioning drives and setting up other parameters and
> even starts the service. So the part I see is a little confusing is that I
> have to co
Hi John,
That makes sense that it affects the ceph cluster map, but it actually does a
little more, like partitioning drives, setting up other parameters, and even
starting the service. The part I find a little confusing is that I have to
configure the ceph.conf file on top of using ceph-deploy, so i
Hi Markus,
What version of the kernel are you using on the client? There is an
annoying compatibility issue with older glibc that makes representing
large values for statfs(2) (df) difficult. We switched this behavior to
hopefully do things the better/"more right" way for the future, but it's
ceph-deploy adds the OSDs to the cluster map. You can add the OSDs to
the ceph.conf manually.
In the ceph.conf file, the settings don't require underscores. If you
modify your configuration at runtime, you need to add the underscores
on the command line.
http://ceph.com/docs/master/rados/configur
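As a concrete illustration of that distinction (the setting and OSD id are
made up for the example, not taken from the thread): a value written in
ceph.conf as

[osd]
osd recovery max active = 1

would be changed at runtime using underscores on the command line, e.g.

ceph tell osd.0 injectargs '--osd_recovery_max_active 1'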
Actually no. I'll write up an API doc for you soon.
sudo apt-get install python-ceph
import rados
You can view the code by cloning the git repository.
http://ceph.com/docs/master/install/clone-source/
The source is in src/pybind/rados.py.
See http://ceph.com/docs/master/rbd/librbdpy/
The fir
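For reference, a minimal sketch along the lines of the librbdpy documentation
linked above (the pool and image names are illustrative assumptions):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')            # assumes the default 'rbd' pool exists

# Create a 1 GiB image, then write to it through an Image handle.
rbd.RBD().create(ioctx, 'testimage', 1024 ** 3)
image = rbd.Image(ioctx, 'testimage')
image.write(b'hello', 0)                     # data, byte offset
image.close()

ioctx.close()
cluster.shutdown()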
Hi,
while testing a setup with ceph and Radosgw for storing document files, we
encountered some problems.
We upload files to RadosGW (tested both with S3 and Swift) with a little Perl
script to see how the
cluster behaves, putting 100k+ files of different sizes (1 KB to 1 MB) into the
cluster (in par
Hi,
this is cuttlefish 0.63 on Ubuntu 13.04; the underlying OSD filesystem is
btrfs, 3 servers, each of them 20 TB (RAID6 array).
When I mount at the client (or at one of the servers), the mounted
filesystem is only 240 GB, but it should be 60 TB.
root@bd-0:~# cat /etc/ceph/ceph.conf
[global]
fsid = e0dbf70d-
Is using S3/Swift emulation the only way to access the object store with Python?
On 06/11/2013 08:32 PM, John Wilkins wrote:
> Here are the libraries for the Ceph Object Store.
>
> http://ceph.com/docs/master/radosgw/s3/python/
> http://ceph.com/docs/master/radosgw/swift/python/
>
>