On 2014-09-18 10:38, Luke Jing Yuan wrote:
Hi,
From the ones we managed to configure in our lab here, I noticed that using the image
format "raw" instead of "qcow2" worked for us.
Regards,
Luke
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Hi Steven,
We ran into issues when trying to use a non-default ceph user in
OpenNebula (I don't remember what the default was, but it's probably not
libvirt2). Patches are in https://github.com/OpenNebula/one/pull/33, and the
devs sort of confirmed they will be in 4.8.1. This way you can set
CEPH_
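For reference, a rough sketch of what we ended up doing for the non-default user; the user name client.oneadmin, pool name one, and the secret.xml file are just examples, not necessarily what the patch expects:
# create a dedicated cephx user for OpenNebula (names are placeholders)
ceph auth get-or-create client.oneadmin mon 'allow r' osd 'allow rwx pool=one' \
    -o /etc/ceph/ceph.client.oneadmin.keyring
# register its key as a libvirt secret so qemu can authenticate with it
virsh secret-define secret.xml
virsh secret-set-value --secret <uuid-from-secret.xml> --base64 "$(ceph auth get-key client.oneadmin)"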
Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
We are able to get ~18K IOPS for 4K random read on a single volume with fio
(with the rbd engine) on a 12x DC3700 setup, but only able to get ~23K (peak) IOPS
even with multiple volumes.
It seems the maximum random write performan
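For comparison, this is roughly the fio invocation we use against a single volume, assuming fio was built with the rbd engine; the cluster user, pool, and image names are placeholders:
# 4K random read directly against an RBD image via fio's rbd engine
fio --name=randread --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
    --rw=randread --bs=4k --iodepth=32 --direct=1 --runtime=60 --time_based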
Dear all,
My Ceph cluster worked for about two weeks, but the MDS crashed every 2-3 days.
Now it is stuck in replay; it looks like replay crashes and the MDS process restarts again.
What can I do about this?
1015 => # ceph -s
cluster 07df7765-c2e7-44de-9bb3-0b13f6517b18
health HEALTH_ERR 56 pgs inconsistent; 56 scru
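For what it's worth, this is roughly what I run to check the MDS state and to get more detail out of the next replay attempt (the mds id below is assumed to be 0):
ceph mds stat
ceph mds dump
# raise MDS logging so the next replay crash shows up in the log
ceph tell mds.0 injectargs '--debug-mds 20 --debug-journaler 10'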
Hi John,
I specified the name and then I got this error.
#radosgw-admin pools list -n client.radosgw.in-west-1
could not list placement set: (2) No such file or directory
Regards,
Santhosh
On Thu, Sep 18, 2014 at 3:44 AM, John Wilkins
wrote:
> Does radosgw-admin have authentication keys available a
Hi,
From the ones we managed to configure in our lab here, I noticed that using the
image format "raw" instead of "qcow2" worked for us.
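If it helps, roughly how we moved an existing qcow2 image into a raw RBD image; the pool and image names below are just placeholders:
# convert the qcow2 file directly into a raw RBD image
qemu-img convert -f qcow2 -O raw /var/tmp/vm-disk.qcow2 rbd:one/vm-disk
# sanity check that the image is there
rbd -p one info vm-disk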
Regards,
Luke
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steven
Timm
Sent: Thursday, 18 September, 201
Does radosgw-admin have authentication keys available, and with
appropriate permissions?
http://ceph.com/docs/master/radosgw/config/#create-a-user-and-keyring
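For example, to confirm the key exists and that radosgw-admin can actually find it (the user name below is assumed from the rest of the thread; the keyring path is just the usual default):
# does the cluster know about this user at all, and with what caps?
ceph auth get client.radosgw.in-west-1
# point radosgw-admin at the keyring explicitly to rule out a lookup problem
radosgw-admin pools list -n client.radosgw.in-west-1 \
    --keyring /etc/ceph/ceph.client.radosgw.keyring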
On Fri, Sep 12, 2014 at 3:13 AM, Santhosh Fernandes
wrote:
> Hi,
>
> Can anyone help me with why my radosgw-admin pools list gives me this error
>
>
Subhadip,
I updated the master branch of the preflight docs here:
http://ceph.com/docs/master/start/ We did encounter some issues that
were resolved with those preflight steps.
I think it might be either requiretty or SELinux. I will keep you
posted. Let me know if it helps.
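For example, the two things I'd try on each node (exact steps vary by distro; the sed line just comments out the requiretty default):
# allow ceph-deploy's remote sudo calls without a tty
sed -i 's/^Defaults.*requiretty/#&/' /etc/sudoers
# rule out SELinux temporarily while installing
setenforce 0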
On Wed, Sep 17, 201
I am trying to use Ceph as a data store with OpenNebula 4.6 and
have followed the instructions in OpenNebula's documentation
at
http://docs.opennebula.org/4.8/administration/storage/ceph_ds.html
and compared them against the "Using libvirt with Ceph" guide at
http://ceph.com/docs/master/rbd/libvirt/
We
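A quick way to confirm that the hypervisor's qemu was actually built with rbd support (the binary name and paths are the usual defaults and may differ):
# rbd should show up in qemu-img's supported formats
qemu-img --help | grep rbd
# and the emulator should link against librbd
ldd "$(which qemu-system-x86_64)" | grep librbd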
Hi,
any suggestions ?
Regards,
Subhadip
---
On Wed, Sep 17, 2014 at 9:05 AM, Subhadip Bagui wrote:
> Hi
>
> I'm getting the below error while installing Ceph on the admin node. Please
>
Hey everyone! We just posted the agenda for next week’s Ceph Day in San Jose:
http://ceph.com/cephdays/san-jose/
This Ceph Day will be held in a beautiful facility provided by our friends at
Brocade. We have a lot of great speakers from Brocade, Red Hat, Dell, Fujitsu,
HGST, and Supermicro,
That looks like the beginning of an mds creation to me. What's your
problem in more detail, and what's the output of "ceph -s"?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Sep 15, 2014 at 5:34 PM, Shun-Fa Yang wrote:
> Hi all,
>
> I installed ceph v0.80.5 on Ubu
Hi,
Now I feel dumb for jumping to the conclusion that it was a simple
networking issue - it isn't.
I've just checked connectivity properly and I can ping and telnet to port 6789
from every mon server to every other mon server.
I've just restarted the mon03 service and the log is showing the following:
20
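For reference, the quorum state as seen from each monitor can be dumped like this (the mon id in the socket path is assumed to match the host name):
# ask the local monitor directly for its view of the quorum
ceph --admin-daemon /var/run/ceph/ceph-mon.mon03.asok mon_status
# and the cluster-wide view
ceph quorum_status --format json-pretty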
On Wed, Sep 17, 2014 at 5:21 PM, James Eckersall
wrote:
> Hi,
>
> Thanks for the advice.
>
> I feel pretty dumb as it does indeed look like a simple networking issue.
> You know how you check things 5 times and miss the most obvious one...
>
> J
No worries at all. :)
Cheers,
Florian
On Wed, Sep 17, 2014 at 5:42 PM, Dan Van Der Ster
wrote:
> From: Florian Haas
> Sent: Sep 17, 2014 5:33 PM
> To: Dan Van Der Ster
> Cc: Craig Lewis; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] RGW hung, 2 OSDs using 100% CPU
>
> On Wed, Sep 17, 2014 at 5:24 PM, Dan Van Der Ster
> wrote
Hi,
(Sorry for top posting, mobile now).
That's exactly what I observe -- one sleep per PG. The problem is that the
sleep can't simply be moved since AFAICT the whole PG is locked for the
duration of the trimmer. So the options I proposed are to limit the number of
snaps trimmed per call to e.g
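(For reference, the sleep in question is presumably the osd_snap_trim_sleep option, which can be changed at runtime; the value here is only an example:)
ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.05'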
On Wed, Sep 17, 2014 at 5:24 PM, Dan Van Der Ster
wrote:
> Hi Florian,
>
>> On 17 Sep 2014, at 17:09, Florian Haas wrote:
>>
>> Hi Craig,
>>
>> just dug this up in the list archives.
>>
>> On Fri, Mar 28, 2014 at 2:04 AM, Craig Lewis
>> wrote:
>>> In the interest of removing variables, I remove
Hi Florian,
> On 17 Sep 2014, at 17:09, Florian Haas wrote:
>
> Hi Craig,
>
> just dug this up in the list archives.
>
> On Fri, Mar 28, 2014 at 2:04 AM, Craig Lewis
> wrote:
>> In the interest of removing variables, I removed all snapshots on all pools,
>> then restarted all ceph daemons at
Hi,
Thanks for the advice.
I feel pretty dumb as it does indeed look like a simple networking issue.
You know how you check things 5 times and miss the most obvious one...
J
On 17 September 2014 16:04, Florian Haas wrote:
> On Wed, Sep 17, 2014 at 1:58 PM, James Eckersall
> wrote:
> > Hi,
>
Hi Craig,
just dug this up in the list archives.
On Fri, Mar 28, 2014 at 2:04 AM, Craig Lewis wrote:
> In the interest of removing variables, I removed all snapshots on all pools,
> then restarted all ceph daemons at the same time. This brought up osd.8 as
> well.
So just to summarize this: yo
On Wed, Sep 17, 2014 at 1:58 PM, James Eckersall
wrote:
> Hi,
>
> I have a ceph cluster running 0.80.1 on Ubuntu 14.04. I have 3 monitors and
> 4 OSD nodes currently.
>
> Everything has been running great up until today where I've got an issue
> with the monitors.
> I moved mon03 to a different s
Hi,
I have a ceph cluster running 0.80.1 on Ubuntu 14.04. I have 3 monitors
and 4 OSD nodes currently.
Everything has been running great up until today where I've got an issue
with the monitors.
I moved mon03 to a different switchport so it would have temporarily lost
connectivity.
Since then, t
Thanks John - It did look like it was heading in that direction!
I did wonder if a 'fs map' & 'fs unmap' would be useful too; filesystem
backups, migrations between clusters & async DR could be facilitated by
moving underlying pool objects around between clusters.
Dave
On Wed, Sep 17, 2014 at 1
Thanks, I did check on that too as I'd seen this before and this was
"the usual drill", but alas, no, that wasn't the problem. This cluster
is having other issues too, though, so I probably need to look into
those first.
Cheers,
Florian
On Mon, Sep 15, 2014 at 7:29 PM, Gregory Farnum wrote:
> No
Hi David,
We haven't written any code for the multiple filesystems feature so
far, but the new "fs new"/"fs rm"/"fs ls" management commands were
designed with this in mind -- currently only supporting one
filesystem, but to allow slotting in the multiple filesystems feature
without too much disrup
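For example, the current single-filesystem workflow with those commands looks roughly like this (pool names and PG counts are just placeholders):
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
# and to remove it again (the MDS has to be stopped first)
ceph fs rm cephfs --yes-i-really-mean-it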
Hi all,
Has anyone been successful in replicating data across two zones of a federated
gateway configuration? I am getting a "TypeError: unhashable type: 'list'"
error, and I am not seeing the data part getting replicated.
verbose log :
application/json; charset=UTF-8
Wed, 17 Sep 2014 09:59:22 GMT
/admin/log
20
On 09/17/2014 12:11 PM, David Barker wrote:
> Hi Cephalopods,
>
> Browsing the list archives, I know this has come up before, but I thought
> I'd check in for an update.
>
> I'm in an environment where it would be useful to run a file system per
> department in a single cluster (or at a pinch enf
Hi Cephalopods,
Browsing the list archives, I know this has come up before, but I thought
I'd check in for an update.
I'm in an environment where it would be useful to run a file system per
department in a single cluster (or at a pinch enforcing some client / fs
tree security). Has there been muc
>> The results are with journal and data configured on the same SSD?
Yes.
>> Also, how are you configuring your journal device, is it a block device?
Yes.
ceph-deploy osd create node:sdb
# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands
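(For completeness, if the journal were to go on a separate device or partition, ceph-deploy takes it as an extra argument; the disk names here are placeholders:)
ceph-deploy osd create node:sdb:/dev/sdc1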