On Mon, May 9, 2016 at 8:08 PM, Yan, Zheng wrote:
> On Tue, May 10, 2016 at 2:10 AM, Eric Eastman wrote:
>> On Mon, May 9, 2016 at 10:36 AM, Gregory Farnum wrote:
>>> On Sat, May 7, 2016 at 9:53 PM, Eric Eastman wrote:
On Fri, May 6, 2016 at 2:14 PM, Eric Eastman wrote:
>>
Hi members,
We have 21 hosts for Ceph OSD servers; each host has 12 SATA disks (4 TB
each) and 64 GB of memory.
ceph version 10.2.0, Ubuntu 16.04 LTS
The whole cluster is newly installed.
Can you help check whether the arguments we put in ceph.conf are reasonable?
Thanks.
[osd]
osd_data = /var/lib/ceph
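For comparison, a minimal Jewel-era [osd] block for hosts of this size might
look like the sketch below; the values are illustrative assumptions, not
settings taken from this thread:

  [osd]
  # default data path; ceph-disk/ceph-deploy create one directory per OSD here
  osd_data = /var/lib/ceph/osd/$cluster-$id
  # journal size in MB -- example value only
  osd_journal_size = 10240
  osd_mkfs_type = xfs
  osd_mount_options_xfs = noatime,inode64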
On Tue, May 10, 2016 at 2:10 AM, Eric Eastman wrote:
> On Mon, May 9, 2016 at 10:36 AM, Gregory Farnum wrote:
>> On Sat, May 7, 2016 at 9:53 PM, Eric Eastman wrote:
>>> On Fri, May 6, 2016 at 2:14 PM, Eric Eastman wrote:
>>>
>
>>>
>>> A simple test of setting an ACL from the command line
On Mon, May 9, 2016 at 3:28 PM, Nick Fisk wrote:
> Hi Eric,
>
>>
>> I am trying to do some similar testing with SAMBA and CTDB with the Ceph
>> file system. Are you using the vfs_ceph SAMBA module or are you kernel
>> mounting the Ceph file system?
>
> I'm using the kernel client. I couldn't find
Hi Eric,
> -Original Message-
> From: Eric Eastman [mailto:eric.east...@keepertech.com]
> Sent: 09 May 2016 19:21
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] CephFS + CTDB/Samba - MDS session timeout on
> lockfile
>
> I am trying to do some similar testing with SAMBA an
> -Original Message-
> From: Ira Cooper [mailto:icoo...@redhat.com]
> Sent: 09 May 2016 17:31
> To: Sage Weil
> Cc: Nick Fisk ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] CephFS + CTDB/Samba - MDS session timeout on
> lockfile
> - Original Message -
> > On Mon, 9 May 201
I am trying to do some similar testing with SAMBA and CTDB with the
Ceph file system. Are you using the vfs_ceph SAMBA module or are you
kernel mounting the Ceph file system?
Thanks
Eric
On Mon, May 9, 2016 at 9:31 AM, Nick Fisk wrote:
> Hi All,
>
> I've been testing an active/active Samba clus
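For anyone weighing the two approaches: with the vfs_ceph module the share
talks to CephFS through libcephfs, so no kernel mount is needed. A minimal
smb.conf stanza might look roughly like this (the share name, cephx user, and
paths are assumptions):

  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      read only = no
      kernel share modes = no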
On Mon, May 9, 2016 at 10:36 AM, Gregory Farnum wrote:
> On Sat, May 7, 2016 at 9:53 PM, Eric Eastman wrote:
>> On Fri, May 6, 2016 at 2:14 PM, Eric Eastman wrote:
>>
>>
>> A simple test of setting an ACL from the command line to a fuse
>> mounted Ceph file system also fails:
>> # mkdir /c
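For reference, the command-line ACL test described above can be reproduced
along these lines (mount point and user are made up); with ceph-fuse, setfacl
is expected to fail unless the client is configured for POSIX ACLs (e.g.
client_acl_type = posix_acl), which may be the failure being hit here:

  # mkdir /cephfs/acltest
  # setfacl -m u:testuser:rwx /cephfs/acltest
  # getfacl /cephfs/acltest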
The systems on which the `rbd map` hangs problem occurred are
definitely not under memory stress. I don't believe they
are doing a lot of disk I/O either. Here's the basic set-up:
* all nodes in the "data-plane" are identical
* they each host an OSD instance, sharing one of the drives
* I'm runni
On Sat, May 7, 2016 at 9:53 PM, Eric Eastman wrote:
> On Fri, May 6, 2016 at 2:14 PM, Eric Eastman wrote:
>
>> As it should be working, I will increase the logging level in my
>> smb.conf file and see what info I can get out of the logs, and report back.
>
> Setting the log level = 20 in my smb
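For anyone repeating this, bumping the Samba debug output is a small change in
the [global] section of smb.conf (the log path below is an assumption):

  [global]
      log level = 20
      log file = /var/log/samba/log.%m
      max log size = 0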
On Mon, May 9, 2016 at 8:48 AM, Sage Weil wrote:
> On Mon, 9 May 2016, Nick Fisk wrote:
>> Hi All,
>>
>> I've been testing an active/active Samba cluster over CephFS, performance
>> seems really good with small files compared to Gluster. Soft reboots work
>> beautifully with little to no interrupt
On Mon, 9 May 2016, Nick Fisk wrote:
> Hi All,
>
> I've been testing an active/active Samba cluster over CephFS, performance
> seems really good with small files compared to Gluster. Soft reboots work
> beautifully with little to no interruption in file access. However when I
> perform a hard shut
Hi All,
I've been testing an active/active Samba cluster over CephFS, performance
seems really good with small files compared to Gluster. Soft reboots work
beautifully with little to no interruption in file access. However when I
perform a hard shutdown/reboot of one of the samba nodes, the remain
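For context, in a setup like this the CTDB recovery lock referenced in the
subject line typically lives on the shared CephFS mount, which is why the
surviving node's behaviour depends on the dead node's MDS session timing out.
A pre-4.9-style /etc/default/ctdb (or /etc/sysconfig/ctdb) sketch, with
assumed paths:

  CTDB_RECOVERY_LOCK=/mnt/cephfs/ctdb/.ctdb.lock
  CTDB_NODES=/etc/ctdb/nodes
  CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
  CTDB_MANAGES_SAMBA=yes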
Hi,
I'm not running a cluster like yours, but I don't think the issue is caused by
you using the two APIs at the same time.
IIRC the dash suffix is appended by S3 multipart upload, with a following digit
indicating the number of parts.
You may want to check this report in the s3cmd community:
https://sourcef
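To illustrate the "dash thing": a multipart-uploaded object gets an ETag of the
form <md5-of-the-part-md5s>-<part count>, so it no longer matches the plain MD5
of the file. A quick way to reproduce with s3cmd (bucket and file names are
made up):

  s3cmd put --multipart-chunk-size-mb=15 bigfile s3://mybucket/bigfile
  s3cmd info s3://mybucket/bigfile
  # the reported MD5/ETag ends in "-<n>", e.g. 9b2cf535f27731c974343645a3985328-7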
On Mon, May 9, 2016 at 12:19 AM, K.C. Wong wrote:
>
>> As the tip said, you should not use rbd via kernel module on an OSD host
>>
>> However, using it with userspace code (librbd etc, as in kvm) is fine
>>
>> Generally, you should not have both:
>> - "server" in userspace
>> - "client" in kernels
Hi, Calvin!
Actually it's an OpenStack question rather than a Ceph one, but you can use the
"cinder quota-show" and "cinder quota-update" commands to show and set a
tenant's quota.
You can look at this question/answer about setting volume quotas for
certain volume types; it's kind of relevant to your
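A minimal example of the commands mentioned (tenant ID and numbers are
placeholders; per-volume-type limits show up as type-suffixed keys in the
quota-show output, depending on the release):

  cinder quota-show <tenant_id>
  cinder quota-update --volumes 20 --gigabytes 2000 <tenant_id>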
I'll try to simplify the question to get some feedback.
Is anyone running the RadosGW in production with the S3 and Swift APIs
active at the same time?
Thank you!
Saverio
2016-05-06 11:39 GMT+02:00 Saverio Proto:
> Hello,
>
> We have been running the Rados GW with the S3 API and we did not have
> p
Thanks for your comments. Answers inline.
On 05/09/16 09:53, Christian Balzer wrote:
> Hello,
> On Mon, 9 May 2016 09:31:20 +0200 Ronny Aasen wrote:
>> hello
>> I am running a small lab ceph cluster consisting of 6 old used servers.
> That's larger than quite a few production deployments. ^_-
:)
Hello,
On Mon, 9 May 2016 09:31:20 +0200 Ronny Aasen wrote:
> hello
>
> I am running a small lab ceph cluster consisting of 6 old used servers.
That's larger than quite a few production deployments. ^_-
> they have 36 slots for drives. but too little ram, 32GB max, for this
> mainboard, to
Hello,
I am running a small lab Ceph cluster consisting of 6 old used servers.
They have 36 slots for drives, but too little RAM (32 GB max for this
mainboard) to take advantage of them all. When I get to around 20 OSDs
on a node the OOM killer becomes a problem, if there are incidents that
re
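A common Jewel-era mitigation for RAM-starved OSD nodes (a sketch only; the
exact values are assumptions that need testing, not advice taken from this
thread) is to shrink the OSD map caches in ceph.conf:

  [osd]
  osd_map_cache_size = 50
  osd_map_max_advance = 25
  osd_map_share_max_epochs = 25
  osd_pg_epoch_persisted_max_stale = 25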