On Thu, 2016-10-27 at 15:47 +0200, mj wrote:
> Hi Jelle,
>
> On 10/27/2016 03:04 PM, Jelle de Jong wrote:
> > Hello everybody,
> >
> > I want to upgrade my small ceph cluster to 10Gbit networking and would
> > like some recommendations based on your experience.
> >
> > What is your recommended budget 10Gb
Hi Prabu,
On Thu, 15 Dec 2016 13:11:50 +0530, gjprabu wrote:
> We are using ceph version 10.2.4 (Jewel) and our data is mounted
> with the CephFS file system on Linux. We are trying to set quotas for
> directories and files, but it didn't work with the document below. I have
> set 100 MB for the dir
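For reference, CephFS quotas are set through virtual extended attributes, and in Jewel they are enforced only by ceph-fuse/libcephfs clients, not by the kernel client. A minimal sketch, with an illustrative mount path:

# set a 100 MB quota on a directory (path is illustrative)
setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/dir
# read it back
getfattr -n ceph.quota.max_bytes /mnt/cephfs/dir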
Hi Björn,
I think he uses something like this:
http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
Udo
On 2016-12-15 11:10, Bjoern Laessig wrote:
On Thu, 2016-10-27 at 15:47 +0200, mj wrote:
Hi Jelle,
On 10/27/2016 03:04 PM, Jelle de Jong wrote:
> Hello everybody,
>
> I want to up
Moving this to ceph-users where it can get some eyeballs.
On Dec 15, 2016 1:46 AM, "杨维云" wrote:
>
> hi,
>
>
> We know the image can't share data across different client hosts, so
> if the client host is down or crashes, how can I recover the data from the
> image which was mapped to the crashed host?
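For context, RBD image data lives in the cluster, not on the client, so after a client crash the image can usually be mapped again from another host once any stale lock is cleared. A hedged sketch, with illustrative pool/image names:

# check for a stale lock left by the crashed host
rbd lock list rbd/myimage
# remove it, using the lock id and locker shown by the previous command
rbd lock remove rbd/myimage <lock-id> <locker>
# map and mount the image on a healthy host
rbd map rbd/myimage
mount /dev/rbd0 /mnt/recovered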
Hi everyone,
Yesterday scrubbing turned up an inconsistency in one of our placement
groups. We are running ceph 10.2.3, using CephFS and RBD for some VM
images.
[root@hyperv017 ~]# ceph -s
cluster d7b33135-0940-4e48-8aa6-1d2026597c2f
health HEALTH_ERR
1 pgs inconsistent
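The usual triage for an inconsistent PG on Jewel, sketched with a placeholder PG id (2.17):

# find which PG is inconsistent
ceph health detail
# inspect which object copies disagree (available since Jewel)
rados list-inconsistent-obj 2.17 --format=json-pretty
# once satisfied the primary holds good data, trigger a repair
ceph pg repair 2.17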
On Wed, 2016-12-14 at 18:01 +0100, Ilya Dryomov wrote:
> On Wed, Dec 14, 2016 at 5:10 PM, Bjoern Laessig
> wrote:
> > I triggered a kernel bug in the ceph-krbd code
> > * http://www.spinics.net/lists/ceph-devel/msg33802.html
>
> The fix is ready and is set to be merged into 4.10-rc1.
>
> How of
Hi,
On a Ceph cluster running Jewel 10.2.5 I'm running into a problem.
I want to change the amount of shards:
# radosgw-admin zonegroup-map get > zonegroup.json
# nano zonegroup.json
# radosgw-admin zonegroup-map set --infile zonegroup.json
# radosgw-admin period update --commit
Now, the error
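For context, the shard setting being edited here is normally the bucket_index_max_shards field carried per zone inside the zonegroup map (worth verifying against your own dump); a sketch of the relevant fragment of zonegroup.json, with an illustrative zone id:

"zones": [
    {
        "id": "<zone-id>",
        "name": "default",
        "bucket_index_max_shards": 16
    }
],

Note this only affects buckets created after the change; existing bucket indexes keep their shard count.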
On Thu, 2016-12-15 at 14:31 +0100, ulem...@polarzone.de wrote:
> Hi Björn,
> I think he uses something like this:
> http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
I looked into it. This was the first idea for a config that I threw away.
Imagine you have 3 systems: A, B and C.
Your cable betwe
> Op 7 november 2016 om 13:17 schreef Wido den Hollander :
>
>
>
> > Op 4 november 2016 om 2:05 schreef Joao Eduardo Luis :
> >
> >
> > On 11/03/2016 06:18 PM, w...@42on.com wrote:
> > >
> > >> Personally, I don't like this solution one bit, but I can't see any
> > >> other way without a pat
Hi Wido,
This looks like you are hitting http://tracker.ceph.com/issues/17364
The fix is being backported to jewel: https://github.com/ceph/ceph/pull/12315
A workaround:
save the realm, zonegroup and zone JSON files
make a copy of .rgw.root (the pool containing the multisite config)
remove .rgw.root
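A hedged sketch of those steps as commands (backup pool name illustrative):

# save the multisite config as JSON
radosgw-admin realm get > realm.json
radosgw-admin zonegroup get > zonegroup.json
radosgw-admin zone get > zone.json
# keep a copy of .rgw.root before removing it
rados cppool .rgw.root .rgw.root.backup
ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it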
On Thu, Dec 15, 2016 at 4:31 PM, Bjoern Laessig
wrote:
> On Wed, 2016-12-14 at 18:01 +0100, Ilya Dryomov wrote:
>> On Wed, Dec 14, 2016 at 5:10 PM, Bjoern Laessig
>> wrote:
>> > I triggered a kernel bug in the ceph-krbd code
>> > * http://www.spinics.net/lists/ceph-devel/msg33802.html
>>
>> The
Hello,
I didn't look at your video, but I can already give you some leads:
1 - there is a bug in 10.2.2 which makes the client cache not work. The
client cache behaves as if it never received a flush, so it stays in
writethrough mode. This bug is fixed in 10.2.3.
2 - 2 SSDs in JBOD and 12 x 4TB
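A quick way to check which cache settings a librbd client actually ended up with, assuming an admin socket is enabled for the client (socket path illustrative):

# inspect the running client's cache configuration via its admin socket
ceph daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache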
Hi John...
Regarding logs, we still do not have them available. We just realized
that ceph-fuse tries to log to /var/log/ceph, which in our case didn't
exist on the clients. So we had to create that directory everywhere,
and we are in the process of remounting every client so that they start
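For anyone hitting the same thing: instead of pre-creating /var/log/ceph on every client, the log destination can also be redirected in the client-side ceph.conf; a sketch, with an illustrative path:

[client]
    log file = /tmp/ceph-fuse.$pid.log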
Hello Team,
Can I get any info on this query, please?
Thanks
On Thu, Dec 15, 2016 at 7:15 PM, Jayaram Radhakrishnan <
jayaram161...@gmail.com> wrote:
> Hello Team,
>
> Is there any way to disable the warning messages printed in the
> "ceph -s" output?
>
> ~~~
>
> WARNING: the following dangerous and
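Assuming the truncated text is the "dangerous and experimental features" banner, it is printed because the corresponding option is set somewhere in the configuration; a sketch of how to track it down (daemon name illustrative):

# check the running daemon
ceph daemon mon.a config get enable_experimental_unrecoverable_data_corrupting_features
# or look for it in the config files
grep -ri "enable experimental" /etc/ceph/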
Hi David,
Thanks for your mail. We are currently using the Linux kernel CephFS
client. Is it possible to use ceph-fuse without disturbing the current setup?
Regards
Prabu GJ
On Thu, 15 Dec 2016 15:55:12 +0530 David Disseldorp
wrote
Hi Prabu,
On Thu, 15 D
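For what it's worth, ceph-fuse can be tried on a second mount point without touching the existing kernel mount; a minimal sketch with illustrative paths and monitor address:

mkdir -p /mnt/cephfs-fuse
ceph-fuse -m mon1.example.com:6789 /mnt/cephfs-fuse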
Hi David,
Now we have mounted the client using ceph-fuse, and it is still allowing
me to put data above the limit (100 MB). Below are the quota details.
getfattr -n ceph.quota.max_bytes test
# file: test
ceph.quota.max_bytes="1"
ceph-fuse fuse.ceph-fuse 5.3T 485G 4.8T 10%
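One way to sanity-check enforcement once the attribute is set; paths and values here are illustrative. Note that CephFS quota enforcement is cooperative and approximate, so a writer may get somewhat past the limit before receiving EDQUOT:

# set a 100 MB quota, then try to exceed it through the ceph-fuse mount
setfattr -n ceph.quota.max_bytes -v 104857600 test
dd if=/dev/zero of=test/bigfile bs=1M count=200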