Dear Daniel,
I think it's a bug, because if I have a big file with a 15-minute expiry,
I can't finish downloading it.
2017-02-06 15:12 GMT+07:00 Khang Nguyễn Nhật :
> Dear Daniel,
> I think it's a bug, because if I have a big file with a 15-minute
> expiry, I can't finish downloading it.
>
> 2
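If the 15-minute limit here comes from a presigned S3 URL against RGW (an assumption, the snippet does not say), the expiry is chosen when the URL is signed, so a longer window can be requested for large objects. A rough sketch with the AWS CLI, where the endpoint, bucket and object names are placeholders:

aws s3 presign s3://my-bucket/big-file.iso --expires-in 86400 \
    --endpoint-url http://rgw.example.com:7480

This only changes how long the signed URL stays valid; whether it addresses the behaviour reported above depends on where the 15-minute limit actually comes from.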
> On 6 February 2017 at 11:10, Florent B wrote:
>
>
> # ceph -v
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>
> (official Ceph packages for Jessie)
>
>
> Yes, I recently adjusted pg_num, but all objects were correctly rebalanced.
>
> Then I manually deleted some objects
Hi everyone,
I have now been running our two-node mini-cluster for some months. An OSD,
an MDS and a monitor run on both nodes. Additionally there is a very small
third node which only runs a third monitor, but no MDS/OSD. On both
main servers, CephFS is mounted via fstab using the kernel driver. The mounte
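For reference, a kernel-driver CephFS mount via fstab usually looks like the line below; the monitor addresses, secret file path and mount point are placeholders, not taken from this setup:

mon1:6789,mon2:6789,mon3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0

The _netdev option just makes the init system wait for the network before attempting the mount.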
How about *pve01-rbd01*?
* rados -p pve01-rbd01 ls | wc -l
?
On Mon, Feb 6, 2017 at 9:40 PM, Florent B wrote:
> On 02/06/2017 11:12 AM, Wido den Hollander wrote:
>>> On 6 February 2017 at 11:10, Florent B wrote:
>>>
>>>
>>> # ceph -v
>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a0
Hello everyone,
I got an error message when executing the following command line:
# rbd create backup/one-71-242-2 --object-size 32M --stripe-unit 4194304
--stripe-count 8 -s 1 # create an empty image
# rbd export-diff --no-progress --whole-object
one_ssd/one-71-242-2@backup-20170206-0829
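The command is cut off here; assuming it was the usual export-diff piped into import-diff (the pipe itself is the assumption, the image and snapshot names are the ones above), the full pipeline would look roughly like this:

rbd export-diff --no-progress --whole-object \
    one_ssd/one-71-242-2@backup-20170206-0829 - \
    | rbd import-diff - backup/one-71-242-2

On later runs, adding --from-snap <previous-snapshot> to export-diff limits the diff to changes made since the last backup.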
It's worth filing an issue at http://tracker.ceph.com/, as it looks like there are
2 different issues and they should be easy to recreate.
On Mon, Feb 6, 2017 at 9:01 AM, Florent B wrote:
> On 02/06/2017 05:49 PM, Shinobu Kinjo wrote:
> > How about *pve01-rbd01*?
> >
> > * rados -p pve01-rbd01 ls | wc -l
> >
I've not been able to reproduce the issue with exactly the same version as
your cluster.
./ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
./rados df | grep cephfs
cephfs_data_a 409600 10000000 10
So the cluster has been dead and down since around 8/10/2016. I have since
rebooted the cluster in order to try and use the new ceph-monstore-tool
rebuild functionality.
I built the Debian packages of the recently backported hammer tools and
installed them across all of the servers:
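For context, the rebuild procedure those tools enable is roughly the one documented upstream for recovering a mon store from the OSDs; a rough outline with placeholder paths, not a tested hammer recipe:

# on each OSD host, for every OSD, accumulate the mon data into a temporary store
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --op update-mon-db --mon-store-path /tmp/mon-store

# rebuild the monitor store from the accumulated data
ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring

# back up the old store.db on the monitor and swap in the rebuilt one before restarting

The exact flags may differ in the hammer backport, so treat this as a sketch rather than the procedure actually used here.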
This is what I'm doing on my CentOS 7/KVM/libvirt server:
rbd create --size 20G pool/vm.mydomain.com
rbd map pool/vm.mydomain.com --name client.admin
virt-install --name vm.mydomain.com --ram 2048 --disk
path=/dev/rbd/pool/vm.mydomain.com --vcpus 1 --os-type linux --os-variant
rhel6 --networ
1) Every once in a while, some processes (PHP) accessing the filesystem
get stuck in a D-state (uninterruptible sleep). I wonder whether this happens
due to network fluctuations (both servers are connected via a simple
Gigabit crosslink cable) and how to diagnose it. Why exactly does this
happen in the
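A common way to see where such D-state tasks are blocked with a kernel CephFS client is to dump the blocked-task stacks and the client's in-flight requests; the PID below is a placeholder:

# kernel stacks of all blocked (D-state) tasks, readable afterwards in dmesg
echo w > /proc/sysrq-trigger
dmesg | tail -n 100

# stack of one specific stuck PHP process
cat /proc/<pid>/stack

# in-flight MDS and OSD requests of the kernel client (needs debugfs mounted)
cat /sys/kernel/debug/ceph/*/mdsc
cat /sys/kernel/debug/ceph/*/osdc

If requests keep piling up in mdsc/osdc, the hang is on the Ceph side; if both stay empty, the network link is the more likely suspect.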