I have a system with 7 hosts.
Each host has 1x 1TB NVMe and 2x 2TB SATA SSDs.
The intent was to use this for OpenStack, with Glance images stored on the SSDs,
and Cinder + Nova using a replicated cache-tier pool on NVMe in front of an
erasure-coded pool on the SSDs.
The rationale is that, given the copy-on-write, only t
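For reference, a rough sketch of what that tiering layout looks like in ceph CLI
terms on a Jewel-era cluster. Pool names, PG counts, and the EC profile values are
illustrative, and the CRUSH rules 'ssd-rule' and 'nvme-rule' separating the two
device types are assumed to already exist (pre-Luminous there are no device
classes, so they have to be built by hand):

# erasure-coded base pool on the SSD OSDs
$ ceph osd erasure-code-profile set ec-ssd k=4 m=2
$ ceph osd pool create nova-cinder-ec 128 128 erasure ec-ssd ssd-rule
# replicated cache pool on the NVMe OSDs
$ ceph osd pool create nova-cinder-cache 128 128 replicated nvme-rule
# stack the cache tier on top of the EC pool
$ ceph osd tier add nova-cinder-ec nova-cinder-cache
$ ceph osd tier cache-mode nova-cinder-cache writeback
$ ceph osd tier set-overlay nova-cinder-ec nova-cinder-cache
# give the tiering agent something to work with (sizes illustrative)
$ ceph osd pool set nova-cinder-cache hit_set_type bloom
$ ceph osd pool set nova-cinder-cache target_max_bytes 800000000000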
I am running 10.2.0-0ubuntu0.16.04.1.
I've run into a problem with the cephfs metadata pool. Specifically, I have a pg
with an 'unfound' object.
But I can't figure out which one, since when I run:
ceph pg 12.94 list_unfound
it hangs (as does ceph pg 12.94 query). I know it's in the cephfs metadata
pool since I
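For context, the usual sequence for chasing an unfound object looks roughly like
the commands below; since list_unfound and query hang here, the first thing to
check is the PG's primary OSD, and mark_unfound_lost is strictly a last resort
(osd id below is a placeholder):

# which PGs report unfound objects, and which OSDs pg 12.94 maps to
$ ceph health detail
$ ceph pg map 12.94
# if query/list_unfound hang, restarting the PG's primary OSD often unwedges peering
$ sudo systemctl restart ceph-osd@<primary-osd-id>
# once list_unfound answers, either wait for the object or give it up (last resort)
$ ceph pg 12.94 list_unfound
$ ceph pg 12.94 mark_unfound_lost revert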
I have a 6 OSD system (with 3 mon and 3 mds).
It is running cephfs as part of its task.
I have upgraded the 3 mon nodes to Ubuntu 16.04 and the bundled
ceph 10.1.0-0ubuntu1
(upgraded from Ubuntu 15.10 with ceph 0.94.6-0ubuntu0.15.10.1).
Two of the mon nodes are happy and up. But the 3rd is giving
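The snippet cuts off before the actual error, but as a general sketch the quorum
state and the unhappy monitor's own view can be checked like this (the monitor id
is assumed to be the node's short hostname):

# cluster-wide view of which mons are in quorum
$ ceph -s
$ ceph mon stat
# ask the lagging mon directly via its admin socket
$ sudo ceph daemon mon.$(hostname -s) mon_status
# and check its log for the actual complaint
$ sudo journalctl -u ceph-mon@$(hostname -s)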
On 21 December 2015 at 22:07, Yan, Zheng wrote:
>
> > OK, so I changed the fio engine to 'sync' for the comparison of a single
> > underlying OSD vs the cephfs.
> >
> > The cephfs with sync is ~115 IOPS / ~500 KB/s.
>
> This is normal because you were doing single thread sync IO. If
> round-trip time fo
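A single-threaded sync job of that shape can be reproduced with something like
the line below (directory and sizes are illustrative, not the exact job from the
thread); with --ioengine=sync and --fsync=1 every 4k write pays a full network
round trip, which is what caps it at around 100-odd IOPS:

$ fio --name=sync-4k --directory=/mnt/cephfs --rw=randwrite --bs=4k \
      --size=256m --ioengine=sync --direct=1 --numjobs=1 --fsync=1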
On 20 December 2015 at 22:47, Yan, Zheng wrote:
> >> ---
> >>
>
>
> fio tests AIO performance in this case. cephfs does not handle AIO
> properly; AIO is actually sync IO. That's why cephfs is so slow in
> this case.
>
> Regards
> Yan, Zheng
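To see the effect Zheng describes, the same job can be run with libaio and a deep
queue; on a local disk or an rbd device the queue depth raises IOPS, while on the
cephfs of that era the AIO is effectively serialized, so the numbers barely move.
Again an illustrative sketch, not the thread's original job file:

$ fio --name=aio-4k --directory=/mnt/cephfs --rw=randwrite --bs=4k \
      --size=256m --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1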
On 21 December 2015 at 03:23, Yan, Zheng wrote:
> On Sat, Dec 19, 2015 at 4:34 AM, Don Waterloo
> wrote:
> > I have 3 systems with a cephfs mounted on them.
> > And I am seeing material 'lag'. By 'lag' I mean it hangs for little bits
> > of time
On 20 December 2015 at 19:23, Francois Lafont wrote:
> On 20/12/2015 22:51, Don Waterloo wrote:
>
> > All nodes have 10Gbps to each other
>
> Even the link client node <---> cluster nodes?
>
> > OSD:
> > $ ceph osd tree
> > ID WEIGHT TYPE NAME
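Since single-threaded sync IO is bounded by round-trip latency rather than
bandwidth, the client-to-cluster path is worth measuring directly; a quick sketch
using ping and iperf3 (hostnames are placeholders):

# latency matters more than throughput for sync 4k IO
$ ping -c 20 <osd-or-mon-host>
# raw throughput check: server on a cluster node, client on the cephfs client
$ iperf3 -s                        # on the cluster node
$ iperf3 -c <cluster-node> -t 10   # on the client node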
On 20 December 2015 at 15:06, Francois Lafont wrote:
> Hi,
>
> On 20/12/2015 19:47, Don Waterloo wrote:
>
> > I did a bit more work on this.
> >
> > On cephfs-fuse, I get ~700 IOPS.
> > On cephfs kernel, I get ~120 IOPS.
> > These were both on the 4.3 kernel
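For anyone comparing the two client paths, the mounts themselves look roughly
like this (monitor address and secret file are placeholders):

# FUSE client
$ sudo ceph-fuse -m <mon-host>:6789 /mnt/cephfs-fuse
# kernel client
$ sudo mount -t ceph <mon-host>:6789:/ /mnt/cephfs-kernel \
       -o name=admin,secretfile=/etc/ceph/admin.secret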
On 20 December 2015 at 08:35, Francois Lafont wrote:
> Hello,
>
> On 18/12/2015 23:26, Don Waterloo wrote:
>
> > rbd -p mypool create speed-test-image --size 1000
> > rbd -p mypool bench-write speed-test-image
> >
> > I get
> >
> > bench-write io_
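For a more controlled rbd bench-write run, the io size, thread count, and total
can be set explicitly rather than relying on the defaults; the values below are
illustrative:

$ rbd -p mypool create speed-test-image --size 1000
$ rbd -p mypool bench-write speed-test-image \
      --io-size 4096 --io-threads 16 --io-total 1073741824 --io-pattern rand
$ rbd -p mypool rm speed-test-image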
On 18 December 2015 at 15:48, Don Waterloo wrote:
>
>
> On 17 December 2015 at 21:36, Francois Lafont wrote:
>
>> Hi,
>>
>> I have a ceph cluster, currently unused, and I have (to my mind) very low
>> performance.
>> I'm not an e
On 17 December 2015 at 21:36, Francois Lafont wrote:
> Hi,
>
> I have a ceph cluster, currently unused, and I have (to my mind) very low
> performance.
> I'm not an expert in benchmarks; here is an example of a quick bench:
>
> ---
> # fio --randrep
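The fio line is cut off in the archive, so as a complementary baseline that takes
cephfs out of the picture entirely, raw RADOS throughput can be measured with
rados bench (pool name is illustrative):

# 10-second write test, keeping the objects so a read pass can follow
$ rados bench -p testpool 10 write --no-cleanup
$ rados bench -p testpool 10 seq
$ rados -p testpool cleanup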
I have 3 systems with a cephfs mounted on them.
And I am seeing material 'lag'. By 'lag' I mean it hangs for little bits of
time (1s, sometimes 5s).
But it is very non-repeatable.
If I run
time find . -type f -print0 | xargs -0 stat > /dev/null
it might take ~130ms.
But, it might take 10s. Once I've done
101916 objects
265 GB used, 5357 GB / 5622 GB avail
840 active+clean
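When one of those multi-second stalls hits, the MDS admin socket is the first
place to look; a rough sketch (mds id is a placeholder), run on an MDS node while
the find/stat loop is hanging:

# what the MDS is chewing on right now
$ sudo ceph daemon mds.<id> dump_ops_in_flight
# client sessions and how many caps each holds
$ sudo ceph daemon mds.<id> session ls
# counters taken before/after a stall can show journal or cap-recall pressure
$ sudo ceph daemon mds.<id> perf dump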
On 6 December 2015 at 08:18, Yan, Zheng wrote:
> On Sun, Dec 6, 2015 at 7:01 AM, Don Waterloo
> wrote:
> > Thanks for the advice.
> >
> > I dumped the filesystem contents, then de
On Fri, Dec 4, 2015 at 10:39 AM, Don Waterloo
> wrote:
> > I have a file which is untouchable: ls -i gives an error, stat gives an
> > error. It shows ??? for all fields except name.
> >
> > How do i clean this up?
> >
>
> The safest way to clean this up is
I have a file which is untouchable: ls -i gives an error, stat gives an
error. It shows ??? for all fields except name.
How do I clean this up?
I'm on Ubuntu 15.10, running 0.94.5:
# ceph -v
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
The node that accessed the file then caused