Hi,
For a number of applications we use, there is a lot of file duplication. This
wastes precious storage space, which I would like to avoid.
When using a local disk, I can use a hard link to let all duplicate files
point to the same inode (use "rdfind", for example).
As there isn't any ded
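For reference, the hard-link approach mentioned above typically looks like this on a local filesystem (a rough sketch only; the path is a placeholder, and a dry run should always be checked before letting rdfind modify anything):

# rdfind -dryrun true /path/to/app/data
# rdfind -makehardlinks true /path/to/app/data

Whether the same trick is safe on a shared CephFS mount depends on how the applications rewrite their files, since all hard-linked copies share one inode.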
Hi,
I mount cephfs on my client servers. Some of the servers mount without any
error whereas others don't.
The error:
# ceph-fuse -n client.kvm -m ceph.somedomain.com:6789 /mnt/kvm -r /kvm -d
2019-03-19 17:03:29.136 7f8c80eddc80 -1 deliberately leaking some memory
2019-03-19 17:03:29.137 7f8c80ed
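When only some clients fail to mount, it is usually worth comparing what the failing hosts actually have before digging into ceph-fuse itself. A quick sanity check might look like this (standard commands; client.kvm is the name used in the command above, and the keyring path is the conventional location, not confirmed from this report):

# ceph -n client.kvm --keyring /etc/ceph/ceph.client.kvm.keyring -s
# ceph auth get client.kvm

The first confirms the host can reach the monitors with that key at all; the second (run from an admin node) shows the caps the key actually has, which need to cover the /kvm path.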
Hi all,
We've just hit our first OSD replacement on a host created with
`ceph-volume lvm batch` with mixed hdds+ssds.
The hdd /dev/sdq was prepared like this:
# ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes
Then /dev/sdq failed and was then zapped like this:
# ceph-volume lvm zap /dev/
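For context, a single failed data device is usually replaced with something along these lines (a sketch, not the resolution of this thread; the OSD id and the db LV name are placeholders):

# ceph-volume lvm zap --destroy /dev/sdq
# ceph osd destroy <osd-id> --yes-i-really-mean-it
# ceph-volume lvm create --osd-id <osd-id> --data /dev/sdq --block.db <ssd-vg>/<db-lv>

The tricky part with mixed hdd+ssd batch deployments is re-attaching the new data LV to the block.db LV that `batch` originally carved out on the SSD.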
On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote:
>
> Hi all,
>
> We've just hit our first OSD replacement on a host created with
> `ceph-volume lvm batch` with mixed hdds+ssds.
>
> The hdd /dev/sdq was prepared like this:
># ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes
>
> Then /d
On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
>
> On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote:
> >
> > Hi all,
> >
> > We've just hit our first OSD replacement on a host created with
> > `ceph-volume lvm batch` with mixed hdds+ssds.
> >
> > The hdd /dev/sdq was prepared like this
On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
>
> On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
> >
> > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster
> > wrote:
> > >
> > > Hi all,
> > >
> > > We've just hit our first OSD replacement on a host created with
> > > `ceph-volume lvm
On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote:
>
> On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
> >
> > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
> > >
> > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster
> > > wrote:
> > > >
> > > > Hi all,
> > > >
> > > > We've j
On Tue, Mar 19, 2019 at 1:05 PM Alfredo Deza wrote:
>
> On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote:
> >
> > On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
> > >
> > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
> > > >
> > > > On Tue, Mar 19, 2019 at 6:47 AM Dan van de
We're glad to announce the first release of the Nautilus v14.2.0 stable
series. There have been a lot of changes across components from the
previous Ceph releases, and we advise everyone to go through the release
and upgrade notes carefully.
The release also saw commits from over 300 contributors and
Hi All.
I'm trying to get my head around where we can stretch our Ceph cluster
and into which applications. Parallelism works excellently, but baseline
throughput is - perhaps - not what I would expect it to be.
Luminous cluster running BlueStore - all OSD daemons have 16 GB of cache.
Fio files attac
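For readers without the attachments, the kind of comparison meant here is a single stream versus many, roughly like the following (paths and sizes are placeholders, not the poster's actual job files):

# fio --name=baseline --directory=/mnt/test --size=4G --bs=4M --rw=write --direct=1 --ioengine=libaio --iodepth=1 --numjobs=1
# fio --name=parallel --directory=/mnt/test --size=4G --bs=4M --rw=write --direct=1 --ioengine=libaio --iodepth=16 --numjobs=8 --group_reporting

If the second run scales while the first one crawls, the limit is per-stream latency (network round trips, sync writes) rather than raw cluster bandwidth.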
Hi,
Will Debian packages be released? I don't see them in the nautilus repo. I
thought that Nautilus was going to be Debian-friendly, unlike Mimic.
Sean
On Tue, 19 Mar 2019 14:58:41 +0100
Abhishek Lekshmanan wrote:
>
> We're glad to announce the first release of Nautilus v14.2.0 stable
>
One thing you can check is the CPU performance (cpu governor in particular).
On such light loads I've seen CPUs sitting in low performance mode (slower
clocks), giving MUCH worse performance results than when tried with heavier
loads. Try "cpupower monitor" on OSD nodes in a loop and observe the
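A concrete way to do that check (standard cpupower commands; which governors are available depends on the driver in use):

# watch -n1 cpupower monitor
# cpupower frequency-info
# cpupower frequency-set -g performance

The first shows the cores' actual clocks and C-states while the benchmark runs, the second shows the active governor, and the third pins the governor to performance to rule out frequency scaling as the cause.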
On 3/19/19 12:05 AM, David Coles wrote:
I'm looking at setting up a multi-site radosgw configuration where
data is sharded over multiple clusters in a single physical location,
and would like to understand how Ceph handles requests in this
configuration.
Looking through the radosgw source[1] i
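For readers less familiar with the terminology: such a layout is usually expressed as one realm with a zonegroup (or zone) per cluster, configured roughly like this (names and endpoints are placeholders, not taken from this setup):

# radosgw-admin realm create --rgw-realm=example --default
# radosgw-admin zonegroup create --rgw-zonegroup=shard1 --endpoints=http://rgw1:8080 --master --default
# radosgw-admin zone create --rgw-zonegroup=shard1 --rgw-zone=shard1-a --endpoints=http://rgw1:8080 --master --default
# radosgw-admin period update --commit

How an individual request is then routed or redirected between zonegroups is the part being asked about here.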
On Tue, Mar 19, 2019 at 02:17:56PM +0100, Dan van der Ster wrote:
On Tue, Mar 19, 2019 at 1:05 PM Alfredo Deza wrote:
On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote:
>
> On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
> >
> > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
Hi,
I'm getting an error when trying to use the APT repo for Ubuntu bionic.
Does anyone else have this issue? Is the mirror sync actually still in
progress? Or was something set up incorrectly?
E: Failed to fetch
https://download.ceph.com/debian-nautilus/dists/bionic/main/binary-amd64/Packages.bz2
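For comparison, the repo is normally configured with the standard key and a single sources line like this (paths per the usual Ceph docs; "bionic" matches the Ubuntu release mentioned above):

# wget -q -O- https://download.ceph.com/keys/release.asc | apt-key add -
# echo deb https://download.ceph.com/debian-nautilus/ bionic main > /etc/apt/sources.list.d/ceph.list
# apt update

If the sources line is correct and the fetch still fails, the package index simply isn't on the mirror (yet).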
> One thing you can check is the CPU performance (cpu governor in
> particular).
> On such light loads I've seen CPUs sitting in low performance mode (slower
> clocks), giving MUCH worse performance results than when tried with
> heavier
> loads. Try "cpupower monitor" on OSD nodes in a loop and ob
On 3/19/19 2:52 PM, Benjamin Cherian wrote:
> Hi,
>
> I'm getting an error when trying to use the APT repo for Ubuntu bionic.
> Does anyone else have this issue? Is the mirror sync actually still in
> progress? Or was something setup incorrectly?
>
> E: Failed to fetch
> https://download.ceph.com
I don't think file locks are to blame. I tried to control for that in my tests;
I was reading with fio from one set of files (multiple fio pids spawned from a
single command) while writing with dd to an entirely different file using a
different shell on the same host. So one CephFS kernel client
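For anyone wanting to reproduce, the setup described is roughly the following two commands running at the same time on the same CephFS mount (file names and sizes are illustrative, not the exact ones used):

# fio --name=readers --directory=/mnt/cephfs/readset --numjobs=4 --rw=read --bs=4M --size=2G --direct=1 --group_reporting
# dd if=/dev/zero of=/mnt/cephfs/otherdir/bigfile bs=1M count=8192 oflag=direct

Since the reads and the write never touch the same files, per-file locking shouldn't come into it, which is the point being made above.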
On Tue, Mar 19, 2019 at 7:54 PM Zhenshi Zhou wrote:
>
> Hi,
>
> I mount cephfs on my client servers. Some of the servers mount without any
> error whereas others don't.
>
> The error:
> # ceph-fuse -n client.kvm -m ceph.somedomain.com:6789 /mnt/kvm -r /kvm -d
> 2019-03-19 17:03:29.136 7f8c80eddc80
On 3/19/19 2:52 PM, Benjamin Cherian wrote:
> Hi,
>
> I'm getting an error when trying to use the APT repo for Ubuntu bionic.
> Does anyone else have this issue? Is the mirror sync actually still in
> progress? Or was something setup incorrectly?
>
> E: Failed to fetch
> https://download.ce
I set up an SSD Luminous 12.2.11 cluster and realized after data had been
added that pg_num was not set properly on the default.rgw.buckets.data pool
(where all the data goes). I adjusted the settings up, but recovery is
going really slow (like 56-110 MiB/s), ticking down at .002 per log
entry(ce
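As a general illustration of the knobs involved (the values are examples only; on Luminous both pg_num and pgp_num need bumping before data actually moves):

# ceph osd pool set default.rgw.buckets.data pg_num 512
# ceph osd pool set default.rgw.buckets.data pgp_num 512
# ceph tell 'osd.*' injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'

The last line trades some client latency for faster backfill; the defaults are deliberately conservative.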
On Tue, Mar 19, 2019 at 2:13 PM Erwin Bogaard
wrote:
> Hi,
>
>
>
> For a number of application we use, there is a lot of file duplication.
> This wastes precious storage space, which I would like to avoid.
>
> When using a local disk, I can use a hard link to let all duplicate files
> point to th
Hi all,
I've deployed a Mimic (13.2.5) cluster on 3 CentOS 7.6 servers, then configured
an iSCSI target and created a LUN, referring to
http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/.
I have another server running CentOS 7.4, on which I configured and mounted the LUN I've
just created, referring to
http
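For reference, the initiator side of that setup usually boils down to something like this (gateway IP and target IQN are environment-specific placeholders):

# iscsiadm -m discovery -t sendtargets -p <gateway-ip>
# iscsiadm -m node -T <target-iqn> -l
# multipath -ll

followed by creating a filesystem on the resulting multipath device.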
On Tue, Mar 19, 2019 at 2:53 PM Benjamin Cherian
wrote:
>
> Hi,
>
> I'm getting an error when trying to use the APT repo for Ubuntu bionic. Does
> anyone else have this issue? Is the mirror sync actually still in progress?
> Or was something setup incorrectly?
>
> E: Failed to fetch
> https://d
There aren't any Debian packages built for this release because we
haven't updated the infrastructure to build (and test) Debian packages
yet.
On Tue, Mar 19, 2019 at 10:24 AM Sean Purdy wrote:
>
> Hi,
>
>
> Will debian packages be released? I don't see them in the nautilus repo. I
> thought t