On Tue, Jan 28, 2020 at 7:26 AM CASS Philip wrote:
>
> I have a query about https://docs.ceph.com/docs/master/cephfs/createfs/:
>
>
>
> “The data pool used to create the file system is the “default” data pool and
> the location for storing all inode backtrace information, used for hard link
> management and disaster recovery.”
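For reference, the default data pool can be seen with "ceph fs ls" (a sketch; the file system and pool names below are assumptions):

    ceph fs ls
    # name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
    # the pool passed to "ceph fs new" is the default data pool; it holds the
    # inode backtraces and cannot be removed from the file system afterwards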
OSDs do not even use bonding efficiently. If they were to use both links
concurrently, it would be a lot better.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
-----Original Message-----
To: ceph-users@ceph.io
Subject: [ceph-users] Re: small cluster HW upgrade
Hi Philipp
On Wed, Jan 29, 2020 at 3:04 AM Frank Schilder wrote:
>
> I would like to (in this order)
>
> - set the data pool for the root "/" of a ceph-fs to a custom value, say "P"
> (not the initial data pool used in fs new)
> - create a sub-directory of "/", for example "/a"
> - mount the sub-directory "/a"
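A sketch of those steps with the kernel client (the pool name "P", file system name "cephfs" and mount points are assumptions):

    ceph fs add_data_pool cephfs P                       # make pool "P" usable by the fs
    setfattr -n ceph.dir.layout.pool -v P /mnt/cephfs    # new files under "/" go to "P"
    mkdir /mnt/cephfs/a
    mount -t ceph mon1:6789:/a /mnt/a -o name=admin      # mount only the sub-directory

The layout xattr only affects files created after it is set; existing files stay in the pool they were written to.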
On Wed, Jan 29, 2020 at 1:25 AM Samy Ascha wrote:
>
> Hi!
>
> I've been running CephFS for a while now and ever since setting it up, I've
> seen unexpectedly large write i/o on the CephFS metadata pool.
>
> The filesystem is otherwise stable and I'm seeing no usage issues.
>
> I'm in a read-intensive environment.
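The per-pool client I/O rates can be watched to quantify this (a sketch; the pool name is an assumption):

    ceph osd pool stats cephfs_metadata    # read/write rates caused by clients (here: the MDS)
    ceph fs status                         # MDS request rate and pool usage

The MDS journals its metadata updates to the metadata pool, so some write traffic there is expected even when clients are mostly reading.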
This is a natural condition of bonding; it has little to do with ceph-osd.
Make sure your hash policy is set appropriately, so that you even have a
chance of using both links.
https://support.packet.com/kb/articles/lacp-bonding
The larger the set of destinations, the more likely you are to spread
traffic across both links.
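A sketch of checking and setting this on Linux bonding (interface names and the ifupdown syntax are assumptions; the switch side of the LACP trunk hashes independently and needs a sane policy too):

    grep -i hash /proc/net/bonding/bond0    # current transmit hash policy
    # /etc/network/interfaces:
    #   bond-mode 802.3ad
    #   bond-xmit-hash-policy layer3+4      # hash on IP+port instead of MAC only

With layer2 hashing all traffic between two hosts uses one slave; layer3+4 at least gives different TCP connections a chance to land on different links.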
Hello,
what you see is a stack trace, so the OSD is hitting an unexpected
state (otherwise there would be an error handler).
The crash happens when the OSD wants to read from a pipe while processing
a heartbeat. To me it sounds like a networking issue.
I see that the other OSDs on that host are healthy.
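A few quick checks for that kind of heartbeat problem (a sketch; host and interface names are assumptions):

    ping -c 3 -M do -s 8972 <peer-osd-host>    # MTU end to end, if jumbo frames are used
    ethtool -S eth0 | grep -Ei 'drop|err'      # NIC drop/error counters
    dmesg -T | grep -i 'link is'               # link flaps on the OSD host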
You can optimize ceph-osd for this of course. It would benefit people
who like to use 1Gbit connections. I can understand that putting time
into it now does not make sense because of the availability of 10Gbit.
However, I do not get why this was not optimized already 5 or 10 years
ago.
---
On Tue, 28 Jan 2020, Paul Emmerich wrote:
Yes, data that is not synced is not guaranteed to be written to disk,
this is consistent with POSIX semantics.
Getting all 0s back from read() for a range that was written successfully
by write() with data other than 0s does not seem to be consistent with that.
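A minimal way to see the distinction from the shell (a sketch; the CephFS mount point is an assumption):

    dd if=/dev/urandom of=/mnt/cephfs/testfile bs=4k count=256 conv=fsync
    # conv=fsync makes dd call fsync() before exiting; only once that returns is
    # the data guaranteed to be durable. A write() that has not been followed by
    # fsync()/fdatasync() may legitimately not have reached disk yet.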
Hello Dave,
you can configure Ceph to pick multiple OSDs per host and therefore work
like a classic RAID.
It will cause downtime whenever you have to do maintenance on a system,
but if you plan to grow it quite fast, it may be an option for you.
--
Martin Verges
Managing director
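A sketch of that kind of rule (rule and pool names are assumptions); with failure domain "osd" the replicas of a PG may land on the same host, which is what makes it behave like a local RAID:

    ceph osd crush rule create-replicated rep-osd default osd   # replicate across OSDs, not hosts
    ceph osd pool set mypool crush_rule rep-osd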
Hi Frederic,
I guess it is not stuck but just iterating. My "orphans find" job has been
running for nearly 2 months now! I hope you started it in a screen session ;)
happy waiting,
Ingo
----- Original Message -----
From: "CUZA Frédéric"
To: "ceph-users"
Sent: Friday, 31 January 2020 11:19:49
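For anyone starting such a job, a sketch (the pool and job names are assumptions):

    screen -S orphans                     # detach with Ctrl-A d, resume with: screen -r orphans
    radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-2020
    radosgw-admin orphans list-jobs       # jobs the cluster still knows about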
On 2/2/20 5:20 PM, Andreas John wrote:
> Hello,
>
> what you see is a stack trace, so the OSD is hitting an unexpected
> state (otherwise there would be an error handler).
>
> The crash happens when the OSD wants to read from a pipe while processing
> a heartbeat. To me it sounds like a networking issue.
How do I know if an OSD is super busy? Thanks.
On 2/3/20 8:39 AM, wes park wrote:
> How do I know if an OSD is super busy? Thanks.
Check if it's using 100% CPU, for example, and check the disk utilization
with iostat.
Wido
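A sketch of those checks (the OSD's data device is an assumption):

    top -b -n 1 -p "$(pidof ceph-osd | tr ' ' ',')"   # CPU usage per ceph-osd process
    iostat -x 1                                       # %util and await per disk (sysstat)
    ceph osd perf                                     # commit/apply latency per OSD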