>I see that every folder under "/var/lib/ceph/osd/" is a tmpfs mount
>point filled with the appropriate files and symlinks, except for
>"/var/lib/ceph/osd/ceph-1",
>which is just an empty folder not mounted anywhere.
>I tried to run
>
>"ceph-bluestore-tool prime-osd-dir --dev
>/dev/ceph-e53b65ba-5eb0-44f5-9160-a2328f787a0f/osd-block-8c6324a3-0364-4fad-9dcb-81a1661ee202
>--path
>/var/lib/ceph/osd/ceph-1"
>
>It created some files under /var/lib/ceph/osd/ceph-1, but without a tmpfs
>mount, and the files belonged to root. I changed the owner of these files
>to ceph:ceph,
>created the appropriate symlinks for block and block.db, but ceph-osd@1
>still refused to start. Only the "unable to find keyring" messages
>disappeared.
>
>Please give any help on where to move next.
>Thanks in advance for your help.
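For reference, the step that normally creates the tmpfs mount and primes the directory is "ceph-volume lvm activate". A minimal sketch, assuming OSD id 1 and that the UUID in the LV name above is the OSD fsid:

```
# Show the OSDs ceph-volume knows about on this host (prints osd id and osd fsid)
ceph-volume lvm list

# Activate one OSD: mounts the tmpfs on /var/lib/ceph/osd/ceph-1, runs
# prime-osd-dir, creates the block/block.db symlinks with the right
# ownership, and enables/starts the systemd unit.
ceph-volume lvm activate 1 8c6324a3-0364-4fad-9dcb-81a1661ee202

# Or activate everything discoverable on the host:
ceph-volume lvm activate --all
```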
--
David Caro
> Looking into the Python scripts of ceph-volume, I noticed that the tmpfs is mounted
> during the run of "ceph-volume lvm activate",
> and "ceph-bluestore-tool prime-osd-dir" is started from the same script
> afterwards.
> Should I try starting "ceph-volume lvm
> ** Compaction Stats [default] **
> Priority  Files  Size     Score  Read(GB)  Rn(GB)  Rnp1(GB)  Write(GB)  Wnew(GB)  Moved(GB)  W-Amp  Rd(MB/s)  Wr(MB/s)  Comp(sec)  CompMergeCPU(sec)  Comp(cnt)  Avg(sec)  KeyIn  KeyDrop
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> User      0/0    0.00 KB  0.0    0.0       0.0     0.0       0.0        0.0       0.0        0.0    0.0       1.4       0.00       0.00               1          0.001     0      0
> Uptime(secs): 0.0 total, 0.0 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.21 MB/s write, 0.00 GB read, 0.00
> MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s
> read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0
> level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for
> pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0
> memtable_compaction, 0 memtable_slowdown, interval 0 total count
>
> ** File Read Latency Histogram By Level [default] **
>
> 2020-10-28 17:17:13.253 7eff1f7cd1c0 0 mon.mgmt03 does not exist in monmap,
> will attempt to join an existing cluster
> 2020-10-28 17:17:13.254 7eff1f7cd1c0 0 using public_addr v2:10.2.1.1:0/0 ->
> [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]
> 2020-10-28 17:17:13.254 7eff1f7cd1c0 0 starting mon.mgmt03 rank -1 at public
> addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] at bind addrs
> [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] mon_data
> /var/lib/ceph/mon/ceph-mgmt03 fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:13.256 7eff1f7cd1c0 1 mon.mgmt03@-1(???) e2 preinit fsid
> 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:13.256 7eff1f7cd1c0 1 mon.mgmt03@-1(???) e2
> initial_members mgmt01,mgmt02,mgmt03, filtering seed monmap
> 2020-10-28 17:17:13.256 7eff1f7cd1c0 1 mon.mgmt03@-1(???) e2 preinit clean
> up potentially inconsistent store state
> 2020-10-28 17:17:13.258 7eff1f7cd1c0 0 --
> [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] send_to message mon_probe(probe
> 374aed9e-5fc1-47e1-8d29-4416f7425e76 name mgmt03 new mon_release 14) v7 with
> empty dest
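A hedged sketch of how to compare the cluster's monmap with what mon.mgmt03 has on disk (mon id taken from the log above; /tmp/monmap is just a scratch path, and the extract step needs the mon daemon stopped):

```
# The monmap the quorum currently agrees on (run against a working mon)
ceph mon dump

# The monmap stored in this mon's data dir (stop ceph-mon@mgmt03 first)
ceph-mon -i mgmt03 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
```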
--
David Caro
iving load, so there seems to be
no problem, but does anyone know what that error is about?
Thanks!
--
David Caro
Thanks
>
> On Wed, Nov 25, 2020 at 4:03 PM David Caro wrote:
>
> >
> > Yep, you are right:
> >
> > ```
> > # cat /sys/block/sdd/queue/rotational
> > 1
> > ```
> >
> > I was looking at the code too, but you got there before me :)
>
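If the device is actually flash and only mis-reported by the kernel, a hedged sketch of the usual checks and the CRUSH device-class override (osd.4 is purely illustrative):

```
# What the kernel reports per block device (1 = rotational, 0 = non-rotational)
grep . /sys/block/sd*/queue/rotational

# The device class Ceph assigned to each OSD
ceph osd df tree

# Re-label a mis-detected OSD as ssd (osd.4 is illustrative)
ceph osd crush rm-device-class osd.4
ceph osd crush set-device-class ssd osd.4
```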
pty pool. And the pool dump gets huge.
>
> I would take a look at the iostat output for those OSD drives and see whether
> there are really only 8 IOPS or a lot more.
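For example, something like this (a sketch; device names and interval are illustrative):

```
# Extended per-device statistics every 5 seconds; compare r/s and w/s on the
# OSD data devices with the IOPS the cluster claims to be doing.
iostat -x sdb sdc 5
```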
>
> --
> May the most significant bit of your life be positive.
en readonly would be nice.
--
David Caro
ocess in a test environment
first.
Cheers!
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdh...@binghamton.edu
> > > > Are there any features like this in libRADOS?
> > > > > > >
> > > > > > > Thank you
> Not sure if non-
> containerized deployments hit this issue as well. I will find that out
> somewhere next week.
>
> FYI,
>
> Gr. Stefan
>
> [1]: https://tracker.ceph.com/issues/49770
' might
also help in figuring out what the issue is.
On 03/12 16:33, Marc wrote:
>
> Python3 14.2.11 is still supporting python2; I can't imagine that a minor
> update has such a change. Furthermore, wasn't el7 officially supported?
>
>
>
" for more information.
>>> import typing
>>>
An older image (v4.0.18-stable-4.0-nautilus-centos-7) does not have that
module either. I'm no expert on how
ceph-ansible sets the containers up, though; maybe it failed to do some setup?
Do you have any logs/output f
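A quick way to check is to run the interpreter inside the image itself; a sketch, assuming podman (or docker) is available and that the full image name for that tag is the usual ceph/daemon one:

```
# Try importing the module inside the container image (the registry/repository
# part of the image name is an assumption based on the tag mentioned above)
podman run --rm --entrypoint python3 \
    docker.io/ceph/daemon:v4.0.18-stable-4.0-nautilus-centos-7 \
    -c 'import typing; print(typing.__file__)'
```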
Thanks for the quick release! \o/
On Tue, 30 Mar 2021, 22:30 David Galloway, wrote:
> This is the 19th update to the Ceph Nautilus release series. This is a
> hotfix release to prevent daemons from binding to loopback network
> interfaces. All nautilus users are advised to upgrade to this release.
Reading the thread "s3 requires twice the space it should use", Boris pointed
out that the fragmentation for the osds is around 0.8-0.9:
> On Thu, Apr 15, 2021 at 8:06 PM Boris Behrens wrote:
>> I also checked the fragmentation on the bluestore OSDs and it is around
>> 0.80 - 0.89 on most OSDs.
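For anyone wanting to check their own OSDs, a sketch of how that rating is typically read from a running OSD (the osd id is illustrative; run it on the host that carries the OSD):

```
# Fragmentation rating of the BlueStore allocator, 0.0 (none) .. 1.0 (fully fragmented)
ceph daemon osd.12 bluestore allocator score block
```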
--
David Caro
bugtracker for some day or two?
>
> https://tracker.ceph.com/issues/new
>
>
> Best regards
--
David Caro
objects degraded (0.635%)
> 15676 active+clean
> 285 active+undersized+degraded+remapped+backfill_wait
> 230 incomplete
> 176 active+undersized+degraded+remapped+backfilling
> 8 down
> 6 peering
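For the incomplete/down PGs in a state like the above, a hedged sketch of the usual first-look commands (the PG id is illustrative):

```
# Which PGs are stuck, and why, in human-readable form
ceph health detail
ceph pg dump_stuck inactive

# Inspect one problematic PG; look at recovery_state and blocked_by
# to see which OSDs it is waiting for (PG id is illustrative)
ceph pg 7.1a3 query
```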
s:
>
> CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
> PG_DEGRADED: Degraded data redundancy: 132518/397554 objects degraded
> (33.333%), 65 pgs degraded, 65 pgs undersized
>
> Thank you for your hints.
>
> Best regards,
> Mabi
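A sketch of how failed cephadm daemons are usually tracked down and restarted (the daemon name is illustrative):

```
# Daemon status as the orchestrator sees it; failed ones show status 'error'
ceph orch ps

# Restart a specific failed daemon (name is illustrative)
ceph orch daemon restart osd.3

# On the affected host, the container-level view
cephadm ls
```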
--
David Caro
orever
>
> Regards
>
> Marcel
--
David Caro
phs in specific are you looking?
>
> Regards
>
> Marcel
>
> David Caro wrote on 2021-06-10 11:49:
> > We have a similar setup, way smaller though (~120 osds right now) :)
> >
> > We have different capped VMs, but most have 500 write, 1000 read iops
t convinced I’ve got the systemctl command right.
> >
>
> Are you not mixing 'non-container commands' with 'container commands'? As in,
> if you execute this journalctl outside of the container it will of course not
> find anything.
>
>
>
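If this is a cephadm/containerized deployment, the journal lives on the host but under a cluster-specific unit name; a sketch (the daemon name and <fsid> are placeholders):

```
# cephadm resolves the unit name for you
cephadm logs --name osd.1

# Equivalent direct journalctl call on the host; replace <fsid> with the cluster fsid
journalctl -u ceph-<fsid>@osd.1.service
```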
.
>
>
> Is that a known behavior, a bug, or a configuration problem? On two hosts I
> turned off swap and the OSDs have been running happily
> now for more than 6 weeks.
>
> Best,
> Alex
>
On 16.08.2021 at 13:52 +0200, David Caro wrote:
> > Afaik the swapping behavior is controlled by the kernel, there might be
> > some tweaks on the container engine side, but
> > you might want to try to tweak the default behavior by lowering the
> > '
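The quote is cut off here, but assuming the setting being referred to is the kernel's vm.swappiness, a minimal sketch:

```
# Current value (distribution default is usually 60)
sysctl vm.swappiness

# Lower it at runtime so the kernel is less eager to swap out daemon memory
sysctl -w vm.swappiness=10

# Persist across reboots
echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
```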
ed by the cephadm bootstrap command and not created by hand, and
> > it worked before the upgrade/reboot, so I am pretty confident about it.
> >
> > What do you think, can this be a bug or is it more a misconfiguration on my
> > side?
> >
> > Thanks,
> > Javier
>
I did not really look deeply, but from the last log it seems there are some UTF-8
chars somewhere (a Greek phi?) and the code is not handling them well when
logging, trying to use ASCII.
On Thu, 23 Dec 2021, 19:02 Michal Strnad, wrote:
> Hi all.
>
> We have a problem using disks accessible via multipath. We a
The hints have to be given from the client side as far as I understand; can you
share the client code too?
Also, note that it seems there are no guarantees that it will actually do anything
(best effort, I guess):
https://docs.ceph.com/docs/mimic/rados/api/librados/#c.rados_set_alloc_hint
Cheers
On 6