On Tue, 21 Jan 2020 at 11:12, Peter Maydell <peter.mayd...@linaro.org> wrote:
>
> On Tue, 21 Jan 2020 at 06:40, Markus Armbruster <arm...@redhat.com> wrote:
> > John Snow <js...@redhat.com> writes:
> > > Still, I do want to ask: Are we sure we want to double-down on keeping
> > > the .hx files around instead of trying to move to a more generic data
> > > format?
> >
> > On the one hand, I'd prefer to invest as little as practical into .hx.
> > On the other hand, adding more hard dependencies on QAPIfication is not
> > a good idea.
> >
> > What's the stupidest solution that could possibly work now? Is it the
> > one Peter sketched?

FWIW, I wrote some code for the Sphinx extension approach yesterday,
along the 'simplest possible thing' lines. It's less than 200 lines
of Python (though I still need to put in the support for DEFHEADING
and ARCHHEADING). The actual texinfo fragments in the various .hx
files would of course also need to be hand-converted to rST, same
as the hand-written manual .texi file contents.
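For readers unfamiliar with the .hx files: they interleave C-side DEF() entries with documentation fragments, plus DEFHEADING()/ARCHHEADING() lines that mark section headings. A minimal sketch of how the missing heading support might recognize those lines is below; the regex and function name are illustrative assumptions, not the actual code mentioned above.

```python
import re

# Hypothetical sketch: recognize DEFHEADING()/ARCHHEADING() lines in
# .hx input. This is NOT QEMU's actual hxtool extension code, just an
# illustration of the parsing step it would need.
HEADING_RE = re.compile(r'(DEFHEADING|ARCHHEADING)\((.*?)\)')

def extract_headings(hx_text):
    """Return (kind, argument-text) pairs for each heading directive."""
    return [(m.group(1), m.group(2)) for m in HEADING_RE.finditer(hx_text)]

sample = """\
DEFHEADING(Standard options:)
DEF("help", 0, QEMU_OPTION_h, ...)
ARCHHEADING(ARM-specific options:, TARGET_ARM)
"""
print(extract_headings(sample))
```

A real extension would then turn each heading into a section node rather than just collecting strings.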
Incidentally, I am definitely coming to the conclusion that the best
way to generate docs that go into Sphinx manuals is to use/write a
Sphinx extension -- this lets you properly create doctree nodes, for
instance, and it fits the flow better. So in a potential future where
we were generating these docs from JSON, I think we'd want a Sphinx
extension driving the 'parse the JSON for docs' step, rather than a
separate script that spits out rST-format text to include.

thanks
-- PMM