On Thu, Sep 19, 2019 at 11:06 PM David Steele <da...@pgmasters.net> wrote:
> > I am not crazy about JSON because it requires that I get a json parser
> > into src/common, which I could do, but given the possibly-imminent end
> > of the universe, I'm not sure it's the greatest use of time. You're
> > right that if we pick an ad-hoc format, we've got to worry about
> > escaping, which isn't lovely.
>
> My experience is that JSON is simple to implement and has already dealt
> with escaping and data structure considerations. A home-grown solution
> will be at least as complex but have the disadvantage of being non-standard.
I think that's fair, and I just spent a little while investigating how difficult it would be to disentangle the JSON parser from the backend. It has dependencies on the following bits of backend-only functionality:

- check_stack_depth(). No problem, I think. Just skip it for frontend code.

- pg_mblen() / GetDatabaseEncoding(). Not sure what to do about this. Some of our infrastructure for dealing with encoding is available in the frontend and backend, but this part is backend-only.

- elog() / ereport(). Kind of a pain. We could just kill the program if an error occurs, but that seems a bit ham-fisted. Refactoring the code so that the error is returned rather than thrown might be the way to go, but it's not simple, because you're not just passing a string:

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
             errmsg("invalid input syntax for type %s", "json"),
             errdetail("Character with value 0x%02x must be escaped.",
                       (unsigned char) *s),
             report_json_context(lex)));

- appendStringInfo et al. I don't think it would be that hard to move this to src/common, but I'm also not sure it really solves the problem, because StringInfo has a 1GB limit, and there's no rule at all that a backup manifest has got to be less than 1GB.

https://www.pgcon.org/2013/schedule/events/595.en.html

This gets at another problem that I just started to think about. If the file is just a series of lines, you can parse it one line at a time and do something with that line, then move on. If it's a JSON blob, you have to parse the whole file and get a potentially giant data structure back, and then operate on that data structure. At least, I think you do. There's probably some way to create a callback structure that lets you presuppose that the toplevel data structure is an array (or object) and get back each element of that array (or key/value pair) as it's parsed, but that sounds pretty annoying to get working.

Or we could just decide that you have to have enough memory to hold the parsed version of the entire manifest file in memory all at once, and if you don't, maybe you should drop some tables or buy more RAM. That still leaves you with bypassing the 1GB size limit on StringInfo, maybe by having a "huge" option, or perhaps by memory-mapping the file and then making the StringInfo point directly into the mapped region. Perhaps I'm overthinking this and maybe you have a simpler idea in mind about how it can be made to work, but I find all this complexity pretty unappealing.

Here's a competing proposal: let's decide that lines consist of tab-separated fields. If a field contains a \t, \r, or \n, put a " at the beginning, a " at the end, and double any " that appears in the middle. This is easy to generate and easy to parse. It lets us completely ignore encoding considerations. Incremental parsing is straightforward. Quoting will rarely be needed because there's very little reason to create a file inside a PostgreSQL data directory that contains a tab or a newline, but if you do it'll still work. The lack of quoting is nice for humans reading the manifest, and nice in terms of keeping the manifest succinct; in contrast, note that using JSON doubles every backslash.

I hear you saying that this is going to end up being just as complex in the end, but I don't think I believe it. It sounds to me like the difference between spending a couple of hours figuring this out and spending a couple of months trying to figure it out and maybe not actually getting anywhere.
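To make "easy to generate" concrete, here is roughly what the writing side could look like. This is only a sketch of the rule described above, not a patch; the function names are made up, and it follows the rule exactly as stated, so a field that merely starts with a " but contains no tab or newline goes out unquoted (a real implementation might want to quote that case too, so the reader never has to guess).

    #include <stdio.h>
    #include <string.h>

    /*
     * Write one field under the proposed rule: quote only if the field
     * contains \t, \r, or \n, and double any embedded " when quoting.
     */
    static void
    write_manifest_field(FILE *fp, const char *field)
    {
        if (strpbrk(field, "\t\r\n") == NULL)
        {
            fputs(field, fp);
            return;
        }

        fputc('"', fp);
        for (const char *p = field; *p; p++)
        {
            if (*p == '"')
                fputc('"', fp);     /* "" stands for a literal " */
            fputc(*p, fp);
        }
        fputc('"', fp);
    }

    /* One manifest line: tab-separated fields, newline-terminated. */
    static void
    write_manifest_line(FILE *fp, const char *const *fields, int nfields)
    {
        for (int i = 0; i < nfields; i++)
        {
            if (i > 0)
                fputc('\t', fp);
            write_manifest_field(fp, fields[i]);
        }
        fputc('\n', fp);
    }

The caller just does something like write_manifest_line(fp, fields, 3) once per file in the backup.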
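And to back up "incremental parsing is straightforward," here's a sketch of the reading side under the same assumptions (again, the names are invented and error handling is minimal). The one wrinkle is that a quoted field can contain a literal newline, so whatever reads physical lines has to keep appending them until any open quote is closed before calling this:

    #include <string.h>

    /*
     * Split one logical manifest line into fields, in place.  Unquoted
     * fields end at the next tab; a field beginning with " runs to the
     * matching quote, with "" collapsed to a single ".  Returns the
     * number of fields, or -1 on malformed input.
     */
    static int
    parse_manifest_line(char *line, char **fields, int maxfields)
    {
        char   *cp = line;
        int     nfields = 0;

        while (nfields < maxfields)
        {
            if (*cp == '"')
            {
                char   *out = ++cp;

                fields[nfields++] = out;
                for (;;)
                {
                    if (*cp == '\0')
                        return -1;          /* unterminated quoted field */
                    if (*cp == '"' && *++cp != '"')
                        break;              /* closing quote */
                    *out++ = *cp++;         /* copies "" as a single " */
                }
                *out = '\0';
                if (*cp == '\0')
                    return nfields;
                if (*cp++ != '\t')
                    return -1;              /* junk after closing quote */
            }
            else
            {
                char   *tab = strchr(cp, '\t');

                fields[nfields++] = cp;
                if (tab == NULL)
                    return nfields;
                *tab = '\0';
                cp = tab + 1;
            }
        }
        return nfields;                     /* extra fields ignored */
    }

So the loop is: fgets() physical lines into a buffer, keep reading while a quote is still open, call parse_manifest_line() on the assembled logical line, act on that one entry, and move on.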
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company