On Thu, Apr 1, 2021 at 1:27 AM redst...@gmail.com <redstre...@gmail.com> wrote:
> *Apologies, meant to start a new thread:*
> Awesome, thanks for sharing beanbuff. March seems to be the #ingest month :).

It's due to Daniele's active involvement in basically taking it over and giving it a good solid revamping. Thank you Daniele!

BTW, the "file" command, whose name was confusing as it is also a noun, was recently renamed to "archive".

> I'm curious though: what is the fundamental concern around representing thousands of temporary positions in Beancount? Is it performance, and if so, won't v3 solve this? Is it to not pollute your ledger with distracting short-term transactions, and if so, wouldn't a better approach be to put it in appropriate accounts in the hierarchy that don't distract? I'd imagine the pros of putting it in Beancount would far outweigh the cons (performance notwithstanding), given you can focus your analysis tools on one system (Beancount) rather than building a parallel system for it.

A number of things. The slowness is one of them, but like you said, it won't be an issue with the C++ version.

The other thing is that this type of import usually requires multiple steps. You have the intra-day or nightly process, and then the post-settlement process (which for me is weekends). The goal is to create a single clean database of each trade with all the associated costs, ids and details. For example, TD's platform only provides intraday data via thinkorswim, and that requires joining two tables to get an accurate picture: a trade history table and a table of the cash settlement account. One gives you the individual legs of the trade (the multiple options/stocks involved) and the other the fees, but with no transaction ids, only order ids, and the fees aren't broken down by option. Later on, once the information shows up through the API, you have to join in that data if you want the full picture; this includes the unique transaction IDs that make it easy to filter out past data unambiguously later on. What's more, futures accounts are settled separately, aren't available through their API, and have an additional daily set of mark-to-market transactions, which is yet another step. And there's a whole set of other complications as well, e.g. inconsistencies between names pulled from TOS and from the API around name changes. (There's a rough sketch of that intraday join just below.)

I suppose you could just wait 3-4 days and then create a multi-source join of all this data, but the way I like to trade involves keeping track of chains of trades as a logical unit, considering position adjustments as part of a single sequence of events. In other words: since I entered a position, considering all the connected adjustments, what is the total amount of credits & debits related to them? In my view, that's the correct way of looking at P/L for a single trade. If you collected 3.15 initially and made an adjustment that cost you 0.65, your current cost basis is 2.50. And I need that at least daily to know what the cost is, which can only be computed one way. But later I also want a clean full reconciliation that includes all the particular fees and details. Other people give up and just look at their Net Liq, trusting that they're doing a good job. I'm a bit of a Beancounter, so I like to know the specifics of the health of each of the little plants in my garden.

The analysis wouldn't change. I'd only convert the trades from Beancount back into a flat table. It's only a difference of backend.
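To make the thinkorswim part concrete, the intraday join is roughly of this shape. This is only an illustrative sketch, not the beanbuff code; the column names, symbols and amounts are made up, and the real exports carry many more fields:

    import pandas as pd

    # Hypothetical stand-ins for the two thinkorswim exports: the trade
    # history gives the individual legs of each order, the cash settlement
    # table gives the order-level costs (no transaction ids, only order ids).
    legs = pd.DataFrame(
        [(1001, "BIDU 18JUN21 150 P", "SELL", 1, 3.40),
         (1001, "BIDU 18JUN21 140 P", "BUY", 1, 1.90)],
        columns=["order_id", "symbol", "side", "quantity", "price"])

    cash = pd.DataFrame(
        [(1001, 2.60, 0.30)],
        columns=["order_id", "commissions", "fees"])

    # Join on the only shared key, the order id, then spread the order-level
    # costs evenly across the legs, since they aren't broken down per option.
    trades = legs.merge(cash, on="order_id", how="left")
    n_legs = trades.groupby("order_id")["symbol"].transform("count")
    trades["commissions"] /= n_legs
    trades["fees"] /= n_legs
    print(trades)

A later pass would then attach the unique transaction ids from the API to these rows, which is what makes filtering out already-imported data unambiguous.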
If you did all that with Beancount, you'd have to build a conversion from the much more general transactions to a simple flat schema with fixed fields, making assumptions about the shape of the transactions (there's a rough sketch of this at the bottom of this message). That's doable, but then you'd also have to find a way to update the text file - when you join in data from an additional data source - while minimizing changes (I suppose that's doable with the line numbers). I think it's doable, but I don't know if it's worth it. Having the transactions in Beancount allows you to highlight regions and compute aggregate cost & P/L over the region, so that's nice, but I find that I care more about aggregate statistics and win rates than these minutiae. Doing this right, with the intent to have Beancount be the full database, would involve creating a solid way to join in data to an existing ledger with minimal disruption to the ledger. I think that might tip the balance over.

> Also: why "buff"? :)

I don't know, it just had a ring to it. It seemed right at the time. It's probably subconscious; I haven't articulated it even for myself. Let's try to come up with something now. Google says two possibly relevant things about "buff":

1. INFORMAL: make (an element in a role-playing or video game) more powerful. "there are cards that'll buff your troops"

*adjective* INFORMAL, NORTH AMERICAN
1. being in good physical shape with fine muscle tone. "the driver was a buff blond named March"

There you go. That's why. Trading will make your portfolio buff.

> On Wednesday, March 31, 2021 at 7:48:27 PM UTC-7 bl...@furius.ca wrote:
>
> Red: Related to investments, I'm in the process of cleaning up and building common data structures specifically for trading accounts, in a new repo called "beanbuff". Find related code here:
> https://github.com/beancount/beanbuff/tree/master/beanbuff
>
> At the moment it's just the importers I'm sharing, but I'm working on making the conversion to an intermediate trade log consistent across them and to convert that trade log to Beancount. I'm finding that having a flat table trade log is more useful for analysis and am still on the fence about representing thousands of transactions for temporary positions in Beancount inputs. I tend to separate the long term stuff (e.g., a long position on an ETF held for years, with associated covered calls) from active trading (e.g., a strangle on BIDU held for 20 days, a /ZCN21-/ZCZ21 calendar spread, etc.); those things don't matter much in the long run and I think it might make more sense to summarize the impact of trading to just a single transaction per week and to keep all the detail in separate files analyzed by custom scripts doing various breakdowns on P/L.
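P.S. For concreteness, here is roughly what the Beancount-to-flat-table direction above could look like. This is just a minimal sketch against the current (v2) Python loading API, not anything that exists in beanbuff; the ledger filename, the choice of output columns and the CSV output are made-up placeholders.

    import csv
    from beancount import loader
    from beancount.core import data

    # Load a (hypothetical) ledger holding the trading transactions.
    entries, errors, options_map = loader.load_file("trading.beancount")

    rows = []
    for entry in entries:
        if not isinstance(entry, data.Transaction):
            continue
        for posting in entry.postings:
            rows.append({
                "date": entry.date,
                "narration": entry.narration,
                "account": posting.account,
                "number": posting.units.number if posting.units else None,
                "currency": posting.units.currency if posting.units else None,
                # Keep the source line number around, to be able to write
                # joined-in data back with minimal disruption to the file.
                "lineno": entry.meta.get("lineno"),
            })

    if rows:
        with open("trades.csv", "w", newline="") as outfile:
            writer = csv.DictWriter(outfile, fieldnames=list(rows[0]))
            writer.writeheader()
            writer.writerows(rows)

The real version would also have to decide which postings belong to which trade chain, which is exactly where the fixed-schema assumptions on the transactions come in.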