A quick follow-up on this thread.
As an alternative to Plaid, it's possible to write your own scraping
code (by this I mean code that goes and pulls data from the websites
where you can manually download your OFX files).
To automate a browser, you can use Selenium (which has bindings for
Python and many other languages). With it you can open a page, click on
various elements, and so on, all from a script.
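For concreteness, here is a minimal sketch of what that looks like with
the Selenium Python bindings (the URL, element IDs and credentials are
made-up placeholders; the real ones depend entirely on your bank's site):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Placeholder URL and element IDs; substitute your bank's own.
    driver = webdriver.Firefox()
    driver.get("https://www.example-bank.com/login")
    driver.find_element(By.ID, "username").send_keys("my_user")
    driver.find_element(By.ID, "password").send_keys("my_password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # ... then navigate to the export page and click the OFX link ...
    driver.quit()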
The difficulty is figuring out the series of clicks required to
download your OFX files (you have to specify each element to click, and
that can be very hard to determine by hand).
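Once you have identified an element (for instance with the browser's
developer tools), an explicit wait makes the click reliable even when
the page loads slowly. A sketch, with a hypothetical selector:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("https://www.example-bank.com/accounts")  # placeholder

    # "a.download-ofx" is a made-up selector; the real one has to be
    # dug out of the page, or recorded as described below.
    wait = WebDriverWait(driver, 30)
    link = wait.until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "a.download-ofx")))
    link.click()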
But there's a great time-saving tool called Selenium IDE (an extension
for Chrome or Firefox) that lets you simply "record" a series of clicks
you make on the web page and export them as ready-to-use code, so
Selenium can replicate your actions and download your data
automatically.
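The recorded script only replays the clicks; to have the OFX file land
in a known folder without a save-file dialog, you can also pass download
preferences to the browser when you start it. A sketch for Firefox (the
directory is a placeholder, and the MIME types are guesses; check what
your bank actually serves):

    from selenium import webdriver

    options = webdriver.FirefoxOptions()
    # Save downloads to a custom directory without prompting.
    options.set_preference("browser.download.folderList", 2)
    options.set_preference("browser.download.dir", "/path/to/ofx-downloads")
    options.set_preference("browser.helperApps.neverAsk.saveToDisk",
                           "application/x-ofx,application/vnd.intu.qfx")
    driver = webdriver.Firefox(options=options)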
While this requires a bit of programming, it's not bad at all, and you
keep full control over what gets done and how (in particular, if you can
manually download your OFX files, you can write a script to do it
automatically, and you're not giving anyone else access to your
account).
Just wanted to share what I have found so far for those who might be
curious/interested.
Jean