> On May 3, 2018, at 4:09 PM, Mike Kienenberger wrote:
>
> But I still cringe thinking of how I'd deal with updating one of
> Andrus's big CSV data sets. I guess you'd need to reload it into a
> database, make changes, and then export it again?
Unfortunately yes.
Andrus
There is one place I always use files, and that's for constants and
lookup tables. Data that almost never changes. My database test
framework always pulls in and populates these for me, so I never have
to think about them. I generally import these directly from
production.
For data that changes, I guess it depends on the situation. API-based is
easier to comprehend and manage on the lower end (small/narrow data sets),
while CSV scales much better to long/wide datasets. I often generate the
latter with real SQL against a real DB, then edit in LibreOffice/Excel, then
save as a CSV in the project.
Which do you hate less? :)
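(For reference, a minimal sketch of the file-based side in plain JDBC: read a
hand-edited CSV whose first row is the column names and push it into a table
before the test runs. The class name, the loadCsv helper and the no-quoted-
commas assumption are all mine, not part of any Cayenne or Bootique API.)

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

public class CsvDataSetLoader {

    // Loads a simple CSV (header row = column names, values without quoted
    // commas) into the given table. Meant for small, hand-edited test data.
    public static void loadCsv(Connection connection, String table, Path csv)
            throws IOException, SQLException {

        List<String> lines = Files.readAllLines(csv);
        if (lines.size() < 2) {
            return; // header only, nothing to insert
        }

        String[] columns = lines.get(0).split(",");
        String placeholders = String.join(",", Collections.nCopies(columns.length, "?"));
        String sql = "INSERT INTO " + table
                + " (" + String.join(",", columns) + ") VALUES (" + placeholders + ")";

        try (PreparedStatement st = connection.prepareStatement(sql)) {
            for (String line : lines.subList(1, lines.size())) {
                String[] values = line.split(",", -1);
                for (int i = 0; i < columns.length; i++) {
                    // empty cell -> NULL; the driver handles type conversion
                    st.setString(i + 1, values[i].isEmpty() ? null : values[i]);
                }
                st.addBatch();
            }
            st.executeBatch();
        }
    }
}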
Hi Mike,
> Is derby entirely in memory or does it have a filesystem footprint?
It does.
> I've found that maintaining tests using standalone data files is far more
> difficult than using helper classes to create default data sets.
Bootique tools support both types of datasets: file- and API-based.
As a general observation after using dbunit for 13 years, I've found
that maintaining tests using standalone data files is far more
difficult than using helper classes to create default data sets. I
ended up with far too many data sets, and things were even worse if I
needed a test with many instances.
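(To illustrate what "helper classes to create default data sets" can look
like, here is a hedged sketch in plain JDBC; the ARTIST/PAINTING tables and
the method names are just illustrative, not DbUnit or Bootique API.)

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical helper: every test starts from the same tiny baseline
// and adds or tweaks only the rows it actually cares about.
public class DefaultDataSets {

    public static void insertArtist(Connection c, int id, String name)
            throws SQLException {
        try (PreparedStatement st = c.prepareStatement(
                "INSERT INTO ARTIST (ARTIST_ID, ARTIST_NAME) VALUES (?, ?)")) {
            st.setInt(1, id);
            st.setString(2, name);
            st.executeUpdate();
        }
    }

    public static void insertPainting(Connection c, int id, int artistId, String title)
            throws SQLException {
        try (PreparedStatement st = c.prepareStatement(
                "INSERT INTO PAINTING (PAINTING_ID, ARTIST_ID, PAINTING_TITLE) VALUES (?, ?, ?)")) {
            st.setInt(1, id);
            st.setInt(2, artistId);
            st.setString(3, title);
            st.executeUpdate();
        }
    }
}

A test that needs many instances can then call these in a loop instead of
cloning yet another data file.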
I use Derby. In my experience it is the most "serious" choice of all the
in-memory Java databases. HSQL/H2 left a bad aftertaste from the days when we
used them for the Modeler preferences, though this may not be relevant in the
context of unit tests.
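(For completeness: by default Derby keeps the database in a directory on
disk, which is the footprint mentioned above, but it can also run purely in
memory via the "memory:" subprotocol in the JDBC URL. A minimal sketch, with
arbitrary database and table names, assuming derby.jar on the classpath.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DerbyInMemoryExample {

    public static void main(String[] args) throws SQLException {
        // The "memory:" subprotocol keeps the database entirely in memory;
        // without it Derby creates a database directory on disk.
        try (Connection c = DriverManager.getConnection(
                "jdbc:derby:memory:testdb;create=true");
             Statement st = c.createStatement()) {
            st.executeUpdate("CREATE TABLE LOOKUP_CODES (CODE VARCHAR(10) PRIMARY KEY)");
            st.executeUpdate("INSERT INTO LOOKUP_CODES VALUES ('A'), ('B')");
        }

        // Dropping the in-memory database is also done through the URL;
        // Derby signals a successful drop by throwing an SQLException.
        try {
            DriverManager.getConnection("jdbc:derby:memory:testdb;drop=true");
        } catch (SQLException expectedOnDrop) {
            // expected on a successful drop
        }
    }
}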
Beyond that, I use bootique-jdbc-test / bo