On 26/03/18 19:07, Nikita Glukhov wrote:
> Attached fixed 3rd version of the patch:
Thanks, I'm reviewing this now. Nice speedup!
There is no test coverage for some of the added code. You can get a code
coverage report with:
./configure --enable-coverage ...
make
make -C src/pl/plpython check
make coverage-html
That produces a code coverage report in coverage/index.html. Please look
at the coverage of the new functions, and add tests to
src/pl/plpython/sql/plpython_types.sql so that all the new code is covered.
In some places where you've already checked the object type, e.g. with
PyFloat_Check(), I think you could use the less safe variants, like
PyFloat_AS_DOUBLE() instead of PyFloat_AsDouble(). Since this patch is
all about performance, that seems worth doing.
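For example, where the float branch is already guarded by PyFloat_Check(),
I'd expect something like this to be safe (untested sketch):

if (PyFloat_Check(plrv))
{
    *isnull = false;
    /* unchecked macro form; we already know plrv is a PyFloat */
    return Float8GetDatum(PyFloat_AS_DOUBLE(plrv));
}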
Some of the conversions seem a bit questionable. For example:
/*
 * Convert a Python object to a PostgreSQL float8 datum directly.
 * If can not convert it directly, fallback to PLyObject_ToScalar
 * to convert it.
 */
static Datum
PLyObject_ToFloat8(PLyObToDatum *arg, PyObject *plrv,
                   bool *isnull, bool inarray)
{
    if (plrv == Py_None)
    {
        *isnull = true;
        return (Datum) 0;
    }

    if (PyFloat_Check(plrv) || PyInt_Check(plrv) || PyLong_Check(plrv))
    {
        *isnull = false;
        return Float8GetDatum((double) PyFloat_AsDouble(plrv));
    }

    return PLyObject_ToScalar(arg, plrv, isnull, inarray);
}
The conversion from a Python int to a C double is performed by
PyFloat_AsDouble(), but there's no error checking. And wouldn't
PyLong_AsDouble() be more appropriate in that case, since we've already
checked the Python type?
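Something along these lines, perhaps (untested sketch; the PLy_elog() call
is just my guess at how the error should be reported here):

if (PyLong_Check(plrv))
{
    double      v = PyLong_AsDouble(plrv);

    /* PyLong_AsDouble() returns -1.0 with an exception set on overflow */
    if (v == -1.0 && PyErr_Occurred())
        PLy_elog(ERROR, "could not convert Python int to float8");

    *isnull = false;
    return Float8GetDatum(v);
}

(The Python 2 PyInt case would presumably still need PyFloat_AsDouble()
or a separate branch.)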
- Heikki