Hi all,

We've been doing some more work on LDP performance since our previous
message, and wanted to send a quick update.

We can confirm that upgrading to PSQL 9.2+ (we are using 9.4) brings
significant performance benefits for LDP requests. GET requests that had
been taking between 10 and 50 seconds are now observed at around 1/10 of a
second, with much of that time apparently spent on the Marmotta side rather
than in PSQL.

After the upgrade, we saw vastly improved performance for a time, but found
that under certain load conditions similar query planning problems resurfaced.
Perhaps counterintuitively, we were able to resolve this by creating a new index on the
triples table, with a different field order from the existing four-column
index. The new index (on `spoc` rather than `cspo`) is used more readily in
the generic plan, and we've not seen slow queries (> 200ms) logged on LDP
requests since introducing it.
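
For anyone who wants to try the same thing, the new index is along these
lines (a sketch; the index name is illustrative, and whether to also add a
`WHERE deleted = false` predicate, as with the partial `idx_triples_cop`
index discussed below, is optional):

CREATE INDEX idx_triples_spoc
  ON triples (subject, predicate, object, context);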

See the post-upgrade notes on generic query plans[0] and our observations
on index order[1] for more info.

Best,

Tom Johnson
Metadata & Platform Architect
Digital Public Library of America

[0]
https://digitalpubliclibraryofamerica.atlassian.net/wiki/display/TECH/Generic+Query+Plans+in+PSQL+9.1
[1]
https://digitalpubliclibraryofamerica.atlassian.net/wiki/pages/viewpage.action?pageId=29720590

On Wed, Oct 21, 2015 at 10:38 AM, Tom Johnson <t...@dp.la> wrote:

> Hi all,
>
> We've been having problems with performance on all LDP requests for the
> past week or so, following the changes described in the quoted thread.
>
> I've tracked our current issue down to a quirk of the PREPARE statements used
> by KiWi when building PostgreSQL queries. In short, the use of PREPARE
> results in adherence to a "generic query plan" for the prepared statement
> that, in the case of queries against the LDP named graph, can be extremely
> non-performant.
>
> If you see cases where you have fast index scans in the postgresql console
> but slow queries in practice, this is likely what you are running into.
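>
> For example, you can see the generic plan directly from psql with something
> like the following (a sketch; the parameter types assume KiWi's bigint node
> ids, and the column list is abbreviated):
>
> PREPARE ldp_stmt (bigint, bigint, bigint) AS
>   SELECT id, subject, predicate, object, context
>   FROM triples
>   WHERE deleted = false AND subject = $1 AND predicate = $2 AND context = $3;
>
> EXPLAIN EXECUTE ldp_stmt(1, 2, 3);
>
> On 9.1 that shows the generic plan the prepared statement will always use,
> which can be very different from the plan you get by running EXPLAIN on the
> same query with literal values.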
>
> I believe that this is fixed in PostgreSQL 9.2, and we are currently
> planning to upgrade. It may be worth strongly recommending that Marmotta
> users upgrade to 9.2 or above as a default.
>
> A full write-up of the issue with some example query plans is on our
> project wiki[0].
>
> Best,
>
> Tom
>
> [0]
> https://digitalpubliclibraryofamerica.atlassian.net/wiki/display/TECH/Generic+Query+Plans+in+PSQL+9.1
>
> On Thu, Oct 8, 2015 at 4:04 AM, Sergio Fernández <wik...@apache.org>
> wrote:
>
>> Hi Mark,
>>
>> sorry, I've been with limited connectivity for some days, but let's catch
>> up this.
>>
>> On Fri, Oct 2, 2015 at 9:46 PM, Mark Breedlove <m...@dp.la> wrote:
>>>
>>> I just wanted to get back to you with the status of our experimentation.
>>> I still need to gather some more information by running some more of our
>>> activities, but the activities that were bogged down last week are running
>>> now without the I/O wait that we were encountering. I'll report back when
>>> we've run the full range of our processes. (Including processes that do
>>> more inserts and updates.)
>>>
>>
>> That's great news! Performance tuning is very tricky.
>>
>> BTW, since you are already using AWS, have you considered giving Amazon
>> RDS for PostgreSQL a try? I have to admit I've never had the chance to
>> test Marmotta on it, but it'd be really interesting to compare the
>> performance in scenarios like yours.
>>
>>>>> Well, internally each LDP-R uses its own context. Therefore all indexes
>>>>> that could improve the lookup would help. Try also creating at least cop.
>>>>> [...]
>>>>>
>>>>
>>>> OK, that's on my to-do list and I'll let you know ... :-)
>>>>
>>>
>>> I've created an index, `idx_triples_cop` on triples (context, object,
>>> predicate) where deleted = false.
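>>>
>>> In DDL terms, that is roughly the following (a sketch; options such as
>>> CONCURRENTLY in the statement I actually ran may have differed):
>>>
>>> CREATE INDEX idx_triples_cop
>>>   ON triples (context, object, predicate)
>>>   WHERE deleted = false;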
>>>
>>> I made note of a query last week that is executed frequently, which now
>>> uses this index:
>>>
>>> SELECT
>>> id,subject,predicate,object,context,deleted,inferred,creator,createdAt,deletedAt
>>> FROM triples WHERE deleted = false AND context = $1 AND subject = $2 AND
>>> predicate = $3 AND object = $4
>>>
>>> If I `explain analyze` that query, PG says "Index Scan using
>>> idx_triples_cop on triples".
>>>
>>
>> Exactly, that query is executed very frequently in KiWi, so the index
>> helps.
>>
>>
>>>
>>> One query that's showing up frequently in our slow query log is now:
>>>
>>> SELECT
>>> id,subject,predicate,object,context,deleted,inferred,creator,createdAt,deletedAt
>>> FROM triples WHERE deleted = false AND subject = $1 AND predicate = $2 AND
>>> context = $3
>>>
>>> According to `explain analyze`, this uses a full index scan on
>>> idx_triples_spo. However, I think that the index is just so huge that
>>> a full index scan is what's still causing it to take about 6 seconds in a
>>> lot of cases.
>>>
>> Well, specifically the spc index, which could also be quite big.
>>
>>> We're apparently getting a 98% buffer cache hit rate on idx_triples_spo,
>>> according to a query on pg_statio_user_indexes. The idx_triples_spo index
>>> is 8GB in size.
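>>>
>>> For reference, that figure comes from a query along these lines against
>>> pg_statio_user_indexes (the exact query I ran may have differed slightly):
>>>
>>> SELECT indexrelname,
>>>        round(100.0 * idx_blks_hit
>>>              / nullif(idx_blks_hit + idx_blks_read, 0), 1) AS hit_pct,
>>>        pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
>>> FROM pg_statio_user_indexes
>>> WHERE indexrelname = 'idx_triples_spo';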
>>>
>> Good.
>>
>>
>>
>>> Another thing that I did was to increase work_mem to 100GB (again,
>>> before adding the memory), which didn't make any difference at the time,
>>> but which sounds like a good idea in general (and may have been allowed
>>> to show its benefits after the filesystem cache was increased).
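>>>
>>> For what it's worth, that can be done per session while testing, e.g.:
>>>
>>> SET work_mem = '100GB';  -- applies per sort/hash operation, so large values need care
>>> SHOW work_mem;
>>>
>>> or made persistent via the work_mem setting in postgresql.conf.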
>>>
>>
>> Well, keep observing how that working memory behaves, and maybe you can
>> decrease it a bit and save some memory for other components of your setup.
>>
>>
>> Since we only have about one million ore:Aggregations in our system now,
>>> out of 11 million, I'm going to have to see how big the triples table has
>>> to get before we run into more I/O wait issues as a result of the
>>> filesystem cache / shared buffers running out. But so far so good, insofar
>>> as the I/O wait has dropped. I will follow up later with you after we try
>>> some of our harvest and mapping activities [1].
>>>
>>> [1]
>>> https://digitalpubliclibraryofamerica.atlassian.net/wiki/display/TECH/LDP+Interactions+Overview
>>>
>>
>> Based on this experience, maybe we can add some of that to the project
>> documentation, so it will help other users in the future.
>>
>> Thanks for sharing all this with us!
>>
>> Cheers,
>>
>>
>> --
>> Sergio Fernández
>> Partner Technology Manager
>> Redlink GmbH
>> m: +43 6602747925
>> e: sergio.fernan...@redlink.co
>> w: http://redlink.co
>>
>
>
