[KPhotoAlbum] More thumbnail investigations

Robert Krawitz rlk at alum.mit.edu
Sat May 19 05:24:41 CEST 2018


On Fri, 18 May 2018 19:11:14 -0400 (EDT), Robert Krawitz wrote:
> On Fri, 18 May 2018 23:45:31 +0200, Johannes Zarl-Zierl wrote:
>> Am Montag, 14. Mai 2018, 14:41:43 CEST schrieb Robert Krawitz:
>>> The best solution would be to generate thumbnails upon image load for
>>> images up to a certain size.  That would combine nicely with the MD5
>>> code, which can also profit from having the entire file (since the
>>> underlying crypto code in Qt only does 16K I/O ops).  We could always
>>> postpone the thumbnail generation for really big files (and files that
>>> need load methods other than JPEG or thumbnail extraction from RAW) to
>>> the end.
>>
>> My gut feeling is that we do read image files too many times (at
>> least 3 times for exif, thumbnails, md5) and without optimizing for
>> cache-friendliness.
>
> We do indeed.  We may read them four times; I'm not certain.  I've
> actually added a fourth one -- a scout thread (actually, I'm finding
> that two scouts work best, but we can tune it) that slurps the data
> into RAM, so the other reads are satisfied by buffering (and I've put
> in a protocol so the scout thread doesn't get too far ahead).  Combining
> thumbnail building with everything else helps too.
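
The scout-thread idea above — a reader that slurps each file into RAM ahead
of the workers, throttled so it can't run too far ahead — might be sketched
roughly like this.  This is an illustrative sketch in Python, not
KPhotoAlbum's actual code; the `max_ahead` knob and the MD5 consumer stand
in for whatever work the real pipeline does:

```python
import hashlib
import queue
import threading

def scout(paths, ready, ahead):
    """Read each file fully so later reads are satisfied from the cache.
    `ahead` is a semaphore capping how far the scout may run ahead."""
    for path in paths:
        ahead.acquire()          # block if too far ahead of the consumer
        with open(path, "rb") as f:
            f.read()             # warm the cache; the data itself is discarded
        ready.put(path)
    ready.put(None)              # sentinel: no more files

def process_all(paths, max_ahead=2):
    """Consume files in scout order, hashing each (a stand-in workload)."""
    ready = queue.Queue()
    ahead = threading.Semaphore(max_ahead)
    t = threading.Thread(target=scout, args=(paths, ready, ahead))
    t.start()
    sums = {}
    while True:
        path = ready.get()
        if path is None:
            break
        with open(path, "rb") as f:   # ideally served from the page cache
            sums[path] = hashlib.md5(f.read()).hexdigest()
        ahead.release()               # let the scout advance by one file
    t.join()
    return sums
```

`max_ahead` is the "doesn't get too far ahead" throttle mentioned above; the
measurements described here suggested a single scout with a small lead
worked best on this workload.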

So I did some more performance measurement, and found that one scout
thread actually works best.  I also tuned the I/O sizes for both the
scout thread and the MD5 checksumming.
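
The I/O-size point — hashing in larger sequential reads rather than the 16K
operations mentioned earlier for Qt's underlying crypto code — can be
illustrated like so.  Again a sketch in Python rather than Qt, with a
made-up `chunk_size` default:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Hash a file in large sequential reads.

    A bigger chunk_size means fewer, larger I/O operations, which is
    friendlier to a rotating disk than many small (e.g. 16K) reads.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()
```

The result is identical for any chunk size; only the I/O pattern changes,
which is why this is a tuning knob rather than a correctness issue.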

There turned out to be one more subtle (but quite significant)
performance issue in the new image loading code: it was computing
MD5 checksums on all "modified" filenames, which can be expensive if
you have a lot of suffix substitutions; the overhead was on the order
of 25-33% on both SSD and hard disk.  With that fixed (and proper I/O
tuning), I'm now getting the kind of I/O performance I expect on the
hard disk (95 MB/sec or so with 100-110 IO/sec when reading 10 MB
image files).  The hard disk maxes out around 115 MB/sec, but that
needs sustained streaming I/O.  I'm getting about 350 MB/sec off the
SSD, but that appears to be partly CPU limited on my system; if I turn
off thumbnail building I get about 400-420 MB/sec (the peak is about
550 MB/sec, and I've gotten close to that with sufficient threading).
With an NVMe SSD I'd probably get a little better performance but not
enough to matter.  Being able to load 10800 images in 4'30" is quite
satisfactory (it's about 16'20" on hard disk).
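
As a sanity check on those wall-clock figures (assuming roughly 10 MB per
image, as in the hard-disk measurement above):

```python
def throughput_mb_per_sec(n_images, mb_per_image, minutes, seconds):
    """Implied average throughput for a batch image load."""
    return n_images * mb_per_image / (minutes * 60 + seconds)

# 10800 images at ~10 MB each:
ssd = throughput_mb_per_sec(10800, 10, 4, 30)   # 4'30" -> 400.0 MB/sec
hdd = throughput_mb_per_sec(10800, 10, 16, 20)  # 16'20" -> ~110 MB/sec
```

Both implied averages are consistent with the directly measured rates
quoted above (roughly 350-420 MB/sec on SSD, 95-115 MB/sec on hard disk).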

I'm now fairly confident (and I'll be preparing to push this code
this weekend) that the image loading is close to the best we're going
to get on a hard disk system; it would take some pretty fancy
footwork to do better on an SSD.

>>> This work may not be entirely trivial, but it could have a pretty big
>>> payoff when loading files.
>>
>> I've shied away from tackling this issue because of the complexity
>> of the code it touches.
>
> It's pretty complex code, to be sure, but this is the very first thing
> people see (how fast does it read my photos, and how fast can I skim
> through the thumbnails?), and if you have a lot of images, it's very
> important from a workflow perspective.

I'm going to try the thumbnail rebuild thing overnight; I'm curious
whether some of my other changes are having a significant impact
there.
-- 
Robert Krawitz                                     <rlk at alum.mit.edu>

***  MIT Engineers   A Proud Tradition   http://mitathletics.com  ***
Member of the League for Programming Freedom  --  http://ProgFree.org
Project lead for Gutenprint   --    http://gimp-print.sourceforge.net

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton

