Andrew Stevens wrote:

For the same x2 special-case as your algorithm there is also some very interesting work based on ideas from some Sony researchers. There you again choose your weights dynamically based on the context in which the new pixel appears. However, the weight-selection function is constructed by a 'learning' process: you 'train' your 'smart filter' on a huge set of training data (upscaling downsampled video and trying to match the original as closely as possible).

The results are sometimes eerily good!
The training approach (and the larger 3x3 'context' they use) allows them to
avoid certain kinds of artefact that I suspect your technique might share with median-like non-linear scaling filters. The usual complaint is a tendency towards optical 'fattening' of fine features. Median filters also tend to be expensive (in hardware) to make work for fractional scale factors, though I suspect yours is quite 'friendly' in that regard.
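For what it's worth, the general shape of such a 'trained filter' can be sketched quite compactly: classify each pixel's context, then fit one set of linear weights per class by least squares against downsampled/original training pairs. The gradient-direction classifier and the eight classes below are invented purely for illustration — they are not the actual scheme the researchers used:

```python
import numpy as np

def classify(patches):
    # Toy context measure: quantised dominant gradient direction of the
    # 3x3 patch (a guess -- the real classifier is more elaborate).
    gx = patches[:, 5] - patches[:, 3]     # horizontal difference
    gy = patches[:, 7] - patches[:, 1]     # vertical difference
    angle = np.arctan2(gy, gx)             # in (-pi, pi]
    return ((angle + np.pi) / (2 * np.pi) * 8).astype(int) % 8

def train_context_filters(lowres_patches, hires_targets, n_classes=8):
    """Fit one set of interpolation weights per context class.

    lowres_patches: (N, 9) flattened 3x3 neighbourhoods taken from the
    downsampled image; hires_targets: (N,) matching original pixels.
    """
    classes = classify(lowres_patches)
    weights = np.zeros((n_classes, 9))
    for c in range(n_classes):
        mask = classes == c
        if mask.any():
            # least-squares fit: patch . w ~= original pixel
            w, *_ = np.linalg.lstsq(lowres_patches[mask],
                                    hires_targets[mask], rcond=None)
            weights[c] = w
    return weights
```

At upscale time you would classify each new pixel's context the same way and apply that class's weights — so the filter stays a cheap dot product per pixel, with all the 'learning' paid for offline.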

I'm not sure how this came about, but I thought about training a backpropagation neural net with sample images so that it might learn what the missing pixels look like from the surrounding pixels.

At first the results looked surprisingly good. Later I discovered that the network is heavily dependent on the sample images used to train it, i.e. if there are a lot of uphill diagonal lines in the sample images, all uphill lines in the upsampled picture look perfectly smooth, but the downhill lines look excessively jagged. The same happens in reverse if the network is trained the other way round.

I thought it might be because I was only using a 3x3 grid (effectively 6 input pixels), so I tried a 5x3 grid, which simply made things look worse. I then went back to a 3x3 grid but passed the RGB values through the network together, in case the channels might affect each other. That looked promising at first, but as it was trained on more images it became progressively worse.

The result appears no better than a box filter (as used by pamscale).

I'm not sure if this is because I haven't trained it enough, whether this is something that has no pattern and cannot be learnt, or whether I'm using the wrong sort of images.

I don't suppose anyone has any suggestions?

This is how I am training the network, with a 3x3 grid:

1 2 3
4 5 6
7 8 9

Take pixels 1, 2, 3, 7, 8 and 9 as the inputs to the neural net, and pixel 5 as the output.
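That setup can be sketched as a minimal backprop net in Python/NumPy. The layer sizes, learning rate and initialisation below are my own guesses, not Mark's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

# 6 inputs (pixels 1,2,3,7,8,9 of the 3x3 grid), one small hidden
# layer, 1 output (the missing centre pixel 5).
W1 = rng.normal(0, 0.5, (6, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1))
b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, lr=0.5):
    """One full-batch backpropagation step; returns the current MSE."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)          # hidden activations
    y = sigmoid(h @ W2 + b2)          # predicted centre pixel
    # gradients of the squared error, chained back through the sigmoids
    dy = (y - target) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy / len(x)
    b2 -= lr * dy.mean(axis=0)
    W1 -= lr * x.T @ dh / len(x)
    b1 -= lr * dh.mean(axis=0)
    return float(((y - target) ** 2).mean())
```

Feeding it batches of (six surrounding pixels, centre pixel) pairs extracted from training images and calling `train_step` repeatedly is the whole training loop; upscaling then just runs the forward pass for each missing pixel.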

Mark




_______________________________________________
Mjpeg-users mailing list
Mjpeg-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mjpeg-users
