Teal metrics

Apr 11, 2021

What are teal metrics?

The teal metrics are designed to compare images before and after compression with one of our image codecs. They are expected to be more efficient and more sensitive than the current “fsim-c” metric, and slightly tuned for handling images that contain fabric.

Metric names

These are the names used in our Web API, specifically in the “encoder” section of the Luxetum API.

Name                     What is it
teal-ssim                SSIM metric, with the adjustments described below.
teal-weighted-ssim       SSIM metric, slightly boosted for fabric pictures.
teal-weighted-ssim-mse   A hybrid between SSIM and MSE, slightly boosted for fabric pictures.
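For illustration only, here is a minimal sketch of validating a metric name before building an encoder request. The helper function and the idea that the metric is passed as a plain string are assumptions for this sketch, not part of the documented API; only the three names come from the table above.

```python
# Hypothetical helper: check that a requested metric name is one of the
# teal metrics. The set of names is taken from the table above;
# everything else here is illustrative.
TEAL_METRICS = {"teal-ssim", "teal-weighted-ssim", "teal-weighted-ssim-mse"}

def is_teal_metric(name: str) -> bool:
    """Return True if `name` is a valid teal metric identifier."""
    return name in TEAL_METRICS
```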

How do they work 1: sampling

The metrics use particular coordinates in the input images to derive samples. This lets us compute the difference between two images using less computing time, which improves latency for our customers and decreases processing cost for us.

[Image: example of the particular coordinates the metrics sample from an input image]

The number of samples taken from an image follows a soft exponential, as shown in the table below. Thanks to a moderate amount of optimization with SIMD instructions, during development and testing we obtained a so-so speed of 16 megapixels per second: still not perfect, but much better than the “f-sim” metric that we currently use.

Number of pixels    Number of samples
2500                136
40000               400
90000               548
1000000             1397
16000000            4105
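The table is well approximated by a simple power law. The exponent and anchor point below are an empirical fit to the numbers above, not the actual formula used by the metrics; it reproduces the table to within roughly 0.2%.

```python
def approx_sample_count(num_pixels: int) -> int:
    """Empirical fit to the sample-count table: samples grow roughly
    as num_pixels ** 0.389, anchored at 136 samples for a
    2500-pixel image. Illustrative only."""
    return round(136 * (num_pixels / 2500) ** 0.389)
```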

For the samples themselves, each sample is composed of eight layers at resolution steps separated by a 1.2 factor. The bottom-most layer captures 16×16 luminance values at the natural scale of the image, and the uppermost layer captures 1.2⁷ × 16 × 16 ≈ 917 luminance values down-scaled to 16×16.
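To make the layer geometry concrete, here is a sketch (an assumption for illustration, not the production code) of how many source luminance values each of the eight layers covers, given the 1.2 factor between consecutive layers:

```python
def layer_value_counts(levels: int = 8, base: int = 16, step: float = 1.2):
    """Number of source luminance values covered by each layer.
    Layer 0 is the natural scale (16x16 = 256 values); each higher
    layer covers 1.2x more values, all down-scaled back to 16x16."""
    return [round(step ** k * base * base) for k in range(levels)]
```

Running it, layer 0 covers 256 values and layer 7 covers 917, matching the figure in the paragraph above.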

What do we do with each sample?

Over each sample, we can compute:

  • SSIM
  • MSE
  • Likelihood that the sample is over a patch of fabric

The last item is computed using a machine-learning model trained to identify fabric. Since it’s not perfect, and we need to account for defects in the image that appear over non-fabric areas too, the outputs from the model are clipped to the range 0.3 to 0.7: a sample the model identifies as “non-cloth” is weighted at 0.3, and a sample the model identifies as “cloth” is weighted at 0.7.
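The clipping step can be sketched as follows, under the assumption that the model emits a fabric likelihood in the range 0 to 1:

```python
def fabric_weight(p_fabric: float) -> float:
    """Clip the model's fabric likelihood to [0.3, 0.7], so that
    non-fabric samples still contribute to the score and fabric
    samples are boosted without ever dominating it."""
    return min(max(p_fabric, 0.3), 0.7)
```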

The end effect of the boosting is that fabric patches contribute more heavily to whichever metric we are computing, while parts that are not fabric still contribute to the score.

Each of the named metrics uses SSIM, MSE and fabric likelihood in slightly different ways.

[Image: example output of the machine-learning model trained to identify fabric]

  • teal-ssim: uses neither MSE nor the fabric boosting; it’s just a slightly improved version of SSIM.
  • teal-weighted-ssim: uses SSIM boosted by the fabric likelihood, in the manner explained above.
  • teal-weighted-ssim-mse: uses SSIM and MSE, boosted by fabric likelihood, in the manner explained above.
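Putting the pieces together, the weighted variants can be sketched as a fabric-weighted average over per-sample scores. The exact way SSIM and MSE are combined is not described here, so the hybrid below simply averages the two scores and is an assumption for illustration:

```python
def weighted_score(scores, weights):
    """Fabric-weighted average of per-sample scores (e.g. SSIM values).
    `weights` are the clipped fabric likelihoods in [0.3, 0.7]."""
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

def hybrid_scores(ssim_scores, mse_scores):
    """Illustrative stand-in for the SSIM/MSE hybrid: average an
    SSIM-like score and an MSE-derived score per sample, before the
    fabric weighting is applied."""
    return [(a + b) / 2 for a, b in zip(ssim_scores, mse_scores)]
```

For example, two samples scoring 1.0 and 0.0 with weights 0.7 (fabric) and 0.3 (non-fabric) yield 0.7 rather than the unweighted 0.5.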

Curious about how much you can benefit from ShimmerCat?

Fill in the form to get a performance report, find your bottlenecks, and explore possibilities.
