We use the YIQ color model to exploit the human eye's different sensitivity to the Y, I, and Q channels: we assign more bits to Y and fewer bits to I and Q. But this much is standard practice.
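As a concrete sketch of such a bit allocation (the bit depths, value ranges, and function names below are illustrative assumptions, not the encoder's actual parameters), a uniform scalar quantizer can store Y more precisely than I and Q:

```python
def quantize(x, bits, lo=-1.0, hi=1.0):
    """Map x in [lo, hi] to an integer code with 2**bits levels."""
    levels = (1 << bits) - 1
    code = round((x - lo) / (hi - lo) * levels)
    return max(0, min(levels, code))

def dequantize(code, bits, lo=-1.0, hi=1.0):
    """Reconstruct an approximate channel value from its integer code."""
    levels = (1 << bits) - 1
    return lo + code / levels * (hi - lo)

# Example allocation (hypothetical): 7 bits for Y, 4 bits each for I and Q.
BITS = {"Y": 7, "I": 4, "Q": 4}

def roundtrip(x, channel):
    b = BITS[channel]
    return dequantize(quantize(x, b), b)
```

The reconstruction error is bounded by half a quantization step, so the coarser I and Q channels trade precision for space.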
For each range block, we search for the best-matching domain block on the Y channel only, and we use the corresponding(1) blocks in the I and Q channels as well-matching(2) blocks for the I and Q components. Thus we store only one geometric transformation (the isometry and the address of the selected domain block) for all three YIQ components, while we recompute the optimal massic transformation for each of the YIQ channels.
This approach both saves compression time and increases the compression ratio, compared to the obvious alternative of encoding each of the three YIQ channels separately.
2 - Obviously they are not the best-matching blocks, but that is precisely the point: we accept sub-optimal block mappings on the I and Q channels.
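The shared search can be sketched as follows (a minimal illustration with flattened 1-D blocks and hypothetical helper names; the real encoder works on 2-D blocks, also tries the square isometries, and quantizes the massic parameters):

```python
def massic_fit(domain, rng):
    """Least-squares scale s and offset o minimizing sum((s*d + o - r)**2)."""
    n = len(domain)
    sd, sr = sum(domain), sum(rng)
    sdd = sum(d * d for d in domain)
    sdr = sum(d * r for d, r in zip(domain, rng))
    den = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / den if den else 0.0
    o = (sr - s * sd) / n
    return s, o

def mse(domain, rng, s, o):
    return sum((s * d + o - r) ** 2 for d, r in zip(domain, rng)) / len(rng)

def encode_range(range_yiq, domains_yiq):
    """Search the best domain on Y only; reuse its address for I and Q,
    recomputing the massic transform (s, o) separately per channel."""
    best = None
    for addr, dom_y in enumerate(domains_yiq["Y"]):
        s, o = massic_fit(dom_y, range_yiq["Y"])
        err = mse(dom_y, range_yiq["Y"], s, o)
        if best is None or err < best[1]:
            best = (addr, err)
    addr = best[0]
    # One geometric transform (here just the address; the isometry is
    # omitted for brevity), three massic transforms:
    return {
        "address": addr,
        "massic": {c: massic_fit(domains_yiq[c][addr], range_yiq[c])
                   for c in ("Y", "I", "Q")},
    }
```

Only the Y channel pays the cost of the full domain search; the I and Q channels reuse its result.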
How we can encode true color images
In the first stage we convert the image from the RGB model to another color model better suited to exploiting the differences in how the human eye perceives colors.
Candidate models include YIQ, YUV, HLS, and HSV; we selected the YIQ model. Since the variance of the chrominance data (I, Q) tends to be lower than that of the Y channel, we can save space by reducing the dynamic range of the I and Q channels.
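For reference, the classic NTSC RGB-to-YIQ conversion (standard textbook coefficients, assuming R, G, B in [0, 1]; the encoder's exact matrix may differ) looks like this:

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ: Y is luminance, I and Q carry chrominance."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    """Approximate inverse with the usual 3-decimal coefficients."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b
```

Note that a pure gray (R = G = B) maps to I = Q = 0, which is why the chrominance planes of natural images have such low variance.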
Once we have selected a color model, there are three ways of doing the encoding:
How we encode true color images
Suppose we have a true-color image in RGB format (24 bpp), which we split into Y, I, and Q color planes. We could now apply a gray-scale algorithm (such as Prof. Fisher's enc.c) three times, once to each of the Y, I, and Q channels (obviously with different encoding parameters for each channel, privileging Y at the expense of I and Q). But doing so yields an algorithm almost three times slower than gray-scale encoding, which is why we developed a different approach.
Examining several different images and their related YIQ color planes, we noticed that the "shape" of the gray-scale image on each channel is preserved, even if not exactly.
We noticed that if a range block of the Y component contains some detail, its shape can be distinguished in the I and Q components too, although more or less attenuated in the gray-level luminance of that channel. What is interesting is that we do not need to change the parameters of the geometric transform (offset of the best block and isometry) across the three channels; we only recalculate the massic transform parameters for each channel.
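The isometry mentioned above is one of the eight orientations of a square block (identity, rotations, and mirror flips), as in standard fractal coders. A minimal sketch, with blocks stored as lists of rows:

```python
def rotate90(block):
    """Rotate a square block 90 degrees clockwise."""
    return [list(row) for row in zip(*block[::-1])]

def flip_h(block):
    """Mirror a block horizontally."""
    return [row[::-1] for row in block]

def isometries(block):
    """Yield the 8 orientations of a square block: 4 rotations of the
    block itself and 4 rotations of its horizontal mirror."""
    b = block
    for _ in range(4):
        yield b
        b = rotate90(b)
    b = flip_h(block)
    for _ in range(4):
        yield b
        b = rotate90(b)
```

Because the same isometry index is reused across Y, I, and Q, it is stored only once per range block.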
Last Revised: April, 1997 - ver. 2.01