Like a cathode-ray tube in a television, the algorithm goes line by line, reading in the pixels.
Each pixel, as long as it is not on the top or side boundaries, will have 4 neighbors that have already been read into the machine.
Those points can be analyzed and interpolated to find the next pixel's value.
The goal is to encode the error between that value and the original value, save that, and use that to compress and decompress the image.
Even though a possibly larger integer may need to be stored, the prediction is more likely to be correct or off by only a small margin, which concentrates the distribution and makes it better for compression.
\begin{figure}[h]
\centering
...
...
\caption{\label{fig:pixels}The other 4 pixels are used to find the value of the 5th.}
\end{figure}
\subsection{Background}
The images that were used in the development of this paper were all thermal images, with values ranging from 19,197 to 25,935.
In the system, total possible values can range from 0 to 32,768.
Most images had ranges of at most 4,096 between the smallest and the largest pixel values.
The camera being used has 16 forward-facing thermal sensors, creating 16 similar thermal images every frame.
Everything detailed here can still apply to standard grayscale or RGB images, but for testing, only 16-bit thermal images were used.
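Because the pixel values occupy only a narrow band of the available range, far fewer bits are needed to describe an image's actual spread than its 16-bit container suggests. A minimal sketch with synthetic data (the frame dimensions and uniform distribution are illustrative assumptions, not the camera's actual output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for one 16-bit thermal frame: values fall in a
# narrow band (~19,197..25,935) of the full 0..32,768 range.
img = rng.integers(19_197, 25_936, size=(240, 320), dtype=np.uint16)

span = int(img.max()) - int(img.min())   # actual spread of this frame
bits_needed = span.bit_length()          # bits to cover that spread
# span <= 6,738, so 13 bits suffice even though the container is 16-bit
```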
\section{Related Work}
...
...
For example, if there are two identical blocks of just the color blue, the second does not need to be saved in full.
Instead of saving two full blocks, the second one just contains the location of the first, telling the decoder to use that block.
Huffman encoding is then used to save these numbers, optimizing how the location data is stored.
If one pattern is more frequent, the algorithm takes advantage of this, producing an even smaller file\cite{PNGdetails}.
The Huffman encoding, in conjunction with LZ77, helps form ``deflate'', the algorithm summarized here and the one used in PNG.
Our algorithm has a similar use of Huffman encoding, but a completely different algorithm than LZ77.
LZ77 seeks patterns between blocks while ours has no block structure and no explicit pattern functionality.
Ours uses the equivalent of a block size of 1, and instead of encoding the pixel data directly, it encodes the prediction errors, which are then compressed.
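For reference, the deflate combination of LZ77 and Huffman coding described above is available directly through Python's \verb|zlib| module; a quick check on repetitive data illustrates the behavior:

```python
import zlib

# Repetitive bytes, the kind of data LZ77 back-references exploit.
data = bytes([30] * 500) + bytes(range(50)) * 4

packed = zlib.compress(data, level=9)   # zlib wraps the deflate algorithm
assert zlib.decompress(packed) == data  # lossless round trip
```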
\subsection{LZW}
LZW operates differently by creating a separate code table that maps every sequence to a code.
Although it is used here for images, the original paper by Welch \cite{LZW} explains it through text examples, which will be done here as well.
Instead of looking at each character individually, it looks at variable length string chains and compresses those.
Passing through the items to be compressed, if a phrase has already been encountered, it saves the reference to the original phrase along with the next character in sequence.
In this way, the longer repeated phrases are automatically found and can be compressed to be smaller.
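The phrase-table idea can be sketched in a few lines; this is a textbook illustration of LZW on a string, not the exact variant used by TIFF:

```python
def lzw_compress(text):
    """Classic LZW sketch: emit the code for the longest phrase already
    in the table, then add phrase + next character as a new entry."""
    table = {chr(i): i for i in range(256)}   # start with single characters
    phrase, out = "", []
    for ch in text:
        if phrase + ch in table:
            phrase += ch                      # keep extending a known phrase
        else:
            out.append(table[phrase])         # emit reference to the phrase
            table[phrase + ch] = len(table)   # learn the longer phrase
            phrase = ch
    if phrase:
        out.append(table[phrase])
    return out

# Repeated phrases collapse: "ABABAB" -> [65, 66, 256, 256]
```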
...
...
This system also uses blocks like PNG in order to save patterns in the data, but it draws on the entire data stream seen so far.
Ours, similarly to PNG, only looks at a short portion of the data, which may have an advantage over LZW for images.
Images generally do not have the same patterns that text does, so it may be advantageous not to use the entire corpus when compressing an image and instead evaluate each pixel based only on nearby values.
The blue parts of the sky will be next to other blue parts of the sky, and in the realm of thermal images, temperatures will probably be most similar to nearby ones due to how heat flows.
\subsection{Similar Methods}
Our research did not find any very similar approaches, especially with 16-bit thermal images.
There are, however, many papers that may have influenced ours indirectly or come close to ours, and they need to be mentioned for both their similarities and differences.
One paper that is close is ``Encoding-interleaved hierarchical interpolation for lossless image compression'' \cite{ABRARDO1997321}.
This method seems to operate with a similar end goal, to save the interpolation, but operates on a different system, including how it interpolates.
Instead of using neighboring pixels in a raster format, it uses vertical and horizontal ribbons, and a different way of interpolating.
The ribbons alternate, going between a row that is just saved and one that is not saved but is later interpolated.
In this way it is filling in the gaps of an already robust image and saving the finer details.
This other method could possibly show an increase in speed, but not likely in overall compression.
It will not have the same benefit as ours, since ours uses interpolation on almost the entire image instead of just parts, optimizing over a larger amount of data.
This paper is also similar to ``Iterative polynomial interpolation and data compression'' \cite{Dahlen1993}, where the researchers did a similar approach but with different shapes.
The error numbers were still saved, but they specifically used polynomial interpolation, which we did not see fit to use in ours.
The closest method is ``Near-lossless image compression by relaxation-labelled prediction'' \cite{AIAZZI20021619} which has similarity with the general principles of the interpolation and encoding.
The algorithm detailed in the paper uses a clustering algorithm on the nearby points to create the interpolation, saving the errors in order to retrieve the original later.
This makes the method much more complex: rather than a direct interpolation, it relies on clustering to find the next point.
This could potentially have an advantage by using more points in the process, but the implementation becomes complicated enough that the gain may not be worth it.
Our goal was a simple and efficient encoding operation, and this method requires too much processing.
It also has a binning system like ours, with theirs based on the mean square prediction error.
The problem is that the bin a value goes into can shift over the classification process, adding to the complexity of the algorithm.
The use of more points could have been implemented in ours too, but we chose not to due to the potential additional temporal complexity.
\section{The Approach}
To begin, the border values are encoded into the system starting with the first value.
The values after that are stored as differences from the first value.
There are not many values here and the algorithm needs a place to start.
Alternative approaches could have been used, but they would have raised temporal complexity with marginal gain.
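A minimal sketch of how a border row could be stored under this scheme (the function names are ours, for illustration): the first value is kept whole, and the rest as successive differences.

```python
import numpy as np

def encode_border(row):
    """Keep the first value, then store successive differences."""
    row = np.asarray(row, dtype=np.int32)
    return int(row[0]), np.diff(row)

def decode_border(start, deltas):
    """Rebuild the row by accumulating the differences."""
    return np.concatenate(([start], start + np.cumsum(deltas)))

start, deltas = encode_border([19_500, 19_503, 19_501, 19_510])
# deltas are small (3, -2, 9) even though the raw values are ~19,500
```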
Once the middle points are reached, the pixel to the left, top left, directly above, and top right have already been read in.
Each of these values is given a point in the x-y plane, with the top left at (-1,1), top pixel at (0,1), top right pixel at (1,1), and the middle left pixel at (-1,0), giving the target (0,0).
Using the formula for a plane in 3D ($ax + by + c = z$) we get the system of equations
...
...
$$
The new matrix is full rank and can therefore be solved using \verb|numpy.linalg.solve|\cite{Numpy}.
The resulting $x$ corresponds to two values followed by the original $c$ from the $ax+by+c=z$ form, which is the predicted pixel value.
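To make the step concrete, here is a sketch of the per-pixel prediction under the coordinates above: four equations in three unknowns, reduced to the full-rank normal equations and solved with \verb|numpy.linalg.solve|.

```python
import numpy as np

# Known neighbors relative to the target pixel at (0, 0):
# top-left (-1, 1), top (0, 1), top-right (1, 1), left (-1, 0).
coords = np.array([(-1.0, 1.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.0)])
A = np.column_stack([coords, np.ones(4)])    # rows are (x, y, 1)

def predict_pixel(z):
    """Fit the plane a*x + b*y + c = z to the four neighbor values z
    and return c, the plane's height at (0, 0)."""
    z = np.asarray(z, dtype=float)
    abc = np.linalg.solve(A.T @ A, A.T @ z)  # full-rank normal equations
    return abc[2]

# Neighbors lying on the plane z = 2x + 3y + 10 predict exactly 10.
```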
Huffman encoding performs well on data with varying frequency \cite{Huffman}, which makes it a good candidate for saving the error numbers.
Most pixels will be off by low numbers since many objects have close to uniform surface temperature or have an almost uniform temperature gradient.
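As an illustrative sketch (not the paper's implementation), building Huffman code lengths over a batch of error values shows the skew directly: the frequent small errors get the shortest codes.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: code length} for a Huffman code over the symbols
    (here, prediction errors). Frequent symbols get shorter codes."""
    freq = Counter(symbols)
    if len(freq) == 1:                  # degenerate single-symbol case
        return {next(iter(freq)): 1}
    # Heap entries: (weight, tiebreak, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)  # merge the two lightest subtrees,
        w2, _, d2 = heapq.heappop(heap)  # deepening every leaf inside them
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Small errors dominate, so they receive the shortest codes:
lengths = huffman_code_lengths([0]*50 + [1]*20 + [-1]*20 + [7]*5 + [12]*5)
```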
\begin{figure}[h]
...
...
An average number between all of them was chosen, since using the average versus the individual values made little difference.
We attained an average compression ratio of $0.4057$ on a set of 262 images, with compression ratios ranging from $0.3685$ to $0.4979$.
Because the system as it stands runs off of a saved dictionary, it is better to think of it as a cross between an individual compression system and a larger archival tool.
This means that there are large changes in compression ratios depending on how many files are compressed at a time, despite the ability to decompress files individually.
When the size of the saved dictionary was included, the compression ratio on the entire set only changed from $0.4043$ to $0.4057$. However, when tested on just the first image in the set, it went from $0.3981$ to $0.7508$.
This is not a permanent issue, as changes to the method can be made to fix this.
These are detailed in the discussion section below.
This was tested on a set of at least 16 images, so this does not affect us as much.
When tested on a random set of 16 images, the ratio only changed from $0.3973$ to $0.4193$.
\hline
\end{tabular}
Our method created files that are on average 33.7\% smaller than PNG and 34.5\% smaller than LZW compression on TIFF.
\section{Discussion}
The files produced through this method are much smaller than the others, but this comes at great computational cost.
PNG compression was several orders of magnitude faster on the local machine than the method that was used in this project.
Using a compiled language instead of Python would increase the speed substantially, but there are other improvements that can be made.
The issue with \verb|numpy.linalg.solve| was later addressed to fix the potential slowdown, but calculating the inverse beforehand and using that in the system had marginal temporal benefit.
\verb|numpy.linalg.solve| runs in $O(N^3)$ for an $N\times N$ matrix, while the multiplication runs in a similar time \cite{LAPACKAlgorithms}.
The least squares method mentioned in this project also has a shortcoming, but this one cannot be solved as easily.
The pseudoinverse can be calculated beforehand, but the largest problem is that the system is solved for every pixel individually, including calculating the norm.
\verb|numpy.linalg.lstsq| itself runs in $O(N^3)$ for an $N\times N$ matrix \cite{LeastSquaredProblem}, while the pseudoinverse implementation uses more Python runtime, adding to temporal complexity.
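Because the neighbor geometry is identical for every pixel, the least-squares solve can in principle be collapsed into a single precomputed weight vector; as noted, in Python the measured benefit was marginal, but a sketch of the idea:

```python
import numpy as np

coords = np.array([(-1.0, 1.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.0)])
A = np.column_stack([coords, np.ones(4)])

# One-time pseudoinverse; its last row maps the four neighbor values
# straight to c, the predicted pixel value, so each pixel afterwards
# costs only a single dot product.
weights = np.linalg.pinv(A)[2]

def predict(z):
    return float(weights @ np.asarray(z, dtype=float))
```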
This compression suffers greatly when it is only used on individual images, which is not a problem for the project it was tested on.
The test images came from a camera that has 16 image sensors working simultaneously.
The camera works in multiple-image increments and therefore creates large packets that can be saved together, while still allowing files to be decompressed individually.
This saves greatly on the memory that is required to view an image.
It was therefore not seen as necessary to create a different system to compress individual files, as individual images are not created.
A potential workaround for this problem would be to code extraneous values into the image directly instead of adding them to the full dictionary.
This has the downside of not being able to integrate perfectly with Huffman encoding.
A leaf of the tree would have to be a trigger to not use Huffman encoding anymore and use an alternate system to read in the bits.
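The escape-leaf idea can be sketched as follows; the symbol name and the toy codebook are hypothetical, purely for illustration:

```python
ESCAPE = "ESC"   # hypothetical reserved leaf in the Huffman tree

def encode_error(err, codebook, raw_bits=16):
    """Emit the Huffman code if the error is in the dictionary; otherwise
    emit the escape code followed by a fixed-width raw value
    (two's complement for negative errors)."""
    if err in codebook:
        return codebook[err]
    mask = (1 << raw_bits) - 1
    return codebook[ESCAPE] + format(err & mask, f"0{raw_bits}b")

# Toy codebook: common errors 0 and 1, plus the escape leaf.
codebook = {0: "0", 1: "10", ESCAPE: "11"}
```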
We did not do this, but it would be a simple change for someone with a different use case.