...
Instead of saving two full blocks, the second one just contains the location of the first, telling the decoder to use that block.
Huffman encoding is then used to save these numbers, optimizing how the location data is stored.
If one pattern appears more frequently, the algorithm assigns it a shorter code, producing an even smaller file\cite{PNGdetails}.
Huffman encoding in conjunction with LZ77 forms ``deflate'', the algorithm summarized here and the one used in PNG.
Our method makes similar use of Huffman encoding, but pairs it with a completely different algorithm than LZ77.
LZ77 searches for repeated patterns across blocks, while ours has no block structure and no explicit pattern matching.
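The frequency-driven optimization described above can be sketched with a minimal Huffman coder (this illustrates the principle only, not the deflate implementation itself): merging the two least frequent subtrees at each step leaves the most frequent symbols with the shortest codes.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(data)
    # Each heap entry: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        # Prefix "0" to one subtree's codes and "1" to the other's.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' is most frequent, so its code is the shortest.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

Because every code is a leaf of one binary tree, no code is a prefix of another, which is what lets the decoder read the bitstream without separators.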
...
Instead of using neighboring pixels in a raster format, it uses vertical and horizontal ribbons, and a different way of interpolating.
The ribbons alternate, going between a row that is directly saved and one that is not saved but is later interpolated.
In this way it fills in the gaps of an already serviceable image while saving the finer details.
This other method could possibly be faster, but is unlikely to improve overall compression.
It will not see the same benefit as ours, since our method interpolates almost the entire image rather than just parts of it, letting it optimize over a larger amount of data.
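A minimal sketch of the alternating-row scheme as described above (our reading of that method, not its published implementation): even-indexed rows are stored directly, and each dropped row is reconstructed as the average of its vertical neighbors.

```python
import numpy as np

def decode_alternating_rows(saved_rows):
    """Reconstruct an image whose odd rows were dropped at encode time.

    saved_rows: the even-indexed rows of the original image, stacked.
    Each missing row is interpolated as the mean of the saved rows
    directly above and below it.
    """
    n_saved, width = saved_rows.shape
    height = 2 * n_saved - 1
    out = np.zeros((height, width), dtype=float)
    out[0::2] = saved_rows                               # directly saved rows
    out[1::2] = (saved_rows[:-1] + saved_rows[1:]) / 2   # interpolated rows
    return out

saved = np.array([[0.0, 4.0], [2.0, 8.0], [4.0, 0.0]])
restored = decode_alternating_rows(saved)
# Row 1 of the output is the average of saved rows 0 and 1: [1.0, 6.0].
```

The encoder for such a scheme would only need to store `saved_rows` plus whatever residual corrections the method keeps for the interpolated rows.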
This paper is also similar to ``Iterative polynomial interpolation and data compression'' \cite{Dahlen1993}, where the researchers did a similar approach but with different shapes.
The error values were still saved, but they specifically used polynomial interpolation, which we did not see fit to use in ours.
...
\section{Results}
We attained an average compression ratio of $0.4057$ on a set of 262 images, with compression ratios on individual images ranging from $0.3685$ to $0.4979$.
Because the system relies on a saved dictionary, it is better to think of it as a cross between an individual compression system and a larger archival tool.
This means that there are large changes in compression ratios depending on how many files are compressed at a time, despite the ability to decompress files individually and independently.
When the size of the saved dictionary was included, the compression ratio on the entire set only changed from $0.4043$ to $0.4057$.
However, when tested on a random image in the set, it went from $0.3981$ to $0.7508$.
This is not a permanent limitation, as changes to the method can address it.
These are outlined in the discussion section below.
Our tests were run on sets of at least 16 images, so this overhead does not affect us as much.
When tested on a random set of 16 images, the ratio only changed from $0.3973$ to $0.4193$.
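The effect of the shared dictionary on the ratio can be seen with a short calculation. The byte sizes below are hypothetical, chosen only to illustrate the amortization; the measured ratios are the ones reported above.

```python
def ratio_with_dictionary(compressed_sizes, original_sizes, dict_size):
    """Compression ratio when the shared dictionary is charged to the set."""
    return (sum(compressed_sizes) + dict_size) / sum(original_sizes)

# Hypothetical sizes (bytes): a fixed dictionary amortized over the set.
DICT = 50_000
one_image = ratio_with_dictionary([40_000], [100_000], DICT)
many_images = ratio_with_dictionary([40_000] * 100, [100_000] * 100, DICT)

# The same dictionary dominates a single file but is negligible
# when spread across a hundred files.
assert one_image > many_images
```

This is why the whole-set ratio barely moved while the single-image ratio nearly doubled: the dictionary is a fixed cost divided over however many files share it.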
...
\section{Discussion}
The files produced through this method are much smaller than the ones produced by the others, but this comes at great computational costs in its current implementation.
PNG compression was several orders of magnitude faster on the local machine than the method that was used in this project.
Using a compiled language, or integrating compiled routines, instead of pure Python will increase the speed, but there are other improvements that can be made.
The issue with \verb|numpy.linalg.solve| was later addressed to fix the potential slowdown.
Precomputing the matrix inverse and reusing it in the system yielded only a marginal speed benefit.
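The trade-off can be sketched as follows (a minimal comparison, not our full pipeline): when the same coefficient matrix is reused across many right-hand sides, inverting it once replaces each full solve with a matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))          # shared coefficient matrix
rhs = [rng.standard_normal(50) for _ in range(1000)]

# Option 1: solve the system from scratch for every right-hand side.
solved = [np.linalg.solve(A, b) for b in rhs]

# Option 2: invert A once, then reuse the inverse. Cheaper per solve,
# but can be less numerically stable for ill-conditioned matrices.
A_inv = np.linalg.inv(A)
reused = [A_inv @ b for b in rhs]

# Both approaches agree to floating-point tolerance on this matrix.
assert all(np.allclose(s, r) for s, r in zip(solved, reused))
```

In practice a one-time factorization (e.g. `scipy.linalg.lu_factor` with `lu_solve`) is usually preferred over forming an explicit inverse, since it gives the same amortization with better numerical behavior.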