Commit 314910ba authored by Bryce Hepner

more typos fixed

parent 4b2dff06
Pipeline #2588 passed with stage in 7 seconds
-This is pdfTeX, Version 3.14159265-2.6-1.40.20 (TeX Live 2019/Debian) (preloaded format=pdflatex 2020.7.20) 11 JUL 2022 11:52
+This is pdfTeX, Version 3.14159265-2.6-1.40.20 (TeX Live 2019/Debian) (preloaded format=pdflatex 2020.7.20) 11 JUL 2022 13:17
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
@@ -431,22 +431,22 @@ Package pdftex.def Info: Normal_No_Title.png used on input line 245.
LaTeX Warning: `h' float specifier changed to `ht'.
[3 <./Uniform_No_Title.png> <./Normal_No_Title.png>]
-Underfull \hbox (badness 7273) in paragraph at lines 271--283
+Underfull \hbox (badness 7273) in paragraph at lines 272--284
[]\OT1/cmr/m/n/10 This was tested on a set of a least 16
[]
-Underfull \hbox (badness 5161) in paragraph at lines 271--283
+Underfull \hbox (badness 5161) in paragraph at lines 272--284
\OT1/cmr/m/n/10 im-ages, so this does not af-fect us as much.
[]
-Underfull \hbox (badness 4353) in paragraph at lines 271--283
+Underfull \hbox (badness 4353) in paragraph at lines 272--284
\OT1/cmr/m/n/10 When tested on a ran-dom set of 16 im-ages,
[]
-Underfull \hbox (badness 3428) in paragraph at lines 271--283
+Underfull \hbox (badness 3428) in paragraph at lines 272--284
\OT1/cmr/m/n/10 the ra-tio only changed from $0\OML/cmm/m/it/10 :\OT1/cmr/m/n/1
0 3973$ to $0\OML/cmm/m/it/10 :\OT1/cmr/m/n/10 4193$.
[]
@@ -462,17 +462,17 @@ Underfull \hbox (badness 7362) in paragraph at lines 26--26
[]
)
-Package atveryend Info: Empty hook `BeforeClearDocument' on input line 316.
+Package atveryend Info: Empty hook `BeforeClearDocument' on input line 317.
[4]
-Package atveryend Info: Empty hook `AfterLastShipout' on input line 316.
+Package atveryend Info: Empty hook `AfterLastShipout' on input line 317.
(./main.aux)
-Package atveryend Info: Executing hook `AtVeryEndDocument' on input line 316.
+Package atveryend Info: Executing hook `AtVeryEndDocument' on input line 317.
\snap@out=\write5
\openout5 = `main.dep'.
Dependency list written on main.dep.
-Package atveryend Info: Executing hook `AtEndAfterFileList' on input line 316.
+Package atveryend Info: Executing hook `AtEndAfterFileList' on input line 317.
Package rerunfilecheck Info: File `main.out' has not changed.
(rerunfilecheck) Checksum: 32E97EDE93C04899CE7128EA0CB0D790;513.
Package rerunfilecheck Info: File `main.brf' has not changed.
@@ -503,7 +503,7 @@ y9.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmti10.pfb
></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmti9.pfb></usr/
share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmtt10.pfb></usr/share/
texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmtt9.pfb>
-Output written on main.pdf (4 pages, 247963 bytes).
+Output written on main.pdf (4 pages, 248017 bytes).
PDF statistics:
180 PDF objects out of 1000 (max. 8388607)
152 compressed objects within 2 object streams
@@ -130,7 +130,7 @@ For example, if there are two identical blocks of just the color blue, the secon
Instead of saving two full blocks, the second one just contains the location of the first, telling the decoder to use that block.
Huffman encoding is then used to save these numbers, optimizing how the location data is stored.
If one pattern is more frequent, the algorithm should optimize over this, producing an even smaller file\cite{PNGdetails}.
-The Huffman encoding in conjuction with LZ77 helps form ``deflate'', the algorithm summarized here, and the one used in PNG.
+The Huffman encoding in conjunction with LZ77 helps form ``deflate'', the algorithm summarized here, and the one used in PNG.
Our algorithm has a similar use of Huffman encoding, but a completely different algorithm than LZ77.
LZ77 seeks patterns between blocks while ours has no block structure and no explicit pattern functionality.
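The deflate behavior described above can be illustrated with Python's standard-library `zlib` module, which implements deflate (a sketch for illustration only; it is unrelated to the paper's own pipeline). Repeating a block of solid-color pixel data adds almost nothing to the compressed size, because LZ77 stores the repeat as a back-reference and Huffman coding packs the references tightly:

```python
import zlib

# Two identical "blocks" of solid blue pixels: LZ77 back-references let
# deflate store the second block as a pointer into the first.
block = bytes([0, 0, 255]) * 1024          # 1024 three-byte "blue" pixels
two_blocks = block + block

one_compressed = zlib.compress(block, 9)
two_compressed = zlib.compress(two_blocks, 9)

# The repeated block adds only a handful of bytes.
print(len(block), len(one_compressed), len(two_compressed))
```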
@@ -154,7 +154,7 @@ This method seems to operate with a similar end goal, to save the interpolation,
Instead of using neighboring pixels in a raster format, it uses vertical and horizontal ribbons, and a different way of interpolating.
The ribbons alternate, going between a row that is directly saved and one that is not saved but is later interpolated.
In this way it is filling in the gaps of an already robust image and saving the finer details.
-This other method could possibily show an increase in speed but not likely in overall compression.
+This other method could possibly show an increase in speed but not likely in overall compression.
That method will not have the same benefit as ours, since ours applies interpolation to almost the entire image instead of just parts, letting it optimize over a larger amount of data.
This paper is also similar to ``Iterative polynomial interpolation and data compression'' \cite{Dahlen1993}, where the researchers did a similar approach but with different shapes.
The error numbers were still saved, but they specifically used polynomial interpolation, which we did not see fit to use in ours.
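The shared idea in these interpolation-based schemes (predict each pixel from already-decoded neighbors and store only the prediction error) can be sketched in a few lines of numpy. This is a deliberately minimal stand-in, predicting each pixel from its left neighbor, not the actual interpolation used in any of the cited papers: smooth image rows yield small residuals, which an entropy coder such as Huffman then stores cheaply.

```python
import numpy as np

def residuals_from_left(row: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left neighbor; keep only the error."""
    pred = np.empty_like(row)
    pred[0] = 0                      # no left neighbor: predict zero
    pred[1:] = row[:-1]              # predict the previous pixel's value
    return row.astype(np.int16) - pred.astype(np.int16)

def reconstruct(res: np.ndarray) -> np.ndarray:
    # A cumulative sum exactly inverts the left-neighbor prediction.
    return np.cumsum(res).astype(np.uint8)

row = np.array([100, 101, 103, 102, 104], dtype=np.uint8)
res = residuals_from_left(row)       # small values: [100, 1, 2, -1, 2]
assert np.array_equal(reconstruct(res), row)
```

The residuals cluster near zero even though the raw pixel values do not, which is what makes the follow-on Huffman stage effective.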
@@ -260,13 +260,14 @@ An average number between all of them was chosen, since using the average versus
\section{Results}
-We attained an average compression ratio of $0.4057$ on a set of 262 images, with compression ratios ranging from $0.3685$ to $0.4979$.
+We attained an average compression ratio of $0.4057$ on a set of 262 images, with compression ratios on individual images ranging from $0.3685$ to $0.4979$.
Because the system relies on a saved dictionary, it is better to think of it as a cross between an individual compression system and a larger archival tool.
-This means that there are large changes in compression ratios depending on how many files are compressed at a time, despite the ability to decompress files individually.
+This means that there are large changes in compression ratios depending on how many files are compressed at a time, despite the ability to decompress files individually and independently.
-When the size of the saved dictionary was included, the compression ratio on the entire set only changed from $0.4043$ to $0.4057$. However, when tested on just the first image in the set, it went from $0.3981$ to $0.7508$.
+When the size of the saved dictionary was included, the compression ratio on the entire set only changed from $0.4043$ to $0.4057$.
+However, when tested on a random image in the set, it went from $0.3981$ to $0.7508$.
This is not a permanent issue, as changes to the method can be made to fix this.
-These are detailed in the discussion section below.
+These are outlined in the discussion section below.
This was tested on a set of at least 16 images, so this does not affect us as much.
When tested on a random set of 16 images, the ratio only changed from $0.3973$ to $0.4193$.
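The effect described above is just amortization of a fixed overhead: the shared dictionary's size is paid once, so its contribution to the per-image compression ratio shrinks as more images share it. A minimal sketch, using hypothetical byte counts rather than the paper's actual file sizes:

```python
def ratio_with_dictionary(orig_bytes: int, comp_bytes: int,
                          dict_bytes: int, n_images: int) -> float:
    """Compression ratio for one image when a shared dictionary's size
    is amortized equally across the n_images that use it."""
    return (comp_bytes + dict_bytes / n_images) / orig_bytes

# Hypothetical sizes: one image alone bears the whole dictionary cost,
# while a set of 100 images barely notices it.
print(ratio_with_dictionary(1_000_000, 400_000, 300_000, 1))    # 0.7
print(ratio_with_dictionary(1_000_000, 400_000, 300_000, 100))  # 0.403
```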
@@ -285,9 +286,9 @@ Our method created files that are on average 33.7\% smaller than PNG and 34.5\%
\section{Discussion}
-The files produced through this method are much smaller than the others, but this comes at great computational costs.
+The files produced through this method are much smaller than the ones produced by the others, but this comes at great computational costs in its current implementation.
PNG compression was several orders of magnitude faster on the local machine than the method that was used in this project.
-Using a compiled language instead of python will increase the speed, but there are other improvements that can be made.
+Using a compiled language or integrated system instead of python will increase the speed, but there are other improvements that can be made.
The issue with \verb|numpy.linalg.solve| was later addressed to fix the potential slowdown.
Calculating the inverse beforehand and using it in the system provided only a marginal speed benefit.
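The trade-off mentioned here can be sketched with numpy (an illustrative example on a small made-up system, not the paper's actual matrices): `numpy.linalg.solve` factors the matrix on every call, while a precomputed inverse turns each repeated solve into a single matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # fixed, well-conditioned system
b = rng.standard_normal(4)

# Option 1: solve the system directly each time (factorization per call).
x_solve = np.linalg.solve(A, b)

# Option 2: invert once up front, then reuse the inverse for every
# right-hand side (one matrix-vector product per call).
A_inv = np.linalg.inv(A)
x_inv = A_inv @ b

# Both give the same solution up to floating-point error.
assert np.allclose(x_solve, x_inv)
```

Since the matrix here is fixed across solves, precomputing its inverse (or a factorization) is a standard way to amortize the setup cost, though as the text notes the observed benefit was marginal.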