%SUMMARY
%- ABSTRACT
%- INTRODUCTION
%# BASICS
%- \acs{DNA} STRUCTURE
%- DATA TYPES
% - BAM/FASTQ
% - NON STANDARD
%- COMPRESSION APPROACHES
% - SAVING DIFFERENCES WITH GIVEN BASE \acs{DNA}
% - HUFFMAN ENCODING
% - PROBABILITY APPROACHES (WITH BASE?)
%
%# COMPARING TOOLS
%-
%# POSSIBLE IMPROVEMENT
%- \acs{DNA}S STOCHASTICAL ATTRIBUTES
%- IMPACT ON COMPRESSION

% Structure:
% - Focus/Goal (why and what)
% - Procedure (what and how)
% . Specs and used tools

%\chapter{Analysis for Possible Compression Improvements}
\chapter{Environment and Procedure to Determine the State-of-the-Art Efficiency and Compression Ratio of Relevant Tools}
\label{k5:goals}
% goal define
Since improvements must be measurable, a baseline that would need to be beaten has to be defined beforehand. Others have dealt with this task several times using common algorithms and tools, and have published their results. However, since the test case that needs to be built for this work is rather uncommon in its composition, the available data are of little use. Therefore, new test data must be created.\\
The goal of this chapter is to determine a baseline for the efficiency and effectivity of state-of-the-art tools used to compress \ac{DNA}. This baseline is set by two important factors:
\begin{itemize}
\item Efficiency: the \textbf{duration} the process runs for
\item Effectivity: the difference in \textbf{size} between input and compressed data
\end{itemize}
As a third point, it should be verified that files were compressed losslessly. This is done by comparing the source file to a copy that was compressed and then decompressed again. If one of the two processes operates lossily, a difference in content or size between the source file and the copy should be recognizable.
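\noindent A minimal sketch of such a round-trip check is shown below. Note that \texttt{gzip} merely stands in for the tool under test and the file name is hypothetical; \texttt{time} and \texttt{ls} capture the duration and size values defined above:
\begin{lstlisting}[language=bash]
cp chr1.fa chr1.copy.fa       # work on a copy of the source file
time gzip chr1.copy.fa        # compress; efficiency: duration
ls -l chr1.fa chr1.copy.fa.gz # effectivity: difference in size
time gzip -d chr1.copy.fa.gz  # decompress again
cmp chr1.fa chr1.copy.fa      # no output: the round trip was lossless
\end{lstlisting}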
%environment, test setup, raw results
\section{Server Specifications and Test Environment}
To be able to recreate this setup in the future, the relevant specifications and the commands that revealed this information are listed in this section.\\
Reading from \texttt{/proc/cpuinfo} reveals the processor specifications. Since the information displayed in the eight entries (one per logical processor) is largely redundant, only the last entry is shown. The relevant specifications are listed below:
\noindent
\begin{lstlisting}[language=bash]
cat /proc/cpuinfo
\end{lstlisting}
\begin{itemize}
\item available logical processors: 0 - 7
\item vendor: GenuineIntel
\item cpu family: 6
\item model nr, name: 58, Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
\item microcode: 0x15
\item MHz: 2280.874
\item cache size: 8192 KB
\item cpu cores: 4
\item fpu and fpu exception: yes
\item address sizes: 36 bits physical, 48 bits virtual
\end{itemize}
% explanation on some entry: https://linuxwiki.de/proc/cpuinfo
%\begin{em}
%processor : 7
%vendor\_id : GenuineIntel
%cpu family : 6
%model : 58
%model name : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
%stepping : 9
%microcode : 0x15
%cpu MHz : 2280.874
%cache size : 8192 KB
%physical id : 0
%siblings : 8
%core id : 3
%cpu cores : 4
%apicid : 7
%initial apicid : 7
%fpu : yes
%fpu\_exception : yes
%cpuid level : 13
%wp : yes
%flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant\_tsc arch\_perfmon pebs bts rep\_good nopl xtopology nonstop\_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds\_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4\_1 sse4\_2 x2apic popcnt tsc\_deadline\_timer aes xsave avx f16c rdrand lahf\_lm cpuid\_fault epb pti tpr\_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
%vmx flags : vnmi preemption\_timer invvpid ept\_x\_only flexpriority tsc\_offset vtpr mtf vapic ept vpid unrestricted\_guest
%bugs : cpu\_meltdown spectre\_v1 spectre\_v2 spec\_store\_bypass l1tf mds swapgs itlb\_multihit srbds mmio\_unknown
%bogomips : 6784.88
%clflush size : 64
%cache\_alignment : 64
%address sizes : 36 bits physical, 48 bits virtual
%power management:
%\end{em}
The installed \ac{RAM} offered a total of 16~\acs{GB}, consisting of four 4~\acs{GB} modules. The specifications relevant for this paper are listed below:
\begin{itemize}
\item{Total/Data Width: 64 bits}
\item{Size: 4~\acs{GB}}
\item{Type: DDR3}
\item{Type Detail: Synchronous}
\item{Speed/Configured Memory Speed: 1600 Megatransfers/s}
\end{itemize}
%dmidecode --type 17
% ...
%Handle 0x0062, DMI type 17, 34 bytes
%Memory Device
% Array Handle: 0x0056
% Error Information Handle: Not Provided
% Total Width: 64 bits
% Data Width: 64 bits
% Size: 4 GB
% Form Factor: DIMM
% Set: None
% Locator: DIMM B2
% Bank Locator: BANK 3
% Type: DDR3
% Type Detail: Synchronous
% Speed: 1600 MT/s
% Manufacturer: Samsung
% Serial Number: 148A8133
% Asset Tag: 9876543210
% Part Number: M378B5273CH0-CK0
% Rank: 2
% Configured Memory Speed: 1600 MT/s
%
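\noindent The memory values listed above were read with \texttt{dmidecode}; type 17 selects the memory device entries (the raw output is preserved in the source comments). The command requires root privileges:
\begin{lstlisting}[language=bash]
dmidecode --type 17
\end{lstlisting}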
\section{Operating System and Additionally Installed Packages}
To keep the testing environment in a consistent state, non-project-specific processes running in the background should be avoided. Due to the following considerations, a current Linux distribution was chosen as a suitable operating system:
\begin{itemize}
\item{Factors that interfere with a consistent efficiency value should be avoided.}
\item{Packages, support and user experience should be present to a reasonable extent.}
\end{itemize}
Some background processes will still run while the compression analysis is performed. This is owed to the demands of an increasingly complex operating system that is needed to execute complex programs. Considering that different tools will be executed in this environment, minimizing the background processes would require building a custom operating system or configuring an existing one to fit this specific use case. The time limitation of this work rules out both alternatives.
%By comparing the values of explained factors, a sweet spot can be determined:
% todo: add preinstalled package/program count and other specs
Choosing \textbf{Debian GNU/Linux} version \textbf{11} provides enough packages to run every tool without spending too much time on the setup.\\
The graphical user interface and most other optional packages were omitted. The only additional package added during the installation process is the ssh server package. Furthermore, the packages required by the compression tools were installed. Lastly, some additional packages were installed to simplify work processes and to increase the security of the environment; a reconstruction of the corresponding installation commands is sketched at the end of this section.
\begin{itemize}
\item{installation process: ssh-server}
\item{tool requirements: git, libhts-dev, autoconf, automake, cmake, make, gcc, perl, zlib1g-dev, libbz2-dev, liblzma-dev, libcurl4-gnutls-dev, libssl-dev, libncurses5-dev, libomp-dev}
\item{additional packages: ufw, rsync, screen, sudo}
\end{itemize}
A complete list of installed packages as well as their individual versions can be found in the appendix.
% todo appendix
%user@debian raw$\ cat /etc/os-release
%PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
%NAME="Debian GNU/Linux"
%VERSION_ID="11"
%VERSION="11 (bullseye)"
%VERSION_CODENAME=bullseye
%ID=debian
%HOME_URL="https://www.debian.org/"
%SUPPORT_URL="https://www.debian.org/support"
%BUG_REPORT_URL="https://bugs.debian.org/"
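\noindent The exact installation commands were not recorded; the following is a plausible reconstruction under Debian 11, using the package names from the lists above:
\begin{lstlisting}[language=bash]
apt-get update
# packages required by the compression tools
apt-get install git libhts-dev autoconf automake cmake make gcc perl zlib1g-dev libbz2-dev liblzma-dev libcurl4-gnutls-dev libssl-dev libncurses5-dev libomp-dev
# additional packages for workflow and security
apt-get install ufw rsync screen sudo
\end{lstlisting}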
\section{Selection, Retrieval, and Preparation of Test Data}
The following criteria must be met for test data to be appropriate:
\begin{itemize}
\item{The test file is in a format that all or at least most of the tools can work with, meaning \acs{FASTA} or \acs{FASTq} files.}
\item{The file is publicly available and free to use (for research).}
\end{itemize}
A second, bigger set of test files was required to verify that the test results are not limited to small files. A minimum average file size of one gigabyte was set as a boundary. This corresponds to over five times the size of the first set.\\
% data gathering
Since there are multiple open \ac{FTP} servers which distribute a variety of files, finding a suitable first set is rather easy. The Ensembl database met the defined criteria, so the first available set, called \texttt{Homo\_sapiens.GRCh38.dna.chromosome}, was picked \cite{ftp-ensembl}. This sample includes 20 chromosomes and, judging by the filenames, each file contains exactly one chromosome. After retrieving and unpacking the files, write privileges on them were withdrawn, so that no tool could alter any file contents without sufficient permission. The following tools and parameters were used in this process:
\begin{lstlisting}[language=bash]
wget http://ftp.ensembl.org/pub/release-107/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.chromosome.{2,3,4,5,6,7,8,9,10}.fa.gz
gzip -d ./*
chmod -w ./*
\end{lstlisting}
Finding a second, bigger set happened to be more complicated. \acs{FTP} offers no fast, reliable way to sort files according to their size, regardless of their location in the directory tree. Since the available servers \cite{ftp-ensembl, ftp-ncbi, ftp-igsr} offer several thousand files, stored in varying, deep directory structures, mapping file size, file type and file path takes too much time and too many resources for the scope of this work. This problem, combined with an easily triggered overflow in the samtools library, resulted in a set of several manually searched and tested \acs{FASTq} files. Compared to the first set, there is a noticeable lack of quantity, but the file sizes happen to be fortunately distributed: with pairs of files in the ranges of 0.6, 1.1 and 1.2 gigabytes, and one file with a size of 1.3 gigabytes, effects of scaling file sizes should be clearly visible.\\
The chosen tools are able to handle the \acs{FASTA} format. However, Samtools must first convert \acs{FASTA} files into its own \acs{SAM} format before a file can be compressed. The compression first leads to an output in the \acs{BAM} format; from there, it can be compressed further into a \acs{CRAM} file (a sketch of this chain is given at the end of this section). For the \acs{CRAM} compression, the time needed for each step, from the conversion to the two compression steps, is summed up and displayed as one value. For the compression into the \acs{BAM} format, only the conversion time and the single compression time are summed up. The conversion from \acs{FASTA} to \acs{SAM} is not displayed separately in the results, since it is not a compression process and therefore has no standalone value for this work.\\
Even though \acs{SAM} files are not compressed, there is a small but noticeable difference in size between the files in the two formats. Since \acs{FASTA} should store less information by leaving out quality scores, the \acs{SAM} files being smaller was counterintuitive. Comparing the first few lines showed two things: the header line was altered and the newlines were removed. The alteration of the header line would only account for a few bytes, so the removed newlines explain the bulk of the difference. To verify that no information was lost during the conversion, both files were temporarily stripped of metadata and formatting, so that the raw data of both files could be compared. Using \texttt{diff} showed no differences between the stored characters in each file; one possible realisation of this check is sketched at the end of this section.\\
% user@debian data$\ ls -l --block-size=M raw/Homo_sapiens.GRCh38.dna.chromosome.1.fa
% -r--r--r-- 1 user user 242M Jun 4 10:49 raw/Homo_sapiens.GRCh38.dna.chromosome.1.fa
% user@debian data$\ ls -l --block-size=M samtools/files/Homo_sapiens.GRCh38.dna.chromosome.1.sam
% -rw-r--r-- 1 user user 238M Nov 2 14:32 samtools/files/Homo_sapiens.GRCh38.dna.chromosome.1.sam
% remove metadata: grep -E 'A|C|G|N' >
% remove newlines: tr -d '\n'
% convert just once. test for losslessness?
% get test data: wget http://ftp.ensembl.org/pub/release-107/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.chromosome.{2,3,4,5,6,7,8,9,10}.fa.gz
% unzip it: gzip -d ./*
% withdraw write priv: chmod -w ./*
% first thoughts:
% - just save one nucleotide every n bits
% - save checksum for whole genome
% - use algorithms (from new discoveries) to recreate genome
% - check checksum -> finished : retry
% - can run recursively and threaded
% - in the case of test data: Hetzner, dedicated hardware, compile on the server, write down the specs -> 'lscpu' || 'cat /proc/cpuinfo'
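\noindent The conversion and compression chain described above can be sketched as follows. This is a minimal sketch rather than the exact invocation used: the file names are hypothetical, the \acs{FASTA}-to-\acs{SAM} conversion is assumed to have already taken place, and \texttt{time} measures the duration of each step:
\begin{lstlisting}[language=bash]
# SAM -> BAM: the single compression step
time samtools view -b -o chr1.bam chr1.sam
# BAM -> CRAM: further compression; CRAM requires the reference (-T)
time samtools view -C -T Homo_sapiens.GRCh38.dna.chromosome.1.fa -o chr1.cram chr1.bam
\end{lstlisting}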
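\noindent The losslessness check on the converted files could be realised as sketched below. This is one possible variant of the stripping steps described above, assuming hypothetical file names and that the sequence data ends up in the tenth column of the \acs{SAM} file:
\begin{lstlisting}[language=bash]
# strip metadata and formatting, keeping only the raw sequence data
grep -v '^>' chr1.fa | tr -d '\n' > fa.raw
grep -v '^@' chr1.sam | cut -f10 | tr -d '\n' > sam.raw
# no output from diff means the stored characters are identical
diff fa.raw sam.raw
\end{lstlisting}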