missing SAM header with minimap2 and samtools

When using minimap2 to map sequencing reads onto a reference, you can use this kind of command (be careful, this is wrong as you will see later):

minimap2 -a -x map-pb test.fastq reference.fasta > minimap.sam

The command is verbose and prints this kind of information. Note the WARNING:

[M::mm_idx_gen::0.338*0.98] collected minimizers
[M::mm_idx_gen::0.464*1.19] sorted minimizers
[WARNING] For a multi-part index, no @SQ lines will be outputted.
[M::main::0.464*1.19] loaded/built the index for 863 target sequence(s)
......

Then, if you try to convert or read this file, you will most probably get an error. For instance, converting this SAM file to BAM format with samtools produces this error message:

[E::sam_parse1] missing SAM header
[W::sam_read1] Parse error at line 2
[main_samview] truncated file.
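
For the record, the conversion was attempted with a command along these lines (a sketch; with recent samtools versions the SAM input is detected automatically and -b requests BAM output):

samtools view -b minimap.sam -o minimap.bam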

The solution took me a while to find but is very simple: if you check the help message of minimap2, you will see that the reference must be provided first. So the command above should read:

minimap2 -a -x map-pb reference.fasta test.fastq > minimap.sam

that is, the reference comes first, followed by the reads.


How to get pypi statistics about package download

A while ago, I designed pypiview, a Python package used to fetch the number of downloads for a package hosted on the pypi website.

It used to work decently but, according to pypi itself, the stored values are not reliable, and indeed the numbers sometimes look weird. Besides, it looks like the numbers are only updated for a given release. So if you have had no release for a year, no downloads are reported.

There are now better alternatives based on BigQuery. One such tool, called pypinfo, uses BigQuery.
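
As a minimal sketch (assuming pypinfo is installed and your BigQuery credentials are configured as described in its documentation), you can install it and query the downloads of a package from the command line:

pip install pypinfo
pypinfo spectrum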

There is also a Google web interface to BigQuery available here:

https://bigquery.cloud.google.com/welcome

SELECT COUNT(*) AS download_count
FROM TABLE_DATE_RANGE(
  [the-psf:pypi.downloads],
  TIMESTAMP("2017-01-01"),
  TIMESTAMP("2018-01-01")
)
WHERE file.project="spectrum"

How to prevent wget from creating duplicates

wget is used to download files from the internet. For instance:

wget http://url/test.csv

So far so good, but two things may happen. First, you may interrupt the download. Second, you may download the same file again. Since files can be huge, you do not want to transfer the same data twice.

The first case is even worse: imagine you have downloaded half of the file and you interrupt the process. Then you call wget again, you wait, it finishes and you are happy. However, because there was already a file called “test.csv” locally, wget downloaded the new file into test.csv.1! Moreover, it restarted the download from scratch.

So, the solution is to use the two options -c and -N:

wget -c -N http://url/test.csv

The -c option tells wget to continue an interrupted download where it stopped. The -N option compares timestamps so that the file is not downloaded again if the local copy is up to date.


Meaning of Real, User and Sys time statistics

Under Linux, the time command is quite convenient to get the elapsed time taken by a command call. It is very simple to use: just type your command preceded by the time command itself. For instance:

time df

The output looks like:

real	0m3.905s
user	0m2.408s
sys	0m1.238s

In brief, Real refers to actual elapsed time including other processes that may be running at the same time; User and Sys refer to CPU time used only by the process (here the df command).

More precisely:

  • Real is wall-clock time: the time from start to finish of the call, including time slices used by other processes and time the process spends blocked (for example, waiting for I/O to complete).
  • User is the amount of CPU time spent executing the process itself, in user mode. Time used by other processes and time the process spends blocked do not count.
  • Sys is the amount of CPU time the process spent in the kernel (i.e., in system calls).

So, User + Sys is the total CPU time actually used by your process.
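
For instance, a process that spends its time waiting rather than computing shows a large real time but almost no CPU time (the values below are approximate):

time sleep 2

real	0m2.003s
user	0m0.001s
sys	0m0.002s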

For more details, you can consult this quite detailed discussion:

https://stackoverflow.com/questions/556405/what-do-real-user-and-sys-mean-in-the-output-of-time1


git : How to remove a big file wrongly committed

I added a large file (102 MB) to a git repository, committed and pushed, and got an error due to the size limitations on github:

remote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
remote: error: Trace: 7d51855d4f834a90c5a5a526e93d2668
remote: error: See http://git.io/iEPt8g for more information.
remote: error: File coverage/sensitivity/simulated.bed is 102.00 MB; this exceeds GitHub's file size limit of 100.00 MB

Here, you see the path of the file (coverage/sensitivity/simulated.bed).

So, the solution is actually quite simple (when you know it): you can use the filter-branch command as follows:

git filter-branch --tree-filter 'rm -rf path/to/your/file' HEAD
git push
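
Note that filter-branch keeps a backup of the original history under .git/refs/original. If you also want to reclaim the disk space locally, you can expire the reflog and run the garbage collector (standard git commands, but use them with care since they remove that backup):

rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --prune=now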

git and github : skip password typing with https

If you clone a github repository using the https:// method (instead of ssh), you will have to type your username and password all the time.

In order to avoid typing your password all the time, you can use the credential helpers available in git 1.7.9 and later:

git config --global credential.helper "cache --timeout=7200"

where

--timeout=7200

means “keep the credentials cached for 2 hours” (the default is 15 minutes).

You can also store the credentials permanently (beware: they are then written in plain text in ~/.git-credentials) using

git config credential.helper store

failed to convert from cram to bam (parse error CIGAR character)

In order to convert a bioinformatics file from the CRAM to the BAM format, I naively used the samtools executable available on a cluster but got this error:

samtools view -T reference.fa -b -o output.bam input.cram
[sam_header_read2] 3366 sequences loaded.
[sam_read1] reference 'VN:1.4' is recognized as '*'.
Parse error at line 1: invalid CIGAR character

After a few attempts at fixing the issue, I realised that the error message came from the SAM parsing functions (e.g., sam_header_read2), a hint that the samtools version was a bit old. And indeed it was. I then used version 1.6 of samtools and it worked out of the box.
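
To check which version is installed, recent samtools releases support the --version flag (very old versions only print their version in the usage message):

samtools --version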


How to mount and create a partition on a hard drive dock (fedora)

I got a new hard drive (2.7 TiB) and wanted to use it with a docking station. Here are the steps required to use it under my Fedora box.

First, I naively went into the Nautilus File Browser hoping to see the hard drive mounted automatically. Of course it was not there: the hard drive is new and has no partition.

So, first, let us discover and check that the drive can be seen. We can use the fdisk command:

sudo fdisk -l
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: 3822C676-2317-437F-83E0-2358BA655039

You can see in this case that the disk is on device /dev/sdb.
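
Alternatively, the standard lsblk command gives a compact overview of the block devices and their mount points:

lsblk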

I then started the tool gparted; in the top right corner you can select the /dev/sdb device, which should also show the size of your hard drive.

At this stage, the partition and file system are reported as unallocated. First, you need to go to the menu

Device/Create Partition Table

to create a partition table on this hard drive.

Then, you can create a new partition by going to

Partition/New

Here, you get a new window in which the new partition can be configured.

I allocated the entire space to one partition. In the menu you need to give a label and a name. The name is for you; the label is for the system, so keep the label simple and do not use special characters (unless you know what you are doing).

For the filesystem I kept the default (ext4 on recent gparted versions; note that gpt, chosen earlier, is the partition table type, not a filesystem). Finally, once you are done, you need to press the apply button. You should be ready in a few seconds.

Go back to Nautilus File Browser and here you can see the new hard drive partition (in theory).

Change permissions

Finally, you will see that in Nautilus you cannot create any folders or files: you do not have the permissions. To change this, you need to be in the list of sudo users. Then, go to the path where your hard disk is mounted and type:

sudo chmod 0777 /run/media/yourdisk_path
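
Note that 0777 gives everyone read and write access. A less permissive alternative is to take ownership of the mount point instead (adapt the path to your system):

sudo chown $USER:$USER /run/media/yourdisk_path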

AWK: convert into lower or upper cases

In order to convert a bash variable to lower case with awk, just use this command:

a="UPPER CASE"
echo "$a" | awk '{print tolower($0)}'

If you want to convert the content of a file (called data.csv) to lower case:

awk '{print tolower($0)}' data.csv

Of course to convert into upper case, simply use the function toupper() instead of tolower().

Note also that the tr unix command might be a better tool to avoid issues with special characters (the character classes are quoted so that the shell does not expand them):

tr '[:upper:]' '[:lower:]' < input
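
In bash version 4 and later, you can also avoid external commands altogether thanks to parameter expansion:

a="UPPER CASE"
echo "${a,,}"    # lower case
echo "${a^^}"    # upper case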

How to sort a dictionary by values in Python

By design, dictionaries are not sorted (this speeds up access). Let us consider the following dictionary, which stores the ages of several persons as values:

d = {"Pierre": 42, "Anne": 33, "Zoe": 24}

If you want to sort this dictionary by values (i.e., the age), you must use another data structure such as a list, or an ordered dictionary.

Use the sorted function and operator module

import operator
sorted_d = sorted(d.items(), key=operator.itemgetter(1))

The variable sorted_d is a list of tuples sorted by the second element of each tuple. Each tuple contains the key and the value of an item from the dictionary. If you look at the content of this variable, you should see:

[('Zoe', 24), ('Anne', 33), ('Pierre', 42)]

Use the sorted function and lambda function

If you do not want to use the operator module, you can use a lambda function:

sorted_d = sorted(d.items(), key=lambda x: x[1])
# equivalent version (Python 2 only: tuple parameters in lambdas were removed in Python 3)
# sorted_d = sorted(d.items(), key=lambda (k,v): v)

The computation time is of the same order of magnitude as with the operator module. It would be interesting to test this on large dictionaries.

Use the sorted function and return an ordered dictionary

With the previous methods, the returned objects are lists of tuples, so we do not have a dictionary anymore. You can use an OrderedDict if you prefer to keep a dictionary-like structure:

>>> from collections import OrderedDict
>>> dd = OrderedDict(sorted(d.items(), key=lambda x: x[1]))
>>> print(dd)
OrderedDict([('Zoe', 24), ('Anne', 33), ('Pierre', 42)])

Use sorted function and list comprehension

Another method consists in using a comprehension and applying the sorted function to tuples made of (value, key).

sorted_d = sorted((value, key) for (key,value) in d.items())

Here the output is a list of tuples where each tuple contains the value and then the key:

[(24, 'Zoe'), (33, 'Anne'), (42, 'Pierre')]

A note about Python 3.6 native sorting

In a previous version of this post, I wrote that “in Python 3.6, the iteration through a dictionary is sorted”. This is wrong. What I meant is that in Python 3.6, dictionaries keep the insertion order.

It means that if you insert your items already sorted, the Python 3.6 implementation will preserve that order. Therefore, there is no need to sort the items anymore. Of course, if you insert the items in random order, you will still need to use one of the methods mentioned above.

For instance, we now create our dictionary as follows, inserting the items by ascending age:

d = {("Zoe": 24)}
d.update({'Anne': 33})
d.update({'Pierre': 42})

Now you can iterate through the items and they will be in the same order as they were inserted into the dictionary. So you can create a sorted list from the items very easily:

>>> list(d.items())
[('Zoe', 24), ('Anne', 33), ('Pierre', 42)]

Benchmark

Here is a quick benchmark made with the small dictionary from the examples above. It would be interesting to redo the test with a large dictionary.

What you can see is that the native Python dictionary ordering performs well, followed by the lambda + list comprehension method. Overall, the methods are roughly equivalent (within a factor of 2 or 3 at most).

The benchmark figure was created with the following code.

import operator
import pylab
from easydev import Timer

times1, times2, times3 = [], [], []
pylab.clf()
d = {"Pierre": 42, "Anne": 33, "Zoe": 24}
for j in range(20):
    N = 1000000
    # method 3: sort tuples of (value, key) built with a comprehension
    with Timer(times3):
        for i in range(N):
            sorted_d = sorted((value, key) for (key, value) in d.items())
    # method 2: sort by value using a lambda function
    with Timer(times2):
        for i in range(N):
            sorted_d = sorted(d.items(), key=lambda x: x[1])
    # method 1: sort by value using operator.itemgetter
    with Timer(times1):
        for i in range(N):
            sorted_d = sorted(d.items(), key=operator.itemgetter(1))
    print(j)
pylab.boxplot([times1, times2, times3])
pylab.xticks([1, 2, 3], ["operator", "lambda", "list comprehension and lambda"])
pylab.ylabel("Time (seconds) 1 million sorting \n (repeated 20 times)")
pylab.grid()
pylab.title("Performance sorted dictionary by values")