
Error occurs when using the 10X 1.3M data #2

Open
kokitsuyuzaki opened this issue Dec 19, 2018 · 6 comments

Comments

@kokitsuyuzaki

kokitsuyuzaki commented Dec 19, 2018

Hi,

I found that this PCA is ultra-fast, but when I ran it against the 1.3M-cell dataset (https://community.10xgenomics.com/t5/10x-Blog/Our-1-3-million-single-cell-dataset-is-ready-to-download/ba-p/276), a strange result was generated, like this:

# U matrix of SVD
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
...
# The diagonal elements of S matrix of SVD
0
0
0
0
0
0
0
...

Since you don't provide the source code, I still can't see the precise reason, but I'm wondering whether the initial values of the singular vectors and singular values are not being updated for some reason.

Have you ever tried this PCA against the 10X 1.3M dataset?

After running oocPCA_csv2binary, I performed oocPCA_BIN as below.

library("oocRPCA")

input="10X.bin"
m=23771
n=1306127
dim=25
output1="Eigen_vectors.csv"
output2="Eigen_values.csv"

out <- oocPCA_BIN(input, m=m, n=n, k = dim, mem = 1e+10, centeringRow = TRUE, logTransform = TRUE)
NC = ncol(out$U)
write.table(out$V[,1:dim], output1, quote=FALSE, row.names=FALSE, col.names=FALSE, sep=",")
write.table(diag(out$S)[1:dim]^2/NC, output2, quote=FALSE, row.names=FALSE, col.names=FALSE, sep=",")

The CSV file was parsed from the 10X HDF5 format with this gist:
https://gist.github.com/kokitsuyuzaki/5b6cebcaf37100c8794bdb89c7135fd5

I tried the PCA with both oocPCA_BIN and oocPCA_CSV, but the results were the same.

@linqiaozhi
Member

Hi @kokitsuyuzaki, thanks for your interest in oocPCA!

Firstly, we do provide the source code: it is in this very repository you are filing the issue in! The /src folder has all the code used to generate the binary for the R package.

As for your particular problem, it most likely has to do with the input format. Would you mind sending, say, the first 100 lines of the CSV to me along with the oocPCA_csv2binary call you are using to generate 10X.bin?

Thanks!

@kokitsuyuzaki
Author

Oh, sorry...
I overlooked the /src.

I extracted the top 100 rows from the huge 10X-CSV file.
You can download the file from the link below.
https://www.dropbox.com/s/di98nrmu70lothg/10X.csv?dl=0

To binarize the CSV file, I ran the code below.

library("oocRPCA")
oocPCA_csv2binary("10X.csv", "10X.bin")

@linqiaozhi
Member

Thanks!

When I run the following:

input <- '~/Downloads/10X.csv'
k <- 5
out <- oocPCA_CSV(input, k = k, centeringRow=T, logTransform=T)
out$U[1:5,1:5]
out$S[1:5,1:5]
out$V[1:5,1:5]

I get

> out$U[1:5,1:5]
              [,1]          [,2]          [,3]          [,4]          [,5]
[1,] -1.761859e-03  9.776750e-04 -5.447111e-04  4.018511e-03 -4.318454e-03
[2,] -1.410291e-06  1.404715e-05  1.329962e-05  9.984632e-06 -3.713423e-06
[3,] -7.471722e-04 -1.065888e-04  1.287069e-03 -4.816207e-04  5.407371e-04
[4,] -2.528704e-02 -2.454266e-02  1.422214e-02 -9.962610e-03  1.575481e-02
[5,] -4.751919e-06 -3.816953e-06 -7.663478e-06  1.528653e-05 -2.938495e-05
> out$S[1:5,1:5]
         [,1]     [,2]     [,3]     [,4]     [,5]
[1,] 1002.219   0.0000   0.0000   0.0000   0.0000
[2,]    0.000 539.1075   0.0000   0.0000   0.0000
[3,]    0.000   0.0000 474.8727   0.0000   0.0000
[4,]    0.000   0.0000   0.0000 456.3833   0.0000
[5,]    0.000   0.0000   0.0000   0.0000 447.7164
> out$V[1:5,1:5]
              [,1]          [,2]          [,3]          [,4]          [,5]
[1,]  3.383174e-04 -1.122198e-03 -0.0005226319  0.0007564878  9.558623e-04
[2,]  1.613240e-03 -7.660849e-05 -0.0003419195  0.0002935322 -3.811770e-05
[3,]  1.018689e-05  6.023594e-04  0.0015051709  0.0002145294 -9.153902e-05
[4,] -1.992685e-04  1.078998e-03  0.0007539106  0.0005330468 -4.439185e-04
[5,] -1.189716e-03  4.253058e-04 -0.0022875617 -0.0014558012  9.302128e-04

This does not mean they are "correct," but at least they are certainly not the identity matrix and zero matrix that you are getting. Can you try this too? I am just trying to replicate your result with this smaller matrix, if possible.

Also, what platform are you on? I ran the above experiment on OS X.

@kokitsuyuzaki
Author

kokitsuyuzaki commented Dec 26, 2018

The program was run on a CentOS machine.

Yes, your program runs properly when the matrix is small.

I found the cause of the error: when the CSV file is extremely large, the binary file is not generated by oocPCA_csv2binary() for some reason.

In my environment, I made some CSV files from the original matrix as follows:

head -100 1M_neurons_filtered_gene_bc_matrices_h5.csv > 10X_100row.csv
head -1000 1M_neurons_filtered_gene_bc_matrices_h5.csv > 10X_1000row.csv
head -10000 1M_neurons_filtered_gene_bc_matrices_h5.csv > 10X_10000row.csv
head -20000 1M_neurons_filtered_gene_bc_matrices_h5.csv > 10X_20000row.csv

Next, I ran oocPCA_csv2binary() on each of the CSV files.

library("oocRPCA")
oocPCA_csv2binary("10X_100row.csv", "10X_100row.bin")
oocPCA_csv2binary("10X_1000row.csv", "10X_1000row.bin")
oocPCA_csv2binary("10X_10000row.csv", "10X_10000row.bin")
oocPCA_csv2binary("10X_20000row.csv", "10X_20000row.bin")

Finally, I checked which binary files were generated and found that only 10X_20000row.bin was missing.

ls -lth *.bin
# -rw-r--r-- 1 koki bit  98G Dec 26 17:31 10X_10000row.bin
# -rw-r--r-- 1 koki bit 9.8G Dec 26 16:53 10X_1000row.bin
# -rw-r--r-- 1 koki bit 997M Dec 26 16:49 10X_100row.bin
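For what it's worth, these sizes are exactly what a dense double-precision binary would require: rows x 1,306,127 columns x 8 bytes. A quick check (Python used purely as a calculator here, not part of oocRPCA):

```python
# Expected binary size for a dense matrix of doubles:
# rows x cols x 8 bytes, with cols = 1,306,127 (cells in the 10X dataset).
cols = 1306127
for rows in (100, 1000, 10000, 20000):
    size = rows * cols * 8
    print(f"{rows:>6} rows: {size / 2**30:6.1f} GiB")
```

The first three match the listed files (997 M, 9.8 G, 98 G), and the missing 20,000-row binary would have been roughly 195 GiB.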

The same situation should be reproducible by constructing a matrix of the same size as follows.

CSVFILES=()
for ((i=0; i < 200; i++)); do
  CSVFILES+=("10X_100row.csv")
done
cat ${CSVFILES[@]} > 10X_20000row.csv

In my script, the touch command is run on the output binary beforehand, so this process can leave an empty binary file behind.

If the input binary file is empty, oocPCA_BIN() returns the initial values of the eigenvalues and eigenvectors, because no row vectors are read (I think this is reasonable).

@linqiaozhi
Member

Sorry to be getting back to you so late on this. Thanks for tracking the problem down to the csv2binary function.

How much memory does your computer have? As you can see here, the csv2binary binary actually allocates m*n doubles of space before writing anything out. In other words, it loads the entire file into memory and then writes it. If it fails to allocate that memory, it is likely to have the kind of effect you are seeing (i.e. just creating an empty file). Could that be the problem?
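To put numbers on that: a back-of-the-envelope estimate, using the dimensions from the oocPCA_BIN call earlier in this thread (Python used purely as a calculator):

```python
# Memory needed if csv2binary buffers the whole m x n matrix as doubles.
m, n = 23771, 1306127        # genes x cells, from the oocPCA_BIN call above
bytes_needed = m * n * 8     # 8 bytes per double
print(f"{bytes_needed / 1e9:.0f} GB")
```

That is roughly 248 GB for the full dataset, and still about 209 GB even for the 20,000-row subset, so the allocation could easily fail on a typical workstation.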

Clearly, needing enough memory to convert the file defeats the purpose of out-of-core PCA entirely! However, the csv2binary function was never really intended to be used for anything except testing the software. Since you have a CSV file, can you just use the oocPCA_CSV function?

In any case, the csv2binary function should definitely be fixed to read/write a line at a time, as opposed to reading the whole file in and then writing it.
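A line-at-a-time conversion could look roughly like the sketch below (written in Python for brevity; the real fix would go in the C++ sources under /src, and the function name here is made up for illustration):

```python
import struct

def csv2binary_streaming(csv_path, bin_path):
    """Hypothetical sketch: convert a CSV of numbers to packed doubles,
    holding only one row in memory at a time instead of the whole matrix."""
    with open(csv_path) as src, open(bin_path, "wb") as dst:
        for line in src:
            if not line.strip():          # skip blank lines
                continue
            values = [float(v) for v in line.rstrip("\n").split(",")]
            # write this row as native-endian 8-byte doubles, then drop it
            dst.write(struct.pack(f"{len(values)}d", *values))
```

With this approach the peak memory usage is one row (about 10 MB for 1,306,127 columns of doubles) rather than the full matrix.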

@kokitsuyuzaki
Author

How much memory does your computer have?

I cannot trace the precise machine environments, but the memory of the machines ranges from 96 GB to 128 GB.

As you can see here, the csv2binary binary actually allocates m*n doubles of space before writing it all.

OK, now I clearly understand the reason.
The oocPCA_CSV function will also work for my task, so I will use it for now.

Please consider implementing incremental read/write in the csv2binary function when you have enough energy to spare for it.

Thanks,

Koki
