This repository has been archived by the owner on Aug 15, 2019. It is now read-only.

Cholesky, LU, QR, triangularSolve, setDiag, diagPart, broadcastTo, conj #1356

Open: DirkToewe wants to merge 3 commits into master

Conversation

DirkToewe
Contributor

@DirkToewe DirkToewe commented Oct 27, 2018

matMul now supports broadcasting

Description

Linear algebra operations seem to be widely requested and help was wanted, so I thought I'd try and help:

  • qr() now has a "low-level" CPU implementation. At least on my machine this massively improves performance and memory efficiency.
  • lu() and luSolve() allow solving linear equation systems (LES) via LU Decomposition.
  • cholesky() and choleskySolve() allow solving symmetric positive definite LES via Cholesky Decomposition (a short usage sketch follows this list).
  • triangularSolve allows solving triangular LES, or general LES via QR Decomposition.
  • adjoint and conj are now implemented.
  • setDiag, diagPart and bandPart were implemented for symbolic backpropagation.
  • broadcastTo is now implemented.
  • matMul now supports (matrix) broadcasting.
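
To illustrate, here is roughly how the solver ops are meant to be used. Whether they end up under tf.linalg is up for review, and the exact signatures may still change, so treat this as a sketch rather than final API:

// Illustrative only: tf.linalg placement and signatures are not final.
const A = tf.tensor2d([[4, 2],
                       [2, 3]])  // symmetric positive definite
const y = tf.tensor2d([[1],
                       [2]])

// Solve A·x = y via Cholesky: factor A = L·Lᵀ, then forward/back substitution.
const L = tf.linalg.cholesky(A)          // assumed to return the lower-triangular factor
const x = tf.linalg.choleskySolve(L, y)  // assumed to solve using that factor
x.print()                                // ≈ [[-0.125], [0.75]]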

P.S.: A proof of concept WebGL implementation of the Cholesky Decomposition can be found here. I will need some guidance if I am to work it into TFJS, as a vertex shader and some non-Tensor uniform parameters are required. QR and LU Decomposition as well as triangularSolve should not be too difficult to implement either.

P.P.S.: This is my first time with TypeScript and about my third time with JS, so I apologize if the code is not up to standard. The linting requirements aren't even remotely met yet. I will fix that once the commit is otherwise accepted.


For repository owners only:

Please remember to apply all applicable tags to your pull request.
Tags: FEATURE, BREAKING, BUG, PERF, DEV, DOC, SECURITY

For more info see: https://github.com/tensorflow/tfjs/blob/master/DEVELOPMENT.md



Contributor

@dsmilkov dsmilkov left a comment


Hi! Thanks for the contribution. It will be hard to review this PR due to its size, scope, code style, etc. Let's break it into lots of smaller PRs, starting with a QR PR. Is the existing implementation much slower than the one proposed in this PR? Can you run a benchmark with different-sized matrices and compare the two QR implementations side by side?

Reviewable status: 0 of 1 approvals obtained

@dsmilkov
Contributor

dsmilkov commented Oct 29, 2018

Also, let's do the PRs one by one - this will be a learning opportunity for the code style, how ops work in TF.js, etc. And it will impact how the future PRs look.

@DirkToewe
Contributor Author

DirkToewe commented Oct 29, 2018

I'd hardly call it a proper benchmark, but you can find a quick performance comparison here, and here's the test code. The overhead of the TFJS API and the memory allocations is quite significant...

If it's easier for you, I'm going to carve out all but qr for now.

EDIT: Forgot to tf.tidy. Gonna fix that asap. Done!
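
For anyone who wants to reproduce the numbers, the timing loop boils down to something like the following sketch (bench and the size list are just illustrative; dataSync() forces execution before the clock stops):

// Sketch: average wall time of a QR implementation across matrix sizes.
function bench(qrImpl, n, reps = 10) {
  const a = tf.randomNormal([n, n])
  tf.tidy(() => { const [q, r] = qrImpl(a); q.dataSync(); r.dataSync() })  // warm-up
  const t0 = performance.now()
  for (let i = 0; i < reps; i++) {
    tf.tidy(() => { const [q, r] = qrImpl(a); q.dataSync(); r.dataSync() })
  }
  const ms = (performance.now() - t0) / reps
  a.dispose()
  return ms
}

for (const n of [32, 64, 128, 256, 512]) {
  console.log(`n=${n}: ${bench(tf.linalg.qr, n).toFixed(1)} ms`)
}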

@dsmilkov
Contributor

Yeah, that's quite a difference in performance! Let's do the qr first. Thanks!!

@ds604

ds604 commented Jan 28, 2019

Hello,
This thread came up while I was looking for information on linear algebra operations in tfjs (verifying operations against NumPy and Octave, and checking the status of different linear algebra functionality). I made a little polyfill using this port of JAMA to JavaScript (https://github.com/ghewgill/jsmat) to make it a little easier to go back and forth between tensors and JAMA matrices, and to make sure that things are working as they should. Maybe someone will find this helpful, for verification or other purposes.

<script src="https://cdn.jsdelivr.net/gh/ghewgill/jsmat@master/matrix.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"></script>
<script>
    ///////////////////////////////////////////////////
    // Tensor <-> Jama matrix

    // get the data out of a tensor but keep the shape information (assumes a rank-2 tensor)
    tf.Tensor.prototype.getData = function(){
        let data = this.dataSync()
        let [rows, columns] = this.shape

        // create matrix with proper shape
        let M = Array(rows).fill(0)
        return M.map((row,i) => data.slice(i*columns, (i+1)*columns))
    }

    Array.prototype.toJama = function(){
        return new Matrix(this)
    }

    Array.prototype.toTensor = function(){
        return tf.tensor(this)
    }

    ///////////////////////////////////////////////////
    // Jama operations that are not (yet) in tensorflow
    
    // inverse
    let A = tf.tensor([
        [1,2,3],
        [4,5,6],
        [7,8,10]
    ])
    .getData()
    .toJama()
    .inverse().A
    .toTensor()
    .print()

    // LU
    let B = tf.tensor([
        [1,2],
        [3,4]
    ])
    .getData()
    .toJama()
    .lu().LU
    .toTensor()
    .print()

    // SVD
    let C = tf.tensor([
        [1,2],
        [3,4]
    ])
    .getData()
    .toJama()
    .svd()

    tf.tensor(C.U).print()
    tf.tensor(C.s).print()
    tf.tensor(C.V).print()
</script>
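
To check that the factors round-trip, one can rebuild the input from them and look at the residual, e.g. for the SVD above (a sketch; it uses the matMul transpose flags of current tfjs):

// Rebuild A ≈ U·diag(s)·Vᵀ from the JAMA factors and compare to the input.
let U = tf.tensor(C.U)
let s = tf.tensor(C.s)            // singular values as a vector
let V = tf.tensor(C.V)

U.mul(s)                          // scales the columns of U, i.e. U·diag(s)
 .matMul(V, false, true)          // ·Vᵀ
 .sub(tf.tensor([[1,2],[3,4]]))   // original input
 .abs().max().print()             // ≈ 0 up to floating point error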

@DirkToewe
Contributor Author

That's pretty cool @ds604. I wish I had known about that Jama port before.

Maybe that comment would be better placed in the tfjs issues? Here in the pull request it might be missed by people who are - like you - looking for a working solution right now.

When it comes to using another linalg implementation for verification purposes, I'm not sure how to go about that. QR, LU, Schur, SVD and other decompositions are not unique, so two different solutions might both be correct; flipping the sign of a column of Q and of the corresponding row of R, for example, yields another valid QR decomposition. Comparing solutions element-wise might therefore be hard. Maybe I am missing something?

My current approach for testing is to check the very definitions of the decompositions (Q @ R == A, L @ U == A @ P, ...) as well as the matrix properties (orthogonality Q @ Qᵀ == I, triangularity, ...).
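
In TFJS terms such a property check might look like this (a sketch built on the qr op; the tolerances are left to the test framework):

// Property-based check: factor, then verify the defining identities.
const A = tf.randomNormal([5, 5])
const [q, r] = tf.linalg.qr(A)

q.matMul(r).sub(A).abs().max().print()                        // Q·R == A   (≈ 0)
q.matMul(q, true, false).sub(tf.eye(5)).abs().max().print()   // Qᵀ·Q == I  (≈ 0)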

@ds604

ds604 commented Jan 29, 2019

Wow, nice project! I'll have to dive in, but it looks like you've closed the gap to a full JavaScript NumPy, and with a native JavaScript feel. That's what I was looking for.

What I was speaking of as far as verification goes is just achieving results that are as similar as possible to the outcomes obtained in another environment. I'm not so much on the implementation end of things, so my perspective is more that of a user, or one taking into account what might matter from a "design" perspective. It's true that, with regard to the correctness of the operations, the choices in the implementation shouldn't matter. But the reality is that they do wind up being outcomes that people (unwittingly or not, correctly or not) come to rely on. One abstraction layer's "stylistic" choices are another layer's structural properties.

I guess from the implementation point of view, barring a hard correctness constraint, in the presence of choice (non-uniqueness of solutions) the safe option is to defer to the "most popular" choice, or to do what was done in the best-known prior reference implementation. I'm sure there are many challenges when the GPU comes into play, but again, speaking here from the user's perspective: a significant portion of people bringing their functionality over to the web, upon seeing a difference in outcomes from what they had before, are more likely to conclude that there's something "wrong" than to really understand that some differences don't actually matter, or to know what action to take if some tests start failing. Over time, I guess, some of those choices in the implementations have actually come to matter in usage, or have become things that people are familiar with and use like waypoints.

So, I guess, just consider the quirks of MATLAB or NumPy guiding the implementation to be similar to the width of the horse's rear end determining the width of the solid rocket boosters. The implementation won't be any more or less correct, but it will just fit a little better with what came before.

@moob123

moob123 commented Mar 7, 2019

Hello,
I was trying to follow threads like this to figure out what is likely to happen with more general linalg operation support in TensorFlow.js. I do not even know whom to address this question to, but if anyone here has some time, please write a few lines on this.
Thank you
