This repository has been archived by the owner on Mar 14, 2019. It is now read-only.

Replace Multiple Stores with Multiple FS.Collections #182

Open
aldeed opened this issue Mar 1, 2014 · 55 comments

Comments

@aldeed
Contributor

aldeed commented Mar 1, 2014

Based on a suggestion from @vsivsi.

This could be a good idea, but there is a lot to consider, so here's my attempt at figuring out how the new architecture might work and what the possible issues might be.

Generally speaking, the stores option would become store, and copy-making would become the job of another optional package:

  1. I insert into FS.Collection on the client, which inserts into underlying collection.
  2. If uploader option is not null, FS.Collection.prototype.insert then passes the inserted FS.File instance to the uploader function specified (which is the one from the cfs-upload-http package by default, but could also be DDP or one provided by s3 cloud pkg). I already added this uploader option the other day.
  3. Assuming we uploaded over DDP or HTTP to our own server, we now have the data on the server in TempStore. We mark the file as fully uploaded (set bytesUploaded equal to size). (No real change here)
  4. FileWorker sees that we have a fully uploaded file that hasn't yet been saved to its store, and calls beforeSave followed by saveCopy (i.e., store.put). (The only difference here is that we're doing this for just one store, not multiple.)
  5. The FS.File properties name, size, etc. are overwritten to match the result of beforeSave. (Currently we put this info into copies property, but copies would no longer be necessary.)
  6. Here's where we would have to add in logic for saving the file to one or more other related collections. We need to:
    • Somehow tell the FileWorker to call beforeSave and saveCopy on additional FS.Collections, different from the one we did the initial insert on. Maybe some kind of API that lets you define a collection relationship map?
    • Somehow tell the TempStore not to delete the original data until we've saved into all related FS.Collections or given up.
  7. Assuming we can figure out the saving piece, there is still the matter of retrieval. We currently have the copies property in a FS.File to give us easy access to retrieving a certain related copy, but we would lose that.
    • In the common example where I upload an image into the "Images" collection, which then causes a thumbnail of that same image to be generated in the "Thumbnails" collection, where do we store the link between them? Maybe copies should be changed to an array of FS.Files containing pointers to related files in related collections?
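The "collection relationship map" floated in step 6 might be sketched like this in plain JS. Everything here is hypothetical: relateCollections, runRelatedSaves, and the relatedTo pointer (standing in for the old copies property) are illustrative names, not existing CollectionFS API.

```javascript
// Hypothetical sketch of a collection relationship map (plain JS, no Meteor):
// each entry says "when a file lands in <source>, derive a file for <target>".
var relationshipMap = [];

function relateCollections(sourceName, targetName, deriveFn) {
  relationshipMap.push({ source: sourceName, target: targetName, derive: deriveFn });
}

// Called by the (simulated) FileWorker after the initial store.put succeeds.
// The TempStore data may only be deleted once every related collection has
// been handled, or given up on.
function runRelatedSaves(sourceName, fsFile) {
  return relationshipMap
    .filter(function (rel) { return rel.source === sourceName; })
    .map(function (rel) {
      var derivedFile = rel.derive(fsFile);
      derivedFile.collection = rel.target;
      derivedFile.relatedTo = fsFile._id; // pointer back, replacing `copies`
      return derivedFile;
    });
}

// Example: files inserted into "Images" get a derived entry for "Thumbnails".
relateCollections('Images', 'Thumbnails', function (file) {
  return { name: 'thumb-' + file.name, size: Math.round(file.size / 10) };
});

var derived = runRelatedSaves('Images', { _id: 'abc', name: 'cat.png', size: 1000 });
// derived[0] -> { name: 'thumb-cat.png', size: 100,
//                 collection: 'Thumbnails', relatedTo: 'abc' }
```

This also suggests an answer to the retrieval question in step 7: the relatedTo pointer on each derived file is the link between the original and its copies.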
@aldeed added this to the CollectionFS V2 - Final milestone Mar 1, 2014
@raix

raix commented Mar 1, 2014

I'm not sure what this solves or what the motivation is - maybe we should have a hangout to discuss the overview?

@raix

raix commented Mar 1, 2014

All of the copies/progress state should go to the transfer layer, maybe in a collection keeping track of uploads and progress. We should slim down the filerecord so that it contains as little info as possible - mainly static data.

The transfer knows when the file is uploaded and contacts the FS.Collection, saying that the file is uploaded and ready to be transported into the storage adapter.

The FS.Collection will use the file worker if available; if not, it will store the file directly to the one allowed storage adapter. It would be nice to be able to throttle this via a task manager.

If a file worker is available, we allow the beforeSave behaviour.

Again, repeating this per storage adapter is really not that big a deal - it's basically just the next task in the queue manager.

I don't personally see a problem with multiple stores per file - it seems to work pretty well?

Doing this at runtime, e.g. myfile?resize=40x40, would be an option, but it could bring down the server since we are on a single thread... We could use a caching server, but what about security and expiration etc.? It's complicated.
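For illustration, the runtime-resize idea could start with a sketch like this (hypothetical names throughout; the in-memory cache object stands in for a real caching server, which is exactly where the security and expiration questions come in):

```javascript
// Sketch of handling a runtime-resize request like "myfile?resize=40x40".
// Doing the actual resize per request on Node's single thread is the load
// concern raised above, hence the cache check before any work happens.
var cache = {}; // stand-in for a real caching layer

function handleResizeRequest(url) {
  var match = /\?resize=(\d+)x(\d+)$/.exec(url);
  if (!match) return { resize: null }; // no resize param: serve original
  if (cache[url]) return cache[url];   // serve cached result, skip the CPU hit
  var result = { resize: { width: Number(match[1]), height: Number(match[2]) } };
  cache[url] = result; // a real server would cache the resized bytes instead
  return result;
}

handleResizeRequest('myfile?resize=40x40');
// -> { resize: { width: 40, height: 40 } }
```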

The original thought/reason for multiple stores/copies (originally called filehandlers) was caching - a way to prepare the server for the requests.

@vsivsi

vsivsi commented Mar 1, 2014

Just to be clear, I'm fine with the concept of a file in a collection having multiple stores; I think that's a good idea that should be kept. My issue was that those stores should always and only have the exact same data in them. So if something like beforeSave is run, it should run against the first store, and then subsequent stores are exact copies of that.
Where metadata links etc. come in is when different versions of an uploaded file are to be stored (e.g. image and thumbnail). They need not be saved in different collections (although that should be an option), but they must be saved in different FileFS objects (because they contain different data), and the metadata is what links them together.

@vsivsi

vsivsi commented Mar 1, 2014

This is where the idea of having a collectionFS level onChange function comes in. If you want every uploaded image to have an associated thumbnail generated, you write an onChange handler for the image collection that has the logic to detect changes to full size images and keeps their linked thumbnails up to date. The metadata required and where/how the thumbnails are stored is an implementation detail.

@vsivsi

vsivsi commented Mar 1, 2014

Having just re-read the above two comments, I reminded myself that once you have onChange, beforeSave becomes redundant and more confusing, since you can have multiple stores on a file but only the first one can meaningfully have a beforeSave. Basically, in my reimagining, beforeSave goes away and is replaced by much more flexible onChange logic on the collectionFS.

@vsivsi

vsivsi commented Mar 1, 2014

Adding just a bit more... So onChange is directly analogous to supplying .added, .changed, and .removed to a publish function; it gives direct control over what happens when the state of a file in a collectionFS changes.
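A minimal sketch of that analogy, assuming hooks shaped like a publish function's callbacks. All names here are illustrative - nothing like makeObservedCollection exists in CollectionFS as of this thread; it just shows the control flow being proposed.

```javascript
// Sketch of onChange hooks shaped like the added/changed/removed callbacks
// of a Meteor publish function, but acting on files instead of documents.
function makeObservedCollection(hooks) {
  var files = {};
  return {
    insert: function (id, file) { files[id] = file; hooks.added(id, file); },
    update: function (id, fields) {
      Object.keys(fields).forEach(function (k) { files[id][k] = fields[k]; });
      hooks.changed(id, fields);
    },
    remove: function (id) { delete files[id]; hooks.removed(id); }
  };
}

var events = [];
var images = makeObservedCollection({
  added: function (id) { events.push('added:' + id); },     // e.g. queue thumbnail generation
  changed: function (id) { events.push('changed:' + id); }, // e.g. regenerate a stale thumbnail
  removed: function (id) { events.push('removed:' + id); }  // e.g. clean up the linked thumbnail
});

images.insert('f1', { name: 'cat.png' });
images.update('f1', { name: 'cat2.png' });
images.remove('f1');
// events -> ['added:f1', 'changed:f1', 'removed:f1']
```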

@raix

raix commented Mar 1, 2014

Just trying to follow and figure out the problem you want to solve - I'm not there yet.

So as I understand it we are talking about two concepts:

  1. stores / versions - the current multiple versions per file
  2. copies - multiple copies of a file per collection

So those are really two very different tasks, I guess - we currently don't support 2.

Or is it a matter of principle that every version should be a document with an id?

The prior version of the storage adapters did have their own "filerecord"; it had an _id and a fileId, where the fileId was the reference to the FS.Collection FS.File. We took that part away - we didn't use it and it bloated the use of collections - but it sounds much like what you are talking about in concept.

The nice thing about having multiple versions/stores per file is reference - e.g. if a browser only supports the ogg version of a file, it's easy for me to switch to that version. That would be harder (not impossible) if I had to search for it.

beforeSave is the action to perform on the file content before the data is passed on to the storage adapter and saved. The reason we protect this part a bit is really to have some control throttling the server - using runtime or onChange events to run these tasks would not scale linearly.

I did make a small overview of the cfs project yesterday; it's a rough WIP overview of the architecture and what we are refactoring at the moment: https://www.dropbox.com/s/arlq4ed3r1gh5ty/overview.pdf

Both Eric and I were a bit surprised to see it - why should file upload be so complicated? Well, we are actually talking about creating a few more packages; I think we are lucky to have the Meteor package system.

@aldeed
Contributor Author

aldeed commented Mar 1, 2014

@vsivsi, I understand what you're saying. FileWorker would:

  1. Detect that there is new uploaded data sitting in the TempStore for file X.
  2. Retrieve an FS.File instance for file X, and attach the TempStore data to it.
  3. Pass the FS.File instance with data attached to store.put for each defined store (could still be multiple, but identical file is saved in each store, no beforeSave).
  4. If at least one store reports successful saving, then check the FS.Collection for an onChange function. If found, pass the FS.File instance with data attached to onChange, which will potentially alter and save the file in other collections, tying them together with custom metadata.
  5. onChange will let us know whether it's ok to delete the temp data. But how can it do this, since the fileworker has to finish saving later to whatever additional collections we've inserted into?

Some comments:

  • If we support onChange, I'd argue that we should NOT also support multiple stores, because that adds complexity in terms of keeping a list of file keys per store and then indicating which store you want the file retrieved from - unless we keep multiple stores only as a sort of "backup access" feature, where on the server we always try store 0 first, and then try store 1 if store 0 is down.
  • We still need to support some kind of beforeSave, too, because often (pretty much always for the use cases I'm interested in), we don't want to save the original at all. (Think of allowing image uploads of any size and format but we always want to save as JPEG resized to a max resolution.) So we need a way to check and potentially alter the initial save, too (the one that triggers the onChange).
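The "never save the original" case described above (always JPEG, capped resolution) could look roughly like this. The image transform itself is faked with arithmetic - only the control flow of a collection-level beforeSave is being sketched, and MAX_DIM is an assumed configuration value.

```javascript
// Sketch of a beforeSave that rejects saving the original as-is: whatever
// comes in, the file saved is a JPEG scaled down to a maximum resolution.
var MAX_DIM = 1024; // assumed app-level limit

function beforeSave(file) {
  var out = {
    name: file.name.replace(/\.\w+$/, '.jpg'), // always store as JPEG
    type: 'image/jpeg'
  };
  // Scale the longest edge down to MAX_DIM; never scale up.
  var scale = Math.min(1, MAX_DIM / Math.max(file.width, file.height));
  out.width = Math.round(file.width * scale);
  out.height = Math.round(file.height * scale);
  return out; // the FileWorker would save this instead of the original
}

beforeSave({ name: 'huge.png', type: 'image/png', width: 4096, height: 2048 });
// -> { name: 'huge.jpg', type: 'image/jpeg', width: 1024, height: 512 }
```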

@aldeed
Contributor Author

aldeed commented Mar 1, 2014

@raix, just saw your last post. I think @vsivsi was primarily concerned that having one FS.File represent more than one actual file is not logical.

@raix

raix commented Mar 1, 2014

Ok, well in a way it is one file typically - an image/sound/video just in different formats - the content is "the same".
I'm thinking we have been working a lot with words like:

  • filehandlers
  • copies
  • stores
  • versions
  • files

They're all words we tend to use for the same thing - maybe we should think about these and what patterns and problems they imply? E.g.:

  • Can a file contain multiple:
    • versions = yes
    • copies = no
    • stores = no
    • files = no
  • Can a collection contain multiple:
    • versions = no
    • copies = yes
    • stores = no
    • files = yes

Maybe stores isn't the best term?

Linking copies and having them update triggered by onChange events - I'm not too sure about that; it seems more complicated?

@raix

raix commented Mar 1, 2014

I'll sleep on it
see you later :)

@aldeed
Contributor Author

aldeed commented Mar 1, 2014

Good idea. Maybe we should all sleep on it a bit and have a hangout debate next week sometime. :)

@raix

raix commented Mar 1, 2014

yep!

@vsivsi

vsivsi commented Mar 1, 2014

Ok, well in a way it is one file typically - an image/sound/video just in different formats - the content is "the same".

If you follow this abstraction, you are designing a Content Management System, not a File System, because the "things" you are calling "the same" (e.g. different encodings of a single TV show episode) aren't represented by the same bits.

I totally agree that "content" can be a "thing"; just don't call that "thing" a "file", because programmers settled on what it means to be a "file" in a "filesystem" about 50 years ago.

This concept is so well-settled that 99+% of developers who encounter CollectionFS will not intuitively understand that a single "file" in your system can return different data depending on how they ask for it.

The solution to the problem @raix articulates (managing content that is "the same") is to build a CMS on top of CollectionFS that implements a clean set of "content" abstractions.

I'm not a pedantic person, and so I see how it may appear like I'm flogging a purely philosophical point here, but libraries/packages are made/broken by how "cleanly" they define a useful abstraction and implement it in a simple/accessible way. IMO, this is the single biggest differentiator of "well-" and "badly-" designed software.

"Files" have proven to be one of the most useful and durable abstractions in all of computer science for decades, so why mess with it?

@vsivsi

vsivsi commented Mar 1, 2014

Here's how I see Stores fitting into the system.

I think there should be an allowance for multiple stores for a file. However, I see this as a "nice to have" feature, not a "must have". @aldeed was correct: the multiple stores may serve to back each other up, and they may also allow for things like load balancing in the future.

I fully agree that stores should be black boxes, the only rule should be that the bits you put in are the same bits you get back out later, regardless of which store you request them from. What a store does on the inside (e.g. compression, encryption, replication) should be irrelevant.
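The black-box rule can be stated as a round-trip invariant: get(key) must return exactly the bytes passed to put(key). A toy store that transforms data internally but still honors the invariant (illustrative only; makeStore and its method shapes are not the real SA interface):

```javascript
// Sketch of the "black box" store rule: whatever a store does internally,
// the bits you put in are the bits you get back out. Here the internal
// trick is base64 encoding, standing in for compression/encryption/etc.
function makeStore() {
  var blobs = {};
  return {
    put: function (key, buf) { blobs[key] = buf.toString('base64'); },
    get: function (key) { return Buffer.from(blobs[key], 'base64'); }
  };
}

var store = makeStore();
var original = Buffer.from('same bits in, same bits out');
store.put('k1', original);
store.get('k1').equals(original);
// -> true, regardless of the internal representation
```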

@aldeed
Contributor Author

aldeed commented Mar 1, 2014

@vsivsi, I think we're basically building a file system layer, a content (file) management system, and file uploaders all in one ecosystem, but in discrete packages that can be mixed and matched. Given that this is the case, we should make sure that it makes sense in any context.

@raix

raix commented Mar 2, 2014

I think if we use CMS as a description it would be misleading; it's more of an FMS.
It could be used for file distribution, synchronization, caching - and sure, it could be used to build a CMS. It's flexible.

Why do we need multiple stores? Today we have things like Dropbox, Google Drive etc. They sync, and we might want to be able to connect with these. A common task is resizing images and converting file formats - it's really a small footprint on the server and a big help. A filesystem for the web nowadays is not to be compared with 50-year-old specs. It's sort of the transport method that sets some of the file API. It's simple to use; I don't think devs will have a problem understanding the concepts and what to expect.

beforeSave is just a small part of the tools. We could extract it and build upon cfs, but it would not be well designed - that said, it's possible to create a replacement package for cfs-worker altering that behaviour. We created this system of a lot of small packages that can be stacked and reused by others.

Personally I started this project because I was tired of the way files are normally handled in ASP, PHP, JSP, .NET etc. Not much has changed in that area in the last 24 years I've been programming. Uploading a file in cfs should not be harder than declaring the collections and using the UI components.

But I still have difficulties understanding the original problem that you wanted to solve. Devs don't have to use multiple stores; if only using one store, it would work 1:1. Please check out the overview - it makes more sense than my iPad-typed text :)

@vsivsi

vsivsi commented Mar 2, 2014

@aldeed That's a great way of looking at it. In that framework, my argument boils down to:

CollectionFS --> FileFS --> SA all deal with files (in the traditional sense). They faithfully carry metadata and have appropriately scoped execution hooks added to enable higher level abstractions to be built up in higher layers.

My notion of CollectionFS having onChange is basically a placeholder for "appropriately scoped execution hooks". I haven't thought deeply about the best way(s) to do that, and I'm happy to (read: need to) leave that one for you guys. Of course, where ever possible you should approach it in a very "Meteor" way...

beforeSave as currently implemented on the stores is problematic, IMO. It seems a poor fit to the "all stores have the same bits" rule, because if each store has its own beforeSave how do you enforce it? If there is only one beforeSave then it properly belongs to the Collection and the questions become:

  • Is beforeSave the right name for this? Seems more like Meteor's notion of transform functions on a collection.
  • Is beforeSave enough functionality? If this is generalized, then it becomes more like the onChange kind of idea, or the added, changed, removed hooks in Meteor's Publish functionality, except instead of acting on an EJSON document, you are acting on a file and its metadata.

For added and changed the file is sitting in a temp store waiting to be:

  • optionally transformed (maybe more than once) into other temp files
  • some of those transformed files may be themselves inserted as new files into some other collection (or even into the current collection as a different file...)
  • streamed to the SAs for this fileFS

There may also be a changed with no corresponding temp file, because all that changed was file metadata. In this case, what to do depends on what that metadata impacts.

removed obviously cleans up all of the above.
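The added/changed/removed handling outlined above can be sketched as a small dispatch, assuming a hasTempFile flag tells the worker whether chunk data is waiting in the temp store (all action names are illustrative placeholders):

```javascript
// Sketch of the worker-side dispatch for file lifecycle events:
// which actions run depends on the event and whether bytes are pending.
function dispatch(event, hasTempFile) {
  if (event === 'removed') {
    // clean up everything: temp data and all stored copies
    return ['cleanupTemp', 'cleanupStores'];
  }
  if (event === 'added' || (event === 'changed' && hasTempFile)) {
    // new bytes: optionally transform, stream to the SAs, then drop temp data
    return ['runTransforms', 'streamToStores', 'deleteTemp'];
  }
  // metadata-only change: no bytes moved; impact depends on the fields changed
  return ['updateMetadata'];
}

dispatch('changed', false); // -> ['updateMetadata']
```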

@raix

raix commented Mar 2, 2014

It's not the store that should resize - we are refactoring; it's the file worker that should be able to exec. So basically beforeSave is part of the worker package (even if it's an option we set in the SA).

@vsivsi

vsivsi commented Mar 2, 2014

@raix That's fine, it's where it's "attached" that I was referring to. Also, shouldn't you be asleep by now? I'm going to stop now because I need to drink beer with my friends! 🍻

@raix

raix commented Mar 2, 2014

Transform instead of beforeSave makes sense.
At the moment I think it's the transport section that should have an onupload event.

@raix

raix commented Mar 2, 2014

Hehe, yep, I should be sleeping - normally it's me poking Eric to get out of my timezone when he's up late :) but we have set a deadline before April and it's a fun project, hard to let go sometimes. Well, see you on the flip side, enjoy the beer!

@raix

raix commented Mar 2, 2014

@aldeed just read the message above mine - agree, you say in 3 lines what I say in 30 lines (glad it's not JS). Agree it's kind of a file managing system, FMS.

@vsivsi

vsivsi commented Mar 2, 2014

@aldeed @raix Hey guys, what's the current status of everything that's committed? I'm trying to test out my changes to "unwrap" the MongoDB calls in cfs-gridfs, but no data gets written to any SAs any more, and I'm not seeing any errors in any consoles. Did some bit of interface change overnight that I haven't picked up and I'm getting silent failures?

As a general aside, the structure and inter-dependencies among the sub-parts of this project are getting really complicated. I'm all for modularization, but the current state of the smart.json files reveals quite a few circular dependencies, which is usually a sign that things are starting to get out of control. Just saying.

@vsivsi

vsivsi commented Mar 2, 2014

A bit more information about above "no data written to SAs" problem. .init is being called on the SA, and .del also gets called, but .get and .put are never invoked. My test harness involves uploading a file using the browser and then downloading it using {{url}}. The upload "works" in that a FileFS is created in the CollectionFS (and can be deleted, all the way down to the SA), but there's no get or put activity on the SA during all of this.

@aldeed
Contributor Author

aldeed commented Mar 2, 2014

I'm going to do some integration testing right now, so I'll see what's wrong. There are actually no circular dependencies. The smart.json sometimes includes packages that are circular because they are dependencies for running the tests, but they are not dependencies for running the actual package, so they won't be used. smart.json is just a list of what should get installed in the app's packages directory, whereas package.js defines which of those are actual dependencies.

@raix

raix commented Mar 2, 2014

@aldeed I have been working on the storage adapters; it's WIP.
I agree we should not have circular deps, or we should merge packages - but the reason is that we have very recently split the project up into packages and are currently refactoring, so I think it will improve over the next weeks.

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

@vsivsi, I'm not seeing any problems with SAs get, put, or del, but there is an issue with uploads not working right now. Will work to fix that. You can test by inserting a file on the server in the meantime.

I did make a small change to the gridfs SA. get and getBytes are supposed to return a Buffer, so I removed the new Uint8Array() for each. Plus my changes yesterday to all SAs to have them use the file object directly, not sure if you saw those.

@vsivsi

vsivsi commented Mar 3, 2014

Yes, I saw all of that. There's currently a branch on cfs-gridfs (remove_wrap_async) with the _wrapAsyncs all removed if you want to test with that... :-)

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

OK, my latest round of pushes should have client inserts and uploading working again. @vsivsi, let me know if you're still having troubles.

@raix, while I was in cfs-tempstore, I also pulled temporary chunk tracking out of the file object and into a separate collection.

We still need to look at how we're dealing with file updating, figure out the correct way, and make sure our code is correct. Right now, a PUT (update) simply adds the received chunk to the tempstore for the designated file. This works within our system, but doesn't allow for changing the file's metadata, like content type. I'm not sure if we want to allow that, or if one should instead remove and re-insert a file.

We should really take another look at resumablejs. Now that we're shifting to HTTP uploads as the default, maybe it would make sense to use resumablejs as our client-side uploader? In other words, cfs-upload-http becomes a lightweight wrapper for resumablejs and our PUT methods are updated to accept/understand the params that resumablejs sends. Thoughts?

@aldeed added the question label Mar 3, 2014
@raix

raix commented Mar 3, 2014

I'm OK with resumablejs - does it work with direct S3 upload? (just curious)
The tempstore collection would track the chunks and know when a file is complete. The fileworker could observe this collection; it would be relatively small compared with listening to all the collectionFS collections.
Normally it would be the transport section that fills the tempstore - but it could also be an SA that got permission to sync.
Edit: so I think it's a good idea
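The chunk-tracking collection described here can be sketched with a plain array standing in for a Mongo collection (recordChunk and isComplete are hypothetical names; a real implementation would use upserts and a count query):

```javascript
// Sketch of the separate chunk-tracking collection: the transport inserts
// one record per received chunk, and the FileWorker can tell a file is
// complete without scanning every FS.Collection.
var chunkRecords = []; // stand-in for the tempstore tracking collection

function recordChunk(fileId, chunkIndex) {
  chunkRecords.push({ fileId: fileId, n: chunkIndex });
}

function isComplete(fileId, totalChunks) {
  var seen = {};
  chunkRecords.forEach(function (c) {
    if (c.fileId === fileId) seen[c.n] = true; // dedupe retransmitted chunks
  });
  return Object.keys(seen).length === totalChunks;
}

recordChunk('f1', 0);
recordChunk('f1', 1);
isComplete('f1', 3); // -> false, chunk 2 still missing
recordChunk('f1', 2);
isComplete('f1', 3); // -> true, the worker can move the file to its store
```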

@vsivsi

vsivsi commented Mar 3, 2014

@aldeed Upload to cfs-gridfs seems to be working again. Is it using HTTP PUT for upload now?

The thing that still seems broken is HTTP GET in the browser. I'm getting 503s, which I'm guessing is because the X-Auth-Token header isn't being set by the browser when I use {{url}} to embed the image URL in an <img> tag. I'm not enough of a browser jockey to know if that's even possible to make work correctly. My guess is that Meteor never uses HTTP to transfer data that is subject to authorization. Maybe the ?token=XXXXXX needs to be supported as a backup.

@vsivsi

vsivsi commented Mar 3, 2014

Actually the above GET problem is timing related. If I reload the page then the GET works (or when the page redraws for any reason). So it seems like the HTTP access point returns 503 for a short period between the upload and storage in the SA and the reactivity in the {{url}} helper isn't picking up when the file actually becomes available.

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

@vsivsi, for me, the url helper updates fine without page reload once the data has been successfully stored.

As far as the interim period, this is something we've discussed a bit. I think a good solution would be to add an alt option to url, providing an alternative to use while uploading:

<img src="{{url store='thumbs' alt='spinner.gif'}}">

We could also handle through server-side redirect responses, but it seems rare that you would want to blindly redirect everything to the same resource.

One can also handle it using {{#if isUploading}}{{/if}}, and perhaps we can figure out a way to add {{#if isProcessing}}{{/if}} for the period between upload and storage.
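The alt fallback could reduce to something like this sketch. The URL shape and the isStored flag are illustrative assumptions, not the real CollectionFS helper internals:

```javascript
// Sketch of the proposed alt option on the url helper: return the stored
// URL once the file has landed in its store, otherwise the placeholder,
// mirroring {{url store='thumbs' alt='spinner.gif'}}.
function urlHelper(fsFile, options) {
  if (fsFile.isStored) return '/cfs/' + options.store + '/' + fsFile._id;
  return options.alt || null; // e.g. a spinner while uploading/processing
}

urlHelper({ _id: 'f1', isStored: false }, { store: 'thumbs', alt: 'spinner.gif' });
// -> 'spinner.gif'
```

Returning null when no alt is given matches the current behavior described below, where the helper yields nothing until storage completes.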

@raix, I haven't looked into resumable much yet. I don't know if it supports S3 upload, as it seems to use custom params, but maybe the param names are customizable. It also uses multi-part POSTs. Not sure if PUT is supported. Will need to explore more, maybe later this week.

The tempstore collection would track the chunks and know when a file is complete. The fileworker could observe this collection; it would be relatively small compared with listening to all the collectionFS collections.

Good idea!

Normally it would be the transport section that fills the tempstore - but it could also be an SA that got permission to sync.

Yep!

@vsivsi

vsivsi commented Mar 3, 2014

Also, I'm on the shark branch of both Meteor and cfs-handlebars. Although this morning is the first time I've seen this. In the past the reactivity in {{url}} has been perfect, now it never works for a new upload.

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

Oh, I haven't tested with shark recently. I happened to just post something about this here. Could be related to the issues I mention? I didn't actually test, I'm just assuming we'll have issues based on how I've seen it work with other things I'm doing on shark.

@vsivsi

vsivsi commented Mar 3, 2014

It might be, but I'm just using {{url}} with no parameters. I ran another test with two browsers open to the same page. If I upload using one of them, they both fail to load the image with 503, and then don't reactively update when the URL becomes valid. So it seems that the reactivity in {{url}} is keyed to the creation of the FileFS in the client CollectionFS, but then doesn't react when the server FileFS is updated with the SA details (eg the returned fileKey).

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

Oh, hmm, actually if you're getting 503, then that means they've already reactively updated after storage is done. Prior to storage, url returns null so it wouldn't attempt to get the file and get 503. It must be more related to the token issue. I haven't tested with a secured image. Let me try that and get back to you.

@vsivsi

vsivsi commented Mar 3, 2014

Just tested, it fails the same way when logged out / unsecured, so definitely not token related. Here's what I see on the browser side:

[Error] Failed to load resource: the server responded with a status of 503 (Service Unavailable) (XxykjjAqyACoG7pZt, line 0)
[Log] HTTPUploadTransferQueue Autostart (power-queue.js, line 249)
[Log] File 1: [object Object] in GridFS (mclocks.coffee.js, line 149)  <-- This is in the callback for my .insert
[Log] HTTPUploadTransferQueue RELEASED (power-queue.js, line 242)
[Log] HTTPUploadTransferQueue ENDED (power-queue.js, line 235)

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

OK, still working fine for me, even with token. Can you provide a minimal repo for me to clone and test with? I don't know if I forgot to push some code or what.

@vsivsi

vsivsi commented Mar 3, 2014

Are you using the latest shark meteor/meteor@9b2b612?

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

Maybe not the latest. Let me make sure.

@aldeed
Contributor Author

aldeed commented Mar 3, 2014

OK, still no issues on the latest shark. There were a couple minor things I had not pushed, but nothing related to this. Maybe my markup is different and that's why it's working. My template looks something like this:

{{#each fsFile}}
{{#unless cfsIsUploading}}
    {{#with url store='thumbs'}}
    <div><a href="{{../url}}" target="_blank"><img src="{{this}}" alt="" class="thumbnail" /></a></div>
    {{/with}}
{{/unless}}
{{/each}}

@vsivsi

vsivsi commented Mar 3, 2014

I'm working on isolating just this piece from the rest of my app. We'll see what happens then...

@vsivsi

vsivsi commented Mar 3, 2014

Here you go. I'm just using {{url}} in the template. Like I said above, this used to work great, through Saturday's changes. I couldn't get anything to work upload-wise yesterday, and as of this morning it consistently doesn't work...

@aldeed
Contributor Author

aldeed commented Mar 4, 2014

Your test app is much prettier than mine!

I'm able to replicate. Still haven't figured out what's different from mine. Maybe I have another helper that's reacting and masking the issue. I'll have to figure it out tomorrow.

@vsivsi

vsivsi commented Mar 4, 2014

Bootstrap makes everything pretty.


@aldeed
Contributor Author

aldeed commented Mar 4, 2014

OK, this was my bad. For some reason I made some changes to the url method and they were totally wrong. Something in my test template was masking the issue. Probably one of those nights I was up too late. :) It's fixed now.

@vsivsi

vsivsi commented Mar 4, 2014

Great, I'll check it out this afternoon.

@vsivsi

vsivsi commented Mar 5, 2014

Finally got around to testing the newest and everything looks good. One case that still doesn't react is when the user logs in or out. Since {{url}} includes a login token, I would expect it to automatically update when the status of the current user changes. I'll file a bug on cfs-handlebars for this.

@raix

raix commented Mar 5, 2014

fixed user deps in auth url

@aldeed
Contributor Author

aldeed commented Mar 5, 2014

I noticed that, too. Thanks, @raix!

@raix

raix commented Mar 21, 2014

Did we solve this guys?

@aldeed
Contributor Author

aldeed commented Mar 21, 2014

I think this thread got a bit derailed on another issue, but as far as the original post in this issue, I am personally still thinking about it. Sometimes I think it would be a good pattern, but most of the time I think it would be a big re-org and we wouldn't necessarily get much out of it.

One thought I had: We could potentially implement this side-by-side, allowing either multiple stores or more of an onChange pattern. In fact, @vsivsi could theoretically develop the entire onChange feature in a separate add-on package (hint!). This would allow us to get a real-world sense of which pattern is better.

@raix

raix commented Mar 21, 2014

Ok, I have to read this issue one more time when ready. BTW, the Node.js event emitter could perhaps be used for event handling.

Projects
None yet
Development

No branches or pull requests

3 participants