[1.0.0] Rework attachData #239
Comments
How does it depend on fs?
That is basically exactly what it does if I remember correctly. What problem does your API change solve? It seems more confusing to me, but maybe I just haven't understood the brilliance yet.
Ok, I take it as a compliment that you assume there's some brilliance hiding in there - I can't promise you that :) I'll just add some notes:
fsData server - but that's kinda ok, since that's used to load data from the filesystem - me not thinking.. I would like it to be streaming instead of using buffers (it seems to be, I've just skimmed the code roughly)
Yes, well I think the point is basically to use streaming whenever possible instead of buffers. The Http.request response is a readstream, the filesystem has createReadStream, and we can also pass in a read stream directly - the only one missing is if we pass in a buffer, but it's easy to create a read stream for that. The readStream can be passed directly on. That's the primary reason to factor out into smaller functions with basically the same api, and use streaming. The api can be used to store data on insert or attachData (with a mounted file). It's more about taking the concept of dataSources with their separate api. The insert and the attachData itself could use the dataSource api. I have only skimmed the code - it's just some ideas I got after you made the attachData + FS.Data code.
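Just to illustrate the buffer case, it's only a few lines (a sketch, not CFS code - the helper name is made up):

var stream = require('stream');

// wrap an in-memory Buffer in a Readable stream so it can be piped just like
// fs.createReadStream() or an http response
function bufferToReadStream(buffer) {
  var readable = new stream.Readable();
  readable._read = function () {
    this.push(buffer);  // emit the whole buffer in one chunk
    this.push(null);    // then signal end-of-stream
  };
  return readable;
}

// usage: bufferToReadStream(someBuffer).pipe(someWriteStream);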
I might need to re-read a few times, but I think what you're saying is exactly how it already does work. When I wrote it, my intention was to avoid buffers internally, but allow you to get one if you need it. Example: I attach a filepath on the server. When I do this, FS.Data just stores the filepath plus the type and size.
So no buffers involved. Then at some later point we ask it for a read stream. On the other hand, if I later want a Buffer, I ask for that instead. But in a pure streaming example, like all of the CFS internal code, there are no Buffers involved. It's the same when you pass in a URL. We simply store the URL, type, and size, and then later stream directly from the remote URL when you ask for the read stream. FS.Data could be moved to a separate package.
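To make that concrete, the filepath case works roughly like this (illustrative helper names only, this isn't the literal FS.Data source):

var fs = require('fs');

// attachData with a filepath: only the path and some metadata are stored
function attachFilePath(self, filepath, type) {
  self.filepath = filepath;
  self.type = type;
  self.size = fs.statSync(filepath).size;   // size comes from the file's metadata
}

// the streaming path, used by all of the internal CFS code - no Buffer is ever created
function getReadStream(self) {
  return fs.createReadStream(self.filepath);
}

// the Buffer path, only taken if someone explicitly asks for a Buffer later
function getBuffer(self, callback) {
  fs.readFile(self.filepath, callback);     // callback(error, buffer)
}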
Oh, and regarding separating out the handling of each type, I remember considering that but I thought there would be too much duplicate code? Although maybe you're right that it would make testing easier.
Really my only problem with your initial proposal at the top of this issue is this: it would put some of the data handling back in. But I do like the idea of separating the type handling the more I think about it. So we should do that, but only internally.
OK, @raix, I factored out FS.Data into a separate package: https://github.com/CollectionFS/Meteor-data-man
I took it off the FS namespace. I also refactored the server code as you suggested, separating the handling of each type. Tests are all passing still. Let me know if you think this is a good way to do it.
Ok, sorry, the factored-out code looks good. Regarding naming, I was thinking superDataMan, or maybe just dataSources or dataman :) Good idea about adding the package, we could map dataman onto FS.Data.
// parse a data URI like "data:image/png;base64,...." (string holds the data URI)
var regex = /^data:.+\/(.+);base64,(.*)$/;
var matches = string.match(regex);
var ext = matches[1];                      // the subtype, e.g. "png"
var data = matches[2];                     // the base64-encoded payload
var buffer = new Buffer(data, 'base64');   // decode straight into a Buffer
Btw. FS.Utility maps some of the underscore functions, e.g. each etc. So it's easier to switch to lodash etc., and packages don't have to use the underscore package directly.
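E.g. something along these lines (just to show the mapping idea, not the actual FS.Utility source):

var _ = require('underscore');   // or the Meteor underscore package

var FS = { Utility: {} };        // namespace stub just for this sketch
FS.Utility.each = _.each;        // delegate to underscore for now
FS.Utility.bind = _.bind;

// packages then call FS.Utility.each(...) etc., so the underlying library can be
// swapped for lodash (or native code) in one place later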
Good ideas. Using a regex is nice, too. The underscore dependency is there for now just because I used _.bind. We can replace it with native bind code.
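For the bind part it's a one-line swap (generic example, not the real call site):

var _ = require('underscore');

var counter = { n: 0, inc: function () { return ++this.n; } };

var incUnderscore = _.bind(counter.inc, counter);  // what the code does today
var incNative = counter.inc.bind(counter);         // native Function.prototype.bind, no dependency

incUnderscore();  // 1
incNative();      // 2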
Cool, I seem to remember that the node Buffer is handled specially in node (like the blob on the client) + using size on a dataUrl would cause the data to be in both self.buffer and self.dataUrl + we can reuse all the buffer code. Ok, I see bind is the only dependency - yep, good idea, it's not too hard to do a native bind. @aldeed: I'm decoupling a bit this week, got some heavy deadlines, I'll be back in a week's time.
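For the size point, something like this would avoid holding the decoded bytes twice (sketch only - self.buffer / self.dataUrl are just the names from above):

// derive the decoded byte length straight from the base64 payload of a data URI,
// so size() doesn't force the data into both self.buffer and self.dataUrl
function dataUriSize(dataUri) {
  var base64 = dataUri.split(',')[1] || '';
  var padding = (base64.match(/=+$/) || [''])[0].length;
  return (base64.length * 3) / 4 - padding;
}

// dataUriSize('data:text/plain;base64,aGVsbG8=') === 5   ("hello" is 5 bytes)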
FYI, I've pushed some improvements to data-man. All of your suggestions from above plus limited support for managing a readstream, plus a couple other improvements.
I think this issue could be closed now except for one of your suggestions not yet done:
I'd have to look at the code more to see if that is a good idea.
Closing. Will move last point to a new issue.
@aldeed The data package depends on fs on the server-side, but I don't think we have to. Some ideas:

- data in the FS.File object
- data by having dataHandlers that stream directly to FS.TempStore.createWriteStream(fileObj);
- attachData should trigger the streaming to TempStore, and at some point on the client it should trigger upload (maybe Meteor.createWriteStream('fs.upload', fsObj))
- dataHandlers simply have a createReadStream, size and type property, e.g.:

ReadStream:
var data = FS.Data.ReadStream(readStream, { length:, type:, name: });
data.createReadStream();
data.size()
data.type()
data.name()

These simply utilize the FS.Data.ReadStream api:

RemoteUrl:
FS.Data.RemoteUrl(url, { [name: ]})

Buffer:
FS.Data.Buffer(buffer, { name:, type: })

FileSystem:
FS.Data.FileSystem(path, { [name] })

At some point when we have client-side streams we can do basically the same on the client too.
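A rough sketch of what one of these handlers could look like on the server (illustrative only - it assumes node's fs and is not meant as the final implementation):

var fs = require('fs');
var path = require('path');

var FS = { Data: {} };   // namespace stub for the sketch

// FileSystem handler: wraps a path and exposes the proposed api
FS.Data.FileSystem = function (filePath, options) {
  options = options || {};
  return {
    createReadStream: function () {
      return fs.createReadStream(filePath);              // stream straight from disk, no Buffer
    },
    size: function () {
      return fs.statSync(filePath).size;                 // byte size from the file's metadata
    },
    type: function () {
      return options.type;                               // content type, if the caller knows it
    },
    name: function () {
      return options.name || path.basename(filePath);    // default to the file's basename
    }
  };
};

// usage, mirroring the snippets above:
// var data = FS.Data.FileSystem('/tmp/photo.jpg', { type: 'image/jpeg' });
// data.createReadStream().pipe(FS.TempStore.createWriteStream(fileObj));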