[Sad Trombone] Image pixel data access #57
base: master
Conversation
Ref: #48
```swift
let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue)
let bytesPerRow = width * 4

let context = CGBitmapContextCreate(data, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
```
This context is leaking :)
Ah, right you are, @NachoSoto, thanks!

’tis not the context that is leaking (CF objects are automatically memory-managed), but the data allocated on #57 (diff). I added a `PixelDataDestroyer` class to properly clean up the underlying `PixelBitmap` data on deinit.
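For reference, a minimal sketch of what such a destroyer could look like (the class name is from this thread; the buffer type and initializer are assumptions, not the PR's actual code):

```swift
// A minimal sketch, assuming the bitmap buffer is a manually-allocated
// UnsafeMutablePointer<UInt8>; the real implementation may differ in details.
final class PixelDataDestroyer {
    private let pixelData: UnsafeMutablePointer<UInt8>
    private let capacity: Int

    init(pixelData: UnsafeMutablePointer<UInt8>, capacity: Int) {
        self.pixelData = pixelData
        self.capacity = capacity
    }

    deinit {
        // Free the underlying bitmap buffer when the owner goes away.
        pixelData.dealloc(capacity)
    }
}
```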
Oh, you're right, I was thinking of `UIGraphicsEndImageContext()`.
```swift
init(image: Image) {
    width = Int(image.size.width)
    height = Int(image.size.height)
```
Careful: `Image.size` reports sizes in points, but for these CG APIs, you'll need to work in pixels.
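For example (a sketch; obtaining the scale factor from `UIScreen` is an assumption about this codebase):

```swift
import UIKit

// Image.size is in points; scale up to pixels before sizing the CG bitmap.
let scale = UIScreen.mainScreen().scale
let pixelWidth = Int(image.size.width * scale)
let pixelHeight = Int(image.size.height * scale)
```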
Ugh, on-device performance is horrendous :(

```js
function(pixel) {
    pixel.red = 1.0 - pixel.red
    pixel.green = 1.0 - pixel.green
    pixel.blue = 1.0 - pixel.blue
    return pixel
}
```

takes ~14 seconds on an iPad Air. Even just mapping over all pixels without modifying them takes 6–7 seconds :| Most time is spent in …
Added a sample scene to Examples/
I'm not super surprised, but that's a bummer. Two ideas, depending on exactly what's slow here:
Yup. I think 2. is going to be the right decision in the end, but I'll try 1. first.
(Btw, if you're going with 2, on React Native we saw that serializing to JSON and transferring only one string over the bridge was faster than building up the arrays/objects manually using the JSC API.)
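Something like this sketch, perhaps (names and signatures are illustrative, not from the PR):

```swift
import JavaScriptCore

// Serialize the pixel data to one JSON string in native code, then let the
// JS side JSON.parse it, instead of building nested arrays via the JSC API.
func sendPixelsAsJSON(pixelRows: [[Int]], context: JSContext) {
    let data = try! NSJSONSerialization.dataWithJSONObject(pixelRows, options: [])
    let json = NSString(data: data, encoding: NSUTF8StringEncoding)!
    context.setObject(json, forKeyedSubscript: "__pixelJSON")
    context.evaluateScript("var pixels = JSON.parse(__pixelJSON);")
}
```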
@spicyj thanks for the tip!
Interestingly, there's actually a standard …
`CIFilter` might come in handy. It covers quite a few image-processing tasks. Here's a link to the list of filters:
Yep, good point. It can only filter things that are already raster, of course, and CI isn't suitable for realtime applications, but there are ways to mitigate both of those issues if we find we need it.
@andymatuschak I didn't realize that. I did a quick search and Apple has some suggestions on how to make it work for realtime applications. They suggest using an EAGL context so all the bits stay on the GPU. The details are outlined in "Creating a Core Image Context on iOS When You Need Real-Time Performance" on https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html.
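Roughly this setup, per that doc (a sketch; the option value is Apple's realtime recommendation):

```swift
import CoreImage
import OpenGLES

// Back the CIContext with an EAGL context so intermediate images stay on the
// GPU; disabling color management is recommended for realtime use.
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext,
                          options: [kCIContextWorkingColorSpace: NSNull()])
```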
There are probably also some situations where you don't necessarily need to apply real-time effects, in which case plain old CI would work just fine.
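For the non-realtime case it's pleasantly terse; e.g. the invert example from this thread as a sketch (`pusheenImage` is a hypothetical `UIImage`):

```swift
import UIKit
import CoreImage

// Invert colors with a built-in filter instead of a per-pixel loop.
let input = CIImage(image: pusheenImage)!  // pusheenImage: hypothetical input
let filter = CIFilter(name: "CIColorInvert")!
filter.setValue(input, forKey: kCIInputImageKey)
let inverted = filter.outputImage
```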
Right, thanks! I was responsible for UIKit's rendering concerns at Apple, so I'm familiar, just not looking forward to digging that ditch again. Ah well. Seems like there are three use cases here:
👍
Sorry for the wait, everyone; life/work just keeps getting in the way 😞 Went with this suggestion:
Still very slow on the device :( Inverting a 400x400 image of Pusheen takes 11+ seconds on an iPad Air.

@spicyj I tried constructing a JSON string representation of that, but it just kept crashing. I'm not surprised, though, since it's pretty ridiculous (an array of 160000 arrays of 3 ints).

I will still try 1. but I don't expect any significant improvements. At this point I'm not sure if there's any point in keeping this API if it runs so slow on-device. Maybe if it only lived in the Swift layer...? 👎
@saniul When JSONifying, you could try sending each pixel down as a 32-bit (24-bit?) int and then convert it using pure JS to the nested arrays? Maybe that would be lower overhead. Too bad there's no JSC API for typed arrays.
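A sketch of that packing on the Swift side (illustrative, not from the PR):

```swift
// Pack one pixel's RGB into a single Int (0xRRGGBB); a flat array of these
// crosses the bridge, and pure JS unpacks with shifts and masks:
//   r = (p >> 16) & 0xFF; g = (p >> 8) & 0xFF; b = p & 0xFF
func packPixel(red: UInt8, green: UInt8, blue: UInt8) -> Int {
    return (Int(red) << 16) | (Int(green) << 8) | Int(blue)
}
```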
(0.3s to run over the pixels is small relative to 11s but still pretty large relative to 16ms…)
Rough. Thanks a lot for trying this out, @saniul. No need to apologize for delays! :)

The performance characteristics of #1 should be pretty different: the only cost should be context switching, as opposed to all the copies here. It'll obviously be much worse than the 0.3s time here, but maybe it'll be OK?

I can imagine more aggressive ways for us to attack this (e.g. implementing a new JS "data" type which knows how to be backed by some buffer), but if approach #1 doesn't work we should probably surrender for now. :/
@spicyj sending ints over the “wire” and then breaking them down in the JS later didn’t really help.

@andymatuschak I implemented …
Hm. One more idea: you'd have half as many context switches and would move the work of iteration to Swift if you made …
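Presumably something along these lines (a sketch; the sentence above is truncated, so the exact API is a guess, and `jsValueForPixel`/`pixelFromJSValue` are hypothetical helpers):

```swift
import JavaScriptCore

// Swift walks the buffer once and calls the JS transform per pixel, so each
// pixel costs one bridge crossing instead of separate read and write calls.
func mapPixels(bitmap: PixelBitmap, jsTransform: JSValue) -> PixelBitmap {
    return bitmap.transform { pixel in
        let result = jsTransform.callWithArguments([jsValueForPixel(pixel)])
        return pixelFromJSValue(result)
    }
}
```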
(also: thanks for trying all this stuff @saniul!!!)
oh @andymatuschak, `pos` is actually an `Int` (treating the data as an array). there are overloads with an additional parameter for …
Ah, gotcha.
Also, no problem! I’m very happy to help.
It’s a bit awkward that `Pixel`s store color info in UInt8 [0, 255] instead of Double [0.0, 1.0], but I provided a `.color` getter and setter, which could be enough... Thoughts?

`PixelBitmap` has a `transform` method, which constructs a new `PixelBitmap` by applying the transformation on the original. The additional convenience overloads of `transform` provide information about the index (0..<width*height), or the row/column of the pixel. What do you think? Should this be simpler?
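For concreteness, a sketch of the shape described above (type and method names are from this thread; the signatures and internals are assumptions):

```swift
struct Pixel {
    var red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8
}

struct PixelBitmap {
    let width: Int
    let height: Int
    private var pixels: [Pixel]

    // Constructs a new PixelBitmap by applying the transformation per pixel.
    func transform(f: Pixel -> Pixel) -> PixelBitmap {
        var result = self
        result.pixels = pixels.map(f)
        return result
    }

    // Convenience overload that also passes each pixel's row and column.
    func transform(f: (row: Int, column: Int, pixel: Pixel) -> Pixel) -> PixelBitmap {
        var result = self
        for row in 0..<height {
            for column in 0..<width {
                let i = row * width + column
                result.pixels[i] = f(row: row, column: column, pixel: pixels[i])
            }
        }
        return result
    }
}
```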