struct destructors are not being called #987

Merged: 1 commit merged into vibe-d:master from etcimon:patch-2 on Feb 23, 2015

Conversation

@etcimon (Contributor) commented on Feb 23, 2015

This is one of a few fixes I made in my iteration of the memory module; the bug could have caused leaks if left in vibe. I think there are also other places in the containers where elements are not destroyed properly.

@s-ludwig (Member)

Good catch! I think that in the case of FreeListRef, get for non-class types should return a ref instead of a pointer. Then the call can simply be .destroy(get()). I'll have a look at the other two cases.
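
Roughly, the idea looks like this (a minimal sketch under assumptions about the surrounding allocator, not the actual vibe.d FreeListRef code):

```d
// Hypothetical sketch, not the actual vibe.d FreeListRef:
// for non-class types get() returns by ref, so the destructor
// call can simply be destroy(get()).
struct FreeListRef(T)
    if (!is(T == class))
{
    private void[] m_memory; // storage handed out by the free-list allocator

    // Returning ref T instead of T* lets callers write destroy(get()).
    ref T get() { return *cast(T*) m_memory.ptr; }

    void release()
    {
        destroy(get()); // runs the struct destructor ~this()
        // ... hand m_memory back to the free list (omitted) ...
    }
}
```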

s-ludwig added a commit that referenced this pull request Feb 23, 2015
struct destructors are not being called
@s-ludwig merged commit bfe0638 into vibe-d:master on Feb 23, 2015
@s-ludwig (Member)

https://github.com/rejectedsoftware/vibe.d/blob/2b8a060bca245cd0ddf97926e02ad4b08fc8fef7/source/vibe/utils/memory.d#L508 seems to be fine already (the pointer is being dereferenced).
The other two have now been fixed by 842fce3. I've changed freeArray to perform the destroy calls, as allocArray similarly performs emplace.
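
For illustration, the symmetry looks roughly like this; the signatures are simplified assumptions and not the actual vibe.d memory module API:

```d
import std.conv : emplace;

// Hypothetical, simplified signatures for illustration only.
T[] allocArray(T)(size_t n)
{
    auto arr = cast(T[]) new ubyte[T.sizeof * n]; // stand-in for the real allocator
    foreach (ref e; arr)
        emplace(&e);                              // construct each element in place
    return arr;
}

void freeArray(T)(ref T[] arr)
{
    foreach (ref e; arr)
        destroy(e);                               // run ~this() for each element
    // ... return the underlying memory to the allocator (omitted) ...
    arr = null;
}
```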

@etcimon (Contributor, Author) commented on Feb 24, 2015

> I've changed freeArray to perform the destroy calls, as allocArray similarly performs emplace.

With that change, the HashMap will have all of its elements destroyed during reallocation:

https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/utils/hashmap.d#L235

@etcimon (Contributor, Author) commented on Feb 24, 2015

I fixed this with a max_destroy parameter in freeArray that stops the destructor loop after a certain number of elements, because I use this in Vector as well.

@etcimon deleted the patch-2 branch on February 24, 2015, 16:19
@etcimon mentioned this pull request on Feb 24, 2015
@s-ludwig (Member)

The error here is in how HashMap copies the items. It should perform a normal copy instead of just bit-blitting the ubytes. The other possibility would be to bit-blit Value.init in the first loop instead of using emplace. Then we could use a bool call_destructors = true parameter in freeArray to save some overhead.
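
A rough sketch of the reallocation alternatives described above (hypothetical code, not the actual vibe.d HashMap implementation):

```d
// Hypothetical sketch of the reallocation fix: copying with normal copy
// semantics runs postblits, so the old elements may safely be destroyed
// afterwards.
void grow(Value)(ref Value[] table, size_t newLength)
{
    auto newTable = new Value[newLength]; // new slots start out as Value.init

    foreach (i, ref v; table)
        newTable[i] = v;                  // normal copy, not a raw bit-blit

    // The alternative discussed above: bit-blit the bytes over instead and
    // then skip the destructor calls (call_destructors = false) when the old
    // array is freed, since ownership simply moves to the new table.

    table = newTable;
}
```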

max_destroy sounds like a rather blunt tool for a critical thing like object destruction (unless used with either 0 or size_t.max). Doesn't that just hide a different issue somewhere else?

But the bad thing is that freeArray is public and not in vibe.internal, so this might have broken other code, too...

@s-ludwig (Member)

> so this might have broken other code, too...

Or fixed ;)

s-ludwig added a commit that referenced this pull request Feb 26, 2015
@etcimon (Contributor, Author) commented on Feb 26, 2015

Yes, that's the solution, thanks!

@dariusc93 (Contributor)

@s-ludwig I'm not sure whether the memory is supposed to free up over time or not, but I ran a quick test and it looks like the memory use is cut in half (it would normally climb to about 1 GB with 1000 concurrent connections and over 1M requests each, but now it sits at ~531 MB, though it still grows slowly if the benchmark is left running). Is there a way to call the GC to free the memory? If you run example/http_server and then run the benchmark "ab -k -n 1000000 -c 1000 http://localhost:8080/", you'll see that the memory spikes up and doesn't free itself over time, which still leaves me concerned about malicious attacks taking advantage of this issue.

@etcimon shouldn't pull #893 have fixed this issue as well, or is it related to something else?

@s-ludwig (Member)

I think that the GC still never frees memory, it only recycles it for further use. This could have changed recently, though; I haven't been able to follow the development of the last few months. But if there are no memory leaks of any kind (false GC pointers could also be an issue), it should definitely top out somewhere.
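
For reference, a minimal illustration of the explicit calls the D runtime exposes in core.memory; whether GC.minimize actually returns pages to the operating system depends on the GC implementation in use:

```d
import core.memory : GC;

void tryReleaseMemory()
{
    GC.collect();  // run a full collection cycle
    GC.minimize(); // ask the GC to return unused pools to the OS (best effort)
}
```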

One thing would be interesting: Do you also get constantly growing memory usage with the bench_http_server example? It uses -version=VibeManualMemoryManagement and should not allocate any GC memory during request processing, so if memory grows there it's clear that there is an actual memory leak.

@etcimon (Contributor, Author) commented on Feb 26, 2015

> should not allocate any GC memory during request processing, so if memory grows there it's clear that there is an actual memory leak.

That's only half true. If you deserialize Json, it will allocate on the GC, a lot! I fixed this in my new repo, spd, using scoped pools to push that memory use into manual management.
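
As a hypothetical sketch of the scoped-pool pattern mentioned here (not the actual spd or vibe.d API): allocations are carved out of one region while the pool is in scope and are released all at once when it goes out of scope, keeping the data off the GC heap:

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical ScopedPool, for illustration only.
struct ScopedPool
{
    private ubyte* m_base;
    private size_t m_used, m_capacity;

    this(size_t capacity)
    {
        m_base = cast(ubyte*) malloc(capacity);
        m_capacity = capacity;
    }

    @disable this(this); // the pool owns its region; don't copy it

    void[] alloc(size_t size)
    {
        assert(m_used + size <= m_capacity, "pool exhausted");
        auto slice = m_base[m_used .. m_used + size];
        m_used += size;
        return slice;
    }

    ~this() { free(m_base); } // everything is released at scope exit
}
```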

Also, 2.067 adds new features to minimize the GC pool sizes more efficiently, which is why it's important that vibe.d becomes compatible soon.

@dariusc93 (Contributor)

@etcimon is 2.067 still unstable?

@dariusc93 (Contributor)

@s-ludwig the memory doesn't spike up with the bench-http-server example. It stays at ~220 MB.

@s-ludwig (Member)

@etcimon: Yes, I was just talking about the basic request processing itself. This is certainly true for ab and bench-http-server.

@dariusc93: That's good to know. Are the 220MB the virtual memory or the physical/real memory use? On 64-bit systems, each task reserves a 16 MB address range for the stack, but this is not actually mapped to memory if not used (usually the stack use will max out at a few hundred kB). So a virtual memory use of this much wouldn't be surprising for a concurrency factor of 1000 (seems even a bit too low), but for real memory use it would definitely be too much.

@dariusc93 (Contributor)

@s-ludwig real memory. Every application I've tested against Apache Bench (or loader.io in production) is measured in real memory. The bench-http-server example is currently using ~60 MB under my test (I haven't tested multiple times), but others like the web and http_server examples start spiking up under heavy testing (up to 2 GB of real memory, and possibly more, but I haven't pushed the limit).
