Unbounded %MEM growth #2628
One general note, after a brief look at this example: some computations inherently take a lot of space; that does not necessarily mean the memory growth is unbounded. In this concrete example, there seems to be a huge internal "branching" of terms. For instance, consider just the following fragment:

```prolog
?- D=3, Nmax=6, L+\(findall(Q, qs_d_nmax(Q, D, Nmax), Qs), length(Qs, L)).
   D = 3, Nmax = 6, L = 21952.
```

This list with more than 20_000 elements is represented in memory, and subsequent goals seem to fan this out further. I once had a similar situation with a huge Binary Decision Diagram (more than 40 GB) I wanted to represent in memory. For such purposes, you can rent a server with 256 GB of RAM for less than 200 USD per month.
My little laptop has 64 GB of RAM, I think. I know I will soon want to refine the generation of test cases to something less profligate, and no doubt I will take advantage of CLP(ℤ) for this! But my sense is that it's the "fanning-out" of subsequent goals you reference that really causes the problem. (I conjecture that a more 'eager' …)

```prolog
%?- D=2, Nmax=6, time(L+\(findall(Q, qs_d_nmax(Q, D, Nmax), Qs), length(Qs, L))).
%@ % CPU time: 0.068s, 249_976 inferences
%@ D = 2, Nmax = 6, L = 784.

%?- D=3, Nmax=6, time(L+\(findall(Q, qs_d_nmax(Q, D, Nmax), Qs), length(Qs, L))).
%@ % CPU time: 1.519s, 6_660_470 inferences
%@ D = 3, Nmax = 6, L = 21952.

%?- D=4, Nmax=6, time(L+\(findall(Q, qs_d_nmax(Q, D, Nmax), Qs), length(Qs, L))).
%@ % CPU time: 41.612s, 182_781_624 inferences
%@ D = 4, Nmax = 6, L = 614656.
```
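A side observation (mine, inferred from just the three data points above): the reported solution counts are exactly powers of 28 (784 = 28², 21952 = 28³, 614656 = 28⁴), so each increment of D appears to multiply the number of solutions by a constant factor of 28, which makes the exponential fan-out explicit:

```prolog
?- X is 28^2, Y is 28^3, Z is 28^4.
   X = 784, Y = 21952, Z = 614656.
```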
Also, Scryer Prolog does not yet have a garbage collector and therefore currently reclaims memory only on backtracking. You can use this to reclaim unneeded memory by wrapping a computation in a construct that backtracks over it once the result is obtained (as findall/3 does in the example below).

Another way out could be to store what you need more compactly in memory. For instance, you can use the fundamental theorem of arithmetic to store a sequence of positive integers as a single integer.

Another approach may be to introduce program-specific "abbreviations" of sequences or other data structures that occur frequently in your program.

Yet another way could be to make use of Scryer Prolog's compact internal string representation, and store a sequence of integers as a list of characters with these code points. For instance, we would expect the following to create such a compact string from a list of codes, and also automatically reclaim data that was created during the conversion:

```prolog
:- use_module(library(lists)).
:- use_module(library(iso_ext)).

codes_to_chars(Codes, Chars) :-
        findall(Cs, ( maplist(char_code, Cs0, Codes),
                      partial_string(Cs0, Cs, [])
                    ), [Chars]).
```

Example:

```prolog
?- codes_to_chars([1,2,3,4], Chars).
   Chars = "\x1\\x2\\x3\\x4\".
```
One additional point: a similarly compact representation can also be obtained with atom_codes/2:

```prolog
?- atom_codes(A, [1,2,3,4,5,6]).
   A = '\x1\\x2\\x3\\x4\\x5\\x6\'.
```
Also implements what *should* (algorithmically) have been [I think] a more efficient computation d_gs/2 of all D+1 gₓ, but without demonstrated speedup and still with very high peak memory usage.

Below is a self-contained repro for my problem. The unbounded %MEM growth in top apparently happens where I try to use d_n_qs_int/4 'backwards', to convert the integer 4th argument to the list-of-quotients 3rd argument. (I would really like not to have to implement separate 'forward' and 'backward' predicates! 🤡) I'm very glad to pare this repro down further, if the problem is not immediately apparent.
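Regarding the wish for a single predicate that runs both 'forwards' and 'backwards': one common route is CLP(ℤ) (library(clpz) in Scryer Prolog), since #=/2 is a true relation over integers. The actual d_n_qs_int/4 is not shown here, so the following is only a hypothetical sketch of the general idea, relating a list of bounded "digits" (least significant first) to a single non-negative integer with the same clauses in both directions:

```prolog
:- use_module(library(clpz)).

% Hypothetical sketch: base_digits_int(Base, Ds, N) relates a list Ds
% of digits in 0..Base-1 (least significant first) to the integer N.
% CLP(Z) constraints let the same clauses convert in either direction.
base_digits_int(_, [], 0).
base_digits_int(Base, [D|Ds], N) :-
        D #>= 0, D #< Base,
        N1 #>= 0,
        N #= D + Base*N1,
        base_digits_int(Base, Ds, N1).
```

For instance, `?- base_digits_int(10, Ds, 42).` yields `Ds = [2,4]` as its first answer. Note the caveat: backtracking produces further encodings padded with trailing zeros, so in practice you would additionally fix the list's length or require the last digit to be nonzero.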