
Don't install callbacks on values of TripleDict, MonoDict #14159

Closed
nbruin opened this issue Feb 22, 2013 · 71 comments

@nbruin
Contributor

nbruin commented Feb 22, 2013

In #11521 trac_11521_callback.patch a callback was installed on a weakref value of a TripleDict:

 _cache[key] = KeyedRef(H, _cache.eraser, (id(X),id(Y),id(category)))

That's not safe: If the value under that key gets changed while the value remains in memory, the callback will be executed and remove an entry that now has an unrelated value!

So: Either prove that the value under this key will not change for the lifetime of H (keep in mind that cyclic references can extend the lifetime of an otherwise unreachable object essentially indefinitely, so the proof needs to include that all key components survive H, otherwise those ids could get reused) or provide a more selective callback (for instance, ensure that the value is still as expected before deleting).
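The hazard can be illustrated with a plain dict standing in for TripleDict (a minimal CPython sketch; the names cache, eraser and the Homset class are hypothetical stand-ins, and only weakref.KeyedRef is the real stdlib class used in the patch):

```python
import weakref

cache = {}  # stands in for the TripleDict

class Homset:  # hypothetical stand-in for the cached value H
    pass

def eraser(ref):
    # Deletes whatever currently sits under ref.key -- even if the entry
    # no longer holds the value this weakref was created for.
    cache.pop(ref.key, None)

key = ("X", "Y", "category")
old = Homset()
cache[key] = weakref.KeyedRef(old, eraser, key)

stale = cache[key]   # some outside reference keeps the old weakref alive
new = Homset()
cache[key] = new     # the key now holds an unrelated value
del old              # old dies, its callback fires...
assert key not in cache   # ...and removes the unrelated value `new`
```

CPython's reference counting makes the collection of `old` immediate here; under a tracing collector the callback would fire later, with the same effect.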

Note: The patch trac_14159_safer_callback.patch follows the second approach, so that a memory leak remains fixed.

Another point is that the API of _cache.eraser isn't really published, so this behaviour is probably better encapsulated in a method on the dict itself.

See #12313 comment 317 for a situation that likely implicates these callbacks (someone has to hold strong references to these keys in order to set the dict, so the absence of the keys suggests a spurious execution of this kind of callback)

Apply

Depends on #13387
Depends on #14254

CC: @simon-king-jena @jpflori

Component: memleak

Author: Simon King

Reviewer: Nils Bruin

Merged: sage-5.10.beta0

Issue created by migration from https://trac.sagemath.org/ticket/14159

@nbruin nbruin added this to the sage-5.9 milestone Feb 22, 2013
@simon-king-jena
Member

comment:1

What do you mean by "value under that key"?

Ah! You mean this situation: T[a,b,c]=v and later T[a,b,c]=w. Then v might become garbage collected and its callback would remove w from the dictionary.

Indeed, this would be bad.

Potential solutions (two alternatives):

  1. The callback shall test whether the value in the entry to be deleted coincides with the value that is being garbage collected.

  2. If an existing entry is overwritten by __setitem__ then its callback will be removed.
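The first alternative can be sketched with a plain dict standing in for TripleDict (a sketch only, not the eventual Sage code; safe_eraser and Homset are hypothetical names): the callback deletes the entry only if it still holds the very weakref that died.

```python
import weakref

cache = {}  # stands in for the TripleDict

class Homset:  # hypothetical stand-in for the cached value
    pass

def safe_eraser(ref):
    # Only delete if the entry still holds the weakref whose
    # referent has just died; otherwise leave the entry alone.
    if cache.get(ref.key) is ref:
        del cache[ref.key]

key = ("X", "Y", "category")
old = Homset()
cache[key] = weakref.KeyedRef(old, safe_eraser, key)

stale = cache[key]        # keep the stale weakref alive
new = Homset()
cache[key] = new          # overwrite with an unrelated value
del old                   # stale callback fires, but the identity check skips deletion
assert cache[key] is new  # the new entry survives
```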

@nbruin
Contributor Author

nbruin commented Feb 22, 2013

comment:2

Replying to @simon-king-jena:

  2. If an existing entry is overwritten by __setitem__ then its callback will be removed.

I guess you could reach into the KeyedRef object and invalidate the key to prevent action on callback. Doing so would complicate all our dict setting and deleting, so I think that is not an attractive option.

Just deleting the KeyedRef object won't guarantee it will go away: someone else may be holding a reference to the KeyedRef object as well.
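A quick sketch of this point in plain Python (CPython refcounting assumed; the names are illustrative): dropping the dict's reference to a weakref does not cancel its callback as long as someone else still holds the weakref.

```python
import weakref

class Value:
    pass

fired = []
v = Value()
store = {"k": weakref.ref(v, lambda ref: fired.append("fired"))}

extra = store["k"]   # someone else also holds the weakref object
del store["k"]       # the dict no longer references it...
del v                # ...yet the callback still runs when v dies
assert fired == ["fired"]
```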

So probably there needs to be an additional option on ...DictEraser objects to check that the value is as expected (or write yet another callback class, but that sounds like a waste).

And then ...Dict would grow an extra set_weak_value to install the appropriate KeyedRef on the value, because letting other code instantiate eraser objects is scary.

It feels like horrible feature creep, but I guess you have a good use case (can you come up with an alternative solution?)

@simon-king-jena
Member

comment:3

Replying to @nbruin:

Replying to @simon-king-jena:

  2. If an existing entry is overwritten by __setitem__ then its callback will be removed.

I guess you could reach into the KeyedRef object and invalidate the key to prevent action on callback. Doing so would complicate all our dict setting and deleting, so I think that is not an attractive option.

So, what about the first option? It would make sure that an entry will only be deleted if the value is identical with the object whose callback is being called.

@nbruin
Contributor Author

nbruin commented Feb 22, 2013

comment:4

Replying to @simon-king-jena:

So, what about the first option? It would make sure that an entry will only be deleted if the value is identical with the object whose callback is being called.

Yes, sorry for not being clear about it. That's what the second part of the reply expands on.

@simon-king-jena
Member

comment:5

Replying to @nbruin:

Replying to @simon-king-jena:

So, what about the first option? It would make sure that an entry will only be deleted if the value is identical with the object whose callback is being called.

Yes, sorry for not being clear about it. That's what the second part of the reply expands on.

Sorry, I don't see the relation with suggestion 1.

So probably there needs to be an additional option on ...DictEraser objects to check that the value
is as expected

Why? The callback would be checking that the value is identical with the object pointed to by the weak reference. This should be cheap enough to be done by default.

@simon-king-jena
Member

comment:6

I thought that the following would expose the problem, but it doesn't:

sage: from sage.structure.coerce_dict import TripleDict
sage: import gc
sage: class Foo: pass
sage: 
sage: a = 1
sage: b = 2
sage: c = 3
sage: v = Foo()
sage: w = Foo()
sage: T = TripleDict(13)
sage: T[a,b,c] = v
sage: T[a,b,c] = w
sage: del v
sage: _ = gc.collect()
sage: len([x for x in gc.get_objects() if isinstance(x,Foo)])
1

So, v got collected and thus the callback was executed. However:

sage: id(T[a,b,c])
85762632
sage: id(w)
85762632

The entry that was previously holding v did not get deleted when v's callback was called.

So, are we really in trouble?

@nbruin
Contributor Author

nbruin commented Feb 22, 2013

comment:7

You have to make sure that the weakref object lives to see the death of v (and ensure that you make such a weakref in the first place!)

sage: from sage.structure.coerce_dict import TripleDict
sage: import gc
sage: import weakref
sage: class Foo: pass
sage: 
sage: a = 1
sage: b = 2
sage: c = 3
sage: v = Foo()
sage: w = Foo()
sage: T = TripleDict(13)
sage: T[a,b,c] = weakref.KeyedRef(v,T.eraser,(id(a),id(b),id(c)))
sage: h=T[a,b,c]
sage: T[a,b,c] = w
sage: del v
sage: _ = gc.collect()
sage: len([x for x in gc.get_objects() if isinstance(x,Foo)])
1
sage: T[a,b,c] == w
KeyError: (1, 2, 3)

I admit, the weakref surviving outside the dict is not a terribly likely event, but from your code at #11521 it wasn't entirely clear to me that you can absolutely rule it out.

@simon-king-jena
Member

comment:8

Replying to @nbruin:

You have to make sure that the weakref object lives to see the death of v (and ensure that you make such a weakref in the first place!)

Apparently I had wrong memories about my code, then. I had in mind that the TripleDict has a weak reference with callback to the value.

sage: from sage.structure.coerce_dict import TripleDict
sage: import gc
sage: import weakref
sage: class Foo: pass
sage: 
sage: a = 1
sage: b = 2
sage: c = 3
sage: v = Foo()
sage: w = Foo()
sage: T = TripleDict(13)
sage: T[a,b,c] = weakref.KeyedRef(v,T.eraser,(id(a),id(b),id(c)))
sage: h=T[a,b,c]
sage: T[a,b,c] = w
sage: del v
sage: _ = gc.collect()
sage: len([x for x in gc.get_objects() if isinstance(x,Foo)])
1
sage: T[a,b,c] == w
KeyError: (1, 2, 3)

I admit, the weakref surviving outside the dict is not a terribly likely event, but from your code at #11521 it wasn't entirely clear to me that you can absolutely rule it out.

Hm. This particular TripleDict is only used by hom, and hom does not overwrite an existing entry. So, this particular case should be safe. However, it still seems reasonable to me to let the callback check that the to-be-deleted value has not changed.

@nbruin
Contributor Author

nbruin commented Feb 22, 2013

comment:9

Replying to @simon-king-jena:

Hm. This particular TripleDict is only used by hom, and hom does not overwrite an existing entry. So, this particular case should be safe. However, it still seems reasonable to me to let the callback check that the to-be-deleted value has not changed.

I was fully expecting that to be true, but I could not come up with an explanation for the bug Jeroen reported at #12313, where the line in

 _cache[key] = KeyedRef(H, _cache.eraser, (id(X),id(Y),id(category)))

leads to a key error in TripleDict.set in the line:

r1,r2,r3 = <tuple>(self._refcache[h1,h2,h3])

The only thing I could think of for that line to fail is:

  • This KeyedRef object W is constructed
  • _cache.set(k1,k2,k3,W) is called with k1=id(X),k2=id(Y),k3=id(category). Note that the caller is holding strong references to (X,Y,category).
  • In order to end up in the line r1,r2,r3 = <tuple>(self._refcache[h1,h2,h3]), the key triple (k1,k2,k3) must be present in this TripleDict and hence in self._refcache
  • Yet, we get a KeyError. Calling self._refcache[h1,h2,h3] requires the construction of a new tuple (h1,h2,h3), which could trigger a GC and hence a callback that would remove entries. So the only explanation I see is that we're getting a callback with key (k1,k2,k3) due to a GC triggered by this tuple allocation.
  • However, this callback cannot result from any of X,Y,category getting collected, because someone is holding strong references to them. Hence such a callback must be coming from somewhere else.
  • The code pointed to (ironically the same code that triggers this TripleDict operation) is an example of code that installs such callbacks. So a possible scenario is that a statement
    _cache.set(k1,k2,k3,Wold)
    was executed before, that Wold is still hanging around in memory and that it finally gets removed due to this poorly timed garbage collection, and consequently that the key triple (k1,k2,k3) receives a callback, even though it is pointing to (or is about to be pointing to -- we don't know what happened in between) another, unrelated value.
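The crucial ingredient in this scenario is that a weakref callback can run at a seemingly unrelated point in the program: cyclic garbage is only reclaimed by the cyclic collector, which may be triggered by almost any allocation. A minimal sketch (gc is disabled and gc.collect() called explicitly purely to make the moment of collection deterministic for the demo):

```python
import gc
import weakref

gc.disable()                    # make the moment of collection explicit

class Node:
    pass

fired = []
a, b = Node(), Node()
a.partner, b.partner = b, a     # reference cycle: refcounting alone cannot free it
ref = weakref.ref(a, lambda r: fired.append("callback"))
del a, b                        # unreachable now, but still in memory

assert not fired                # no callback yet: the cycle is uncollected garbage
gc.collect()                    # in real code, any allocation may trigger this...
assert fired == ["callback"]    # ...and with it, weakref callbacks
gc.enable()
```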

In this scenario, values in this Hom dictionary do get changed.

We need to ensure that TripleDict cannot trigger GC at that point anyway (and that is what #13387 does as a side-effect), because all kinds of other bad things could happen as well. However, given that it needs such a complicated explanation, I would hope we can verify my assumptions, because otherwise my fix might not be sufficient.

Hence the formulation of the ticket: If you can prove that installing this callback is safe, I have no problem with it.

@nbruin
Contributor Author

nbruin commented Feb 22, 2013

comment:10

Ouch, I looked at that code and you're correct: That doesn't overwrite the value at all (it's really a cache). Incidentally, at that code:

    try:
        H = _cache[key]()
    except KeyError:
        H = None
    if H is not None:
        # Are domain or codomain breaking the unique parent condition?
        if H.domain() is X and H.codomain() is Y:
            return H

Since X,Y are part of key, which is looked up by identity, the "domain" and "codomain" test should really be superfluous. So perhaps we want

if H is not None:
   assert H.domain() is X and H.codomain() is Y
   return H

since something is going very wrong if this is not the case! (H would not be allowed to sit there in the dict).

Furthermore, H has strong references to X, Y, category in the form of H.domain(), H.codomain() and H.homset_category(), so if H is alive then so are the keys. This means that the callback on H must happen (or be purged) before those on X, Y etc., or at least in the same GC, so their ids can't get reused until their callbacks have happened or have been discarded.

I think that only leaves the possibility that _cache got silently corrupted due to a GC at an inopportune time before this particular error. We cannot rule that out, but it's extremely unlikely: It requires a bucket to be rather full and buckets are typically not full at all (ideally only one entry).

On the other hand, since this is a global _cache, it does get to hold quite a few keys, so it's quite likely that it underwent a resize operation, where a corruption was much more likely to happen.

It's rather difficult to do an accurate postmortem with so little data.

@nbruin
Contributor Author

nbruin commented Feb 24, 2013

patch that got misplaced on #12313

@simon-king-jena
Member

comment:11

Attachment: trac_12313-revert_callback_from_11521.patch.gz

Thank you for moving the patch to the right location (I forgot about the existence of this ticket...).

@simon-king-jena
Member

Author: Simon King

@nbruin
Contributor Author

nbruin commented Feb 24, 2013

Documentation to explain the problem (and possible future problems)

@nbruin
Contributor Author

nbruin commented Feb 25, 2013

comment:12

Attachment: trac_14159-document_callback_issue.patch.gz

"Reviewer" patch to provide some doc and pointers if this ever proves to be a problem. An example of the kind of situation where this could lead to quadratic memory use (relative to what is required):

R=[ZZ.quo(3^n) for n in [1..100]]
for i in range(100):
    for j in range(i,100):
        H = Hom(R[j],R[i])

This leaves 100*101/2 entries with (eventually) dead weakrefs in the dictionary.

I think we should leave ample documentation of this leak in the code, including the strategy to fix it, should it become necessary.

(for posterity: The KeyedRef alternative is probably OK, because it seems that H can never outlive its domain and codomain X and Y, so I don't see how their ids could get reused before the callback on H has happened or got purged. I'm just very cautious with anything that has to do with GC and callback order after the problems we encountered with it elsewhere. In any case, the incantation would have to be documented "DO NOT COPY THIS WITHOUT UNDERSTANDING THE CONSEQUENCES FULLY", because it's a bad pattern to use elsewhere, unless you prove a whole bunch of extra conditions.)

Simon, are you OK with the doc? If so we can put this to positive review (doctests passed for me)

@nbruin
Contributor Author

nbruin commented Feb 25, 2013

Reviewer: Nils Bruin

@nbruin nbruin added the t: bug label Feb 25, 2013
@simon-king-jena
Member

comment:14

Since your patch changes code (by inserting assert statements), I had better run the doctests with #14159, #12313 and #13387 before changing it to positive review.

Note, however, that the error reported by the patchbot probably is unrelated:

sage -t  -force_lib devel/sage-14159/doc/en/constructions/calculus.rst
**********************************************************************
File "/mnt/storage2TB/patchbot/Sage/sage-5.8.beta0/devel/sage-14159/doc/en/constructions/calculus.rst", line 102:
    sage: maxima(g).powerseries('x',0)
Expected:
    16*f0*('sum((2^(2*i1-1)-1)*bern(2*i1)*k^(2*i1-1)*x^(2*i1-1)/(2*i1)!,i1,0,inf))^4
Got:
    Maxima crashed -- automatically restarting.
    16*f0*('sum((2^(2*i1-1)-1)*bern(2*i1)*k^(2*i1-1)*x^(2*i1-1)/(2*i1)!,i1,0,inf))^4
**********************************************************************
1 items had failures:
   1 of   7 in __main__.example_3
***Test Failed*** 1 failures.

@nbruin
Contributor Author

nbruin commented Feb 25, 2013

comment:15

A little data point for how this change might adversely affect memory and speed performance. Running

import resource
R=[ZZ.quo(3^n) for n in [1..1000]]
def test(R):
    for i in range(len(R)):
        print i
        for j in range(i,len(R)):
            H=Hom(R[j],R[i])

%time test(R)
resource.getrusage(resource.RUSAGE_SELF)

Reference:

resource.struct_rusage(ru_utime=50.020395, ru_stime=0.16497399999999998,
ru_maxrss=130108, ru_ixrss=0, ru_idrss=0, ru_isrss=0, ru_minflt=42660,
ru_majflt=1, ru_nswap=0, ru_inblock=600, ru_oublock=240, ru_msgsnd=0,
ru_msgrcv=0, ru_nsignals=0, ru_nvcsw=8669, ru_nivcsw=7514)

With this patch:

resource.struct_rusage(ru_utime=54.220757, ru_stime=0.35794499999999996,
ru_maxrss=703824, ru_ixrss=0, ru_idrss=0, ru_isrss=0, ru_minflt=187662,
ru_majflt=1, ru_nswap=0, ru_inblock=584, ru_oublock=240, ru_msgsnd=0,
ru_msgrcv=0, ru_nsignals=0, ru_nvcsw=12759, ru_nivcsw=7940)

This of course is an example that tries to expose exactly the flaw. We're finding:

  • about 10% runtime penalty. That seems to be the garbage collections that are happening.
  • a lot more memory usage (without the patch memory usage is flat. With the patch you can run out of memory)

I'm not sure how serious this is. It's making me a little hesitant on whether we should be "fixing" this at all, though!

@simon-king-jena
Member

comment:16

Idea: We could add a callback, but a new one: namely, a callback that will only remove the entry from the TripleDict if the value is still the weak reference for which this callback is called.

@simon-king-jena
Member

Attachment: trac_14159_safer_callback.patch.gz

A safe callback for weak values of a TripleDict

@simon-king-jena
Member

comment:17

I did not run the full tests yet, but I think the new patch can be reviewed.

I introduce a variant of TripleDictEraser dedicated to the values of a TripleDict (if we use a callback for the values of MonoDict, then we need to do the same there). It will only erase an item if the value is still the originally stored weak reference.

The patch contains tests demonstrating that TripleDictEraser is unsafe but TripleDictEraserOnValue is safe on values, and it contains a test demonstrating that homsets can still be garbage collected.

Apply trac_14159_safer_callback.patch


@simon-king-jena
Member

Dependencies: #13387

@simon-king-jena
Member

comment:18

PS: I worked on top of #13387, so, that's a dependency.

Apply trac_14159_safer_callback.patch

@nbruin
Contributor Author

nbruin commented Feb 25, 2013

comment:19

tests OK for me. One tiny nitpick:

  • You're claiming that this fixes an extremely rare race condition w.r.t. GC. I'm actually not sure that it CAN occur for our particular use (because our value has a strong reference to a key component). I think your example on TripleDictEraserOnValue illustrates why we should prefer checking this anyway. I hope we don't have to be this paranoid in the future.

There's another one: the _cache[X,Y,category]=KeyedRef(...) statement is very fragile. If you make a mistake in typing (id(X),id(Y),id(category)), you'll have a very hard-to-trace (and possibly disastrous!) bug in your program. One solution would be to encapsulate it in TripleDict, but the instantiation of the TripleDictEraserOnValue would complicate the logic in TripleDict, so let's just keep it this way.

@simon-king-jena
Member

Work Issues: Make the coverage script happy

@nbruin
Contributor Author

nbruin commented Mar 1, 2013

comment:43

%cython
cdef class dictest(object):
    cpdef get(self,int k1,int k2,int k3):
        if k1==1 and k2==1 and k3==1:
            return True
        else:
            return False
    def __getitem__(self,k):
        cdef int k1,k2,k3
        try:
            k1,k2,k3=k
        except:
            raise KeyError
        return self.get(k1,k2,k3)

regular method call protocol is pretty expensive:

sage: C=dictest()
sage: timeit("C.get(1,1,1)")
625 loops, best of 3: 795 ns per loop
sage: timeit("C[1,1,1]")
625 loops, best of 3: 717 ns per loop

(but of course, from cython, one should absolutely use the cdef form)

@simon-king-jena
Member

comment:44

Some other timings, making set/get cpdef:

sage: from sage.structure.coerce_dict import TripleDict, MonoDict
sage: R = [Integers(n) for n in range(1,10^4)]
sage: T = TripleDict(53)
sage: M = MonoDict(53)
sage: def test_py_getitem(T,R):
    for x in R:
        T[x,x,x] = 1
    for x in R:
        _ = T[x,x,x]
....:         
sage: def test_py_get(T,R):
    for x in R:         
        T.set(x,x,x, 1)
    for x in R:
        _ = T.get(x,x,x)
....:         
sage: cython("""
from sage.structure.coerce_dict cimport TripleDict
def test_cy_getitem(TripleDict T,R):
    for x in R:
        T[x,x,x] = 1
    for x in R:
        _ = T[x,x,x]
""")
....: 
sage: cython("""
from sage.structure.coerce_dict cimport TripleDict
def test_cy_get(TripleDict T,R):
    for x in R:
        T.set(x,x,x, 1)
    for x in R:
        _ = T.get(x,x,x)
""")
....: 
sage: %time test_py_getitem(T,R)
CPU times: user 0.03 s, sys: 0.01 s, total: 0.04 s
Wall time: 0.04 s
sage: %timeit test_py_getitem(T,R)
10 loops, best of 3: 23.9 ms per loop
sage: %timeit test_py_getitem(T,R)
10 loops, best of 3: 23.5 ms per loop
sage: %timeit test_py_getitem(T,R)
10 loops, best of 3: 23.6 ms per loop
sage: %timeit test_py_get(T,R)
10 loops, best of 3: 24.6 ms per loop
sage: %timeit test_py_get(T,R)
10 loops, best of 3: 24.3 ms per loop
sage: %timeit test_py_get(T,R)
10 loops, best of 3: 24.3 ms per loop
sage: %timeit test_cy_getitem(T,R)
10 loops, best of 3: 19.9 ms per loop
sage: %timeit test_cy_getitem(T,R)
100 loops, best of 3: 19.9 ms per loop
sage: %timeit test_cy_getitem(T,R)
100 loops, best of 3: 19.6 ms per loop
sage: %timeit test_cy_get(T,R)
100 loops, best of 3: 17.9 ms per loop
sage: %timeit test_cy_get(T,R)
100 loops, best of 3: 18 ms per loop
sage: %timeit test_cy_get(T,R)
100 loops, best of 3: 18.3 ms per loop

Hence, I can confirm that in Python one had better rely on the magic methods, while in Cython it is better (but not much better!) to use the cpdef methods.

But in fact, I found some spots in parent_old.pyx that are in use and call the magic methods.


@simon-king-jena
Member

comment:45

I have added a new patch that will hopefully make Sage a little faster, by using the cdef set/get methods everywhere in Cython. The only spot still using the magic methods is sage/categories/homset, which currently is Python. Perhaps it should be Cythonized? Since homsets are in the background for every conversion and coercion, this could be of some benefit.

Apply trac_14159_weak_value_triple_dict.patch trac_14159_use_cdef_get.patch

@simon-king-jena
Member

comment:46

PS: In any case, cythoning sage.categories.homset should be on a different ticket.

@simon-king-jena
Member

comment:47

Note that different patchbots show different results for the plugins: One patchbot, running sage-5.8.beta2, does not complain at all. Another one, running sage-5.8.beta1, complains both for startup_modules and startup_time.

@nbruin
Contributor Author

nbruin commented Mar 1, 2013

comment:48

Replying to @simon-king-jena:

Note that different patchbots show different results for the plugins: One patchbot, running sage-5.8.beta2, does not complain at all.

For the startup module I think that's because 5.8b2 has #13387 merged (for now at least).

@simon-king-jena
Member

Changed dependencies from #13387 to #13387, #14254

@simon-king-jena
Member

comment:49

#14254 is a blocker that will likely be merged before this ticket here is positively reviewed. Hence, it will be a dependency.

Question: Shall we remove the new function signed_id introduced in #14254? It is not needed with the approach that we take here. Alternatively, we could use it. How much is the overhead of calling a cpdef inline function doing <Py_ssize_t><void *>(x), compared with doing it directly?

@simon-king-jena
Member

comment:50

Apparently there is no significant difference:

sage: cython("""
cdef inline Py_ssize_t signed_id(object X):
    return <Py_ssize_t><void *>(X)
from sage.all import srange
def testcall():
    cdef object i
    for i in srange(10^6):
        a = signed_id(i)
def testdirect():
    cdef object i
    for i in srange(10^6):
        a = <Py_ssize_t><void *>(i)
""")
sage: %timeit testcall()
10000 loops, best of 3: 87.2 us per loop
sage: %timeit testdirect()
10000 loops, best of 3: 87 us per loop

@nbruin
Contributor Author

nbruin commented Mar 11, 2013

comment:51

Replying to @simon-king-jena:

Question: Shall we remove the new function signed_id introduced in #14254? It is not needed with the approach that we take here. Alternatively, we could apply it. How much is the overhead of calling a cpdef inline function doing <Py_ssize_t><void *>(x), compared without doing it directly?

It's inline, so using it in a place where the cdef part is used should be 0 overhead. The inline should get expanded before the optimizer even looks at the code. The whole point of inline is to get the performance of #define macros without the headaches.

If the function is not really in the way, it may be worth keeping around. I have found it immensely useful for debugging Cython code if as many features as possible are also exposed in Python. In particular, the cdef-only data attributes can be a real pain.

@simon-king-jena
Member

comment:52

Replying to @nbruin:

If the function is not really in the way, it may be worth keeping around. I have found it immensely useful for debugging cython code if as much features as possible are also exposed in python. In particular, the cdef only data attributes can be a real pain.

+5

So, let's keep it (it is cpdef anyway, hence, can be used from Python as well), and apparently it is fast enough (in the example above I had "cdef inline", but "cpdef inline" gives the same timings).

@simon-king-jena
Member

comment:53

The first patch needed to be rebased, the second was still fine.

Apply trac_14159_weak_value_triple_dict.patch trac_14159_use_cdef_get.patch

@simon-king-jena
Member

Optional weak values for mono- and tripledict

@simon-king-jena
Member

comment:54

Attachment: trac_14159_weak_value_triple_dict.patch.gz

I needed another update of the patch: #14254 has introduced signed_id, which is imported in homset.py, but with the approach from here, we do not need to import signed_id. Hence, I removed the import statement in the current version of the patch.

Apply trac_14159_weak_value_triple_dict.patch trac_14159_use_cdef_get.patch

@nbruin
Contributor Author

nbruin commented Apr 4, 2013

comment:55

Good stuff! Much cleaner. The patchbot is happy, and effectively this patch consolidates some use of TripleDict that was already in ad-hoc use, so there is not too much chance of unexpected failures. The solution here does away with the "manual" installation of callbacks that was dangerous before, and it includes a sanity check that the entry on which the callback is happening is indeed still referring to the (now dead) weakref that is causing the callback.

@jdemeyer

jdemeyer commented Apr 4, 2013

comment:56

Replying to @nbruin:

The inline should get expanded before the optimizer even looks at the code.

That's not entirely true, at least not with GCC. The inline keyword is just one of the things that GCC considers when deciding whether or not to inline a function. Very simple functions like signed_id() might very well always be inlined at higher optimization levels, while complicated functions might never be inlined (even if marked inline).

@jdemeyer jdemeyer modified the milestones: sage-5.9, sage-5.10 Apr 5, 2013
@jdemeyer

jdemeyer commented Apr 8, 2013

comment:58

attachment: trac_14159_use_cdef_get.patch needs a proper commit message

@simon-king-jena
Member

Use cdef get/set methods of MonoDict and TripleDict in cython modules

@simon-king-jena
Member

comment:59

Attachment: trac_14159_use_cdef_get.patch.gz

Message added.

@jdemeyer

Merged: sage-5.10.beta0
