AttributeError: Can't pickle local object 'BoundedContinuous.default_transform.<locals>.transform_params' #4799

Closed
mjhajharia opened this issue Jun 21, 2021 · 2 comments

@mjhajharia (Member)

Error at sampling time:

```
AttributeError: Can't pickle local object 'BoundedContinuous.default_transform.<locals>.transform_params'
```
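
For context, Python's built-in `pickle` cannot serialize functions defined inside other functions, which is what the traceback points at. A minimal sketch of the failure mode (the names here are hypothetical stand-ins mirroring the error message, not actual pymc3 code):

```python
import pickle

def default_transform():
    # transform_params is a local (closure) function, so pickle cannot
    # resolve it by qualified name when sending the model to worker processes
    def transform_params(*args):
        return args
    return transform_params

try:
    pickle.dumps(default_transform())
except AttributeError as e:
    print(e)  # Can't pickle local object 'default_transform.<locals>.transform_params'
```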

To reproduce:

```python
%matplotlib inline
import aesara
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import pandas as pd
from pandas_datareader.data import DataReader
from pandas.plotting import register_matplotlib_converters
from aesara.graph.op import Op
plt.style.use('seaborn')
register_matplotlib_converters()
cpi = DataReader('CPIAUCNS', 'fred', start='1971-01', end='2018-12')
cpi.index = pd.DatetimeIndex(cpi.index, freq='MS')

# Define the inflation series that we'll use in analysis
inf = np.log(cpi).resample('QS').mean().diff()[1:] * 400
print(inf.head())
class Loglike(Op):

    itypes = [aesara.tensor.dvector] # expects a vector of parameter values when called
    otypes = [aesara.tensor.dscalar] # outputs a single scalar value (the log likelihood)

    def __init__(self, model):
        self.model = model
        self.score = Score(self.model)

    def perform(self, node, inputs, outputs):
        theta, = inputs  # contains the vector of parameters
        llf = self.model.loglike(theta)
        outputs[0][0] = np.array(llf) # output the log-likelihood

    def grad(self, inputs, g):
        # the method that calculates the gradients - it actually returns the
        # vector-Jacobian product - g[0] is a vector of parameter values
        theta, = inputs  # our parameters
        out = [g[0] * self.score(theta)]
        return out


class Score(Op):
    itypes = [aesara.tensor.dvector]
    otypes = [aesara.tensor.dvector]

    def __init__(self, model):
        self.model = model

    def perform(self, node, inputs, outputs):
        theta, = inputs
        outputs[0][0] = self.model.score(theta)
        
ndraws = 3000  # number of posterior draws
nburn = 600    # number of tuning (burn-in) draws

mod = sm.tsa.statespace.SARIMAX(inf, order=(1, 0, 1))

loglike = Loglike(mod)

with pm.Model():
    # Priors
    arL1 = pm.Uniform('ar.L1', -0.99, 0.99)
    maL1 = pm.Uniform('ma.L1', -0.99, 0.99)
    sigma2 = pm.InverseGamma('sigma2', 2, 4)

    # convert variables to aesara vectors
    theta = aesara.tensor.as_tensor_variable([arL1, maL1, sigma2])
    

    # include the custom log-likelihood by calling the Op inside a Potential
    pm.Potential('likelihood', loglike(theta))

    # Draw samples
    trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=False, return_inferencedata=True, cores=4)
```

Versions:

```
Python implementation: CPython
Python version       : 3.9.5
IPython version      : 7.24.1

pymc3      : 4.0
json       : 2.0.9
pandas     : 1.2.4
matplotlib : 3.4.2
numpy      : 1.20.3
aesara     : 2.0.11
statsmodels: 0.12.2

Watermark: 2.2.0
```

More information: I'm not sure of the root cause, but it may be related to v4, aesara, or the pickle backend. Sampling works when cores=1, so the error only occurs during multiprocessing. A similar error happened a while back (using pm.DensityDist instead of pm.Potential gave "RuntimeError: chain 0 failed"), and that one was fixed by setting the pickle backend to dill. Perhaps we need a better fix; any help is appreciated!
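
For anyone else hitting this before a proper fix, two workarounds that sidestep the standard pickling step. Note that `pickle_backend` is the pymc3 3.x-era keyword mentioned above; I'm not certain it is still exposed in v4:

```python
# Workaround 1: cores=1 samples in a single process, so the model is never pickled
trace = pm.sample(ndraws, tune=nburn, cores=1, return_inferencedata=True)

# Workaround 2 (pymc3 3.x keyword): dill can serialize local objects that pickle cannot
trace = pm.sample(ndraws, tune=nburn, cores=4, pickle_backend="dill")
```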

CC: @brandonwillard

ricardoV94 added this to the vNext (4.0.0) milestone on Jun 27, 2021
ricardoV94 added the v4 label on Jun 27, 2021
@ricardoV94 (Member)

This might have been fixed by #4858.

@mjhajharia (Member, Author)

omg yes, just tested, it works!!
