
Error sparse matrix mul #338

Closed
20092136 opened this issue Nov 3, 2016 · 5 comments

20092136 commented Nov 3, 2016

F1103 21:54:53.385020 12754 Matrix.cpp:466] Check failed: ret == 0 (-1 vs. 0) Error sparse matrix mul
F1103 21:54:53.385568 12748 Matrix.cpp:466] Check failed: ret == 0 (-1 vs. 0) Error sparse matrix mul
F1103 21:54:53.386371 12751 Matrix.cpp:466] Check failed: ret == 0 (-1 vs. 0) Error sparse matrix mul

Under what circumstances is "Error sparse matrix mul" triggered?
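As a hypothetical sketch (plain Python, not the actual Matrix.cpp code), the log above can be read as follows: the native sparse-multiply routine returned -1 instead of 0, and a glog-style CHECK turned that return code into a fatal error. The function names and the unsupported-combination logic here are illustrative assumptions, not PaddlePaddle's real implementation.

```python
def sparse_matrix_mul(a_is_sparse, b_is_sparse, on_gpu):
    """Hypothetical stand-in for the native routine behind Matrix.cpp:466.
    Returns 0 on success, -1 for an unsupported operand combination."""
    if on_gpu and a_is_sparse and b_is_sparse:
        return -1  # sparse x sparse has no GPU implementation (assumed cause)
    return 0

def check_eq(ret, expected, msg):
    """glog-style CHECK: raise a fatal error when the return code is wrong."""
    if ret != expected:
        raise RuntimeError(f"Check failed: ret == 0 ({ret} vs. {expected}) {msg}")

# sparse input x dense weight succeeds; the check passes silently
check_eq(sparse_matrix_mul(True, False, on_gpu=True), 0, "Error sparse matrix mul")
```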

@backyes
Contributor

backyes commented Nov 3, 2016

Please paste your model if you can, along with:

  • Model file
  • PaddlePaddle version
  • Command line
  • Reproduction steps

@20092136
Author

20092136 commented Nov 3, 2016

def lstm_dnn_net(seq_dim,              
                feat_dim,              
                class_dim,             
                emb_dim=128,           
                hid_dim=512,           
                lstm_dim=128,          
                is_predict=False):  
    """                                
    lstm 和 DNN 网络                   
    """                                
    bias_attr = ParameterAttribute(initial_std=0., l2_rate=0.0001)
    fc_para_attr = ParameterAttribute(learning_rate=2e-3)
    lstm_para_attr = ParameterAttribute(initial_std=0., learning_rate=2e-3)#, sparse_update=True)
    sparse_up = ParameterAttribute(sparse_update=True, learning_rate=2e-3)
    para_attr = [lstm_para_attr, fc_para_attr]
    relu = ReluActivation()            

    #seq_data = data_layer("seq", seq_dim)
    #emb = embedding_layer(input=seq_data, size=emb_dim)
    #context_layer = text_conv_pool(input=emb, context_len=5, hidden_size=lstm_dim,
    #        context_start=-1, fc_act=TanhActivation(), fc_bias_attr=bias_attr)

    #bi_lstm = simple_lstm(input=emb, size=lstm_dim, concat_act=TanhActivation())

    #bi_lstm = bidirectional_lstm(input=emb, size=lstm_dim, concat_act=relu)
    #lstm_out = fc_layer(input=bi_lstm, size=lstm_dim, act=SoftmaxActivation(),
    #        bias_attr=bias_attr)   

    feat_data = data_layer("feat", feat_dim)
    feat_layer_1 = fc_layer(input=feat_data, size=hid_dim, act=relu,
                   bias_attr=bias_attr, param_attr=sparse_up)
    feat_layer_2 = fc_layer(input=feat_layer_1, size=hid_dim, bias_attr=bias_attr,
            param_attr=fc_para_attr)
    feat_layer_3 = fc_layer(input=feat_layer_2, size=hid_dim, bias_attr=bias_attr,
            param_attr=fc_para_attr)
    feat_layer_4 = fc_layer(input=feat_layer_3, size=hid_dim, bias_attr=bias_attr,
            param_attr=fc_para_attr)
    feat_layer_5 = fc_layer(input=feat_layer_4, size=hid_dim, bias_attr=bias_attr,
            param_attr=fc_para_attr)

    #output = fc_layer(name='fc_ly_4', input=[lstm_out, feat_layer_2], size=class_dim,
    #                  act=SoftmaxActivation(),
    #                  bias_attr=bias_attr, param_attr=para_attr)

    output = fc_layer(name='fc_ly_4', input=feat_layer_5, size=class_dim, act=SoftmaxActivation(),
                    bias_attr=bias_attr, param_attr=fc_para_attr)

    if is_predict:                  
        outputs(output)             
    else:                           
        #outputs(                   
        #    classification_cost(input=output, label=data_layer('label', 1),
        #        evaluator=[precision_recall_evaluator, classification_error_evaluator]))
        outputs(                    
            classification_cost(input=output, label=data_layer('label', 1)))

@init_hook_wrapper
def hook(obj, dictionary, **kwargs):
    obj.slots = [SparseNonValueSlot(6121), IndexSlot(888)]
    obj.dictionary = dictionary
    obj.sym_dict = kwargs['tag_dict']
    obj.pop_dict = kwargs['pop_dict']
    obj.dis_dict = kwargs['dis_dict']
    obj.logger.info('dict len : %d' % (len(obj.dictionary)))

@provider(init_hook=hook)
def process(obj, file_name):
    with open(file_name, 'r') as fdata:
        for line in fdata:

            arr = line.strip().split('\t')
            if len(arr) != 4:
                continue
            label, seq, sym, pop = arr 

            if obj.dis_dict.get(label) is None:
                continue
            label = int(obj.dis_dict[label])

            if len(seq.strip()) == 0:
                continue

            feat_slot = [obj.sym_dict[s] for s in sym.split() if obj.sym_dict.get(s) is not None]
            if len(feat_slot) <= 0:
                continue
            feat_slot.extend([obj.pop_dict[p] for p in pop.split() if obj.pop_dict.get(p) is not None])
            if len(feat_slot) <= 0:
                continue

            obj.logger.info('feat_slot: %s' % feat_slot)
            obj.logger.info('label: %s %s' % (arr[0], label))
            yield feat_slot, label
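A minimal standalone sketch (plain Python, no PaddlePaddle required) of what the provider above yields: a SparseNonValueSlot is just a list of active feature indices, paired with an integer class label. The dictionary names mirror the hook above, but their contents here are made-up examples.

```python
# Made-up stand-ins for the dictionaries loaded in the hook
sym_dict = {"fever": 0, "cough": 1}
pop_dict = {"adult": 6000}
dis_dict = {"flu": "3"}

def process_line(line):
    """Parse one tab-separated record into (active_indices, label), or None
    when the record is filtered out, mirroring process() above."""
    arr = line.strip().split("\t")
    if len(arr) != 4:
        return None
    label, seq, sym, pop = arr
    if dis_dict.get(label) is None or not seq.strip():
        return None
    feat_slot = [sym_dict[s] for s in sym.split() if s in sym_dict]
    if not feat_slot:
        return None
    feat_slot.extend(pop_dict[p] for p in pop.split() if p in pop_dict)
    return feat_slot, int(dis_dict[label])
```

For example, `process_line("flu\tsome text\tfever cough\tadult")` yields the index list `[0, 1, 6000]` with label `3`: exactly the (sparse slot, index slot) pair the network's sparse fc_layer consumes.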
paddle cluster_train \
  --config=test/sentiment_p1/cluster_job_config/job_config.py \
  --use_gpu=gpu \
  --num_nodes=4 \
  --num_passes=20 \
  --log_period=100 \
  --dot_period=10 \
  --trainer_count=16 \
  --saving_period=1 \
  --thirdparty=./test/sentiment_p1/thirdparty \
  --job_name=test \
  --time_limit=700:00:00 \
  --submitter=angelababy \
  --config_args=is_local=0 \
  --where=test_cluster

PaddlePaddle version: paddle_platform-v5.10.alpha

@backyes
Contributor

backyes commented Nov 3, 2016

@reyoung Can you help check the dataprovider? It seems that SparseNonValueSlot(6121) has been deprecated.

@20092136 please try the latest PaddlePaddle binary.

@luotao1
Contributor

luotao1 commented Nov 4, 2016

Issue #330 also reports a sparse * sparse multiplication error. Currently, sparse * sparse is not supported on GPU. Could you paste what line 466 of Matrix.cpp contains?
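To illustrate the distinction (a NumPy sketch, not PaddlePaddle code): with binary sparse input such as the feat_slot lists above, the supported sparse-input x dense-weight forward pass of an fc_layer amounts to summing the weight rows selected by each record's active indices. The batch contents and weight values here are made up.

```python
import numpy as np

# feat_slot-style sparse rows: lists of active indices, each with value 1
batch = [[1, 7], [3], [0], [5, 9]]
w = np.arange(80, dtype=float).reshape(10, 8)  # dense 10x8 weight matrix

# sparse x dense product: for binary sparse input, each output row is just
# the sum of the weight rows picked out by that record's active indices
out = np.stack([w[idx].sum(axis=0) for idx in batch])

# the same result as the equivalent fully dense matmul
dense = np.zeros((4, 10))
for i, idx in enumerate(batch):
    dense[i, idx] = 1.0
assert np.allclose(out, dense @ w)
```

The error in this issue concerns the other case: a product where both operands are sparse, which the GPU path does not implement.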

@typhoonzero
Contributor

Closing this issue due to inactivity, feel free to reopen it.
