This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
-
Something like the following (a sketch of the design) should work:
class SomeModel(HybridBlock):  # subclass HybridBlock when you override forward; HybridSequential is for stacking layers
    def __init__(self, some_input_params, **kwargs):
        super().__init__(**kwargs)
        self.features = gluon.nn.Conv2D(channels=32, kernel_size=3, padding=1)  # some feature extractor, can be far more complicated
        self.variable_1 = gluon.nn.Conv2D(channels=NumberOfClasses, kernel_size=1)  # an image as output
        self.variable_2 = gluon.nn.Dense(5)  # some continuous variables as output (Gluon's fully connected layer is Dense, not Linear)

    def forward(self, x):
        feats = self.features(x)
        var1 = self.variable_1(feats)
        var2 = self.variable_2(feats)
        return var1, var2
class CustomLoss(gluon.loss.Loss):
    def __init__(self, some_params_you_need, weight=None, batch_axis=0, **kwargs):
        super().__init__(weight, batch_axis, **kwargs)
        self.loss_image = gluon.loss.SigmoidBCELoss()  # this will not work as is for images, for demo purposes only
        self.loss_regression = gluon.loss.L2Loss()  # or something similar

    def forward(self, preds, labels):  # each argument is a list/tuple of arrays
        var1_pred, var2_pred = preds
        var1_gt, var2_gt = labels
        loss1 = self.loss_image(var1_pred, var1_gt)
        loss2 = self.loss_regression(var2_pred, var2_gt)
        return loss1 + loss2
Skip the automatic training functions (d2l.train_ch3) and write your own training loop; it really is trivial in complexity. Your network now outputs a list of two MXNet arrays (be it mxnet.numpy or mxnet.ndarray, depending on the version you are using; the above syntax is for MXNet 2.0):
net = SomeModel()
trainer = gluon.Trainer(net.collect_params(), 'adam')  # Trainer also needs an optimizer name
yourLoss = CustomLoss()
# Single-epoch training - it's not difficult, and you know what's happening under the hood.
for inputs, labels in datagenerator:
    with autograd.record():
        preds = net(inputs)
        loss = yourLoss(preds, labels)
    loss.backward()           # compute network gradients
    trainer.step(batch_size)  # update network parameters; batch_size is your batch size
It will need debugging to bring it into shape, but this should help. N-joy :).
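If the Gluon API itself is what keeps erroring, it can help to see that the training loop above is doing nothing exotic. Below is a framework-free sketch of the same idea in plain NumPy: a linear model with one weight matrix mapping several inputs to several continuous outputs, trained by gradient descent on a summed L2 loss. All names and shapes here are illustrative, not MXNet API; real code would use Gluon's autograd as in the snippet above rather than a hand-derived gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64 samples, 4 input features, 2 continuous targets.
X = rng.normal(size=(64, 4))
W_true = rng.normal(size=(4, 2))
Y = X @ W_true

# Linear model: a single weight matrix mapping 4 features -> 2 outputs.
W = np.zeros((4, 2))
lr = 0.5

for _ in range(100):
    preds = X @ W                                    # shape (64, 2): both targets at once
    err = preds - Y                                  # residuals for every target
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))   # summed L2 loss, like CustomLoss above
    grad = X.T @ err / X.shape[0]                    # analytic gradient of the loss w.r.t. W
    W -= lr * grad                                   # the "trainer.step" of this toy loop

print(np.allclose(W, W_true, atol=1e-3))  # -> True: the multi-output weights are recovered
```

The point is that "multivariate" only changes the shapes (the output and the residual have one column per target); the loop structure is identical to single-output regression.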
-
Can MXNet handle multivariate regression problems?
I am a novice trying to use MXNet to solve a multi-task / multivariate regression problem, that is, predicting multiple continuous output values from multiple input features. Unlike classification problems, which can use softmax or a cross-entropy loss, regression problems generally use L2Loss and sum the errors over samples; house-price prediction is the usual example, but it has only one variable to predict, whereas I need to predict multiple values. I tried adding up the losses of the different prediction targets as the final loss and then backpropagating, but whether I use a prebuilt model or write each part myself, I get an error. I searched the Internet for material on ANNs and multivariate regression, but found very little; it seems everyone is focusing on popular classification problems such as image recognition.
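To make the summed-loss idea concrete, here is a minimal, framework-free sketch in plain NumPy of what a multi-target L2 loss computes (the function name and shapes are illustrative, not MXNet API): the squared errors are summed across all target variables for each sample, then averaged over the batch.

```python
import numpy as np

def multi_target_l2_loss(preds, targets):
    """Mean over samples of the summed squared error across all targets.

    preds, targets: arrays of shape (n_samples, n_targets).
    """
    per_sample = np.sum((preds - targets) ** 2, axis=1)  # sum over target variables
    return per_sample.mean()                             # average over the batch

# Toy check: 2 samples, 3 target variables each.
preds   = np.array([[1.0, 2.0, 3.0],
                    [0.0, 0.0, 0.0]])
targets = np.array([[1.0, 2.0, 4.0],
                    [1.0, 0.0, 0.0]])
# sample 0: (3-4)^2 = 1; sample 1: (0-1)^2 = 1; mean = 1.0
print(multi_target_l2_loss(preds, targets))  # -> 1.0
```

Backpropagating through this sum is exactly the "add the output losses and propagate backwards" approach described above; any autograd framework (MXNet, PyTorch) handles it without special support.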
So I would like to ask whether MXNet, or tools such as PyTorch, can handle the multivariate regression problem. If so, can you point me in a direction to keep exploring? (I am lost and feeling helpless. :(
I believe it is certainly possible, but the relevant information is really scarce. If my description is unclear or not specific enough, please point it out and I will add more detail.
I will put a concise example below for everyone to understand.
