Hi everyone!

I'm currently working on a neural network for identifying measurements with large errors, to improve the robustness of power system state estimation. The network itself works pretty well (95% accuracy on the test set), but I'm having trouble demonstrating the impact of bad-measurement detection on the accuracy of the state estimation.
The first issue is that state estimation sometimes fails even on normal (uncorrupted) measurements. I believe this has something to do with the way I set the standard deviations:
```python
# Add the true measurements and perform state estimation
measurements = base_measurements[:]
for j in range(num_of_base_measurements):
    measurements[j].value = true_measurements[i, j]
    if measurements[j].meas_type == 'v':
        measurements[j].std = 0.01
    else:
        if measurements[j].value != 0:
            measurements[j].std = 0.01 * abs(measurements[j].value)
        # note: when value == 0, std is left at whatever it was before
net = add_measurements(net, measurements)
```
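(`add_measurements` is my own helper that writes the list into `net.measurement`.) The estimation itself is then run with pandapower's WLS estimator, roughly like this:

```python
from pandapower.estimation import estimate

# WLS state estimation; estimate() returns False when the solver
# does not converge, which is how I detect the failures mentioned above
success = estimate(net, init="flat")
```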
For reference, the true measurements already contain measurement noise. I generate it roughly like this (a sketch: zero-mean Gaussian noise whose standard deviation matches the stds I set above; `exact_measurements` and `is_voltage` are placeholder names for the noise-free values and the voltage-measurement mask):
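```python
import numpy as np

# zero-mean Gaussian noise: sigma = 0.01 p.u. for voltage magnitudes,
# 1% of the exact value otherwise (matching the stds set above)
sigma = np.where(is_voltage, 0.01, 0.01 * np.abs(exact_measurements))
true_measurements = exact_measurements + np.random.normal(0.0, sigma)
```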
Second, my original idea was to simply set a large standard deviation for the measurements detected as erroneous, and I expected this to improve the accuracy of the state estimate. By accuracy I mean the RMSE between the correct and estimated states (computed as in the sketch after the snippet below). However, this wasn't the case. As a workaround, I instead removed the measurements identified as erroneous, but unfortunately this sometimes makes the system unobservable and the state estimation fails.
Does anyone have a clue what might be wrong with my approach of increasing the standard deviation? A code snippet is provided below.
```python
# Identify the corrupted measurements, then either remove them from the
# measurement table or inflate their standard deviation before re-running
# the state estimation (the classifier output is aligned with the rows
# of net.measurement, which keeps its default RangeIndex)
predicted_label = torch.round(torch.sigmoid(model(X_test[i]))).detach().numpy()
measurement_ids = np.where(predicted_label == 1)[0]
# net.measurement.drop(labels=measurement_ids, axis=0, inplace=True)
# .loc keeps the assignment on the original DataFrame (chained indexing
# like net.measurement.std_dev[...] can trigger SettingWithCopyWarning)
net.measurement.loc[measurement_ids, "std_dev"] = 10000
```
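For completeness, this is how I compute the RMSE I use as the accuracy metric (a sketch; `vm_true` and `va_true` are placeholders for the correct states, and `net.res_bus_est` is where pandapower writes the estimation result):

```python
import numpy as np

# RMSE between the correct and estimated bus voltage magnitudes / angles
rmse_vm = np.sqrt(np.mean((net.res_bus_est.vm_pu.values - vm_true) ** 2))
rmse_va = np.sqrt(np.mean((net.res_bus_est.va_degree.values - va_true) ** 2))
```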
Also, I've just recently switched to Python from MATLAB, so I hope I'm not making some stupid mistake 👍