restructured updates (#30)
emreds authored Jul 31, 2024
1 parent abaade6 commit 0421b92
Showing 12 changed files with 13 additions and 390 deletions.
3 changes: 0 additions & 3 deletions .dvcignore

This file was deleted.

18 changes: 12 additions & 6 deletions README.md
@@ -5,19 +5,17 @@
# tum-dlr-automl-for-eo
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
<a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
<a href="https://github.com/HelmholtzAI-Consultants-Munich/ML-Pipeline-Template"><img alt="Template" src="https://img.shields.io/badge/-Lightning--Hydra--Template-017F2F?style=flat&logo=github&labelColor=gray"></a>
<a href="https://github.com/pyscaffold/pyscaffoldext-dsproject"><img alt="Template" src="https://img.shields.io/badge/-Pyscaffold--Datascience-017F2F?style=flat&logo=github&labelColor=gray"></a>

</div>

# Description
Towards a NAS Benchmark for Classification in Earth Observation

# Quickstart

## Create the pipeline environment and install the tum_dlr_automl_for_eo package
Before using the template, one needs to install the project as a package.
## Installation
- Create the pipeline environment and install the tum_dlr_automl_for_eo package
- Before using the template, one needs to install the project as a package.

* First, create a virtual environment.
> You can either do it with conda (preferred) or venv.
* Then, activate the environment.
@@ -33,3 +31,11 @@ cd tum-dlr-automl-for-eo
```
pip install -e .
```

# How to Use?
- The main entry points are under the `./scripts` folder.
- There are many scripts, including helpers such as `cluster_archs.py` that are not required for the main functionality.
- `nb101_dict_creator.py` reads the pickle containing the NB101 architectures and converts them into a JSON dict format.
- `path_sampler.py` reads the NB101 dict and the list of previously trained NB101 architectures (if any), and samples new architectures using random walk sampling.
- The `bash_slurm` folder contains the bash scripts used to submit training jobs to Slurm. Every training job is submitted separately to provide a certain level of fault tolerance during training.
- `batch_train_submit.py` submits the training jobs in batches using those bash scripts.
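The random walk sampling step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual `path_sampler.py`: the `random_walk_sample` function, the toy neighbor relation (ids differing by 1), and the example dict are all assumptions for demonstration; the real script walks between NB101 architectures that are true graph-edit neighbors.

```python
import random

def random_walk_sample(archs, start_id, steps, trained_ids=None, seed=0):
    """Sample architecture ids by a seeded random walk.

    `archs` maps architecture ids to spec dicts (as produced by a
    dict-creator step). Neighbors are approximated here as ids that
    differ by 1 -- a stand-in for real NB101 edit-distance neighbors.
    Ids listed in `trained_ids` are visited but not re-emitted, so the
    walk only returns architectures that still need training.
    """
    rng = random.Random(seed)
    trained = set(trained_ids or [])
    path, current = [], start_id
    for _ in range(steps):
        # Move to a random existing neighbor of the current architecture.
        neighbors = [n for n in (current - 1, current + 1) if n in archs]
        current = rng.choice(neighbors)
        if current not in trained:
            path.append(current)
    return path

# Toy search space of 10 architectures; id 4 was already trained.
archs = {i: {"id": i} for i in range(10)}
print(random_walk_sample(archs, start_id=5, steps=4, trained_ids=[4]))
```

Seeding the walk keeps the sampled paths reproducible across submissions, which matters when each architecture is later trained as a separate Slurm job.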
4 changes: 0 additions & 4 deletions models/.gitignore

This file was deleted.

9 changes: 0 additions & 9 deletions pyproject.toml

This file was deleted.

1 change: 0 additions & 1 deletion reports/figures/.gitignore

This file was deleted.

Empty file removed sample_analysis/.placeholder
Empty file.
64 changes: 0 additions & 64 deletions scripts/lhc_sampler.py

This file was deleted.

26 changes: 0 additions & 26 deletions scripts/test.py

This file was deleted.

2 changes: 0 additions & 2 deletions scripts_nasbenchmark/analyze_random_walks.py
@@ -93,11 +93,9 @@ def retrieve_performances(architecture_id, local_path, timestep_local):
histtype='step', alpha=0.55, color='blue', bins=40)
plt.ylabel("Density")
plt.xlabel("Number of steps")
#plt.legend(loc='upper right')
plt.show()
plt.savefig(prefix_saving_location + 'distributions_of_steps_in_all_walks.png')
plt.clf()
#print ('DONE!')

path = '/p/project/hai_nasb_eo/sampled_paths/all_trained_archs/'

32 changes: 1 addition & 31 deletions scripts_nasbenchmark/generate_nasb_eo_database _clean.py
@@ -43,40 +43,30 @@
with open(file_test_results) as json_data:
test_results_dict = json.load(json_data)

#print(arch_specs_dict.keys())
#print(test_results_dict.keys())


### iterate over architectures
for arch_dir_i in dir_list_updated:

local_id = arch_dir_i[len(arch_str_prefix):]
local_id = int(local_id)

#print(arch_specs_dict[local_id]['id'])
#print(arch_str_prefix)
#print(local_id)
assert arch_str_prefix + str(local_id) == arch_str_prefix + str(arch_specs_dict[local_id]['id'])

empty = False
# collect validation and training architecture at time step t_j
try:
pd_arch_i = pd.read_csv(path + arch_dir_i + '/metrics.csv')
except pd.errors.EmptyDataError:
empty = True
cpt_empty_evals += 1
#print ("empty file: ", path + arch_dir_i + '/version_0/metrics.csv')

if empty is False and pd_arch_i.empty is False:

#binary_code_i = arch_specs_dict[local_id]['binary_encoded']
matrix_i = arch_specs_dict[local_id]['module_adjacency']
list_ops_i = arch_specs_dict[local_id]['module_operations']
hash_arch_i = arch_specs_dict[local_id]['unique_hash']
num_params = test_results_dict[arch_dir_i]['num_params']

dict_spec_mix_i = dict()
#dict_spec_mix_i['arch_binary_encoding'] = binary_code_i
dict_spec_mix_i['module_adjacency'] = matrix_i
dict_spec_mix_i['module_operations'] = list_ops_i
dict_spec_mix_i['trainable_parameters'] = num_params
@@ -87,23 +77,17 @@
dict_arch_i_all_data_latency = dict()
dict_arch_i_all_data_MAC = dict()

for timestep_t in [107]: #[4-1, 12-1, 36-1, 107]:#range(1,108):
for timestep_t in [107]:

dict_arch_i_perf_t_micro = dict()
dict_arch_i_perf_t_macro = dict()
dict_arch_i_perf_t_latency = dict()
dict_arch_i_perf_t_MAC = dict()

for metric_i in ["avg_macro", "avg_micro", 'inference', 'MACs']:

#validation_i_acc_t = pd_arch_i['validation_' + metric_i + '_accuracy'][timestep_t * 2]
#training_i_acc_t = pd_arch_i['train_' + metric_i + '_accuracy'][timestep_t * 2 - 1]

# store it
if 'macro' in metric_i:
#print(f"This is the key: {'validation_' + metric_i + '_accuracy'}")
#print(pd_arch_i['validation_' + metric_i + '_accuracy'].keys())
#print(f"This arc has problem {arch_str_prefix + str(local_id)}")
validation_i_acc_t = pd_arch_i['validation_' + metric_i + '_accuracy'][timestep_t * 2]
training_i_acc_t = pd_arch_i['train_' + metric_i + '_accuracy'][timestep_t * 2 - 1]

@@ -172,14 +156,6 @@
dict_database_to_pickle_latency[hash_arch_i] = (dict_spec_mix_i, dict_arch_i_all_data_latency)
dict_database_to_pickle_MAC[hash_arch_i] = (dict_spec_mix_i, dict_arch_i_all_data_MAC)

#print(dict_database_to_pickle_micro)


#print(dict_database_to_pickle_micro['cfcd44543146cb597ffe7a861755abac'], '\n\n')
#print(dict_database_to_pickle_macro['cfcd44543146cb597ffe7a861755abac'], '\n\n')
#print(dict_database_to_pickle_latency['cfcd44543146cb597ffe7a861755abac'], '\n\n')
#print(dict_database_to_pickle_MAC['cfcd44543146cb597ffe7a861755abac'], '\n\n')


list_data_to_save = [dict_database_to_pickle_micro, dict_database_to_pickle_macro,
dict_database_to_pickle_latency, dict_database_to_pickle_MAC]
@@ -188,11 +164,5 @@
pickle_file_for_database_latency, pickle_file_for_database_MAC]



# save the data
#for file_name_i, dataset_i in zip(list_filenames_to_save, list_data_to_save):
# with open(file_name_i, 'wb') as f:
# pickle.dump(dataset_i, f)

print ('DONE!')

139 changes: 0 additions & 139 deletions src/tum_dlr_automl_for_eo/datamodules/classification.py

This file was deleted.
