Merge pull request #4 from vloncar/opt1
Opt1
bo3z authored Jan 29, 2024
2 parents ad47f41 + 6eb391f commit e66d7e7
Showing 4 changed files with 22 additions and 7 deletions.
8 changes: 4 additions & 4 deletions docs/advanced/model_optimization.rst
@@ -100,9 +100,9 @@ Finally, optimizing Vivado DSPs is possible, given a hls4ml config:
 from hls4ml.utils.config import config_from_keras_model
 from hls4ml.optimization.objectives.vivado_objectives import VivadoDSPEstimator
-# Note the change from optimize_model to optimize_keras_for_hls4ml
-# The function optimize_keras_for_hls4ml acts as a wrapper for the function, parsing hls4ml config to model attributes
-from hls4ml.optimization import optimize_keras_for_hls4ml
+# Note the change from optimize_model to optimize_keras_model_for_hls4ml
+# The function optimize_keras_model_for_hls4ml acts as a wrapper for the function, parsing hls4ml config to model attributes
+from hls4ml.optimization import optimize_keras_model_for_hls4ml
 # Create hls4ml config
 default_reuse_factor = 4
@@ -113,7 +113,7 @@ Finally, optimizing Vivado DSPs is possible, given a hls4ml config:
 # Optimize model
 # Note the change from ParameterEstimator to VivadoDSPEstimator
-optimized_model = optimize_keras_for_hls4ml(
+optimized_model = optimize_keras_model_for_hls4ml(
     baseline_model, model_attributes, VivadoDSPEstimator, scheduler,
     X_train, y_train, X_val, y_val, batch_size, epochs,
     optimizer, loss_fn, metric, increasing, rtol
13 changes: 13 additions & 0 deletions docs/reference.rst
@@ -86,6 +86,19 @@ binary/ternary networks:
        year = "2021"
     }
+optimization API:
+
+.. code-block:: bibtex
+
+   @article{Ramhorst:2023fpga,
+      author = "Benjamin Ramhorst and others",
+      title = "{FPGA Resource-aware Structured Pruning for Real-Time Neural Networks}",
+      eprint = "2308.05170",
+      archivePrefix = "arXiv",
+      primaryClass = "cs.AR",
+      year = "2023"
+   }
 Acknowledgments
 ===============
 If you benefited from participating in our community, we ask that you please acknowledge the Fast Machine Learning collaboration, and particular individuals who helped you, in any publications.
2 changes: 1 addition & 1 deletion hls4ml/optimization/__init__.py
@@ -6,7 +6,7 @@
 default_regularization_range = np.logspace(-6, -2, num=16).tolist()


-def optimize_keras_for_hls4ml(
+def optimize_keras_model_for_hls4ml(
     keras_model,
     hls_config,
     objective,
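Because this is a straight rename with no alias left behind, any downstream code importing the old name breaks immediately. A minimal sketch of a deprecation shim a downstream project could keep during migration — note that the commit itself does not include such a shim, and the stub body below is purely illustrative, not the real hls4ml optimizer:

```python
import warnings


def optimize_keras_model_for_hls4ml(keras_model, hls_config, objective):
    # Stub standing in for the real hls4ml optimizer (illustration only).
    return keras_model


def optimize_keras_for_hls4ml(*args, **kwargs):
    # Hypothetical backwards-compatible alias for the pre-rename name;
    # it forwards to the new function and warns callers to migrate.
    warnings.warn(
        "optimize_keras_for_hls4ml was renamed to optimize_keras_model_for_hls4ml",
        DeprecationWarning,
        stacklevel=2,
    )
    return optimize_keras_model_for_hls4ml(*args, **kwargs)
```

Calling the old name then still works but emits a `DeprecationWarning`, giving users a release cycle to update their imports.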
6 changes: 4 additions & 2 deletions hls4ml/optimization/objectives/__init__.py
@@ -31,8 +31,10 @@ def is_layer_optimizable(self, layer_attributes):
         layer_attributes (hls4ml.optimization.attributes.LayerAttributes): Layer attributes
     Returns:
-        optimizable (boolean): can optimizations be applied to this layer
-        optimization_attributes (hls4ml.optimization.attributes.OptimizationAttributes):
+        tuple containing
+        - optimizable (boolean): can optimizations be applied to this layer
+        - optimization_attributes (hls4ml.optimization.attributes.OptimizationAttributes):
             Most suitable approach for optimization
     Examples:
