Ryanunderhill/rel 1.1.0 #2651

Merged Dec 14, 2019 (20 commits)

Commits
2058d63
Add missig env variables for mac pipeline test (#2595)
askhade Dec 10, 2019
a6ca33c
Java API for onnxruntime (#2215)
Craigacp Dec 10, 2019
65c4f1d
Rename automl python tools folder to featurizer_ops. (#2593)
yuslepukhin Dec 10, 2019
c2ee54a
Add in merge conflicted and mysterious @echo %PATH% statement to poss…
RyanUnderhill Dec 11, 2019
9d0beb5
added the path to the front
shahasad Dec 12, 2019
ced539a
Merge with Asad's build fix
RyanUnderhill Dec 12, 2019
97676c4
Nudge build
RyanUnderhill Dec 13, 2019
5729370
Make sure fenced tensor could not reuse other tensor. (#2561)
zhanghuanrong Dec 11, 2019
26946d5
Add support for opset 11 in reshape fusion (#2592)
tianleiwu Dec 10, 2019
ce89025
Support opset 11 subgraph of Squad model in Embed Layer Normalization…
tianleiwu Dec 10, 2019
0f5bbe1
Allow providers to be set for InferenceSession at construction (#2606)
EricCousineau-TRI Dec 11, 2019
d308c75
EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613)
liuziyue Dec 11, 2019
b0a39f9
Improve Embed Layer Norm Fusion for SQuAD with static input shape (#…
tianleiwu Dec 11, 2019
40adc6d
Improve cuda expand() opeator's performance. (#2624)
zhanghuanrong Dec 13, 2019
c7d8133
Cuda pad optimize when no padding is needed. (#2625)
zhanghuanrong Dec 13, 2019
3d36c4f
Improve performance of resize() in Nearest mode (#2626)
zhanghuanrong Dec 13, 2019
b05eaae
Optimize cuda scatter() on 2D compatible. (#2628)
zhanghuanrong Dec 13, 2019
a1178f4
fix float16 comparison in initializer (#2629)
yufenglee Dec 12, 2019
dc40bf6
epsilon attribute for layernormalization fusion (#2639)
liuziyue Dec 12, 2019
bed79c3
Fix memory exception in Layer Norm Fusion (#2644)
tianleiwu Dec 13, 2019
3 changes: 3 additions & 0 deletions .gitignore
@@ -39,3 +39,6 @@ onnxprofile_profile_test_*.json
/csharp/packages
/csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.targets
/csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.props
# Java specific ignores
java/src/main/native/ai_onnxruntime_*.h
java/.gradle
2 changes: 2 additions & 0 deletions README.md
@@ -88,6 +88,7 @@ Additional dockerfiles can be found [here](./dockerfiles).
* [C](docs/C_API.md)
* [C#](docs/CSharp_API.md)
* [C++](./include/onnxruntime/core/session/onnxruntime_cxx_api.h)
* [Java](docs/Java_API.md)
* [Ruby](https://github.com/ankane/onnxruntime) (external project)

### Official Builds
@@ -107,6 +108,7 @@ system.
* Version: **CUDA 10.0** and **cuDNN 7.6**
* Older ONNX Runtime releases: used **CUDA 9.1** and **cuDNN 7.1** - please refer to [prior release notes](https://github.com/microsoft/onnxruntime/releases) for more details.
* Python binaries are compatible with **Python 3.5-3.7**. See [Python Dev Notes](./docs/Python_Dev_Notes.md). If using `pip` to download the Python binaries, run `pip install --upgrade pip` first.
* The Java API is compatible with **Java 8-13**.
* Certain operators make use of system locales. Installing the **English language package** and configuring the `en_US.UTF-8` locale is required.
* For Ubuntu install [language-pack-en package](https://packages.ubuntu.com/search?keywords=language-pack-en)
* Run the following commands:
6 changes: 6 additions & 0 deletions cmake/CMakeLists.txt
@@ -726,6 +726,11 @@ if (onnxruntime_BUILD_SERVER)
include(onnxruntime_server.cmake)
endif()

if (onnxruntime_BUILD_JAVA)
message(STATUS "Java Build is enabled")
include(onnxruntime_java.cmake)
endif()

# some of the tests rely on the shared libs to be
# built; hence the ordering
if (onnxruntime_BUILD_UNIT_TESTS)
@@ -756,3 +761,4 @@ if (onnxruntime_BUILD_CSHARP)
# set_property(GLOBAL PROPERTY VS_DOTNET_TARGET_FRAMEWORK_VERSION "netstandard2.0")
include(onnxruntime_csharp.cmake)
endif()

110 changes: 110 additions & 0 deletions cmake/onnxruntime_java.cmake
@@ -0,0 +1,110 @@
# Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
# Licensed under the MIT License.

#set(CMAKE_VERBOSE_MAKEFILE on)

# Setup Java compilation
include(FindJava)
find_package(Java REQUIRED)
find_package(JNI REQUIRED)
include(UseJava)
include_directories(${JNI_INCLUDE_DIRS})
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c11")

set(JAVA_ROOT ${REPO_ROOT}/java)
set(CMAKE_JAVA_COMPILE_FLAGS "-source" "1.8" "-target" "1.8" "-encoding" "UTF-8")
if (onnxruntime_RUN_ONNX_TESTS)
set(JAVA_DEPENDS onnxruntime ${test_data_target})
else()
set(JAVA_DEPENDS onnxruntime)
endif()

# Specify the Java source files
set(onnxruntime4j_src
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/MapInfo.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/NodeInfo.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OnnxRuntime.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OnnxJavaType.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OnnxMap.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OnnxSequence.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OnnxTensor.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OnnxValue.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OrtAllocator.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OrtEnvironment.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OrtException.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OrtSession.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/OrtUtil.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/package-info.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/SequenceInfo.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/TensorInfo.java
${REPO_ROOT}/java/src/main/java/ai/onnxruntime/ValueInfo.java
)

# Build the jar and generate the native headers
add_jar(onnxruntime4j SOURCES ${onnxruntime4j_src} VERSION ${ORT_VERSION} GENERATE_NATIVE_HEADERS onnxruntime4j_generated DESTINATION ${REPO_ROOT}/java/src/main/native/)

# Specify the native sources (without the generated headers)
file(GLOB onnxruntime4j_native_src
"${REPO_ROOT}/java/src/main/native/*.c"
"${REPO_ROOT}/java/src/main/native/OrtJniUtil.h"
"${REPO_ROOT}/include/onnxruntime/core/session/*.h"
)

# Build the JNI library
add_library(onnxruntime4j_jni SHARED ${onnxruntime4j_native_src} ${onnxruntime4j_generated})
onnxruntime_add_include_to_target(onnxruntime4j_jni onnxruntime_session)
target_include_directories(onnxruntime4j_jni PRIVATE ${REPO_ROOT}/include ${REPO_ROOT}/java/src/main/native)
target_link_libraries(onnxruntime4j_jni PUBLIC ${JNI_LIBRARIES} onnxruntime onnxruntime4j_generated)

# Now that the jar, the JNI binary, and the shared library have been built, build the jar with the binaries added.

# This block constructs the name of the jar that will include the binaries
get_property(onnxruntime_jar_name TARGET onnxruntime4j PROPERTY JAR_FILE)
get_filename_component(onnxruntime_jar_abs ${onnxruntime_jar_name} ABSOLUTE)
get_filename_component(jar_path ${onnxruntime_jar_abs} DIRECTORY)
set(onnxruntime_jar_binaries_name "${jar_path}/onnxruntime4j-${ORT_VERSION}-with-binaries.jar")
set(onnxruntime_jar_binaries_platform "$<SHELL_PATH:${onnxruntime_jar_binaries_name}>")

# Copy the current jar
add_custom_command(TARGET onnxruntime4j_jni PRE_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_jar_name}
${onnxruntime_jar_binaries_platform})

# Make a temp directory to store the binaries
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD
COMMAND ${CMAKE_COMMAND} -E make_directory "${CMAKE_CURRENT_BINARY_DIR}/java-libs/lib")

# Copy the binaries
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy "$<TARGET_FILE:onnxruntime4j_jni>" ${CMAKE_CURRENT_BINARY_DIR}/java-libs/lib/)

if (WIN32)
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy "$<TARGET_FILE:onnxruntime>" ${CMAKE_CURRENT_BINARY_DIR}/java-libs/lib/)
# Update the with-binaries jar so it includes the binaries
add_custom_command(
TARGET onnxruntime4j_jni POST_BUILD
COMMAND ${Java_JAR_EXECUTABLE} -uf ${onnxruntime_jar_binaries_platform} -C ${CMAKE_CURRENT_BINARY_DIR}/java-libs lib/$<TARGET_FILE_NAME:onnxruntime4j_jni> -C ${CMAKE_CURRENT_BINARY_DIR}/java-libs lib/$<TARGET_FILE_NAME:onnxruntime>
DEPENDS onnxruntime4j
COMMENT "Rebuilding Java archive ${_JAVA_TARGET_OUTPUT_NAME}"
VERBATIM
)
else ()
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy "$<TARGET_LINKER_FILE:onnxruntime>" ${CMAKE_CURRENT_BINARY_DIR}/java-libs/lib/)
# Update the with-binaries jar so it includes the binaries
add_custom_command(
TARGET onnxruntime4j_jni POST_BUILD
COMMAND ${Java_JAR_EXECUTABLE} -uf ${onnxruntime_jar_binaries_platform} -C ${CMAKE_CURRENT_BINARY_DIR}/java-libs lib/$<TARGET_FILE_NAME:onnxruntime4j_jni> -C ${CMAKE_CURRENT_BINARY_DIR}/java-libs lib/$<TARGET_LINKER_FILE_NAME:onnxruntime>
DEPENDS onnxruntime4j
COMMENT "Rebuilding Java archive ${_JAVA_TARGET_OUTPUT_NAME}"
VERBATIM
)
endif()

create_javadoc(onnxruntime4j_javadoc
FILES ${onnxruntime4j_src}
DOCTITLE "Onnx Runtime Java API"
WINDOWTITLE "OnnxRuntime-Java-API"
AUTHOR FALSE
USE TRUE
VERSION FALSE
)
36 changes: 35 additions & 1 deletion cmake/onnxruntime_unittests.cmake
@@ -800,7 +800,6 @@ list(APPEND onnxruntime_mlas_test_libs Threads::Threads)
target_link_libraries(onnxruntime_mlas_test PRIVATE ${onnxruntime_mlas_test_libs})
set_target_properties(onnxruntime_mlas_test PROPERTIES FOLDER "ONNXRuntimeTest")


add_library(custom_op_library SHARED ${REPO_ROOT}/onnxruntime/test/testdata/custom_op_library/custom_op_library.cc)
target_include_directories(custom_op_library PRIVATE ${REPO_ROOT}/include)
if(UNIX)
@@ -814,3 +813,38 @@ else()
# need to ignore the linker warning 4199, due to some global linker flags failing here
endif()
set_property(TARGET custom_op_library APPEND_STRING PROPERTY LINK_FLAGS ${ONNXRUNTIME_CUSTOM_OP_LIB_LINK_FLAG})

if (onnxruntime_BUILD_JAVA)
message(STATUS "Running Java tests")
# Build and run tests
set(onnxruntime4j_test_src
${REPO_ROOT}/java/src/test/java/ai/onnxruntime/InferenceTest.java
${REPO_ROOT}/java/src/test/java/ai/onnxruntime/TestHelpers.java
${REPO_ROOT}/java/src/test/java/ai/onnxruntime/OnnxMl.java
${REPO_ROOT}/java/src/test/java/ai/onnxruntime/UtilTest.java
)

# Create test directories
file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/java-tests/")
file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/java-tests/results")

# Download test dependencies
if (NOT EXISTS ${CMAKE_CURRENT_BINARY_DIR}/java-tests/junit-platform-console-standalone-1.5.2.jar)
message("Downloading JUnit 5")
file(DOWNLOAD https://repo1.maven.org/maven2/org/junit/platform/junit-platform-console-standalone/1.5.2/junit-platform-console-standalone-1.5.2.jar ${CMAKE_CURRENT_BINARY_DIR}/java-tests/junit-platform-console-standalone-1.5.2.jar EXPECTED_HASH SHA1=8d937d2b461018a876836362b256629f4da5feb1)
endif()

if (NOT EXISTS ${CMAKE_CURRENT_BINARY_DIR}/java-tests/protobuf-java-3.10.0.jar)
message("Downloading protobuf-java 3.10.0")
file(DOWNLOAD https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java/3.10.0/protobuf-java-3.10.0.jar ${CMAKE_CURRENT_BINARY_DIR}/java-tests/protobuf-java-3.10.0.jar EXPECTED_HASH SHA1=410b61dd0088aab4caa05739558d43df248958c9)
endif()

# Build the test jar
add_jar(onnxruntime4j_test SOURCES ${onnxruntime4j_test_src} VERSION ${ORT_VERSION} INCLUDE_JARS ${onnxruntime_jar_name} ${CMAKE_CURRENT_BINARY_DIR}/java-tests/junit-platform-console-standalone-1.5.2.jar ${CMAKE_CURRENT_BINARY_DIR}/java-tests/protobuf-java-3.10.0.jar)

add_dependencies(onnxruntime4j_test onnxruntime4j_jni onnxruntime4j)
get_property(onnxruntime_test_jar_name TARGET onnxruntime4j_test PROPERTY JAR_FILE)

# Run the tests with JUnit's console launcher
add_test(NAME java-api COMMAND ${Java_JAVA_EXECUTABLE} -jar ${CMAKE_CURRENT_BINARY_DIR}/java-tests/junit-platform-console-standalone-1.5.2.jar -cp ${CMAKE_CURRENT_BINARY_DIR}/java-tests/protobuf-java-3.10.0.jar -cp ${onnxruntime_test_jar_name} -cp ${onnxruntime_jar_binaries_platform} --scan-class-path --fail-if-no-tests --reports-dir=${CMAKE_CURRENT_BINARY_DIR}/java-tests/results --disable-banner WORKING_DIRECTORY ${REPO_ROOT})
endif()
56 changes: 56 additions & 0 deletions docs/Java_API.md
@@ -0,0 +1,56 @@
# ONNX Runtime Java API
The ONNX Runtime provides a Java binding for running inference on ONNX models on a JVM, using Java 8 or newer.

Two jar files are created during the build process: one contains the onnxruntime shared library, the JNI binding, and the Java class files; the other contains only the class files. By default the shared libraries are loaded from the classpath in a folder called `/lib`; if you wish to load them from `java.library.path` instead, supply `-DORT_LOAD_FROM_LIBRARY_PATH` to the JVM at runtime.
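
For example, to load the native libraries from `java.library.path` rather than the classpath, supply the flag when launching the JVM. A sketch of such a command (the jar name, classpath layout, and library directory are illustrative assumptions):

    java -DORT_LOAD_FROM_LIBRARY_PATH -Djava.library.path=/path/to/native/libs -cp onnxruntime4j-1.1.0.jar:app.jar com.example.Main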

## Sample Code

The unit tests contain several examples of loading models, inspecting input/output node shapes and types, as well as constructing tensors for scoring.

* [../java/src/test/java/ai/onnxruntime/InferenceTest.java#L66](../java/src/test/java/ai/onnxruntime/InferenceTest.java#L66)

## Getting Started
Here is a simple tutorial for running inference on an existing ONNX model with some input data. The model is typically trained using one of the well-known training frameworks and exported into the ONNX format.
Note the code presented below uses syntax available from Java 10 onwards. The Java 8 syntax is similar but more verbose.
To start a scoring session, first create the `OrtEnvironment`, then open a session using the `OrtSession` class, passing in the file path to the model as a parameter.

    var env = OrtEnvironment.getEnvironment();
    var session = env.createSession("model.onnx", new OrtSession.SessionOptions());
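
For reference, the equivalent Java 8 form simply spells out the types instead of using `var`:

    OrtEnvironment env = OrtEnvironment.getEnvironment();
    OrtSession session = env.createSession("model.onnx", new OrtSession.SessionOptions());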

Once a session is created, you can execute queries using the `run` method of the `OrtSession` object.
At the moment we support `OnnxTensor` inputs, and models can produce `OnnxTensor`, `OnnxSequence` or `OnnxMap` outputs. The latter two are more likely when scoring models produced by frameworks like scikit-learn.
The run call expects a `Map<String,OnnxTensor>` where the keys match input node names stored in the model. These can be viewed by calling `session.getInputNames()` or `session.getInputInfo()` on an instantiated session.
The run call produces a `Result` object, which contains a `Map<String,OnnxValue>` representing the output. The `Result` object is `AutoCloseable` and can be used in a try-with-resources statement to
prevent references from leaking out. Once the `Result` object is closed, all its child `OnnxValue`s are closed too.

    OnnxTensor t1, t2;
    var inputs = Map.of("name1", t1, "name2", t2);
    try (var results = session.run(inputs)) {
        // manipulate the results
    }
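
To read the outputs, ask the `Result` for a value by name or index. A minimal sketch, assuming a single output node named `output` holding a 2-D float tensor (both the name and the shape are illustrative assumptions; `Result.get(String)` returns an `Optional<OnnxValue>`):

    try (var results = session.run(inputs)) {
        OnnxValue value = results.get("output").get();              // look up the output node by name
        float[][] scores = (float[][]) ((OnnxTensor) value).getValue(); // copy out as a Java array
    }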

You can load your input data into `OnnxTensor` objects in several ways. The most efficient is to use a `java.nio.Buffer`, but multidimensional arrays also work. If constructed from arrays, the arrays must not be ragged.

    FloatBuffer sourceData; // assume your data is loaded into a FloatBuffer
    long[] dimensions;      // and the dimensions of the input are stored here
    var tensorFromBuffer = OnnxTensor.createTensor(env, sourceData, dimensions);

    float[][] sourceArray = new float[28][28]; // assume your data is loaded into a float array
    var tensorFromArray = OnnxTensor.createTensor(env, sourceArray);
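
`OnnxTensor` is itself `AutoCloseable`, so input tensors can be scoped with try-with-resources as well; a short sketch reusing the buffer and input name from the examples above:

    try (var tensor = OnnxTensor.createTensor(env, sourceData, dimensions);
         var results = session.run(Map.of("name1", tensor))) {
        // both the tensor and the results are freed when this block exits
    }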

Here is a [complete sample program](../java/sample/ScoreMNIST.java) that runs inference on a pretrained MNIST model.

## Running on a GPU or with another provider (Optional)
To enable other execution providers, such as CUDA for GPUs, simply turn on the appropriate flag on `SessionOptions` when creating an `OrtSession`.

    int gpuDeviceId = 0; // The GPU device ID to execute on
    var sessionOptions = new OrtSession.SessionOptions();
    sessionOptions.addCUDA(gpuDeviceId);
    var session = environment.createSession("model.onnx", sessionOptions);

The execution providers are preferred in the order they were enabled.

## API Reference

The Javadoc is available [here](https://microsoft.github.io/onnxruntime/java/index.html).
