Java API for onnxruntime #2215
Conversation
As discussed in pull #1723, your loader doesn't support using the same library from multiple class loaders, which are heavily used by containers like Flink, Tomcat, WebLogic, etc. (see issue tensorflow/tensorflow#6926) and by build systems like sbt (see issue tensorflow/tensorflow#19298). Similar issues occur with shading (see issue netty/netty#7272).

On Windows, it will also fail to erase extracted DLL files and fill up temporary directories, as happens with TensorFlow (see issue tensorflow/tensorflow#18397). On Android, extracting libraries and using …

On Linux mainly, but on other OSes as well, different architectures (x86, x86_64, ARM, etc.) produce library files with the same name, but this code doesn't differentiate between them, which is bound to cause problems.

To save users from downloading large binaries they don't need, the Java class files should be packaged in a separate module from the native libraries. However, JPMS (required to use things like …

For users who wish to use ONNX Runtime with libraries like CUDA or MKL, I'm not seeing any support for packaging and loading those, so we will need to install them manually.

If these kinds of limitations are not considered important enough to be fixed, then let's at least list them somewhere as known issues...
I looked into the JNI loading issue in application servers after we first discussed it, and this is an intentional security limitation of JNI to prevent people breaking out of a ClassLoader via native code (https://docs.oracle.com/en/java/javase/11/docs/specs/jni/invocation.html#library-and-version-management). The trick in JavaCPP which increments the library name and keeps reloading it presumably manages to load the library, but it's unclear what native functions are called, and if there is an upcall back into the JVM using env->FindClass then I don't know what class is returned, or which ClassLoader it came from. Have you done a security analysis of this part of JavaCPP? The current code fails with an exception if the library is loaded twice, which is not a security or type safety risk (and is the intended behaviour of JNI in this situation).

I wasn't aware of the Windows native loading issue. Did you open a bug on OpenJDK (or check if the same thing occurs on OpenJ9 or Azul)? It sounds like something they could look into. As you can't delete a file that's mapped in Windows, you'd need to unmap it from the JVM first, and my understanding is that that happens when the ClassLoader goes away (which is presumably after the deleteOnExit trigger fires for the root loader).

I'm not too concerned about Android at the moment; my understanding was that the other PR was for Android.

For the various other packaging issues, I agree the loader doesn't differentiate between native platforms, nor does the build system package dependent libraries like CUDA into the jar. Packaging it all into a set of jars leads to a cross-product blowup in the number of jars (which would occur for the other language bindings here too, which I note don't package in libraries like CUDA), and I don't think it's worthwhile maintaining a jar file for each version of CUDA, glibc and combination of other backends that a user could want. Or you don't make a cross product and you end up with the issue that DL4J has, where you can't use a decent CPU BLAS and a GPU at the same time using any published artifact. The current jar is 3.1MB on Linux, containing a libonnxruntime compiled in release mode along with the JNI binding. That's not too big a download, so I'm not too worried about it.

All that said, the packaging issues are a decision for the project rather than something I can control in this pull request.
Only the JNI library is renamed and loaded separately. All the JNI code is contained in a single compilation unit with everything static, except for the JNI functions. It's also linked with …

To do the kind of advanced "security analysis" that satisfies shops like Oracle, however, I would need a bit more resources. Oracle has the resources though. Why are they not doing anything about it? Are you saying that Oracle is perfectly OK with shipping Java libraries that do not work properly with pretty much every Java-based enterprise container out there?
Does this mean you believe that OpenJDK is interested in looking into this? Where do I need to sign up? Anytime I bring up these issues on the Panama mailing list I get ignored. IIRC, when I brought up this issue specifically they said that Panama isn't going to be affected by this limitation so they are not going to do anything about that. For them, JNI is already legacy even before they have something shipping to replace it! If you can convince them otherwise that maybe they should care, that would be great. As for OpenJ9 and Azul Zulu, those use OpenJDK so they behave the same, but I don't know about Azul Zing. It would surprise me if they tried to do anything about this though since they're not an ML shop.
I don't see any reason why there should be 1 API for Java SE and another API for Android. This isn't helping the Java community. I hope this isn't the official stance of Oracle...?
But you're making it your decision by inserting code that bundles and extracts from a JAR file! If you're perfectly happy with the way OpenJDK intends us to do things, please leave that part out entirely, instead of forcing users to do it your way. Then users could do it however they want, with …
Raise the Windows bug on the appropriate OpenJDK mailing list, which is probably the class libraries one, rather than the Panama mailing list. It's not a Panama or JNI issue, but one with how Windows deals with mapped files and what the deleteOnExit hook is supposed to do. At the very least you'd then have a bug reference to talk about. I'm unclear whether the differences between OpenJ9 and Hotspot would change how this bug is dealt with, but given it's going to end up in the guts of how the JVM deals with class unloading, native libraries and shutdown hooks, I'd be surprised if the behaviour was the same. Ditto for Zing.

I do not speak for Oracle, the Java product group or OpenJDK; I'm an ML researcher in the research labs. Our group in Oracle Labs is trying to contribute this code because we've found it useful internally, and believe it would be better hosted as part of the onnxruntime project rather than separately. It's up to the maintainers of the project to decide what they want to do with the Android and Java interfaces, and I'm happy to work with them on that.

MKL and CUDA are automatically loaded if they are on the library path, the same way any other native library is. The onnxruntime library is bundled into the jar because expecting a user to have that installed is different to expecting them to have CUDA installed; CUDA and MKL are frequently provided in machine images, so bundling a specific one is less valuable. It's the same choice as made in the other language implementations in this project. Bundling large native libraries into the jar file causes other issues, as then you either need to cache them in some filesystem-dependent place (which causes problems when you don't have access to the filesystem, or it's network mounted, etc.) or you pay the cost of unzipping them out to disk each time the JVM starts. It's less of an issue for small libraries, but last time I checked CUDA was pretty big.
The problem isn't relevant when using a cache anyway, so personally I don't care if they don't want to fix it. FWIW, it's probably going to end up like this 14-year-old bug:
Great, I'm happy we got that clear. It's not my intention to start arguing about issues related to Oracle here either. All I wanted to say is that your loader has limitations that you should either fix, or if you have no intentions of fixing all those issues, then at least provide a way to disable it so that the JNI libraries can be loaded by some other means. A note as part of the documentation about those limitations would be welcome, but in any case, it isn't my place to decide what ends up as part of ONNX Runtime!
Again, not relevant with a cache. The libraries get unzipped exactly once, the same as downloading and installing them manually. The only difference with properly packaged JAR files is that the installation step is automated.
/azp run
Azure Pipelines successfully started running 12 pipeline(s).
…f the library loader, and allow loading directly from java.library.path.
I added a couple of startup flags that are checked by the library loader. Now it has optional debug logging on loading, and you can tell it to load the libraries from java.library.path. The file creation loader was causing some issues on an internal project using immutable Docker containers, so now it's possible to disable it. One further issue is that the JVM complains that the stack of the generated libonnxruntime is executable, which interferes with some protections. It suggests a remedy which changes how the shared library is linked; does that sound like it will cause issues?
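For concreteness, a loader that honours a startup flag to switch between java.library.path and jar extraction might be structured like the minimal sketch below. The property name, class name, and overall shape are illustrative assumptions, not the flags actually added in this PR.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class NativeLoaderSketch {
    // Hypothetical property name; the actual flags added in the PR may differ.
    private static final String LOAD_FROM_LIBRARY_PATH = "hypothetical.load.from.library.path";

    static void load(String libraryName) throws IOException {
        if (Boolean.getBoolean(LOAD_FROM_LIBRARY_PATH)) {
            // Defer to java.library.path; nothing is written to disk, which suits immutable containers.
            System.loadLibrary(libraryName);
            return;
        }
        // Otherwise extract the bundled library from the jar into a temporary file and load it.
        String resource = "/" + System.mapLibraryName(libraryName);
        try (InputStream in = NativeLoaderSketch.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("Native library not found on classpath: " + resource);
            }
            Path tmp = Files.createTempFile(libraryName, ".tmp");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}
```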
What if users want to load libraries that are named differently from "onnxruntime" and "ONNX4j"? To support multiple class loaders, among other things? Anyway, if this makes it into ONNX Runtime itself, downstream distributions can patch this easily enough by hard coding …
Supporting multiple classloaders the way you propose is explicitly against the JNI spec, so I'm not going to do that, as its behaviour is unknown with respect to security (and might well differ across JVM implementations). Hardcoding the …

Once there is some signoff on the API I'll work on non-javadoc documentation, which is where the native loading considerations should be. That's why it's not in the current PR.
public void close() throws ONNXException {
    if (!closed) {
        //closeAllocator(ONNX.ortApiHandle,handle);
        closed = true;
closed = true
Native handle must be closed #Resolved
The free method for an allocator disappeared between 0.5 and the current master. I'm not sure what C API endpoint I should use to free one. #Resolved
This class houses the default allocator, which is a singleton. No need to free it. However, I suggest you reflect that in the class name and in the comments.
You may want to create a separate ONNXAllocator class that creates and frees custom allocators. For that you can allocate an OrtAllocator structure, fill in its members and use it. You'd have to free it when it is done.
Ok, I can do that. Is there an example somewhere of making a non-default allocator? All that stuff changed recently and I'm not sure how best to use the C API. #Resolved
I can't find one handy. We can add a custom allocator later.
 * @return An ONNXTensor storing the data.
 * @throws ONNXException If the onnx runtime threw an error.
 */
public ONNXTensor createTensor(Object data) throws ONNXException {
Object data
Suggest a separate overload for String[] and non-string data
So createTensor(String[]), createTensor(String) and createTensor(Object) or just the String[] and Object ones?
String[] and something for raw data. Perhaps something better than an Object? Perhaps those nice overloads below would do?
There's no top type for arrays or primitives, so it's hard to have a more specific type. There's also no top type for multidimensional primitive arrays that keeps the primitive type but abstracts over the dimensionality. Until Java gets a good tensor type or reified generics I'm not sure what a nicer type would be.
So what does Object represent given all the overloads below?
Object is anything that isn't matched by the overloads. Then the TensorInfo.constructFromJavaArray call throws an ONNXException if it's not a convertible type; if it is, then it constructs an ONNXTensor.
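To make that dispatch concrete, here is a minimal sketch of how an Object-typed entry point can recover the element type and shape of an arbitrary Java array via reflection and reject unsupported inputs, which is roughly the job described above for TensorInfo.constructFromJavaArray. The class and method names below are illustrative, not the PR's actual code.

```java
import java.lang.reflect.Array;

public class TensorFactorySketch {
    /**
     * Walk a (possibly multidimensional) Java array, recover its element type,
     * and reject anything that is not an array of a primitive or String.
     */
    public static Class<?> elementType(Object data) {
        Class<?> clazz = data.getClass();
        if (!clazz.isArray()) {
            throw new IllegalArgumentException("Expected an array, got " + clazz.getName());
        }
        while (clazz.isArray()) {
            clazz = clazz.getComponentType();
        }
        if (!clazz.isPrimitive() && clazz != String.class) {
            throw new IllegalArgumentException("Unsupported element type " + clazz.getName());
        }
        return clazz;
    }

    /** Collect the length of each dimension, assuming a rectangular array. */
    public static long[] shapeOf(Object data) {
        int dims = 0;
        for (Class<?> c = data.getClass(); c.isArray(); c = c.getComponentType()) {
            dims++;
        }
        long[] shape = new long[dims];
        Object cur = data;
        for (int i = 0; i < dims; i++) {
            shape[i] = Array.getLength(cur);
            if (shape[i] == 0) {
                break; // empty array, remaining dimensions stay 0
            }
            cur = Array.get(cur, 0);
        }
        return shape;
    }
}
```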
Hello, nice PR. I've got an error when I run …
Any idea how I could test it?
@XciD can you post the stack frames from the log file? I've not seen an error out of the build like that before. Did it run any of the tests?
 * @param clazz The class to use.
 * @return An ONNXJavaType instance.
 */
public static ONNXJavaType mapFromClass(Class<?> clazz) {
mapFromClass
Seems incomplete? #Resolved
I didn't think those types were supported in the runtime. Otherwise I can map them to boxed types for the moment. With value types there will be a Java answer but that's not available yet. #Resolved
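For illustration, a class-to-element-type mapping that covers both primitive and boxed classes might look like the sketch below. The enum is a stand-in assumption; only the INT8/INT64/FLOAT-style constants visible in the diff come from the PR, and the rest is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public final class TypeMappingSketch {
    // Hypothetical enum standing in for the ONNXJavaType constants referenced in the diff.
    public enum JavaType { FLOAT, DOUBLE, INT8, INT16, INT32, INT64, BOOL, STRING, UNKNOWN }

    private static final Map<Class<?>, JavaType> MAPPING = new HashMap<>();
    static {
        // Primitive classes and their boxed counterparts map to the same element type.
        MAPPING.put(float.class, JavaType.FLOAT);   MAPPING.put(Float.class, JavaType.FLOAT);
        MAPPING.put(double.class, JavaType.DOUBLE); MAPPING.put(Double.class, JavaType.DOUBLE);
        MAPPING.put(byte.class, JavaType.INT8);     MAPPING.put(Byte.class, JavaType.INT8);
        MAPPING.put(short.class, JavaType.INT16);   MAPPING.put(Short.class, JavaType.INT16);
        MAPPING.put(int.class, JavaType.INT32);     MAPPING.put(Integer.class, JavaType.INT32);
        MAPPING.put(long.class, JavaType.INT64);    MAPPING.put(Long.class, JavaType.INT64);
        MAPPING.put(boolean.class, JavaType.BOOL);  MAPPING.put(Boolean.class, JavaType.BOOL);
        MAPPING.put(String.class, JavaType.STRING);
    }

    public static JavaType mapFromClass(Class<?> clazz) {
        return MAPPING.getOrDefault(clazz, JavaType.UNKNOWN);
    }
}
```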
case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT64:
case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64:
    return ONNXJavaType.INT64;
case ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16:
ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16
Mapping this to FLOAT is probably wrong
I think it upconverts an fp16 result into a float on the way out of the native code, but I'll double check tomorrow.
This is represented as uint16_t in C++ code and this is what you will get here. Converting it to a standard float requires a call to an elaborate function.
Yes, in ONNXUtil.c it calls the convertHalfToFloat function whenever an fp16 value is passed back up. Given fp16 is usually used for memory and perf reasons within the network, it doesn't seem so bad to have it be upconverted into a float for the output. At least that preserves all the information and doesn't require the user to do bit twiddling in Java land. If you want me to return it as a short I can do that too, but it'll be more confusing for the users.
So what is the end-to-end scenario? This allows reading float16 output, but what about creating a float16 Tensor as input? Also, any plans to support BFloat16?
Currently I don't have a way of creating a float16 Tensor as an input. That's mainly because I couldn't figure out a good carrier type in Java. I suppose I could put a flag on an entry point that takes a float array or buffer and have it generate either float or float16.
For bfloat16 I was going off the comments that said it wasn't supported in the onnx runtime. Has that changed?
@Craigacp Since they're using uint16_t in C++ for those, we can simply use short in Java; that's perfectly fine. There's no fundamental type for that in C++ either! It works great, see examples of that here: https://github.com/bytedeco/javacpp/blob/master/src/test/java/org/bytedeco/javacpp/IndexerTest.java#L868
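Since the convertHalfToFloat mentioned above lives in the native code, a plain-Java equivalent of the IEEE 754 binary16 widening (and a simple narrowing for building float16 inputs from floats) could look like the following sketch. The class and method names are illustrative, and the narrowing truncates instead of rounding to nearest even.

```java
public final class Fp16Sketch {
    /** Widen an IEEE 754 binary16 bit pattern (carried in a short) to a float. */
    public static float halfToFloat(short half) {
        int h = half & 0xFFFF;
        int sign = (h >>> 15) & 0x1;
        int exp = (h >>> 10) & 0x1F;
        int mant = h & 0x3FF;
        float value;
        if (exp == 0) {
            value = mant * 0x1p-24f;                       // zero or subnormal: mant * 2^-24
        } else if (exp == 0x1F) {
            value = (mant == 0) ? Float.POSITIVE_INFINITY : Float.NaN;
        } else {
            value = (1.0f + mant * 0x1p-10f) * (float) Math.pow(2, exp - 15);
        }
        return sign == 1 ? -value : value;
    }

    /** Narrow a float to a binary16 bit pattern (truncating; subnormals flushed to zero). */
    public static short floatToHalf(float value) {
        int bits = Float.floatToIntBits(value);
        int sign = (bits >>> 16) & 0x8000;
        if (Float.isNaN(value)) {
            return (short) (sign | 0x7E00);                // quiet NaN
        }
        int exp = ((bits >>> 23) & 0xFF) - 127 + 15;       // rebias exponent from float to half
        int mant = (bits >>> 13) & 0x3FF;                  // keep the top 10 mantissa bits
        if (exp <= 0) {
            return (short) sign;                           // underflow to signed zero
        } else if (exp >= 0x1F) {
            return (short) (sign | 0x7C00);                // overflow to infinity
        }
        return (short) (sign | (exp << 10) | mant);
    }
}
```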
case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT8:
case ONNX_TENSOR_ELEMENT_DATA_TYPE_INT8:
    return ONNXJavaType.INT8;
case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16:
case ONNX_TENSOR_ELEMENT_DATA_TYPE_UINT16:
We will have to decide what to do with Tensors that contain unsigned types. Perhaps they should not be allowed, especially when created from the Java API. And that's the approach you take in Maps.
In Maps I support the types that are specified as supported in the C API header. For unsigned types this is rather unfortunate, but it does allow users to use models with unsigned types, even if they have to contort their calling code.
It's easy enough to support unsigned 8-bit and 16-bit types in Java with helper methods that pass values using int arguments. Here's an example of that using JavaCPP's indexer package: http://bytedeco.org/news/2014/12/23/third-release/
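Helper methods along those lines, carrying uint8/uint16 storage in byte/short arrays while exposing them to callers as non-negative int values, might look like this sketch (names are illustrative, not part of the PR):

```java
public final class UnsignedSketch {
    /** Read a uint8 value stored in a byte, widened to a non-negative int. */
    public static int getUnsignedByte(byte[] data, int index) {
        return data[index] & 0xFF;
    }

    /** Read a uint16 value stored in a short, widened to a non-negative int. */
    public static int getUnsignedShort(short[] data, int index) {
        return data[index] & 0xFFFF;
    }

    /** Store an int in [0, 255] as a uint8, throwing if it does not fit. */
    public static void putUnsignedByte(byte[] data, int index, int value) {
        if (value < 0 || value > 0xFF) {
            throw new IllegalArgumentException("Value out of uint8 range: " + value);
        }
        data[index] = (byte) value;
    }

    /** Store an int in [0, 65535] as a uint16, throwing if it does not fit. */
    public static void putUnsignedShort(short[] data, int index, int value) {
        if (value < 0 || value > 0xFFFF) {
            throw new IllegalArgumentException("Value out of uint16 range: " + value);
        }
        data[index] = (short) value;
    }
}
```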
…o dev/shahasad/java-api-pr-5
I've merged in that other PR, so can someone rerun the CI?
/azp run
Azure Pipelines successfully started running 12 pipeline(s).
/azp run
Azure Pipelines successfully started running 12 pipeline(s).
/azp run
Azure Pipelines successfully started running 12 pipeline(s).
* Add missig env variables for mac pipeline test (#2595) * Java API for onnxruntime (#2215) * Rename automl python tools folder to featurizer_ops. (#2593) * Make sure fenced tensor could not reuse other tensor. (#2561) * Add support for opset 11 in reshape fusion (#2592) * Support opset 11 subgraph of Squad model in Embed Layer Normalization (#2605) * Allow providers to be set for InferenceSession at construction (#2606) * EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613) * Improve Embed Layer Norm Fusion for SQuAD with static input shape (#2621) * Improve cuda expand() opeator's performance. (#2624) * Cuda pad optimize when no padding is needed. (#2625) * Shortcut cuda Pad() when no padding is needed. * Improve performance of resize() in Nearest mode (#2626) * Optimize cuda scatter() on 2D compatible. (#2628) * Optimize cuda scatter() on 2D compatible. * fix float16 comparison in initializer (#2629) * epsilon attribute for layernormalization fusion (#2639) * Fix memory exception in Layer Norm Fusion (#2644)
* Disable Attention fusion tests when DISABLE_CONTRIB_OPS is defined (#2529) * Setup java ci (#2528) * Add provision in ORT for session options to be parsed when available via model file (#2449) * Initial commit * Fix gitmodules * Nits * Nits * Updates * Update * More changes * Updates * Update * Some updates * More changes * Update * Update * Merge * Update * Updates * More changes * Update * Fix nits * Updates * Fix warning * Fix build * Add comment * PR feedback * PR feedback * Updates * Updates * Update * More changes * Fix build break * Comment test for now * Updates * Updates * PR feedback * Updates * Nits * Add tests * Fix build * Fix build * Fix build * Fix build break * Fix build * Nits * PR feedback * More change * Expose GetSessionOptions in pybind logic and add unit test for python * Fix build * PR feedback * PR feedback * Revert "Disable thread pool creation when enabled OpenMP (#2485)" (#2535) This reverts commit 7c7d5a1. * Add dynamic shape support in TensorRT execution provider (#2450) * remove onnx-tensorrt submodule * add new onnx-tensorrt submodule (experiment) for trt6 * update engine build for trt6 * update compile and compute for tensorrt6.0 * Update tensorrt_execution_provider.cc * Update tensorrt_execution_provider.cc * Update tensorrt_execution_provider.cc * Update tensorrt_execution_provider.cc * switch to onnx-tensorrt master for TensorRT6' * Update tensorrt_execution_provider.cc * Handle dynamic batch size and add memcpy in TensorRT EP * update test cases * Update tensorrt_execution_provider.cc * update onnx-tensorrt submodule * Update Dockerfile.ubuntu_tensorrt * Update Dockerfile.ubuntu_tensorrt * Update run_dockerbuild.sh * Update run_dockerbuild.sh * Update install_ubuntu.sh * Update concat_op_test.cc * Update tensorrt_execution_provider.cc * Upgrade TensorRT to version 6.0.1.5 * Update onnxruntime_providers.cmake * Update CMakeLists.txt * Update reduction_ops_test.cc * Update install_ubuntu.sh * Update Dockerfile.ubuntu_tensorrt * Update Dockerfile.tensorrt * Update BUILD.md * Update run_dockerbuild.sh * Update install_ubuntu.sh * Update onnxruntime_providers.cmake * Update install_ubuntu.sh * Update install_ubuntu.sh * Update gemm_test.cc * Update gather_op_test.cc * Update CMakeLists.txt * Removed submodule * update onnx-tensorrt submodule * update header file * Removed submodule * add submodule onnx-tensorrt kevin's branch shape-test' * add debugging code * Update tensorrt_execution_provider.cc * Update tensorrt_execution_provider.cc * merge master * Removed submodule * update onnx-tensorrt submodule * add more changes for dynamic shapes * Update tensorrt_execution_provider.cc * update for dynamic shape * update dynamic shape processing * fix logger issue * remove submodule onnx-tensorrt * add submodule onnx-tensorrt * add env variable min_subgraph_size * remove redundency * update document * use onnxruntime::make_unique * fix multi-run issue * remove some tests to save CI build time * Add dynamic shape test * Update TensorRT-ExecutionProvider.md * Add example of running Faster R-CNN model on TensorRT EP * Add more details on env variables * update environment variables * Update tensorrt_basic_test.cc * Update model tests * Update tensor_op_test.cc * remove --use_full_protobuf * Update build.py * User/xianz/telemetry (#2458) * enabme telemetry * enable telemetry * set enable telemetry as default * for debugging * remove log and set disable telemetry as default back * delete private file while testing * resolve comment: mainly add license header, rename 
macro and update docs * rewording in privacy.md * Fix integer overflow in cuda NonMaxSuppression implementation (#2540) * add test case that should pass but fail * fix nms * extract int_max_output_boxes_per_class * Introduce container type runtime checks and other improvements (#2522) Rework TensorSeq in a manner consistent with Tensor and SparseTensor in terms of type system setup. Reduce templating. Introduce helpers to ensure the same data type. Make OrtValue __dtor not virtual. Introduce ContainerChecker * Fix C API tests for centos and mac (#2544) * change c++14 to c++11 * add ld lib path for centos * enable csharp tests on macos * fix C API test on MacOS + fix manylinux dotnet install * fix manylinux dotnet install * fix lib link * Add back executable bit to build.py * Fix a bug handling negative begin pad values in Pad op (#2550) * Fix bug in Pad op * Update * DNNL CMAKE update (#2548) * Fix android build (#2558) * Update win-x86-ci.yml (#2557) Fix build pipeline break * Re-enable Windows C# tests (#2564) * disable onnx_test_runner -x invocations for dnnl (#2568) * Allow sequence length to be symbolic (#2559) * setup java ci mac (#2570) * make layernorm fusion to support opset 11 (#2545) * Fix a warning found in the latest VS release * Add more check on SkipLayerNorm and BiasGelu fusion (#2574) * Fix file not found error during docker build. (#2569) * Add ConvTranspose1D (#2578) * Ryanunderhill/packagename test (#2582) * [Nuphar EP] fixes for some object detection models (#2581) Update notebook tutorial with multi-threaded int8 GEMM from #2517 * EmbedLayerNormalization Fusion Improvement (#2553) Embedding layer norm fusion improvements - add more checks * Update version (#2584) * Temporarily exclude vgg19 test from Python backend test 1. temporarily exclude vgg19 test which comsumes too much memory, run out of memory on Upsquared device. Single test pass for vgg19, need furture investigation (#2588) 2. Update docker file to decrease the docker image size * Update docs for Android NNAPI EP (#2586) * Fix lto bug for protobuf and ubuntu * add path to build dir before test run (#2590) * Add missig env variables for mac pipeline test (#2595) * Fixed an issue in updating realized dims (#2597) when we update realized dims for scan's output, the sliced axis also needs to be inclusive, i.e. we should check with "dim >= insert_inclusive_axis", because the offset in the symbols are based on Scan sugraph. Otherwise, we would end up with shape mismatch later. * Java API for onnxruntime (#2215) * Add support for opset 11 in reshape fusion (#2592) Support opset verion 11 in reshape fusion * Rename automl python tools folder to featurizer_ops. (#2593) * Support opset 11 subgraph of Squad model in Embed Layer Normalization (#2605) Support opset 11 Squad model that is exported from PyTorch nightly. The embed layer uses Range op which is missed in the transformer. 
* symbolic shape inference: fix warnings in GPT-2 model (#2608) And revise nuphar perf test on BERT squad * Dump subgraph ID and fused graph ID (#2607) * Dump subgraph ID and fused graph ID Dump subgraph ID and fused graph ID for better debugging * Remove local static fused_count added a field global_fused_count_ to NupharExecutionProvider class * EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613) Support subgraph of SQuAD model exported from pytorch with dynamic input axes * Allow providers to be set for InferenceSession at construction (#2606) * Remove unnecessary parameter in some places in GatherElements implementation (#2612) * Remove unnecessary parameter in some places * Update * Update * Make sure fenced tensor could not reuse other tensor. (#2561) Fix random error caused by this. * Improve Embed Layer Norm Fusion for SQuAD with static input shape (#2621) * fix float16 comparison in initializer (#2629) * epsilon attribute for layernormalization fusion (#2639) * removed unnecessary batch file and fix path (#2640) * Add shape inference to ConvTransposeWithDynamicPads schema (#2632) * Improve cuda expand() opeator's performance. (#2624) * Cuda pad optimize when no padding is needed. (#2625) * Shortcut cuda Pad() when no padding is needed. * Optimize cuda scatter() on 2D compatible. (#2628) * Optimize cuda scatter() on 2D compatible. * Add some comments. * fix build error for ARM (#2648) * Improve performance of resize() in Nearest mode (#2626) Special treatment for 2D, check same size as input image. And in 2d kernel, template use_expolation. * Fix memory exception in Layer Norm Fusion (#2644) * Windows CI changes(#2650) * Revert "User/orilevari/windowsai master merge (#2674)" This reverts commit fe26146.
* Initial Commit * Merged PR 3985217: add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc (#2346) add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc and violating our OS layering requirements. We linked against onecoreuap_apiset.lib in VB so we will continue doing this, but I am still unsure why not to link against onecore instead since that is where we ship. However, since Sheil is the owner of this code we will wait to discuss with him before changing anything. * Initial changes for layering * more snipping to get core into ort * update build instructions to include --build_shared_lib (#2358) * update build instructions to include --build_shared_lib * fix line breaks * Task 23998197: add winml_lib_core into onnnxruntime.dll (#2368) * Task 23998197: add winml_lib_core into onnnxruntime.dll * PR feedback build break on perf_test * return proper error when the model path isn't found (#2391) * LearningModelSession is cleaned up to use the adapter, and parts of b… (#2382) this is a big PR. we are going to move it up to layer_dev , which is still a L3 so we are still safe to do work there agile. we are going to move this into the L3 so that ryan can start doing intergration testing. we will pause for a full code review and integration test result prior to going into the L2. >>>> raw comments from previous commits >>> * LearningModelSession is cleaned up to use the adapter, and parts of binding are. * moved everything in the winmladapter made it all nano-com using, WRL to construct objects in the ORT side. base interfaces for everythign for winml to call cleaned up a bunch of winml to use the base interfaces. * more pieces * GetData across the abi. * renamed some namepsace cleaned up OrtValue cleaned up Tensor cleaned up custom ops. everything *but* learnignmodel should be clean * make sure it's building. winml.dll is still a monolith. * model moved over. everything builds clean. step ! * weak ref comment * Layer dev paulm (#2408) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * Layer dev paulm (#2414) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * User/xianz/win ml telemetry (#2410) * add option to enable winml telemetry * add option to enable winml telemetry * clean logs while developping * clean the log of GUID * compile onnxruntime_common with winml telemetry * use option for use_telemetry * rename option winml_use_telemetry to onnxruntime_use_telemetry * little change * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * Layer dev paulm (#2423) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. * Layer dev paulm (#2424) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. 
* couple of fixes and coded getmutabledata() * Layer dev paulm (#2425) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. * couple of fixes and coded getmutabledata() * fixed 2 more heap corruptions * Layer dev paulm (#2426) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. * couple of fixes and coded getmutabledata() * fixed 2 more heap corruptions * Add opset and IR check when loading model (#2413) * Add opset and IR check. * Add test case for future opsets. https://github.com/microsoft/onnxruntime/issues/2371 * fixed map and sequence when passing stl types across the ABI . found a leak in nvidia driver, but skipped it. all winmlapitests pass now * Moved SessionOptions over to the abi * WinML CI (#2412) * Pass flags to build/test WinML in CI * Add initial CMake config for unit tests in WinML * Set winml_unittests standard to C++17 * Add WinML API tests and port them to googletest * Install WinML test collateral * Add LearningModelSessionAPITests ported to googletest * Fix WinML test files encoding * Add GPU tests * Add parameterized test, skip GPU tests * Enable precompiled header * Remove unused code and collateral * Remove brand images * Add dllload.cpp * Remove images not used in API tests * Add LICENSE.md to image collaterals * Add models with licenses * Remove FNS Candy tests * Add API test models * Add ModelInSubdirectory * Install collaterals post-build with copy_if_different, split common lib * fix warnings * Link to gtest_main * Register WinML TraceLogging provider on Onnxruntime.dll (#2455) * Register WinML TraceLogging provider on Onnxruntime.dll * Add ifdef to make sure trace logging provider has telemetry option when LAYERING_DONE * No need for ifdef for TraceLoggingOptionMicrosoftTelemetry * PR feedback * Move etw registration into lotus environment constructor and deresgister in lotus environment destructor * Brianma/cpuwinml (#2466) * allow building winml cpu without dml. * Brianma/breaks (#2469) * fix some more breaks * learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers * move dml checks out of winml and into the adapter * better error handling * Brianma/fi (#2470) * learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers * User/xianz/win ml telemetry (#2410) * add option to enable winml telemetry * add option to enable winml telemetry * clean logs while developping * clean the log of GUID * compile onnxruntime_common with winml telemetry * use option for use_telemetry * rename option winml_use_telemetry to onnxruntime_use_telemetry * little change * Add opset and IR check when loading model (#2413) * Add opset and IR check. * Add test case for future opsets. 
https://github.com/microsoft/onnxruntime/issues/2371 * WinML CI (#2412) * Pass flags to build/test WinML in CI * Add initial CMake config for unit tests in WinML * Set winml_unittests standard to C++17 * Add WinML API tests and port them to googletest * Install WinML test collateral * Add LearningModelSessionAPITests ported to googletest * Fix WinML test files encoding * Add GPU tests * Add parameterized test, skip GPU tests * Enable precompiled header * Remove unused code and collateral * Remove brand images * Add dllload.cpp * Remove images not used in API tests * Add LICENSE.md to image collaterals * Add models with licenses * Remove FNS Candy tests * Add API test models * Add ModelInSubdirectory * Install collaterals post-build with copy_if_different, split common lib * fix warnings * Link to gtest_main * fix bad merge * Checking in a staging checkpoint point so that Ryan can work with me in parrallel * build break. * Brianma/testfails (#2473) * add missing ir version to dictvectorizer-string.onnx * add missing ir version to relu.onnx * add missing ir version to zipmap*onnx * add IR version to manually generated models * remove an unnecessary ifdef dml * Brianma/windowsai fi (#2475) * update dockerfiles/README (#2336) * Make elementwise op run 4 items per thread (#2335) Description: Describe your changes. Make elementwise op run 4 items per thread unroll for loop to leverage ILP remove unnessary N==0 check inside elementwise GPU kernel Motivation and Context Why is this change required? What problem does it solve? It can improve the performance of GPU elementwise ops. ~2% performance gain on popular NLP bert model. If it fixes an open issue, please link to the issue here. * Add CUDA GatherElements kernel (#2310) * Updates * Update test * Update * Updates * nits * PR feedback * Update * Update * PR feedback * PR comments * Update * Fix build * Fix build * Nits * Fix * Layer Normalization Fusion (#2319) basic layer normalization transform * Add FastGelu Cuda Op for Gelu and Add bias fusion (#2293) * Add FastGelu cuda op * Add AddBiasGelu for experiment * Revert "Add AddBiasGelu for experiment" This reverts commit 5c1ee019858c657e6bb75887265cb85675626e5b. * Add bias * Add unit tests * update comment * update script * fix build error * update coding style * update for CR feedback Enable half2 optimization only when cuda arch >= 7.0 * move _Tanh to common.cuh * implement CPU contrib OP Attention (#2333) * Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers. (#2320) * Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanupUnusedInitializers. This means initializers that have been replaced during graph optimizations are not left in the GraphProto when we save an optimized model. * Handle edge case where a model has an unused initializer with matching graph input by also removing the graph input. * Use non-const iterators in std::find_if calls to make centos build happy. * Nuget pipeline changes (#2305) 1. refactor the pipeline, remove some duplicated code 2. Move Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecated the "Win-GPU" pool 3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml 4. In Linux nuget jobs, run "make install" before creating the package. So that extra RPAH info will be removed * Cuda Reverse Sequence Op, maping types of same size using same template function. 
(#2281) * Set ElementType to String type of node metadata, instead of byte[] (#2348) * Set ElementType to String type of node metadata, instead of byte[] * Fix spacing * Introduce PrimitiveType into a Type System along with an integer constant (#2307) Improve perf by avoiding GetType<T>() calls. Introduce MLTypeCallDispatcher to switch on Input Type. Add Tensor IsType<T>() fast method. * Fix/test dim value of 0 handling in a couple of places (#2337) * Update the CUDA Where implementation broadcasting logic to handle a dim with value of 0. Add unit test Also add unit test for unary op with dim value of 0 * Exclude ngraph from Where test with 0 dim. * Openvino EP R3.1 onnxrt server (#2357) * onnxrt server with OVEP * onnxrt server with OVEP * Update Dockerfile.server.openvino * onnxrt server OVEP fix reviews * onnxrt server OVEP fix reviews * Implement cuda nonzero op. (#2056) Implement cuda nonzero op. * Direct use python numpy array's memory if already contiguous. (#2355) * Direct use python numpy array's memory if already contiguous. This could greatly improve performance for session with large input, like big image 1920x1080 fastrcnn, 30~40% speed up could be achieved. * Add test case enforce contiguous/non-contiguos numpy array as inputs. * Add helper to create output to minimize binary size. (#2365) Add ConstEigenTensorMap typedef so we don't unnecessarily const_cast the const input Tensor. * fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS (#2369) * fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS * update * Add Tracelogging for profiling (#1639) Enabled only if onnxruntime_ENABLE_INSTRUMENT is ON * test bidaf with nuphar for avx target (#2370) increase nuphar test coverage a bit * Fix a bug in TLS refcount that may destabilized CUDA CI (#2374) * update output size calculation for resize (#2366) * change how output size is calculated for resize op * add tests for ver 10 resize * Extend OneHot CPU kernel to support more types (#2311) * Extend OneHot CPU kernel to support input int64_t, depth int32_t, output float * Skip BERT before the test data fix is picked up * Fix bug with Slice. Need to pass in flattened input dimensions so the initial offset into the input is calculated correctly. (#2372) * Add opset 11 version of Split to CUDA ops (#2376) Organize the CUDA ops definitions so all the opset 10 and 11 parts are together (same setup used for CPU ops) * Layer Norm Fusion Fix (#2379) * layer norm fusion fix * Add input shape check in code and unit tests * Fuse Add + Gelu (#2360) Implement the transformer to fuse add + gelu Implement the accurate kernel * Skip layer norm transform (#2350) * skip layer normalization transformer * Another try to stabilize CUDA CI (#2383) The root cause seems to be failure in CUDA dealloc when tear down. cudaFree return code was ignored before, so should the debug check. * fix BUILD.md typo (#2375) build.py: error: argument --config: invalid choice: 'RelWithDebugInfo' (choose from 'Debug', 'MinSizeRel', 'Release', 'RelWithDebInfo') * Fixed compilation with ngraph (#2388) * Fix reuse logic in allocation planner. (#2393) * Fix reuse logic in allocation planner. * PR comments * Add helpful comments * Don't allow reuse across string tensors. 
* [NupharEP] Multiple optimizations (#2380) Fuse transpose into MatMul Implement Pow and constant scalar simplification Vectorize ReduceMean Improve symbolic shape inference Minor updates for better debugging in fused function name * Avoid using the default logger in the graph lib and optimizers (#2361) 1. Use the session logger if it is available. 2. Don't disable warning 4100 globally. We should fix the warnings instead of disabling it. * Change CUDA implementation of Transpose to support all fixed size tensor types (#2387) * Change CUDA implementation of Transpose to not use a typed kernel so we can support more types with minimum binary size. Add support for 8, 16, 32 and 64 bit types. Add unit tests. Add method so the implementation can be called directly (will be used by CUDA Scan very soon). * Disable TensorRT for MLFloat16 and int8 unit tests. * Address PR comment and add support for calling cublas implementation if type is mlfloat16. * Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. (#2398) * Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. * [NupharEP] force some low/zero cost ops to be inlined (#2409) * fix cross compile bug (#2415) * Minor optimization: if a node has already been placed, there's no need to find a kernel for it. (#2417) * Add Reshape Fusion (#2395) * Add reshape fusion * Add some comments * update comments * update comment format * update according to feedback * update for recent logger change * fix build error * (1) Support both input and output edges in find path in graphutils (2) Add a test case of only one constant initializer of Concat input. (3) Refactor ReshapeFusion class to allow add more subgraph fusion in the future. * fix error * (1) loose constraint on initializer: non constant is allowed for reshape fusion. (2) Change versions type to vector. (3) Add logging. (4) Return false when multiple output edges matched in FindPath. Add comments. * only allow one direction (input or output) in FindPath * [NupharEP] Update notebook and docker image (#2416) Add BERT squad in Nuphar tutorial Enhance speed comparsion readability * Fix the issue in matmul_add_fusion (#2407) Fix the issue in matmul_add_fusion If Muatmul + Add has shape [K] * [K, N], reset it to [1, K] * [K, N] will make the output shape to [1, N] will also requires a reshape on the output. Fix: just remove the shape reset to not fuse it. Add a negative test case for matmul+add fusion * feat(treeregressor): Update TreeEnsembleRegressor for type support (#2389) Updates the `TreeEnsembleRegressor` to allow for `double`, `float`, `int64`, and `int32` inputs to match the upstream specification. Signed-off-by: Nick Groszewski <[email protected]> * onnxrt server documentation update (#2396) * Added support for Pad-2 operator in OpenVINO-EP (#2405) * Add CUDA If operator. (#2377) * Add CUDA If operator. Uses CPU operator for implementation. By adding a CUDA version the inputs/outputs (with the exception of the 'cond' input) stay on GPU, and no other logic is required to avoid a copy to CPU across the control flow node. * Improved documentation for onnxruntime::utils::SwapByteOrderCopy(), added precondition check. * Fix the type constraints on CUDA If operator to exclude strings. (#2431) * add Im2col<uint8_t> (#2438) * Adjust codegen vectorization width from target (#2439) * Adjust codegen vectorization width from target * Add CUDA Scan operator. (#2403) * Add Scan CUDA op. 
Uses CPU implementation for logic. Added some device specific functors for handling when data needs to be manipulated on a different device. Added ability to override the materialization logic in the OrtValue slicer so DML can plugin their handling. * Fix Windows GPU C API packaging pipeline failure (#2440) Fix Windows GPU C API packaging pipeline failure (#2440) * Correctly handle implicit inputs for fused nodes (#2390) * Correctly handle implicit inputs for fused nodes Previously, nuphar's partitioning function didn't include node's implicit inputs into the inputs list of MetaDef, and hence a crash was triggered in the onnx graph checker. This commit fixed the issue. Furthermore, it also fixed a related issue where we didn't add implicit inputs into graph_inputs_excluding_initializers_ in Graph::SetGraphInputsOutputs. the issue was that graph_inputs_including_initializers_ populated by SetInputs (e.g. called by FunctionImpl::FunctionImpl) may contain implicit inputs which were not of any node's initializers in the graph. Because they were not part of any initializers, these implicit inputs couldn't be visited by going through all nodes' inputs. Consequently, they would *not* be added into graph_inputs_excluding_initializers_. We fixed the issue by first copying the populated graph_inputs_including_initializers_ into graph_inputs_excluding_initalizers_, which then had both initializers and non-initializers as its initial content. Later, we erase initializers from the list. In this way, we can ensure all implicit inputs to remain in graph_inputs_excluding_initializers_. * refined comments and fixed duplicates Address CR by revisiting comments in terms of implicit inputs Also fixed an issue by skipping duplicates while copying inputs from graph_inputs_including_initializers_. * address CR explain why we need to collect nodes' implicit inputs * don't rely on pointer values for iterating std::set Previously, openvino relied on iterating a set of NodeArg pointers to construct inputs and outputs for a fused graph. It could cause non-determinism. The reason was that although iterating std::set by itself is stable, pointer values of NodeArgs may vary. Consequently, we could end up visiting the set's elements in different orders for different runs for the same test, which resulted in constructing inputs (and outputs) with different orders to the fused graph. For example, for the same test, we may have inputs [A, B] in some runs but inputs[B, A] in others. Let's use std::string as the key type to avoid such nondeterminism. This commit also added implicit inputs into meta->inputs while returning the capability from the openvino provider. * Fixed another latent issue in openvino's GetCapability function The issue was that we couldn't simply erase fused_inputs and fused_outputs while iterating the nodes. For example, an output NodeArg may have multiple uses, and it's wrong if we erase it from fused_outputs when we encounter only one of its uses as input. * Remove DeviceAllocatorRegistry class (#2451) Remove DeviceAllocatorRegistry class * CSharp api and test for loading custom op shared library (#2420) - Added C-API test for loading custom op shared lib. - Made some changes in C++ api header and C-api implementation to get it working. - Added C# API and corresponding test for loading custom op shared library. 
* Parallel Gelu with ParallelFor (#2399) Parallel Gelu to get better performance for Gelu * Clean up build.py (#2446) * Pull the latest image before running docker build * Fuse SkipLayerNorm with Bias (#2453) Fuse SkipLayerNorm with Bias * Allow more than one invocation of CreateEnv in the same process. (#2467) * Allow more than one invocation of CreateEnv in the same process. * Fix centos build * Symbolic shape inference improvements: (#2460) * Symbolic shape inference improvements: - add a mode to guess unknown ops' output rank - add support for GatherND - add support for If - fix a bug in get_int_values when then tensor rank > 1D, by treating it as no sympy data - add symbol to literal merge when ONNX silently merges dims - fix a bug in Concat when input dim is 0 - fix a bug in ConstantOfShape that computed dim is not updated - add support for dynamic shape in ConstantOfShape - fix a bug in Loop output shape that loop iterator dim is not inserted at dim 0 - add support for dynamic padding in Pad - add support for dynamic shape in Reshape - add support for Resize with opset > 10, by treating output dims as dynamic - fix a bug in Slice when starts/ends are dynamic - restrict input model to opset 7 and above - make output model optional to avoid disk write when testing Run model tests for symbolic shape inference Reduce 2GB docker image size of nuphar * add additional test data set for nuget pipeline (#2448) * add SAS token to download internal test data for nuget pipeline * update azure endpoint * fix keyvault download step * fix variable declaration for secret group * fix indentation * fix yaml syntax for variables * fix setting secrets for script * fix env synctax * Fix macos pipeline * attempt to add secrets to windows download data * fix mac and win data download * fix windows data download * update test data set url and location * Revert "Brianma/windowsai fi (#2475)" This reverts commit 5780b864a15513fda4eadbfc2b5345fefe70b5ec. * Add scenario tests (#2457) * Add scenario tests * Remove TODO from model license * Add winml_api test dependency * fix model load test. fi from master changed the constructor (#2483) * make api tests all pass (#2486) * fix bad merge * fix bad model merge * Layer dev paulm (#2492) * commetns for dml graph transformer fixed ort value passing using the allocatir info * fixed and coded maps and sequences across the abi * Rename ambiguous header (#2489) * fix one more missing IR version model (#2500) * add missing IR version to 4 more models used by scenario tests (#2501) * Add CLI parameters to test runner, build WinML in ARM and x86 CI (#2479) * Support test parameters through CLI arguments * Add WinML do Windows x86/ARM CI builds * Code style fixes * Update googletest Remove GPUTEST macros everywhere now that GTEST_SKIP is supported * Refactor main.cpp * Build scenario tests without DML * Link scenario tests to DML when it's enabled (#2502) * Layer dev release pipeline (#2488) Adds winml binaries to existing cpu nuget package, and creates new gpu dml nuget package with winml binaries and DML EP. 
Description:
This pull request provides a Java 8 API using JNI. It has unit tests ported from the v0.5.0 release of the C# API; I'll port the newer tests from the master branch over the next few weeks. I assume there will be some design and naming discussion on this PR, so we can have that while I work on the unit tests.
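To give a sense of the intended usage, here's a rough sketch of running a model through the binding. The package and class names (`ai.onnxruntime`, `OrtEnvironment`, `OrtSession`, `OnnxTensor`) are illustrative only and may not match what's in this PR once the naming discussion settles:

```java
// Sketch only: names below are assumptions, not necessarily the API in this PR.
import ai.onnxruntime.*;

import java.util.Collections;

public class Example {
    public static void main(String[] args) throws Exception {
        // The environment owns the native runtime state loaded via JNI.
        OrtEnvironment env = OrtEnvironment.getEnvironment();

        // Sessions, tensors and results wrap native memory, so they are AutoCloseable.
        try (OrtSession session = env.createSession("model.onnx");
             OnnxTensor input = OnnxTensor.createTensor(env, new float[][]{{1f, 2f, 3f, 4f}});
             OrtSession.Result result = session.run(Collections.singletonMap("input", input))) {
            // Outputs come back as Java arrays; the cast depends on the model's output shape.
            float[][] output = (float[][]) result.get(0).getValue();
            System.out.println(output[0][0]);
        }
    }
}
```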
Currently it builds as a separate Gradle project, which I've tested on macOS and Linux. The build process involves running
gradle clean build -x test; gradle build
because Gradle 5 doesn't properly support combining a JNI project and a Java project in a single build. I could use some help integrating it into the CMake build system, as I've not used CMake much before. Integrating it into CMake would make it simpler to set the appropriate provider compilation flags and to fix the oddities in the build, since CMake has all the information necessary.