This repository has been archived by the owner on Sep 18, 2023. It is now read-only.

[NSE-206]Update documents and License for 1.1.0 #292

Merged
merged 2 commits into from
Apr 30, 2021

Conversation

HongW2019
Contributor

@HongW2019 HongW2019 commented Apr 29, 2021

What changes were proposed in this pull request?

  1. Remove duplicate documents and images from arrow-data-source/docs
  2. Update Changelog and License
  3. Update OAP-Installation-Guide and Developer-Guide

How was this patch tested?

N/A

@github-actions

#206

@HongW2019
Contributor Author

@zhztheplayer Please review the deletion of arrow-data-source/docs, thanks.

* [PMem Shuffle](https://github.com/oap-project/pmem-shuffle/tree/v1.1.0-spark-3.0.0#5-install-dependencies-for-pmem-shuffle)
* [Remote Shuffle](https://github.com/oap-project/remote-shuffle/tree/v1.1.0-spark-3.0.0)
* [OAP MLlib](https://github.com/oap-project/oap-mllib/tree/v1.1.0-spark-3.0.0)
* [Native SQL Engine](https://github.com/oap-project/native-sql-engine/tree/v1.1.0-spark-3.0.0)
Collaborator

v1.1.0-spark-3.0.0
these branch names seem incorrect

Contributor Author

Yes, we specify the tag version; the links will work after we tag and release OAP v1.1.0-spark-3.0.0.

@@ -10,7 +10,7 @@ Please make sure you have already installed the software in your system.
5. maven 3.1.1 or higher version
6. Hadoop 2.7.5 or higher version
7. Spark 3.0.0 or higher version
-8. Intel Optimized Arrow 0.17.0
+8. Intel Optimized Arrow 3.0.0
Collaborator

gcc > 7 should be enough

Contributor Author

Done


Then the dependencies below will be installed:

* [Cmake](https://help.directadmin.com/item.php?id=494)
Collaborator

this one may be better: https://cmake.org/install/

Contributor Author

Done

@@ -19,9 +20,19 @@ $ wget -c https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
$ chmod +x Miniconda2-latest-Linux-x86_64.sh
$ bash Miniconda2-latest-Linux-x86_64.sh
```
-For changes to take effect, close and re-open your current shell. To test your installation, run the command `conda list` in your terminal window. A list of installed packages appears if it has been installed correctly.
+For changes to take effect, ***close and re-open*** your current shell.
Collaborator

"reload" sounds better?

Contributor Author

Done

@@ -34,21 +45,12 @@ Dependencies below are required by OAP and all of them are included in OAP Conda
- [OneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi.html)
Collaborator

oap-project/arrow

Contributor Author

Done

@zhouyuan
Collaborator

@weiting-chen please take a look here

Collaborator

@weiting-chen weiting-chen left a comment

Done with the review; looks good to me.

@HongW2019
Contributor Author

@zhouyuan Please review. If no further modifications are required, please help merge it, thanks!

@zhouyuan zhouyuan merged commit 73f1930 into oap-project:branch-1.1-spark-3.x Apr 30, 2021
@zhouyuan
Collaborator

@HongW2019 thanks for the cleanup!

zhixingheyi-tian pushed a commit that referenced this pull request Apr 30, 2021
* [NSE-206]Update documents and remove duplicate parts

* Modify documents by comments
zhouyuan added a commit that referenced this pull request May 13, 2021
* [NSE-262] fix remainder loss in decimal divide (#263)

* fix decimal divide int issue

* correct cpp uts

* use const reference

Co-authored-by: Yuan <[email protected]>

Co-authored-by: Yuan <[email protected]>
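The remainder-loss problem that [NSE-262] addresses can be illustrated in general terms. This is a minimal stand-alone sketch, not the engine's actual code: plain integers model scaled decimals, and `naive_div`/`scaled_div` are hypothetical names.

```python
# Fixed-point decimals are stored as scaled integers: 1.00 at scale 2 -> 100.
# Dividing the raw integers directly truncates, losing the remainder.
def naive_div(a_unscaled: int, b_unscaled: int) -> int:
    # e.g. 100 // 300 == 0: the whole fractional quotient is lost
    return a_unscaled // b_unscaled

def scaled_div(a_unscaled: int, b_unscaled: int, extra_scale: int) -> int:
    # Scale the dividend up first so the remainder survives as extra digits,
    # then round the final guard digit instead of truncating.
    scaled = a_unscaled * 10 ** (extra_scale + 1)
    q = scaled // b_unscaled
    return (q + 5) // 10  # round half up

# 1.00 / 3.00 with two extra digits of result scale:
assert naive_div(100, 300) == 0
assert scaled_div(100, 300, 2) == 33  # 0.33 at scale 2
```

The same idea applies regardless of the storage width; real engines must also check that scaling the dividend does not overflow the decimal's precision.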

* [NSE-261] ArrowDataSource: Add S3 Support (#270)

Closes #261

* [NSE-196] clean up configs in unit tests (#271)

* remove testing config

* remove unused configs

* [NSE-265] Reserve enough memory before UnsafeAppend in builder (#266)

* change the UnsafeAppend to Append

* fix buffer builder in shuffle

The shuffle builder uses the UnsafeAppend API for better performance. It
tries to reserve enough space based on the size of the last recordbatch,
which may be buggy when a dense recordbatch follows a sparse one.

this patch adds the fixes below:
- add Reset() after Finish() in the builder
- reserve length for offset_builder in the binary builder

A further clean-up of the reservation logic is still needed.

Signed-off-by: Yuan Zhou <[email protected]>

Co-authored-by: Yuan Zhou <[email protected]>
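The reserve-then-unsafe-append pattern behind [NSE-265] can be sketched as below. This is a hypothetical stand-in class, not Arrow's builder; it only mimics the contract that `unsafe_append` skips per-call capacity checks, so the caller must `reserve` enough space first and `reset` stale state between batches.

```python
# Hypothetical sketch of the unsafe-append contract described above.
class SketchBuilder:
    def __init__(self):
        self._buf = []
        self._capacity = 0

    def reserve(self, n: int) -> None:
        # Grow capacity up front; a real builder allocates memory here.
        self._capacity = max(self._capacity, len(self._buf) + n)

    def unsafe_append(self, value) -> None:
        # No capacity check in the fast path: writing past the
        # reservation would be a buffer overrun in real code.
        assert len(self._buf) < self._capacity, "reserve() was too small"
        self._buf.append(value)

    def finish(self):
        out, self._buf = self._buf, []
        return out  # note: capacity is NOT cleared here

    def reset(self) -> None:
        # The fix described above: clear all state (including capacity)
        # after finish(), so the next batch re-reserves from scratch
        # instead of trusting a stale reservation from a sparse batch.
        self._buf = []
        self._capacity = 0

b = SketchBuilder()
b.reserve(3)
for v in (1, 2, 3):
    b.unsafe_append(v)
assert b.finish() == [1, 2, 3]
b.reset()  # without this, the old capacity leaks into the next batch
```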

* [NSE-274] Comment to trigger tpc-h RAM test (#275)

Closes #274

* bump cmake to 3.16 (#281)

Signed-off-by: Yuan Zhou <[email protected]>

* [NSE-276] Add option to switch Hadoop version (#277)

Closes #276

* [NSE-119] clean up on comments (#288)

Signed-off-by: Yuan Zhou <[email protected]>

* [NSE-206]Update installation guide and configuration guide. (#289)

* [NSE-206]Update installation guide and configuration guide.

* Fix numaBinding setting issue. & Update description for protobuf

* [NSE-206]Fix Prerequisite and Arrow Installation Steps. (#290)

* [NSE-245]Adding columnar RDD cache support (#246)

* Adding columnar RDD cache support

Signed-off-by: Chendi Xue <[email protected]>

* Directly save the reference; only convert to Array[Byte] when called by BlockManager

Signed-off-by: Chendi Xue <[email protected]>

* Add a DeAllocator at construction to make sure this instance is released once it is deleted by the JVM

Signed-off-by: Chendi Xue <[email protected]>

* Delete cache by adding a release in InMemoryRelation

Since unpersist only deletes the RDD object, our deAllocator was apparently
not being called. We now add a release function in InMemoryRelation's
clearCache(); we may need a new approach for 3.1.0.

Signed-off-by: Chendi Xue <[email protected]>

* [NSE-207] fix issues found from aggregate unit tests (#233)

* fix incorrect input in Expand

* fix empty input for aggregate

* fix only result expressions

* fix empty aggregate expressions

* fix res attr not found issue

* refine

* fix count distinct with null

* fix groupby of NaN, -0.0 and 0.0

* fix count on multiple cols with null in WSCG

* format code

* support normalize NaN and 0.0

* revert and update

* support normalize function in WSCG
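The NaN/0.0 normalization fixed above is needed because -0.0 and 0.0 compare equal yet have different bit patterns, and NaN compares unequal to everything, so a hash-based group-by over raw float bits would split or scatter these keys. A minimal sketch of the idea (the `normalize_key` helper is hypothetical, not the engine's function):

```python
import math
import struct

def normalize_key(x: float) -> float:
    # -0.0 == 0.0 as values, but their raw bits differ; canonicalize to +0.0
    # so bit-level hashing puts both in one group.
    if x == 0.0:
        return 0.0
    # All NaN payloads compare unequal to everything (even themselves);
    # map them to one canonical NaN so they land in a single group.
    if math.isnan(x):
        return float("nan")
    return x

def bits(x: float) -> int:
    # Raw IEEE 754 bit pattern, as a hash over bits would see it.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

assert bits(-0.0) != bits(0.0)                                # distinct raw bits
assert bits(normalize_key(-0.0)) == bits(normalize_key(0.0))  # same after normalization
```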

* [NSE-206]Update documents and License for 1.1.0 (#292)

* [NSE-206]Update documents and remove duplicate parts

* Modify documents by comments

* [NSE-293] fix unsafemap with key = '0' (#294)

Signed-off-by: Yuan Zhou <[email protected]>
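One classic way a "map breaks for key 0" bug like [NSE-293] arises (an assumption for illustration; this PR does not state the root cause) is an open-addressing hash map that reuses the value 0 as its empty-slot sentinel, making a genuine key of 0 indistinguishable from an empty slot:

```python
# Hypothetical open-addressing key store with 0 doubling as "empty".
EMPTY = 0

def put(slots, key, num_slots):
    i = hash(key) % num_slots
    while slots[i] != EMPTY and slots[i] != key:
        i = (i + 1) % num_slots  # linear probing
    slots[i] = key               # writing key 0 looks like a no-op

def contains(slots, key, num_slots):
    i = hash(key) % num_slots
    while slots[i] != EMPTY:     # key 0 terminates the probe immediately
        if slots[i] == key:
            return True
        i = (i + 1) % num_slots
    return False

slots = [EMPTY] * 8
put(slots, 0, 8)
# The inserted key 0 is indistinguishable from an empty slot, so it is lost:
assert contains(slots, 0, 8) is False
```

The usual fixes are a separate occupancy bitmap or a reserved out-of-band sentinel instead of a valid key value.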

* [NSE-257] fix multiple slf4j bindings (#291)

* [NSE-297] Disable incremental compiler in GHA CI (#298)

Closes #297

* [NSE-285] ColumnarWindow: Support Date input in MAX/MIN (#286)

Closes #285

* [NSE-304] Upgrade to Arrow 4.0.0: Change basic GHA TPC-H test target OAP Arrow branch (#306)

* [NSE-302] remove exception (#303)

* [NSE-273] support spark311 (#272)

* support spark 3.0.2

Signed-off-by: Yuan Zhou <[email protected]>

* update to use spark 302 in unit tests

Signed-off-by: Yuan Zhou <[email protected]>

* support spark 311

Signed-off-by: Yuan Zhou <[email protected]>

* fix

Signed-off-by: Yuan Zhou <[email protected]>

* fix missing dep

Signed-off-by: Yuan Zhou <[email protected]>

* fix broadcastexchange metrics

Signed-off-by: Yuan Zhou <[email protected]>

* fix arrow data source

Signed-off-by: Yuan Zhou <[email protected]>

* fix sum with decimal

Signed-off-by: Yuan Zhou <[email protected]>

* fix c++ code

Signed-off-by: Yuan Zhou <[email protected]>

* adding partial sum decimal sum

Signed-off-by: Yuan Zhou <[email protected]>

* fix hashagg in wscg

Signed-off-by: Yuan Zhou <[email protected]>

* fix partial sum with number type

Signed-off-by: Yuan Zhou <[email protected]>

* fix AQE shuffle copy

Signed-off-by: Yuan Zhou <[email protected]>

* fix shuffle redundant read

Signed-off-by: Yuan Zhou <[email protected]>

* fix rebase

Signed-off-by: Yuan Zhou <[email protected]>

* fix format

Signed-off-by: Yuan Zhou <[email protected]>

* avoid unnecessary fallbacks

Signed-off-by: Yuan Zhou <[email protected]>

* on-demand scala unit tests

Signed-off-by: Yuan Zhou <[email protected]>

* clean up

Signed-off-by: Yuan Zhou <[email protected]>

* [NSE-311] Build reports errors (#312)

Closes #311

* [NSE-257] fix the dependency issue on v2

Co-authored-by: Rui Mo <[email protected]>
Co-authored-by: Hongze Zhang <[email protected]>
Co-authored-by: JiaKe <[email protected]>
Co-authored-by: Wei-Ting Chen <[email protected]>
Co-authored-by: Chendi.Xue <[email protected]>
Co-authored-by: Hong <[email protected]>