Merge branch 'develop' into 4813-allow-duplicate-files
sekmiller committed Jul 16, 2020
2 parents 54011d7 + 875374f commit d0f5adc
Showing 15 changed files with 1,134 additions and 12 deletions.
30 changes: 30 additions & 0 deletions doc/release-notes/6505-zipdownload-service.md
@@ -0,0 +1,30 @@
### A multi-file, zipped download optimization

In this release we are offering an experimental optimization for the
multi-file, download-as-zip functionality. If this option is enabled,
instead of enforcing size limits, we attempt to serve all the files
that the user requested (and is authorized to download), redirecting
the request to a standalone zipper service running as a CGI
executable. This moves these potentially long-running jobs completely
outside the Application Server (Payara) and prevents service threads
from becoming locked while serving them. Since zipping is also a
CPU-intensive task, the service can run on a different host system,
freeing up cycles on the main Application Server. (The system running
the service needs access to the database as well as to the storage
filesystem and/or S3 bucket.)

Please consult `scripts/zipdownload/README.md` in the Dataverse 5
source tree for more information.

The components of the standalone "zipper tool" can also be downloaded,
pre-built, from the v5 release files on GitHub:
https://github.com/IQSS/dataverse/releases/download/v5.0/zipper.zip

## New JVM Options and DB Options

### New DB Option CustomZipDownloadServiceUrl

If defined, this is the URL of the zipping service outside the main Application Server to which zip download requests should be directed (instead of `/api/access/datafiles/`).
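
For example, to enable redirects to a zipper installed on the same server as the main Dataverse application (this mirrors the setting documented in the Configuration section of the Installation Guide):

    curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl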
48 changes: 48 additions & 0 deletions doc/sphinx-guides/source/installation/advanced.rst
@@ -35,3 +35,51 @@ If you have successfully installed multiple app servers behind a load balancer y
You would repeat the steps above for all of your app servers. If users seem to be having a problem with a particular server, you can ask them to visit https://dataverse.example.edu/host.txt and let you know what they see there (e.g. "server1.example.edu") to help you know which server to troubleshoot.

Please note that :ref:`network-ports` under the Configuration section has more information on fronting your app server with Apache. The :doc:`shibboleth` section talks about the use of ``ProxyPassMatch``.

Optional Components
-------------------

Standalone "Zipper" Service Tool
++++++++++++++++++++++++++++++++

As of Dataverse v5.0 we offer an experimental optimization for the
multi-file, download-as-zip functionality. If this option
(``:CustomZipDownloadServiceUrl``) is enabled, instead of enforcing
the size limit on multi-file zipped downloads (as normally specified
by the option ``:ZipDownloadLimit``), we attempt to serve all the
files that the user requested (and is authorized to download),
redirecting the request to a standalone zipper service running as a
cgi-bin executable under Apache. This moves these potentially
long-running jobs completely outside the Application Server (Payara)
and prevents worker threads from becoming locked while serving them.
Since zipping is also a CPU-intensive task, the service can run on a
different host system, freeing up cycles on the main Application
Server. (The system running the service needs access to the database
as well as to the storage filesystem and/or S3 bucket.)

Please consult ``scripts/zipdownload/README.md`` in the Dataverse 5
source tree for more information.

To install: follow the instructions in the README above to build
``ZipDownloadService-v1.0.0.jar`` (it will also be available,
pre-built, as part of the Dataverse release on GitHub). Copy it,
together with the shell script
``scripts/zipdownload/cgi-bin/zipdownload``, to the cgi-bin directory
of the chosen Apache server (``/var/www/cgi-bin`` is the standard
location).
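
For example, assuming the standard location, with the jar built under
``target/`` in the source tree (adjust the paths to your setup)::

  cp scripts/zipdownload/target/ZipDownloadService-v1.0.0.jar /var/www/cgi-bin/
  cp scripts/zipdownload/cgi-bin/zipdownload /var/www/cgi-bin/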

Make sure the shell script (``zipdownload``) is executable, and edit
it to configure the database access credentials. Do note that the
executable does not need access to the entire Dataverse database. A
security-conscious admin can create a dedicated database user with
access to just one table: ``CUSTOMZIPSERVICEREQUEST``.
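
A minimal sketch of such a setup (the role name and password below are
hypothetical; ``dvndb`` is the default database name from the zipper
script)::

  sudo -u postgres psql -d dvndb -c "CREATE USER zipper PASSWORD 'xxxxx';"
  sudo -u postgres psql -d dvndb -c "GRANT SELECT, DELETE ON customzipservicerequest TO zipper;"

The ``DELETE`` privilege can be withheld for a read-only setup, leaving
all cleanup of the job records to the application side.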

You may need to make extra Apache configuration changes to make sure ``/cgi-bin/zipdownload`` is accessible from the outside.
For example, if this is the same Apache that's in front of your Dataverse Payara instance, you will need to add another pass-through statement to your configuration:

``ProxyPassMatch ^/cgi-bin/zipdownload !``
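
Apache applies these rules in order, so the exclusion must come before
the general rule that forwards everything else to Payara. A typical
ordering (the AJP line below is an assumption based on a common
Dataverse Apache setup)::

  ProxyPassMatch ^/cgi-bin/zipdownload !
  ProxyPass / ajp://localhost:8009/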

Test this by accessing it directly at ``<SERVER URL>/cgi-bin/zipdownload``. You should get a ``404 No such download job!`` response. If instead you are getting an "internal server error", this may be an SELinux issue; try ``setenforce Permissive``. If you are getting a generic Dataverse "not found" page, review the ``ProxyPassMatch`` rule you have added.
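
For example (substitute your own hostname)::

  curl -i https://dataverse.example.edu/cgi-bin/zipdownload

The expected result is an HTTP 404 response with the "No such download
job!" message in the body.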

To activate in Dataverse::

  curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl

13 changes: 13 additions & 0 deletions doc/sphinx-guides/source/installation/config.rst
@@ -2134,3 +2134,16 @@ Unlike other facets, those indexed by Date/Year are sorted chronologically by de
If you don’t want date facets to be sorted chronologically, set:

``curl -X PUT -d 'false' http://localhost:8080/api/admin/settings/:ChronologicalDateFacets``

:CustomZipDownloadServiceUrl
++++++++++++++++++++++++++++

The location of the "Standalone Zipper" service. If this option is specified, Dataverse will redirect bulk/multi-file zip download requests to that location instead of serving them internally. See the "Advanced" section of the Installation Guide for information on how to install the external zipper. (This is still an experimental feature, as of v5.0.)

To enable redirects to the zipper installed on the same server as the main Dataverse application:

``curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl``

To enable redirects to the zipper on a different server:

``curl -X PUT -d 'https://zipper.example.edu/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl``
104 changes: 104 additions & 0 deletions scripts/zipdownload/README.md
@@ -0,0 +1,104 @@
Work in progress!

To build:

cd scripts/zipdownload
mvn clean compile assembly:single

To install:

Install cgi-bin/zipdownload and ZipDownloadService-v1.0.0.jar in your cgi-bin directory (/var/www/cgi-bin is the standard location).

Edit the config lines in the shell script (zipdownload) as needed.
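
The configuration block at the top of the script looks like this (the
values shown are the script's defaults; replace them with your own
database host, credentials, and database name):

    PGHOST="localhost"
    PGPORT=5432
    PGUSER="dvnapp"
    PGDB="dvndb"
    PGPW="xxxxx"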

You may need to make extra Apache configuration changes to make sure /cgi-bin/zipdownload is accessible from the outside.
For example, if this is the same Apache that's in front of your Dataverse Payara instance, you'll need to add another pass-through statement to your configuration:

``ProxyPassMatch ^/cgi-bin/zipdownload !``

(see the "Advanced" section of the Installation Guide for some extra troubleshooting tips)

To activate in Dataverse:

curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl

How it works:
=============

(This is an ongoing design discussion - other developers are welcome to contribute)

The goal: to move this potentially long-running task out of the
Application Server. This is the sole focus of this implementation. It
does not attempt to make it faster.

The rationale here is that a zipped download of a large enough number
of large enough files will always be slow. Zipping (compressing)
itself is a fairly CPU-intensive task, and will most frequently be the
bottleneck of the service - although with a slow storage location (S3
or Swift, with a slow link to the share) the bottleneck may instead be
the speed at which the application accesses the raw bytes. The exact
location of the bottleneck is in a sense irrelevant. On a very fast
system, with the files stored on a very fast local RAID, the
bottleneck for most users will likely shift to the speed of their
internet connection to the server. The bottom line is, downloading
this multi-file compressed stream will take a long time no matter how
you slice it. So this hack addresses it by moving the task outside
Payara, where it's not going to hog any threads.

A quick, somewhat unrelated note: attempting to download a multi-GB
stream over HTTP will always carry its own inherent risks. If the
download takes hours or days to complete, it is very likely to break
down somewhere in the middle. Do note that for a zipped download our
users will not be able to use `wget --continue`, or any similar
"resume" functionality, because it is impossible to resume generating
a zipped stream from a certain offset.

The implementation is a hack. It relies on direct access to everything - storage locations (filesystem or S3) and the database.

There are no network calls between the application (Dataverse) and the
zipper (an implementation relying on such a call was discussed early
on). Dataverse issues a "job key" and sends the user's browser to the
zipper (to, for example, /cgi-bin/zipdownload?<job key>, instead of
/api/access/datafiles/<file ids>). To authorize the zip download for
the "job key", and to inform the zipper which files to zip and where
to find them, the application relies on a database table that the
zipper also has access to. In other words, there is saved state
information associated with each zipped download request. The zipper
may be given limited database access - for example, via a user
authorized to access that one table only. After serving the files,
the zipper removes the corresponding database entries. Job records in
the database are time-stamped, so, as an added level of cleanup for
any records that got stuck in the database because the corresponding
zipper jobs never completed, the application automatically deletes any
records older than 5 minutes (a limit that can be further reduced)
every time it adds new records. A paranoid admin may choose to give
the zipper read-only access to the database and rely on cleanup solely
on the application side.
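
As a sketch, that application-side cleanup amounts to something like
the following (expressed here as a manual query; the timestamp column
name is an assumption - check the actual table schema):

    psql -d dvndb -c "DELETE FROM customzipservicerequest WHERE issuetime < now() - interval '5 minutes';"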

I have explored ways to avoid maintaining this state information. A
potential implementation we discussed early on, where the application
would make a network call to the zipper before redirecting the user
there, would NOT solve that problem - the state would still need to
somehow be maintained, on the zipper side. The only truly stateless
implementation would rely on including all the file information WITH
the redirect itself, with some pre-signed URL mechanism to make it
secure. Mechanisms for pre-signing requests are readily available and
simple to implement. We could go with something similar to how S3
pre-signs its access URLs. Jim Myers has already specced out how this
could be done for Dataverse access URLs in a design document
(https://docs.google.com/document/d/1J8GW6zi-vSRKZdtFjLpmYJ2SUIcIkAEwHkP4q1fxL-s/edit#). (Basically,
you hash the combination of your request parameters, the issue
timestamp AND some "secret" - like the user's API key - and send the
resulting hash along with the request. Tampering with any of the
parameters, or trying to extend the life span of the request, becomes
impossible, because it would invalidate the hash.) What stopped me
from trying something like that was the sheer size of the information
that would need to be included with a request, for a potentially long
list of files that need to be zipped. When serving a zipped download
from a page, that would be doable - we could javascript together a
POST call that the browser could make to send all that info to the
zipper. But if we want to implement something similar in the API, I
felt like I really wanted to be able to simply issue a quick redirect
to a manageable URL - which, with the implementation above, is simply
/cgi-bin/zipdownload?<job key>, with the <job key> being just a
16-character hex string in the current implementation.
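
For illustration, a minimal shell sketch of that kind of pre-signing,
with invented parameter names (this is the shape of the mechanism
described above, not the implemented one):

    # hypothetical: sign a file list and a timestamp with the user's API key
    FILEIDS="101,102,103"   # invented example file ids
    TIMESTAMP=$(date +%s)
    SIGNATURE=$(printf '%s' "${FILEIDS}:${TIMESTAMP}" \
        | openssl dgst -sha256 -hmac "${API_KEY}" | awk '{print $2}')
    # the request would carry all three parameters; the zipper recomputes
    # the hash to verify that nothing was tampered with or has expired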
11 changes: 11 additions & 0 deletions scripts/zipdownload/cgi-bin/zipdownload
@@ -0,0 +1,11 @@
#!/bin/sh

# directory containing ZipDownloadService-v1.0.0.jar:
CLASSPATH=/var/www/cgi-bin; export CLASSPATH

# database connection settings; edit to match your installation:
PGHOST="localhost"; export PGHOST
PGPORT=5432; export PGPORT
PGUSER="dvnapp"; export PGUSER
PGDB="dvndb"; export PGDB
PGPW="xxxxx"; export PGPW

java -Ddb.serverName=$PGHOST -Ddb.portNumber=$PGPORT -Ddb.user=$PGUSER -Ddb.databaseName=$PGDB -Ddb.password=$PGPW -jar ZipDownloadService-v1.0.0.jar
86 changes: 86 additions & 0 deletions scripts/zipdownload/pom.xml
@@ -0,0 +1,86 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>ZipDownloadService</groupId>
<artifactId>ZipDownloadService</artifactId>
<version>1.0.0</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<pluginRepositories>
<pluginRepository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<updatePolicy>never</updatePolicy>
</releases>
</pluginRepository>
</pluginRepositories>
<repositories>
<repository>
<id>central-repo</id>
<name>Central Repository</name>
<url>https://repo1.maven.org/maven2</url>
<layout>default</layout>
</repository>
</repositories>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.11.790</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.postgresql/postgresql -->
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.2</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/java</sourceDirectory>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.4</version>
<configuration>
<archive>
<manifest>
<mainClass>edu.harvard.iq.dataverse.custom.service.download.ZipDownloadService</mainClass>
</manifest>
</archive>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<finalName>${project.artifactId}-v${project.version}</finalName>
<appendAssemblyId>false</appendAssemblyId>
</configuration>
</plugin>
</plugins>
</build>
</project>