add source code
tjhance committed Sep 5, 2020
1 parent 2e011ed commit 053dd16
Showing 5,925 changed files with 1,916,838 additions and 54 deletions.
58 changes: 45 additions & 13 deletions README.md
@@ -1,6 +1,6 @@
Welcome to the VeriBetrKV (also known as VeriSafeKV) artifact for our OSDI'20 submission,

Storage Systems are Distributed Systems (So Verify Them That Way!)
_Storage Systems are Distributed Systems (So Verify Them That Way!)_

This artifact is distributed as a Docker container based on an Ubuntu image and includes,

@@ -17,31 +17,59 @@ All source is distributed under their projects' respective licenses.

# Obtaining the Docker image

You can either download the GitHub release, `osdi2020-artifact`, and load the image with
You have a choice of obtaining an image for SSD-optimized VeriBetrKV or
HDD-optimized VeriBetrKV.

docker load -i veribetrkv-artifact.tgz
## Obtaining the HDD-optimized Docker image

You can either download the GitHub release, `veribetrkv-artifact-hdd`, and load the image with

docker load -i veribetrkv-artifact-hdd.tgz

or build it yourself with,

cd docker
docker build -t veribetrkv-artifact .
cd docker-hdd
docker build -t veribetrkv-artifact-hdd .

## Obtaining the SSD-optimized Docker image

You can either download the GitHub release, `veribetrkv-artifact-ssd`, and load the image with

docker load -i veribetrkv-artifact-ssd.tgz

or build it yourself with,

cd docker-ssd
docker build -t veribetrkv-artifact-ssd .

# Evaluating this artifact

To fully evaluate the artifact, our benchmark suite needs to be run twice, once for
There are two versions of VeriBetrKV, one optimized for HDDs and one optimized
for SSDs. The only difference is the size of the B-epsilon tree nodes.

In our paper, SSD-reported numbers are done using the SSD-optimized version,
and HDD-reported numbers are done using the HDD-optimized version. (The exact hardware
specs we used can be found in Section 7.2 of our paper.)

We show commands for evaluating on HDDs in this README. Replace `hdd` with `ssd`
to use the SSD-optimized version.
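The node-size trade-off between the two versions can be illustrated with a toy model. The sketch below is an illustration only (not VeriBetrKV's code, and the byte sizes are hypothetical): writes accumulate in a node's buffer and are flushed in one large I/O when the buffer fills, so larger nodes mean fewer, larger flushes, which suits seek-dominated HDDs.

```python
# Toy model (not VeriBetrKV's implementation) of why B-epsilon tree node
# size matters: larger nodes batch more writes per flush, trading fewer
# large I/Os (good for HDDs) against smaller, cheaper I/Os (good for SSDs).

def count_flushes(num_writes: int, record_size: int, node_size: int) -> int:
    """Count how many times a single node's buffer flushes (toy model)."""
    flushes = 0
    buffered = 0
    for _ in range(num_writes):
        buffered += record_size
        if buffered >= node_size:
            flushes += 1  # one large sequential I/O writes the whole buffer
            buffered = 0
    return flushes

# Hypothetical node sizes, purely for illustration:
hdd_node = 2 * 1024 * 1024  # 2 MiB nodes
ssd_node = 128 * 1024       # 128 KiB nodes

print(count_flushes(10_000, 1024, hdd_node))  # few large flushes
print(count_flushes(10_000, 1024, ssd_node))  # many small flushes
```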

To fully evaluate the artifact on the chosen hardware (HDD or SSD),
our benchmark suite needs to be run twice, once for
the 'dynamic-frames' version and once for the 'linear' version.

Furthermore, some of these benchmarks are very sensitive to memory capacity and
to the type of underlying hardware. Thus, to obtain results similar to the ones in our
paper, the right memory configuration must be used.
Furthermore, some of these benchmarks are very sensitive to the available memory capacity.
Thus, to obtain results similar to the ones in our paper,
the right memory configurations must be used for certain experiments.

However, some of the other operations require higher memory limits.
On the other hand, some of the other operations will fail if they
are not given _enough_ memory.

Therefore, the recommended way to evaluate this artifact is
to run these scripts (from outside the Docker container).

./run-experiments-in-docker-dynamic-frames.sh results-df
./run-experiments-in-docker-linear.sh results-linear
./run-experiments-in-docker-dynamic-frames.sh results-df hdd
./run-experiments-in-docker-linear.sh results-linear hdd
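If you want to double-check the command sequence before committing to a long benchmark run, a tiny helper can print both invocations for a chosen device. This wrapper is hypothetical (not part of the artifact) and only echoes the commands, so it is safe to dry-run:

```shell
# Hypothetical helper (not part of the artifact): print the benchmark
# invocations for a given device type ("hdd" or "ssd") without running them.
print_runs() {
  device="$1"
  echo "./run-experiments-in-docker-dynamic-frames.sh results-df $device"
  echo "./run-experiments-in-docker-linear.sh results-linear $device"
}

print_runs hdd
```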

The first script will launch Docker containers and,

Expand All @@ -58,7 +86,11 @@ The first script will launch Docker containers and,
* When complete, it will summarize all results into a file,
`results-df/artifact/paper.pdf`.

The second script will do the same, but for the 'linear' version.
The second script will do the same, but for the 'linear' version. (The only other
difference is that it will not re-run the BerkeleyDB or RocksDB experiments,
as it would be redundant to run them twice. Those two experiments
are not affected by changes to the VeriBetrKV source code.)
It will likewise summarize all results into a file, `results-linear/artifact/paper.pdf`.

Together, these two output pdfs should reproduce the results from our paper.

4 changes: 2 additions & 2 deletions docker/Dockerfile → docker-hdd/Dockerfile
@@ -33,8 +33,8 @@ RUN ./install-dafny.sh
RUN rm ./install-dafny.sh
ENV PATH="/home/root/dafny/bin/:${PATH}"

COPY repos/veribetrkv-dynamic-frames /home/root/veribetrkv-dynamic-frames
COPY repos/veribetrkv-linear /home/root/veribetrkv-linear
COPY src/veribetrkv-dynamic-frames /home/root/veribetrkv-dynamic-frames
COPY src/veribetrkv-linear /home/root/veribetrkv-linear

RUN ln -s /home/root/dafny /home/root/veribetrkv-dynamic-frames/.dafny
RUN ln -s /home/root/dafny /home/root/veribetrkv-linear/.dafny
File renamed without changes.
File renamed without changes.
88 changes: 88 additions & 0 deletions docker-hdd/src/veribetrkv-dynamic-frames/Betree/Betree.i.dfy
@@ -0,0 +1,88 @@
include "../Betree/BlockInterface.i.dfy"
include "../lib/Base/sequences.i.dfy"
include "../lib/Base/Maps.s.dfy"
include "../MapSpec/MapSpec.s.dfy"
include "../Betree/Graph.i.dfy"
include "../Betree/BetreeSpec.i.dfy"
//
// Betree lowers the "lifted" op-sequences of BetreeSpec down to concrete state machine
// steps that advance the BetreeBlockInterface as required by BetreeSpec.
// It also interleaves Betree operations with BlockInterface garbage collection.
//
// TODO(jonh): This probably should get renamed; its place in the hierarchy
// is confusing.
//

module Betree {
  import opened BetreeSpec`Internal
  import BI = BetreeBlockInterface
  import MS = MapSpec
  import opened Maps
  import opened Sequences
  import opened KeyType
  import opened ValueType
  import UI

  import opened G = BetreeGraph

  datatype Constants = Constants(bck: BI.Constants)
  datatype Variables = Variables(bcv: BI.Variables)

  // TODO(jonh): [cleanup] Not sure why these 3 are in this file.
  predicate LookupRespectsDisk(view: BI.View, lookup: Lookup) {
    forall i :: 0 <= i < |lookup| ==> IMapsTo(view, lookup[i].ref, lookup[i].node)
  }

  predicate IsPathFromRootLookup(k: Constants, view: BI.View, key: Key, lookup: Lookup) {
    && |lookup| > 0
    && lookup[0].ref == Root()
    && LookupRespectsDisk(view, lookup)
    && LookupFollowsChildRefs(key, lookup)
  }

  predicate IsSatisfyingLookup(k: Constants, view: BI.View, key: Key, value: Value, lookup: Lookup) {
    && IsPathFromRootLookup(k, view, key, lookup)
    && LookupVisitsWFNodes(lookup)
    && BufferDefinesValue(InterpretLookup(lookup, key), value)
  }

  function EmptyNode() : Node {
    var buffer := imap key | MS.InDomain(key) :: G.M.Define(G.M.DefaultValue());
    Node(imap[], buffer)
  }

  predicate Init(k: Constants, s: Variables) {
    && BI.Init(k.bck, s.bcv)
    && s.bcv.view[Root()] == EmptyNode()
  }

  predicate GC(k: Constants, s: Variables, s': Variables, uiop: UI.Op, refs: iset<Reference>) {
    && uiop.NoOp?
    && BI.GC(k.bck, s.bcv, s'.bcv, refs)
  }

  predicate Betree(k: Constants, s: Variables, s': Variables, uiop: UI.Op, betreeStep: BetreeStep)
  {
    && ValidBetreeStep(betreeStep)
    && BetreeStepUI(betreeStep, uiop)
    && BI.Reads(k.bck, s.bcv, BetreeStepReads(betreeStep))
    && BI.OpTransaction(k.bck, s.bcv, s'.bcv, BetreeStepOps(betreeStep))
  }

  datatype Step =
    | BetreeStep(step: BetreeStep)
    | GCStep(refs: iset<Reference>)
    | StutterStep

  predicate NextStep(k: Constants, s: Variables, s': Variables, uiop: UI.Op, step: Step) {
    match step {
      case BetreeStep(betreeStep) => Betree(k, s, s', uiop, betreeStep)
      case GCStep(refs) => GC(k, s, s', uiop, refs)
      case StutterStep => s == s' && uiop.NoOp?
    }
  }

  predicate Next(k: Constants, s: Variables, s': Variables, uiop: UI.Op) {
    exists step: Step :: NextStep(k, s, s', uiop, step)
  }
}
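The `NextStep`/`Next` idiom above is a standard state-machine pattern: a transition relation dispatched over a datatype of step kinds, with `Next` existentially quantifying over the step. A toy Python model (an illustration only, not the artifact's code; the "state" here is just a set and `GCStep` discards references) can make the shape concrete:

```python
# Toy model (not VeriBetrKV's code) of the NextStep/Next state-machine idiom:
# a step datatype, a relation dispatched on the step kind, and Next as an
# existential over candidate steps.
from dataclasses import dataclass

@dataclass(frozen=True)
class GCStep:
    refs: frozenset  # references to garbage-collect

@dataclass(frozen=True)
class StutterStep:
    pass

def next_step(s, s2, step) -> bool:
    # Dispatch on the step kind, mirroring NextStep's match expression.
    if isinstance(step, GCStep):
        return s2 == s - step.refs  # toy "garbage collection" drops refs
    if isinstance(step, StutterStep):
        return s == s2              # no-op transition
    return False

def next_rel(s, s2, candidate_steps) -> bool:
    # Next: the transition holds iff some step witnesses it.
    return any(next_step(s, s2, st) for st in candidate_steps)
```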
