If you just want to install and run Open Data Maker, you can download a zip file. You will still need the dependencies below, but you don't need to clone the git repo for the source code.
You can run our bootstrap script to make sure you have all the dependencies. It will also install and start up Elasticsearch:

```
script/bootstrap
```
To run Open Data Maker, you will need to have the following software installed on your computer:
- Elasticsearch 1.7.3
- Ruby 2.2.2
NOTE: Open Data Maker does not currently work with Elasticsearch versions 2.x and above. You can follow or assist our progress towards 2.x compatibility at this GitHub issue.
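As a quick sanity check for the Ruby requirement, a snippet like the following (a sketch, not part of the project) compares the running Ruby against the required version using `Gem::Version` from RubyGems:

```ruby
require 'rubygems'

# Sketch: verify the running Ruby meets the 2.2.2 requirement.
# Gem::Version (bundled with Ruby) handles version comparison correctly,
# e.g. "2.2.10" > "2.2.2", which plain string comparison would get wrong.
required = Gem::Version.new("2.2.2")
current  = Gem::Version.new(RUBY_VERSION)

if current >= required
  puts "Ruby #{RUBY_VERSION} is new enough"
else
  puts "Ruby #{RUBY_VERSION} is older than #{required}; install 2.2.2 via RVM"
end
```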
On a Mac, we recommend installing Ruby 2.2.2 via RVM, and Elasticsearch 1.7.3 via Homebrew. If you don't want to use the bootstrap script above, you can install Elasticsearch 1.7 with Homebrew using the following command:

```
brew install elasticsearch17
```
If you are contributing to development, you will also need Git. If you don't already have these tools, the 18F laptop script will install them for you.
For development, fork the repo first, then clone your fork:

```
git clone https://github.com/<your GitHub username>/open-data-maker.git
cd open-data-maker
```
If you just ran `script/bootstrap`, then Elasticsearch should already be running. But if you stopped it or restarted your computer, you'll need to start it back up. Assuming you installed Elasticsearch via our bootstrap script, you can restart it with this command:

```
brew services restart elasticsearch
```
To get started, you can import the sample data and start the app with:

```
rake import
padrino start
```
Go to http://127.0.0.1:3000/ and you should see the text "Welcome to Open Data Maker" with a link to the API created by the sample data.
You can verify that the import was successful by visiting http://127.0.0.1:3000/v1/cities?name=Cleveland. You should see something like:

```
{
  "state": "OH",
  "name": "Cleveland",
  "population": 396815,
  "land_area": 77.697,
  "location": {
    "lat": 41.478138,
    "lon": -81.679486
  }
}
```
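If you prefer to check the response from code, here is a small sketch that parses a response like the one above with Ruby's stdlib JSON module (the inlined string stands in for the actual HTTP response body):

```ruby
require 'json'

# Stand-in for the body returned by /v1/cities?name=Cleveland
response = '{"state":"OH","name":"Cleveland","population":396815,' \
           '"land_area":77.697,"location":{"lat":41.478138,"lon":-81.679486}}'

city = JSON.parse(response)
puts "#{city['name']}, #{city['state']}: population #{city['population']}"
# → Cleveland, OH: population 396815
```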
While the app is running (or anytime), you can run `rake import` again. For instance, if you had a `presidents/data.yaml` file, you would import it with:

```
export DATA_PATH=presidents
rake import

# or, more succinctly:
DATA_PATH=presidents rake import
```
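The mechanism behind this is an ordinary environment-variable lookup; a minimal sketch of the pattern (the `"sample-data"` fallback here is an assumption for illustration, not necessarily the app's actual default):

```ruby
# Sketch: resolve the import directory the way a rake task might,
# reading DATA_PATH from the environment with a fallback.
# ("sample-data" as the fallback is illustrative, not confirmed.)
data_path = ENV.fetch("DATA_PATH", "sample-data")
puts "Importing from #{data_path}"
```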
To clear the data, assuming the data set had an index named "president-data":

```
rake es:delete[president-data]
```

You may alternatively delete all the indices (which could affect other apps if they are using your local Elasticsearch):

```
rake es:delete[_all]
```
The data directory can optionally include a file called `data.yaml` (see the sample one for its schema) that references one or more `.csv` files and specifies data types, field name mapping, and other support data.
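To get a feel for the shape of such a file, here is a hypothetical fragment parsed with Ruby's stdlib YAML module. The key names below are illustrative only; consult the sample `data.yaml` for the real schema:

```ruby
require 'yaml'

# Hypothetical data.yaml fragment (key names are illustrative, not the
# project's actual schema): an index name, the CSV files to load, and a
# field-name mapping with data types.
config = YAML.load(<<-YML)
index: city-data
files:
  - name: cities100.csv
dictionary:
  name:
    source: NAME
    type: string
YML

puts config["index"]                 # index to create
puts config["files"].first["name"]   # CSV file to load
```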
Optionally, you can enable indexing from the web app, but this option is still experimental:

```
export INDEX_APP=enable
```

Then, in your browser, go to `/index/reindex`. The old index (if present) will be deleted and re-created from the source files at `DATA_PATH`.
Read the additional implementation notes for more detail.