Prototype A: CDT Procurement Demo - Lab Zero

The prototype is running at this URL: https://adpq.labzero.com/

Administration Login

  • User: admin
  • Pass: admin

Requester Login

  • User: user
  • Pass: user

Notes:

  1. You can create additional Requester accounts by logging in with any unique username and the password “user”. This may be helpful for cart & reporting testing.
  2. A quick-access walkthrough is available to confirm how Lab Zero's prototype meets the functional requirements stated in the Prototype A RFI.

Table of Contents

  • Setup Instructions
  • Technical Approach
  • Playbook Adherence
  • Requirements List

Setup Instructions

Installation of requirements

Software Versions

  • Elixir 1.4.1 (Erlang/OTP 19 [erts-8.2])
  • Phoenix Framework 1.2.1
  • postgres (PostgreSQL) 9.6.2
  • Node.js 7.5.0
  • React 15.4.2

MacOS dev environment

  1. Install Homebrew if not already installed /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  2. Update Homebrew brew update
  3. Install postgresql brew install postgresql
  4. Install node brew install node
  5. Install elixir brew install elixir
  6. Install Hex mix local.hex
  7. Create PostgreSQL role createuser -d adpq
  8. Create and migrate schema mix ecto.create && mix ecto.migrate
  9. To add seed data to your database: mix run priv/repo/seeds.exs

Starting the application

  1. Install dependencies with mix deps.get
  2. Install Node.js dependencies with npm install
  3. Start Phoenix endpoint with mix phoenix.server

Now you can visit localhost:4000 from your browser.
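
If the Phoenix endpoint cannot reach Postgres, check the development Repo configuration in config/dev.exs. The excerpt below is a minimal sketch assuming an Adpq.Repo module and the adpq role created above; it is not the repository's actual configuration.

    # config/dev.exs — hypothetical excerpt; module name and credentials are assumptions
    use Mix.Config

    config :adpq, Adpq.Repo,
      adapter: Ecto.Adapters.Postgres,
      username: "adpq",        # role created with `createuser -d adpq`
      database: "adpq_dev",
      hostname: "localhost",
      pool_size: 10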

Technical Approach

Introduction

The Lab Zero team’s approach to product development and agile software delivery mirrors the U.S. Digital Services Playbook as shown in the Playbook Adherence section below and fully illustrated within the Docs folder in this repo. Our team kicked off design by interviewing target users to understand their needs and to test solution ideas. User feedback informed design iterations, user stories in the backlog, and prioritization during the sprint cycles. Collaboration enabled the team to optimize design iterations that could be feasibly delivered within the timeline. Our engineers chose modern tools that supported our need to bring features together quickly and deliver them continually with a high degree of quality. The team’s high level of rigor in engineering—gleaned from years of experience delivering mission-critical applications—results in code that is easy to adapt to meet evolving business needs for the State of California.

Architectural Approach

This web application consists of a modern React.js app (Single Page Application) that consumes a JSON API backend written in Elixir using the Phoenix framework, backed by a Postgres database. We considered using Shopify or Spree but ultimately decided to build the prototype from scratch. This decision enabled us to demonstrate our ability to develop an easy-to-use application designed in light of careful and deliberate conversations with real users.

  1. React Components
  2. JS REST access
  3. JS routes (defining client-side URLs)
  4. JSON serialization
  5. Controller
  6. Model
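
To make these layers concrete, the sketch below shows a hypothetical Phoenix controller and JSON view serving the React SPA. The module names, route, and fields are illustrative assumptions, not code copied from this repository.

    # Hypothetical controller: handles GET /api/catalog_items for the React app
    defmodule Adpq.CatalogItemController do
      use Adpq.Web, :controller     # assumed Phoenix 1.2-style web module

      def index(conn, _params) do
        items = Adpq.Repo.all(Adpq.CatalogItem)
        render(conn, "index.json", catalog_items: items)
      end
    end

    # Hypothetical view: serializes the structs into the JSON the SPA consumes
    defmodule Adpq.CatalogItemView do
      use Adpq.Web, :view

      def render("index.json", %{catalog_items: items}) do
        %{data: Enum.map(items, &%{id: &1.id, name: &1.name, price: &1.price})}
      end
    end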

Development Process

We use the GitFlow branching model and create feature branches off of the develop branch for all new changes. All commits should adhere to the guidelines described in our commit guide. Each feature branch is pushed to GitHub and a pull request is created, built, and tested in CircleCI before peer review is performed by other developers on the team. Upon final approval by the dev lead, the branch is squash-merged back into develop.

CI Process

The CI service checks every pull request and requires all of these steps to succeed:

  • Compilation and Docker container build
  • Credo (code quality/style analyzer)
  • Unit tests
  • ESLint on JavaScript
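
For context on the Credo step, a minimal .credo.exs looks like the sketch below; the specific checks are illustrative assumptions, not the project's actual configuration.

    # .credo.exs — minimal sketch; the checks listed are illustrative assumptions
    %{
      configs: [
        %{
          name: "default",
          files: %{included: ["lib/", "web/", "test/"], excluded: ["deps/"]},
          checks: [
            {Credo.Check.Readability.MaxLineLength, max_length: 100},
            {Credo.Check.Design.TagTODO, []}
          ]
        }
      ]
    }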

Continuous Delivery

The delivery process relies upon automated movement of code and assets into the test environment triggered by commits to the develop branch.

  • Commits to develop trigger deployment to our Test environment
  • Upon deployment, post-deploy automated testing is performed

Release Process

Using GitFlow tooling, we create a release branch and tag. The tag is then used to create a new container image. A job in CircleCI is used to deploy the tagged container to ECS in AWS.

Infrastructure Approach

We built the application in a cloud-first manner on AWS, but deployed it in a Docker container in order to allow cloud portability. However, if AWS offers a managed service for something we need, we prefer the managed service to rolling our own infrastructure, e.g., Postgres via RDS instead of running our own Postgres servers on EC2.

We maintain our VPC and security blueprints as CloudFormation templates checked into Git.

Database table definitions/migrations: https://github.com/labzero/adpq/blob/master/priv/repo/migrations/20170217185137_create_catalog_item.exs
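
For convenience, an Ecto migration of that general shape is sketched below; the column names are illustrative assumptions rather than a copy of the linked file.

    # Hypothetical Ecto migration; see the linked file for the real table definition
    defmodule Adpq.Repo.Migrations.CreateCatalogItem do
      use Ecto.Migration

      def change do
        create table(:catalog_items) do
          add :name, :string
          add :description, :text
          add :price, :integer
          timestamps()
        end
      end
    end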

Cloud Architecture

Playbook Adherence

Our prioritized Prototype Design and Prototype Dev backlogs within GitHub show the activities in our iterative and collaborative process from discovery to delivery and deployment. You may also find references to Playbook activities within many cards in the Product Design backlog (noted as “PB”).

The list below associates key activities and artifacts with the Digital Service Plays:

1: Understand what people need

2: Address the whole experience, from start to finish

  • Illustrated online and offline touch points and aligned the team on key points of impact and focus: Service Map
  • Stated the project summary, goals, and metrics to ensure the effort meets needs: Product Speclet

3: Make it simple and intuitive

  • Consistently utilized the US Web Design Standards
  • Followed accessibility best practices (see Section G of the Requirements List)
  • Leveraged login to provide users with a way to exit and return later to complete the process
  • Improved readability by re-formatting and adjusting sample data: Data Spreadsheet

4: Build the service using agile and iterative practices

6: Assign one leader and hold that person accountable

7: Bring in experienced teams

8: Choose a modern technology stack

9: Deploy in a flexible hosting environment

10: Automate testing and deployments

12: Use data to drive decisions

13: Default to open

Requirements List

A. Assigned one (1) leader and gave that person authority and responsibility and held that person accountable for the quality of the prototype submitted

Aaron Cripps, Product Owner

B. Assembled a multidisciplinary and collaborative team that includes, at a minimum, five (5) of the labor categories as identified in Attachment B: PQVP DS-AD Labor Category Descriptions

The majority of the team is based in the San Francisco Bay Area. One member is in Tucson, AZ, and one is in Little Rock, AR. Our team collaborates using tools like Slack, Google Hangouts, Screen Hero, GoToMeeting, and Google Docs.

  • Product Manager - Aaron Cripps
  • Technical Architect - Sasha Voynow, Matt Wilson
  • Interaction Designer - Dean Baker, Clayton Hopkins
  • Visual Designer - Jim Ochsenreiter
  • Front End Web Developer - Adam Ducker, Jeffrey Carl Faden
  • Backend Web Developer - Sasha Voynow
  • DevOps Engineer - Brien Wankel, Dave O’Dell

C. Understood what people needed, by including people in the prototype development and design process

Informed by our initial persona attributes, we found three individuals whose job activities aligned with or related to the Lead Purchasing Organization Administration and State Agency IT Requester roles.

  • Dennis Baker, State of California Assembly Reprographics Manager
  • Robert Lee, Startup Office Manager
  • Ned Holets, Lead Software Engineer who has worked on CMS projects

D. Used at least a minimum of three (3) “user-centric design” techniques and/or tools

Human-centered design is a core aspect of our process. We consider each idea to be a hypothesis which should be tested and proven. You can find a richer explanation of our findings here. Key activity examples below:

  • Customer Development
    • Stating and prioritizing learning goals (hypotheses)
    • Open-ended interviews with people who met our target personas to understand their needs and goals
  • In-person usability testing to validate solution ideas/hypotheses
    • Clickable prototypes to support usability testing
    • ‘Think aloud’ qualitative user tests of prototype
    • Accessibility testing
  • Leveraging existing usability research
    • Baymard Institute, an e-commerce usability research firm that uses qualitative and quantitative research methods.

E. Used GitHub to document code commits

Yes, we’ve used GitHub fully for peer review and as our sole code repository.

F. Used Swagger to document the RESTful API, and provided a link to the Swagger API

Yes, we've implemented Swagger. You can view the test UI or point your own UI to the raw JSON describing the API. When testing, you can authorize in the Swagger UI by putting your username in the Authorization header.
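
One common way to generate this documentation in Phoenix is the phoenix_swagger library; the sketch below, extending the hypothetical controller from the Architectural Approach section, is an illustrative annotation and not necessarily how this repository produces its Swagger JSON.

    # Hypothetical sketch using phoenix_swagger; route and summary are assumptions
    defmodule Adpq.CatalogItemController do
      use Adpq.Web, :controller
      use PhoenixSwagger

      swagger_path :index do
        get "/api/catalog_items"
        summary "List catalog items"
        description "Returns the catalog as JSON; send a username in the Authorization header"
        response 200, "Success"
      end

      def index(conn, _params) do
        render(conn, "index.json", catalog_items: Adpq.Repo.all(Adpq.CatalogItem))
      end
    end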

G. Complied with Section 508 of the Americans with Disabilities Act and WCAG 2.0

Yes, we have used HTML and CSS in a manner that complies with the ADA and WCAG 2.0.

H. Created or used a design style guide and/or a pattern library

  • Utilized the US Web Design Standards for user experience, visual design and responsive guidelines and patterns.
  • Leveraged the Baymard Institute’s research-based user interaction guidelines for eCommerce product lists, homepages and checkout.

I. Performed usability tests with people

We showed functional prototypes to the following individuals, facilitated by a “Think Aloud” qualitative user test.

J. Used an iterative approach, where feedback informed subsequent work or versions of the prototype

We began by clarifying the business case and target outcomes without proposing solutions. This set the stage for each activity to be oriented around learning and empowered each team member to bring their expertise and creativity into the solutions, which were iteratively built and tested. Learnings from each activity were fed back into subsequent iterations, cross-functionally.

  • Product Owner led a goal-oriented kickoff and drafted the first version of the “Speclet” to align and hold the team accountable to high-level key outcomes and measurements.
  • Explorations improved in fidelity based on our learning needs
  • Key learnings from user interviews informed the project summary, goals, and measurements and allowed us to apply improvements to our designs and development.
  • Team story time for formal technical review of prioritized backlog. Development feedback assisted in clarifying prototype behavior and story decomposition.
  • Validated design concepts through prototypes with people outside the team. User feedback informed design and development work.
  • Shared design, development, and product ideas daily through informal conversations and standups.
  • Utilized Scrum framework for frequent inspection and adaptation
    • Product Owner managed a prioritized backlog of tasks for Design & Development
    • Peers review and accept work
    • Daily standup
    • Sprints: team performed demos and retrospectives

K. Created a prototype that works on multiple devices, and presents a responsive design

Our prototype has been designed, developed and tested to work on desktop browsers, iOS and Android phones.

L. Used at least five (5) modern and open-source technologies, regardless of architectural layer (frontend, backend, etc.)

We utilized many modern open-source technologies:

  • Elixir
  • Phoenix Framework
  • Ecto (data layer)
  • React.js
  • Docker
  • SASS
  • JavaScript/ES6

M. Deployed the prototype on an Infrastructure as a Service (IaaS) or Platform as Service (PaaS) provider, and indicated which provider they used

Our prototype has been deployed to AWS as a Docker container running in ECS, using RDS for its datastore.

N. Developed automated unit tests for their code

The Engineering Team delivered stories with working code and some level of automated testing. All tests are run in the continuous integration loop with each pull request.
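
As an illustration of the kind of test ExUnit runs in that loop, the sketch below exercises a hypothetical CatalogItem changeset; the module and its required fields are assumptions, not tests copied from the repository.

    # Hypothetical unit test; CatalogItem.changeset/2 and its required fields are assumptions
    defmodule Adpq.CatalogItemTest do
      use ExUnit.Case, async: true

      alias Adpq.CatalogItem

      test "changeset is invalid without a name" do
        changeset = CatalogItem.changeset(%CatalogItem{}, %{price: 100})
        refute changeset.valid?
      end

      test "changeset is valid with a name and price" do
        changeset = CatalogItem.changeset(%CatalogItem{}, %{name: "Laptop", price: 100})
        assert changeset.valid?
      end
    end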

O. Setup or used a continuous integration system to automate the running of tests and continuously deployed their code to their IaaS or PaaS provider

Our use of a CI server drives automated tests and our deployment pipeline. All new pull requests are tested. We used CircleCI for both CI and CD.

P. Setup or used configuration management

We generate CloudFormation templates and build Docker containers, adhering to a twelve-factor (https://12factor.net/) approach. CloudFormation templates for staging and production environments can be found in the docs/12-CloudFormation directory.

Q. Setup or used continuous monitoring

We set up Honeybadger.io for error reporting and Pingdom for uptime monitoring.

R. Deployed their software in an open source container, such as Docker (i.e., utilized operating-system-level virtualization)

We build Docker containers in our CI/CD process and deploy them to ECR/ECS in AWS.

S. Provided sufficient documentation to install and run their prototype on another machine

Please see the Setup Instructions section in this document. All engineers used these steps to set up their development environments.

T. Prototype and underlying platforms used to create and run the prototype are openly licensed and free of charge

All systems used to create and run the prototype are open source and free of charge for use.
