diff --git a/.flake8 b/.flake8
new file mode 100644
index 00000000000..4580b984652
--- /dev/null
+++ b/.flake8
@@ -0,0 +1,22 @@
+[flake8]
+# The idea is to remove these ignores over time
+# E226 = missing whitespace around arithmetic operator
+# E203 = whitespace before ':'
+# E221 = multiple spaces before operator
+# E241 = multiple spaces after ','
+# E231 = missing whitespace after ','
+
+# E261 = at least two spaces before inline comment
+# E126 = continuation line over-indented for hanging indent
+# E128 = continuation line under-indented for visual indent
+# I100 = Imported names are in the wrong order.
+
+# Q000 = double quotes
+
+ignore = E126,E128,E221,E226,E261,E203,E231,E241,Q000,I100
+
+max-line-length = 200
+
+exclude = docs, migrations, node_modules, bower_components, venv, */__init__.py
+
+import-order-style = google
diff --git a/.gitignore b/.gitignore
index ca9071369c4..33d5d7f1885 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,13 +1,13 @@
# Django Related
-local_settings.py
+tabbycat/local_settings.py
.gitmodules/
__pycache__/
# Compilation
-staticfiles/
-static/css/
-static/js/
-static/fonts/
+tabbycat/staticfiles/
+tabbycat/static/css/
+tabbycat/static/js/
+tabbycat/static/fonts/
# Dependencies
node_modules
@@ -22,10 +22,10 @@ data/*
!data/sandbox/
!data/test/
!data/presets/
-!data/fixtues/
+!data/fixtures/
# Docs
-site/
+docs/site/
# Tags
tags
diff --git a/.travis.yml b/.travis.yml
index eab4209b971..a0d50cc4afe 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -10,4 +10,5 @@ services:
install:
- pip install -r requirements_common.txt
script:
- - dj test -v 2
+ - flake8 tabbycat
+ - cd tabbycat && dj test -v 2
diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index c14502d4166..31733b6d34f 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -2,6 +2,32 @@
Change Log
==========
+1.0.0
+-----
+Redesigned and redeveloped adjudicator allocation page
+ - Redesigned interface, featuring clearer displays of conflict and diversity information
+ - Changes to importances and panels are now automatically saved
+ - Added debate "liveness" to help identify critical rooms—many thanks to Thevesh Theva
+ - Panel score calculations performed live to show strength of voting majorities
+New features
+ - Added record pages for teams and adjudicators
+ - Added a diversity tab to display demographic information about participants and scoring
+Significant general improvements
+ - Shifted most table rendering to Vue.js to improve performance and design
+ - Drastically reduced number of SQL queries in large tables, *e.g.* draw, results, tab
+Break round management
+ - Completed support for break round draws
+ - Simplified procedure for adding remarks to teams and updating break
+ - Reworked break generation code to be class-based, to improve future extensibility
+ - Added support for break qualification rules: AIDA Australs, AIDA Easters, WADL
+Feedback
+ - Changed Boolean fields in AdjudicatorFeedbackQuestion to reflect what they actually do
+ - Changed "panellist feedback enabled" option to "feedback paths", a choice of three options
+- Dropped "/t/" from tournament URLs and moved "/admin/" to "/database/", with 301 redirects
+- Added basic code linting to the continuous integration tests
+- Many other small bug fixes, refactors, optimisations, and documentation updates
+
+
0.9.0
-----
- Added a beta implementation of the break rounds workflow
diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index ad00e68b29e..16a8bfe8bea 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -2,16 +2,12 @@
Contributing
============
-.. important:: We are using the `git-flow workflow `_, so please submit any pull requests against the **develop branch** (and not master).
-
-Contributions are welcome, and are greatly appreciated! Every little bit helps, and credit will be given. `Join our Facebook group `_.
+Contributions are welcome, and are greatly appreciated! Every little bit helps, and credit will be given. `Join our Facebook group `_ if you have any questions about how to get started contributing.
Bug reports
===========
-Please report bugs by opening a new issue in our `GitHub repository `_.
-
-It is most helpful if you can include:
+Please report bugs by opening a new issue in our `GitHub repository `_. It is most helpful if you can include:
- How Tabbycat was installed (on Heroku, locally on OS X, `etc.`)
- Any details about your tournament and setup that might be helpful in troubleshooting
@@ -20,10 +16,20 @@ It is most helpful if you can include:
Getting started
===============
-- Insert general setup instructions
-- Insert instructions on how to make a feature/bug branch
-- Maybe insert instructions on how to run tests / flake8
-- Insert pull request checklist/guidelines
+.. important:: We are using the `git-flow workflow `_, so please submit any pull requests against the **develop branch** (and not master).
+
+- Generally we prefer that features and bug fixes are submitted as pull requests on their own branch (as described in the git-flow process)
+- We use Django's testing tools — it would be great if new features came with unit tests
+- TODO: more detail on tests and pull request checklist/guidelines
+
+Style guide
+===========
+
+We use `flake8 `_ to check a non-strict set of style rules. Warnings will cause the Travis CI build to fail. The entire codebase can be checked by using::
+
+ $ flake8 .
+
+Run this from the base directory of the repository.
Semantic versioning convention
==============================
@@ -48,14 +54,11 @@ Documentation
Documentation is created using `Sphinx `_ and hosted at `Read The Docs `_. Pushes to ``develop`` will update the *latest* documentation set, while pushes to ``master`` will update the *stable* documentation set.
-Previewing Locally
-------------------
-
-Install the docs-specific requirements (from the base folder)::
+To preview the documentation locally, install the docs-specific requirements (from the base folder)::
$ pip install -r 'docs/requirements.txt'
-Start the server::
+Then start the server::
$ sphinx-autobuild docs docs/_build/html --port 7999
diff --git a/Gulpfile.js b/Gulpfile.js
index 95c7b3c55ae..9c1932b241e 100644
--- a/Gulpfile.js
+++ b/Gulpfile.js
@@ -1,4 +1,5 @@
var gulp = require('gulp');
+var gutil = require('gulp-util'); // Error logging + noop
// Compilation
var sass = require('gulp-sass');
@@ -6,78 +7,137 @@ var rename = require('gulp-rename');
var concat = require('gulp-concat');
// Compression
-var minifyCSS = require('gulp-minify-css');
+var cleanCSS = require('gulp-clean-css');
var uglify = require('gulp-uglify');
+// Browserify
+var browserify = require('browserify'); // Bundling modules
+var babelify = require('babelify'); // Use ES syntax
+var vueify = require('vueify');
+var source = require('vinyl-source-stream'); // Use browserify in gulp
+var es = require('event-stream'); // Browserify multiple files at once
+var streamify = require('gulp-streamify');
+// Debug & Config
+var livereload = require('gulp-livereload');
+var outputDir = 'tabbycat/static/';
+var isProduction = (gutil.env.development !== true);
+if (isProduction) {
+ console.log('GULP: Building for production');
+} else {
+ console.log('GULP: Building for development');
+}
+
+// Tasks
gulp.task('fonts-compile', function() {
gulp.src([
- 'bower_components/**/*.eot',
- 'bower_components/**/*.svg',
- 'bower_components/**/*.ttf',
- 'bower_components/**/*.woff',
- 'bower_components/**/*.woff2',
+ 'node_modules/bootstrap-sass/assets/fonts/**/*.eot',
+ 'node_modules/bootstrap-sass/assets/fonts/**/*.svg',
+ 'node_modules/bootstrap-sass/assets/fonts/**/*.ttf',
+ 'node_modules/bootstrap-sass/assets/fonts/**/*.woff',
+ 'node_modules/bootstrap-sass/assets/fonts/**/*.woff2',
+ 'node_modules/lato-font/fonts/**/*.eot',
+ 'node_modules/lato-font/fonts/**/*.svg',
+ 'node_modules/lato-font/fonts/**/*.ttf',
+ 'node_modules/lato-font/fonts/**/*.woff',
+ 'node_modules/lato-font/fonts/**/*.woff2',
])
.pipe(rename({dirname: ''})) // Remove folder structure
- .pipe(gulp.dest('static/fonts/vendor/'));
+ .pipe(gulp.dest(outputDir + 'fonts/'));
});
gulp.task('styles-compile', function() {
- gulp.src(['templates/scss/printables.scss', 'templates/scss/style.scss'])
+ gulp.src([
+ 'tabbycat/templates/scss/allocation-old.scss',
+ 'tabbycat/templates/scss/printables.scss',
+ 'tabbycat/templates/scss/style.scss'])
.pipe(sass().on('error', sass.logError))
- .pipe(minifyCSS())
- .pipe(rename(function (path) {
- path.basename += ".min";
- }))
- .pipe(gulp.dest('static/css/'));
+ // '*' compatibility = IE9+
+ .pipe(isProduction ? cleanCSS({compatibility: '*'}) : gutil.noop())
+ .pipe(gulp.dest(outputDir + '/css/'))
+ .pipe(isProduction ? gutil.noop() : livereload());
});
-// Creates task for collecting dependencies
-gulp.task('js-compile', function() {
- gulp.src(['templates/js/*.js'])
- .pipe(uglify())
- .pipe(rename(function (path) {
- path.basename += ".min";
- }))
- .pipe(rename({dirname: ''})) // Remove folder structure
- .pipe(gulp.dest('static/js/'));
+gulp.task("js-vendor-compile", function() {
+ gulp.src([
+ 'node_modules/jquery/dist/jquery.js', // For Debug Toolbar
+ 'node_modules/datatables.net/js/jquery.dataTables.js', // Deprecated
+ 'node_modules/jquery-validation/dist/jquery.validate.js', // Deprecated
+ 'tabbycat/templates/js-vendor/jquery-ui.min.js', // Deprecated
+ ])
+ .pipe(isProduction ? uglify() : gutil.noop()) // Doesn't crash
+ .pipe(gulp.dest(outputDir + '/js/vendor/'));
});
-// Creates task for collecting dependencies
-gulp.task('js-main-vendor-compile', function() {
- gulp.src(['bower_components/jquery/dist/jquery.js',
- 'bower_components/bootstrap-sass/assets/javascripts/bootstrap.js',
- 'templates/js/vendor/jquery.dataTables.min.js',
- 'templates/js/vendor/fixed-header.js',
- ])
- .pipe(concat('vendor.js'))
- .pipe(uglify())
- .pipe(rename(function (path) {
- path.basename += ".min";
- }))
- .pipe(rename({dirname: ''})) // Remove folder structure
- .pipe(gulp.dest('static/js/vendor/'));
+gulp.task("js-compile", function() {
+ gulp.src([
+ 'tabbycat/templates/js-standalones/*.js',
+ ])
+ // Can't run uglify() until django logic is out of standalone js files
+ // .pipe(isProduction ? uglify() : gutil.noop())
+ .pipe(gulp.dest(outputDir + '/js/'))
+ .pipe(isProduction ? gutil.noop() : livereload());
});
-// Creates task for collecting dependencies
-gulp.task('js-alt-vendor-compile', function() {
- gulp.src(['bower_components/jquery/dist/jquery.min.js', // Redundant but needed for debug toolbar
- 'bower_components/d3/d3.min.js',
- 'bower_components/jquery-ui/jquery-ui.min.js',
- 'bower_components/jquery-validation/dist/jquery.validate.min.js',
- 'bower_components/vue/dist/vue.min.js',
- 'bower_components/vue/dist/vue.js', // For when debug is on
- ])
- .pipe(uglify())
- .pipe(rename({dirname: ''})) // Remove folder structure
- .pipe(gulp.dest('static/js/vendor/'));
+gulp.task("js-browserify", function() {
+ // With thanks to https://fettblog.eu/gulp-browserify-multiple-bundles/
+ // We define our input files, which we want to have bundled
+ var files = [
+ 'tabbycat/templates/js-bundles/public.js',
+ 'tabbycat/templates/js-bundles/admin.js'
+ ];
+ // map them to our stream function
+ var tasks = files.map(function(entry) {
+ return browserify({ entries: [entry] })
+ .transform(vueify)
+ .on('error', gutil.log)
+ .transform([babelify, {
+ presets: ["es2015"],
+ plugins: ['transform-runtime']
+ }])
+ .on('error', gutil.log)
+ .bundle().on('error', gutil.log)
+ .on('error', function(err) {
+ gutil.log(err);
+ this.emit('end');
+ })
+ .pipe(source(entry))
+ .on('error', gutil.log)
+ .pipe(isProduction ? streamify(uglify()) : gutil.noop())
+ .on('error', gutil.log)
+ .pipe(rename({
+ extname: '.bundle.js',
+ dirname: ''
+ }))
+ .pipe(gulp.dest(outputDir + '/js/'));
+ // .pipe(isProduction ? gutil.noop() : livereload());
+ // TODO: get proper hot reloading going?
+ });
+ // create a merged stream
+ return es.merge.apply(null, tasks);
});
-// Automatically build and watch the CSS folder for when a file changes
-gulp.task('default', ['build'], function() {
- gulp.watch('templates/scss/**/*.scss', ['styles-compile']);
- gulp.watch('templates/js/**/*.js', ['js-compress']);
+gulp.task("html-reload", function() {
+ return gulp.src('')
+ .pipe(livereload());
});
-// Build task for production
-gulp.task('build', ['fonts-compile', 'styles-compile', 'js-compile', 'js-main-vendor-compile', 'js-alt-vendor-compile' ]);
\ No newline at end of file
+// Runs with --production if debug is False or there are no local settings
+gulp.task('build', [
+ 'fonts-compile',
+ 'styles-compile',
+ 'js-vendor-compile',
+ 'js-compile',
+ 'js-browserify',
+ ]);
+
+// Runs when debug is True and when runserver/collectstatic is called
+// Watches the CSS/JS for changes and copies them over to static AND staticfiles when done
+gulp.task('watch', ['build'], function() {
+ livereload.listen();
+ gulp.watch('tabbycat/templates/scss/**/*.scss', ['styles-compile']);
+ gulp.watch('tabbycat/templates/js-standalones/*.js', ['js-compile']);
+ gulp.watch('tabbycat/templates/js-bundles/*.js', ['js-browserify']);
+ gulp.watch('tabbycat/templates/**/*.vue', ['js-browserify']);
+ gulp.watch('tabbycat/**/*.html', ['html-reload']);
+});
diff --git a/Procfile b/Procfile
index 0c1a04e6715..fbd46315ebc 100644
--- a/Procfile
+++ b/Procfile
@@ -1,4 +1,5 @@
# production
-web: waitress-serve --port=$PORT wsgi:application
+web: sh -c 'cd ./tabbycat/ && waitress-serve --port=$PORT wsgi:application'
+
# debug
-#web: waitress-serve --port=$PORT --expose-tracebacks wsgi:application
+#web: sh -c 'cd ./tabbycat/ && waitress-serve --port=$PORT --expose-tracebacks wsgi:application'
diff --git a/README.md b/README.md
index 3ce73229f12..b67518ff8b9 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,12 @@
# Tabbycat
-[![Docs](https://readthedocs.org/projects/tabbycat/badge/?version=latest)](http://tabbycat.readthedocs.io/en/latest/) [![Docs](https://readthedocs.org/projects/tabbycat/badge/?version=stable)](http://tabbycat.readthedocs.io/en/stable/) [![Build Status](https://travis-ci.org/czlee/tabbycat.svg?branch=develop)](https://travis-ci.org/czlee/tabbycat) [![Dependency Status](https://www.versioneye.com/user/projects/574bd0dace8d0e00473733b5/badge.svg?style=flat)](https://www.versioneye.com/user/projects/574bd0dace8d0e00473733b5)
+[![Docs](https://readthedocs.org/projects/tabbycat/badge/?version=latest)](http://tabbycat.readthedocs.io/en/latest/) [![Docs](https://readthedocs.org/projects/tabbycat/badge/?version=stable)](http://tabbycat.readthedocs.io/en/stable/) [![Build Status](https://travis-ci.org/czlee/tabbycat.svg?branch=develop)](https://travis-ci.org/czlee/tabbycat) [![Dependency Status](https://gemnasium.com/badges/github.com/czlee/tabbycat.svg)](https://gemnasium.com/github.com/czlee/tabbycat)
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)
-Tabbycat is a draw tabulation system for 3 vs 3 debating tournaments. It was used at Auckland Australs 2010, [Victoria Australs 2012](https://www.facebook.com/Australs2012), [Otago Australs 2014](http://australs2014.herokuapp.com), [Daejeon Australs 2015](http://australs2015.herokuapp.com) and [many other tournaments of all sizes](http://tabbycat.readthedocs.io/en/stable/about/tournament-history.html).
+Tabbycat is a draw tabulation system for 3 vs 3 debating tournaments. It was used at Australs in Auckland 2010, [Wellington 2012](https://www.facebook.com/Australs2012), [Dunedin 2014](http://australs2014.herokuapp.com), [Daejeon 2015](http://australs2015.herokuapp.com) and [Perth 2016](http://australs2016.herokuapp.com), as well as [many other tournaments of all sizes](http://tabbycat.readthedocs.io/en/stable/about/tournament-history.html).
-Our **demo site** is at [tabbycatdebate.herokuapp.com](http://tabbycatdebate.herokuapp.com/). It's normally up, but its form will vary from time to time as we set up new feature demos for people. If it's down and you'd like to see it, or if you want to play with it as if you were running a tournament, [contact us](#authors-and-contacts). To see a post-tournament website, have a look at the [Daejeon Australs 2015 tab website](http://australs2015.herokuapp.com).
+Our **demo site** is at [tabbycatdebate.herokuapp.com](http://tabbycatdebate.herokuapp.com/). It's normally up, but its form will vary from time to time as we set up new feature demos for people. If it's down and you'd like to see it, or if you want to play with it as if you were running a tournament, [contact us](#authors-and-contacts). To see a post-tournament website, have a look at the [Perth Australs 2016 tab website](http://australs2016.herokuapp.com).
## Features
diff --git a/actionlog/urls.py b/actionlog/urls.py
deleted file mode 100644
index af1e7dcab24..00000000000
--- a/actionlog/urls.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from django.conf.urls import url
-
-from . import views
-
-urlpatterns = [
- url(r'^latest_actions/$', views.latest_actions, name='latest_actions'),
-]
diff --git a/actionlog/views.py b/actionlog/views.py
deleted file mode 100644
index 0c7b80cf67f..00000000000
--- a/actionlog/views.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from django.template import Template, Context
-import json
-
-from .models import ActionLogEntry
-from utils.views import *
-import datetime
-
-@login_required
-@tournament_view
-def latest_actions(request, t):
- action_objects = []
- actions = ActionLogEntry.objects.filter(tournament=t).order_by(
- '-timestamp')[:15].select_related('user', 'debate', 'ballot_submission')
-
- timestamp_template = Template("{% load humanize %}{{ t|naturaltime }}")
- for a in actions:
- action_objects.append({
- 'user': a.user.username if a.user else a.ip_address or "anonymous",
- 'type': a.get_type_display(),
- 'param': a.get_parameters_display(),
- 'timestamp': timestamp_template.render(Context({'t': a.timestamp})),
- })
-
- return HttpResponse(json.dumps(action_objects), content_type="text/json")
diff --git a/adjallocation/admin.py b/adjallocation/admin.py
deleted file mode 100644
index afdcbb87897..00000000000
--- a/adjallocation/admin.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from django.contrib import admin
-
-from .models import DebateAdjudicator
-
-# ==============================================================================
-# Debate Adjudicators
-# ==============================================================================
-
-class DebateAdjudicatorAdmin(admin.ModelAdmin):
- list_display = ('debate', 'adjudicator', 'type')
- search_fields = ('adjudicator__name', 'type')
- raw_id_fields = ('debate',)
-
-admin.site.register(DebateAdjudicator, DebateAdjudicatorAdmin)
diff --git a/adjallocation/models.py b/adjallocation/models.py
deleted file mode 100644
index 870057cf656..00000000000
--- a/adjallocation/models.py
+++ /dev/null
@@ -1,111 +0,0 @@
-from django.db import models
-
-from tournaments.models import SRManager
-from participants.models import Adjudicator
-
-class DebateAdjudicator(models.Model):
- TYPE_CHAIR = 'C'
- TYPE_PANEL = 'P'
- TYPE_TRAINEE = 'T'
-
- TYPE_CHOICES = (
- (TYPE_CHAIR, 'chair'),
- (TYPE_PANEL, 'panellist'),
- (TYPE_TRAINEE, 'trainee'),
- )
-
- objects = SRManager()
-
- debate = models.ForeignKey('draw.Debate')
- adjudicator = models.ForeignKey('participants.Adjudicator')
- type = models.CharField(max_length=2, choices=TYPE_CHOICES)
- timing_confirmed = models.NullBooleanField(verbose_name="Available? ")
-
- def __str__(self):
- return '{} in {}'.format(self.adjudicator, self.debate)
-
- class Meta:
- unique_together = ('debate', 'adjudicator')
-
-
-class AdjudicatorConflict(models.Model):
- adjudicator = models.ForeignKey('participants.Adjudicator')
- team = models.ForeignKey('participants.Team')
-
- class Meta:
- verbose_name = "adjudicator-team conflict"
-
-class AdjudicatorAdjudicatorConflict(models.Model):
- adjudicator = models.ForeignKey('participants.Adjudicator', related_name="source_adjudicator")
- conflict_adjudicator = models.ForeignKey('participants.Adjudicator', related_name="target_adjudicator", verbose_name="Adjudicator")
-
- class Meta:
- verbose_name = "adjudicator-adjudicator conflict"
-
-class AdjudicatorInstitutionConflict(models.Model):
- adjudicator = models.ForeignKey('participants.Adjudicator')
- institution = models.ForeignKey('participants.Institution')
-
- class Meta:
- verbose_name = "adjudicator-institution conflict"
-
-class AdjudicatorAllocation:
- """Not a model, just a container object for the adjudicators on a panel."""
-
- def __init__(self, debate, chair=None, panel=None):
- self.debate = debate
- self.chair = chair
- self.panel = panel or []
- self.trainees = []
-
- @property
- def list(self):
- """Panel only, excludes trainees."""
- a = [self.chair]
- a.extend(self.panel)
- return a
-
- def __str__(self):
- items = [str(getattr(x, "name", x)) for x in self.list]
- return ", ".join(items)
-
- def __iter__(self):
- """Iterates through all, including trainees."""
- if self.chair is not None:
- yield DebateAdjudicator.TYPE_CHAIR, self.chair
- for a in self.panel:
- yield DebateAdjudicator.TYPE_PANEL, a
- for a in self.trainees:
- yield DebateAdjudicator.TYPE_TRAINEE, a
-
- def __contains__(self, item):
- return item == self.chair or item in self.panel or item in self.trainees
-
- def __eq__(self, other):
- return self.debate == other.debate and self.chair == other.chair and \
- set(self.panel) == set(other.panel) and set(self.trainees) == set(other.trainees)
-
- def delete(self):
- """Delete existing, current allocation"""
- self.debate.debateadjudicator_set.all().delete()
- self.chair = None
- self.panel = []
- self.trainees = []
-
- @property
- def has_chair(self):
- return self.chair is not None
-
- @property
- def is_panel(self):
- return len(self.panel) > 0
-
- @property
- def valid(self):
- return self.has_chair and len(self.panel) % 2 == 0
-
- def save(self):
- self.debate.debateadjudicator_set.all().delete()
- for t, adj in self:
- if adj:
- DebateAdjudicator(debate=self.debate, adjudicator=adj, type=t).save()
diff --git a/adjfeedback/templates/adjudicator_source_list.html b/adjfeedback/templates/adjudicator_source_list.html
deleted file mode 100644
index 9d5ba33109b..00000000000
--- a/adjfeedback/templates/adjudicator_source_list.html
+++ /dev/null
@@ -1,111 +0,0 @@
-{% extends "base.html" %}
-{% load debate_tags %}
-{% load static %}
-
-{% block head-title %}View Feedback by Source{% endblock %}
-{% block page-title %}View Feedback by Source{% endblock %}
-
-{% block page-subnav-sections %}
-
- Feedback Overview
-
-
- Latest Feedback
-
-
- Find Feedback
-
- {% if pref.public_ballots_randomised or pref.public_feedback_randomised %}
-
- Randomised URLs
-
- {% endif %}
-
- Unsubmitted Ballots
-
- {% include "tables/table_search.html" %}
-{% endblock %}
-
-{% block page-subnav-actions %}
- Add Feedback
-{% endblock %}
-
-{% block content %}
-
- Please double-check this before announcing the break or releasing it to the
- public. The code that generates the break has not been robustly tested for
- corner cases. If there are errors, please take a snapshot of the database
- and a screenshot and send it to the developers.
-
-
-
- How to edit the break: You can edit the remarks in the
- right-hand column. Any team with a remark is considered ineligible for this
- break. (The algorithm will sometimes insert its own remarks where appropriate,
- based on the break size, institution cap and break category priorities.) After you do this, you must save the remarks
- before proceeding (otherwise your changes will be not be saved). Then,
- click the appropriate update button below.
-
-
- The procedure isn't perfect; if you have complicated break category rules
- (for example, if some teams are allowed to choose their preferred category)
- then you may have to iterate through remark editing and saving a few times
- for the algorithm to get what you want. As a last resort, you can edit the
- breaking teams list directly in the database through the
- Edit Data section (under Setup in the menu).
-
-
- {% else %}
- {% if pref.public_breaking_teams %}
-
- The public breaking teams configuration setting is
- enabled. As soon as you click the button, the breaking teams list will
- be visible on the public site, before you have a chance to
- double-check it! It is strongly recommended that you disable this
- setting on the
- tournament configuration page before generating the team
- breaks.
-
- {% else %}
-
-
- The break hasn't yet been generated. Would you like to generate
- the break for all categories?
-
-
- (It's safe to generate the break before all preliminary rounds are
- complete, if you're curious. You can regenerate it later.)
-
- {% if round.adjudicators_allocation_validity == 1 %}
- One or more debates don't have a chair.
- {% elif round.adjudicators_allocation_validity == 2 %}
- One or more debates have panels with an even number of adjudicators.
- {% endif %}
- Edit adjudicators.
-
- {% endif %}
- {% endif %}
-
- {% if round.draw_status = round.STATUS_RELEASED and not pref.public_draw %}
-
- You have released the draw, but it will not show to the public unless
- 'public view of draw' setting is enabled in
- this tournament's
- configuration.
-
- {% endif %}
-
- {% if round.motions_released and not pref.public_motions %}
-
- You have released the motions, but they will not show to the public unless the
- 'public view of motions' setting is enabled in
- this tournament's
- configuration.
-
@@ -199,39 +199,42 @@
-
-
+
{% endblock content %}
-{% block extra-js %}
+{% block extra-css %}
+
+
+{% endblock extra-css %}
+
+{% block js %}
+ {{ block.super }}
+
-{% endblock extra-js %}
+{% endblock js %}
diff --git a/tabbycat/adjallocation/templates/edit_adj_allocation.html b/tabbycat/adjallocation/templates/edit_adj_allocation.html
new file mode 100644
index 00000000000..ae694690927
--- /dev/null
+++ b/tabbycat/adjallocation/templates/edit_adj_allocation.html
@@ -0,0 +1,41 @@
+{% extends "base.html" %}
+{% load debate_tags %}
+
+{% block nav %}{% endblock %}
+{% block header %}{% endblock %}
+{% block subheader %}{% endblock %}
+{% block footer %}{% endblock %}
+{% block page-subnav-sections %}{% endblock page-subnav-sections %}
+
+{% block content %}
+
+
+
+
+{% endblock content %}
+
+{% block js %}
+
+ {{ block.super }}
+{% endblock js %}
diff --git a/adjallocation/urls.py b/tabbycat/adjallocation/urls.py
similarity index 53%
rename from adjallocation/urls.py
rename to tabbycat/adjallocation/urls.py
index 538e625684f..11b4f48ab59 100644
--- a/adjallocation/urls.py
+++ b/tabbycat/adjallocation/urls.py
@@ -1,25 +1,38 @@
from django.conf.urls import url
from . import views
-from participants.models import Adjudicator
urlpatterns = [
- url(r'^create/$',
+ # Old busted
+ url(r'^create_old/$',
views.create_adj_allocation,
name='create_adj_allocation'),
- url(r'^edit/$',
+ url(r'^edit_old/$',
views.draw_adjudicators_edit,
name='draw_adjudicators_edit'),
- url(r'^_get/$',
+ url(r'^_get_old/$',
views.draw_adjudicators_get,
name='draw_adjudicators_get'),
- url(r'^save/$',
- views.SaveAdjudicatorsView.as_view(),
- name='save_adjudicators'),
url(r'^_update_importance/$',
views.update_debate_importance,
name='update_debate_importance'),
- url(r'^conflicts/$',
+ url(r'^conflicts_old/$',
views.adj_conflicts,
name='adj_conflicts'),
+ url(r'^save/$',
+ views.SaveAdjudicatorsView.as_view(),
+ name='save_adjudicators'),
+ # New Hotness
+ url(r'^edit/$',
+ views.EditAdjudicatorAllocationView.as_view(),
+ name='edit_adj_allocation'),
+ url(r'^create/$',
+ views.CreateAutoAllocation.as_view(),
+ name='create_auto_allocation'),
+ url(r'^importance/set/$',
+ views.SaveDebateImportance.as_view(),
+ name='save_debate_importance'),
+ url(r'^panel/set/$',
+ views.SaveDebatePanel.as_view(),
+ name='save_debate_panel'),
]
diff --git a/tabbycat/adjallocation/utils.py b/tabbycat/adjallocation/utils.py
new file mode 100644
index 00000000000..87b7e3e4e89
--- /dev/null
+++ b/tabbycat/adjallocation/utils.py
@@ -0,0 +1,376 @@
+import json
+import math
+from itertools import permutations
+
+from .models import AdjudicatorAdjudicatorConflict, AdjudicatorConflict, AdjudicatorInstitutionConflict, DebateAdjudicator
+
+from availability.models import ActiveAdjudicator
+from breakqual.utils import calculate_live_thresholds, determine_liveness
+from draw.models import DebateTeam
+from participants.models import Adjudicator, Team
+from participants.prefetch import populate_feedback_scores, populate_win_counts
+
+
+def adjudicator_conflicts_display(debates):
+ """Returns a dict mapping elements (debates) in `debates` to a list of
+ strings explaining conflicts between adjudicators and teams, and
+ conflicts between adjudicators and each other."""
+
+ adjteamconflicts = {}
+ for conflict in AdjudicatorConflict.objects.filter(adjudicator__debateadjudicator__debate__in=debates).distinct():
+ adjteamconflicts.setdefault(conflict.adjudicator_id, []).append(conflict.team_id)
+ adjinstconflicts = {}
+ for conflict in AdjudicatorInstitutionConflict.objects.filter(adjudicator__debateadjudicator__debate__in=debates).distinct():
+ adjinstconflicts.setdefault(conflict.adjudicator_id, []).append(conflict.institution_id)
+ adjadjconflicts = {}
+ for conflict in AdjudicatorAdjudicatorConflict.objects.filter(adjudicator__debateadjudicator__debate__in=debates).distinct():
+ adjadjconflicts.setdefault(conflict.adjudicator_id, []).append(conflict.conflict_adjudicator_id)
+
+ conflict_messages = {debate: [] for debate in debates}
+ for debate in debates:
+ for adjudicator in debate.adjudicators.all():
+ for team in debate.teams:
+ if team.id in adjteamconflicts.get(adjudicator.id, []):
+ conflict_messages[debate].append(
+ "Conflict between {adj} & {team}".format(
+ adj=adjudicator.name, team=team.short_name)
+ )
+ if team.institution_id in adjinstconflicts.get(adjudicator.id, []):
+ conflict_messages[debate].append(
+ "Conflict between {adj} & institution {inst} ({team})".format(
+ adj=adjudicator.name, team=team.short_name, inst=team.institution.code)
+ )
+
+ for adj1, adj2 in permutations(debate.adjudicators.all(), 2):
+ if adj2.id in adjadjconflicts.get(adj1.id, []):
+ conflict_messages[debate].append(
+ "Conflict between {adj} & {other}".format(
+ adj=adj1.name, other=adj2.name)
+ )
+
+ if adj2.institution_id in adjinstconflicts.get(adj1.id, []):
+ conflict_messages[debate].append(
+ "Conflict between {adj} & {inst} ({other})".format(
+ adj=adj1.name, other=adj2.name, inst=adj2.institution.code)
+ )
+
+ return conflict_messages
+
+
+def percentile(n, percent, key=lambda x: x):
+ """
+ Find the percentile of a sorted list of values.
+
+ @parameter n - a list of values. Note n MUST already be sorted.
+ @parameter percent - a float value from 0.0 to 1.0.
+ @parameter key - optional key function to compute value from each element of n.
+
+ @return - the percentile of the values
+ """
+ if not n:
+ return None
+ k = (len(n)-1) * percent
+ f = math.floor(k)
+ c = math.ceil(k)
+ if f == c:
+ return key(n[int(k)])
+ d0 = key(n[int(f)]) * (c-k)
+ d1 = key(n[int(c)]) * (k-f)
+ return d0+d1
+
+
+def get_adjs(r):
+
+ active = ActiveAdjudicator.objects.filter(
+ round=r).values_list(
+ 'adjudicator')
+ active = [(a[0], None) for a in active]
+
+ allocated_adjs = DebateAdjudicator.objects.select_related(
+ 'debate__round', 'adjudicator').filter(
+ debate__round=r).values_list(
+ 'adjudicator', 'debate')
+ allocated_ids = [a[0] for a in allocated_adjs]
+
+ # Remove active adjs that have been assigned to debates
+ unallocated_adjs = [a for a in active if a[0] not in allocated_ids]
+ all_ids = list(allocated_adjs) + list(unallocated_adjs)
+
+ # Grab all the actual adjudicator objects
+ active_adjs = Adjudicator.objects.select_related(
+ 'institution', 'tournament', 'tournament__current_round').in_bulk(
+ [a[0] for a in all_ids])
+
+ # Build a list of adjudicator objects (after setting their debate property)
+ round_adjs = []
+ for round_id, round_adj in zip(all_ids, list(active_adjs.values())):
+ round_adj.debate = round_id[1]
+ round_adjs.append(round_adj)
+
+ return round_adjs
+
+
+def populate_conflicts(adjs, teams):
+ # Grab all conflicts data and assign it
+ teamconflicts = AdjudicatorConflict.objects.filter(
+ adjudicator__in=adjs).values_list(
+ 'adjudicator', 'team')
+ institutionconflicts = AdjudicatorInstitutionConflict.objects.filter(
+ adjudicator__in=adjs).values_list(
+ 'adjudicator', 'institution')
+ adjconflicts = AdjudicatorAdjudicatorConflict.objects.filter(
+ adjudicator__in=adjs).values_list(
+ 'adjudicator', 'conflict_adjudicator')
+
+ for a in adjs:
+ a.personal_teams = [c[1] for c in teamconflicts if c[0] == a.id]
+ a.institutional_institutions = [a.institution.id]
+ a.institutional_institutions.extend(
+ [c[1] for c in institutionconflicts if c[0] == a.id and c[1] != a.institution.id])
+
+ # Adj-adj conflicts should be symmetric
+ a.personal_adjudicators = [c[1] for c in adjconflicts if c[0] == a.id]
+ a.personal_adjudicators += [c[0] for c in adjconflicts if c[1] == a.id]
+
+ for t in teams:
+ t.personal_adjudicators = [c[0] for c in teamconflicts if c[1] == t.id]
+        # For teams, conflicted_institutions is a list of adjs due to the asymmetric
+        # nature of adjs having multiple institutional conflicts
+ t.institutional_institutions = [t.institution_id]
+
+ return adjs, teams
+
+
+def populate_histories(adjs, teams, t, r):
+
+ da_histories = DebateAdjudicator.objects.filter(
+ debate__round__tournament=t, debate__round__seq__lt=r.seq).select_related(
+ 'debate__round').values_list(
+ 'adjudicator', 'debate', 'debate__round__seq').order_by('-debate__round__seq')
+ dt_histories = DebateTeam.objects.filter(
+ debate__round__tournament=t, debate__round__seq__lt=r.seq).select_related(
+ 'debate__round').values_list(
+ 'team', 'debate', 'debate__round__seq').order_by('-debate__round__seq')
+
+ for a in adjs:
+ hists = []
+        # Iterate over all DebateAdjudicators for this adj
+ for dah in [dah for dah in da_histories if dah[0] == a.id]:
+ # Find the relevant DebateTeams from the matching debates
+ hists.extend([{
+ 'team': dat[0],
+ 'debate': dah[1],
+ 'ago': r.seq - dah[2],
+ } for dat in dt_histories if dat[1] == dah[1]])
+        # From these DebateAdjudicators, find panellists from the same debates
+ hists.extend([{
+ 'adjudicator': dah2[0],
+ 'debate': dah2[1],
+ 'ago': r.seq - dah2[2],
+ } for dah2 in da_histories if dah2[1] == dah[1] and dah2 != dah])
+ a.histories = hists
+
+ for t in teams:
+ hists = []
+ # Iterate over the DebateTeams and match to teams
+ for dat in [dat for dat in dt_histories if dat[0] == t.id]:
+ # Once matched, find all DebateAdjudicators from that debate and
+ # add them as conflicts for this team
+ hists.extend([{
+ 'adjudicator': dah[0],
+ 'debate': dah[1],
+ 'ago': r.seq - dah[2],
+ } for dah in da_histories if dah[1] == dat[1]])
+ t.histories = hists
+
+ return adjs, teams
+
+
+def debates_to_json(draw, t, r):
+
+ data = [{
+ 'id': debate.id,
+ 'bracket': debate.bracket,
+ 'importance': debate.importance,
+ 'aff_team': debate.aff_team.id,
+ 'neg_team': debate.neg_team.id,
+ 'panel': [{
+ 'id': adj.id,
+ 'position': position,
+ } for adj, position in debate.adjudicators.with_debateadj_types()],
+
+ } for debate in draw]
+ return json.dumps(data)
+
+
+def adjs_to_json(adjs, regions, t):
+ """Converts to a standard JSON object for Vue components to use"""
+
+ populate_feedback_scores(adjs)
+ fw = t.current_round.feedback_weight
+ for adj in adjs:
+ adj.abs_score = adj.weighted_score(fw)
+
+ absolute_scores = [adj.abs_score for adj in adjs]
+ absolute_scores.sort()
+ percentile_cutoffs = [(100 - i, percentile(absolute_scores, i/100)) for i in range(0,100,10)]
+ percentile_cutoffs.reverse()
+
+ data = {}
+ for adj in adjs:
+ data[adj.id] = {
+ 'type': 'adjudicator',
+ 'id': adj.id,
+ 'name': adj.name,
+ 'gender': adj.gender,
+ 'gender_show': False,
+            'region': [r for r in regions if r['id'] == adj.institution.region_id][0] if adj.institution.region_id is not None else '',
+ 'region_show': False,
+ 'institution': {
+ 'id': adj.institution.id,
+ 'name': adj.institution.code,
+                'code': adj.institution.code,
+                'abbreviation': adj.institution.abbreviation
+ },
+ 'score': "%.1f" % adj.abs_score,
+ 'ranking': next(pc[0] for pc in percentile_cutoffs if pc[1] <= adj.abs_score),
+ 'histories': adj.histories,
+ 'conflicts': {
+ 'teams': adj.personal_teams,
+ 'institutions': adj.institutional_institutions,
+ 'adjudicators': adj.personal_adjudicators,
+ },
+ 'conflicted': {
+ 'hover': {'personal': False, 'institutional': False, 'history': False, 'history_ago': 99},
+ 'panel': {'personal': False, 'institutional': False, 'history': False, 'history_ago': 99}
+ }
+ }
+
+ return json.dumps(data)
+
+
+def teams_to_json(teams, regions, categories, t, r):
+ thresholds = {bc['id']: calculate_live_thresholds(bc, t, r) for bc in categories}
+
+ # populate team categories
+ tbcs = Team.break_categories.through.objects.filter(team__in=teams)
+ break_category_ids_by_team = {team.id: [] for team in teams}
+ for tbc in tbcs:
+ break_category_ids_by_team[tbc.team_id].append(tbc.breakcategory_id)
+
+ populate_win_counts(teams)
+
+ data = {}
+ for team in teams:
+ team_categories = break_category_ids_by_team[team.id]
+ break_categories = [{
+ 'id': bc['id'],
+ 'name': bc['name'],
+ 'seq': bc['seq'],
+ 'will_break': determine_liveness(thresholds[bc['id']], team.wins_count)
+ } for bc in categories if bc['id'] in team_categories]
+
+ data[team.id] = {
+ 'type': 'team',
+ 'id': team.id,
+ 'name': team.short_name,
+ 'long_name': team.long_name,
+ 'uses_prefix': team.use_institution_prefix,
+ 'speakers': [{
+ 'name': s.name,
+ 'gender': s.gender
+ } for s in team.speakers],
+ 'gender_show': False,
+ 'wins': team.wins_count,
+            'region': [r for r in regions if r['id'] == team.institution.region_id][0] if team.institution.region_id else '',
+ 'region_show': False,
+ # TODO: Searching for break cats here incurs extra queries; should be done earlier
+ 'categories': break_categories,
+ 'category_show': False,
+ 'institution': {
+ 'id': team.institution.id,
+ 'name': team.institution.code,
+                'code': team.institution.code,
+                'abbreviation': team.institution.abbreviation
+ },
+ 'histories': team.histories,
+ 'conflicts': {
+ 'teams': [], # No team-team conflicts
+ 'institutions': team.institutional_institutions,
+ 'adjudicators': team.personal_adjudicators
+ },
+ 'conflicted': {
+ 'hover': {'personal': False, 'institutional': False, 'history': False, 'history_ago': 99},
+ 'panel': {'personal': False, 'institutional': False, 'history': False, 'history_ago': 99}
+ }
+ }
+ return json.dumps(data)
+
+# REDUNDANT, although parts may be worth translating
+# class AllocationTableBuilder(TabbycatTableBuilder):
+
+# def liveness(self, team, teams_count, prelims, current_round):
+# live_info = {'text': team.wins_count, 'tooltip': ''}
+
+#         # The actual calculation should be shifted to be a cached method on
+# # the relevant break category
+# # print("teams count", teams_count)
+# # print("prelims", prelims)
+# # print("current_round", current_round)
+
+# highest_liveness = 3
+# for bc in team.break_categories.all():
+# # print(bc.name, bc.break_size)
+# import random
+# status = random.choice([1,2,3])
+# highest_liveness = 3
+# if status is 1:
+# live_info['tooltip'] += 'Definitely in for the %s break test' % bc.name
+# if highest_liveness != 2:
+# highest_liveness = 1 # Live not ins are the most important highlight
+# elif status is 2:
+# live_info['tooltip'] += 'Still live for the %s break test' % bc.name
+# highest_liveness = 2
+# elif status is 3:
+# live_info['tooltip'] += 'Cannot break in %s break test' % bc.name
+
+# if highest_liveness is 1:
+# live_info['class'] = 'bg-success'
+# elif highest_liveness is 2:
+# live_info['class'] = 'bg-warning'
+
+# return live_info
+
+# def add_team_wins(self, draw, round, key):
+# prelims = self.tournament.prelim_rounds(until=round).count()
+# teams_count = self.tournament.team_set.count()
+
+# wins_head = {
+# 'key': key,
+# 'tooltip': "Number of wins a team is on; "
+# }
+# wins_data = []
+# for d in draw:
+# team = d.aff_team if key is "AW" else d.neg_team
+# wins_data.append(self.liveness(team, teams_count, prelims, round.seq))
+
+# self.add_column(wins_head, wins_data)
+
+# def add_debate_importances(self, draw, round):
+# importance_head = {
+# 'key': 'importance',
+# 'icon': 'glyphicon-fire',
+# 'tooltip': "Set a debate's importance (higher receives better adjs)"
+# }
+# importance_data = [{
+# 'component': 'debate-importance',
+# 'id': d.id,
+# 'sort': d.importance,
+# 'importance': d.importance,
+# 'url': reverse_tournament(
+# 'set_debate_importance',
+# self.tournament,
+# kwargs={'round_seq': round.seq})
+# } for d in draw]
+
+# self.add_column(importance_head, importance_data)
diff --git a/adjallocation/views.py b/tabbycat/adjallocation/views.py
similarity index 59%
rename from adjallocation/views.py
rename to tabbycat/adjallocation/views.py
index 026eca083ba..267915bd904 100644
--- a/adjallocation/views.py
+++ b/tabbycat/adjallocation/views.py
@@ -1,21 +1,31 @@
import json
import logging
-from django.views.generic.base import View
from django.db.utils import IntegrityError
+from django.views.generic.base import TemplateView, View
+from django.http import HttpResponse, HttpResponseBadRequest
+from django.shortcuts import render
+from actionlog.mixins import LogActionMixin
from actionlog.models import ActionLogEntry
+from breakqual.utils import categories_ordered
from draw.models import Debate, DebateTeam
from participants.models import Adjudicator, Team
+from participants.utils import regions_ordered
from tournaments.mixins import RoundMixin
-from utils.mixins import SuperuserRequiredMixin
-from utils.views import *
+from utils.mixins import JsonDataResponseView, SuperuserRequiredMixin
+from utils.views import admin_required, expect_post, round_view
+
from .allocator import allocate_adjudicators
+from .allocation import AdjudicatorAllocation
from .hungarian import HungarianAllocator
-from .models import AdjudicatorAllocation, AdjudicatorConflict, AdjudicatorInstitutionConflict, AdjudicatorAdjudicatorConflict, DebateAdjudicator
+from .models import AdjudicatorAdjudicatorConflict, AdjudicatorConflict, AdjudicatorInstitutionConflict, DebateAdjudicator
+from .utils import adjs_to_json, debates_to_json, get_adjs, populate_conflicts, populate_histories, teams_to_json
+
logger = logging.getLogger(__name__)
+
@admin_required
@expect_post
@round_view
@@ -82,58 +92,18 @@ def calculate_prior_adj_genders(team):
regions = round.tournament.region_set.order_by('name')
break_categories = round.tournament.breakcategory_set.order_by(
'seq').exclude(is_general=True)
+ # TODO: colors below are redundant
colors = ["#C70062", "#00C79B", "#B1E001", "#476C5E",
"#777", "#FF2983", "#6A268C", "#00C0CF", "#0051CF"]
- context['regions'] = list(zip(regions, colors + ["black"]
- * (len(regions) - len(colors))))
+ context['regions'] = list(zip(regions, colors + ["black"] * (len(regions) -
+ len(colors))))
context['break_categories'] = list(zip(
- break_categories, colors + ["black"] * (len(break_categories) - len(colors))))
+ break_categories, colors + ["black"] * (len(break_categories) -
+ len(colors))))
return render(request, "draw_adjudicators_edit.html", context)
-def _json_adj_allocation(debates, unused_adj):
-
- obj = {}
-
- def _adj(a):
-
- if a.institution.region:
- region_name = "region-%s" % a.institution.region.id
- else:
- region_name = ""
-
- return {
- 'id': a.id,
- 'name': a.name,
- 'institution': a.institution.short_code,
- 'is_unaccredited': a.is_unaccredited,
- 'gender': a.gender,
- 'region': region_name
- }
-
- def _debate(d):
- r = {}
- if d.adjudicators.chair:
- r['chair'] = _adj(d.adjudicators.chair)
- r['panel'] = [_adj(a) for a in d.adjudicators.panel]
- r['trainees'] = [_adj(a) for a in d.adjudicators.trainees]
- return r
-
- obj['debates'] = dict((d.id, _debate(d)) for d in debates)
- obj['unused'] = [_adj(a) for a in unused_adj]
-
- return HttpResponse(json.dumps(obj))
-
-
-@admin_required
-@round_view
-def draw_adjudicators_get(request, round):
- draw = round.get_draw()
-
- return _json_adj_allocation(draw, round.unused_adjudicators())
-
-
class SaveAdjudicatorsView(SuperuserRequiredMixin, RoundMixin, View):
def post(self, request, *args, **kwargs):
@@ -156,10 +126,11 @@ def _extract_id(s):
adjs = [Adjudicator.objects.get(id=int(x)) for x in values]
if key.startswith("chair_"):
if len(adjs) > 1:
- logger.warning("There was more than one chair for debate {}, only saving the first".format(allocation.debate))
+                    logger.warning("There was more than one chair for debate %s, not saving a chair", alloc.debate)
+                    continue
alloc.chair = adjs[0]
elif key.startswith("panel_"):
- alloc.panel.extend(adjs)
+ alloc.panellists.extend(adjs)
elif key.startswith("trainees_"):
alloc.trainees.extend(adjs)
@@ -169,18 +140,16 @@ def _extract_id(s):
revised = allocations[d_id]
if existing != revised:
changed += 1
- logger.info("Saving adjudicators for debate {}".format(debate))
- logger.info("{} --> {}".format(existing, revised))
- existing.delete()
+ logger.info("Saving adjudicators for debate %s", debate)
+ logger.info("%s --> %s", existing, revised)
try:
revised.save()
except IntegrityError:
- return HttpResponseBadRequest("""An adjudicator
- was allocated to the same debate multiple times. Please
- remove them and re-save.""")
+                return HttpResponseBadRequest("An adjudicator was allocated to the same "
+                                              "debate multiple times. Please remove them and re-save.")
if not changed:
- logger.warning("Didn't save any adjudicator allocations, nothing changed.")
+ logger.info("Didn't save any adjudicator allocations, nothing changed.")
return HttpResponse("There aren't any changes to save.")
ActionLogEntry.objects.log(type=ActionLogEntry.ACTION_TYPE_ADJUDICATORS_SAVE,
@@ -190,6 +159,48 @@ def _extract_id(s):
return HttpResponse("Saved changes for {} debates!".format(changed))
+def _json_adj_allocation(debates, unused_adj):
+
+ obj = {}
+
+ def _adj(a):
+
+ if a.institution.region:
+ region_name = "region-%s" % a.institution.region.id
+ else:
+ region_name = ""
+
+ return {
+ 'id': a.id,
+ 'name': a.name,
+ 'institution': a.institution.short_code,
+ 'is_unaccredited': a.is_unaccredited,
+ 'gender': a.gender,
+ 'region': region_name
+ }
+
+ def _debate(d):
+ r = {}
+ if d.adjudicators.chair:
+ r['chair'] = _adj(d.adjudicators.chair)
+ r['panel'] = [_adj(a) for a in d.adjudicators.panel]
+ r['trainees'] = [_adj(a) for a in d.adjudicators.trainees]
+ return r
+
+ obj['debates'] = dict((d.id, _debate(d)) for d in debates)
+ obj['unused'] = [_adj(a) for a in unused_adj]
+
+ return HttpResponse(json.dumps(obj))
+
+
+@admin_required
+@round_view
+def draw_adjudicators_get(request, round):
+ draw = round.get_draw()
+
+ return _json_adj_allocation(draw, round.unused_adjudicators())
+
+
@admin_required
@round_view
def adj_conflicts(request, round):
@@ -224,10 +235,95 @@ def add(type, adj_id, target_id):
try:
add('history', da.adjudicator_id, da.debate.aff_team.id)
except DebateTeam.DoesNotExist:
- pass # For when a Debate/DebateTeam may have been deleted
+ pass # For when a Debate/DebateTeam may have been deleted
try:
add('history', da.adjudicator_id, da.debate.neg_team.id)
except DebateTeam.DoesNotExist:
- pass # For when a Debate/DebateTeam may have been deleted
+ pass # For when a Debate/DebateTeam may have been deleted
return HttpResponse(json.dumps(data), content_type="text/json")
+
+
+# ==============================================================================
+# New UI
+# ==============================================================================
+
+class EditAdjudicatorAllocationView(RoundMixin, SuperuserRequiredMixin, TemplateView):
+
+ template_name = 'edit_adj_allocation.html'
+
+ def get_context_data(self, **kwargs):
+ t = self.get_tournament()
+ r = self.get_round()
+
+ draw = r.debate_set_with_prefetches(ordering=('room_rank',), select_related=(), speakers=False, divisions=False)
+
+ teams = Team.objects.filter(debateteam__debate__round=r).prefetch_related('speaker_set')
+ adjs = get_adjs(self.get_round())
+
+ regions = regions_ordered(t)
+ categories = categories_ordered(t)
+ adjs, teams = populate_conflicts(adjs, teams)
+ adjs, teams = populate_histories(adjs, teams, t, r)
+
+ kwargs['allRegions'] = json.dumps(regions)
+ kwargs['allCategories'] = json.dumps(categories)
+ kwargs['allDebates'] = debates_to_json(draw, t, r)
+ kwargs['allTeams'] = teams_to_json(teams, regions, categories, t, r)
+ kwargs['allAdjudicators'] = adjs_to_json(adjs, regions, t)
+
+ return super().get_context_data(**kwargs)
+
+
+class CreateAutoAllocation(LogActionMixin, RoundMixin, SuperuserRequiredMixin, JsonDataResponseView):
+
+ action_log_type = ActionLogEntry.ACTION_TYPE_ADJUDICATORS_AUTO
+
+ def get_data(self):
+ round = self.get_round()
+ if round.draw_status == round.STATUS_RELEASED:
+ return HttpResponseBadRequest("Draw is already released, unrelease draw to redo auto-allocations.")
+ if round.draw_status != round.STATUS_CONFIRMED:
+ return HttpResponseBadRequest("Draw is not confirmed, confirm draw to run auto-allocations.")
+
+ allocate_adjudicators(round, HungarianAllocator)
+ return debates_to_json(round.get_draw(), self.get_tournament(), round)
+
+
+class SaveDebateInfo(SuperuserRequiredMixin, RoundMixin, LogActionMixin, View):
+ pass
+
+
+class SaveDebateImportance(SaveDebateInfo):
+ action_log_type = ActionLogEntry.ACTION_TYPE_DEBATE_IMPORTANCE_EDIT
+
+ def post(self, request, *args, **kwargs):
+ debate_id = request.POST.get('debate_id')
+ debate_importance = request.POST.get('importance')
+
+ debate = Debate.objects.get(pk=debate_id)
+ debate.importance = debate_importance
+ debate.save()
+
+ return HttpResponse()
+
+
+class SaveDebatePanel(SaveDebateInfo):
+ action_log_type = ActionLogEntry.ACTION_TYPE_ADJUDICATORS_SAVE
+
+ def post(self, request, *args, **kwargs):
+ debate_id = request.POST.get('debate_id')
+ debate_panel = json.loads(request.POST.get('panel'))
+
+ to_delete = DebateAdjudicator.objects.filter(debate_id=debate_id).exclude(
+ adjudicator_id__in=[da['id'] for da in debate_panel])
+ for debateadj in to_delete:
+            logger.info("deleted %s", debateadj)
+ to_delete.delete()
+
+ for da in debate_panel:
+ debateadj, created = DebateAdjudicator.objects.update_or_create(debate_id=debate_id,
+ adjudicator_id=da['id'], defaults={'type': da['position']})
+        logger.info("%s %s", "created" if created else "updated", debateadj)
+
+ return HttpResponse()
diff --git a/adjfeedback/__init__.py b/tabbycat/adjfeedback/__init__.py
similarity index 100%
rename from adjfeedback/__init__.py
rename to tabbycat/adjfeedback/__init__.py
diff --git a/adjfeedback/admin.py b/tabbycat/adjfeedback/admin.py
similarity index 57%
rename from adjfeedback/admin.py
rename to tabbycat/adjfeedback/admin.py
index da83a845f3b..2b5f621d740 100644
--- a/adjfeedback/admin.py
+++ b/tabbycat/adjfeedback/admin.py
@@ -1,31 +1,34 @@
-from django.contrib import admin
-import django.contrib.messages as messages
+from django.contrib import admin, messages
from .models import AdjudicatorFeedback, AdjudicatorFeedbackQuestion
+
# ==============================================================================
# Adjudicator Feedback Questions
# ==============================================================================
class AdjudicatorFeedbackQuestionAdmin(admin.ModelAdmin):
- list_display = ('reference', 'text', 'seq', 'tournament', 'answer_type', 'required', 'chair_on_panellist', 'panellist_on_chair', 'panellist_on_panellist', 'team_on_orallist')
- list_filter = ('tournament',)
- ordering = ('tournament', 'seq')
+ list_display = ('reference', 'text', 'seq', 'tournament', 'answer_type',
+ 'required', 'from_adj', 'from_team')
+ list_filter = ('tournament',)
+ ordering = ('tournament', 'seq')
admin.site.register(AdjudicatorFeedbackQuestion, AdjudicatorFeedbackQuestionAdmin)
+
# ==============================================================================
# Adjudicator Feedback Answers
# ==============================================================================
class BaseAdjudicatorFeedbackAnswerInline(admin.TabularInline):
- model = None # must be set by subclasses
+ model = None # Must be set by subclasses
fields = ('question', 'answer')
extra = 1
def formfield_for_foreignkey(self, db_field, request, **kwargs):
if db_field.name == "question":
- kwargs["queryset"] = AdjudicatorFeedbackQuestion.objects.filter(answer_type__in=AdjudicatorFeedbackQuestion.ANSWER_TYPE_CLASSES_REVERSE[self.model])
+ kwargs["queryset"] = AdjudicatorFeedbackQuestion.objects.filter(
+ answer_type__in=AdjudicatorFeedbackQuestion.ANSWER_TYPE_CLASSES_REVERSE[self.model])
return super(BaseAdjudicatorFeedbackAnswerInline, self).formfield_for_foreignkey(db_field, request, **kwargs)
@@ -36,32 +39,35 @@ class RoundListFilter(admin.SimpleListFilter):
def lookups(self, request, model_admin):
from tournaments.models import Round
- return [(str(r.id), "[{}] {}".format(r.tournament.name, r.name)) for r in Round.objects.all()]
+ return [(str(r.id), "[{}] {}".format(r.tournament.short_name, r.name)) for r in Round.objects.all()]
def queryset(self, request, queryset):
return queryset.filter(source_team__debate__round_id=self.value()) | queryset.filter(source_adjudicator__debate__round_id=self.value())
+
# ==============================================================================
# Adjudicator Feedbacks
# ==============================================================================
class AdjudicatorFeedbackAdmin(admin.ModelAdmin):
- list_display = ('adjudicator', 'source_adjudicator', 'source_team', 'confirmed', 'score', 'version')
- search_fields = ('source_adjudicator__adjudicator__name', 'source_team__team__institution__code', 'source_team__team__reference', 'adjudicator__name', 'adjudicator__institution__code',)
+ list_display = ('adjudicator', 'confirmed', 'score', 'version',
+ 'source_adjudicator', 'source_team')
+    search_fields = ('adjudicator__name', 'score', 'source_adjudicator__adjudicator__name', 'source_team__team__reference')
raw_id_fields = ('source_team',)
- list_filter = (RoundListFilter, 'adjudicator', 'source_adjudicator', 'source_team')
- actions = ('mark_as_confirmed', 'mark_as_unconfirmed')
+ list_filter = (RoundListFilter, 'adjudicator')
+ actions = ('mark_as_confirmed', 'mark_as_unconfirmed')
- # dynamically generate inline tables for different answer types
+ # Dynamically generate inline tables for different answer types
inlines = []
for _answer_type_class in AdjudicatorFeedbackQuestion.ANSWER_TYPE_CLASSES_REVERSE:
- _inline_class = type(_answer_type_class.__name__ + "Inline", (BaseAdjudicatorFeedbackAnswerInline,),
- {"model": _answer_type_class, "__module__": __name__})
+ _inline_class = type(
+ _answer_type_class.__name__ + "Inline", (BaseAdjudicatorFeedbackAnswerInline,),
+ {"model": _answer_type_class, "__module__": __name__})
inlines.append(_inline_class)
def _construct_message_for_user(self, request, count, action, **kwargs):
- message_bit = "1 feedback submission was" if count == 1 else "{:d} feedback submissions were".format(count)
- self.message_user(request, message_bit + " " + action, **kwargs)
+ message_bit = "1 feedback submission was " if count == 1 else "{:d} feedback submissions were ".format(count)
+ self.message_user(request, message_bit + action, **kwargs)
def mark_as_confirmed(self, request, queryset):
original_count = queryset.count()
@@ -69,16 +75,19 @@ def mark_as_confirmed(self, request, queryset):
fb.confirmed = True
fb.save()
final_count = queryset.filter(confirmed=True).count()
- self._construct_message_for_user(request, final_count, "marked as confirmed. " \
- "Note that this may have caused other feedback to be marked as unconfirmed.")
+ self._construct_message_for_user(
+ request, final_count, "marked as confirmed. Note that this may " +
+ "have caused other feedback to be marked as unconfirmed.")
difference = original_count - final_count
if difference > 0:
- self._construct_message_for_user(request, difference, "did not end up as confirmed, " \
- "probably because other feedback that conflicts with it was also marked as confirmed.",
- level=messages.WARNING)
+ self._construct_message_for_user(
+ request, difference, "not marked as confirmed, probably " +
+ "because other feedback that conflicts with it was also " +
+ "marked as confirmed.", level=messages.WARNING)
def mark_as_unconfirmed(self, request, queryset):
count = queryset.update(confirmed=False)
- self._construct_message_for_user(request, count, "marked as unconfirmed.")
+ self._construct_message_for_user(request, count,
+ "marked as unconfirmed.")
admin.site.register(AdjudicatorFeedback, AdjudicatorFeedbackAdmin)
diff --git a/adjfeedback/apps.py b/tabbycat/adjfeedback/apps.py
similarity index 69%
rename from adjfeedback/apps.py
rename to tabbycat/adjfeedback/apps.py
index 5ee4ca9b2ee..75c97389df6 100644
--- a/adjfeedback/apps.py
+++ b/tabbycat/adjfeedback/apps.py
@@ -1,5 +1,6 @@
from django.apps import AppConfig
+
class AdjFeedbackConfig(AppConfig):
name = 'adjfeedback'
- verbose_name = "Adjudicator Feedback"
\ No newline at end of file
+ verbose_name = "Adjudicator Feedback"
diff --git a/adjfeedback/dbutils.py b/tabbycat/adjfeedback/dbutils.py
similarity index 89%
rename from adjfeedback/dbutils.py
rename to tabbycat/adjfeedback/dbutils.py
index 61a2e4dff47..9e0499594c8 100644
--- a/adjfeedback/dbutils.py
+++ b/tabbycat/adjfeedback/dbutils.py
@@ -5,10 +5,9 @@
by a front-end interface as well."""
from . import models as fm
-from draw.models import Debate, DebateTeam
+from draw.models import DebateTeam
from participants.models import Team, Adjudicator
from django.contrib.auth import get_user_model
-from results.result import BallotSet
from adjallocation.models import DebateAdjudicator
import random
@@ -26,28 +25,32 @@
}
COMMENTS = {
- 5: ["Amazeballs.", "Saw it exactly how we did.", "Couldn't have been better.", "Really insightful feedback."],
- 4: ["Great adjudication but parts were unclear.", "Clear but a bit long. Should break.", "Understood debate but missed a couple of nuances.", "Agreed with adjudication but feedback wasn't super helpful."],
- 3: ["Identified all main issues, didn't see interactions between them.", "Solid, would trust to get right, but couldn't articulate some points.", "Pretty good for a novice adjudicator.", "Know what (s)he's doing but reasoning a bit convoluted."],
- 2: ["Missed some crucial points in the debate.", "Stepped into debate, but not too significantly.", "Didn't give the other team enough credit for decent points.", "Had some awareness of the debate but couldn't identify main points."],
- 1: ["It's as if (s)he was listening to a different debate.", "Worst adjudication I've ever seen.", "Give his/her own analysis to rebut our arguments.", "Should not be adjudicating at this tournament."]
+ 5: ["Amazeballs.", "Saw it exactly how we did.", "Couldn't have been better.", "Really insightful feedback."], # flake8: noqa
+ 4: ["Great adjudication but parts were unclear.", "Clear but a bit long. Should break.", "Understood debate but missed a couple of nuances.", "Agreed with adjudication but feedback wasn't super helpful."], # flake8: noqa
+ 3: ["Identified all main issues, didn't see interactions between them.", "Solid, would trust to get right, but couldn't articulate some points.", "Pretty good for a novice adjudicator.", "Know what (s)he's doing but reasoning a bit convoluted."], # flake8: noqa
+ 2: ["Missed some crucial points in the debate.", "Stepped into debate, but not too significantly.", "Didn't give the other team enough credit for decent points.", "Had some awareness of the debate but couldn't identify main points."], # flake8: noqa
+ 1: ["It's as if (s)he was listening to a different debate.", "Worst adjudication I've ever seen.", "Give his/her own analysis to rebut our arguments.", "Should not be adjudicating at this tournament."] # flake8: noqa
}
+
def add_feedback_to_round(round, **kwargs):
"""Calls add_feedback() for every debate in the given round."""
for debate in round.get_draw():
add_feedback(debate, **kwargs)
+
def delete_all_feedback_for_round(round):
"""Deletes all feedback for the given round."""
fm.AdjudicatorFeedback.objects.filter(source_adjudicator__debate__round=round).delete()
fm.AdjudicatorFeedback.objects.filter(source_team__debate__round=round).delete()
+
def delete_feedback(debate):
"""Deletes all feedback for the given debate."""
fm.AdjudicatorFeedback.objects.filter(source_adjudicator__debate=debate).delete()
fm.AdjudicatorFeedback.objects.filter(source_team__debate=debate).delete()
+
def add_feedback(debate, submitter_type, user, probability=1.0, discarded=False, confirmed=False):
"""Adds feedback to a debate.
Specifically, adds feedback from both teams on the chair, and from every
@@ -58,9 +61,8 @@ def add_feedback(debate, submitter_type, user, probability=1.0, discarded=False,
``user`` is a User object.
``probability``, a float between 0.0 and 1.0, is the probability with which
feedback is generated.
- ``discarded`` and ``confirmed`` are whether the feedback should be discarded or
- confirmed, respectively."""
-
+ ``discarded`` and ``confirmed`` are whether the feedback should be
+ discarded or confirmed, respectively."""
if discarded and confirmed:
raise ValueError("Feedback can't be both discarded and confirmed!")
@@ -73,14 +75,14 @@ def add_feedback(debate, submitter_type, user, probability=1.0, discarded=False,
(debate.neg_team, debate.adjudicators.chair),
]
sources_and_subjects.extend(itertools.permutations(
- (adj for type, adj in debate.adjudicators), 2))
+ (adj for type, adj in debate.adjudicators), 2))
fbs = list()
for source, adj in sources_and_subjects:
if random.random() > probability:
- logger.info(" - Skipping {} on {}".format(source, adj))
+ logger.info(" - Skipping %s on %s", source, adj)
continue
fb = fm.AdjudicatorFeedback(submitter_type=submitter_type)
@@ -90,10 +92,10 @@ def add_feedback(debate, submitter_type, user, probability=1.0, discarded=False,
fb.adjudicator = adj
if isinstance(source, Adjudicator):
fb.source_adjudicator = DebateAdjudicator.objects.get(
- debate=debate, adjudicator=source)
+ debate=debate, adjudicator=source)
elif isinstance(source, Team):
fb.source_team = DebateTeam.objects.get(
- debate=debate, team=source)
+ debate=debate, team=source)
else:
raise TypeError("source must be an Adjudicator or a Team")
@@ -130,8 +132,8 @@ def add_feedback(debate, submitter_type, user, probability=1.0, discarded=False,
question.answer_type_class(question=question, feedback=fb, answer=answer).save()
name = source.name if isinstance(source, Adjudicator) else source.short_name
- logger.info("[{}] {} on {}: {}".format(debate.round.tournament.slug, name, adj, score))
+ logger.info("[%s] %s on %s: %s", debate.round.tournament.slug, name, adj, score)
fbs.append(fb)
- return fbs
\ No newline at end of file
+ return fbs
diff --git a/adjfeedback/forms.py b/tabbycat/adjfeedback/forms.py
similarity index 60%
rename from adjfeedback/forms.py
rename to tabbycat/adjfeedback/forms.py
index 6cd52fc6b8b..e3bee356e2c 100644
--- a/adjfeedback/forms.py
+++ b/tabbycat/adjfeedback/forms.py
@@ -5,26 +5,41 @@
from django.utils.translation import ugettext_lazy
from django.utils.translation import ugettext as _
-from .models import AdjudicatorFeedback, AdjudicatorFeedbackQuestion
-from tournaments.models import Round
-from participants.models import Adjudicator, Team
+from adjallocation.allocation import AdjudicatorAllocation
from adjallocation.models import DebateAdjudicator
from draw.models import Debate, DebateTeam
-from utils.forms import OptionalChoiceField
+from participants.models import Adjudicator, Team
from results.forms import TournamentPasswordField
+from tournaments.models import Round
+from utils.forms import OptionalChoiceField
+
+from .models import AdjudicatorFeedback, AdjudicatorFeedbackQuestion
+from .utils import expected_feedback_targets
logger = logging.getLogger(__name__)
+ADJUDICATOR_POSITION_NAMES = {
+ 'c': 'chair',
+ 'o': 'solo',
+ 'p': 'panellist',
+ 't': 'trainee'
+}
+
+
+# ==============================================================================
# General, but only used here
+# ==============================================================================
class IntegerRadioFieldRenderer(forms.widgets.RadioFieldRenderer):
"""Used by IntegerRadioSelect."""
-    outer_html = '{content}'
-    inner_html = '{choice_value}{sub_widgets}'
+    outer_html = '{content}'
+    inner_html = '{choice_value}{sub_widgets}'
+
class IntegerRadioSelect(forms.RadioSelect):
renderer = IntegerRadioFieldRenderer
+
class IntegerScaleField(forms.IntegerField):
"""Class to do integer scale fields."""
widget = IntegerRadioSelect
@@ -33,6 +48,7 @@ def __init__(self, *args, **kwargs):
super(IntegerScaleField, self).__init__(*args, **kwargs)
self.widget.choices = tuple((i, str(i)) for i in range(self.min_value, self.max_value+1))
+
class BlankUnknownBooleanSelect(forms.NullBooleanSelect):
"""Uses '--------' instead of 'Unknown' for the None choice."""
@@ -43,16 +59,19 @@ def __init__(self, attrs=None):
# skip the NullBooleanSelect constructor
super(forms.NullBooleanSelect, self).__init__(attrs, choices)
+
class BooleanSelectField(forms.NullBooleanField):
"""Widget to do boolean select fields following our conventions.
Specifically, if 'required', checks that an option was chosen."""
widget = BlankUnknownBooleanSelect
+
def clean(self, value):
value = super(BooleanSelectField, self).clean(value)
if self.required and value is None:
raise forms.ValidationError(_("This field is required."))
return value
+
class RequiredTypedChoiceField(forms.TypedChoiceField):
def clean(self, value):
value = super(RequiredTypedChoiceField, self).clean(value)
@@ -60,16 +79,21 @@ def clean(self, value):
raise forms.ValidationError(_("This field is required."))
return value
+
+# ==============================================================================
# Feedback Fields
+# ==============================================================================
class AdjudicatorFeedbackCheckboxFieldRenderer(forms.widgets.CheckboxFieldRenderer):
"""Used by AdjudicatorFeedbackCheckboxSelectMultiple."""
    outer_html = '{content}'
    inner_html = '{choice_value}{sub_widgets}'
+
class AdjudicatorFeedbackCheckboxSelectMultiple(forms.CheckboxSelectMultiple):
renderer = AdjudicatorFeedbackCheckboxFieldRenderer
+
class AdjudicatorFeedbackCheckboxSelectMultipleField(forms.MultipleChoiceField):
"""Class to do multiple choice fields following our conventions.
Specifically, converts to a string rather than a list."""
@@ -79,14 +103,17 @@ def clean(self, value):
value = super(AdjudicatorFeedbackCheckboxSelectMultipleField, self).clean(value)
return AdjudicatorFeedbackQuestion.CHOICE_SEPARATOR.join(value)
+
+# ==============================================================================
# Feedback Forms
+# ==============================================================================
class BaseFeedbackForm(forms.Form):
"""Base class for all dynamically-created feedback forms. Contains all
question fields."""
# parameters set at "compile time" by subclasses
- tournament = None # must be set by subclasses
+ _tournament = None # must be set by subclasses
_use_tournament_password = False
_confirm_on_submit = False
_enforce_required = True
@@ -96,6 +123,13 @@ def __init__(self, *args, **kwargs):
super(BaseFeedbackForm, self).__init__(*args, **kwargs)
self._create_fields()
+ @staticmethod
+ def coerce_target(value):
+ debate_id, adj_id = value.split('-')
+ debate = Debate.objects.get(id=int(debate_id))
+ adjudicator = Adjudicator.objects.get(id=int(adj_id))
+ return debate, adjudicator
+
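The new `coerce_target` staticmethod is the inverse of the `'%d-%d' % (debate.id, adj.id)` values that `adj_choice` builds further down: the form submits one composite string, and the coercion splits it back into a (debate, adjudicator) pair via two `objects.get()` lookups. The round trip can be sketched without the ORM (the helper names below are illustrative, not part of the codebase):

```python
def encode_target(debate_id, adj_id):
    # Mirrors adj_choice's value: "<debate id>-<adjudicator id>".
    return '%d-%d' % (debate_id, adj_id)

def decode_target(value):
    # Mirrors BaseFeedbackForm.coerce_target, with the Debate and
    # Adjudicator .objects.get() calls replaced by plain ints.
    debate_id, adj_id = value.split('-')
    return int(debate_id), int(adj_id)

assert decode_target(encode_target(12, 34)) == (12, 34)
```

Because the ids are plain positive integers, the single hyphen is an unambiguous separator; anything else would make the two-way `split('-')` unpack fail loudly with a `ValueError`.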
def _make_question_field(self, question):
if question.answer_type == question.ANSWER_TYPE_BOOLEAN_SELECT:
field = BooleanSelectField()
@@ -114,8 +148,7 @@ def _make_question_field(self, question):
else:
field = IntegerScaleField(min_value=min_value, max_value=max_value)
elif question.answer_type == question.ANSWER_TYPE_FLOAT:
- field = forms.FloatField(min_value=question.min_value,
- max_value=question.max_value)
+ field = forms.FloatField(min_value=question.min_value, max_value=question.max_value)
elif question.answer_type == question.ANSWER_TYPE_TEXT:
field = forms.CharField()
elif question.answer_type == question.ANSWER_TYPE_LONGTEXT:
@@ -133,17 +166,17 @@ def _make_question_field(self, question):
def _create_fields(self):
"""Creates dynamic fields in the form."""
# Feedback questions defined for the tournament
- adj_min_score = self.tournament.pref('adj_min_score')
- adj_max_score = self.tournament.pref('adj_max_score')
+ adj_min_score = self._tournament.pref('adj_min_score')
+ adj_max_score = self._tournament.pref('adj_max_score')
score_label = mark_safe("Overall score (%s=lowest, %s=highest)" % (adj_min_score, adj_max_score))
self.fields['score'] = forms.FloatField(min_value=adj_min_score, max_value=adj_max_score, label=score_label)
- for question in self.tournament.adj_feedback_questions.filter(**self.question_filter):
+ for question in self._tournament.adj_feedback_questions.filter(**self.question_filter):
self.fields[question.reference] = self._make_question_field(question)
# Tournament password field, if applicable
- if self._use_tournament_password and self.tournament.pref('public_use_password'):
- self.fields['password'] = TournamentPasswordField(tournament=self.tournament)
+ if self._use_tournament_password and self._tournament.pref('public_use_password'):
+ self.fields['password'] = TournamentPasswordField(tournament=self._tournament)
def save_adjudicatorfeedback(self, **kwargs):
"""Saves the question fields and returns the AdjudicatorFeedback.
@@ -152,17 +185,17 @@ def save_adjudicatorfeedback(self, **kwargs):
if self._confirm_on_submit:
self.discard_all_existing(adjudicator=kwargs['adjudicator'],
- source_adjudicator=kwargs['source_adjudicator'],
- source_team=kwargs['source_team'])
+ source_adjudicator=kwargs['source_adjudicator'],
+ source_team=kwargs['source_team'])
af.confirmed = True
af.score = self.cleaned_data['score']
af.save()
- for question in self.tournament.adj_feedback_questions.filter(**self.question_filter):
+ for question in self._tournament.adj_feedback_questions.filter(**self.question_filter):
if self.cleaned_data[question.reference] is not None:
- answer = question.answer_type_class(feedback=af, question=question,
- answer=self.cleaned_data[question.reference])
+ answer = question.answer_type_class(
+ feedback=af, question=question, answer=self.cleaned_data[question.reference])
answer.save()
return af
@@ -172,7 +205,8 @@ def discard_all_existing(self, **kwargs):
fb.discarded = True
fb.save()
-def make_feedback_form_class(source, *args, **kwargs):
+
+def make_feedback_form_class(source, tournament, *args, **kwargs):
"""Constructs a FeedbackForm class specific to the given source.
'source' is the Adjudicator or Team who is giving feedback.
'submission_fields' is a dict of fields that is passed directly as keyword
@@ -180,111 +214,109 @@ def make_feedback_form_class(source, *args, **kwargs):
'confirm_on_submit' is a bool, and indicates that this feedback should be
    saved as confirmed and all others discarded."""
if isinstance(source, Adjudicator):
- return make_feedback_form_class_for_adj(source, *args, **kwargs)
+ return make_feedback_form_class_for_adj(source, tournament, *args, **kwargs)
elif isinstance(source, Team):
- return make_feedback_form_class_for_team(source, *args, **kwargs)
+ return make_feedback_form_class_for_team(source, tournament, *args, **kwargs)
else:
raise TypeError('source must be Adjudicator or Team: %r' % source)
-def make_feedback_form_class_for_adj(source, submission_fields, confirm_on_submit=False,
- enforce_required=True, include_unreleased_draws=False):
+
+def make_feedback_form_class_for_adj(source, tournament, submission_fields, confirm_on_submit=False,
+ enforce_required=True, include_unreleased_draws=False):
"""Constructs a FeedbackForm class specific to the given source adjudicator.
Parameters are as for make_feedback_form_class."""
- def adj_choice(da):
- return (da.id, '%s (%s, %s)' % (da.adjudicator.name,
- da.debate.round.name, da.get_type_display()))
- def coerce_da(value):
- return DebateAdjudicator.objects.get(id=int(value))
+ def adj_choice(adj, debate, pos):
+ value = '%d-%d' % (debate.id, adj.id)
+ display = '%s (%s, %s)' % (adj.name, debate.round.name, ADJUDICATOR_POSITION_NAMES[pos])
+ return (value, display)
+
+ debateadjs = DebateAdjudicator.objects.filter(
+ debate__round__tournament=tournament, adjudicator=source,
+ debate__round__seq__lte=tournament.current_round.seq,
+ debate__round__stage=Round.STAGE_PRELIMINARY
+ ).order_by('-debate__round__seq').prefetch_related(
+ 'debate__debateadjudicator_set__adjudicator'
+ )
- debate_filter = {'debateadjudicator__adjudicator': source}
- if not source.tournament.pref('panellist_feedback_enabled'):
- debate_filter['debateadjudicator__type'] = DebateAdjudicator.TYPE_CHAIR # include only debates for which this adj was the chair
if include_unreleased_draws:
- debate_filter['round__draw_status__in'] = [Round.STATUS_CONFIRMED, Round.STATUS_RELEASED]
+ debateadjs = debateadjs.filter(debate__round__draw_status__in=[Round.STATUS_CONFIRMED, Round.STATUS_RELEASED])
else:
- debate_filter['round__draw_status'] = Round.STATUS_RELEASED
- debates = Debate.objects.filter(**debate_filter)
+ debateadjs = debateadjs.filter(debate__round__draw_status=Round.STATUS_RELEASED)
choices = [(None, '-- Adjudicators --')]
- # for an adjudicator, find every adjudicator on their panel except them
- choices.extend(adj_choice(da) for da in DebateAdjudicator.objects.filter(
- debate__in=debates).exclude(
- adjudicator=source).select_related(
- 'debate').order_by(
- '-debate__round__seq'))
+ for debateadj in debateadjs:
+ targets = expected_feedback_targets(debateadj, tournament.pref('feedback_paths'))
+ for target, pos in targets:
+ choices.append(adj_choice(target, debateadj.debate, pos))
class FeedbackForm(BaseFeedbackForm):
- tournament = source.tournament # BaseFeedbackForm setting
- _use_tournament_password = True # BaseFeedbackForm setting
+ _tournament = tournament # BaseFeedbackForm setting
+ _use_tournament_password = True # BaseFeedbackForm setting
_confirm_on_submit = confirm_on_submit
_enforce_required = enforce_required
- question_filter = dict(chair_on_panellist=True)
+ question_filter = dict(from_adj=True)
- debate_adjudicator = RequiredTypedChoiceField(choices=choices, coerce=coerce_da)
+ target = RequiredTypedChoiceField(choices=choices, coerce=BaseFeedbackForm.coerce_target, label='Adjudicator this feedback is about')
def save(self):
"""Saves the form and returns the AdjudicatorFeedback object."""
- da = self.cleaned_data['debate_adjudicator']
- sa = DebateAdjudicator.objects.get(adjudicator=source, debate=da.debate)
- kwargs = dict(adjudicator=da.adjudicator, source_adjudicator=sa, source_team=None)
+ debate, target = self.cleaned_data['target']
+ sa = DebateAdjudicator.objects.get(adjudicator=source, debate=debate)
+ kwargs = dict(adjudicator=target, source_adjudicator=sa, source_team=None)
kwargs.update(submission_fields)
return self.save_adjudicatorfeedback(**kwargs)
return FeedbackForm
-def make_feedback_form_class_for_team(source, submission_fields, confirm_on_submit=False,
- enforce_required=True, include_unreleased_draws=False):
+
+def make_feedback_form_class_for_team(source, tournament, submission_fields, confirm_on_submit=False,
+ enforce_required=True, include_unreleased_draws=False):
"""Constructs a FeedbackForm class specific to the given source team.
Parameters are as for make_feedback_form_class."""
+ def adj_choice(adj, debate, pos):
+ value = '%d-%d' % (debate.id, adj.id)
+ if pos == AdjudicatorAllocation.POSITION_CHAIR:
+ pos_text = '—chair gave oral'
+ elif pos == AdjudicatorAllocation.POSITION_PANELLIST:
+ pos_text = '—chair rolled, this panellist gave oral'
+ elif pos == AdjudicatorAllocation.POSITION_ONLY:
+ pos_text = ''
+ display = '{name} ({r}{pos})'.format(name=adj.name, r=debate.round.name, pos=pos_text)
+ return (value, display)
+
# Only include non-silent rounds for teams.
- debate_filter = {
- 'debateteam__team': source,
- 'round__silent': False,
- }
+ debates = Debate.objects.filter(
+ debateteam__team=source, round__silent=False,
+ round__seq__lte=tournament.current_round.seq,
+ round__stage=Round.STAGE_PRELIMINARY
+ ).order_by('-round__seq').prefetch_related('debateadjudicator_set__adjudicator')
if include_unreleased_draws:
- debate_filter['round__draw_status__in'] = [Round.STATUS_CONFIRMED, Round.STATUS_RELEASED]
+ debates = debates.filter(round__draw_status__in=[Round.STATUS_CONFIRMED, Round.STATUS_RELEASED])
else:
- debate_filter['round__draw_status'] = Round.STATUS_RELEASED
- debates = Debate.objects.filter(**debate_filter).order_by('-round__seq')
+ debates = debates.filter(round__draw_status=Round.STATUS_RELEASED)
choices = [(None, '-- Adjudicators --')]
for debate in debates:
- try:
- chair = DebateAdjudicator.objects.get(debate=debate, type=DebateAdjudicator.TYPE_CHAIR)
- except DebateAdjudicator.DoesNotExist:
- continue
- panel = DebateAdjudicator.objects.filter(debate=debate, type=DebateAdjudicator.TYPE_PANEL)
- if panel.exists():
- choices.append((chair.id, '{name} ({r} - chair gave oral)'.format(
- name=chair.adjudicator.name, r=debate.round.name)))
- for da in panel:
- choices.append((da.id, '{name} ({r} - chair rolled, this panellist gave oral)'.format(
- name=da.adjudicator.name, r=debate.round.name)))
- else:
- choices.append((chair.id, '{name} ({r})'.format(
- name=chair.adjudicator.name, r=debate.round.name)))
-
- def coerce_da(value):
- return DebateAdjudicator.objects.get(id=int(value))
+ for adj, pos in debate.adjudicators.voting_with_positions():
+ choices.append(adj_choice(adj, debate, pos))
class FeedbackForm(BaseFeedbackForm):
- tournament = source.tournament # BaseFeedbackForm setting
- _use_tournament_password = True # BaseFeedbackForm setting
+ _tournament = tournament # BaseFeedbackForm setting
+ _use_tournament_password = True # BaseFeedbackForm setting
_confirm_on_submit = confirm_on_submit
_enforce_required = enforce_required
- question_filter = dict(team_on_orallist=True)
+ question_filter = dict(from_team=True)
- debate_adjudicator = RequiredTypedChoiceField(choices=choices, coerce=coerce_da)
+ target = RequiredTypedChoiceField(choices=choices, coerce=BaseFeedbackForm.coerce_target)
def save(self):
# Saves the form and returns the m.AdjudicatorFeedback object
- da = self.cleaned_data['debate_adjudicator']
- st = DebateTeam.objects.get(team=source, debate=da.debate)
- kwargs = dict(adjudicator=da.adjudicator, source_adjudicator=None, source_team=st)
+ debate, target = self.cleaned_data['target']
+ st = DebateTeam.objects.get(team=source, debate=debate)
+ kwargs = dict(adjudicator=target, source_adjudicator=None, source_team=st)
kwargs.update(submission_fields)
return self.save_adjudicatorfeedback(**kwargs)
return FeedbackForm
-
diff --git a/adjfeedback/management/commands/generatefeedback.py b/tabbycat/adjfeedback/management/commands/generatefeedback.py
similarity index 65%
rename from adjfeedback/management/commands/generatefeedback.py
rename to tabbycat/adjfeedback/management/commands/generatefeedback.py
index b27803595e2..656031364ea 100644
--- a/adjfeedback/management/commands/generatefeedback.py
+++ b/tabbycat/adjfeedback/management/commands/generatefeedback.py
@@ -1,10 +1,10 @@
-from utils.management.base import RoundCommand, CommandError
-from ...dbutils import add_feedback, add_feedback_to_round, delete_all_feedback_for_round, delete_feedback
-
from django.contrib.auth import get_user_model
-from tournaments.models import Round
-from draw.models import Debate
+
from adjfeedback.models import AdjudicatorFeedback
+from draw.models import Debate
+from utils.management.base import CommandError, RoundCommand
+
+from ...dbutils import add_feedback, add_feedback_to_round, delete_all_feedback_for_round, delete_feedback
OBJECT_TYPE_CHOICES = ["round", "debate"]
SUBMITTER_TYPE_MAP = {
@@ -13,6 +13,7 @@
}
User = get_user_model()
+
class Command(RoundCommand):
help = "Adds randomly-generated feedback to the database"
@@ -20,17 +21,32 @@ class Command(RoundCommand):
def add_arguments(self, parser):
super(Command, self).add_arguments(parser)
- parser.add_argument("--debates", type=int, nargs="+", help="IDs of specific debates to add feedback to. Done in addition to rounds, if any.", default=[])
- parser.add_argument("-p", "--probability", type=float, help="Probability with which to add feedback", default=1.0)
- parser.add_argument("-T", "--submitter-type", type=str, help="Submitter type, either 'tabroom' or 'public'", choices=list(SUBMITTER_TYPE_MAP.keys()), default="tabroom")
- parser.add_argument("-u", "--user", type=str, help="Username of submitter", default="random")
+ parser.add_argument("--debates", type=int, nargs="+",
+ help="IDs of specific debates to add feedback to. "
+ "Done in addition to rounds, if any.",
+ default=[])
+ parser.add_argument("-p", "--probability", type=float,
+ help="Probability with which to add feedback",
+ default=1.0)
+ parser.add_argument("-T", "--submitter-type", type=str,
+ help="Submitter type, either 'tabroom' or 'public'",
+ choices=list(SUBMITTER_TYPE_MAP.keys()),
+ default="tabroom")
+ parser.add_argument("-u", "--user", type=str,
+ help="Username of submitter", default="random")
status = parser.add_mutually_exclusive_group()
- status.add_argument("-D", "--discarded", action="store_true", help="Make feedback discarded")
- status.add_argument("-c", "--confirmed", action="store_true", help="Make feedback confirmed")
-
- parser.add_argument("--clean", help="Remove all associated feedback first", action="store_true")
- parser.add_argument("--create-user", help="Create user if it doesn't exist", action="store_true")
+ status.add_argument("-D", "--discarded", action="store_true",
+ help="Make feedback discarded")
+ status.add_argument("-c", "--confirmed", action="store_true",
+ help="Make feedback confirmed")
+
+ parser.add_argument("--clean",
+ help="Remove all associated feedback first",
+ action="store_true")
+ parser.add_argument("--create-user",
+ help="Create user if it doesn't exist",
+ action="store_true")
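The reflowed `add_arguments` keeps `-D/--discarded` and `-c/--confirmed` inside `add_mutually_exclusive_group()`, so argparse itself rejects a command line that sets both. A standalone sketch of that pattern:

```python
import argparse

parser = argparse.ArgumentParser(prog="generatefeedback")
status = parser.add_mutually_exclusive_group()
status.add_argument("-D", "--discarded", action="store_true",
                    help="Make feedback discarded")
status.add_argument("-c", "--confirmed", action="store_true",
                    help="Make feedback confirmed")

opts = parser.parse_args(["-c"])
assert opts.confirmed and not opts.discarded

# Supplying both flags makes parse_args() print a usage error and exit.
```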
@staticmethod
def _get_user(options):
@@ -58,14 +74,17 @@ def handle(self, *args, **options):
if not self.get_rounds(options) and not options["debates"]:
raise CommandError("No rounds or debates were given. (Use --help for more info.)")
- super(Command, self).handle(*args, **options) # handles rounds
+ super(Command, self).handle(*args, **options) # Handles rounds
for tournament in self.get_tournaments(options):
for debate_id in options["debates"]:
try:
debate = Debate.objects.get(round__tournament=tournament, id=debate_id)
except Debate.DoesNotExist:
- self.stdout.write(self.style.WARNING("Warning: There is no debate with id {:d} for tournament {!r}, skipping".format(debate_id, tournament.slug)))
+ self.stdout.write(
+ self.style.WARNING("Warning: There is no debate with "
+ "id {:d} for tournament {!r}, "
+ "skipping".format(debate_id, tournament.slug)))
self.handle_debate(debate, **options)
def handle_round(self, round, **options):
@@ -89,4 +108,3 @@ def handle_debate(self, debate, **options):
add_feedback(debate, **self.feedback_kwargs(options))
except ValueError as e:
raise CommandError(e)
-
diff --git a/adjfeedback/management/commands/printmultiplefeedback.py b/tabbycat/adjfeedback/management/commands/printmultiplefeedback.py
similarity index 69%
rename from adjfeedback/management/commands/printmultiplefeedback.py
rename to tabbycat/adjfeedback/management/commands/printmultiplefeedback.py
index ef65c78ff62..1a8d903baec 100644
--- a/adjfeedback/management/commands/printmultiplefeedback.py
+++ b/tabbycat/adjfeedback/management/commands/printmultiplefeedback.py
@@ -1,12 +1,15 @@
from utils.management.base import TournamentCommand
+
class Command(TournamentCommand):
help = "Checks for feedback with more than one version."
def add_arguments(self, parser):
super(Command, self).add_arguments(parser)
- parser.add_argument("--num", "-n", type=int, help="Show feedback with at least this many versions", default=2)
+ parser.add_argument(
+ "--num", "-n", type=int,
+ help="Show feedback with at least this many versions", default=2)
def handle_tournament(self, tournament, **options):
@@ -21,8 +24,9 @@ def handle_tournament(self, tournament, **options):
source_team=feedback.source_team).order_by('version')
num = others.count()
if num >= options["num"]:
- self.stdout.write(self.style.MIGRATE_HEADING(" *** Adjudicator: {0}, from: {1}, {2:d} versions".format(adj, feedback.source, num)))
+ self.stdout.write(self.style.MIGRATE_HEADING(
+ " *** Adjudicator: {0}, from: {1}, {2:d} versions".format(adj, feedback.source, num)))
for other in others:
self.stdout.write(" {id:>3} {submitter:<12} {round:<4} {c} {version} {score:.1f}".format(
- score=other.score, version=other.version, round=other.round.abbreviation, submitter=other.submitter.username,
- id=other.id, c="c" if other.confirmed else "-"))
+ score=other.score, version=other.version, round=other.round.abbreviation, submitter=other.submitter.username,
+ id=other.id, c="c" if other.confirmed else "-"))
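The re-indented `write` above leans on `str.format` alignment specs: `{id:>3}` right-aligns the id in three columns, `{submitter:<12}` left-pads the username to twelve, and `{score:.1f}` rounds to one decimal place, keeping the per-version rows aligned in terminal output. For instance, with illustrative values:

```python
row = "    {id:>3} {submitter:<12} {round:<4} {c} {version} {score:.1f}".format(
    score=74.5, version=2, round="R3", submitter="tabroom",
    id=41, c="c")
print(row)  # "     41 tabroom      R3   c 2 74.5"
```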
diff --git a/adjfeedback/migrations/0001_initial.py b/tabbycat/adjfeedback/migrations/0001_initial.py
similarity index 100%
rename from adjfeedback/migrations/0001_initial.py
rename to tabbycat/adjfeedback/migrations/0001_initial.py
diff --git a/adjfeedback/migrations/0002_adjudicatortestscorehistory_adjudicator.py b/tabbycat/adjfeedback/migrations/0002_adjudicatortestscorehistory_adjudicator.py
similarity index 100%
rename from adjfeedback/migrations/0002_adjudicatortestscorehistory_adjudicator.py
rename to tabbycat/adjfeedback/migrations/0002_adjudicatortestscorehistory_adjudicator.py
diff --git a/adjfeedback/migrations/0003_auto_20160103_1927.py b/tabbycat/adjfeedback/migrations/0003_auto_20160103_1927.py
similarity index 100%
rename from adjfeedback/migrations/0003_auto_20160103_1927.py
rename to tabbycat/adjfeedback/migrations/0003_auto_20160103_1927.py
diff --git a/adjfeedback/migrations/0004_auto_20160109_1834.py b/tabbycat/adjfeedback/migrations/0004_auto_20160109_1834.py
similarity index 100%
rename from adjfeedback/migrations/0004_auto_20160109_1834.py
rename to tabbycat/adjfeedback/migrations/0004_auto_20160109_1834.py
diff --git a/adjfeedback/migrations/0005_auto_20160228_1838.py b/tabbycat/adjfeedback/migrations/0005_auto_20160228_1838.py
similarity index 100%
rename from adjfeedback/migrations/0005_auto_20160228_1838.py
rename to tabbycat/adjfeedback/migrations/0005_auto_20160228_1838.py
diff --git a/tabbycat/adjfeedback/migrations/0006_auto_20160716_1245.py b/tabbycat/adjfeedback/migrations/0006_auto_20160716_1245.py
new file mode 100644
index 00000000000..c4e37d9273c
--- /dev/null
+++ b/tabbycat/adjfeedback/migrations/0006_auto_20160716_1245.py
@@ -0,0 +1,30 @@
+# -*- coding: utf-8 -*-
+# Generated by Django 1.9.7 on 2016-07-16 12:45
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('adjfeedback', '0005_auto_20160228_1838'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='adjudicatorfeedbackquestion',
+ name='chair_on_panellist',
+ field=models.BooleanField(help_text='Despite the name, applies to all adjudicator-on-adjudicator feedback'),
+ ),
+ migrations.AlterField(
+ model_name='adjudicatorfeedbackquestion',
+ name='panellist_on_chair',
+ field=models.BooleanField(help_text='Not currently used, reserved for future use'),
+ ),
+ migrations.AlterField(
+ model_name='adjudicatorfeedbackquestion',
+ name='panellist_on_panellist',
+ field=models.BooleanField(help_text='Not currently used, reserved for future use'),
+ ),
+ ]
diff --git a/tabbycat/adjfeedback/migrations/0007_from_adj_from_team.py b/tabbycat/adjfeedback/migrations/0007_from_adj_from_team.py
new file mode 100644
index 00000000000..2e43c36ec87
--- /dev/null
+++ b/tabbycat/adjfeedback/migrations/0007_from_adj_from_team.py
@@ -0,0 +1,33 @@
+# -*- coding: utf-8 -*-
+# Generated by Django 1.9.7 on 2016-07-26 20:04
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('adjfeedback', '0006_auto_20160716_1245'),
+ ]
+
+ operations = [
+ migrations.RenameField(
+ model_name='adjudicatorfeedbackquestion',
+ old_name='chair_on_panellist',
+ new_name='from_adj',
+ ),
+ migrations.RenameField(
+ model_name='adjudicatorfeedbackquestion',
+ old_name='team_on_orallist',
+ new_name='from_team',
+ ),
+ migrations.RemoveField(
+ model_name='adjudicatorfeedbackquestion',
+ name='panellist_on_chair',
+ ),
+ migrations.RemoveField(
+ model_name='adjudicatorfeedbackquestion',
+ name='panellist_on_panellist',
+ ),
+ ]
diff --git a/tabbycat/adjfeedback/migrations/0008_auto_20160726_2007.py b/tabbycat/adjfeedback/migrations/0008_auto_20160726_2007.py
new file mode 100644
index 00000000000..95f4c7313d0
--- /dev/null
+++ b/tabbycat/adjfeedback/migrations/0008_auto_20160726_2007.py
@@ -0,0 +1,25 @@
+# -*- coding: utf-8 -*-
+# Generated by Django 1.9.7 on 2016-07-26 20:07
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('adjfeedback', '0007_from_adj_from_team'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='adjudicatorfeedbackquestion',
+ name='from_adj',
+ field=models.BooleanField(help_text='Adjudicators should be asked this question (about other adjudicators)'),
+ ),
+ migrations.AlterField(
+ model_name='adjudicatorfeedbackquestion',
+ name='from_team',
+ field=models.BooleanField(help_text='Teams should be asked this question'),
+ ),
+ ]
diff --git a/adjfeedback/migrations/__init__.py b/tabbycat/adjfeedback/migrations/__init__.py
similarity index 100%
rename from adjfeedback/migrations/__init__.py
rename to tabbycat/adjfeedback/migrations/__init__.py
diff --git a/adjfeedback/models.py b/tabbycat/adjfeedback/models.py
similarity index 88%
rename from adjfeedback/models.py
rename to tabbycat/adjfeedback/models.py
index 56f1f9a1af6..d98e1b798c0 100644
--- a/adjfeedback/models.py
+++ b/tabbycat/adjfeedback/models.py
@@ -2,8 +2,8 @@
from django.db import models
from django.utils.functional import cached_property
-from results.models import Submission
from adjallocation.models import DebateAdjudicator
+from results.models import Submission
class AdjudicatorTestScoreHistory(models.Model):
@@ -97,19 +97,15 @@ class AdjudicatorFeedbackQuestion(models.Model):
help_text="The order in which questions are displayed")
text = models.CharField(
max_length=255,
- help_text=
- "The question displayed to participants, e.g., \"Did you agree with the decision?\"")
+ help_text="The question displayed to participants, e.g., \"Did you agree with the decision?\"")
name = models.CharField(
max_length=30,
- help_text=
- "A short name for the question, e.g., \"Agree with decision\"")
+ help_text="A short name for the question, e.g., \"Agree with decision\"")
reference = models.SlugField(
help_text="Code-compatible reference, e.g., \"agree_with_decision\"")
- chair_on_panellist = models.BooleanField()
- panellist_on_chair = models.BooleanField() # for future use
- panellist_on_panellist = models.BooleanField() # for future use
- team_on_orallist = models.BooleanField()
+ from_adj = models.BooleanField(help_text="Adjudicators should be asked this question (about other adjudicators)")
+ from_team = models.BooleanField(help_text="Teams should be asked this question")
answer_type = models.CharField(max_length=2, choices=ANSWER_TYPE_CHOICES)
required = models.BooleanField(
@@ -118,19 +114,16 @@ class AdjudicatorFeedbackQuestion(models.Model):
min_value = models.FloatField(
blank=True,
null=True,
- help_text=
- "Minimum allowed value for numeric fields (ignored for text or boolean fields)")
+ help_text="Minimum allowed value for numeric fields (ignored for text or boolean fields)")
max_value = models.FloatField(
blank=True,
null=True,
- help_text=
- "Maximum allowed value for numeric fields (ignored for text or boolean fields)")
+ help_text="Maximum allowed value for numeric fields (ignored for text or boolean fields)")
choices = models.CharField(
max_length=500,
blank=True,
null=True,
- help_text=
- "Permissible choices for select one/multiple fields, separated by %r (ignored for other fields)"
+ help_text="Permissible choices for select one/multiple fields, separated by %r (ignored for other fields)"
% CHOICE_SEPARATOR)
class Meta:
@@ -198,7 +191,7 @@ def debate_adjudicator(self):
try:
return self.adjudicator.debateadjudicator_set.get(
debate=self.debate)
- except DebateAdjudicator.DoesNotExist as e:
+ except DebateAdjudicator.DoesNotExist:
return None
@property
@@ -215,6 +208,9 @@ def clean(self):
if not (self.source_adjudicator or self.source_team):
raise ValidationError(
"Either the source adjudicator or source team wasn't specified.")
+ if self.source_adjudicator and self.source_team:
+ raise ValidationError(
+ "There was both a source adjudicator and a source team.")
if self.adjudicator not in self.debate.adjudicators:
- raise ValidationError("Adjudicator did not see this debate")
- super(AdjudicatorFeedback, self).clean()
+ raise ValidationError("Adjudicator did not see this debate.")
+ return super(AdjudicatorFeedback, self).clean()
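The tightened `clean()` now enforces that exactly one of `source_adjudicator` and `source_team` is set: at least one (the existing check) and not both (the new one). The exactly-one rule can be sketched independently of the model (hypothetical standalone function, plain `ValueError` in place of Django's `ValidationError`):

```python
def check_sources(source_adjudicator, source_team):
    # Either argument may be None; exactly one must be truthy.
    if not (source_adjudicator or source_team):
        raise ValueError(
            "Either the source adjudicator or source team wasn't specified.")
    if source_adjudicator and source_team:
        raise ValueError(
            "There was both a source adjudicator and a source team.")

check_sources("DA:1", None)   # exactly one source: passes silently
```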
diff --git a/tabbycat/adjfeedback/progress.py b/tabbycat/adjfeedback/progress.py
new file mode 100644
index 00000000000..4a45599641a
--- /dev/null
+++ b/tabbycat/adjfeedback/progress.py
@@ -0,0 +1,378 @@
+"""Utilities to compute which feedback has been submitted and not submitted
+by participants of the tournament.
+
+There are a few possibilities for how to characterise a feedback submission:
+"""
+
+import logging
+from operator import attrgetter
+
+from adjallocation.models import DebateAdjudicator
+from adjfeedback.models import AdjudicatorFeedback
+from draw.models import DebateTeam
+from results.prefetch import populate_confirmed_ballots
+from tournaments.models import Round
+
+from .utils import expected_feedback_targets
+
+logger = logging.getLogger(__name__)
+
+
+class BaseFeedbackExpectedSubmissionTracker:
+ """Represents a single piece of expected feedback."""
+
+ expected = True
+
+ def __init__(self, source):
+ self.source = source # either a DebateTeam or a DebateAdjudicator
+
+ @property
+ def round(self):
+ return self.source.debate.round
+
+ @property
+ def count(self):
+ return len(self.acceptable_submissions())
+
+ @property
+ def fulfilled(self):
+ return self.count == 1
+
+ def acceptable_submissions(self):
+ if not hasattr(self, '_acceptable_submissions'):
+ self._acceptable_submissions = self.get_acceptable_submissions()
+ return self._acceptable_submissions
+
+ def get_acceptable_submissions(self):
+ """Subclasses should override this method to provide an iterable of
+ acceptable submissions. Users of this class might pre-populate
+ the `_acceptable_submissions` attribute to avoid duplicate database
+ hits."""
+ raise NotImplementedError
+
+ def submission(self):
+ if self.fulfilled:
+ return self.acceptable_submissions()[0]
+ else:
+ return None
+
+
+class FeedbackExpectedSubmissionFromTeamTracker(BaseFeedbackExpectedSubmissionTracker):
+ """Represents a single piece of expected feedback from a team."""
+
+ def __init__(self, source, enforce_orallist=True):
+ self.enforce_orallist = enforce_orallist
+ super().__init__(source)
+
+ def acceptable_targets(self):
+ """For a team, this must be the adjudicator who delivered the oral
+ adjudication. If the chair was rolled, then it is one of the majority
+ adjudicators; if the chair was in the majority, then it must be the
+ chair."""
+
+ if self.enforce_orallist and self.source.debate.confirmed_ballot:
+ majority = self.source.debate.confirmed_ballot.ballot_set.majority_adj
+ chair = self.source.debate.adjudicators.chair
+ if chair in majority:
+ return [chair]
+ else:
+ return majority
+ else:
+ return list(self.source.debate.adjudicators.voting())
+
+ def get_acceptable_submissions(self):
+ return self.source.adjudicatorfeedback_set.filter(confirmed=True,
+ source_team=self.source,
+ adjudicator__in=self.acceptable_targets()).select_related(
+ 'source_team', 'adjudicator', 'adjudicator__institution')
+
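The orallist rule in `acceptable_targets()` can be sketched in plain Python, with no Django models; the adjudicator names and the `votes` mapping below are hypothetical stand-ins:

```python
# Sketch of the orallist rule: teams owe feedback on the chair if the chair
# voted with the majority, otherwise on any one of the majority adjudicators.

def acceptable_targets(chair, votes):
    """`votes` maps each voting adjudicator to 'a' (affirmative) or 'n' (negative)."""
    winner = max("an", key=lambda side: sum(1 for v in votes.values() if v == side))
    majority = [adj for adj, vote in votes.items() if vote == winner]
    return [chair] if chair in majority else majority

# Chair in the majority ("aan"): the chair is the only acceptable target.
print(acceptable_targets("C", {"C": "a", "P1": "a", "P2": "n"}))  # ['C']
# Chair rolled ("ann"): either majority panellist is acceptable.
print(acceptable_targets("C", {"C": "a", "P1": "n", "P2": "n"}))  # ['P1', 'P2']
```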
+
+class FeedbackExpectedSubmissionFromAdjudicatorTracker(BaseFeedbackExpectedSubmissionTracker):
+ """Represents a single piece of expected feedback from an adjudicator."""
+
+ def __init__(self, source, target):
+ self.target = target
+ super().__init__(source)
+
+ def get_acceptable_submissions(self):
+ return self.source.adjudicatorfeedback_set.filter(confirmed=True,
+ adjudicator=self.target, source_adjudicator=self.source).select_related(
+ 'source_adjudicator', 'adjudicator', 'adjudicator__institution')
+
+ def acceptable_targets(self):
+ return [self.target]
+
+
+class FeedbackUnexpectedSubmissionTracker:
+ """Represents a single piece of unexpected feedback."""
+
+ expected = False
+ fulfilled = False
+
+ def __init__(self, feedback):
+ self.feedback = feedback # an AdjudicatorFeedback instance
+
+ @property
+ def round(self):
+ return self.feedback.round
+
+ @property
+ def count(self):
+ return 1
+
+ def submission(self):
+ return self.feedback
+
+
+class BaseFeedbackProgress:
+ """Class to compute feedback submitted or owed by a participant.
+
+ Rather than just counting and comparing aggregates, everything is compared
+ at the individual feedback level using objects called "trackers". This
+ ensures that feedbacks that were actually submitted match those that were
+ expected."""
+
+ def __init__(self, tournament):
+ self.show_unexpected = tournament.pref('show_unexpected_feedback')
+
+ def get_expected_trackers(self):
+ raise NotImplementedError
+
+ def get_submitted_feedback(self):
+ raise NotImplementedError
+
+ def expected_trackers(self):
+ if not hasattr(self, "_expected_trackers"):
+ self._expected_trackers = self.get_expected_trackers()
+ return self._expected_trackers
+
+ def submitted_feedback(self):
+ if not hasattr(self, "_submitted_feedback"):
+ self._submitted_feedback = self.get_submitted_feedback()
+ return self._submitted_feedback
+
+ def expected_feedback(self):
+ """Returns a list of AdjudicatorFeedback objects that are submitted
+ as expected (including where more are submitted than expected)."""
+ return [feedback for tracker in self.expected_trackers()
+ for feedback in tracker.acceptable_submissions()]
+
+ def unexpected_trackers(self):
+ """Returns a list of trackers for feedback that was submitted but not
+ expected to be there."""
+ if self.show_unexpected:
+ return [FeedbackUnexpectedSubmissionTracker(feedback) for feedback in
+ self.submitted_feedback() if feedback not in self.expected_feedback()]
+ else:
+ return []
+
+ def fulfilled_trackers(self):
+ """Returns a list of trackers that are fulfilled."""
+ return [tracker for tracker in self.expected_trackers() if tracker.fulfilled]
+
+ def trackers(self):
+ """Returns a list of all trackers, sorted by round."""
+ return sorted(self.expected_trackers() + self.unexpected_trackers(),
+ key=lambda x: x.round.seq)
+
+ def num_submitted(self):
+ """Returns the number of feedbacks that were submitted, including
+ duplicate and unexpected submissions."""
+ return len(self.submitted_feedback())
+
+ def num_expected(self):
+ """Returns the number of feedbacks that are expected from this participant."""
+ return len(self.expected_trackers())
+
+ def num_fulfilled(self):
+ """Returns the number of feedbacks that are correctly submitted,
+ excluding those where more than one feedback was submitted but only
+ one was expected."""
+ return len(self.fulfilled_trackers())
+
+ def num_unsubmitted(self):
+ return self.num_expected() - self.num_fulfilled()
+
+ def coverage(self):
+ """Returns the number of fulfilled feedbacks divided by the number
+ of expected feedbacks."""
+ if self.num_expected() == 0:
+ return 1.0
+ return self.num_fulfilled() / self.num_expected()
+
+ def _prefetch_tracker_acceptable_submissions(self, trackers, tracker_identifier, feedback_identifier):
+ trackers_by_identifier = {}
+ for tracker in trackers:
+ tracker._acceptable_submissions = []
+ identifier = tracker_identifier(tracker)
+ trackers_by_identifier[identifier] = tracker
+ for feedback in self.submitted_feedback():
+ identifier = feedback_identifier(feedback)
+ try:
+ tracker = trackers_by_identifier[identifier]
+ except KeyError:
+ continue
+ if feedback.adjudicator in tracker.acceptable_targets():
+ tracker._acceptable_submissions.append(feedback)
+
+
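The batching trick in `_prefetch_tracker_acceptable_submissions()` generalises beyond Django querysets. A minimal sketch, with a hypothetical `Tracker` class and `(key, target)` pairs standing in for the real feedback objects:

```python
# Sketch of the prefetch pattern: index trackers by identifier, then make a
# single pass over all submitted feedback, caching each item on the matching
# tracker instead of issuing one database query per tracker.

class Tracker:
    def __init__(self, key, targets):
        self.key = key
        self.targets = targets             # acceptable feedback targets
        self._acceptable_submissions = []  # pre-populated cache

def prefetch(trackers, feedback_pairs):
    by_key = {t.key: t for t in trackers}
    for key, target in feedback_pairs:
        tracker = by_key.get(key)
        if tracker is not None and target in tracker.targets:
            tracker._acceptable_submissions.append((key, target))

trackers = [Tracker("dt1", {"A"}), Tracker("dt2", {"B"})]
prefetch(trackers, [("dt1", "A"), ("dt1", "X"), ("dt9", "B")])
# dt1 caches one submission; the wrong-target and unknown-key pairs are dropped.
```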
+class FeedbackProgressForTeam(BaseFeedbackProgress):
+ """Class to compute feedback submitted or owed by a team."""
+
+ def __init__(self, team, tournament=None):
+ self.team = team
+ if tournament is None:
+ tournament = team.tournament
+ self.enforce_orallist = tournament.pref("show_splitting_adjudicators")
+ super().__init__(tournament)
+
+ @staticmethod
+ def _submitted_feedback_queryset_operations(queryset):
+ # this is also used by get_feedback_progress
+ return queryset.filter(confirmed=True,
+ source_team__debate__round__stage=Round.STAGE_PRELIMINARY).select_related(
+ 'adjudicator', 'adjudicator__institution', 'source_team__debate__round')
+
+ def get_submitted_feedback(self):
+ queryset = AdjudicatorFeedback.objects.filter(source_team__team=self.team)
+ return self._submitted_feedback_queryset_operations(queryset)
+
+ @staticmethod
+ def _debateteam_queryset_operations(queryset):
+ # this is also used by get_feedback_progress
+ debateteams = queryset.filter(
+ debate__ballotsubmission__confirmed=True,
+ debate__round__silent=False,
+ debate__round__stage=Round.STAGE_PRELIMINARY
+ ).select_related('debate', 'debate__round').prefetch_related(
+ 'debate__debateadjudicator_set__adjudicator')
+ populate_confirmed_ballots([dt.debate for dt in debateteams], ballotsets=True)
+ return debateteams
+
+ def _get_debateteams(self):
+ if not hasattr(self, '_debateteams'):
+ self._debateteams = self._debateteam_queryset_operations(self.team.debateteam_set)
+ return self._debateteams
+
+ def get_expected_trackers(self):
+ # There is one tracker for each debate that has a confirmed ballot and
+ # whose round is not silent.
+
+ debateteams = self._get_debateteams()
+ trackers = [FeedbackExpectedSubmissionFromTeamTracker(dt, self.enforce_orallist) for dt in debateteams]
+ self._prefetch_tracker_acceptable_submissions(trackers,
+ attrgetter('source'), attrgetter('source_team'))
+ return trackers
+
+
+class FeedbackProgressForAdjudicator(BaseFeedbackProgress):
+ """Class to compute feedback submitted or owed by an adjudicator."""
+
+ def __init__(self, adjudicator, tournament=None):
+ self.adjudicator = adjudicator
+ if tournament is None:
+ tournament = adjudicator.tournament
+ if tournament is None:
+ logger.warning("No tournament specified and adjudicator %s has no tournament", adjudicator)
+ else:
+ self.feedback_paths = tournament.pref('feedback_paths')
+ super().__init__(tournament)
+
+ @staticmethod
+ def _submitted_feedback_queryset_operations(queryset):
+ # this is also used by get_feedback_progress
+ return queryset.filter(confirmed=True,
+ source_adjudicator__debate__round__stage=Round.STAGE_PRELIMINARY).select_related(
+ 'adjudicator', 'adjudicator__institution', 'source_adjudicator__debate__round')
+
+ def get_submitted_feedback(self):
+ queryset = AdjudicatorFeedback.objects.filter(source_adjudicator__adjudicator=self.adjudicator)
+ return self._submitted_feedback_queryset_operations(queryset)
+
+ @staticmethod
+ def _debateadjudicator_queryset_operations(queryset):
+ # this is also used by get_feedback_progress
+ return queryset.filter(
+ debate__ballotsubmission__confirmed=True,
+ debate__round__stage=Round.STAGE_PRELIMINARY
+ ).select_related('debate', 'debate__round').prefetch_related(
+ 'debate__debateadjudicator_set__adjudicator')
+
+ def _get_debateadjudicators(self):
+ if not hasattr(self, '_debateadjudicators'):
+ self._debateadjudicators = self._debateadjudicator_queryset_operations(self.adjudicator.debateadjudicator_set)
+ return self._debateadjudicators
+
+ def get_expected_trackers(self):
+ """Trackers are as follows:
+ - Chairs owe on everyone in their panel.
+ - Panellists owe on chairs if the relevant tournament preference is enabled.
+ """
+ debateadjs = self._get_debateadjudicators()
+
+ trackers = []
+ for debateadj in debateadjs:
+ for target, _ in expected_feedback_targets(debateadj, self.feedback_paths):
+ trackers.append(FeedbackExpectedSubmissionFromAdjudicatorTracker(debateadj, target))
+
+ self._prefetch_tracker_acceptable_submissions(trackers,
+ attrgetter('source', 'target'), attrgetter('source_adjudicator', 'adjudicator'))
+
+ return trackers
+
+
+def get_feedback_progress(t):
+ """Returns a list of FeedbackProgressForTeam objects and a list of
+ FeedbackProgressForAdjudicator objects.
+
+ This function pre-populates the FeedbackProgress objects to avoid needing
+ duplicate SQL queries for every team and adjudicator, so it should be used
+ for performance when the feedback progress of all teams and adjudicators is
+ needed."""
+
+ teams_progress = []
+ adjs_progress = []
+
+ teams = t.team_set.prefetch_related('speaker_set').all()
+
+ submitted_feedback_by_team_id = {team.id: [] for team in teams}
+ submitted_feedback_teams = AdjudicatorFeedback.objects.filter(
+ source_team__team__in=teams).select_related('source_team')
+ submitted_feedback_teams = FeedbackProgressForTeam._submitted_feedback_queryset_operations(submitted_feedback_teams)
+ for feedback in submitted_feedback_teams:
+ submitted_feedback_by_team_id[feedback.source_team.team_id].append(feedback)
+
+ debateteams_by_team_id = {team.id: [] for team in teams}
+ debateteams = DebateTeam.objects.filter(team__in=teams)
+ debateteams = FeedbackProgressForTeam._debateteam_queryset_operations(debateteams)
+ for debateteam in debateteams:
+ debateteams_by_team_id[debateteam.team_id].append(debateteam)
+
+ for team in teams:
+ progress = FeedbackProgressForTeam(team)
+ progress._submitted_feedback = submitted_feedback_by_team_id[team.id]
+ progress._debateteams = debateteams_by_team_id[team.id]
+ teams_progress.append(progress)
+
+ adjudicators = t.adjudicator_set.all()
+
+ submitted_feedback_by_adj_id = {adj.id: [] for adj in adjudicators}
+ submitted_feedback_adjs = AdjudicatorFeedback.objects.filter(
+ source_adjudicator__adjudicator__in=adjudicators).select_related('source_adjudicator')
+ submitted_feedback_adjs = FeedbackProgressForAdjudicator._submitted_feedback_queryset_operations(submitted_feedback_adjs)
+ for feedback in submitted_feedback_adjs:
+ submitted_feedback_by_adj_id[feedback.source_adjudicator.adjudicator_id].append(feedback)
+
+ debateadjs_by_adj_id = {adj.id: [] for adj in adjudicators}
+ debateadjs = DebateAdjudicator.objects.filter(adjudicator__in=adjudicators)
+ debateadjs = FeedbackProgressForAdjudicator._debateadjudicator_queryset_operations(debateadjs)
+ for debateadj in debateadjs:
+ debateadjs_by_adj_id[debateadj.adjudicator_id].append(debateadj)
+
+ for adj in adjudicators:
+ progress = FeedbackProgressForAdjudicator(adj)
+ progress._submitted_feedback = submitted_feedback_by_adj_id[adj.id]
+ progress._debateadjudicators = debateadjs_by_adj_id[adj.id]
+ adjs_progress.append(progress)
+
+ return teams_progress, adjs_progress
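The seeding-then-bucketing idiom that `get_feedback_progress()` applies four times can be shown in plain Python; the owner ids and payload strings are hypothetical:

```python
# Sketch of the N+1 avoidance used in get_feedback_progress(): seed a dict with
# every owner id so zero-row owners still get an entry, run one bulk query, and
# bucket the rows by owner before constructing the per-owner progress objects.

def bucket_by_owner(owner_ids, rows):
    """`rows` is an iterable of (owner_id, payload) pairs from one bulk query."""
    buckets = {owner_id: [] for owner_id in owner_ids}
    for owner_id, payload in rows:
        buckets[owner_id].append(payload)
    return buckets

print(bucket_by_owner([1, 2], [(1, "fb1"), (1, "fb2")]))
# {1: ['fb1', 'fb2'], 2: []}
```

Owner 2 still appears with an empty list, mirroring the `{team.id: [] for team in teams}` seeding above, so teams with no feedback are constructed rather than skipped.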
diff --git a/tabbycat/adjfeedback/tables.py b/tabbycat/adjfeedback/tables.py
new file mode 100644
index 00000000000..290d666d9e7
--- /dev/null
+++ b/tabbycat/adjfeedback/tables.py
@@ -0,0 +1,157 @@
+import logging
+
+from utils.misc import reverse_tournament
+from utils.tables import TabbycatTableBuilder
+
+from .progress import FeedbackProgressForAdjudicator, FeedbackProgressForTeam
+
+logger = logging.getLogger(__name__)
+
+
+class FeedbackTableBuilder(TabbycatTableBuilder):
+
+ def add_breaking_checkbox(self, adjudicators, key="Breaking"):
+ breaking_header = {
+ 'key': 'B',
+ 'icon': 'glyphicon-star',
+ 'tooltip': 'Whether the adj is marked as breaking (click to mark)',
+ }
+ breaking_data = [{
+ 'text': '<input type="checkbox" adj-id="%s" %s>' % (adj.id, 'checked' if adj.breaking else ''),  # attribute names illustrative; original markup lost
+ 'sort': adj.breaking,
+ 'class': 'checkbox-target'
+ } for adj in adjudicators]
+
+ self.add_column(breaking_header, breaking_data)
+
+ def add_score_columns(self, adjudicators):
+
+ feedback_weight = self.tournament.current_round.feedback_weight
+ scores = {adj: adj.weighted_score(feedback_weight) for adj in adjudicators}
+
+ overall_header = {
+ 'key': 'Overall Score',
+ 'icon': 'glyphicon-signal',
+ 'tooltip': 'Current weighted score',
+ }
+ overall_data = [{
+ 'text': '%0.1f' % scores[adj] if scores[adj] is not None else 'N/A',
+ 'tooltip': 'Current weighted average of all feedback',
+ } for adj in adjudicators]
+ self.add_column(overall_header, overall_data)
+
+ test_header = {
+ 'key': 'Test Score',
+ 'icon': 'glyphicon-scale',
+ 'tooltip': 'Test score result',
+ }
+ test_data = [{
+ 'text': '%0.1f' % adj.test_score if adj.test_score is not None else 'N/A',
+ 'modal': adj.id,
+ 'class': 'edit-test-score',
+ 'tooltip': 'Click to edit test score',
+ } for adj in adjudicators]
+ self.add_column(test_header, test_data)
+
+ def add_feedback_graphs(self, adjudicators):
+ feedback_head = {
+ 'key': 'Feedback',
+ 'text': 'Feedback as <span class="chair">Chair</span> ' +
+ '<span class="panellist">Panellist</span> ' +
+ '<span class="trainee">Trainee</span>'  # span classes illustrative; original markup lost
+ }
+ feedback_data = [{
+ 'graphData': adj.feedback_data,
+ 'component': 'feedback-trend',
+ 'minScore': self.tournament.pref('adj_min_score'),
+ 'maxScore': self.tournament.pref('adj_max_score'),
+ 'roundSeq': len(self.tournament.prelim_rounds()),
+ } for adj in adjudicators]
+ self.add_column(feedback_head, feedback_data)
+
+ def add_feedback_link_columns(self, adjudicators):
+ link_head = {
+ 'key': 'VF',
+ 'icon': 'glyphicon-question-sign'
+ }
+ link_cell = [{
+ 'text': 'View Feedback',
+ 'class': 'view-feedback',
+ 'link': reverse_tournament('adjfeedback-view-on-adjudicator', self.tournament, kwargs={'pk': adj.pk})
+ } for adj in adjudicators]
+ self.add_column(link_head, link_cell)
+
+ def add_feedback_misc_columns(self, adjudicators):
+ if self.tournament.pref('enable_adj_notes'):
+ note_head = {
+ 'key': 'NO',
+ 'icon': 'glyphicon-list-alt'
+ }
+ note_cell = [{
+ 'text': 'Edit Note',
+ 'class': 'edit-note',
+ 'modal': str(adj.id) + '===' + str(adj.notes)
+ } for adj in adjudicators]
+ self.add_column(note_head, note_cell)
+
+ adjudications_head = {
+ 'key': 'DD',
+ 'icon': 'glyphicon-eye-open',
+ 'tooltip': 'Debates adjudicated'
+ }
+ adjudications_cell = [{'text': adj.debates} for adj in adjudicators]
+ self.add_column(adjudications_head, adjudications_cell)
+
+ avgs_head = {
+ 'key': 'AVGS',
+ 'icon': 'glyphicon-resize-full',
+ 'tooltip': 'Average Margin (top) and Average Score (bottom)'
+ }
+ avgs_cell = [{
+ 'text': "%0.1f %0.1f" % (adj.avg_margin if adj.avg_margin else 0, adj.avg_score if adj.avg_score else 0),
+ 'tooltip': 'Average Margin (top) and Average Score (bottom)'
+ } for adj in adjudicators]
+ self.add_column(avgs_head, avgs_cell)
+
+ def add_feedback_progress_columns(self, progress_list, key="P"):
+
+ def _owed_cell(progress):
+ owed = progress.num_unsubmitted()
+ cell = {
+ 'text': owed,
+ 'sort': owed,
+ 'class': 'text-danger strong' if owed > 0 else 'text-success'
+ }
+ return cell
+
+ owed_header = {
+ 'key': 'Owed',
+ 'icon': 'glyphicon-remove',
+ 'tooltip': 'Unsubmitted feedback ballots',
+ }
+ owed_data = [_owed_cell(progress) for progress in progress_list]
+ self.add_column(owed_header, owed_data)
+
+ if self._show_record_links:
+
+ def _record_link(progress):
+ if isinstance(progress, FeedbackProgressForTeam):
+ url_name = 'participants-team-record' if self.admin else 'participants-public-team-record'
+ pk = progress.team.pk
+ elif isinstance(progress, FeedbackProgressForAdjudicator):
+ url_name = 'participants-adjudicator-record' if self.admin else 'participants-public-adjudicator-record'
+ pk = progress.adjudicator.pk
+ else:
+ logger.error("Unrecognised progress type: %s", progress.__class__.__name__)
+ return ''
+ return reverse_tournament(url_name, self.tournament, kwargs={'pk': pk})
+
+ owed_link_header = {
+ 'key': 'Submitted',
+ 'icon': 'glyphicon-question-sign',
+ }
+ owed_link_data = [{
+ 'text': 'View Missing Feedback',
+ 'link': _record_link(progress)
+ } for progress in progress_list]
+ self.add_column(owed_link_header, owed_link_data)
diff --git a/adjfeedback/templates/add_feedback.html b/tabbycat/adjfeedback/templates/add_feedback.html
similarity index 65%
rename from adjfeedback/templates/add_feedback.html
rename to tabbycat/adjfeedback/templates/add_feedback.html
index 5b22a678e33..c35011512c7 100644
--- a/adjfeedback/templates/add_feedback.html
+++ b/tabbycat/adjfeedback/templates/add_feedback.html
@@ -1,24 +1,16 @@
-{% extends "base.html" %}
+{% extends "feedback_base.html" %}
{% load debate_tags %}
+{% load static %}
{% block head-title %}Who is the feedback from?{% endblock %}
{% block page-title %}Enter Feedback{% endblock %}
-{% block page-subnav-sections %}
-
+ {% endif %}
+
+ {{ block.super }}
+
+{% endblock %}
+
+{% block js %}
+
+ {{ block.super }}
+
+
+
+{% endblock js %}
diff --git a/adjfeedback/templates/public_add_feedback.html b/tabbycat/adjfeedback/templates/public_add_feedback.html
similarity index 86%
rename from adjfeedback/templates/public_add_feedback.html
rename to tabbycat/adjfeedback/templates/public_add_feedback.html
index 62ce140a0fd..f4b683043ca 100644
--- a/adjfeedback/templates/public_add_feedback.html
+++ b/tabbycat/adjfeedback/templates/public_add_feedback.html
@@ -4,10 +4,8 @@
{% block head-title %}Who are you?{% endblock %}
{% block sub-title %}click your name or your team on this list{% endblock %}
-{% block page-subnav-sections %}
- {% include "tables/table_search.html" %}
-{% endblock %}
-
+{% block page-subnav-sections %}{% endblock %}
+{% block page-subnav-actions %}{% endblock %}
{% block page-alerts %}{% endblock %}
{% block enter-feedback-adj-link %}
diff --git a/availability/__init__.py b/tabbycat/adjfeedback/tests/__init__.py
similarity index 100%
rename from availability/__init__.py
rename to tabbycat/adjfeedback/tests/__init__.py
diff --git a/tabbycat/adjfeedback/tests/test_progress.py b/tabbycat/adjfeedback/tests/test_progress.py
new file mode 100644
index 00000000000..05b67083d08
--- /dev/null
+++ b/tabbycat/adjfeedback/tests/test_progress.py
@@ -0,0 +1,403 @@
+import logging
+
+from django.test import TestCase
+
+from adjallocation.models import DebateAdjudicator
+from adjfeedback.models import AdjudicatorFeedback
+from draw.models import Debate, DebateTeam
+from participants.models import Adjudicator, Institution, Speaker, Team
+from results.models import BallotSubmission
+from results.result import BallotSet
+from tournaments.models import Round, Tournament
+from venues.models import Venue
+
+from ..progress import FeedbackExpectedSubmissionFromAdjudicatorTracker, FeedbackExpectedSubmissionFromTeamTracker
+from ..progress import FeedbackProgressForAdjudicator, FeedbackProgressForTeam
+
+
+class TestFeedbackProgress(TestCase):
+
+ NUM_TEAMS = 6
+ NUM_ADJS = 7
+ NUM_VENUES = 3
+
+ def setUp(self):
+ self.t = Tournament.objects.create()
+ for i in range(self.NUM_TEAMS):
+ inst = Institution.objects.create(code=i, name=i)
+ team = Team.objects.create(tournament=self.t, institution=inst, reference=i)
+ for j in range(3):
+ Speaker.objects.create(team=team, name="%d-%d" % (i, j))
+
+ adjsinst = Institution.objects.create(code="Adjs", name="Adjudicators")
+ for i in range(self.NUM_ADJS):
+ Adjudicator.objects.create(tournament=self.t, institution=adjsinst, name=i)
+ for i in range(self.NUM_VENUES):
+ Venue.objects.create(name=i, priority=i)
+
+ self.rd = Round.objects.create(tournament=self.t, seq=1, abbreviation="R1")
+
+ def tearDown(self):
+ self.t.delete()
+ Institution.objects.all().delete()
+ Venue.objects.all().delete()
+
+ def _team(self, t):
+ return Team.objects.get(tournament=self.t, reference=t)
+
+ def _adj(self, a):
+ return Adjudicator.objects.get(tournament=self.t, name=a)
+
+ def _dt(self, debate, t):
+ return DebateTeam.objects.get(debate=debate, team=self._team(t))
+
+ def _da(self, debate, a):
+ return DebateAdjudicator.objects.get(debate=debate, adjudicator=self._adj(a))
+
+ def _create_debate(self, teams, adjs, votes, trainees=[], venue=None):
+ """Enters a debate into the database, using the teams and adjudicators specified.
+ `votes` should be a string (or iterable of characters) indicating "a" for affirmative or
+ "n" for negative, e.g. "ann" if the chair was rolled in a decision for the negative.
+ The method will give the winning team all 76s and the losing team all 74s.
+ The first adjudicator is the chair; the rest are panellists."""
+
+ if venue is None:
+ venue = Venue.objects.first()
+ debate = Debate.objects.create(round=self.rd, venue=venue)
+
+ aff, neg = teams
+ aff_team = self._team(aff)
+ DebateTeam.objects.create(debate=debate, team=aff_team, position=DebateTeam.POSITION_AFFIRMATIVE)
+ neg_team = self._team(neg)
+ DebateTeam.objects.create(debate=debate, team=neg_team, position=DebateTeam.POSITION_NEGATIVE)
+
+ chair = self._adj(adjs[0])
+ DebateAdjudicator.objects.create(debate=debate, adjudicator=chair,
+ type=DebateAdjudicator.TYPE_CHAIR)
+ for p in adjs[1:]:
+ panellist = self._adj(p)
+ DebateAdjudicator.objects.create(debate=debate, adjudicator=panellist,
+ type=DebateAdjudicator.TYPE_PANEL)
+ for tr in trainees:
+ trainee = self._adj(tr)
+ DebateAdjudicator.objects.create(debate=debate, adjudicator=trainee,
+ type=DebateAdjudicator.TYPE_TRAINEE)
+
+ ballotsub = BallotSubmission(debate=debate, submitter_type=BallotSubmission.SUBMITTER_TABROOM)
+ ballotset = BallotSet(ballotsub)
+
+ for t in teams:
+ team = self._team(t)
+ speakers = team.speaker_set.all()
+ for pos, speaker in enumerate(speakers, start=1):
+ ballotset.set_speaker(team, pos, speaker)
+ ballotset.set_speaker(team, 4, speakers[0])
+
+ for a, vote in zip(adjs, votes):
+ adj = self._adj(a)
+ if vote == 'a':
+ teams = [aff_team, neg_team]
+ elif vote == 'n':
+ teams = [neg_team, aff_team]
+ else:
+ raise ValueError
+ for team, score in zip(teams, (76, 74)):
+ for pos in range(1, 4):
+ ballotset.set_score(adj, team, pos, score)
+ ballotset.set_score(adj, team, 4, score / 2)
+
+ ballotset.confirmed = True
+ ballotset.save()
+
+ return debate
+
+ def _create_feedback(self, source, target):
+ if isinstance(source, DebateTeam):
+ source_kwargs = dict(source_team=source)
+ else:
+ source_kwargs = dict(source_adjudicator=source)
+ target_adj = self._adj(target)
+ return AdjudicatorFeedback.objects.create(confirmed=True, adjudicator=target_adj, score=3,
+ **source_kwargs)
+
+ # ==========================================================================
+ # From team
+ # ==========================================================================
+
+ def assertExpectedFromTeamTracker(self, debate, t, expected, fulfilled, count, submissions, targets, tracker_kwargs={}): # noqa
+ tracker = FeedbackExpectedSubmissionFromTeamTracker(self._dt(debate, t), **tracker_kwargs)
+ self.assertIs(tracker.expected, expected)
+ self.assertIs(tracker.fulfilled, fulfilled)
+ self.assertEqual(tracker.count, count)
+ self.assertCountEqual(tracker.acceptable_submissions(), submissions)
+ self.assertCountEqual(tracker.acceptable_targets(), [self._adj(a) for a in targets])
+
+ def test_chair_oral_no_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for t in (0, 1):
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 0, [], [0])
+
+ def test_chair_oral_good_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for t in (0, 1):
+ feedback = self._create_feedback(self._dt(debate, t), 0)
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback], [0])
+
+ def test_chair_oral_bad_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for t in (0, 1):
+ feedback = self._create_feedback(self._dt(debate, t), 1)
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 0, [], [0])
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback], [0, 1, 2], {'enforce_orallist': False})
+
+ def test_chair_oral_multiple_submissions(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for t in (0, 1):
+ feedback1 = self._create_feedback(self._dt(debate, t), 0)
+ feedback2 = self._create_feedback(self._dt(debate, t), 1)
+ # The submission on adj 1 is irrelevant, so shouldn't appear at all.
+ # (It should appear as "unexpected" in the FeedbackProgressForTeam.)
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback1], [0])
+ # If the orallist is not enforced, though, both submissions are relevant.
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 2, [feedback1, feedback2], [0, 1, 2], {'enforce_orallist': False})
+
+ def test_chair_rolled_no_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "ann")
+ for t in (0, 1):
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 0, [], [1, 2])
+
+ def test_chair_rolled_good_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "ann")
+ for t in (0, 1):
+ feedback = self._create_feedback(self._dt(debate, t), 1)
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback], [1, 2])
+
+ def test_chair_rolled_bad_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "ann")
+ for t in (0, 1):
+ feedback = self._create_feedback(self._dt(debate, t), 0)
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 0, [], [1, 2])
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback], [0, 1, 2], {'enforce_orallist': False})
+
+ def test_chair_rolled_multiple_submissions(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "ann")
+ for t in (0, 1):
+ feedback1 = self._create_feedback(self._dt(debate, t), 1)
+ feedback2 = self._create_feedback(self._dt(debate, t), 2)
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 2, [feedback1, feedback2], [1, 2])
+
+ def test_sole_adjudicator_no_submissions(self):
+ debate = self._create_debate((0, 1), (0,), "n")
+ for t in (0, 1):
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 0, [], [0])
+
+ def test_sole_adjudicator_good_submission(self):
+ debate = self._create_debate((0, 1), (0,), "n")
+ for t in (0, 1):
+ feedback = self._create_feedback(self._dt(debate, t), 0)
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback], [0])
+
+ def test_sole_adjudicator_bad_submission(self):
+ debate = self._create_debate((0, 1), (0,), "n")
+ for t in (0, 1):
+ self._create_feedback(self._dt(debate, t), 3)
+ self.assertExpectedFromTeamTracker(debate, t, True, False, 0, [], [0])
+
+ def test_sole_adjudicator_multiple_submissions(self):
+ debate = self._create_debate((0, 1), (0,), "n")
+ for t in (0, 1):
+ feedback1 = self._create_feedback(self._dt(debate, t), 0)
+ self._create_feedback(self._dt(debate, t), 3)
+ self._create_feedback(self._dt(debate, t), 4)
+ # The submissions on adjs 3 and 4 are irrelevant, so shouldn't appear at all.
+ # (They should appear as "unexpected" in the FeedbackProgressForTeam.)
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback1], [0])
+ self.assertExpectedFromTeamTracker(debate, t, True, True, 1, [feedback1], [0], {'enforce_orallist': False})
+
+ # ==========================================================================
+ # From adjudicator
+ # ==========================================================================
+
+ def assertExpectedFromAdjudicatorTracker(self, debate, source, target, expected, fulfilled, count, submissions): # noqa
+ tracker = FeedbackExpectedSubmissionFromAdjudicatorTracker(self._da(debate, source), self._adj(target))
+ self.assertIs(tracker.expected, expected)
+ self.assertIs(tracker.fulfilled, fulfilled)
+ self.assertEqual(tracker.count, count)
+ self.assertCountEqual(tracker.acceptable_submissions(), submissions)
+ self.assertCountEqual(tracker.acceptable_targets(), [self._adj(target)])
+
+ def test_adj_on_adj_no_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for a in (1, 2):
+ self.assertExpectedFromAdjudicatorTracker(debate, 0, a, True, False, 0, [])
+
+ def test_adj_on_adj_good_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for a in (1, 2):
+ feedback = self._create_feedback(self._da(debate, 0), a)
+ self.assertExpectedFromAdjudicatorTracker(debate, 0, a, True, True, 1, [feedback])
+
+ def test_adj_on_adj_bad_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for a in (1, 2):
+ self._create_feedback(self._da(debate, 0), a+2)
+ self.assertExpectedFromAdjudicatorTracker(debate, 0, a, True, False, 0, [])
+
+ def test_adj_on_adj_multiple_submission(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "aan")
+ for a in (1, 2):
+ logging.disable(logging.WARNING)
+ self._create_feedback(self._da(debate, 0), a)
+ feedback2 = self._create_feedback(self._da(debate, 0), a)
+ logging.disable(logging.NOTSET)
+ self.assertExpectedFromAdjudicatorTracker(debate, 0, a, True, True, 1, [feedback2])
+
+ def test_adj_on_adj_trainees_not_submitted(self):
+ debate = self._create_debate((0, 1), (0,), "n", trainees=[4])
+ self.assertExpectedFromAdjudicatorTracker(debate, 0, 4, True, False, 0, [])
+
+ def test_adj_on_adj_trainees_submitted(self):
+ debate = self._create_debate((0, 1), (0, 1, 2), "nan", trainees=[4])
+ feedback = self._create_feedback(self._da(debate, 0), 4)
+ self.assertExpectedFromAdjudicatorTracker(debate, 0, 4, True, True, 1, [feedback])
+
+ # ==========================================================================
+ # Team progress
+ # ==========================================================================
+
+ def _create_team_progress_dataset(self, adj1, adj2, adj3):
+ debate1 = self._create_debate((0, 1), (0, 1, 2), "nnn")
+ debate2 = self._create_debate((0, 2), (3, 4, 5), "ann")
+ debate3 = self._create_debate((0, 3), (6,), "a")
+ if adj1 is not None:
+ self._create_feedback(self._dt(debate1, 0), adj1)
+ if adj2 is not None:
+ self._create_feedback(self._dt(debate2, 0), adj2)
+ if adj3 is not None:
+ self._create_feedback(self._dt(debate3, 0), adj3)
+
+ def assertTeamProgress(self, show_splits, t, submitted, expected, fulfilled, unsubmitted, coverage): # noqa
+ self.t.preferences['ui_options__show_splitting_adjudicators'] = show_splits
+ progress = FeedbackProgressForTeam(self._team(t))
+ self.assertEqual(progress.num_submitted(), submitted)
+ self.assertEqual(progress.num_expected(), expected)
+ self.assertEqual(progress.num_fulfilled(), fulfilled)
+ self.assertEqual(progress.num_unsubmitted(), unsubmitted)
+ self.assertAlmostEqual(progress.coverage(), coverage)
+ return progress
+
+ def test_team_progress_all_good(self):
+ self._create_team_progress_dataset(0, 4, 6)
+ self.assertTeamProgress(True, 0, 3, 3, 3, 0, 1.0)
+ self.assertTeamProgress(False, 0, 3, 3, 3, 0, 1.0)
+
+ def test_team_progress_no_submissions(self):
+ self._create_team_progress_dataset(None, None, None)
+ self.assertTeamProgress(True, 0, 0, 3, 0, 3, 0.0)
+ self.assertTeamProgress(False, 0, 0, 3, 0, 3, 0.0)
+
+ def test_team_progress_no_debates(self):
+ FeedbackProgressForTeam(self._team(4))
+ self.assertTeamProgress(True, 4, 0, 0, 0, 0, 1.0)
+
+ def test_team_progress_missing_submission(self):
+ self._create_team_progress_dataset(0, None, 6)
+ self.assertTeamProgress(True, 0, 2, 3, 2, 1, 2/3)
+ self.assertTeamProgress(False, 0, 2, 3, 2, 1, 2/3)
+
+ def test_team_progress_wrong_target_on_unanimous(self):
+ self._create_team_progress_dataset(2, 4, 6)
+ progress = self.assertTeamProgress(True, 0, 3, 3, 2, 1, 2/3)
+ self.assertEqual(len(progress.unexpected_trackers()), 1)
+ progress = self.assertTeamProgress(False, 0, 3, 3, 3, 0, 1.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 0)
+
+ def test_team_progress_wrong_target_on_rolled_chair(self):
+ self._create_team_progress_dataset(0, 3, 6)
+ progress = self.assertTeamProgress(True, 0, 3, 3, 2, 1, 2/3)
+ self.assertEqual(len(progress.unexpected_trackers()), 1)
+ progress = self.assertTeamProgress(False, 0, 3, 3, 3, 0, 1.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 0)
+
+ def test_team_progress_unexpected(self):
+ self._create_team_progress_dataset(5, 3, None)
+ progress = self.assertTeamProgress(True, 0, 2, 3, 0, 3, 0.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 2)
+
+ self.t.preferences['feedback__show_unexpected_feedback'] = False
+ progress = self.assertTeamProgress(True, 0, 2, 3, 0, 3, 0.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 0)
+
+ # ==========================================================================
+ # Adjudicator progress
+ # ==========================================================================
+
+ def _create_adjudicator_progress_dataset(self, adjs1, adjs2, adjs3):
+ debate1 = self._create_debate((0, 1), (0, 1, 2), "nnn")
+ debate2 = self._create_debate((2, 3), (3, 0, 4), "ann")
+ debate3 = self._create_debate((4, 0), (0,), "a")
+ for adj in adjs1:
+ self._create_feedback(self._da(debate1, 0), adj)
+ for adj in adjs2:
+ self._create_feedback(self._da(debate2, 0), adj)
+ for adj in adjs3:
+ self._create_feedback(self._da(debate3, 0), adj)
+
+ def assertAdjudicatorProgress(self, feedback_paths, a, submitted, expected, fulfilled, unsubmitted, coverage): # noqa
+ self.t.preferences['feedback__feedback_paths'] = feedback_paths
+ progress = FeedbackProgressForAdjudicator(self._adj(a))
+ self.assertEqual(progress.num_submitted(), submitted)
+ self.assertEqual(progress.num_expected(), expected)
+ self.assertEqual(progress.num_fulfilled(), fulfilled)
+ self.assertEqual(progress.num_unsubmitted(), unsubmitted)
+ self.assertAlmostEqual(progress.coverage(), coverage)
+ return progress
+
+ def test_adjudicator_progress_all_good(self):
+ self._create_adjudicator_progress_dataset([1, 2], [3, 4], [])
+ self.assertAdjudicatorProgress('minimal', 0, 4, 2, 2, 0, 1.0)
+ self.assertAdjudicatorProgress('with-p-on-c', 0, 4, 3, 3, 0, 1.0)
+ self.assertAdjudicatorProgress('all-adjs', 0, 4, 4, 4, 0, 1.0)
+
+ def test_adjudicator_progress_missing_p_on_p(self):
+ self._create_adjudicator_progress_dataset([1, 2], [3], [])
+ self.assertAdjudicatorProgress('minimal', 0, 3, 2, 2, 0, 1.0)
+ self.assertAdjudicatorProgress('with-p-on-c', 0, 3, 3, 3, 0, 1.0)
+ self.assertAdjudicatorProgress('all-adjs', 0, 3, 4, 3, 1, 3/4)
+
+ def test_adjudicator_progress_no_submissions(self):
+ self._create_adjudicator_progress_dataset([], [], [])
+ self.assertAdjudicatorProgress('minimal', 0, 0, 2, 0, 2, 0.0)
+ self.assertAdjudicatorProgress('with-p-on-c', 0, 0, 3, 0, 3, 0.0)
+ self.assertAdjudicatorProgress('all-adjs', 0, 0, 4, 0, 4, 0.0)
+
+ def test_adjudicator_progress_no_debates(self):
+ FeedbackProgressForAdjudicator(self._adj(5))
+ self.assertAdjudicatorProgress('minimal', 5, 0, 0, 0, 0, 1.0)
+ self.assertAdjudicatorProgress('with-p-on-c', 5, 0, 0, 0, 0, 1.0)
+ self.assertAdjudicatorProgress('all-adjs', 5, 0, 0, 0, 0, 1.0)
+
+ def test_adjudicator_progress_missing_submission(self):
+ self._create_adjudicator_progress_dataset([1], [3], [])
+ self.assertAdjudicatorProgress('minimal', 0, 2, 2, 1, 1, 1/2)
+ self.assertAdjudicatorProgress('with-p-on-c', 0, 2, 3, 2, 1, 2/3)
+ self.assertAdjudicatorProgress('all-adjs', 0, 2, 4, 2, 2, 1/2)
+
+ def test_adjudicator_progress_wrong_target(self):
+ self._create_adjudicator_progress_dataset([1, 2], [4], [])
+ progress = self.assertAdjudicatorProgress('with-p-on-c', 0, 3, 3, 2, 1, 2/3)
+ self.assertEqual(len(progress.unexpected_trackers()), 1)
+
+ def test_adjudicator_progress_extra_target(self):
+ self._create_adjudicator_progress_dataset([1, 2], [3, 4], [])
+ progress = self.assertAdjudicatorProgress('with-p-on-c', 0, 4, 3, 3, 0, 1.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 1)
+
+ def test_adjudicator_progress_unexpected(self):
+ self._create_adjudicator_progress_dataset([5], [1], [2])
+ progress = self.assertAdjudicatorProgress('with-p-on-c', 0, 3, 3, 0, 3, 0.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 3)
+
+ self.t.preferences['feedback__show_unexpected_feedback'] = False
+ progress = self.assertAdjudicatorProgress('with-p-on-c', 0, 3, 3, 0, 3, 0.0)
+ self.assertEqual(len(progress.unexpected_trackers()), 0)
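The coverage values asserted in these tests reduce to fulfilled over expected, with an empty expectation counting as full coverage (as in the no-debates tests, which assert 1.0 for zero expected). A standalone sketch of that arithmetic, using a hypothetical helper name not taken from this patch:

```python
def coverage(num_fulfilled, num_expected):
    # Nothing expected (e.g. no debates yet) counts as fully covered,
    # matching the 1.0 asserted for teams and adjudicators with no debates.
    if num_expected == 0:
        return 1.0
    return num_fulfilled / num_expected

print(coverage(3, 3))  # 1.0, the "all good" case
print(coverage(2, 3))  # one submission missing, as in the missing-submission tests
print(coverage(0, 0))  # 1.0, the no-debates case
```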
diff --git a/adjfeedback/urls_admin.py b/tabbycat/adjfeedback/urls_admin.py
similarity index 66%
rename from adjfeedback/urls_admin.py
rename to tabbycat/adjfeedback/urls_admin.py
index 1cae06a7c67..a4754cb9fd1 100644
--- a/adjfeedback/urls_admin.py
+++ b/tabbycat/adjfeedback/urls_admin.py
@@ -1,24 +1,19 @@
from django.conf.urls import url
+from participants.models import Adjudicator, Team
+
from . import views
-from participants.models import Team, Adjudicator
urlpatterns = [
# Overviews
url(r'^$',
- views.feedback_overview,
+ views.FeedbackOverview.as_view(),
name='adjfeedback-overview'),
url(r'^progress/$',
- views.feedback_progress,
+ views.FeedbackProgress.as_view(),
name='feedback_progress'),
# Getting/setting values
- url(r'^scores/all/$',
- views.adj_scores,
- name='adj_scores'),
- url(r'^scores/get/$',
- views.get_adj_feedback,
- name='get_adj_feedback'),
url(r'^test/set/$',
views.SetAdjudicatorTestScoreView.as_view(),
name='adjfeedback-set-adj-test-score'),
@@ -28,6 +23,13 @@
url(r'^notes/test/set/$',
views.SetAdjudicatorNoteView.as_view(),
name='adjfeedback-set-adj-note'),
+ # Only used in old allocation screen; TODO: deprecate
+ url(r'^scores/all/$',
+ views.GetAdjScores.as_view(),
+ name='adj_scores'),
+ url(r'^feedback/get/$',
+ views.GetAdjFeedback.as_view(),
+ name='get_adj_feedback'),
# Source
url(r'^latest/$',
@@ -42,6 +44,15 @@
    url(r'^source/adjudicator/(?P<pk>\d+)/$',
views.FeedbackFromAdjudicatorView.as_view(),
name='adjfeedback-view-from-adjudicator'),
+ url(r'^target/list/$',
+ views.FeedbackByTargetView.as_view(),
+ name='adjfeedback-view-by-target'),
+    url(r'^target/adjudicator/(?P<pk>\d+)/$',
+ views.FeedbackOnAdjudicatorView.as_view(),
+ name='adjfeedback-view-on-adjudicator'),
+    url(r'^target/adjudicator/json/(?P<pk>\d+)/$',
+ views.GetAdjFeedbackJSON.as_view(),
+ name='get_adj_feedback_json'),
# Adding
url(r'^add/$',
@@ -61,4 +72,10 @@
url(r'^randomised_urls/generate/$',
views.GenerateRandomisedUrlsView.as_view(),
name='randomised-urls-generate'),
+ url(r'^randomised_urls/emails/list/$',
+ views.EmailRandomisedUrlsView.as_view(),
+ name='randomised-urls-email'),
+ url(r'^randomised_urls/emails/confirm/$',
+ views.ConfirmEmailRandomisedUrlsView.as_view(),
+ name='confirm-feedback-urls-send'),
]
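These routes rely on Django's URL regexes passing the matched adjudicator ID to the view as `self.kwargs['pk']` via a named capture group. A plain-`re` illustration of the mechanism (no Django required; the route string is a simplified stand-in):

```python
import re

# Simplified stand-in for the adjudicator-target route above.
route = re.compile(r'^target/adjudicator/(?P<pk>\d+)/$')

match = route.match('target/adjudicator/42/')
print(match.group('pk'))   # '42' — Django hands this to the view as kwargs['pk']
print(match.groupdict())   # {'pk': '42'}
```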
diff --git a/adjfeedback/urls_public.py b/tabbycat/adjfeedback/urls_public.py
similarity index 91%
rename from adjfeedback/urls_public.py
rename to tabbycat/adjfeedback/urls_public.py
index e91fd6d3d11..d1eca568b92 100644
--- a/adjfeedback/urls_public.py
+++ b/tabbycat/adjfeedback/urls_public.py
@@ -1,11 +1,13 @@
from django.conf.urls import url
-from participants.models import Team, Adjudicator
+
+from participants.models import Adjudicator, Team
+
from . import views
urlpatterns = [
# Overviews
url(r'^feedback_progress/$',
- views.public_feedback_progress,
+ views.PublicFeedbackProgress.as_view(),
name='public_feedback_progress'),
# Submission via Public Form
diff --git a/tabbycat/adjfeedback/utils.py b/tabbycat/adjfeedback/utils.py
new file mode 100644
index 00000000000..eb94a843571
--- /dev/null
+++ b/tabbycat/adjfeedback/utils.py
@@ -0,0 +1,186 @@
+import logging
+
+from django.core.exceptions import ObjectDoesNotExist
+
+from adjallocation.allocation import AdjudicatorAllocation
+from adjallocation.models import DebateAdjudicator
+from adjfeedback.models import AdjudicatorFeedback
+from results.models import SpeakerScoreByAdj
+
+logger = logging.getLogger(__name__)
+
+
+def expected_feedback_targets(debateadj, feedback_paths=None, debate=None):
+ """Returns a list of adjudicators and positions (adj, pos), each being
+ someone that the given DebateAdjudicator object is expected to give feedback
+ on. If the debate adjudicator's position and the tournament preferences
+ dictate that the source adjudicator should not submit feedback on anyone for
+ this debate, then it returns an empty list.
+
+ Each element of the returned list is a 2-tuple `(adj, pos)`, where `adj` is
+ an Adjudicator instance and `pos` is an AdjudicatorAllocation.POSITION_*
+ constant. DebateAdjudicator instances are not returned by this function; in
+ fact, the use of DebateAdjudicator instances for feedback targets is in
+ general discouraged, since feedback targets are Adjudicator instances, not
+ DebateAdjudicator instances.
+
+ `feedback_paths` can be used to avoid unnecessary tournament lookups,
+ and should be one of the available options in
+ options.dynamic_preferences_registry.FeedbackPaths.choices.
+
+ `debate` can be used to avoid unnecessary database hits populating
+ AdjudicatorAllocation, and should be equal to debateadj.debate.
+ """
+
+ if feedback_paths is None:
+ feedback_paths = debateadj.debate.round.tournament.pref('feedback_paths')
+ if debate is None:
+ debate = debateadj.debate
+ adjudicators = debate.adjudicators
+
+ if feedback_paths == 'all-adjs' or debateadj.type == DebateAdjudicator.TYPE_CHAIR:
+ targets = [(adj, pos) for adj, pos in adjudicators.with_positions() if adj.id != debateadj.adjudicator_id]
+ elif feedback_paths == 'with-p-on-c' and debateadj.type == DebateAdjudicator.TYPE_PANEL:
+ targets = [(adjudicators.chair, AdjudicatorAllocation.POSITION_CHAIR)]
+ else:
+ targets = []
+
+ if feedback_paths not in ['all-adjs', 'with-p-on-c', 'minimal']:
+ logger.error("Unrecognised preference: %s", feedback_paths)
+
+ return targets
+
+
+def get_feedback_overview(t, adjudicators):
+
+ all_debate_adjudicators = list(DebateAdjudicator.objects.all().select_related(
+ 'adjudicator'))
+ all_adj_feedbacks = list(AdjudicatorFeedback.objects.filter(confirmed=True).select_related(
+ 'adjudicator', 'source_adjudicator', 'source_team',
+ 'source_adjudicator__debate__round', 'source_team__debate__round').exclude(
+ source_adjudicator__type=DebateAdjudicator.TYPE_TRAINEE))
+    all_adj_scores = list(SpeakerScoreByAdj.objects.filter(
+        ballot_submission__confirmed=True).exclude(position=t.REPLY_POSITION).select_related(
+        'debate_adjudicator__adjudicator', 'ballot_submission'))
+ rounds = t.prelim_rounds(until=t.current_round)
+
+ for adj in adjudicators:
+ # Gather feedback scores for graphs
+ feedbacks = [f for f in all_adj_feedbacks if f.adjudicator == adj]
+        debate_adjudications = [a for a in all_debate_adjudicators if a.adjudicator.id == adj.id]
+        scores = [s for s in all_adj_scores if s.debate_adjudicator.adjudicator.id == adj.id]
+
+ # Gather a dict of round-by-round feedback for the graph
+ adj.feedback_data = feedback_stats(adj, rounds, feedbacks, all_debate_adjudicators)
+ # Sum up remaining stats
+ adj = scoring_stats(adj, scores, debate_adjudications)
+
+ return adjudicators
+
+
+def feedback_stats(adj, rounds, feedbacks, all_debate_adjudicators):
+
+ # Start off with their test scores
+ feedback_data = [{'x': 0, 'y': adj.test_score, 'position': "Test Score"}]
+
+ for r in rounds:
+        # Filter all the feedback to focus on this particular round
+ adj_round_feedbacks = [f for f in feedbacks if (f.source_adjudicator and f.source_adjudicator.debate.round == r)]
+ adj_round_feedbacks.extend([f for f in feedbacks if (f.source_team and f.source_team.debate.round == r)])
+
+ if len(adj_round_feedbacks) > 0:
+ debates = [fb.source_team.debate for fb in adj_round_feedbacks if fb.source_team]
+ debates.extend([fb.source_adjudicator.debate for fb in adj_round_feedbacks if fb.source_adjudicator])
+            adj_da = next((da for da in all_debate_adjudicators if (da.adjudicator == adj and da.debate == debates[0])), None)
+            if adj_da:
+                if adj_da.type == adj_da.TYPE_CHAIR:
+                    adj_type = "Chair"
+                elif adj_da.type == adj_da.TYPE_PANEL:
+                    adj_type = "Panellist"
+                else:
+                    adj_type = "Trainee"
+
+                total_score = [f.score for f in adj_round_feedbacks]
+                average_score = round(sum(total_score) / len(total_score), 2)
+
+                # Add this round's datapoint to the graph; if no matching
+                # DebateAdjudicator was found, skip the round rather than error
+                feedback_data.append({
+                    'x': r.seq,
+                    'y': average_score,
+                    'position': adj_type,
+                })
+
+ return feedback_data
+
+
+def scoring_stats(adj, scores, debate_adjudications):
+ # Processing scores to get average margins
+ adj.debates = len(debate_adjudications)
+ adj.avg_score = None
+ adj.avg_margin = None
+
+ if len(scores) > 0:
+ adj.avg_score = sum(s.score for s in scores) / len(scores)
+
+        ballot_ids = sorted(set(score.ballot_submission.id for score in scores))  # deduplicated ballot IDs
+ ballot_margins = []
+
+        for ballot_id in ballot_ids:
+            # For each unique ballot, split its speaker scores into the two
+            # teams' halves and take the difference of the totals as the margin
+            single_round = [s for s in scores if s.ballot_submission.id == ballot_id]
+            adj_scores = [s.score for s in single_round]  # TODO this is slow - should be prefetched
+            team_split = len(adj_scores) // 2
+            try:
+                t_a_total = sum(adj_scores[:team_split])
+                t_b_total = sum(adj_scores[team_split:])
+                ballot_margins.append(abs(t_a_total - t_b_total))
+            except TypeError:
+                logger.warning("Couldn't compute margin for ballot submission %s", ballot_id)
+
+        if ballot_margins:
+            adj.avg_margin = sum(ballot_margins) / len(ballot_margins)
+
+ return adj
+
+
+def parse_feedback(feedback, questions):
+
+ if feedback.source_team:
+ source_annotation = " (" + feedback.source_team.result + ")"
+ elif feedback.source_adjudicator:
+ source_annotation = " (" + feedback.source_adjudicator.get_type_display() + ")"
+ else:
+ source_annotation = ""
+
+ data = {
+ 'round': feedback.round.abbreviation,
+        'version': str(feedback.version) + ("*" if feedback.confirmed else ""),
+ 'bracket': feedback.debate.bracket,
+ 'matchup': feedback.debate.matchup,
+ 'source': feedback.source,
+ 'source_note': source_annotation,
+ 'score': feedback.score,
+ 'questions': []
+ }
+
+ for question in questions:
+ q = {
+ 'reference': question.reference,
+ 'text': question.text,
+ 'name': question.name
+ }
+ try:
+ q['answer'] = question.answer_set.get(feedback=feedback).answer
+ except ObjectDoesNotExist:
+ q['answer'] = "-"
+
+ data['questions'].append(q)
+
+ return data
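The branching in `expected_feedback_targets` can be exercised without the ORM; the sketch below mirrors its three cases with plain tuples standing in for model instances (the helper and its identifiers are illustrative, not part of this patch):

```python
TYPE_CHAIR, TYPE_PANEL = "C", "P"
POSITION_CHAIR = "c"

def expected_targets(source_id, source_type, panel, feedback_paths):
    """panel is a list of (adj_id, position) pairs, standing in for
    adjudicators.with_positions() in the real function."""
    if feedback_paths == 'all-adjs' or source_type == TYPE_CHAIR:
        # Chairs (and everyone under 'all-adjs') rate every other adjudicator
        return [(adj, pos) for adj, pos in panel if adj != source_id]
    if feedback_paths == 'with-p-on-c' and source_type == TYPE_PANEL:
        # Panellists rate only the chair
        return [(adj, pos) for adj, pos in panel if pos == POSITION_CHAIR]
    return []  # 'minimal' panellists and all trainees owe no feedback

panel = [(1, 'c'), (2, 'p'), (3, 'p')]
print(expected_targets(1, TYPE_CHAIR, panel, 'minimal'))      # [(2, 'p'), (3, 'p')]
print(expected_targets(2, TYPE_PANEL, panel, 'with-p-on-c'))  # [(1, 'c')]
print(expected_targets(2, TYPE_PANEL, panel, 'minimal'))      # []
```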
diff --git a/tabbycat/adjfeedback/views.py b/tabbycat/adjfeedback/views.py
new file mode 100644
index 00000000000..8fcdaedc735
--- /dev/null
+++ b/tabbycat/adjfeedback/views.py
@@ -0,0 +1,649 @@
+import logging
+
+from django.contrib.auth.mixins import LoginRequiredMixin
+from django.contrib import messages
+from django.core.exceptions import ObjectDoesNotExist
+from django.core.mail import send_mail
+from django.conf import settings
+from django.http import HttpResponse
+from django.db.models import Q
+from django.shortcuts import get_object_or_404
+from django.views.generic.base import TemplateView
+from django.views.generic.edit import FormView
+
+from actionlog.mixins import LogActionMixin
+from actionlog.models import ActionLogEntry
+from participants.models import Adjudicator, Speaker, Team
+from participants.prefetch import populate_feedback_scores
+from results.mixins import PublicSubmissionFieldsMixin, TabroomSubmissionFieldsMixin
+from tournaments.mixins import PublicTournamentPageMixin, TournamentMixin
+
+from utils.misc import reverse_tournament
+from utils.mixins import CacheMixin, JsonDataResponseView, SingleObjectByRandomisedUrlMixin, SingleObjectFromTournamentMixin
+from utils.mixins import PostOnlyRedirectView, SuperuserOrTabroomAssistantTemplateResponseMixin, SuperuserRequiredMixin, VueTableTemplateView
+from utils.tables import TabbycatTableBuilder
+from utils.urlkeys import populate_url_keys
+
+from .models import AdjudicatorFeedback, AdjudicatorTestScoreHistory
+from .forms import make_feedback_form_class
+from .tables import FeedbackTableBuilder
+from .utils import get_feedback_overview, parse_feedback
+from .progress import get_feedback_progress
+
+logger = logging.getLogger(__name__)
+
+
+class GetAdjScores(LoginRequiredMixin, TournamentMixin, JsonDataResponseView):
+
+ def get_data(self):
+ feedback_weight = self.get_tournament().current_round.feedback_weight
+ data = {}
+ for adj in Adjudicator.objects.all():
+ data[adj.id] = adj.weighted_score(feedback_weight)
+ return data
+
+
+class GetAdjFeedbackJSON(LoginRequiredMixin, TournamentMixin, JsonDataResponseView):
+
+ def get_data(self):
+ adjudicator = get_object_or_404(Adjudicator, pk=self.kwargs['pk'])
+ feedback = adjudicator.get_feedback().filter(confirmed=True)
+ questions = self.get_tournament().adj_feedback_questions
+ data = [parse_feedback(f, questions) for f in feedback]
+ return data
+
+
+class FeedbackOverview(LoginRequiredMixin, TournamentMixin, VueTableTemplateView):
+
+ template_name = 'feedback_overview.html'
+ page_title = 'Adjudicator Feedback Summary'
+ page_emoji = '🙅'
+
+ def get_adjudicators(self):
+ t = self.get_tournament()
+ if t.pref('share_adjs'):
+ return Adjudicator.objects.filter(Q(tournament=t) | Q(tournament__isnull=True))
+ else:
+ return Adjudicator.objects.filter(tournament=t)
+
+ def get_context_data(self, **kwargs):
+ kwargs['breaking_count'] = self.get_adjudicators().filter(
+ breaking=True).count()
+ return super().get_context_data(**kwargs)
+
+ def get_table(self):
+ t = self.get_tournament()
+ adjudicators = self.get_adjudicators()
+ populate_feedback_scores(adjudicators)
+ adjudicators = get_feedback_overview(t, adjudicators)
+ table = FeedbackTableBuilder(view=self, sort_key='Overall Score',
+ sort_order='desc')
+ table.add_adjudicator_columns(adjudicators, hide_institution=True, subtext='institution')
+ table.add_breaking_checkbox(adjudicators)
+ table.add_score_columns(adjudicators)
+ table.add_feedback_graphs(adjudicators)
+ table.add_feedback_link_columns(adjudicators)
+ table.add_feedback_misc_columns(adjudicators)
+ return table
+
+
+class FeedbackByTargetView(LoginRequiredMixin, TournamentMixin, VueTableTemplateView):
+ template_name = "feedback_base.html"
+ page_title = 'Find Feedback on Adjudicator'
+ page_emoji = '🔍'
+
+ def get_table(self):
+ tournament = self.get_tournament()
+ table = TabbycatTableBuilder(view=self, sort_key="Name")
+ table.add_adjudicator_columns(tournament.adjudicator_set.all())
+ feedback_data = []
+ for adj in tournament.adjudicator_set.all():
+ count = adj.adjudicatorfeedback_set.count()
+ feedback_data.append({
+ 'text': "{:d} Feedbacks".format(count),
+ 'link': reverse_tournament('adjfeedback-view-on-adjudicator', tournament, kwargs={'pk': adj.id}),
+ })
+ table.add_column("Feedbacks", feedback_data)
+ return table
+
+
+class FeedbackBySourceView(LoginRequiredMixin, TournamentMixin, VueTableTemplateView):
+
+ template_name = "feedback_base.html"
+ page_title = 'Find Feedback'
+ page_emoji = '🔍'
+
+ def get_tables(self):
+ tournament = self.get_tournament()
+
+ teams = tournament.team_set.all()
+ team_table = TabbycatTableBuilder(
+ view=self, title='From Teams', sort_key='Name')
+ team_table.add_team_columns(teams)
+ team_feedback_data = []
+ for team in teams:
+ count = AdjudicatorFeedback.objects.filter(
+ source_team__team=team).select_related(
+ 'source_team__team').count()
+ team_feedback_data.append({
+ 'text': "{:d} Feedbacks".format(count),
+ 'link': reverse_tournament('adjfeedback-view-from-team',
+ tournament,
+ kwargs={'pk': team.id}),
+ })
+ team_table.add_column("Feedbacks", team_feedback_data)
+
+ adjs = tournament.adjudicator_set.all()
+ adj_table = TabbycatTableBuilder(
+ view=self, title='From Adjudicators', sort_key='Feedbacks')
+ adj_table.add_adjudicator_columns(adjs)
+ adj_feedback_data = []
+ for adj in adjs:
+ count = AdjudicatorFeedback.objects.filter(
+ source_adjudicator__adjudicator=adj).select_related(
+ 'source_adjudicator__adjudicator').count()
+ adj_feedback_data.append({
+ 'text': "{:d} Feedbacks".format(count),
+ 'link': reverse_tournament('adjfeedback-view-from-adjudicator',
+ tournament,
+ kwargs={'pk': adj.id}),
+ })
+ adj_table.add_column("Feedbacks", adj_feedback_data)
+
+ return [team_table, adj_table]
+
+
+class FeedbackCardsView(LoginRequiredMixin, TournamentMixin, TemplateView):
+ """Base class for views displaying feedback as cards."""
+
+ def get_score_thresholds(self):
+ tournament = self.get_tournament()
+ min_score = tournament.pref('adj_min_score')
+ max_score = tournament.pref('adj_max_score')
+ score_range = max_score - min_score
+ return {
+ 'low_score' : min_score + score_range / 10,
+ 'medium_score' : min_score + score_range / 5,
+ 'high_score' : max_score - score_range / 10,
+ }
+
+ def get_feedbacks(self):
+ questions = self.get_tournament().adj_feedback_questions
+ feedbacks = self.get_feedback_queryset()
+ for feedback in feedbacks:
+ feedback.items = []
+ for question in questions:
+ try:
+ answer = question.answer_set.get(feedback=feedback).answer
+ except ObjectDoesNotExist:
+ continue
+ feedback.items.append({'question': question, 'answer': answer})
+ return feedbacks
+
+ def get_feedback_queryset(self):
+ raise NotImplementedError()
+
+ def get_context_data(self, **kwargs):
+ kwargs['feedbacks'] = self.get_feedbacks()
+ kwargs['score_thresholds'] = self.get_score_thresholds()
+ return super().get_context_data(**kwargs)
+
+
+class LatestFeedbackView(FeedbackCardsView):
+ """View displaying the latest feedback."""
+
+ template_name = "feedback_latest.html"
+
+ def get_feedback_queryset(self):
+ return AdjudicatorFeedback.objects.order_by('-timestamp')[:50].select_related(
+ 'adjudicator', 'source_adjudicator__adjudicator', 'source_team__team')
+
+
+class FeedbackFromSourceView(SingleObjectFromTournamentMixin, FeedbackCardsView):
+ """Base class for views displaying feedback from a given team or adjudicator."""
+
+ template_name = "feedback_by_source.html"
+ source_name_attr = None
+ source_type = "from"
+ adjfeedback_filter_field = None
+
+ def get_context_data(self, **kwargs):
+ kwargs['source_name'] = getattr(self.object, self.source_name_attr, '')
+ kwargs['source_type'] = self.source_type
+ return super().get_context_data(**kwargs)
+
+ def get(self, request, *args, **kwargs):
+ self.object = self.get_object()
+ return super().get(request, *args, **kwargs)
+
+ def get_feedback_queryset(self):
+ kwargs = {self.adjfeedback_filter_field: self.object}
+ return AdjudicatorFeedback.objects.filter(**kwargs).order_by('-timestamp')
+
+
+class FeedbackOnAdjudicatorView(FeedbackFromSourceView):
+    """View displaying feedback on a given adjudicator."""
+
+ model = Adjudicator
+ source_name_attr = 'name'
+ source_type = "on"
+ adjfeedback_filter_field = 'adjudicator'
+ allow_null_tournament = True
+
+
+class FeedbackFromTeamView(FeedbackFromSourceView):
+    """View displaying feedback from a given team."""
+ model = Team
+ source_name_attr = 'short_name'
+ adjfeedback_filter_field = 'source_team__team'
+ allow_null_tournament = False
+
+
+class FeedbackFromAdjudicatorView(FeedbackFromSourceView):
+ """View displaying feedback from a given adjudicator."""
+ model = Adjudicator
+ source_name_attr = 'name'
+ adjfeedback_filter_field = 'source_adjudicator__adjudicator'
+ allow_null_tournament = True
+
+
+class GetAdjFeedback(LoginRequiredMixin, TournamentMixin, JsonDataResponseView):
+
+ def parse_feedback(self, f, questions):
+
+ if f.source_team:
+ source_annotation = " (" + f.source_team.result + ")"
+ elif f.source_adjudicator:
+ source_annotation = " (" + f.source_adjudicator.get_type_display() + ")"
+ else:
+ source_annotation = ""
+
+        data = [
+            str(f.round.abbreviation),
+            str(f.version) + ("*" if f.confirmed else ""),
+            f.debate.bracket,
+            f.debate.matchup,
+            str(f.source) + source_annotation,
+            f.score,
+        ]
+ for question in questions:
+ try:
+ data.append(question.answer_set.get(feedback=f).answer)
+ except ObjectDoesNotExist:
+ data.append("-")
+ data.append(f.confirmed)
+ return data
+
+ def get_data(self):
+ t = self.get_tournament()
+ adj = get_object_or_404(Adjudicator, pk=int(self.request.GET['id']))
+ feedback = adj.get_feedback().filter(confirmed=True)
+ questions = t.adj_feedback_questions
+
+        data = [self.parse_feedback(f, questions) for f in feedback]
+ return {'aaData': data}
+
+
+class BaseAddFeedbackIndexView(TournamentMixin, TemplateView):
+
+ def get_context_data(self, **kwargs):
+ tournament = self.get_tournament()
+ kwargs['adjudicators'] = tournament.adjudicator_set.all() if not tournament.pref('share_adjs') \
+ else Adjudicator.objects.all()
+ kwargs['teams'] = tournament.team_set.all()
+ return super().get_context_data(**kwargs)
+
+
+class TabroomAddFeedbackIndexView(SuperuserOrTabroomAssistantTemplateResponseMixin, BaseAddFeedbackIndexView):
+ """View for the index page for tabroom officials to add feedback. The index
+ page lists all possible sources; officials should then choose the author
+ of the feedback."""
+
+ superuser_template_name = 'add_feedback.html'
+ assistant_template_name = 'assistant_add_feedback.html'
+
+
+class PublicAddFeedbackIndexView(CacheMixin, PublicTournamentPageMixin, BaseAddFeedbackIndexView):
+ """View for the index page for public users to add feedback. The index page
+ lists all possible sources; public users should then choose themselves."""
+
+ template_name = 'public_add_feedback.html'
+ public_page_preference = 'public_feedback'
+
+
+class BaseAddFeedbackView(LogActionMixin, SingleObjectFromTournamentMixin, FormView):
+ """Base class for views that allow users to add feedback."""
+
+ template_name = "enter_feedback.html"
+ pk_url_kwarg = 'source_id'
+ allow_null_tournament = True
+
+ def get_form_class(self):
+ return make_feedback_form_class(self.object, self.get_tournament(),
+ self.get_submitter_fields(), **self.feedback_form_class_kwargs)
+
+ def get_action_log_fields(self, **kwargs):
+ kwargs['adjudicator_feedback'] = self.adj_feedback
+ return super().get_action_log_fields(**kwargs)
+
+ def form_valid(self, form):
+ self.adj_feedback = form.save()
+ return super().form_valid(form)
+
+ def get_context_data(self, **kwargs):
+ source = self.object
+ if isinstance(source, Adjudicator):
+ kwargs['source_type'] = "adj"
+ elif isinstance(source, Team):
+ kwargs['source_type'] = "team"
+ kwargs['source_name'] = self.source_name
+ return super().get_context_data(**kwargs)
+
+ def _populate_source(self):
+ self.object = self.get_object() # For compatibility with SingleObjectMixin
+ if isinstance(self.object, Adjudicator):
+ self.source_name = self.object.name
+ elif isinstance(self.object, Team):
+ self.source_name = self.object.short_name
+ else:
+ self.source_name = ""
+
+ def get(self, request, *args, **kwargs):
+ self._populate_source()
+ return super().get(request, *args, **kwargs)
+
+ def post(self, request, *args, **kwargs):
+ self._populate_source()
+ return super().post(request, *args, **kwargs)
+
+
+class TabroomAddFeedbackView(TabroomSubmissionFieldsMixin, LoginRequiredMixin, BaseAddFeedbackView):
+ """View for tabroom officials to add feedback."""
+
+ action_log_type = ActionLogEntry.ACTION_TYPE_FEEDBACK_SAVE
+ feedback_form_class_kwargs = {
+ 'confirm_on_submit': True,
+ 'enforce_required': False,
+ 'include_unreleased_draws': True,
+ }
+
+ def form_valid(self, form):
+ result = super().form_valid(form)
+ messages.success(self.request, "Feedback from {} on {} added.".format(
+ self.source_name, self.adj_feedback.adjudicator.name))
+ return result
+
+ def get_success_url(self):
+ return reverse_tournament('adjfeedback-add-index', self.get_tournament())
+
+
+class PublicAddFeedbackView(PublicSubmissionFieldsMixin, PublicTournamentPageMixin, BaseAddFeedbackView):
+ """Base class for views for public users to add feedback."""
+
+ action_log_type = ActionLogEntry.ACTION_TYPE_FEEDBACK_SUBMIT
+ feedback_form_class_kwargs = {
+ 'confirm_on_submit': True,
+ 'enforce_required': True,
+ 'include_unreleased_draws': False,
+ }
+
+ def form_valid(self, form):
+ result = super().form_valid(form)
+ messages.success(self.request, "Thanks, {}! Your feedback on {} has been recorded.".format(
+ self.source_name, self.adj_feedback.adjudicator.name))
+ return result
+
+ def get_success_url(self):
+ return reverse_tournament('tournament-public-index', self.get_tournament())
+
+
+class PublicAddFeedbackByRandomisedUrlView(SingleObjectByRandomisedUrlMixin, PublicAddFeedbackView):
+ """View for public users to add feedback, where the URL is a randomised one."""
+ public_page_preference = 'public_feedback_randomised'
+
+
+class PublicAddFeedbackByIdUrlView(PublicAddFeedbackView):
+ """View for public users to add feedback, where the URL is by object ID."""
+ public_page_preference = 'public_feedback'
+
+
+class AdjudicatorActionError(RuntimeError):
+ pass
+
+
+class BaseAdjudicatorActionView(LogActionMixin, SuperuserRequiredMixin, TournamentMixin, PostOnlyRedirectView):
+
+ tournament_redirect_pattern_name = 'adjfeedback-overview'
+
+ def get_action_log_fields(self, **kwargs):
+ kwargs['adjudicator'] = self.adjudicator
+ return super().get_action_log_fields(**kwargs)
+
+    def get_adjudicator(self, request):
+        adj_id = request.POST.get("adj_id")
+        try:
+            adjudicator = Adjudicator.objects.get(id=int(adj_id))
+        except (TypeError, ValueError, Adjudicator.DoesNotExist, Adjudicator.MultipleObjectsReturned):
+            raise AdjudicatorActionError("Whoops! I didn't recognise that adjudicator: {}".format(adj_id))
+        return adjudicator
+
+ def post(self, request, *args, **kwargs):
+ try:
+ self.adjudicator = self.get_adjudicator(request)
+ self.modify_adjudicator(request, self.adjudicator)
+ self.log_action() # Need to call explicitly, since this isn't a form view
+ except AdjudicatorActionError as e:
+ messages.error(request, str(e))
+
+ return super().post(request, *args, **kwargs)
+
+
+class SetAdjudicatorTestScoreView(BaseAdjudicatorActionView):
+
+ action_log_type = ActionLogEntry.ACTION_TYPE_TEST_SCORE_EDIT
+
+ def get_action_log_fields(self, **kwargs):
+ kwargs['adjudicator_test_score_history'] = self.atsh
+ # Skip BaseAdjudicatorActionView
+ return super(BaseAdjudicatorActionView, self).get_action_log_fields(**kwargs)
+
+ def modify_adjudicator(self, request, adjudicator):
+ try:
+ score = float(request.POST["test_score"])
+ except ValueError:
+ raise AdjudicatorActionError("Whoops! The value isn't a valid test score.")
+
+ adjudicator.test_score = score
+ adjudicator.save()
+
+ atsh = AdjudicatorTestScoreHistory(
+ adjudicator=adjudicator, round=self.get_tournament().current_round,
+ score=score)
+ atsh.save()
+ self.atsh = atsh
+
+
+class SetAdjudicatorBreakingStatusView(BaseAdjudicatorActionView):
+
+ action_log_type = ActionLogEntry.ACTION_TYPE_ADJUDICATOR_BREAK_SET
+
+ def modify_adjudicator(self, request, adjudicator):
+ adjudicator.breaking = (str(request.POST["adj_breaking_status"]) == "true")
+ adjudicator.save()
+
+ def post(self, request, *args, **kwargs):
+ super().post(request, *args, **kwargs) # Discard redirect
+ return HttpResponse("ok")
+
+
+class SetAdjudicatorNoteView(BaseAdjudicatorActionView):
+
+ action_log_type = ActionLogEntry.ACTION_TYPE_ADJUDICATOR_NOTE_SET
+
+ def modify_adjudicator(self, request, adjudicator):
+        try:
+            note = str(request.POST["note"])
+        except KeyError as e:
+            raise AdjudicatorActionError("Whoops! The note was missing from that request: " + str(e))
+
+ adjudicator.notes = note
+ adjudicator.save()
+
+
+class BaseFeedbackProgressView(TournamentMixin, VueTableTemplateView):
+
+ page_title = 'Feedback Progress'
+ page_subtitle = ''
+ page_emoji = '🆘'
+
+ def get_feedback_progress(self):
+ if not hasattr(self, "_feedback_progress_result"):
+ self._feedback_progress_result = get_feedback_progress(self.get_tournament())
+ return self._feedback_progress_result
+
+ def get_page_subtitle(self):
+ teams_progress, adjs_progress = self.get_feedback_progress()
+ total_missing = sum([progress.num_unsubmitted() for progress in teams_progress + adjs_progress])
+ return "{:d} missing feedback submissions".format(total_missing)
+
+ def get_tables(self):
+ teams_progress, adjs_progress = self.get_feedback_progress()
+
+ adjs_table = FeedbackTableBuilder(view=self, title="From Adjudicators",
+ sort_key="Owed", sort_order="desc")
+ adjudicators = [progress.adjudicator for progress in adjs_progress]
+ adjs_table.add_adjudicator_columns(adjudicators, hide_metadata=True)
+ adjs_table.add_feedback_progress_columns(adjs_progress)
+
+ teams_table = FeedbackTableBuilder(view=self, title="From Teams",
+ sort_key="Owed", sort_order="desc")
+ teams = [progress.team for progress in teams_progress]
+ teams_table.add_team_columns(teams)
+ teams_table.add_feedback_progress_columns(teams_progress)
+
+ return [adjs_table, teams_table]
+
+
+class FeedbackProgress(SuperuserRequiredMixin, BaseFeedbackProgressView):
+ template_name = 'feedback_base.html'
+
+
+class PublicFeedbackProgress(PublicTournamentPageMixin, CacheMixin, BaseFeedbackProgressView):
+ public_page_preference = 'feedback_progress'
+
+
+class RandomisedUrlsView(SuperuserRequiredMixin, TournamentMixin, TemplateView):
+
+ template_name = 'randomised_urls.html'
+ show_emails = False
+
+ def get_context_data(self, **kwargs):
+ tournament = self.get_tournament()
+ kwargs['teams'] = tournament.team_set.all()
+ if not tournament.pref('share_adjs'):
+ kwargs['adjs'] = tournament.adjudicator_set.all()
+ else:
+ kwargs['adjs'] = Adjudicator.objects.all()
+ kwargs['exists'] = tournament.adjudicator_set.filter(url_key__isnull=False).exists() or \
+ tournament.team_set.filter(url_key__isnull=False).exists()
+ kwargs['tournament_slug'] = tournament.slug
+ return super().get_context_data(**kwargs)
+
+
+class GenerateRandomisedUrlsView(SuperuserRequiredMixin, TournamentMixin, PostOnlyRedirectView):
+
+ tournament_redirect_pattern_name = 'randomised-urls-view'
+
+ def post(self, request, *args, **kwargs):
+ tournament = self.get_tournament()
+
+ # Only works if there are no randomised URLs now
+ if tournament.adjudicator_set.filter(url_key__isnull=False).exists() or \
+ tournament.team_set.filter(url_key__isnull=False).exists():
+ messages.error(
+ self.request, "There are already randomised URLs. " +
+ "You must use the Django management commands to populate or " +
+ "delete randomised URLs.")
+ else:
+ populate_url_keys(tournament.adjudicator_set.all())
+ populate_url_keys(tournament.team_set.all())
+ messages.success(self.request, "Randomised URLs were generated for all teams and adjudicators.")
+
+ return super().post(request, *args, **kwargs)
+
+
+class EmailRandomisedUrlsView(RandomisedUrlsView):
+
+ show_emails = True
+ template_name = 'randomised_urls_email_list.html'
+
+
+class ConfirmEmailRandomisedUrlsView(SuperuserRequiredMixin, TournamentMixin, PostOnlyRedirectView):
+
+ tournament_redirect_pattern_name = 'randomised-urls-view'
+
+ def post(self, request, *args, **kwargs):
+ messages.success(self.request, "Emails were sent for all teams and adjudicators.")
+
+ tournament = self.get_tournament()
+ speakers = Speaker.objects.filter(team__tournament=tournament,
+ team__url_key__isnull=False, email__isnull=False)
+ adjudicators = tournament.adjudicator_set.filter(
+ url_key__isnull=False, email__isnull=False)
+
+ for speaker in speakers:
+ if speaker.email is None:
+ continue
+
+ team_path = reverse_tournament(
+ 'adjfeedback-public-add-from-team-randomised',
+ tournament, kwargs={'url_key': speaker.team.url_key})
+ team_link = self.request.build_absolute_uri(team_path)
+ message = (''
+ 'Hi %s, \n'
+ '\n'
+ 'At %s we are using an online feedback system. Feedback for \n'
+ 'your team (%s) can be submitted at the following URL. This URL \n'
+ 'is unique to your team — do not share it as anyone with this \n'
+ 'link can submit feedback on your behalf. It will not \n'
+ 'change so we suggest bookmarking it. The URL is: \n'
+ '\n'
+ '%s' % (speaker.name, tournament.short_name, speaker.team.short_name, team_link))
+
+ try:
+ send_mail("Your Feedback URL for %s" % tournament.short_name,
+ message, settings.DEFAULT_FROM_EMAIL, [speaker.email],
+ fail_silently=False)
+ logger.info("Sent email with key to %s (%s)" % (speaker.email, speaker.name))
+ except Exception:
+ logger.exception("Failed to send email to %s" % speaker.email)
+
+ for adjudicator in adjudicators:
+ if adjudicator.email is None:
+ continue
+
+ adj_path = reverse_tournament(
+ 'adjfeedback-public-add-from-adjudicator-randomised',
+ tournament, kwargs={'url_key': adjudicator.url_key})
+ adj_link = self.request.build_absolute_uri(adj_path)
+ message = (''
+ 'Hi %s, \n'
+ '\n'
+ 'At %s we are using an online feedback system. Your feedback \n'
+ 'can be submitted at the following URL. This URL \n'
+ 'is unique to you — do not share it as anyone with this \n'
+ 'link can submit feedback on your behalf. It will not \n'
+ 'change so we suggest bookmarking it. The URL is: \n'
+ '\n'
+ '%s' % (adjudicator.name, tournament.short_name, adj_link))
+
+ try:
+ send_mail("Your Feedback URL for %s" % tournament.short_name,
+ message, settings.DEFAULT_FROM_EMAIL, [adjudicator.email],
+ fail_silently=False)
+ logger.info("Sent email with key to %s (%s)" % (adjudicator.email, adjudicator.name))
+ except Exception:
+ logger.exception("Failed to send email to %s" % adjudicator.email)
+
+ return super().post(request, *args, **kwargs)
diff --git a/availability/migrations/__init__.py b/tabbycat/availability/__init__.py
similarity index 100%
rename from availability/migrations/__init__.py
rename to tabbycat/availability/__init__.py
diff --git a/availability/admin.py b/tabbycat/availability/admin.py
similarity index 93%
rename from availability/admin.py
rename to tabbycat/availability/admin.py
index 338f6bf9f66..a84ffcff390 100644
--- a/availability/admin.py
+++ b/tabbycat/availability/admin.py
@@ -1,7 +1,7 @@
from django.contrib import admin
-from django import forms
-from .models import ActiveVenue, ActiveTeam, ActiveAdjudicator
+from .models import ActiveAdjudicator, ActiveTeam, ActiveVenue
+
# ==============================================================================
# ActiveVenue
@@ -14,6 +14,7 @@ class ActiveVenueAdmin(admin.ModelAdmin):
admin.site.register(ActiveVenue, ActiveVenueAdmin)
+
# ==============================================================================
# ActiveTeam
# ==============================================================================
@@ -25,6 +26,7 @@ class ActiveTeamAdmin(admin.ModelAdmin):
admin.site.register(ActiveTeam, ActiveTeamAdmin)
+
# ==============================================================================
# ActiveAdjudicator
# ==============================================================================
diff --git a/availability/migrations/0001_initial.py b/tabbycat/availability/migrations/0001_initial.py
similarity index 100%
rename from availability/migrations/0001_initial.py
rename to tabbycat/availability/migrations/0001_initial.py
diff --git a/availability/migrations/0002_checkin_person.py b/tabbycat/availability/migrations/0002_checkin_person.py
similarity index 100%
rename from availability/migrations/0002_checkin_person.py
rename to tabbycat/availability/migrations/0002_checkin_person.py
diff --git a/availability/migrations/0003_auto_20160103_1927.py b/tabbycat/availability/migrations/0003_auto_20160103_1927.py
similarity index 100%
rename from availability/migrations/0003_auto_20160103_1927.py
rename to tabbycat/availability/migrations/0003_auto_20160103_1927.py
diff --git a/breakqual/migrations/__init__.py b/tabbycat/availability/migrations/__init__.py
similarity index 100%
rename from breakqual/migrations/__init__.py
rename to tabbycat/availability/migrations/__init__.py
diff --git a/availability/models.py b/tabbycat/availability/models.py
similarity index 100%
rename from availability/models.py
rename to tabbycat/availability/models.py
diff --git a/availability/templates/availability_index.html b/tabbycat/availability/templates/availability_index.html
similarity index 67%
rename from availability/templates/availability_index.html
rename to tabbycat/availability/templates/availability_index.html
index 738c2f868c9..f866f58cff3 100644
--- a/availability/templates/availability_index.html
+++ b/tabbycat/availability/templates/availability_index.html
@@ -1,41 +1,39 @@
{% extends "base.html" %}
{% load debate_tags %}
-{% block page-title %}{{ round.name }} Check-Ins{% endblock %}
-{% block head-title %}📍️Check-Ins Overview{% endblock %}
-{% block sub-title %}For {{ round.name }}{% endblock %}
-
{% block page-subnav-sections %}
-
-
- Check In Teams
-
- {% if round.draw_type == "F" %}
-
- Check In All Breaking Teams
+ {% if round.is_break_round %}
+
+ Check In Teams
- {% elif round.draw_type == "B" %}
-
- Check In All Advancing Teams
+ {% else %}
+
+ Check In Teams
{% endif %}
-
+
Check In Venues
-
- Check In Adjudicators
+
+ Check In Adjs
{% if round.is_break_round %}
-
- Check In All Breaking Adjudicators
+
+ Check In All Breaking Adjs
{% else %}
-
- Check In All
+
+ Check In Everything
{% if round.prev %}
-
- Check In All Active in {{ round.prev.abbreviation }}
+
+ Check In Everything Active in {{ round.prev.abbreviation }}
{% endif %}
{% endif %}
@@ -44,7 +42,7 @@
{% block page-subnav-actions %}
- {% if round.draw_status == round.STATUS_CONFIRMED or round.draw_status = round.STATUS_RELEASED %}
+ {% if round.draw_status == round.STATUS_CONFIRMED or round.draw_status == round.STATUS_RELEASED %}
View Draw
@@ -106,7 +104,8 @@
{% if previous_unconfirmed > 0 and not round.is_break_round %}
Note: {{ previous_unconfirmed }} debates from {{ round.prev.name }}
- do not have a completed ballot — this may lead to a draw that fails or is incorrect depending on your draw rules.
+ do not have a completed ballot — this may lead to a draw that
+ fails or is incorrect depending on your draw rules.
- There need to be at least {{ min_adjudicators }} checked in adjudicators given the number of debates.
+ There need to be at least {{ min_adjudicators }} checked-in adjudicators
+ given the number of debates.
{% endif %}
{% if min_venues > checkin_types.2.in_now %}
- There need to be at least {{ min_venues }} checked in venues given the number of debates.
+ There need to be at least {{ min_venues }} checked-in venues given the
+ number of debates.
+
+ {% endif %}
+
+ {% if round.seq > current_round.seq and not round.is_break_round %}
+
+ This is a page for {{ round.name }}; however, the current
+ round is still set to {{ current_round.name }}. Did you
+ forget to
+ advance to the next round?
+
+ {% endif %}
+
+ {% if not round.prev and round.draw_type == round.DRAW_POWERPAIRED %}
+
@@ -17,7 +16,7 @@
their ballots and adjudicator allocations — and cannot be undone.
You probably don't want to do this if any results have been entered!