Commit

Merge branch 'docs' into docs_lili
* docs:
  Split cells via Min Cut (#5885)
  Clean up backend util package (#6048)
  Guard against empty saves (#6052)
  Time tracking: Do not fail on empty timespans list (#6051)
  Fix clip button changing position (#6050)
  Include ParamFailure values in error chains (#6045)
  Fix non-32-aligned bucket requests (#6047)
  Don't enforce save state when saving is triggered by a timeout and reduce tracing layout analytics event count (#5999)
  Bump cached-path-relative from 1.0.2 to 1.1.0 (#5994)
  Volume annotation download: zip with BEST_SPEED (#6036)
  Sensible scalebar values (#6034)
  Faster CircleCI builds (#6040)
  move to Google Analytics 4 (#6031)
  Fix nightly (fix tokens, upgrade puppeteer) (#6032)
  Add neuron reconstruction job backend and frontend part (#5922)
  Allow uploading multi-layer volume annotations (#6028)
hotzenklotz committed Feb 18, 2022
2 parents 622d905 + 5fd7e21 commit dca6181
Showing 160 changed files with 3,609 additions and 2,284 deletions.
10 changes: 6 additions & 4 deletions .circleci/config.yml
@@ -2,7 +2,9 @@ version: 2
jobs:
build_test_deploy:
machine:
image: ubuntu-1604:201903-01
image: ubuntu-2004:202111-02
docker_layer_caching: true
resource_class: large
environment:
USER_NAME: circleci
USER_UID: 1001
@@ -253,7 +255,7 @@ jobs:
curl
-X POST
-H "X-Auth-Token: $RELEASE_API_TOKEN"
https://kube.scm.io/hooks/remove/webknossos/dev/master?user=CI+%28nightly%29
https://kubernetix.scm.io/hooks/remove/webknossos/dev/master?user=CI+%28nightly%29
- run:
name: Wait 3min
command: sleep 180
@@ -263,7 +265,7 @@
curl
-X POST
-H "X-Auth-Token: $RELEASE_API_TOKEN"
https://kube.scm.io/hooks/install/webknossos/dev/master?user=CI+%28nightly%29
https://kubernetix.scm.io/hooks/install/webknossos/dev/master?user=CI+%28nightly%29
- run:
name: Install dependencies and sleep at least 3min
command: |
@@ -272,7 +274,7 @@
wait
- run:
name: Refresh datasets
command: curl https://master.webknossos.xyz/data/triggers/checkInboxBlocking?token=secretSampleUserToken
command: curl -X POST --fail https://master.webknossos.xyz/data/triggers/checkInboxBlocking?token=$WK_AUTH_TOKEN
- run:
name: Run screenshot-tests
command: |
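The dataset-refresh step above changed from an unauthenticated GET with a hard-coded sample token to an explicit POST with `--fail` and a token taken from the CI environment (see also the GET-to-POST route changes in #6023). As a rough illustration, the same trigger could be invoked from Scala with the JDK 11+ HTTP client — a minimal sketch; the `RefreshDatasets` object and its error handling are illustrative, not part of the repository:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object RefreshDatasets {
  def main(args: Array[String]): Unit = {
    // The token comes from the environment, mirroring $WK_AUTH_TOKEN in the CI config.
    val token = sys.env.getOrElse("WK_AUTH_TOKEN", sys.error("WK_AUTH_TOKEN is not set"))
    val request = HttpRequest
      .newBuilder()
      .uri(URI.create(s"https://master.webknossos.xyz/data/triggers/checkInboxBlocking?token=$token"))
      .POST(HttpRequest.BodyPublishers.noBody()) // the trigger is a POST route since #6023
      .build()
    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    // Mirror curl's --fail: treat any non-2xx status as an error.
    if (response.statusCode() / 100 != 2)
      sys.error(s"Refresh trigger failed: HTTP ${response.statusCode()}")
  }
}
```

Like `curl --fail`, the sketch turns a non-2xx response into a process failure so that CI notices a broken trigger.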
9 changes: 8 additions & 1 deletion CHANGELOG.unreleased.md
@@ -11,26 +11,33 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
[Commits](https://github.com/scalableminds/webknossos/compare/22.02.0...HEAD)

### Added
- Viewport scale bars are now dynamically adjusted to display sensible values. [#6034](https://github.com/scalableminds/webknossos/pull/6034)
- Added the option to make a segment's ID active via the right-click context menu in the segments list. [#6006](https://github.com/scalableminds/webknossos/pull/6006)
- Added a button next to the histogram which adapts the contrast and brightness to the currently visible data. [#5961](https://github.com/scalableminds/webknossos/pull/5961)
- Running uploads can now be cancelled. [#5958](https://github.com/scalableminds/webknossos/pull/5958)
- Added experimental min-cut feature to split a segment in a volume tracing with two seeds. [#5885](https://github.com/scalableminds/webknossos/pull/5885)
- Annotations with multiple volume layers can now be uploaded. (Note that merging multiple annotations with multiple volume layers each is not supported.) [#6028](https://github.com/scalableminds/webknossos/pull/6028)
- Decreased volume annotation download latency by using a different compression level. [#6036](https://github.com/scalableminds/webknossos/pull/6036)

### Changed
- Upgraded webpack build tool to v5 and all other webpack related dependencies to their latest version. Enabled persistent caching which speeds up server restarts during development as well as production builds. [#5969](https://github.com/scalableminds/webknossos/pull/5969)
- Improved stability when quickly volume-annotating large structures. [#6000](https://github.com/scalableminds/webknossos/pull/6000)
- The front-end API `labelVoxels` now returns a promise which fulfills as soon as the label operation has been carried out. [#5955](https://github.com/scalableminds/webknossos/pull/5955)
- webKnossos no longer tries to reach a fully saved state (with all updates sent to the backend) when the save is triggered by a timeout. [#5999](https://github.com/scalableminds/webknossos/pull/5999)
- When changing which layers are visible in an annotation, this setting is persisted in the annotation, so when you share it, viewers will see the same visibility configuration. [#5967](https://github.com/scalableminds/webknossos/pull/5967)
- Downloading public annotations is now also allowed without being authenticated. [#6001](https://github.com/scalableminds/webknossos/pull/6001)
- Downloaded volume annotation layers no longer produce zero-byte zip files; empty layers now yield a valid header-only zip file with no contents. [#6022](https://github.com/scalableminds/webknossos/pull/6022)
- Changed a number of API routes from GET to POST to avoid unwanted side effects. [#6023](https://github.com/scalableminds/webknossos/pull/6023)
- Removed unused datastore route `checkInbox` (use `checkInboxBlocking` instead). [#6023](https://github.com/scalableminds/webknossos/pull/6023)
- Migrated to Google Analytics 4. [#6031](https://github.com/scalableminds/webknossos/pull/6031)

### Fixed
- Fixed volume-related bugs which could corrupt the volume data in certain scenarios. [#5955](https://github.com/scalableminds/webknossos/pull/5955)
- Fixed the placeholder resolution computation for anisotropic layers with missing base resolutions. [#5983](https://github.com/scalableminds/webknossos/pull/5983)
- Fixed a bug where ad-hoc meshes were computed for a mapping, although it was disabled. [#5982](https://github.com/scalableminds/webknossos/pull/5982)
- Fixed a bug where volume annotation downloads would sometimes contain truncated zips. [#6009](https://github.com/scalableminds/webknossos/pull/6009)

- Fixed a bug where downloaded multi-layer volume annotations would have the wrong data.zip filenames. [#6028](https://github.com/scalableminds/webknossos/pull/6028)
- Fixed a bug which could cause an error message to appear when saving. [#6052](https://github.com/scalableminds/webknossos/pull/6052)

### Removed

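Two of the entries above touch zip output: #6036 lowers download latency by compressing with `Deflater.BEST_SPEED`, and #6022 makes empty volume layers download as a valid header-only zip (essentially just the end-of-central-directory record) instead of a zero-byte file. A minimal sketch of the compression-level choice with plain `java.util.zip` — the `writeFastZip` helper is illustrative, not code from the repository:

```scala
import java.io.{BufferedOutputStream, FileOutputStream}
import java.util.zip.{Deflater, ZipEntry, ZipOutputStream}

object ZipSpeedSketch {
  // Writes `data` as a single zip entry, trading compression ratio for throughput,
  // which is the trade-off #6036 makes for volume annotation downloads.
  def writeFastZip(path: String, entryName: String, data: Array[Byte]): Unit = {
    val out = new ZipOutputStream(new BufferedOutputStream(new FileOutputStream(path)))
    try {
      out.setLevel(Deflater.BEST_SPEED) // lowest deflate effort, fastest writes
      out.putNextEntry(new ZipEntry(entryName))
      out.write(data)
      out.closeEntry()
    } finally out.close()
  }
}
```

BEST_SPEED produces larger archives than the default level; the changelog entry trades that for lower download latency.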
1 change: 1 addition & 0 deletions MIGRATIONS.unreleased.md
@@ -7,6 +7,7 @@ User-facing changes are documented in the [changelog](CHANGELOG.released.md).

## Unreleased
[Commits](https://github.com/scalableminds/webknossos/compare/22.02.0...HEAD)
- The config field `googleAnalytics.trackingId` needs to be changed to [GA4 measurement id](https://support.google.com/analytics/answer/10089681), if used.

### Postgres Evolutions:
- [081-annotation-viewconfiguration.sql](conf/evolutions/081-annotation-viewconfiguration.sql)
146 changes: 102 additions & 44 deletions app/controllers/AnnotationIOController.scala
@@ -1,6 +1,7 @@
package controllers

import java.io.{BufferedOutputStream, File, FileOutputStream}
import java.util.zip.Deflater

import akka.actor.ActorSystem
import akka.stream.Materializer
@@ -11,7 +12,12 @@ import com.scalableminds.util.tools.{Fox, FoxImplicits, TextUtils}
import com.scalableminds.webknossos.datastore.SkeletonTracing.{SkeletonTracing, SkeletonTracingOpt, SkeletonTracings}
import com.scalableminds.webknossos.datastore.VolumeTracing.{VolumeTracing, VolumeTracingOpt, VolumeTracings}
import com.scalableminds.webknossos.datastore.helpers.ProtoGeometryImplicits
import com.scalableminds.webknossos.datastore.models.datasource.{AbstractSegmentationLayer, SegmentationLayer}
import com.scalableminds.webknossos.datastore.models.datasource.{
AbstractSegmentationLayer,
DataLayerLike,
GenericDataSource,
SegmentationLayer
}
import com.scalableminds.webknossos.tracingstore.tracings.TracingType
import com.scalableminds.webknossos.tracingstore.tracings.volume.VolumeTracingDefaults
import com.typesafe.scalalogging.LazyLogging
@@ -20,8 +26,8 @@ import javax.inject.Inject
import models.analytics.{AnalyticsService, DownloadAnnotationEvent, UploadAnnotationEvent}
import models.annotation.AnnotationState._
import models.annotation._
import models.annotation.nml.NmlResults.NmlParseResult
import models.annotation.nml.{NmlResults, NmlService, NmlWriter}
import models.annotation.nml.NmlResults.{NmlParseResult, NmlParseSuccess}
import models.annotation.nml.{NmlResults, NmlWriter}
import models.binary.{DataSet, DataSetDAO, DataSetService}
import models.organization.OrganizationDAO
import models.project.ProjectDAO
@@ -53,7 +59,7 @@ class AnnotationIOController @Inject()(
analyticsService: AnalyticsService,
sil: Silhouette[WkEnv],
provider: AnnotationInformationProvider,
nmlService: NmlService)(implicit ec: ExecutionContext, val materializer: Materializer)
annotationUploadService: AnnotationUploadService)(implicit ec: ExecutionContext, val materializer: Materializer)
extends Controller
with FoxImplicits
with ProtoGeometryImplicits
@@ -64,7 +70,10 @@
value =
"""Upload NML(s) or ZIP(s) of NML(s) to create a new explorative annotation.
Expects:
- As file attachment: any number of NML files or ZIP files containing NMLs, optionally with at most one volume data ZIP referenced from an NML in a ZIP
- As file attachment:
- Any number of NML files or ZIP files containing NMLs, optionally with volume data ZIPs referenced from an NML in a ZIP
- If multiple annotations are uploaded, they are merged into one.
- This is not supported if any of the annotations has multiple volume layers.
- As form parameter: createGroupForEachFile [String] should be one of "true" or "false"
- If "true": in merged annotation, create tree group wrapping the trees of each file
- If "false": in merged annotation, rename trees with the respective file name as prefix""",
@@ -86,42 +95,35 @@ Expects:
val overwritingDataSetName: Option[String] =
request.body.dataParts.get("datasetName").flatMap(_.headOption)
val attachedFiles = request.body.files.map(f => (f.ref.path.toFile, f.filename))
val parsedFiles = nmlService.extractFromFiles(attachedFiles, useZipName = true, overwritingDataSetName)
val tracingsProcessed = nmlService.wrapOrPrefixTrees(parsedFiles.parseResults, shouldCreateGroupForEachFile)

val parseSuccesses: List[NmlParseResult] = tracingsProcessed.filter(_.succeeded)
val parsedFiles =
annotationUploadService.extractFromFiles(attachedFiles, useZipName = true, overwritingDataSetName)
val parsedFilesWrapped =
annotationUploadService.wrapOrPrefixTrees(parsedFiles.parseResults, shouldCreateGroupForEachFile)
val parseResultsFiltered: List[NmlParseResult] = parsedFilesWrapped.filter(_.succeeded)

if (parseSuccesses.isEmpty) {
if (parseResultsFiltered.isEmpty) {
returnError(parsedFiles)
} else {
val (skeletonTracings, volumeTracingsWithDataLocations) = extractTracings(parseSuccesses)
val name = nameForUploaded(parseSuccesses.map(_.fileName))
val description = descriptionForNMLs(parseSuccesses.map(_.description))

for {
_ <- bool2Fox(skeletonTracings.nonEmpty || volumeTracingsWithDataLocations.nonEmpty) ?~> "nml.file.noFile"
dataSet <- findDataSetForUploadedAnnotations(skeletonTracings, volumeTracingsWithDataLocations.map(_._1))
parseSuccesses <- Fox.serialCombined(parseResultsFiltered)(r => r.toSuccessBox)
name = nameForUploaded(parseResultsFiltered.map(_.fileName))
description = descriptionForNMLs(parseResultsFiltered.map(_.description))
_ <- assertNonEmpty(parseSuccesses)
skeletonTracings = parseSuccesses.flatMap(_.skeletonTracing)
// Create a list of volume layers for each uploaded (non-skeleton-only) annotation.
// This is what determines the merging strategy for volume layers
volumeLayersGroupedRaw = parseSuccesses.map(_.volumeLayers).filter(_.nonEmpty)
dataSet <- findDataSetForUploadedAnnotations(skeletonTracings,
volumeLayersGroupedRaw.flatten.map(_.tracing))
volumeLayersGrouped <- adaptVolumeTracingsToFallbackLayer(volumeLayersGroupedRaw, dataSet)
tracingStoreClient <- tracingStoreService.clientFor(dataSet)
mergedVolumeTracingIdOpt <- Fox.runOptional(volumeTracingsWithDataLocations.headOption) { _ =>
for {
volumeTracingsAdapted <- Fox.serialCombined(volumeTracingsWithDataLocations)(v =>
adaptPropertiesToFallbackLayer(v._1, dataSet))
mergedIdOpt <- tracingStoreClient.mergeVolumeTracingsByContents(
VolumeTracings(volumeTracingsAdapted.map(v => VolumeTracingOpt(Some(v)))),
volumeTracingsWithDataLocations.map(t => parsedFiles.otherFiles.get(t._2).map(_.path.toFile)),
persistTracing = true
)
} yield mergedIdOpt
}
mergedSkeletonTracingIdOpt <- Fox.runOptional(skeletonTracings.headOption) { _ =>
tracingStoreClient.mergeSkeletonTracingsByContents(
SkeletonTracings(skeletonTracings.map(t => SkeletonTracingOpt(Some(t)))),
persistTracing = true)
}
annotationLayers <- AnnotationLayer.layersFromIds(mergedSkeletonTracingIdOpt, mergedVolumeTracingIdOpt)
mergedVolumeLayers <- mergeAndSaveVolumeLayers(volumeLayersGrouped,
tracingStoreClient,
parsedFiles.otherFiles)
mergedSkeletonLayers <- mergeAndSaveSkeletonLayers(skeletonTracings, tracingStoreClient)
annotation <- annotationService.createFrom(request.identity,
dataSet,
annotationLayers,
mergedSkeletonLayers ::: mergedVolumeLayers,
AnnotationType.Explorational,
name,
description)
@@ -135,6 +137,55 @@
}
}

private def mergeAndSaveVolumeLayers(volumeLayersGrouped: Seq[List[UploadedVolumeLayer]],
client: WKRemoteTracingStoreClient,
otherFiles: Map[String, TemporaryFile]): Fox[List[AnnotationLayer]] = {
if (volumeLayersGrouped.isEmpty) return Fox.successful(List())
if (volumeLayersGrouped.length > 1 && volumeLayersGrouped.exists(_.length > 1))
return Fox.failure("Cannot merge multiple annotations that each have multiple volume layers.")
if (volumeLayersGrouped.length == 1) { // Just one annotation was uploaded, keep its layers separate
Fox.serialCombined(volumeLayersGrouped.toList.flatten) { uploadedVolumeLayer =>
for {
savedTracingId <- client.saveVolumeTracing(uploadedVolumeLayer.tracing,
uploadedVolumeLayer.getDataZipFrom(otherFiles))
} yield
AnnotationLayer(
savedTracingId,
AnnotationLayerType.Volume,
uploadedVolumeLayer.name
)
}
} else { // Multiple annotations with volume layers (but at most one each) were uploaded; merge those volume layers into one
val uploadedVolumeLayersFlat = volumeLayersGrouped.toList.flatten
for {
mergedTracingId <- client.mergeVolumeTracingsByContents(
VolumeTracings(uploadedVolumeLayersFlat.map(v => VolumeTracingOpt(Some(v.tracing)))),
uploadedVolumeLayersFlat.map(v => v.getDataZipFrom(otherFiles)),
persistTracing = true
)
} yield
List(
AnnotationLayer(
mergedTracingId,
AnnotationLayerType.Volume,
None
))
}
}

private def mergeAndSaveSkeletonLayers(skeletonTracings: List[SkeletonTracing],
tracingStoreClient: WKRemoteTracingStoreClient): Fox[List[AnnotationLayer]] = {
if (skeletonTracings.isEmpty) return Fox.successful(List())
for {
mergedTracingId <- tracingStoreClient.mergeSkeletonTracingsByContents(
SkeletonTracings(skeletonTracings.map(t => SkeletonTracingOpt(Some(t)))),
persistTracing = true)
} yield List(AnnotationLayer(mergedTracingId, AnnotationLayerType.Skeleton, None))
}

private def assertNonEmpty(parseSuccesses: List[NmlParseSuccess]) =
bool2Fox(parseSuccesses.exists(p => p.skeletonTracing.nonEmpty || p.volumeLayers.nonEmpty)) ?~> "nml.file.noFile"

private def findDataSetForUploadedAnnotations(
skeletonTracings: List[SkeletonTracing],
volumeTracings: List[VolumeTracing])(implicit mp: MessagesProvider, ctx: DBAccessContext): Fox[DataSet] =
@@ -173,14 +224,6 @@ Expects:
Future.successful(JsonBadRequest(Messages("nml.file.noFile")))
}

private def extractTracings(
parseSuccesses: List[NmlParseResult]): (List[SkeletonTracing], List[(VolumeTracing, String)]) = {
val tracings = parseSuccesses.flatMap(_.bothTracingOpts)
val skeletons = tracings.flatMap(_._1)
val volumes = tracings.flatMap(_._2)
(skeletons, volumes)
}

private def assertAllOnSameDataSet(skeletons: List[SkeletonTracing], volumes: List[VolumeTracing]): Fox[String] =
for {
dataSetName <- volumes.headOption.map(_.dataSetName).orElse(skeletons.headOption.map(_.dataSetName)).toFox
@@ -197,9 +240,23 @@ Expects:
} yield organizationNames.headOption
}

private def adaptPropertiesToFallbackLayer(volumeTracing: VolumeTracing, dataSet: DataSet): Fox[VolumeTracing] =
private def adaptVolumeTracingsToFallbackLayer(volumeLayersGrouped: List[List[UploadedVolumeLayer]],
dataSet: DataSet): Fox[List[List[UploadedVolumeLayer]]] =
for {
dataSource <- dataSetService.dataSourceFor(dataSet).flatMap(_.toUsable)
allAdapted <- Fox.serialCombined(volumeLayersGrouped) { volumeLayers =>
Fox.serialCombined(volumeLayers) { volumeLayer =>
for {
tracingAdapted <- adaptPropertiesToFallbackLayer(volumeLayer.tracing, dataSource)
} yield volumeLayer.copy(tracing = tracingAdapted)
}
}
} yield allAdapted

private def adaptPropertiesToFallbackLayer[T <: DataLayerLike](volumeTracing: VolumeTracing,
dataSource: GenericDataSource[T]): Fox[VolumeTracing] =
for {
_ <- Fox.successful(())
fallbackLayer = dataSource.dataLayers.flatMap {
case layer: SegmentationLayer if volumeTracing.fallbackLayer contains layer.name => Some(layer)
case layer: AbstractSegmentationLayer if volumeTracing.fallbackLayer contains layer.name => Some(layer)
@@ -320,7 +377,8 @@ Expects:
_ = fetchedVolumeLayers.zipWithIndex.map {
case (volumeLayer, index) =>
volumeLayer.volumeDataOpt.foreach { volumeData =>
val dataZipName = volumeLayer.volumeDataZipName(index, fetchedSkeletonLayers.length == 1)
val dataZipName = volumeLayer.volumeDataZipName(index, fetchedVolumeLayers.length == 1)
zipper.stream.setLevel(Deflater.BEST_SPEED)
zipper.addFileFromBytes(dataZipName, volumeData)
}
}
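The core of the new upload logic is the three-way branch in `mergeAndSaveVolumeLayers` above: nothing to save, a single annotation whose volume layers are kept separate, or several single-layer annotations whose layers are merged into one (several annotations with several layers each are rejected). A minimal sketch of that decision rule as a pure function — the names `MergeStrategy` and `decideMergeStrategy` are illustrative, not part of the codebase:

```scala
object MergeDecision {
  sealed trait MergeStrategy
  case object NothingToSave extends MergeStrategy      // no volume layers were uploaded
  case object KeepLayersSeparate extends MergeStrategy // one annotation: save each layer as-is
  case object MergeIntoOne extends MergeStrategy       // several single-layer annotations
  case object Unsupported extends MergeStrategy        // several annotations, some multi-layer

  // One inner list per uploaded (non-skeleton-only) annotation,
  // matching volumeLayersGroupedRaw in the controller.
  def decideMergeStrategy[A](volumeLayersGrouped: Seq[List[A]]): MergeStrategy =
    if (volumeLayersGrouped.isEmpty) NothingToSave
    else if (volumeLayersGrouped.length == 1) KeepLayersSeparate
    else if (volumeLayersGrouped.exists(_.length > 1)) Unsupported
    else MergeIntoOne
}
```

Keeping the rule separate from the tracingstore calls would make the supported upload combinations easy to unit-test.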
4 changes: 2 additions & 2 deletions app/controllers/DataSetController.scala
@@ -2,7 +2,7 @@ package controllers

import com.mohiva.play.silhouette.api.Silhouette
import com.scalableminds.util.accesscontext.{DBAccessContext, GlobalAccessContext}
import com.scalableminds.util.geometry.Point3D
import com.scalableminds.util.geometry.Vec3Int
import com.scalableminds.util.mvc.Filter
import com.scalableminds.util.tools.DefaultConverters._
import com.scalableminds.util.tools.{Fox, JsonHelper, Math}
@@ -87,7 +87,7 @@ class DataSetController @Inject()(userService: UserService,
Fox.successful(a)
case _ =>
val defaultCenterOpt = dataSet.adminViewConfiguration.flatMap(c =>
c.get("position").flatMap(jsValue => JsonHelper.jsResultToOpt(jsValue.validate[Point3D])))
c.get("position").flatMap(jsValue => JsonHelper.jsResultToOpt(jsValue.validate[Vec3Int])))
val defaultZoomOpt = dataSet.adminViewConfiguration.flatMap(c =>
c.get("zoom").flatMap(jsValue => JsonHelper.jsResultToOpt(jsValue.validate[Double])))
dataSetService
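The `Point3D` → `Vec3Int` rename above also covers the JSON read path, where the dataset's default center is validated out of its admin view configuration. A rough sketch of that lookup with play-json, assuming positions are serialized as three-element arrays; the `Vec3Int` stand-in and its `Reads` are illustrative (the real type and its format live in `com.scalableminds.util.geometry`):

```scala
import play.api.libs.json._

// Stand-in for com.scalableminds.util.geometry.Vec3Int, which ships its own Format.
case class Vec3Int(x: Int, y: Int, z: Int)

object Vec3Int {
  // Assumes positions are serialized as three-element arrays, e.g. [1024, 1024, 512].
  implicit val vec3IntReads: Reads[Vec3Int] = Reads {
    case JsArray(Seq(JsNumber(x), JsNumber(y), JsNumber(z))) =>
      JsSuccess(Vec3Int(x.toInt, y.toInt, z.toInt))
    case other => JsError(s"Expected an [x, y, z] array, got $other")
  }
}

object ViewConfigurationSketch {
  // Mirrors the JsonHelper.jsResultToOpt usage from the diff: keep successes, drop errors.
  def jsResultToOpt[T](result: JsResult[T]): Option[T] = result.asOpt

  def main(args: Array[String]): Unit = {
    val adminViewConfiguration: Map[String, JsValue] =
      Map("position" -> Json.parse("[1024, 1024, 512]"), "zoom" -> JsNumber(1.5))
    val defaultCenterOpt: Option[Vec3Int] =
      adminViewConfiguration.get("position").flatMap(js => jsResultToOpt(js.validate[Vec3Int]))
    println(defaultCenterOpt) // Some(Vec3Int(1024,1024,512))
  }
}
```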