diff --git a/NEWS.md b/NEWS.md
index 3353086bd..3ffe80c50 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -8,8 +8,8 @@
 - Add issue-based artifact-networks, in which issues form vertices connected by edges that represent issue references. If possible, disambiguate duplicate JIRA issue references that originate from [codeface-extraction](https://github.com/se-sic/codeface-extraction) (PR #244, PR #249, 98a93ee721a293410623aafe46890cfba9d81e72, 771bcc8d961d419b53a1e891e9dc536371f1143b, 368e79264adf5a5358c04518c94ad2e1c13e212b, fa3167c289c9785f3a5db03d9724848f1441a63d, 4646d581d5e1f63260692b396a8bd8f51b0da48fda, ed77bd726bf92e06c2fc9145a5847787a8d0588b)
 - Add a new `split.data.by.bins` function (not to be confused with a previously existing function that had the same name and was renamed in this context), which splits data based on given activity-based bins (PR #244, ece569ceaf557bb38cd0cfad437b69b30fe8a698, ed5feb214a123b605c9513262f187cfd72b9e1f4)
-- Add new `assert.sparse.matrices.equal` function to compare two sparse matrices for equality for testing purposes (PR #248, 9784cdf12d1497ee122e2ae73b768b8c334210d4, d9f1a8d90e00a634d7caeb5e7f8f262776496838)
-- Add tests for file `util-networks-misc.R` for issue #242 (PR #248, f3202a6f96723d11c170346556d036cf087521c8, 030574b9d0f3435db4032d0e195a3d407fb7244b, 380b02234275127297fcd508772c69db21c216de, 8b803c50d60fc593e4e527a08fd4c2068d801a48, 7335c3dd4d0302b024a66d18701d9800ed3fe806, 6b600df04bec1fe70c272604f274ec5309840e65)
+- Add an `assert.sparse.matrices.equal` function to compare two sparse matrices for equality for testing purposes (PR #248, 9784cdf12d1497ee122e2ae73b768b8c334210d4, d9f1a8d90e00a634d7caeb5e7f8f262776496838)
+- Add tests for file `util-networks-misc.R` (#242, PR #248, PR #258, f3202a6f96723d11c170346556d036cf087521c8, 030574b9d0f3435db4032d0e195a3d407fb7244b, 380b02234275127297fcd508772c69db21c216de, 8b803c50d60fc593e4e527a08fd4c2068d801a48, 7335c3dd4d0302b024a66d18701d9800ed3fe806, 6b600df04bec1fe70c272604f274ec5309840e65, a53fab85358b223af43749a088ad02e9fbcb0a30, faf19fc369beb901b556ecb8c4fa0bf6f1bd6304)
 - Add the possibility to simplify edges of multiple-relation networks into a single edge at all instead of a single edge per relation (PR #250, PR #255, 2105ea89b5227e7c9fa78fea9de1977f2d9e8faa, a34b5bd50351b9ccf3cc45fc323cfa2e84d65ea0, 34516415fed599eba0cc7d3cc4a9acd6b26db252, 78f43514962d7651e6b7a1e80ee22ce012f32535, d310fdc38690f0d701cd32c92112c33f7fdde0ff, 58d77b01ecc6a237104a4e72ee5fb9025efeaaf2)
 - Add tests for network simplification (PR #255, 338b06941eec1c9cfdb121e78ce0d9db6b75da19, 8a6f47bc115c10fbbe4eee21985d97aee5c9dc91, e01908c94eccc4dda5f2b3c0746b0eab0172dc07, 7b6848fb86f69db088ce6ef2bea8315ac94d48f9, 666d78444ffcb3bc8b36f2121284e4840176618e)
 - Add `get.bin.dates.from.ranges` function to convert date ranges into bins format (PR #249, a1842e9be46596321ee86860fd87d17a3c88f50f, 858b1812ebfc3194cc6a03c99f3ee7d161d1ca15)
@@ -25,8 +25,10 @@
 - Throw an error in `convert.adjacency.matrix.list.to.array` if the function is called with wrong parameters (PR #248, ece2d38b4972745af3a83e06f32317a06465a345, 1a3e510df15f5fa4e920e9fce3e0e162c27cd6d1)
 - Rename `compare.networks` to `assert.networks.equal` to better match the purpose of the function (PR #248, d9f1a8d90e00a634d7caeb5e7f8f262776496838)
 - Explicitly add R version 4.3 to the CI test pipeline (9f346d5bc3cfc553f01e5e80f0bbe51e1dc2b53e)
-- Simplify call chain-, and branching-routes in network-splitting functions and consequently set the `bins` attribute on every output network-split (while minimizing recalculations) (PR #249, PR #257, a1842e9be46596321ee86860fd87d17a3c88f50f, 8695fbe7f21ccaa3ccd6d1016e754017d387b1fa)
+- Simplify call chains and branching routes in network-splitting functions and consequently set the `bins` attribute on every output network-split (while minimizing recalculations) (PR #249, #256, PR #257, a1842e9be46596321ee86860fd87d17a3c88f50f, 8695fbe7f21ccaa3ccd6d1016e754017d387b1fa)
 - Test for the presence and validity of the `bins` attribute on network-, and data-splits (PR #249, c064affcfff2eb170d8bdcb39d837a7ff62b2cbd, 93051ab848ec94de138b0513dac22f6da0d20885)
+- Add a check for empty networks in the functions `metrics.scale.freeness` and `metrics.is.scale.free` and return `NA` if the network is empty (29418f2da38de8c39ec2a1fb3d445b63f320be40)
+- Throw an error in `split.data.time.based.by.timestamps` if no custom event timestamps are available in the ProjectData object (6305adcee7f18747141994b00bdd94641f95e86f)
 
 ### Fixed
 
@@ -34,10 +36,10 @@
 - Fix an issue in activity-based splitting where elements close to the border of bins might be assigned to the wrong bin. The issue was caused by the usage of `split.data.time.based` inside `split.data.activity.based` to split data into the previously derived bins, when elements close to bin borders share the same timestamps. It is fixed by replacing `split.data.time.based` by `split.data.by.bins` (PR #244, ece569ceaf557bb38cd0cfad437b69b30fe8a698)
 - Remove the last range when using a sliding-window approach and the last range's elements are fully contained in the second last range (PR #244, 48ef4fa685adf6e5d85281e5b90a8ed8f6aeb197, 943228fbc91eed6854dacafa7075441e58b22675)
 - Rename vertex attribute `IssueEvent` to `Issue` in multi-networks, to be consistent with bipartite-networks (PR #244, 26d7b7e9fd6d33d1c0a8a08f19c5c2e30346a3d9)
-- Fix `get.expanded.adjacency` to work if the provided author list does not contain all authors from network and add a warning when that happens since it causes some authors from the network to be lost in the resulting matrix (PR #248, ff59017e114b10812dcfb1704a19e01fc1586a13)
-- Fix `get.expanded.adjacency.matrices` to have correct names for the columns and rows (PR #248, e72eff864a1cb1a4aecd430e450d4a6a5044fdf2)
+- Fix `get.expanded.adjacency` to work if the provided author list does not contain all authors from the network and add a warning when that happens since it causes some authors from the network to be lost in the resulting matrix (PR #248, ff59017e114b10812dcfb1704a19e01fc1586a13)
+- Fix `get.expanded.adjacency.matrices` to have correct names for the columns and rows (PR #248, PR #258, e72eff864a1cb1a4aecd430e450d4a6a5044fdf2, a53fab85358b223af43749a088ad02e9fbcb0a30)
 - Fix `get.expanded.adjacency.cumulated` so that it works if `weighted` parameter is set to `FALSE` (PR #248, 2fb9a5d446653f6aee808cbfc87c2dafeb9a749a)
-- Fix broken error loggingin `metrics.smallworldness` (03e06881f06abf30d44b69d7988873f20b95232d)
+- Fix broken error logging in `metrics.smallworldness` (03e06881f06abf30d44b69d7988873f20b95232d)
 - Fix multi-network construction to work with igraph version 2.0.1.1, which does not allow to add an empty list of vertices (PR #250, 5547896faa279f6adaae4b2b77c7ab9623ddf256)
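## Example: a minimal sketch of the new `assert.sparse.matrices.equal` helper
## from the NEWS entries above, mirroring its use in the rewritten test file
## further below; it assumes the library's R files and test utilities are
## sourced and the Matrix package is installed.
authors = c("Dieter", "Heinz")
m.expected = Matrix::sparseMatrix(i = c(1), j = c(2), x = 1, dims = c(2, 2), repr = "T")
rownames(m.expected) = authors
colnames(m.expected) = authors
m.actual = m.expected
assert.sparse.matrices.equal(m.expected, m.actual) ## passes; differing entries or dimnames would fail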
diff --git a/showcase.R b/showcase.R
index 42a3e2e02..4a2c9a72e 100644
--- a/showcase.R
+++ b/showcase.R
@@ -16,7 +16,7 @@
 ## Copyright 2017 by Christian Hechtl
 ## Copyright 2017 by Felix Prasse
 ## Copyright 2017-2018 by Thomas Bock
-## Copyright 2020-2021 by Thomas Bock
+## Copyright 2020-2021, 2024 by Thomas Bock
 ## Copyright 2018 by Jakob Kronawitter
 ## Copyright 2019 by Klara Schlueter
 ## Copyright 2020 by Anselm Fehnker
@@ -219,7 +219,7 @@ cf.data = split.data.time.based(x.data, bins = mybins)
 ## construct (author) networks from range data
 my.networks = lapply(cf.data, function(range.data) {
     y = NetworkBuilder$new(project.data = range.data, network.conf = net.conf)
-    return (y$get.author.network())
+    return(y$get.author.network())
 })
 ## add commit-count vertex attributes
 sample = add.vertex.attribute.author.commit.count(my.networks, x.data, aggregation.level = "range")
diff --git a/tests/test-util-networks-misc.R b/tests/test-networks-misc.R
similarity index 94%
rename from tests/test-util-networks-misc.R
rename to tests/test-networks-misc.R
index 9be2029fc..3e7d72351 100644
--- a/tests/test-util-networks-misc.R
+++ b/tests/test-networks-misc.R
@@ -12,6 +12,7 @@
 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 ##
 ## Copyright 2024 by Leo Sendelbach
+## Copyright 2024 by Thomas Bock
 ## All Rights Reserved.
 
 
@@ -33,10 +34,10 @@
 test_that("getting all authors of a list of networks, list length 0", {
 
     ## Act
     result = get.author.names.from.networks(networks = list(), globally = TRUE)
-    
+
     ## Assert
     expected = list(c())
-    
+
     expect_equal(expected, result)
 })
 
@@ -59,7 +60,7 @@ test_that("getting all authors of a list of networks, list length 1", {
 
     ## Assert
     expected = list(c("Dieter", "Heinz", "Klaus"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -83,7 +84,7 @@ test_that("getting all authors of a list of networks, list length 1, not global"
 
     ## Assert
     expected = list(c("Dieter", "Heinz", "Klaus"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -118,7 +119,7 @@ test_that("getting all authors of a list of networks, list length 2", {
 
     ## Assert
     expected = list(c("Detlef", "Dieter", "Heinz", "Klaus"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -146,13 +147,13 @@ test_that("getting all authors of a list of networks, list length 2, not global"
         to = "Dieter"
     )
     second.network = igraph::graph.data.frame(second.edges, directed = FALSE, vertices = second.vertices)
-    
+
     ## Act
     result = get.author.names.from.networks(networks = list(first.network, second.network), globally = FALSE)
 
     ## Assert
     expected = list(c("Dieter", "Heinz", "Klaus"), c("Detlef", "Dieter"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -160,10 +161,10 @@
 test_that("getting all authors of a list of data ranges, list length 0", {
 
     ## Act
     result = get.author.names.from.data(data.ranges = list())
-    
+
     ## Assert
     expected = list(c())
-    
+
     expect_equal(expected, result)
 })
 
@@ -176,11 +177,11 @@ test_that("getting all authors of a list of data ranges, list length 1", {
 
     ## Act
     result = get.author.names.from.data(data.ranges = list(range.data))
-    
+
     ## Assert
-    expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans", 
+    expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans",
                       "Karl", "Olaf", "Thomas", "udo"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -193,11 +194,11 @@ test_that("getting all authors of a list of data ranges, list length 1, not glob
 
     ## Act
     result = get.author.names.from.data(data.ranges = list(range.data), globally = FALSE)
-    
+
     ## Assert
-    expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans", 
+    expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans",
                       "Karl", "Olaf", "Thomas", "udo"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -211,11 +212,11 @@ test_that("getting all authors of a list of data ranges, list length 2", {
 
     ## Act
     result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two))
-    
+
     ## Assert
-    expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans", 
+    expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans",
                       "Karl", "Max", "Olaf", "Thomas", "udo"))
-    
+
     expect_equal(expected, result)
 })
@@ -229,11 +230,11 @@ test_that("getting all authors of a list of data ranges, list length 2, not glob
 
     ## Act
     result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two), globally = FALSE)
-    
+
     ## Assert
     expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans",
                       "Karl", "Olaf", "Thomas", "udo"), c("Björn", "Karl", "Max", "Olaf", "Thomas"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -246,14 +247,14 @@ test_that("getting all authors of a list of data ranges by data source 'mails',
     range.data.two = proj.data.base$get.data.cut.to.same.date("issues")
 
     ## Act
-    result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two), 
+    result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two),
                                         data.sources = "mails", globally = FALSE)
-    
+
     ## Assert
-    
+
     expected = list(c("Björn", "Fritz fritz@example.org","georg", "Hans",
                       "Olaf", "Thomas", "udo"), c("Björn", "Olaf", "Thomas"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -266,12 +267,12 @@ test_that("getting all authors of a list of data ranges by data source 'issues',
     range.data.two = proj.data.base$get.data.cut.to.same.date("issues")
 
     ## Act
-    result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two), 
+    result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two),
                                         data.sources = "issues", globally = FALSE)
-    
+
     ## Assert
     expected = list(c("Björn", "Karl", "Olaf", "Thomas"), c("Björn","Karl", "Max", "Olaf", "Thomas"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -284,13 +285,13 @@ test_that("getting all authors of a list of data ranges by data source 'commits'
     range.data.two = proj.data.base$get.data.cut.to.same.date("issues")
 
     ## Act
-    result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two), 
+    result = get.author.names.from.data(data.ranges = list(range.data.one, range.data.two),
                                         data.sources = "commits", globally = FALSE)
-    
+
     ## Assert
-    
+
     expected = list(c("Björn", "Olaf"), c("Björn", "Olaf", "Thomas"))
-    
+
     expect_equal(expected, result)
 })
 
@@ -313,15 +314,15 @@ test_that("getting a sparse adjacency matrix for a network, single edge, matchin
                                                                              length(authors.in)), repr = "T")
     rownames(matrix.out) = authors.in
     colnames(matrix.out) = authors.in
-    
+
     matrix.out["Heinz", "Dieter"] = 1
     matrix.out["Dieter", "Heinz"] = 1
-    
+
     ## Act
-    result = get.expanded.adjacency(network =network.in, authors = authors.in)
-    
+    result = get.expanded.adjacency(network = network.in, authors = authors.in)
+
     ## Assert
-    
+
     expect_equal(matrix.out, result)
 })
 
@@ -348,12 +349,12 @@ test_that("getting a sparse adjacency matrix for a network, single edge, fewer a
 
     matrix.out["Heinz", "Dieter"] = 1
     matrix.out["Dieter", "Heinz"] = 1
-    
+
     ## Act
     result = get.expanded.adjacency(network = network.in, authors = authors.in)
-    
+
     ## Assert
-    
+
     expect_equal(matrix.out, result)
 })
 
@@ -380,12 +381,12 @@ test_that("getting a sparse adjacency matrix for a network, single edge, more au
 
     matrix.out["Heinz", "Dieter"] = 1
     matrix.out["Dieter", "Heinz"] = 1
-    
+
     ## Act
-    result = get.expanded.adjacency(network =network.in, authors = authors.in)
-    
+    result = get.expanded.adjacency(network = network.in, authors = authors.in)
+
     ## Assert
-    
+
     expect_equal(matrix.out, result)
 })
@@ -412,12 +413,12 @@ test_that("getting a sparse adjacency matrix for a network, single edge, no matc
 
     matrix.out["Heinz", "Dieter"] = 1
     matrix.out["Dieter", "Heinz"] = 1
-    
+
     ## Act
-    result = get.expanded.adjacency(network =network.in, authors = authors.in)
-    
+    result = get.expanded.adjacency(network = network.in, authors = authors.in)
+
     ## Assert
-    
+
     expect_equal(matrix.out, result)
 })
 
@@ -441,12 +442,12 @@ test_that("getting a sparse adjacency matrix for a network, single edge, no over
                                                                              length(authors.in)), repr = "T")
     rownames(matrix.out) = authors.in
     colnames(matrix.out) = authors.in
-    
+
     ## Act
-    result = get.expanded.adjacency(network =network.in, authors = authors.in)
-    
+    result = get.expanded.adjacency(network = network.in, authors = authors.in)
+
     ## Assert
-    
+
     expect_equal(matrix.out, result)
 })
 
@@ -477,9 +478,9 @@ test_that("getting a sparse adjacency matrix for a network, two edges, more auth
     matrix.out["Klaus", "Dieter"] = 1
     matrix.out["Dieter", "Heinz"] = 1
     matrix.out["Dieter", "Klaus"] = 1
-    
+
     ## Act
-    result = get.expanded.adjacency(network =network.in, authors = authors.in)
+    result = get.expanded.adjacency(network = network.in, authors = authors.in)
 
     ## Assert
     expect_equal(matrix.out, result)
@@ -513,10 +514,10 @@ test_that("getting a sparse adjacency matrix for a network, three edges, more au
     matrix.out["Klaus", "Dieter"] = 3
     matrix.out["Dieter", "Heinz"] = 5
     matrix.out["Dieter", "Klaus"] = 3
-    
+
     ## Act
-    result = get.expanded.adjacency(network =network.in, authors = authors.in, weighted = TRUE)
-    
+    result = get.expanded.adjacency(network = network.in, authors = authors.in, weighted = TRUE)
+
     ## Assert
     expect_equal(matrix.out, result)
 })
 
@@ -555,7 +556,7 @@ test_that("getting a sparse adjacency matrix per network, one network", {
     matrix.out["Klaus", "Dieter"] = 1
     matrix.out["Dieter", "Heinz"] = 1
     matrix.out["Dieter", "Klaus"] = 1
-    
+
     ## Act
     result = get.expanded.adjacency.matrices(networks = list(network.in))
 
@@ -576,19 +577,6 @@ test_that("getting a sparse adjacency matrix per network, two networks", {
         to = c("Dieter", "Klaus", "Heinz")
     )
     network.in.one = igraph::graph.data.frame(edges, directed = FALSE, vertices = vertices)
-    authors.in.one = sort(c("Heinz", "Dieter", "Klaus"))
-
-    matrix.out.one = Matrix::sparseMatrix(i = c(), j = c(), x = 0, dims = c(length(authors.in.one),
-                                                                            length(authors.in.one)), repr = "T")
-    rownames(matrix.out.one) = authors.in.one
-    colnames(matrix.out.one) = authors.in.one
-
-    # order these statements so that the second arguments are ordered alphabetically
-    # or use the helper function as used below
-    matrix.out.one["Heinz", "Dieter"] = 1
-    matrix.out.one["Klaus", "Dieter"] = 1
-    matrix.out.one["Dieter", "Heinz"] = 1
-    matrix.out.one["Dieter", "Klaus"] = 1
 
     vertices = data.frame(
         name = c("Klaus", "Tobias"),
@@ -600,21 +588,34 @@ test_that("getting a sparse adjacency matrix per network, two networks", {
         to = c("Tobias")
     )
     network.in.two = igraph::graph.data.frame(edges, directed = FALSE, vertices = vertices)
-    authors.in.two = sort(c("Klaus", "Tobias"))
-    matrix.out.two = Matrix::sparseMatrix(i = c(), j = c(), x = 0, dims = c(length(authors.in.two),
-                                                                            length(authors.in.two)), repr = "T")
-    rownames(matrix.out.two) = authors.in.two
-    colnames(matrix.out.two) = authors.in.two
+    all.authors = sort(c("Heinz", "Dieter", "Klaus", "Tobias"))
+
+    matrix.out.one = Matrix::sparseMatrix(i = c(), j = c(), x = 0, dims = c(length(all.authors),
+                                                                            length(all.authors)), repr = "T")
+    rownames(matrix.out.one) = all.authors
+    colnames(matrix.out.one) = all.authors
 
-    # order these statements so that the second arguments are ordered alphabetically
+    # order these statements so that the second arguments are ordered alphabetically
+    # or use the helper function as used below
+    matrix.out.one["Heinz", "Dieter"] = 1
+    matrix.out.one["Klaus", "Dieter"] = 1
+    matrix.out.one["Dieter", "Heinz"] = 1
+    matrix.out.one["Dieter", "Klaus"] = 1
+
+    matrix.out.two = Matrix::sparseMatrix(i = c(), j = c(), x = 0, dims = c(length(all.authors),
+                                                                            length(all.authors)), repr = "T")
+    rownames(matrix.out.two) = all.authors
+    colnames(matrix.out.two) = all.authors
+
+    # order these statements so that the second arguments are ordered alphabetically
     # or use the helper function as used below
     matrix.out.two["Tobias", "Klaus"] = 1
     matrix.out.two["Klaus", "Tobias"] = 1
-    
+
     ## Act
     result = get.expanded.adjacency.matrices(networks = list(network.in.one, network.in.two))
-    
+
     ## Assert
     expect_equal(list(matrix.out.one, matrix.out.two), result)
 })
 
@@ -660,10 +661,10 @@ test_that("getting cumulative sums of adjacency matrices generated from networks
     matrix.out.two["Klaus", "Dieter"] = 1
     matrix.out.two["Dieter", "Heinz"] = 1
     matrix.out.two["Dieter", "Klaus"] = 1
-    
+
     ## Act
     result = get.expanded.adjacency.cumulated(networks = list(network.in.one, network.in.two))
-    
+
     ## Assert
     assert.sparse.matrices.equal(matrix.out.one, result[[1]])
     assert.sparse.matrices.equal(matrix.out.two, result[[2]])
@@ -707,21 +708,21 @@ test_that("getting cumulative sums of adjacency matrices generated from networks
                                                                                 length(authors.in.two)), repr = "T")
     rownames(matrix.out.two) = authors.in.two
     colnames(matrix.out.two) = authors.in.two
-    
+
     matrix.out.two["Heinz", "Dieter"] = 2
     matrix.out.two["Klaus", "Dieter"] = 3
    matrix.out.two["Dieter", "Heinz"] = 2
     matrix.out.two["Dieter", "Klaus"] = 3
-    
+
     ## Act
     result = get.expanded.adjacency.cumulated(networks = list(network.in.one, network.in.two), weighted = TRUE)
-    
+
     ## Assert
     assert.sparse.matrices.equal(matrix.out.one, result[[1]])
     assert.sparse.matrices.equal(matrix.out.two, result[[2]])
 })
 
-test_that("getting cumulative sums of adjacency matrices generated from networks, 
+test_that("getting cumulative sums of adjacency matrices generated from networks,
           two networks, then convert to array", {
 
     ## Arrange
@@ -760,12 +761,12 @@ test_that("getting cumulative sums of adjacency matrices generated from networks
     ## Act
     result.adjacency = get.expanded.adjacency.cumulated(networks = list(network.in.one, network.in.two))
     result.array = convert.adjacency.matrix.list.to.array(result.adjacency)
-    
+
     ## Assert
     expect_equal(expected.array, result.array)
 })
 
-test_that("getting cumulative sums of adjacency matrices generated from networks, 
+test_that("getting cumulative sums of adjacency matrices generated from networks,
           two networks, weighted, then convert to array", {
 
     ## Arrange
@@ -807,7 +808,7 @@ test_that("getting cumulative sums of adjacency matrices generated from networks
     ## Act
     result.adjacency = get.expanded.adjacency.cumulated(networks = list(network.in.one, network.in.two), weighted = TRUE)
     result.array = convert.adjacency.matrix.list.to.array(result.adjacency)
-    
+
     ## Assert
     expect_equal(expected.array, result.array)
-})
\ No newline at end of file
+})
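## Example: a small sketch of the warning behavior exercised by the tests above.
## Per the fix to `get.expanded.adjacency` (see util-networks-misc.R further
## below), authors present in the network but missing from the provided author
## list are dropped from the matrix and reported via a warning. Assumes the
## library's R files are sourced.
edges = data.frame(from = "Heinz", to = "Dieter")
vertices = data.frame(name = c("Heinz", "Dieter"))
network = igraph::graph.data.frame(edges, directed = FALSE, vertices = vertices)
adjacency = get.expanded.adjacency(network = network, authors = c("Dieter"))
## -> warning: "The network had 1 authors that will not be displayed in the matrix!"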
diff --git a/util-data-misc.R b/util-data-misc.R
index 6d0d1803c..61d5bb840 100644
--- a/util-data-misc.R
+++ b/util-data-misc.R
@@ -20,6 +20,7 @@
 ## Copyright 2021 by Johannes Hostert
 ## Copyright 2021 by Christian Hechtl
 ## Copyright 2022 by Jonathan Baumann
+## Copyright 2024 by Thomas Bock
 ## All Rights Reserved.
 
 ## / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / /
@@ -630,7 +631,7 @@ get.issue.comment.count = function(proj.data, type = c("all", "issues", "pull.re
     issue.id.to.events = get.key.to.value.from.df(df, "issue.id", "event.name")
     issue.id.to.comment.count = lapply(issue.id.to.events, function(df) {
         event.names = df[["data.vertices"]]
-        return (length(event.names[event.names == "commented"]))
+        return(length(event.names[event.names == "commented"]))
     })
     logging::logdebug("get.issue.comment.count: finished")
     return(issue.id.to.comment.count)
@@ -745,9 +746,9 @@ get.pr.open.merged.or.closed = function(proj.data, use.unfiltered.data = TRUE) {
                                retained.cols = c("issue.id", "issue.state", "event.name"))
     issue.id.to.events = get.key.to.value.from.df(df, "issue.id", "event.name")
     issue.id.to.state = lapply(issue.id.to.events, function(df) {
-        return (if ("open" %in% df[["issue.state"]] || "reopened" %in% df[["issue.state"]]) "open"
-                else if ("merged" %in% df[["event.name"]]) "merged"
-                else "closed")
+        return(if ("open" %in% df[["issue.state"]] || "reopened" %in% df[["issue.state"]]) "open"
+               else if ("merged" %in% df[["event.name"]]) "merged"
+               else "closed")
     })
     logging::logdebug("get.pr.open.merged.or.closed: finished")
     return(issue.id.to.state)
diff --git a/util-data.R b/util-data.R
index 80470d750..e8c9ee4d1 100644
--- a/util-data.R
+++ b/util-data.R
@@ -13,7 +13,7 @@
 ##
 ## Copyright 2016-2019 by Claus Hunsen
 ## Copyright 2017-2019 by Thomas Bock
-## Copyright 2020-2021, 2023 by Thomas Bock
+## Copyright 2020-2021, 2023-2024 by Thomas Bock
 ## Copyright 2017 by Raphael Nömmer
 ## Copyright 2017-2018 by Christian Hechtl
 ## Copyright 2020 by Christian Hechtl
@@ -90,7 +90,7 @@ DATASOURCE.TO.ADDITIONAL.ARTIFACT.FUNCTION = list(
 #' @return \code{lst}, with the keys changed
 rename.list.keys = function(lst, map.function) {
     names(lst) = lapply(names(lst), map.function)
-    return (lst)
+    return(lst)
 }
 
 ## Combine \code{DATASOURCE.TO.ARTIFACT.FUNCTION}, \code{DATASOURCE.TO.UNFILTERED.ARTIFACT.FUNCTION}
@@ -288,8 +288,8 @@ ProjectData = R6::R6Class("ProjectData",
             }
 
             ## return the mails of the thread with all patchstack mails but the first one being removed
-            return (list(keep = thread[setdiff(seq_len(nrow(thread)), seq_len(i)[-1]), ],
-                         patchstack = thread[seq_len(i), ]))
+            return(list(keep = thread[setdiff(seq_len(nrow(thread)), seq_len(i)[-1]), ],
+                        patchstack = thread[seq_len(i), ]))
         })
 
         ## override thread data with filtered thread data
@@ -579,13 +579,15 @@
             if (private$project.conf$get.value("pasta")) {
                 ## merge PaStA data
                 private$mails.unfiltered = merge(private$mails.unfiltered, private$pasta.mails,
-                                     by = "message.id", all.x = TRUE, sort = FALSE)
+                                                 by = "message.id", all.x = TRUE, sort = FALSE)
 
                 ## sort by date again because 'merge' disturbs the order
-                private$mails.unfiltered = private$mails.unfiltered[order(private$mails.unfiltered[["date"]], decreasing = FALSE), ]
+                private$mails.unfiltered = private$mails.unfiltered[order(private$mails.unfiltered[["date"]],
+                                                                          decreasing = FALSE), ]
 
                 ## remove duplicated revision set ids
-                private$mails.unfiltered[["revision.set.id"]] = lapply(private$mails.unfiltered[["revision.set.id"]], function(rev.id) {
+                private$mails.unfiltered[["revision.set.id"]] = lapply(private$mails.unfiltered[["revision.set.id"]],
+                                                                       function(rev.id) {
                     return(unique(rev.id))
                 })
             }
@@ -669,10 +671,11 @@
            if (private$project.conf$get.value("synchronicity")) {
                 ## merge synchronicity data
                 private$commits.unfiltered = merge(private$commits.unfiltered, private$synchronicity,
-                                    by = "hash", all.x = TRUE, sort = FALSE)
+                                                   by = "hash", all.x = TRUE, sort = FALSE)
 
                 ## sort by date again because 'merge' disturbs the order
-                private$commits.unfiltered = private$commits.unfiltered[order(private$commits.unfiltered[["date"]], decreasing = FALSE), ]
+                private$commits.unfiltered = private$commits.unfiltered[order(private$commits.unfiltered[["date"]],
+                                                                              decreasing = FALSE), ]
             }
             ## remove previous synchronicity data
             private$commits["synchronicity"] = NULL
@@ -685,16 +688,15 @@
                                          by = "hash", all.x = TRUE, sort = FALSE)
 
                 ## sort by date again because 'merge' disturbs the order
-                private$commits = private$commits[order(private$commits[["date"]],
-                                                        decreasing = FALSE), ]
+                private$commits = private$commits[order(private$commits[["date"]], decreasing = FALSE), ]
             }
 
             ## get the caller function as a string
             stacktrace = get.stacktrace(sys.calls())
             caller = get.second.last.element(stacktrace)
 
-            ## only print warning if this function has not been called by 'cleanup.synchronicity.data' including the case
-            ## that it is called manually, i.e. the stack is too short.
+            ## only print warning if this function has not been called by 'cleanup.synchronicity.data' including the
+            ## case that it is called manually, i.e. the stack is too short.
             if (all(is.na(caller)) || paste(caller, collapse = " ") != "cleanup.synchronicity.data()") {
                 logging::logwarn("There might be synchronicity data that does not appear in the commit data.
                                  To clean this up you can call the function 'cleanup.synchronicity.data()'.")
@@ -894,7 +896,7 @@
             params.keep.environment = params %in% CONF.PARAMETERS.NO.RESET.ENVIRONMENT
 
             ## only reset if at least one of them should cause a reset
-            if(!all(params.keep.environment)) {
+            if (!all(params.keep.environment)) {
                 self$reset.environment()
             } else {
                 ## if the 'commit.messages' parameter has been changed, update the commit message data, since we want to
@@ -1017,7 +1019,7 @@
         #' @seealso get.commits
         get.commits.uncached = function(remove.untracked.files, remove.base.artifact, filter.bots = FALSE) {
             logging::loginfo("Getting commit data (uncached).")
-            return (private$filter.commits(self$get.commits.unfiltered(), remove.untracked.files, remove.base.artifact, filter.bots))
+            return(private$filter.commits(self$get.commits.unfiltered(), remove.untracked.files, remove.base.artifact, filter.bots))
         },
 
         #' Get the list of commits which have the artifact kind configured in the \code{project.conf}.
@@ -1513,7 +1515,7 @@
             if (!self$is.data.source.cached("authors")) {
                 ## read author data
-                author.data = read.authors(self$get.data.path());
+                author.data = read.authors(self$get.data.path())
 
                 ## set author data and add gender data (if configured in the 'project.conf')
                 self$set.authors(author.data)
@@ -1567,7 +1569,7 @@
                                           authors[["author.email"]]), "is.bot"]
             ## retain if entry is FALSE or NA
             bot.indices = !bot.indices | is.na(bot.indices)
-            return (data.to.filter[bot.indices,])
+            return(data.to.filter[bot.indices,])
         },
 
         #' Get the issue data, filtered according to options in the project configuration:
@@ -2137,7 +2139,7 @@
             data = lapply(data.sources, function(data.source){
                 data.source.func = DATASOURCE.TO.ARTIFACT.FUNCTION[[data.source]]
                 data.source.authors = self[[data.source.func]]()[c("author.name", "author.email")]
-                return (data.source.authors)
+                return(data.source.authors)
             })
             data = plyr::rbind.fill(data)
@@ -2145,7 +2147,7 @@
             ## remove duplicates
             data = unique(data)
 
-            return (data)
+            return(data)
         },
 
         #' Get the list of custom event timestamps,
@@ -2158,14 +2160,14 @@
                 && !private$project.conf$get.value("custom.event.timestamps.locked")) {
                 file.name = self$get.project.conf.entry("custom.event.timestamps.file")
-                if(is.na(file.name)) {
+                if (is.na(file.name)) {
                     logging::logwarn("get.custom.event.timestamps: No file configured")
-                    return (list())
+                    return(list())
                 }
                 timestamps = read.custom.event.timestamps(self$get.data.path(), file.name)
                 self$set.custom.event.timestamps(timestamps)
             }
-            return (private$custom.event.timestamps)
+            return(private$custom.event.timestamps)
         },
 
         #' Set the list of custom event timestamps.
@@ -2178,7 +2180,7 @@
                 logging::logerror(error.message)
                 stop(error.message)
             }
-            if(length(custom.event.timestamps) != 0){
+            if (length(custom.event.timestamps) != 0){
                 private$custom.event.timestamps = custom.event.timestamps[
                     order(unlist(get.date.from.string(custom.event.timestamps)))
                 ]
@@ -2305,7 +2307,7 @@ RangeData = R6::R6Class("RangeData", inherit = ProjectData,
         #' or of type character if input was a commit hash or version;
         #' or NULL if the string could not be parsed
         get.range.bounds = function() {
-            return (get.range.bounds(private$range))
+            return(get.range.bounds(private$range))
         },
 
         #' Get the 'revision.callgraph' of the current instance
diff --git a/util-misc.R b/util-misc.R
index 3720d890b..4722ccb2e 100644
--- a/util-misc.R
+++ b/util-misc.R
@@ -16,7 +16,7 @@
 ## Copyright 2017 by Christian Hechtl
 ## Copyright 2017 by Felix Prasse
 ## Copyright 2017-2018 by Thomas Bock
-## Copyright 2020-2021, 2023 by Thomas Bock
+## Copyright 2020-2021, 2023-2024 by Thomas Bock
 ## Copyright 2018-2019 by Jakob Kronawitter
 ## Copyright 2021 by Niklas Schneider
 ## Copyright 2022 by Jonathan Baumann
@@ -977,11 +977,11 @@ get.range.bounds = function(range) {
         start.end = regmatches(range, gregexpr(pattern = pattern[[1]], range))[[1]]
 
         if (length(start.end) == 2) {
-            return (pattern[[2]](start.end))
+            return(pattern[[2]](start.end))
         }
     }
 
-    return (range)
+    return(range)
 }
 
 #' Obtain the start and end dates from given ranges.
diff --git a/util-networks-covariates.R b/util-networks-covariates.R
index 9d560fed0..5b68cbffe 100644
--- a/util-networks-covariates.R
+++ b/util-networks-covariates.R
@@ -14,7 +14,7 @@
 ## Copyright 2017 by Felix Prasse
 ## Copyright 2018-2019 by Claus Hunsen
 ## Copyright 2018-2019 by Thomas Bock
-## Copyright 2021, 2023 by Thomas Bock
+## Copyright 2021, 2023-2024 by Thomas Bock
 ## Copyright 2018-2019 by Klara Schlüter
 ## Copyright 2018 by Jakob Kronawitter
 ## Copyright 2020 by Christian Hechtl
@@ -136,7 +136,7 @@ add.vertex.attribute = function(net.to.range.list, attr.name, default.value, com
         }
     )
 
-    return (nets.with.attr)
+    return(nets.with.attr)
 }
diff --git a/util-networks-metrics.R b/util-networks-metrics.R
index 5111ef5e3..dcdbbcf17 100644
--- a/util-networks-metrics.R
+++ b/util-networks-metrics.R
@@ -193,7 +193,7 @@ metrics.smallworldness = function(network) {
     ## indicator s.delta
     s.delta = gamma / lambda
 
-    return (c(smallworldness = s.delta))
+    return(c(smallworldness = s.delta))
 }
 
 #' Decide, whether a network is smallworld or not.
@@ -217,8 +217,19 @@ metrics.is.smallworld = function(network) {
 #' @param minimum.number.vertices the minimum number of vertices with which
 #'                                a network can be scale free [default: 30]
 #'
-#' @return A dataframe containing the different values, connected to scale-freeness.
+#' @return If the network is empty (i.e., has no vertices), \code{NA}.
+#'         Otherwise, a dataframe containing the different values, connected to scale-freeness.
 metrics.scale.freeness = function(network, minimum.number.vertices = 30) {
+
+    ## check whether the network is empty, i.e., if it has no vertices
+    if (igraph::vcount(network) == 0) {
+        ## print user warning instead of igraph error
+        logging::logwarn("The input network has no vertices. Will return NA right away.")
+
+        ## cancel the execution and return NA
+        return(NA)
+    }
+
     v.degree = sort(igraph::degree(network, mode = "total"), decreasing = TRUE)
 
     ## Power-law fiting
@@ -235,7 +246,7 @@
     ## If less than minimum.number.vertices vertices are in the power law, set x_min manually
     ## to include a minimum of number of vertices and recompute the powerlaw fit
     non.zero.degree.v.count = length(v.degree[v.degree > 0])
-    if(res[["num.power.law"]] < minimum.number.vertices
+    if (res[["num.power.law"]] < minimum.number.vertices
         & non.zero.degree.v.count >= minimum.number.vertices) {
         ## vertex degree is sorted above
         x.min = v.degree[[minimum.number.vertices]]
@@ -248,7 +259,7 @@
     }
 
     ## Remove non conclusive sample sizes
-    if(res[["num.power.law"]] < minimum.number.vertices) {
+    if (res[["num.power.law"]] < minimum.number.vertices) {
         res[["KS.p"]] = 0 # 0 instead of NA
     }
 
@@ -263,10 +274,15 @@
 #'                                a network can be scale free [default: 30]
 #'
 #' @return \code{TRUE}, if the network is scale free,
-#'         \code{FALSE}, otherwise.
+#'         \code{FALSE}, if it is not scale free,
+#'         \code{NA}, if the network is empty (i.e., has no vertices).
 metrics.is.scale.free = function(network, minimum.number.vertices = 30) {
     df = metrics.scale.freeness(network, minimum.number.vertices)
-    return(df[["KS.p"]] >= 0.05)
+    if (is.single.na(df)) {
+        return(NA)
+    } else {
+        return(df[["KS.p"]] >= 0.05)
+    }
 }
 
 #' Calculate the hierarchy values for a network, i.e., the vertex degrees and the local
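## Example: a short sketch of the empty-network guard added above; both metrics
## now log a warning and return NA instead of running into an igraph error.
## Assumes the library's R files are sourced.
empty.network = igraph::make_empty_graph(n = 0, directed = FALSE)
metrics.scale.freeness(empty.network) ## NA (with a warning in the log)
metrics.is.scale.free(empty.network) ## NA as well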
diff --git a/util-networks-misc.R b/util-networks-misc.R
index 544126fa9..a183f6039 100644
--- a/util-networks-misc.R
+++ b/util-networks-misc.R
@@ -14,7 +14,7 @@
 ## Copyright 2016-2017 by Sofie Kemper
 ## Copyright 2016-2017 by Claus Hunsen
 ## Copyright 2016-2018 by Thomas Bock
-## Copyright 2020, 2023 by Thomas Bock
+## Copyright 2020, 2023-2024 by Thomas Bock
 ## Copyright 2017 by Angelika Schmid
 ## Copyright 2019 by Jakob Kronawitter
 ## Copyright 2019-2020 by Anselm Fehnker
@@ -139,15 +139,15 @@ get.expanded.adjacency = function(network, authors, weighted = FALSE) {
         ## get the unweighted sparse adjacency matrix for the current network
         matrix.data = igraph::get.adjacency(network)
     }
-    
+
     network.authors.num = nrow(matrix.data)
     ## order the adjacency matrix and filter out authors that were not in authors list
     if (nrow(matrix.data) > 1) { # for a 1x1 matrix ordering does not work
-        matrix.data = matrix.data[order((rownames(matrix.data)[rownames(matrix.data) %in% authors])), 
+        matrix.data = matrix.data[order((rownames(matrix.data)[rownames(matrix.data) %in% authors])),
                                   order((rownames(matrix.data)[rownames(matrix.data) %in% authors]))]
     }
-    if (network.authors.num > nrow(matrix.data)) { 
+    if (network.authors.num > nrow(matrix.data)) {
         # write a warning with the number of authors from the network that we ignore
         warning.string = sprintf("The network had %d authors that will not be displayed in the matrix!",
                                  network.authors.num - nrow(matrix.data))
@@ -169,7 +169,7 @@ get.expanded.adjacency = function(network, authors, weighted = FALSE) {
 }
 
 #' Calculates a sparse adjacency matrix for each network in the list.
-#' All adjacency matrices are expanded in such a way that the use the same set
+#' All adjacency matrices are expanded in such a way that they use the same set
 #' of authors derived from all networks in the list.
 #'
 #' @param networks list of networks
@@ -178,10 +178,9 @@ get.expanded.adjacency = function(network, authors, weighted = FALSE) {
 #'
 #' @return the list of adjacency matrices
 get.expanded.adjacency.matrices = function(networks, weighted = FALSE){
-    adjacency.matrices = parallel::mclapply(networks, function(network) {
-        active.authors = sort(igraph::V(network)$name)
-        return(get.expanded.adjacency(network = network, authors = active.authors, weighted = weighted))
-    })
+    authors = get.author.names.from.networks(networks)[[1]]
+
+    adjacency.matrices = parallel::mclapply(networks, get.expanded.adjacency, authors, weighted)
 
     return(adjacency.matrices)
 }
@@ -214,7 +213,7 @@ get.expanded.adjacency.cumulated = function(networks, weighted = FALSE) {
             ## search for a non-zero entry and set them to an arbitray number (e.g., 42)
             ## to force that all non-zero entries are correctly set to 1 afterwards
             if (length(matrices.cumulated[[m]]@i) > 0) {
-                
+
                 ## the first non-zero entry of a sparse matrix is at the first position pointed to by
                 ## the lists @i and @j of the matrix. Since these lists store the position 0-based,
                 ## but the list access we use them for is 1-based, we need to add 1 to both values.
@@ -249,6 +248,7 @@ convert.adjacency.matrix.list.to.array = function(adjacency.list){
 
     if (length(adjacency.list) > 1) {
         for (i in 2:length(adjacency.list)) {
+
             if (!identical(rownames, rownames(adjacency.list[[i]])) || !identical(colnames, colnames(adjacency.list[[i]]))) {
                 error.string = sprintf("The matrix at position %d has different col or rownames from the first!", i)
                 logging::logerror(error.string)
@@ -256,7 +256,7 @@ convert.adjacency.matrix.list.to.array = function(adjacency.list){
             }
         }
     }
-    
+
     ## create a 3-dimensional array representing the adjacency matrices (SIENA data format) as result
     array = array(data = 0, dim = c(nrow(adjacency.list[[1]]), nrow(adjacency.list[[1]]), length(adjacency.list)))
     rownames(array) = rownames
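## Example: a sketch of the reworked `get.expanded.adjacency.matrices` above.
## The author set is now derived once from all given networks, so every returned
## matrix shares the same (sorted) row and column names. Assumes the library's
## R files are sourced.
net.one = igraph::graph.data.frame(data.frame(from = "Heinz", to = "Dieter"), directed = FALSE)
net.two = igraph::graph.data.frame(data.frame(from = "Klaus", to = "Tobias"), directed = FALSE)
matrices = get.expanded.adjacency.matrices(networks = list(net.one, net.two))
rownames(matrices[[1]]) ## "Dieter" "Heinz" "Klaus" "Tobias"
rownames(matrices[[2]]) ## identical to the above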
diff --git a/util-networks.R b/util-networks.R
index ba6242762..b02eab694 100644
--- a/util-networks.R
+++ b/util-networks.R
@@ -619,7 +619,7 @@ NetworkBuilder = R6::R6Class("NetworkBuilder",
             attr(bip.relation, "vertex.kind") = private$get.vertex.kind.for.relation(relation)
             attr(bip.relation, "relation") = relation
 
-            return (bip.relation)
+            return(bip.relation)
         })
         names(bip.relations) = relations
 
@@ -1047,9 +1047,9 @@
         ## 1) merge the existing networks
         u = igraph::disjoint_union(authors.net, artifacts.net)
 
-        ## As there is a bug in 'igraph::disjoint_union' in igraph versions 1.4.0, 1.4.1, and 1.4.2
-        ## (see https://github.com/igraph/rigraph/issues/761), we need to adjust the type of the date attribute
-        ## of the outcome of 'igraph::disjoint_union'.
+        ## As there is a bug in 'igraph::disjoint_union' since igraph version 1.4.0, which is still
+        ## present at least up to igraph version 2.0.3 (see https://github.com/igraph/rigraph/issues/761), we need
+        ## to adjust the type of the date attribute of the outcome of 'igraph::disjoint_union'.
         ## Note: The following temporary fix only considers the 'date' attribute. However, this problem could also
         ##       affect several other attributes, whose classes are not adjusted in our temporary fix.
         ##       The following code block should be redundant as soon as igraph has fixed their bug.
@@ -1123,8 +1123,8 @@ construct.edge.list.from.key.value.list = function(list, network.conf, directed
     keys.number = length(list)
 
-    ## if edges in an artifact network contain the \code{artifact} attribute 
-    ## replace it with the \code{author.name} attribute as artifacts cannot cause 
+    ## if edges in an artifact network contain the \code{artifact} attribute
+    ## replace it with the \code{author.name} attribute as artifacts cannot cause
     ## edges in artifact networks, authors can
     edge.attributes = network.conf$get.value("edge.attributes")
     if (artifact.edges) {
@@ -1805,7 +1805,7 @@ get.data.sources.from.relations = function(network) {
     ## check for a \code{character(0)} relation and abort if there is one
     if (length(relation) == 0) {
         logging::logwarn("There seems to be an empty relation in the network. Cannot proceed.")
-        return (NA)
+        return(NA)
     }
 
     ## use the translation constant to get the appropriate data source
diff --git a/util-read.R b/util-read.R
index 25c3a87dc..8cfe1a802 100644
--- a/util-read.R
+++ b/util-read.R
@@ -17,7 +17,7 @@
 ## Copyright 2020-2022 by Christian Hechtl
 ## Copyright 2017 by Felix Prasse
 ## Copyright 2017-2018 by Thomas Bock
-## Copyright 2023 by Thomas Bock
+## Copyright 2023-2024 by Thomas Bock
 ## Copyright 2018 by Jakob Kronawitter
 ## Copyright 2018-2019 by Anselm Fehnker
 ## Copyright 2020-2021, 2023 by Niklas Schneider
@@ -207,7 +207,7 @@ read.commits = function(data.path, artifact) {
 #'
 #' @return the empty dataframe
 create.empty.commits.list = function() {
-    return (create.empty.data.frame(COMMITS.LIST.COLUMNS, COMMITS.LIST.DATA.TYPES))
+    return(create.empty.data.frame(COMMITS.LIST.COLUMNS, COMMITS.LIST.DATA.TYPES))
 }
 
 ## * Mail data -------------------------------------------------------------
@@ -293,7 +293,7 @@ read.mails = function(data.path) {
 #'
 #' @return the empty dataframe
 create.empty.mails.list = function() {
-    return (create.empty.data.frame(MAILS.LIST.COLUMNS, MAILS.LIST.DATA.TYPES))
+    return(create.empty.data.frame(MAILS.LIST.COLUMNS, MAILS.LIST.DATA.TYPES))
 }
 
 ## * Issue data ------------------------------------------------------------
@@ -428,7 +428,7 @@ read.issues = function(data.path, issues.sources = c("jira", "github")) {
 #'
 #' @return the empty dataframe
 create.empty.issues.list = function() {
-    return (create.empty.data.frame(ISSUES.LIST.COLUMNS, ISSUES.LIST.DATA.TYPES))
+    return(create.empty.data.frame(ISSUES.LIST.COLUMNS, ISSUES.LIST.DATA.TYPES))
 }
 
 
@@ -555,7 +555,7 @@ read.authors = function(data.path) {
 #'
 #' @return the empty dataframe
 create.empty.authors.list = function() {
-    return (create.empty.data.frame(AUTHORS.LIST.COLUMNS, AUTHORS.LIST.DATA.TYPES))
+    return(create.empty.data.frame(AUTHORS.LIST.COLUMNS, AUTHORS.LIST.DATA.TYPES))
 }
 
 
@@ -643,7 +643,7 @@ read.gender = function(data.path) {
 #'
 #' @return the empty dataframe
 create.empty.gender.list = function() {
-    return (create.empty.data.frame(GENDER.LIST.COLUMNS, GENDER.LIST.DATA.TYPES))
+    return(create.empty.data.frame(GENDER.LIST.COLUMNS, GENDER.LIST.DATA.TYPES))
 }
 
 
@@ -753,7 +753,7 @@ read.commit.messages = function(data.path) {
 #'
 #' @return the empty dataframe
 create.empty.commit.message.list = function() {
-    return (create.empty.data.frame(COMMIT.MESSAGE.LIST.COLUMNS, COMMIT.MESSAGE.LIST.DATA.TYPES))
+    return(create.empty.data.frame(COMMIT.MESSAGE.LIST.COLUMNS, COMMIT.MESSAGE.LIST.DATA.TYPES))
 }
 
 ## * PaStA data ------------------------------------------------------------
@@ -840,7 +840,7 @@ read.pasta = function(data.path) {
 #'
 #' @return the empty dataframe
 create.empty.pasta.list = function() {
-    return (create.empty.data.frame(PASTA.LIST.COLUMNS, PASTA.LIST.DATA.TYPES))
+    return(create.empty.data.frame(PASTA.LIST.COLUMNS, PASTA.LIST.DATA.TYPES))
 }
 
 ## * Synchronicity data ----------------------------------------------------
@@ -907,7 +907,7 @@ read.synchronicity = function(data.path, artifact, time.window) {
 #'
 #' @return the empty dataframe
 create.empty.synchronicity.list = function() {
-    return (create.empty.data.frame(SYNCHRONICITY.LIST.COLUMNS, SYNCHRONICITY.LIST.DATA.TYPES))
+    return(create.empty.data.frame(SYNCHRONICITY.LIST.COLUMNS, SYNCHRONICITY.LIST.DATA.TYPES))
 }
 
 
@@ -955,7 +955,7 @@ read.custom.event.timestamps = function(data.path, file.name) {
     }
 
     logging::logdebug("read.custom.event.timestamps: finished.")
-    return (timestamps)
+    return(timestamps)
 }
 
 ## Helper functions --------------------------------------------------------
@@ -969,7 +969,7 @@ COMMIT.ID.FORMAT = ""
 #'
 #' @return a vector with the formatted commit ids
 format.commit.ids = function(commit.ids) {
-    return (sprintf(COMMIT.ID.FORMAT, commit.ids))
+    return(sprintf(COMMIT.ID.FORMAT, commit.ids))
 }
 
 ## declare a global format for issue.ids in several data frame columns
diff --git a/util-split.R b/util-split.R
index cdbf00bf2..d68f9caee 100644
--- a/util-split.R
+++ b/util-split.R
@@ -18,7 +18,7 @@
 ## Copyright 2020 by Christian Hechtl
 ## Copyright 2017 by Felix Prasse
 ## Copyright 2017-2018 by Thomas Bock
-## Copyright 2020 by Thomas Bock
+## Copyright 2020, 2024 by Thomas Bock
 ## Copyright 2021 by Niklas Schneider
 ## Copyright 2021 by Johannes Hostert
 ## Copyright 2022 by Jonathan Baumann
@@ -135,7 +135,7 @@ split.data.by.bins = function(project.data, activity.amount, bins, split.basis =
 #' and the last range ends with the last timestamp.
 #'
 #' If timestamps are not provided, the custom event timestamps in \code{project.data} are
-#' used instead.
+#' used instead. If no custom event timestamps are available in \code{project.data}, an error is thrown.
 #'
 #' @param project.data the *Data object from which the data is retrieved
 #' @param bins a vector of timestamps [default: NULL]
@@ -148,9 +148,15 @@ split.data.time.based.by.timestamps = function(project.data, bins = NULL, projec
 
     if (is.null(bins)) { # bins were not provided, use custom timestamps from project
         bins = unlist(project.data$get.custom.event.timestamps())
+
+        if (is.null(bins)) { # stop if no custom timestamps are available
+            logging::logerror("There are no custom timestamps available for splitting (configured file: %s).",
+                              project.data$get.project.conf.entry("custom.event.timestamps.file"))
+            stop("Stopping due to missing data.")
+        }
     }
 
-    return (split.data.time.based(project.data, bins = bins, project.conf.new));
+    return(split.data.time.based(project.data, bins = bins, project.conf.new))
 }
 
 #' Split project data in activity-based ranges as specified
@@ -467,7 +473,7 @@ split.data.time.based.by.ranges = function(project.data, ranges) {
         range.data = split.data.time.based(project.data, bins = start.end, sliding.window = FALSE)[[1]]
 
         ## 2) return the data
-        return (range.data)
+        return(range.data)
     })
     }
 
     return(data.split)
@@ -810,7 +816,7 @@ split.network.time.based.by.ranges = function(network, ranges, remove.isolates =
                                                remove.isolates = remove.isolates)[[1]]
 
         ## 2) return the network
-        return (range.net)
+        return(range.net)
     }
     )
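## Example: a sketch of the new error path in `split.data.time.based.by.timestamps`
## above; `project.data` stands for a hypothetical ProjectData object that has no
## custom event timestamps configured. Assumes the library's R files are sourced.
split.data.time.based.by.timestamps(project.data)
## -> logs an error naming the configured timestamps file and stops with
##    "Stopping due to missing data."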