From f9183a7003ae5db75ae9803d271b55a379b2a031 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Sat, 26 Aug 2023 13:26:54 -0500 Subject: [PATCH 01/21] Initiating v0.9.5.9000 --- DESCRIPTION | 2 +- NEWS.md | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/DESCRIPTION b/DESCRIPTION index 0a97d508..f0c59ad9 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,5 +1,5 @@ Package: FSA -Version: 0.9.5 +Version: 0.9.5.9000 Date: 2023-8-25 Title: Simple Fisheries Stock Assessment Methods Description: A variety of simple fish stock assessment methods. diff --git a/NEWS.md b/NEWS.md index 1df696df..aad80d89 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,3 +1,5 @@ +# FSA 0.9.5.9000 + # FSA 0.9.5 * Fixed FSA-package \alias problem using the "automatic approach" (i.e., adding a "_PACKAGE" line to FSA.R) suggested in an e-mail from Kurt Hornik on 19-Aug-2023. From 4ea604f12a2572da1bedeff473334675292f636a Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Fri, 1 Nov 2024 16:26:36 -0500 Subject: [PATCH 02/21] Updated growthModels docs (fix 112 and 113) --- NEWS.md | 1 + R/growthModels.R | 706 +++++++++++++++++++++++------------------------ 2 files changed, 354 insertions(+), 353 deletions(-) diff --git a/NEWS.md b/NEWS.md index aad80d89..32ee1a22 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,4 +1,5 @@ # FSA 0.9.5.9000 +* `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). # FSA 0.9.5 * Fixed FSA-package \alias problem using the "automatic approach" (i.e., adding a "_PACKAGE" line to FSA.R) suggested in an e-mail from Kurt Hornik on 19-Aug-2023. 
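[Note on the growthModels.R changes that follow: the documentation repeatedly states the "Typical"/"Traditional"/"BevertonHolt" von Bertalanffy parameterization, E[L|t] = Linf*(1-exp(-K*(t-t0))). A minimal sketch of that formula, written in Python rather than the package's R, and using purely illustrative parameter values that are assumptions and not values from FSA:

```python
import math

def vb_typical(t, Linf, K, t0):
    """'Typical' von Bertalanffy growth function: E[L|t] = Linf*(1-exp(-K*(t-t0))).

    Linf = asymptotic mean length, K = exponential rate of approach to Linf,
    t0 = theoretical age when length = 0 (a modeling artifact).
    """
    return Linf * (1.0 - math.exp(-K * (t - t0)))

# Hypothetical parameter values, for demonstration only.
Linf, K, t0 = 450.0, 0.3, -1.0
print(vb_typical(t0, Linf, K, t0))   # predicted mean length is 0 at the theoretical age t0
print(vb_typical(5.0, Linf, K, t0))  # predicted mean length at age 5
```

The Gallucci-Quinn form documented in the same file, (omega/K)*(1-exp(-K*(t-t0))), traces the same curve with Linf re-expressed as omega/K.]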
diff --git a/R/growthModels.R b/R/growthModels.R index d6bea54f..0a1274c5 100644 --- a/R/growthModels.R +++ b/R/growthModels.R @@ -26,7 +26,7 @@ #' \itemize{ #' \item The \sQuote{Ricker2} and \sQuote{QuinnDeriso1} are synonymous, as are \sQuote{Ricker3} and \sQuote{QuinnDeriso2}. #' \item The parameterizations and parameters for the Gompertz function are varied and confusing in the literature. I have attempted to use a uniform set of parameters in these functions, but that makes a direct comparison to the literature difficult. Common sources for Gompertz models are listed in the references below. I make some comments here to aid comparisons to the literature. -#' \item Within FSA, L0 is the mean length at age 0, Linf is the mean asymptotic length, ti is the age at the inflection point, gi is the instantaneous growth rate at the inflection point, t* is a dimensionless parameter related to time/age, and a is a dimensionless parameter related to growth. +#' \item Within FSA, L0 is the mean length at age 0, Linf is the mean asymptotic length, ti is the age at the inflection point, gi is the instantaneous growth rate at the inflection point, t0 is a dimensionless parameter related to time/age, and a is a dimensionless parameter related to growth. #' \item In the Quinn and Deriso (1999) functions (the \sQuote{QuinnDerisoX} functions), the a parameter here is equal to lambda/K there and the gi parameter here is equal to the K parameter there. Also note that their Y is L here. #' \item In the Ricker (1979)[p. 705] functions (the \sQuote{RickerX} functions), the a parameter here is equal to k there and the gi parameter here is equal to the g parameter there. Also note that their w is L here. In the Ricker (1979) functions as presented in Campana and Jones (1992), the a parameter here is equal to k parameter there and the gi parameter here is equal to the G parameter there. Also note that their X is L here. #' \item The function in Ricker (1975)[p. 
232] is the same as \sQuote{Ricker2} where the a parameter here is equal to G there and the gi parameter here is equal to the g parameter there. Also note that their w is L here. @@ -36,7 +36,7 @@ #' \item Richards #' \itemize{ #' \item Within FSA, Linf is the mean asymptotic length, ti is the age at the inflection point, k controls the slope at the inflection point (maximum relative growth rate), b is dimensionless but related to the vertical position (i.e., size) of the inflection point, a is dimensionless but related to the horizontal position (i.e., age) of the inflection point, and L0 is the mean length at age-0. -#' \item The parameterizations (1-6) correspond to functions/equations 1, 4, 5, 6, 7, and 8, respectively, in Tjorve and Tjorve (2010). Note that their A, S, k, d, and B are Linf, a, k, b, and L0, respectively, here (in FSA). THeir (Tjorve and Tjorve 2010) K does not appear here. +#' \item The parameterizations (1-6) correspond to functions/equations 1, 4, 5, 6, 7, and 8, respectively, in Tjorve and Tjorve (2010). Note that their A, S, k, d, and B are Linf, a, k, b, and L0, respectively, here (in FSA). Their (Tjorve and Tjorve 2010) K does not appear here. #' } #' \item logistic #' \itemize{ @@ -52,7 +52,7 @@ #' #' @references Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. #' -#' Campana, S.E. and C.M. Jones. 1992. Analysis of otolith microstructure data. Pages 73-100 In D.K. Stevenson and S.E. Campana, editors. Otolith microstructure examination and analysis. Canadian Special Publication of Fisheries and Aquatic Sciences 117. [Was (is?) from http://www.dfo-mpo.gc.ca/Library/141734.pdf.] +#' Campana, S.E. and C.M. Jones. 1992. Analysis of otolith microstructure data. Pages 73-100 In D.K. Stevenson and S.E. Campana, editors. Otolith microstructure examination and analysis. 
Canadian Special Publication of Fisheries and Aquatic Sciences 117. [Was (is?) from https://waves-vagues.dfo-mpo.gc.ca/library-bibliotheque/141734.pdf.] #' #' Fabens, A. 1965. Properties and fitting of the von Bertalanffy growth curve. Growth 29:265-289. #' @@ -60,45 +60,45 @@ #' #' Gallucci, V.F. and T.J. Quinn II. 1979. Reparameterizing, fitting, and testing a simple growth model. Transactions of the American Fisheries Society, 108:14-25. #' -#' Garcia-Berthou, E., G. Carmona-Catot, R. Merciai, and D.H. Ogle. A technical note on seasonal growth models. Reviews in Fish Biology and Fisheries 22:635-640. [Was (is?) from https://www.researchgate.net/publication/257658359_A_technical_note_on_seasonal_growth_models.] +#' Garcia-Berthou, E., G. Carmona-Catot, R. Merciai, and D.H. Ogle. A technical note on seasonal growth models. Reviews in Fish Biology and Fisheries 22:635-640. #' -#' Gompertz, B. 1825. On the nature of the function expressive of the law of human mortality, and on a new method of determining the value of life contingencies. Philosophical Transactions of the Royal Society of London. 115:513-583. +#' Gompertz, B. 1825. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philosophical Transactions of the Royal Society of London. 115:513-583. #' -#' Haddon, M., C. Mundy, and D. Tarbath. 2008. Using an inverse-logistic model to describe growth increments of Blacklip Abalone (\emph{Haliotis rubra}) in Tasmania. Fishery Bulletin 106:58-71. [Was (is?) from http://aquaticcommons.org/8857/1/haddon_Fish_Bull_2008.pdf.] +#' Haddon, M., C. Mundy, and D. Tarbath. 2008. Using an inverse-logistic model to describe growth increments of blacklip abalone (\emph{Haliotis rubra}) in Tasmania. Fishery Bulletin 106:58-71. [Was (is?) from https://spo.nmfs.noaa.gov/sites/default/files/pdf-content/2008/1061/haddon.pdf.] #' -#' Karkach, A. S. 2006. Trajectories and models of individual growth. 
Demographic Research 15:347-400. [Was (is?) from http://www.demographic-research.org/volumes/vol15/12/15-12.pdf.] +#' Karkach, A. S. 2006. Trajectories and models of individual growth. Demographic Research 15:347-400. [Was (is?) from https://www.demographic-research.org/volumes/vol15/12/15-12.pdf.] #' #' Katsanevakis, S. and C.D. Maravelias. 2008. Modeling fish growth: multi-model inference as a better alternative to a priori using von Bertalanffy equation. Fish and Fisheries 9:178-187. #' -#' Mooij, W.M., J.M. Van Rooij, and S. Wijnhoven. 1999. Analysis and comparison of fish growth from small samples of length-at-age data: Detection of sexual dimorphism in Eurasian Perch as an example. Transactions of the American Fisheries Society 128:483-490. +#' Mooij, W.M., J.M. Van Rooij, and S. Wijnhoven. 1999. Analysis and comparison of fish growth from small samples of length-at-age data: Detection of sexual dimorphism in Eurasian perch as an example. Transactions of the American Fisheries Society 128:483-490. #' -#' Polacheck, T., J.P. Eveson, and G.M. Laslett. 2004. Increase in growth rates of southern Bluefin Tuna (\emph{Thunnus maccoyii}) over four decades: 1960 to 2000. Canadian Journal of Fisheries and Aquatic Sciences, 61:307-322. +#' Polacheck, T., J.P. Eveson, and G.M. Laslett. 2004. Increase in growth rates of southern bluefin tuna (\emph{Thunnus maccoyii}) over four decades: 1960 to 2000. Canadian Journal of Fisheries and Aquatic Sciences, 61:307-322. #' #' Quinn, T. J. and R. B. Deriso. 1999. Quantitative Fish Dynamics. Oxford University Press, New York, New York. 542 pages. #' -#' Quist, M.C., M.A. Pegg, and D.R. DeVries. 2012. Age and Growth. Chapter 15 in A.V. Zale, D.L Parrish, and T.M. Sutton, Editors Fisheries Techniques, Third Edition. American Fisheries Society, Bethesda, MD. +#' Quist, M.C., M.A. Pegg, and D.R. DeVries. 2012. Age and growth. Chapter 15 in A.V. Zale, D.L. Parrish, and T.M. Sutton, editors. Fisheries Techniques, Third Edition. 
American Fisheries Society, Bethesda, MD. #' #' Richards, F. J. 1959. A flexible growth function for empirical use. Journal of Experimental Biology 10:290-300. #' -#' Ricker, W.E. 1975. Computation and interpretation of biological statistics of fish populations. Technical Report Bulletin 191, Bulletin of the Fisheries Research Board of Canada. [Was (is?) from http://www.dfo-mpo.gc.ca/Library/1485.pdf.] +#' Ricker, W.E. 1975. Computation and interpretation of biological statistics of fish populations. Technical Report Bulletin 191, Bulletin of the Fisheries Research Board of Canada. [Was (is?) from https://publications.gc.ca/collections/collection_2015/mpo-dfo/Fs94-191-eng.pdf.] #' -#' Ricker, W.E. 1979. Growth rates and models. Pages 677-743 In W.S. Hoar, D.J. Randall, and J.R. Brett, editors. Fish Physiology, Vol. 8: Bioenergetics and Growth. Academic Press, NY, NY. [Was (is?) from https://books.google.com/books?id=CB1qu2VbKwQC&pg=PA705&lpg=PA705&dq=Gompertz+fish&source=bl&ots=y34lhFP4IU&sig=EM_DGEQMPGIn_DlgTcGIi_wbItE&hl=en&sa=X&ei=QmM4VZK6EpDAgwTt24CABw&ved=0CE8Q6AEwBw#v=onepage&q=Gompertz\%20fish&f=false.] +#' Ricker, W.E. 1979. Growth rates and models. Pages 677-743 In W.S. Hoar, D.J. Randall, and J.R. Brett, editors. Fish Physiology, Vol. 8: Bioenergetics and Growth. Academic Press, New York, NY. [Was (is?) from https://books.google.com/books?id=CB1qu2VbKwQC&pg=PA705&lpg=PA705&dq=Gompertz+fish&source=bl&ots=y34lhFP4IU&sig=EM_DGEQMPGIn_DlgTcGIi_wbItE&hl=en&sa=X&ei=QmM4VZK6EpDAgwTt24CABw&ved=0CE8Q6AEwBw#v=onepage&q=Gompertz\%20fish&f=false.] #' #' Schnute, J. 1981. A versatile growth model with statistically stable parameters. Canadian Journal of Fisheries and Aquatic Sciences, 38:1128-1140. #' -#' Somers, I. F. 1988. On a seasonally oscillating growth function. Fishbyte 6(1):8-11. [Was (is?) from http://www.worldfishcenter.org/Naga/na_2914.pdf.] +#' Somers, I. F. 1988. On a seasonally oscillating growth function. Fishbyte 6(1):8-11. [Was (is?) 
from https://www.fishbase.us/manual/English/fishbaseSeasonal_Growth.htm.] #' #' Tjorve, E. and K. M. C. Tjorve. 2010. A unified approach to the Richards-model family for use in growth analyses: Why we need only two model forms. Journal of Theoretical Biology 267:417-425. [Was (is?) from https://www.researchgate.net/profile/Even_Tjorve/publication/46218377_A_unified_approach_to_the_Richards-model_family_for_use_in_growth_analyses_why_we_need_only_two_model_forms/links/54ba83b80cf29e0cb04bd24e.pdf.] #' #' Troynikov, V. S., R. W. Day, and A. M. Leorke. Estimation of seasonal growth parameters using a stochastic Gompertz model for tagging data. Journal of Shellfish Research 17:833-838. [Was (is?) from https://www.researchgate.net/profile/Robert_Day2/publication/249340562_Estimation_of_seasonal_growth_parameters_using_a_stochastic_gompertz_model_for_tagging_data/links/54200fa30cf203f155c2a08a.pdf.] #' -#' Vaughan, D. S. and T. E. Helser. 1990. Status of the Red Drum stock of the Atlantic coast: Stock assessment report for 1989. NOAA Technical Memorandum NMFS-SEFC-263, 117 p. [Was (is?) from http://docs.lib.noaa.gov/noaa_documents/NMFS/SEFSC/TM_NMFS_SEFSC/NMFS_SEFSC_TM_263.pdf.] +#' Vaughan, D. S. and T. E. Helser. 1990. Status of the Red Drum stock of the Atlantic coast: Stock assessment report for 1989. NOAA Technical Memorandum NMFS-SEFC-263, 117 p. [Was (is?) from https://repository.library.noaa.gov/view/noaa/5927/noaa_5927_DS1.pdf.] #' #' Wang, Y.-G. 1998. An improved Fabens method for estimation of growth parameters in the von Bertalanffy model with individual asymptotes. Canadian Journal of Fisheries and Aquatic Sciences 55:397-400. #' #' Weisberg, S., G.R. Spangler, and L. S. Richmond. 2010. Mixed effects models for fish growth. Canadian Journal of Fisheries And Aquatic Sciences 67:269-277. #' -#' Winsor, C.P. 1932. The Gompertz curve as a growth curve. Proceedings of the National Academy of Sciences. 18:1-8. [Was (is?) 
from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1076153/pdf/pnas01729-0009.pdf.] +#' Winsor, C.P. 1932. The Gompertz curve as a growth curve. Proceedings of the National Academy of Sciences. 18:1-8. [Was (is?) from https://pmc.ncbi.nlm.nih.gov/articles/PMC1076153/pdf/pnas01729-0009.pdf.] #' #' @keywords manip hplot #' @@ -292,82 +292,82 @@ NULL #' @rdname growthModels #' @export vbFuns <- function(param=c("Typical","typical","Traditional","traditional","BevertonHolt", - "Original","original","vonBertalanffy", - "GQ","GallucciQuinn","Mooij","Weisberg","Ogle", - "Schnute","Francis","Laslett","Polacheck", - "Somers","Somers2","Pauly", - "Fabens","Fabens2","Wang","Wang2","Wang3", - "Francis2","Francis3"), + "Original","original","vonBertalanffy", + "GQ","GallucciQuinn","Mooij","Weisberg","Ogle", + "Schnute","Francis","Laslett","Polacheck", + "Somers","Somers2","Pauly", + "Fabens","Fabens2","Wang","Wang2","Wang3", + "Francis2","Francis3"), simple=FALSE,msg=FALSE) { Ogle <- function(t,Linf,K=NULL,tr=NULL,Lr=NULL) { if (length(Linf)==4) { - Lr <- Linf[[4]] - tr <- Linf[[3]] - K <- Linf[[2]] - Linf <- Linf[[1]] } + Lr <- Linf[[4]] + tr <- Linf[[3]] + K <- Linf[[2]] + Linf <- Linf[[1]] } Lr+(Linf-Lr)*(1-exp(-K*(t-tr))) } SOgle <- function(t,Linf,K,tr,Lr) { Lr+(Linf-Lr)*(1-exp(-K*(t-tr))) } Typical <- typical <- Traditional <- traditional <- BevertonHolt <- function(t,Linf,K=NULL,t0=NULL) { - if (length(Linf)==3) { K <- Linf[[2]] - t0 <- Linf[[3]] - Linf <- Linf[[1]] } - Linf*(1-exp(-K*(t-t0))) + if (length(Linf)==3) { K <- Linf[[2]] + t0 <- Linf[[3]] + Linf <- Linf[[1]] } + Linf*(1-exp(-K*(t-t0))) } STypical <- Stypical <- STraditional <- Straditional <- SBevertonHolt <- function(t,Linf,K,t0) { Linf*(1-exp(-K*(t-t0))) } Original <- original <- vonBertalanffy <- function(t,Linf,K=NULL,L0=NULL) { - if (length(Linf)==3) { K <- Linf[[2]] - L0 <- Linf[[3]] - Linf <- Linf[[1]] } - Linf-(Linf-L0)*exp(-K*t) + if (length(Linf)==3) { K <- Linf[[2]] + L0 <- Linf[[3]] + Linf <- 
Linf[[1]] } + Linf-(Linf-L0)*exp(-K*t) } SOriginal <- Soriginal <- SvonBertalanffy <- function(t,Linf,K,L0) { Linf-(Linf-L0)*exp(-K*t) } GQ <- GallucciQuinn <- function(t,omega,K=NULL,t0=NULL) { - if (length(omega)==3) { K <- omega[[2]] - t0 <- omega[[3]] - omega <- omega[[1]] } - (omega/K)*(1-exp(-K*(t-t0))) + if (length(omega)==3) { K <- omega[[2]] + t0 <- omega[[3]] + omega <- omega[[1]] } + (omega/K)*(1-exp(-K*(t-t0))) } SGQ <- SGallucciQuinn <- function(t,omega,K,t0) { (omega/K)*(1-exp(-K*(t-t0))) } Mooij <- function(t,Linf,L0=NULL,omega=NULL) { - if (length(Linf)==3) { L0 <- Linf[[2]] - omega <- Linf[[3]] - Linf <- Linf[[1]] } - Linf-(Linf-L0)*exp(-(omega/Linf)*t) + if (length(Linf)==3) { L0 <- Linf[[2]] + omega <- Linf[[3]] + Linf <- Linf[[1]] } + Linf-(Linf-L0)*exp(-(omega/Linf)*t) } SMooij <- function(t,Linf,L0,omega) { Linf-(Linf-L0)*exp(-(omega/Linf)*t) } Weisberg <- function(t,Linf,t50=NULL,t0=NULL) { - if (length(Linf)==3) { t50 <- Linf[[2]] - t0 <- Linf[[3]] - Linf <- Linf[[1]] } - Linf*(1-exp(-(log(2)/(t50-t0))*(t-t0))) + if (length(Linf)==3) { t50 <- Linf[[2]] + t0 <- Linf[[3]] + Linf <- Linf[[1]] } + Linf*(1-exp(-(log(2)/(t50-t0))*(t-t0))) } SWeisberg <- function(t,Linf,t50,t0) { Linf*(1-exp(-(log(2)/(t50-t0))*(t-t0))) } Schnute <- function(t,L1,L3=NULL,K=NULL,t1,t3=NULL) { - if (length(L1)==3) { L3 <- L1[[2]]; K <- L1[[3]]; L1 <- L1[[1]] } - if (length(t1)==2) { t3 <- t1[[2]]; t1 <- t1[[1]] } - L1+(L3-L1)*((1-exp(-K*(t-t1)))/(1-exp(-K*(t3-t1)))) + if (length(L1)==3) { L3 <- L1[[2]]; K <- L1[[3]]; L1 <- L1[[1]] } + if (length(t1)==2) { t3 <- t1[[2]]; t1 <- t1[[1]] } + L1+(L3-L1)*((1-exp(-K*(t-t1)))/(1-exp(-K*(t3-t1)))) } SSchnute <- function(t,L1,L3,K,t1,t3) { L1+(L3-L1)*((1-exp(-K*(t-t1)))/(1-exp(-K*(t3-t1)))) } Francis <- function(t,L1,L2=NULL,L3=NULL,t1,t3=NULL) { - if (length(L1)==3) { L2 <- L1[[2]]; L3 <- L1[[3]]; L1 <- L1[[1]] } - if (length(t1)==2) { t3 <- t1[[2]]; t1 <- t1[[1]] } - r <- (L3-L2)/(L2-L1) - 
L1+(L3-L1)*((1-r^(2*((t-t1)/(t3-t1))))/(1-r^2)) + if (length(L1)==3) { L2 <- L1[[2]]; L3 <- L1[[3]]; L1 <- L1[[1]] } + if (length(t1)==2) { t3 <- t1[[2]]; t1 <- t1[[1]] } + r <- (L3-L2)/(L2-L1) + L1+(L3-L1)*((1-r^(2*((t-t1)/(t3-t1))))/(1-r^2)) } SFrancis <- function(t,L1,L2,L3,t1,t3) { r <- (L3-L2)/(L2-L1) @@ -383,23 +383,23 @@ vbFuns <- function(param=c("Typical","typical","Traditional","traditional","Beve Linf*(1-exp(-K2*(t-t0))*((1+exp(-b*(t-t0-a)))/(1+exp(a*b)))^(-(K2-K1)/b)) } Somers <- function(t,Linf,K,t0,C,ts) { - if (length(Linf)==5) { K <- Linf[[2]]; t0 <- Linf[[3]] - C <- Linf[[4]]; ts <- Linf[[5]] - Linf <- Linf[[1]] } - St <- (C*K)/(2*pi)*sin(2*pi*(t-ts)) - Sto <- (C*K)/(2*pi)*sin(2*pi*(t0-ts)) - Linf*(1-exp(-K*(t-t0)-St+Sto)) + if (length(Linf)==5) { K <- Linf[[2]]; t0 <- Linf[[3]] + C <- Linf[[4]]; ts <- Linf[[5]] + Linf <- Linf[[1]] } + St <- (C*K)/(2*pi)*sin(2*pi*(t-ts)) + Sto <- (C*K)/(2*pi)*sin(2*pi*(t0-ts)) + Linf*(1-exp(-K*(t-t0)-St+Sto)) } SSomers <- function(t,Linf,K,t0,C,ts) { Linf*(1-exp(-K*(t-t0)-(C*K)/(2*pi)*sin(2*pi*(t-ts))+(C*K)/(2*pi)*sin(2*pi*(t0-ts)))) } Somers2 <- function(t,Linf,K,t0,C,WP) { - if (length(Linf)==5) { K <- Linf[[2]]; t0 <- Linf[[3]] - C <- Linf[[4]]; WP <- Linf[[5]] - Linf <- Linf[[1]] } - Rt <- (C*K)/(2*pi)*sin(2*pi*(t-WP+0.5)) - Rto <- (C*K)/(2*pi)*sin(2*pi*(t0-WP+0.5)) - Linf*(1-exp(-K*(t-t0)-Rt+Rto)) + if (length(Linf)==5) { K <- Linf[[2]]; t0 <- Linf[[3]] + C <- Linf[[4]]; WP <- Linf[[5]] + Linf <- Linf[[1]] } + Rt <- (C*K)/(2*pi)*sin(2*pi*(t-WP+0.5)) + Rto <- (C*K)/(2*pi)*sin(2*pi*(t0-WP+0.5)) + Linf*(1-exp(-K*(t-t0)-Rt+Rto)) } SSomers2 <- function(t,Linf,K,t0,C,WP) { Linf*(1-exp(-K*(t-t0)-(C*K)/(2*pi)*sin(2*pi*(t-WP+0.5))+(C*K)/(2*pi)*sin(2*pi*(t0-WP+0.5)))) @@ -437,7 +437,7 @@ vbFuns <- function(param=c("Typical","typical","Traditional","traditional","Beve } Wang <- function(Lm,dt,Linf,K=NULL,b=NULL) { if (length(Linf)==3) { b <- Linf[[3]]; K <- Linf[[2]] - Linf <- Linf[[1]] } + Linf <- Linf[[1]] } 
(Linf+b*(Lm-mean(Lm))-Lm)*(1-exp(-K*dt)) } SWang <- function(Lm,dt,Linf,K,b) { @@ -481,192 +481,192 @@ vbFuns <- function(param=c("Typical","typical","Traditional","traditional","Beve param <- match.arg(param) if (msg) { switch(param, - Ogle= { - message("You have chosen the 'Ogle-Isermann' parameterization.\n\n", - " E[L|t] = (Linf-Lr)*(1-exp(-K*(t-tr)))\n\n", - " where Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n", - " tr = mean age at Lr\n", - " Lr = mean length at tr\n\n", - "NOTE: either tr or Lr must be set by the user.\n\n") - }, - Typical=,typical=,Traditional=,traditional=,BevertonHolt= { - message("You have chosen the 'Typical', 'Traditional', or 'BevertonHolt' parameterization.\n\n", - " E[L|t] = Linf*(1-exp(-K*(t-t0)))\n\n", - " where Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n\n") - }, - Original=,original=,vonBertalanffy={ - message("You have chosen the 'Original' or 'vonBertalanffy` parameterization.\n\n", - " E[L|t] = Linf-(Linf-L0)*exp(-K*t)\n\n", - " where Linf = asymptotic mean length\n", - " L0 = the mean length at age-0 (i.e., hatching or birth)\n", - " K = exponential rate of approach to Linf\n\n") - }, - Francis={ - message("You have chosen the 'Francis' parameterization.\n\n", - " E[L|t] = L1+(L3-L1)*[(1-r^(2*[(t-t1)/(t3-t1)]))/(1-r^2)]\n\n", - " where r = [(L3-L2)/(L2-L1)] and\n\n", - " L1 = the mean length at the first (small) reference age\n", - " L2 = the mean length at the intermediate reference age\n", - " L3 = the mean length at the third (large) reference age\n\n", - "You must also give values (i.e., they are NOT model parameters) for\n", - " t1 = the first (usually a younger) reference age\n", - " t3 = the third (usually an older) reference age\n\n") - }, - GQ=,GallucciQuinn={ - message("You have chosen the 'GQ' or 'GallucciQuinn' parameterization.\n\n", - " E[L|t] = 
[omega/K]*(1-exp(-K*(t-t0)))\n\n", - " where omega = growth rate near t0\n", - " K = exponential rate of approach to Linf\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n\n") - }, - Mooij={ - message("You have chosen the 'Mooij' parameterization.\n\n", - " E[L|t] = Linf-(Linf-L0)*exp(-(omega/Linf)*t)\n\n", - " where Linf = asymptotic mean length\n", - " L0 = the mean length at age-0 (i.e., hatching or birth)\n", - " omega = growth rate near L0\n\n") - }, - Weisberg= { - message("You have chosen the 'Weisberg' parameterization.\n\n", - " E[L|t] = Linf*(1-exp(-(log(2)/(t50-t0))*(t-t0)))\n\n", - " where Linf = asymptotic mean length\n", - " t50 = age when half of Linf is reached\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n\n") - }, - Schnute={ - message("You have chosen the 'Schnute' parameterization.\n\n", - " E[L|t] = L1+(L2-L1)*[(1-exp(-K*(t-t1)))/(1-exp(-K*(t2-t1)))]\n\n", - " where L1 = the mean length at the youngest age in the sample\n", - " L2 = the mean length at the oldest age in the sample\n", - " K = exponential rate of approach to Linf\n\n", - " You must also give values (i.e., they are NOT model parameters) for\n", - " t1 = the youngest age in the sample\n", - " t2 = the oldest age in the sample\n\n") - }, - Laslett=,Polacheck={ - message("You have chosen the 'Laslett/Polacheck' 'double' parameterization.\n\n", - " E[L|t] = Linf*[1-exp(-K2*(t-to))((1+exp(-b(t-t0-a)))/(1+exp(ab)))^(-(K2-K1)/b)]\n\n", - " where Linf = asymptotic mean length\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n", - " K1 = the first (younger ages) exponential rate of approach to Linf\n", - " K2 = the second (older ages) exponential rate of approach to Linf\n", - " b = governs the rate of transition from K1 to K2\n", - " a = the central age of the transition from K1 to K2\n\n") - }, - Somers={ - message("You have chosen the 'Somers Seasonal' parameterization.\n\n", - " E[L|t] = 
Linf*(1-exp(-K*(t-to)-St+St0))\n\n", - " where St = (CK/2*pi)*sin(2*pi*(t-ts)) and\n", - " St0 = (CK/2*pi)*sin(2*pi*(t0-ts)) and\n\n", - " and Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n", - " C = proportional growth depression at 'winter peak'\n", - " ts = time from t=0 until the first growth oscillation begins.\n\n") - }, - Somers2={ - message("You have chosen the modified 'Somers2 Seasonal' parameterization.\n\n", - " E[L|t] = Linf*(1-exp(-K*(t-to)-Rt+Rt0))\n\n", - " where Rt = (CK/2*pi)*sin(2*pi*(t-WP+0.5)) and\n", - " Rt0 = (CK/2*pi)*sin(2*pi*(t0-WP+0.5)) and\n\n", - " and Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n", - " C = proportional growth depression at 'winter peak'\n", - " WP = the 'winter peak' (point of slowest growth).\n\n") - }, - Pauly={ - message("You have chosen the 'Pauly Seasonal Cessation' parameterization.\n\n", - " E[L|t] = Linf*(1-exp(-K'*(t'-to)-Vt'+Vt0))\n\n", - " where Vt' = (K'(1-NGT)/2*pi)*sin(2*pi*(t'-ts)/(1-NGT)) and\n", - " Vt0 = (K'(1-NGT)/2*pi)*sin(2*pi*(t0-ts)/(1-NGT)) and\n\n", - " and Linf = asymptotic mean length\n", - " K' = exponential rate of approach to Linf during growth period\n", - " t0 = the theoretical age when length = 0 (a modeling artifact)\n", - " ts = time from t=0 until the first growth oscillation begins\n", - " NGT = length of no-growth period.\n\n") - }, - Fabens={ - message("You have chosen the 'Fabens' parameterization for tag-return data.\n\n", - " E[dL|Lm,dt] = (Linf-Lm)*(1-exp(-K*dt))\n\n", - " where Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n\n", - " and the data are dL = change in length (from mark to recapture)\n", - " Lm = length at time of marking\n", - " dt = time between marking and recapture.\n\n") - }, - Fabens2={ - message("You have chosen the 
'Fabens2' parameterization for tag-return data.\n\n", - " E[Lr|Lm,dt] = Lm + (Linf-Lm)*(1-exp(-K*dt))\n\n", - " where Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n\n", - " and the data are Lr = length at time of recapture\n", - " Lm = length at time of marking\n", - " dt = time between marking and recapture.\n\n") - }, - Wang={ - message("You have chosen the 'Wang' parameterization for tag-return data.\n\n", - " E[dL|Lm,dt] = (Linf+b(Lm-E(Lm))-Lm)*(1-exp(-K*dt))\n\n", - " where Linf = asymptotic mean length\n", - " K = exponential rate of approach to Linf\n", - " b = parameter\n\n", - " and the data are dL = change in length (from mark to recapture)\n", - " Lm = length at time of marking\n", - " dt = time between marking and recapture.\n\n", - " and with E(Lm) = expectation (i.e., mean) of Lm.\n\n") - }, - Wang2={ - message("You have chosen the 'Wang2' parameterization for tag-return data.\n\n", - " E[dL|Lm,dt] = (a+bLm)*(1-exp(-K*dt))\n\n", - " where K = exponential rate of approach to Linf\n", - " a, b = parameters\n\n", - " and the data are dL = change in length (from mark to recapture)\n", - " Lm = length at time of marking\n", - " dt = time between marking and recapture.\n\n") - }, - Wang3={ - message("You have chosen the 'Wang3' parameterization for tag-return data.\n\n", - " E[Lr|Lm,dt] = Lm+(a+bLm)*(1-exp(-K*dt))\n\n", - " where K = exponential rate of approach to Linf\n", - " a, b = parameters\n\n", - " and the data are Lr = length at time of recapture\n", - " Lm = length at time of marking\n", - " dt = time between marking and recapture.\n\n") - }, - Francis2={ - message("You have chosen the 'Francis2' parameterization for tag-return data.\n\n", - " E[dL|Lm,dt] = ((L2g1-L1g2)/(g1-g2)-Lm)*(1-(1+(g1-g2)/(L1-L2))^dt)\n\n", - " where g1 = mean growth rate at the first (small) reference length L1\n", - " g2 = mean growth rate at the second (large) reference length L2\n\n", - "You must also give values (i.e., they are NOT 
model parameters) for\n", - " L1 = the first (usually a shorter) reference length\n", - " L2 = the second (usually a longer) reference length\n", - "The data are dL = change in length (from mark to recapture)\n", - " Lm = length at time of marking\n", - " dt = time between marking and recapture.\n\n") - }, - Francis3={ - message("You have chosen the 'Francis3' parameterization for tag-return data\n", - " with a seasonal component.\n\n", - " E[dL|Lm,t1,t2] = ((L2g1-L1g2)/(g1-g2)-Lm)*(1-(1+(g1-g2)/(L1-L2))^((t2-t1)+S2-S1))\n\n", - " where S1 = u*sin(2*pi*(t1-w))/(2*pi) and\n", - " S2 = u*sin(2*pi*(t2-w))/(2*pi) and\n\n", - " where g1 = mean growth rate at the first (small) reference length L1\n", - " g2 = mean growth rate at the second (large) reference length L2\n", - " w = time of year when the growth rate is maximum\n", - " u = describes the extent of seasonality.\n\n", - "You must also give values (i.e., they are NOT model parameters) for\n", - " L1 = the first (usually a shorter) reference length\n", - " L2 = the second (usually a longer) reference length\n", - "The data are dL = change in length (from mark to recapture)\n", - " Lm = length at time of marking\n", - " t1 = time at marking\n", - " t2 = time at recapture.\n\n") - } - + Ogle= { + message("You have chosen the 'Ogle-Isermann' parameterization.\n\n", + " E[L|t] = (Linf-Lr)*(1-exp(-K*(t-tr)))\n\n", + " where Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n", + " tr = mean age at Lr\n", + " Lr = mean length at tr\n\n", + "NOTE: either tr or Lr must be set by the user.\n\n") + }, + Typical=,typical=,Traditional=,traditional=,BevertonHolt= { + message("You have chosen the 'Typical', 'Traditional', or 'BevertonHolt' parameterization.\n\n", + " E[L|t] = Linf*(1-exp(-K*(t-t0)))\n\n", + " where Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n\n") + }, + 
Original=,original=,vonBertalanffy={ + message("You have chosen the 'Original' or 'vonBertalanffy` parameterization.\n\n", + " E[L|t] = Linf-(Linf-L0)*exp(-K*t)\n\n", + " where Linf = asymptotic mean length\n", + " L0 = the mean length at age-0 (i.e., hatching or birth)\n", + " K = exponential rate of approach to Linf\n\n") + }, + Francis={ + message("You have chosen the 'Francis' parameterization.\n\n", + " E[L|t] = L1+(L3-L1)*[(1-r^(2*[(t-t1)/(t3-t1)]))/(1-r^2)]\n\n", + " where r = [(L3-L2)/(L2-L1)] and\n\n", + " L1 = the mean length at the first (small) reference age\n", + " L2 = the mean length at the intermediate reference age\n", + " L3 = the mean length at the third (large) reference age\n\n", + "You must also give values (i.e., they are NOT model parameters) for\n", + " t1 = the first (usually a younger) reference age\n", + " t3 = the third (usually an older) reference age\n\n") + }, + GQ=,GallucciQuinn={ + message("You have chosen the 'GQ' or 'GallucciQuinn' parameterization.\n\n", + " E[L|t] = [omega/K]*(1-exp(-K*(t-t0)))\n\n", + " where omega = growth rate near t0\n", + " K = exponential rate of approach to Linf\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n\n") + }, + Mooij={ + message("You have chosen the 'Mooij' parameterization.\n\n", + " E[L|t] = Linf-(Linf-L0)*exp(-(omega/Linf)*t)\n\n", + " where Linf = asymptotic mean length\n", + " L0 = the mean length at age-0 (i.e., hatching or birth)\n", + " omega = growth rate near L0\n\n") + }, + Weisberg= { + message("You have chosen the 'Weisberg' parameterization.\n\n", + " E[L|t] = Linf*(1-exp(-(log(2)/(t50-t0))*(t-t0)))\n\n", + " where Linf = asymptotic mean length\n", + " t50 = age when half of Linf is reached\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n\n") + }, + Schnute={ + message("You have chosen the 'Schnute' parameterization.\n\n", + " E[L|t] = L1+(L2-L1)*[(1-exp(-K*(t-t1)))/(1-exp(-K*(t2-t1)))]\n\n", + " where L1 = the mean length at the 
youngest age in the sample\n", + " L2 = the mean length at the oldest age in the sample\n", + " K = exponential rate of approach to Linf\n\n", + " You must also give values (i.e., they are NOT model parameters) for\n", + " t1 = the youngest age in the sample\n", + " t2 = the oldest age in the sample\n\n") + }, + Laslett=,Polacheck={ + message("You have chosen the 'Laslett/Polacheck' 'double' parameterization.\n\n", + " E[L|t] = Linf*[1-exp(-K2*(t-to))((1+exp(-b(t-t0-a)))/(1+exp(ab)))^(-(K2-K1)/b)]\n\n", + " where Linf = asymptotic mean length\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n", + " K1 = the first (younger ages) exponential rate of approach to Linf\n", + " K2 = the second (older ages) exponential rate of approach to Linf\n", + " b = governs the rate of transition from K1 to K2\n", + " a = the central age of the transition from K1 to K2\n\n") + }, + Somers={ + message("You have chosen the 'Somers Seasonal' parameterization.\n\n", + " E[L|t] = Linf*(1-exp(-K*(t-to)-St+St0))\n\n", + " where St = (CK/2*pi)*sin(2*pi*(t-ts)) and\n", + " St0 = (CK/2*pi)*sin(2*pi*(t0-ts)) and\n\n", + " and Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n", + " C = proportional growth depression at 'winter peak'\n", + " ts = time from t=0 until the first growth oscillation begins.\n\n") + }, + Somers2={ + message("You have chosen the modified 'Somers2 Seasonal' parameterization.\n\n", + " E[L|t] = Linf*(1-exp(-K*(t-to)-Rt+Rt0))\n\n", + " where Rt = (CK/2*pi)*sin(2*pi*(t-WP+0.5)) and\n", + " Rt0 = (CK/2*pi)*sin(2*pi*(t0-WP+0.5)) and\n\n", + " and Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n", + " C = proportional growth depression at 'winter peak'\n", + " WP = the 'winter peak' (point of slowest growth).\n\n") + }, + Pauly={ + message("You have 
chosen the 'Pauly Seasonal Cessation' parameterization.\n\n", + " E[L|t] = Linf*(1-exp(-K'*(t'-to)-Vt'+Vt0))\n\n", + " where Vt' = (K'(1-NGT)/2*pi)*sin(2*pi*(t'-ts)/(1-NGT)) and\n", + " Vt0 = (K'(1-NGT)/2*pi)*sin(2*pi*(t0-ts)/(1-NGT)) and\n\n", + " and Linf = asymptotic mean length\n", + " K' = exponential rate of approach to Linf during growth period\n", + " t0 = the theoretical age when length = 0 (a modeling artifact)\n", + " ts = time from t=0 until the first growth oscillation begins\n", + " NGT = length of no-growth period.\n\n") + }, + Fabens={ + message("You have chosen the 'Fabens' parameterization for tag-return data.\n\n", + " E[dL|Lm,dt] = (Linf-Lm)*(1-exp(-K*dt))\n\n", + " where Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n\n", + " and the data are dL = change in length (from mark to recapture)\n", + " Lm = length at time of marking\n", + " dt = time between marking and recapture.\n\n") + }, + Fabens2={ + message("You have chosen the 'Fabens2' parameterization for tag-return data.\n\n", + " E[Lr|Lm,dt] = Lm + (Linf-Lm)*(1-exp(-K*dt))\n\n", + " where Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n\n", + " and the data are Lr = length at time of recapture\n", + " Lm = length at time of marking\n", + " dt = time between marking and recapture.\n\n") + }, + Wang={ + message("You have chosen the 'Wang' parameterization for tag-return data.\n\n", + " E[dL|Lm,dt] = (Linf+b(Lm-E(Lm))-Lm)*(1-exp(-K*dt))\n\n", + " where Linf = asymptotic mean length\n", + " K = exponential rate of approach to Linf\n", + " b = parameter\n\n", + " and the data are dL = change in length (from mark to recapture)\n", + " Lm = length at time of marking\n", + " dt = time between marking and recapture.\n\n", + " and with E(Lm) = expectation (i.e., mean) of Lm.\n\n") + }, + Wang2={ + message("You have chosen the 'Wang2' parameterization for tag-return data.\n\n", + " E[dL|Lm,dt] = (a+bLm)*(1-exp(-K*dt))\n\n", + " where K 
= exponential rate of approach to Linf\n", + " a, b = parameters\n\n", + " and the data are dL = change in length (from mark to recapture)\n", + " Lm = length at time of marking\n", + " dt = time between marking and recapture.\n\n") + }, + Wang3={ + message("You have chosen the 'Wang3' parameterization for tag-return data.\n\n", + " E[Lr|Lm,dt] = Lm+(a+bLm)*(1-exp(-K*dt))\n\n", + " where K = exponential rate of approach to Linf\n", + " a, b = parameters\n\n", + " and the data are Lr = length at time of recapture\n", + " Lm = length at time of marking\n", + " dt = time between marking and recapture.\n\n") + }, + Francis2={ + message("You have chosen the 'Francis2' parameterization for tag-return data.\n\n", + " E[dL|Lm,dt] = ((L2g1-L1g2)/(g1-g2)-Lm)*(1-(1+(g1-g2)/(L1-L2))^dt)\n\n", + " where g1 = mean growth rate at the first (small) reference length L1\n", + " g2 = mean growth rate at the second (large) reference length L2\n\n", + "You must also give values (i.e., they are NOT model parameters) for\n", + " L1 = the first (usually a shorter) reference length\n", + " L2 = the second (usually a longer) reference length\n", + "The data are dL = change in length (from mark to recapture)\n", + " Lm = length at time of marking\n", + " dt = time between marking and recapture.\n\n") + }, + Francis3={ + message("You have chosen the 'Francis3' parameterization for tag-return data\n", + " with a seasonal component.\n\n", + " E[dL|Lm,t1,t2] = ((L2g1-L1g2)/(g1-g2)-Lm)*(1-(1+(g1-g2)/(L1-L2))^((t2-t1)+S2-S1))\n\n", + " where S1 = u*sin(2*pi*(t1-w))/(2*pi) and\n", + " S2 = u*sin(2*pi*(t2-w))/(2*pi) and\n\n", + " where g1 = mean growth rate at the first (small) reference length L1\n", + " g2 = mean growth rate at the second (large) reference length L2\n", + " w = time of year when the growth rate is maximum\n", + " u = describes the extent of seasonality.\n\n", + "You must also give values (i.e., they are NOT model parameters) for\n", + " L1 = the first (usually a shorter) reference 
length\n", + " L2 = the second (usually a longer) reference length\n", + "The data are dL = change in length (from mark to recapture)\n", + " Lm = length at time of marking\n", + " t1 = time at marking\n", + " t2 = time at recapture.\n\n") + } + ) } if (simple) param <- paste0("S",param) @@ -678,10 +678,10 @@ vbFuns <- function(param=c("Typical","typical","Traditional","traditional","Beve #' @rdname growthModels #' @export GompertzFuns <- function(param=c("Ricker1","Ricker2","Ricker3", - "QuinnDeriso1","QuinnDeriso2","QuinnDeriso3", - "QD1","QD2","QD3", - "Original","original", - "Troynikov1","Troynikov2"), + "QuinnDeriso1","QuinnDeriso2","QuinnDeriso3", + "QD1","QD2","QD3", + "Original","original", + "Troynikov1","Troynikov2"), simple=FALSE,msg=FALSE) { Original <- original <- function(t,Linf,a=NULL,gi=NULL) { if (length(Linf)==3) { a <- Linf[[2]] @@ -1171,102 +1171,102 @@ growthFunShow <- function(type=c("vonBertalanffy","Gompertz","Richards", ## Internal functions for growth model expressions ################################################################################ iSGF_VB <- function(param=c("Original","original","vonBertalanffy", - "Typical","typical","Traditional","traditional","BevertonHolt", - "GallucciQuinn","GQ","Mooij","Weisberg","Ogle", - "Schnute","Francis","Laslett","Polacheck", - "Somers","Somers2","Pauly", - "Fabens","Fabens2","Wang","Wang2","Wang3")) { + "Typical","typical","Traditional","traditional","BevertonHolt", + "GallucciQuinn","GQ","Mooij","Weisberg","Ogle", + "Schnute","Francis","Laslett","Polacheck", + "Somers","Somers2","Pauly", + "Fabens","Fabens2","Wang","Wang2","Wang3")) { if(!is.character(param)) STOP("'param' must be a character string.") param <- match.arg(param) switch(param, - Ogle= { - expr <- expression(E(L[t])==L[infinity]~-~(L[infinity]-L[r])*~e^{-K(t~-~t[r])}) - }, - Typical=,typical=,Traditional=,traditional=,BevertonHolt= { - expr <- expression(E(L[t])==L[infinity]*bgroup("(",1-e^{-K*(t~-~t[0])},")")) - }, - 
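The parameterizations printed by `vbFuns()` and `GompertzFuns()` above can be checked against their anchoring properties with a few standalone sketches. These are written directly from the formulas in the messages, not the closures that the FSA functions return, and the function names are mine:

```r
# 'Typical'/'BevertonHolt': E[L|t] = Linf*(1-exp(-K*(t-t0)))
vbTypical <- function(t, Linf, K, t0) Linf * (1 - exp(-K * (t - t0)))

# 'Schnute': anchored at mean lengths L1 and L2 for chosen ages t1 and t2
vbSchnute <- function(t, L1, L2, K, t1, t2) {
  L1 + (L2 - L1) * (1 - exp(-K * (t - t1))) / (1 - exp(-K * (t2 - t1)))
}

# 'Francis': anchored at mean lengths L1, L2, L3 for ages t1, (t1+t3)/2, t3
vbFrancis <- function(t, L1, L2, L3, t1, t3) {
  r <- (L3 - L2) / (L2 - L1)
  L1 + (L3 - L1) * (1 - r^(2 * (t - t1) / (t3 - t1))) / (1 - r^2)
}

# Gompertz 'Ricker1': E[L|t] = Linf*exp(-exp(-gi*(t-ti)))
gompRicker1 <- function(t, Linf, gi, ti) Linf * exp(-exp(-gi * (t - ti)))

# Fabens increment model for tag-return data:
# E[dL|Lm,dt] = (Linf-Lm)*(1-exp(-K*dt))
fabens <- function(Lm, dt, Linf, K) (Linf - Lm) * (1 - exp(-K * dt))

# Sanity checks: the anchored forms return their anchor lengths at the
# reference ages, 'Typical' is 0 at t = t0, the Gompertz curve is at
# Linf/e at its inflection age ti, and the Fabens increment is 0 when
# dt = 0 or when the fish is already at Linf.
vbTypical(-0.5, 450, 0.3, -0.5)            # 0
vbSchnute(c(1, 10), 20, 60, 0.3, 1, 10)    # 20 60
vbFrancis(c(2, 6, 10), 30, 50, 60, 2, 10)  # 30 50 60
gompRicker1(2, 300, 0.5, 2) / 300          # exp(-1)
fabens(500, 3, 500, 0.2)                   # 0
```

Checks like these are handy when translating between the many published parameterizations, since each form is anchored to different quantities.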
Original=,original=,vonBertalanffy= { - expr <- expression(E(L[t])==L[infinity]~-~(L[infinity]-L[0])*~e^{-Kt}) - }, - GallucciQuinn=,GQ= { - expr <- expression(E(L[t])==frac(omega,K)*~bgroup("(",1-e^{-K*(t~-~t[0])},")")) - }, - Mooij= { - expr <- expression(E(L[t])==L[infinity]~-~(L[infinity]-L[0])*~e^{-frac(omega,L[infinity])*~t}) - }, - Weisberg= { - expr <- expression(E(L[t])==L[infinity]*bgroup("(",1-e^{-frac(log(2),(t[50]~-~t[0]))*(t~-~t[0])},")")) - }, - Schnute= { - expr <- expression(E(L[t])==L[1]+(L[3]-L[1])*~frac(1-e^{-K*(~t~-~t[1])},1-e^{-K*(~t[3]~-~t[1])})) - }, - Francis= { - expr <- expression(atop(E(L[t])==L[1]+(L[3]-L[1])*~frac(1-r^{2*frac(t-t[1],t[3]-t[1])},1-r^{2}), - plain("where" )~r==frac(L[3]-L[2],L[2]-L[1]))) - }, - Laslett= { - expr <- expression(plain("Not Yet Implemented")) - }, - Polacheck= { - expr <- expression(plain("Not Yet Implemented")) - }, - Somers= { - expr <- expression(atop(E(L[t])==L[infinity]*bgroup("(",1-e^{-K*(t~-~t[0])-S(t)+S(t[0])},")"), - plain("where" )~S(t)==bgroup("(",frac(C*K,2*~pi),")")*~sin(2*pi*(t-t[s])))) - }, - Somers2= { - expr <- expression(atop(E(L[t])==L[infinity]*bgroup("(",1-e^{-K*(t~-~t[0])-R(t)+R(t[0])},")"), - plain("where" )~R(t)==bgroup("(",frac(C*K,2*~pi),")")*~sin(2*pi*(t-WP+0.5)))) - }, - Pauly= { - expr <- expression(atop(E(L[t])==L[infinity]*bgroup("(",1-e^{-Kpr*(tpr~-~t[0])-V(tpr)+V(t[0])},")"), - plain("where" )~V(t)==bgroup("(",frac(Kpr(1-NGT),2*~pi),")")*~sin(frac(2*pi,1-NGT)*(t-t[s])))) - }, - Fabens= { - expr <- expression(E(L[r]-L[m])==(L[infinity]-L[m])*bgroup("(",1-e^{-K*Delta*t},")")) - }, - Fabens2= { - expr <- expression(E(L[r])==L[m]+(L[infinity]-L[m])*bgroup("(",1-e^{-K*Delta*t},")")) - }, - Wang= { - expr <- expression(E(L[r]-L[m])==(L[infinity]+beta*(L[t]-L[t])-L[m])*bgroup("(",1-e^{-K*Delta*t},")")) - }, - Wang2= { - expr <- expression(E(L[r]-L[m])==(alpha+beta*L[t])*bgroup("(",1-e^{-K*Delta*t},")")) - }, - Wang3= { - expr <- 
expression(E(L[r])==L[m]+(alpha+beta*L[t])*bgroup("(",1-e^{-K*Delta*t},")")) - }) + Ogle= { + expr <- expression(E(L[t])==L[infinity]~-~(L[infinity]-L[r])*~e^{-K(t~-~t[r])}) + }, + Typical=,typical=,Traditional=,traditional=,BevertonHolt= { + expr <- expression(E(L[t])==L[infinity]*bgroup("(",1-e^{-K*(t~-~t[0])},")")) + }, + Original=,original=,vonBertalanffy= { + expr <- expression(E(L[t])==L[infinity]~-~(L[infinity]-L[0])*~e^{-Kt}) + }, + GallucciQuinn=,GQ= { + expr <- expression(E(L[t])==frac(omega,K)*~bgroup("(",1-e^{-K*(t~-~t[0])},")")) + }, + Mooij= { + expr <- expression(E(L[t])==L[infinity]~-~(L[infinity]-L[0])*~e^{-frac(omega,L[infinity])*~t}) + }, + Weisberg= { + expr <- expression(E(L[t])==L[infinity]*bgroup("(",1-e^{-frac(log(2),(t[50]~-~t[0]))*(t~-~t[0])},")")) + }, + Schnute= { + expr <- expression(E(L[t])==L[1]+(L[3]-L[1])*~frac(1-e^{-K*(~t~-~t[1])},1-e^{-K*(~t[3]~-~t[1])})) + }, + Francis= { + expr <- expression(atop(E(L[t])==L[1]+(L[3]-L[1])*~frac(1-r^{2*frac(t-t[1],t[3]-t[1])},1-r^{2}), + plain("where" )~r==frac(L[3]-L[2],L[2]-L[1]))) + }, + Laslett= { + expr <- expression(plain("Not Yet Implemented")) + }, + Polacheck= { + expr <- expression(plain("Not Yet Implemented")) + }, + Somers= { + expr <- expression(atop(E(L[t])==L[infinity]*bgroup("(",1-e^{-K*(t~-~t[0])-S(t)+S(t[0])},")"), + plain("where" )~S(t)==bgroup("(",frac(C*K,2*~pi),")")*~sin(2*pi*(t-t[s])))) + }, + Somers2= { + expr <- expression(atop(E(L[t])==L[infinity]*bgroup("(",1-e^{-K*(t~-~t[0])-R(t)+R(t[0])},")"), + plain("where" )~R(t)==bgroup("(",frac(C*K,2*~pi),")")*~sin(2*pi*(t-WP+0.5)))) + }, + Pauly= { + expr <- expression(atop(E(L[t])==L[infinity]*bgroup("(",1-e^{-Kpr*(tpr~-~t[0])-V(tpr)+V(t[0])},")"), + plain("where" )~V(t)==bgroup("(",frac(Kpr(1-NGT),2*~pi),")")*~sin(frac(2*pi,1-NGT)*(t-t[s])))) + }, + Fabens= { + expr <- expression(E(L[r]-L[m])==(L[infinity]-L[m])*bgroup("(",1-e^{-K*Delta*t},")")) + }, + Fabens2= { + expr <- 
expression(E(L[r])==L[m]+(L[infinity]-L[m])*bgroup("(",1-e^{-K*Delta*t},")")) + }, + Wang= { + expr <- expression(E(L[r]-L[m])==(L[infinity]+beta*(L[t]-L[t])-L[m])*bgroup("(",1-e^{-K*Delta*t},")")) + }, + Wang2= { + expr <- expression(E(L[r]-L[m])==(alpha+beta*L[t])*bgroup("(",1-e^{-K*Delta*t},")")) + }, + Wang3= { + expr <- expression(E(L[r])==L[m]+(alpha+beta*L[t])*bgroup("(",1-e^{-K*Delta*t},")")) + }) expr } iSGF_GOMP <- function(param=c("Original","original","Ricker1","Ricker2","Ricker3", - "QuinnDeriso1","QuinnDeriso2","QuinnDeriso3","QD1","QD2","QD3", - "Troynikov1","Troynikov2")) { + "QuinnDeriso1","QuinnDeriso2","QuinnDeriso3","QD1","QD2","QD3", + "Troynikov1","Troynikov2")) { if(!is.character(param)) STOP("'param' must be a character string.") param <- match.arg(param) switch(param, - Original=,original= { - expr <- expression(E(L[t])==L[infinity]*~e^{-e^{a-g[i]*t}}) - }, - Ricker1= { - expr <- expression(E(L[t])==L[infinity]*~e^{-e^{-g[i]*(t-t[i])}}) - }, - Ricker2=,QuinnDeriso1=,QD1= { - expr <- expression(E(L[t])==L[0]*~e^{a*bgroup("(",1-e^{-g[i]*t},")")}) - }, - Ricker3=,QuinnDeriso2=,QD2= { - expr <- expression(E(L[t])==L[infinity]*~e^{-a*~e^{-g[i]*t}}) - }, - QuinnDeriso3=,QD3= { - expr <- expression(E(L[t])==L[infinity]*~e^{-~frac(1,g[i])*~e^{-g[i]*~(~t~-~t^{plain("*")})}}) - }, - Troynikov1= { - expr <- expression(E(L[r]-L[m])==L[infinity]*~bgroup("(",frac(L[m],L[infinity]),")")^{e^{-g[i]*Delta*t}}-L[m]) - }, - Troynikov2= { - expr <- expression(E(L[r])==L[infinity]*~bgroup("(",frac(L[m],L[infinity]),")")^{e^{-g[i]*Delta*t}}) - }) + Original=,original= { + expr <- expression(E(L[t])==L[infinity]*~e^{-e^{a-g[i]*t}}) + }, + Ricker1= { + expr <- expression(E(L[t])==L[infinity]*~e^{-e^{-g[i]*(t-t[i])}}) + }, + Ricker2=,QuinnDeriso1=,QD1= { + expr <- expression(E(L[t])==L[0]*~e^{a*bgroup("(",1-e^{-g[i]*t},")")}) + }, + Ricker3=,QuinnDeriso2=,QD2= { + expr <- expression(E(L[t])==L[infinity]*~e^{-a*~e^{-g[i]*t}}) + }, + QuinnDeriso3=,QD3= { + expr <- 
expression(E(L[t])==L[infinity]*~e^{-~frac(1,g[i])*~e^{-g[i]*~(~t~-~t^{plain("*")})}}) + }, + Troynikov1= { + expr <- expression(E(L[r]-L[m])==L[infinity]*~bgroup("(",frac(L[m],L[infinity]),")")^{e^{-g[i]*Delta*t}}-L[m]) + }, + Troynikov2= { + expr <- expression(E(L[r])==L[infinity]*~bgroup("(",frac(L[m],L[infinity]),")")^{e^{-g[i]*Delta*t}}) + }) expr } @@ -1293,18 +1293,18 @@ iSGF_LOGISTIC <- function(param=c("CJ1","CJ2","Karkach","Haddon","CampanaJones1" if(!is.character(param)) STOP("'param' must be a character string.") param <- match.arg(param) switch(param, - CJ1=,CampanaJones1= { - expr <- expression(E(L[t])==frac(L[infinity],1+e^{-g[-infinity]*(t-t[i])})) - }, - CJ2=,CampanaJones2= { - expr <- expression(E(L[t])==frac(L[infinity],1+~ae^{-g[-infinity]*t})) - }, - Karkach= { - expr <- expression(E(L[t])==frac(L[0]*L[infinity],L[0]+(L[infinity]-L[0])*e^{-g[-infinity]*t})) - }, - Haddon= { - expr <- expression(E(L[r]-L[m])==frac(Delta*L[max],1+e^{log(19)*frac(L[m]~-~L[50],L[95]~-~L[50])})) - }) + CJ1=,CampanaJones1= { + expr <- expression(E(L[t])==frac(L[infinity],1+e^{-g[-infinity]*(t-t[i])})) + }, + CJ2=,CampanaJones2= { + expr <- expression(E(L[t])==frac(L[infinity],1+~ae^{-g[-infinity]*t})) + }, + Karkach= { + expr <- expression(E(L[t])==frac(L[0]*L[infinity],L[0]+(L[infinity]-L[0])*e^{-g[-infinity]*t})) + }, + Haddon= { + expr <- expression(E(L[r]-L[m])==frac(Delta*L[max],1+e^{log(19)*frac(L[m]~-~L[50],L[95]~-~L[50])})) + }) expr } From 2d63744f85a4d0323257192f0b0c2d6019836a79 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Mon, 30 Dec 2024 20:37:11 -0600 Subject: [PATCH 03/21] Updated tests to use testthat >3.0.0 --- DESCRIPTION | 7 +- FSA.Rproj | 1 + NEWS.md | 9 + tests/test-all.R | 2 - tests/testthat.R | 12 + tests/testthat/testthat_AgeComparisons.R | 50 ++- tests/testthat/testthat_AgeLengthKey.R | 43 +-- tests/testthat/testthat_FSAUtils.R | 80 ++--- tests/testthat/testthat_Internals.R | 118 +++--- tests/testthat/testthat_LWCompPreds.R | 6 +- 
tests/testthat/testthat_PSD.R | 102 +++--- tests/testthat/testthat_StockRecruit.R | 84 ++--- tests/testthat/testthat_Summarize.R | 62 ++-- tests/testthat/testthat_VonBertalanffy.R | 438 +++++++++++------------ tests/testthat/testthat_WSWR.R | 12 +- tests/testthat/testthat_addZeroCatch.R | 10 +- tests/testthat/testthat_bootstrap.R | 32 +- tests/testthat/testthat_capHist.R | 76 ++-- tests/testthat/testthat_catchCurve.R | 77 ++-- tests/testthat/testthat_ciDists.R | 53 ++- tests/testthat/testthat_depletion.R | 251 ++++++------- tests/testthat/testthat_dunnTest.R | 146 ++++---- tests/testthat/testthat_expandCounts.R | 92 ++--- tests/testthat/testthat_expandLenFreq.R | 16 +- tests/testthat/testthat_extraTests.R | 69 ++-- tests/testthat/testthat_growthFuns.R | 44 +-- tests/testthat/testthat_ksTest.R | 6 +- tests/testthat/testthat_lencat.R | 62 ++-- tests/testthat/testthat_metaM.R | 38 +- tests/testthat/testthat_mrClosed.R | 224 ++++++------ tests/testthat/testthat_mrOpen.R | 33 +- tests/testthat/testthat_nlsTracePlot.R | 6 +- tests/testthat/testthat_removal.R | 135 ++++--- tests/testthat/testthat_sumTable.R | 29 +- 34 files changed, 1201 insertions(+), 1224 deletions(-) delete mode 100644 tests/test-all.R create mode 100644 tests/testthat.R diff --git a/DESCRIPTION b/DESCRIPTION index f0c59ad9..030e8a7c 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,6 +1,6 @@ Package: FSA Version: 0.9.5.9000 -Date: 2023-8-25 +Date: 2024-12-30 Title: Simple Fisheries Stock Assessment Methods Description: A variety of simple fish stock assessment methods. 
Authors@R: c( @@ -46,8 +46,9 @@ Suggests: psych, Rcapture, rmarkdown, - testthat, + testthat (>= 3.0.0), tibble, covr Encoding: UTF-8 -RoxygenNote: 7.2.3 +RoxygenNote: 7.3.2 +Config/testthat/edition: 3 diff --git a/FSA.Rproj b/FSA.Rproj index c9f9fb2b..cc628fa0 100644 --- a/FSA.Rproj +++ b/FSA.Rproj @@ -1,4 +1,5 @@ Version: 1.0 +ProjectId: 41ed5498-c0fd-4cc3-8f0f-c170ffdd4aa6 RestoreWorkspace: Default SaveWorkspace: Default diff --git a/NEWS.md b/NEWS.md index 32ee1a22..01480c95 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,4 +1,13 @@ # FSA 0.9.5.9000 +* Updated testing to use `testthat` v3.0.0. + * Changes to `DESCRIPTION`. + * Replaced MANY `expect_is()` with `expect_equal(class())` idioms. + * Replaced many `expect_equivalent()` with `expect_equal()` as `expect_equivalent()` was not needed to begin with. + * Replaced many `expect_equivalent()` with `expect_equal(,ignore_attr=TRUE)` as `expect_equivalent()` was deprecated. + * Had to correct many tests where I expected just `matrix` but the class was `c("matrix","array")`. + * Had to handle multiple warnings for some tests (see [this article](https://testthat.r-lib.org/articles/third-edition.html#warnings)). + * Removed many `require()` that were not needed. + * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). 
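The NEWS bullet above about expecting just `matrix` but getting `c("matrix","array")` reflects a change in R 4.0.0, where a matrix now also inherits from class `"array"`. A quick base-R check illustrates what broke the old exact-class tests:

```r
m <- matrix(1:6, nrow = 2)

class(m)               # since R 4.0.0: "matrix" "array"
inherits(m, "matrix")  # TRUE
inherits(m, "array")   # TRUE -- this second class broke exact comparisons
```

This is why the `expect_is(x, "matrix")`-style tests below were rewritten as `expect_equal(class(x), c("matrix","array"))`.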
# FSA 0.9.5 diff --git a/tests/test-all.R b/tests/test-all.R deleted file mode 100644 index 6b3f9c68..00000000 --- a/tests/test-all.R +++ /dev/null @@ -1,2 +0,0 @@ -library(testthat) -test_check("FSA") \ No newline at end of file diff --git a/tests/testthat.R b/tests/testthat.R new file mode 100644 index 00000000..50e48886 --- /dev/null +++ b/tests/testthat.R @@ -0,0 +1,12 @@ +# This file is part of the standard setup for testthat. +# It is recommended that you do not modify it. +# +# Where should you do additional test configuration? +# Learn more about the roles of various files in: +# * https://r-pkgs.org/testing-design.html#sec-tests-files-overview +# * https://testthat.r-lib.org/articles/special-files.html + +library(testthat) +library(FSA) + +test_check("FSA") diff --git a/tests/testthat/testthat_AgeComparisons.R b/tests/testthat/testthat_AgeComparisons.R index cb29635f..2b20975d 100644 --- a/tests/testthat/testthat_AgeComparisons.R +++ b/tests/testthat/testthat_AgeComparisons.R @@ -75,26 +75,26 @@ test_that("agePrecision() summary() output titles",{ test_that("agePrecision() types and specifics",{ ap1 <- agePrecision(~otolithC+scaleC,data=WhitefishLC) - expect_is(ap1,"agePrec") + expect_equal(class(ap1),"agePrec") expect_equal(names(ap1),c("detail","rawdiff","absdiff","ASD","ACV","ACV2", "AAD","APE","APE2","AD","PercAgree","R","n","validn")) - expect_is(ap1$detail,"data.frame") + expect_equal(class(ap1$detail),"data.frame") expect_equal(names(ap1$detail), c("otolithC","scaleC","mean","median","mode", "SD","CV","CV2","AD","PE","PE2","D")) - expect_is(ap1$rawdiff,"table") - expect_is(ap1$absdiff,"table") + expect_equal(class(ap1$rawdiff),"table") + expect_equal(class(ap1$absdiff),"table") ap2 <- agePrecision(~otolithC+finrayC+scaleC,data=WhitefishLC) - expect_is(ap2,"agePrec") + expect_equal(class(ap2),"agePrec") expect_equal(names(ap2),c("detail","rawdiff","absdiff","ASD","ACV","ACV2", "AAD","APE","APE2","AD","PercAgree","R","n","validn")) - 
expect_is(ap2$detail,"data.frame") + expect_equal(class(ap2$detail),"data.frame") expect_equal(names(ap2$detail), c("otolithC","finrayC","scaleC","mean","median","mode", "SD","CV","CV2","AD","PE","PE2","D")) - expect_is(ap2$rawdiff,"table") - expect_is(ap2$absdiff,"table") + expect_equal(class(ap2$rawdiff),"table") + expect_equal(class(ap2$absdiff),"table") }) ## Validate Results ---- @@ -148,7 +148,7 @@ test_that("ageBias() symmetry tests match results in Evans and Hoenig (2008)",{ }) test_that("test ageBias() against compare2() with AlewifeLH data",{ - if (require(FSAdata) & require(fishmethods)) { + if (require(FSAdata,quietly=TRUE) & require(fishmethods,quietly=TRUE)) { data(AlewifeLH,package="FSAdata") ab2 <- compare2(AlewifeLH,barplot=FALSE) ## no continuity correction @@ -178,15 +178,13 @@ test_that("test ageBias() against compare2() with AlewifeLH data",{ }) test_that("ageBias() compared to http://www.nefsc.noaa.gov/fbp/age-prec/ calculations for AlewifeLH data",{ - if (require(FSAdata)) { - data(AlewifeLH,package="FSAdata") - suppressWarnings(ab1 <- ageBias(scales~otoliths,data=AlewifeLH, - ref.lab="Otolith Age",nref.lab="Scale Age")) - expect_equal(ab1$bias$n, c(2,18,20,13,18,10,8,7,5,1,2)) - ## the fbp result is actually 4.62 for age-6 - expect_equal(round(ab1$bias$mean,2), c(0.00,1.11,2.20,2.85,3.78,4.20, - 4.62,5.00,4.80,6.00,6.00)) - } + data(AlewifeLH,package="FSAdata") + suppressWarnings(ab1 <- ageBias(scales~otoliths,data=AlewifeLH, + ref.lab="Otolith Age",nref.lab="Scale Age")) + expect_equal(ab1$bias$n, c(2,18,20,13,18,10,8,7,5,1,2)) + ## the fbp result is actually 4.62 for age-6 + expect_equal(round(ab1$bias$mean,2), c(0.00,1.11,2.20,2.85,3.78,4.20, + 4.62,5.00,4.80,6.00,6.00)) }) test_that("agePrecision() gives correct precision values -- First Example",{ @@ -225,18 +223,16 @@ test_that("agePrecision() gives correct precision values -- Second Example",{ junk <- capture.output( sum2 <- summary(ap2,what="absolute",trunc.diff=4) ) 
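The precision summaries these `agePrecision()` tests exercise (ACV, percent agreement) reduce to simple per-fish statistics. A minimal sketch on toy data — not FSA's implementation, which also handles missing values and more than two structures:

```r
# Toy paired-age data: one row per fish, one column per ageing structure
ages <- data.frame(otolithC = c(3, 4, 2, 5),
                   scaleC   = c(3, 5, 2, 4))

# Per-fish CV (100*SD/mean); averaging over fish gives the ACV
cv  <- apply(ages, 1, function(x) 100 * sd(x) / mean(x))
ACV <- mean(cv)

# Percent agreement between the two structures
percAgree <- 100 * mean(ages$otolithC == ages$scaleC)
percAgree   # 50
```

On the real AlewifeLH data, the analogous FSA calculations give the ACV of 12.54 and 58.7% agreement asserted in the test above.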
expect_equal(dim(sum2),5) expect_equal(sum1[1:4],sum2[1:4]) - expect_equivalent(sum(sum1[5:15]),sum2[5]) + expect_equal(sum(sum1[5:15]),sum2[5],ignore_attr=TRUE) }) test_that("agePrecision() compared to http://www.nefsc.noaa.gov/fbp/age-prec/ calculations for AlewifeLH data",{ - if (require(FSAdata)) { - data(AlewifeLH,package="FSAdata") - ap3 <- agePrecision(~otoliths+scales,data=AlewifeLH) - expect_equal(ap3$n, 104) - expect_equal(ap3$R, 2) - expect_equal(round(ap3$ACV,2), 12.54) - expect_equal(round(ap3$PercAgree,1), 58.7) - } + data(AlewifeLH,package="FSAdata") + ap3 <- agePrecision(~otoliths+scales,data=AlewifeLH) + expect_equal(ap3$n, 104) + expect_equal(ap3$R, 2) + expect_equal(round(ap3$ACV,2), 12.54) + expect_equal(round(ap3$PercAgree,1), 58.7) }) test_that("agePrecision() differences for simple data",{ diff --git a/tests/testthat/testthat_AgeLengthKey.R b/tests/testthat/testthat_AgeLengthKey.R index e225eb52..574b4a36 100644 --- a/tests/testthat/testthat_AgeLengthKey.R +++ b/tests/testthat/testthat_AgeLengthKey.R @@ -159,7 +159,7 @@ test_that("Does 'seed=' work in alkIndivAge()",{ suppressWarnings(sum1 <- Summarize(len~age,data=WR1.comb)) suppressWarnings(sum2 <- Summarize(len~age,data=WR1.comb)) diff <- as.matrix(sum1[,-1]-sum2[,-1]) - expect_equivalent(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff))) + expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) WR1.comb <- rbind(WR1.age, alkIndivAge(WR1.key,age~len,data=WR1.len,type="CR", seed=1234343)) WR1.comb2 <- rbind(WR1.age, alkIndivAge(WR1.key,age~len,data=WR1.len,type="CR", @@ -167,7 +167,7 @@ test_that("Does 'seed=' work in alkIndivAge()",{ suppressWarnings(sum1 <- Summarize(len~age,data=WR1.comb)) suppressWarnings(sum2 <- Summarize(len~age,data=WR1.comb)) diff <- as.matrix(sum1[,-1]-sum2[,-1]) - expect_equivalent(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff))) + expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) }) test_that("Are same results 
achieved when handling a missing row differently",{ @@ -200,9 +200,9 @@ test_that("Are same results achieved when handling a missing row differently",{ suppressWarnings(sum3 <- Summarize(len~age,data=WR1.comb3)) ## Compare the different results diff12 <- as.matrix(sum1[,-1]-sum2[,-1]) - expect_equivalent(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12))) + expect_equal(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12)),ignore_attr=TRUE) diff23 <- as.matrix(sum2[,-1]-sum3[,-1]) - expect_equivalent(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23))) + expect_equal(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23)),ignore_attr=TRUE) ## Apply the different ALKs with alkAgeDist len.n1 <- xtabs(~LCat1,data=WR1) @@ -216,9 +216,9 @@ test_that("Are same results achieved when handling a missing row differently",{ sum3 <- alkAgeDist(WR1.key3,len.An3,len.n3) ## Compare the different results diff12 <- as.matrix(sum1[,-1]-sum2[,-1]) - expect_equivalent(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12))) + expect_equal(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12)),ignore_attr=TRUE) diff23 <- as.matrix(sum2[,-1]-sum3[,-1]) - expect_equivalent(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23))) + expect_equal(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23)),ignore_attr=TRUE) ## Apply the different ALKs with alkMeanVar suppressWarnings(sum1 <- alkMeanVar(WR1.key1,len~LCat1+age,WR1.age,len.n1)) @@ -226,21 +226,24 @@ test_that("Are same results achieved when handling a missing row differently",{ suppressWarnings(sum3 <- alkMeanVar(WR1.key3,len~LCat3+age,WR1.age,len.n3)) ## Compare the different results diff12 <- as.matrix(sum1[,-1]-sum2[,-1]) - expect_equivalent(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12))) + expect_equal(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12)),ignore_attr=TRUE) diff23 <- as.matrix(sum2[,-1]-sum3[,-1]) - expect_equivalent(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23))) + 
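Several of the replacements above add `ignore_attr=TRUE` because the summarized difference matrices carry dimnames that the plain zero matrix lacks. The base-R analogue of that option shows why it is needed:

```r
m1 <- matrix(0, nrow = 2, ncol = 2,
             dimnames = list(c("a", "b"), c("x", "y")))
m2 <- matrix(0, nrow = 2, ncol = 2)   # same values, no dimnames

isTRUE(all.equal(m1, m2))                            # FALSE: attributes differ
isTRUE(all.equal(m1, m2, check.attributes = FALSE))  # TRUE: values match
```

`expect_equal(..., ignore_attr=TRUE)` performs the attribute-blind comparison that `expect_equivalent()` used to provide.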
expect_equal(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23)),ignore_attr=TRUE) - suppressWarnings(sum1 <- alkMeanVar(WR1.key1,len~LCat1+age,WR1.age,len.n1, - method="QuinnDeriso")) - suppressWarnings(sum2 <- alkMeanVar(WR1.key2,len~LCat2+age,WR1.age,len.n2, - method="QuinnDeriso")) - suppressWarnings(sum3 <- alkMeanVar(WR1.key3,len~LCat3+age,WR1.age,len.n3, - method="QuinnDeriso")) + suppressMessages(suppressWarnings( + sum1 <- alkMeanVar(WR1.key1,len~LCat1+age,WR1.age,len.n1,method="QuinnDeriso") + )) + suppressMessages(suppressWarnings( + sum2 <- alkMeanVar(WR1.key2,len~LCat2+age,WR1.age,len.n2,method="QuinnDeriso") + )) + suppressMessages(suppressWarnings( + sum3 <- alkMeanVar(WR1.key3,len~LCat3+age,WR1.age,len.n3,method="QuinnDeriso") + )) ## Compare the different results diff12 <- as.matrix(sum1[,-1]-sum2[,-1]) - expect_equivalent(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12))) + expect_equal(diff12,matrix(0,nrow=nrow(diff12),ncol=ncol(diff12)),ignore_attr=TRUE) diff23 <- as.matrix(sum2[,-1]-sum3[,-1]) - expect_equivalent(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23))) + expect_equal(diff23,matrix(0,nrow=nrow(diff23),ncol=ncol(diff23)),ignore_attr=TRUE) }) @@ -271,12 +274,12 @@ test_that("Assigned ages are correct (within rounding) with semi-random alkIndiv }) test_that("alkAgeDist() reproduces results from Table 8.4 (left) of Quinn and Deriso (1999)",{ - if (require(fishmethods)) { + if (require(fishmethods,quietly=TRUE)) { ## Q&D (1999) data are alkdata and alkprop reproduces Table 8.4 results data(alkdata,package="fishmethods") tmp1 <- alkprop(alkdata)$results } - if (require(FSAdata)) { + if (require(FSAdata,quietly=TRUE)) { ## Same data in SnapperHG2 in a different format ## create ALK and intermediate results data(SnapperHG2,package="FSAdata") @@ -289,7 +292,7 @@ test_that("alkAgeDist() reproduces results from Table 8.4 (left) of Quinn and De ## Find difference in results diff <- as.matrix(tmp2[,-1]-tmp1[,-3]) - 
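The point estimates that `alkAgeDist()`/`alkprop()` compute in the Quinn and Deriso comparison above are weighted column sums of the age-length key. A minimal illustration on a made-up key (the SEs in the test come from Quinn and Deriso's variance formula, which is not shown here):

```r
# Age-length key: row = length class, column = age; each row sums to 1
key <- rbind(c(1.0, 0.0, 0.0),
             c(0.4, 0.6, 0.0),
             c(0.0, 0.3, 0.7))
n_l <- c(10, 40, 50)   # number of fish measured per length class

# Age proportions: weight each key row by its length-class proportion,
# then sum down the age columns
p_age <- colSums((n_l / sum(n_l)) * key)
p_age   # 0.26 0.39 0.35 -- sums to 1
```

Element-wise multiplication by `n_l/sum(n_l)` recycles the vector down the rows (column-major), so each length-class row is weighted before summing.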
expect_equivalent(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff))) + expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) ## enter Q&D results as a guard against fishmethods changing props <- c(0.0003,0.0213,0.1624,0.0926,0.1533,0.1461,0.1260, @@ -297,6 +300,6 @@ test_that("alkAgeDist() reproduces results from Table 8.4 (left) of Quinn and De ses <- c(0.0003,0.0056,0.0157,0.0158,0.0185,0.0182,0.0150, 0.0050,0.0074,0.0083,0.0047,0.0050,0.0031,0.0063) diff <- as.matrix(round(tmp2[,-1],4)-cbind(props,ses)) - expect_equivalent(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff))) + expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) } }) diff --git a/tests/testthat/testthat_FSAUtils.R b/tests/testthat/testthat_FSAUtils.R index 86463b04..17ce759d 100644 --- a/tests/testthat/testthat_FSAUtils.R +++ b/tests/testthat/testthat_FSAUtils.R @@ -139,10 +139,10 @@ test_that("capFirst() results",{ df$rnd <- runif(nrow(df)) df$junk <- sample(c("Derek","Hugh","Ogle"),nrow(df),replace=TRUE) ## actual tests - expect_equivalent(levels(factor(capFirst(df$species))), - c("Bluefin Tuna","Bluegill","Lmb")) - expect_equivalent(levels(factor(capFirst(df$species,which="first"))), - c("Bluefin tuna","Bluegill","Lmb")) + expect_equal(levels(factor(capFirst(df$species))), + c("Bluefin Tuna","Bluegill","Lmb")) + expect_equal(levels(factor(capFirst(df$species,which="first"))), + c("Bluefin tuna","Bluegill","Lmb")) }) test_that("capFirst() returned classes",{ @@ -151,12 +151,12 @@ test_that("capFirst() returned classes",{ fvec <- factor(vec) ## first example of non-factor vector vec1 <- capFirst(vec) - expect_equivalent(class(vec),class(vec1)) - expect_equivalent(class(vec1),"character") + expect_equal(class(vec),class(vec1)) + expect_equal(class(vec1),"character") ## second example of factored vector fvec1 <- capFirst(fvec) - expect_equivalent(class(fvec),class(fvec1)) - expect_equivalent(class(fvec1),"factor") + expect_equal(class(fvec),class(fvec1)) + 
expect_equal(class(fvec1),"factor") }) test_that("col2rgbt() results",{ @@ -171,7 +171,7 @@ test_that("fact2num() results",{ nums <- c(1,2,6,9,3) tmp <- fact2num(factor(nums)) expect_equal(tmp,nums) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) }) @@ -195,22 +195,22 @@ test_that("fishR() return values",{ test_that("geomean() / geosd() results",{ ## Geometric mean # match wikipedia example - expect_equivalent(geomean(c(1/32,1,4)),1/2) + expect_equal(geomean(c(1/32,1,4)),1/2) # match http://www.thinkingapplied.com/means_folder/deceptive_means.htm tmp <- c(1.0978,1.1174,1.1341,0.9712,1.1513,1.2286,1.0930,0.9915,1.0150) tmp2 <- c(NA,tmp) - expect_equivalent(round(geomean(tmp),4),1.0861) - expect_equivalent(round(geosd(tmp),4),1.0795) + expect_equal(round(geomean(tmp),4),1.0861) + expect_equal(round(geosd(tmp),4),1.0795) # match geometric.mean in psych package - if (require(psych)) { - expect_equivalent(geomean(tmp),psych::geometric.mean(tmp)) - expect_equivalent(geomean(tmp2,na.rm=TRUE),psych::geometric.mean(tmp2)) + if (require(psych,quietly=TRUE)) { + expect_equal(geomean(tmp),psych::geometric.mean(tmp)) + expect_equal(geomean(tmp2,na.rm=TRUE),psych::geometric.mean(tmp2)) } - if (require(DescTools)) { - expect_equivalent(geomean(tmp),DescTools::Gmean(tmp)) - expect_equivalent(geomean(tmp2,na.rm=TRUE),DescTools::Gmean(tmp2,na.rm=TRUE)) - expect_equivalent(geosd(tmp),DescTools::Gsd(tmp)) - expect_equivalent(geosd(tmp2,na.rm=TRUE),DescTools::Gsd(tmp2,na.rm=TRUE)) + if (require(DescTools,quietly=TRUE)) { + expect_equal(geomean(tmp),DescTools::Gmean(tmp)) + expect_equal(geomean(tmp2,na.rm=TRUE),DescTools::Gmean(tmp2,na.rm=TRUE)) + expect_equal(geosd(tmp),DescTools::Gsd(tmp)) + expect_equal(geosd(tmp2,na.rm=TRUE),DescTools::Gsd(tmp2,na.rm=TRUE)) } }) @@ -220,7 +220,7 @@ test_that("headtail() return values",{ expect_equal(nrow(tmp),2*n) expect_equal(ncol(tmp),ncol(iris)) expect_equal(names(tmp),names(iris)) - 
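The `geomean()`/`geosd()` tests above rest on the standard log-scale identities. A minimal standalone version — FSA's implementations add handling for zeros, negatives, and NA policies, so this is only the core calculation:

```r
# Geometric mean and geometric SD via the log scale
geomean <- function(x, na.rm = FALSE) exp(mean(log(x), na.rm = na.rm))
geosd   <- function(x, na.rm = FALSE) exp(sd(log(x), na.rm = na.rm))

geomean(c(1/32, 1, 4))   # 0.5, the Wikipedia example used in the tests
geosd(c(3, 3, 3))        # 1: identical values have no spread on the log scale
```

The cross-checks against `psych::geometric.mean()` and `DescTools::Gmean()`/`Gsd()` in the tests guard against those packages and FSA drifting apart.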
expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(tmp,rbind(head(iris,n=n),tail(iris,n=n))) ## check more rows n <- 6 @@ -228,7 +228,7 @@ test_that("headtail() return values",{ expect_equal(nrow(tmp),2*n) expect_equal(ncol(tmp),ncol(iris)) expect_equal(names(tmp),names(iris)) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(tmp,rbind(head(iris,n=n),tail(iris,n=n))) ## check of restricted columns n <- 3 @@ -237,7 +237,7 @@ test_that("headtail() return values",{ expect_equal(nrow(tmp),2*n) expect_equal(ncol(tmp),length(cols)) expect_equal(names(tmp),names(iris)[cols]) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") ## check for matrix miris <- as.matrix(iris[,seq_len(4)]) @@ -245,17 +245,17 @@ test_that("headtail() return values",{ expect_equal(nrow(tmp),2*n) expect_equal(ncol(tmp),ncol(miris)) expect_equal(names(tmp),names(miris)) - expect_is(tmp,"matrix") - expect_equivalent(tmp,rbind(head(miris,n=n),tail(miris,n=n))) + expect_equal(class(tmp),c("matrix","array")) + expect_equal(tmp,rbind(head(miris,n=n),tail(miris,n=n)),ignore_attr=TRUE) # check of addrownums tmp <- FSA::headtail(miris,addrownums=FALSE) expect_true(is.null(rownames(tmp))) ## check how it handles tbl_df object - if (require(tibble)) { + if (require(tibble,quietly=TRUE)) { iris2 <- tibble::as_tibble(iris) tmp <- FSA::headtail(iris2,n=15) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") } }) @@ -265,14 +265,14 @@ test_that("peek() return values",{ expect_equal(nrow(tmp),n) expect_equal(ncol(tmp),ncol(iris)) expect_equal(names(tmp),names(iris)) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") ## check more rows n <- 10 tmp <- FSA::peek(iris,n=n) expect_equal(nrow(tmp),n) expect_equal(ncol(tmp),ncol(iris)) expect_equal(names(tmp),names(iris)) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") ## check of restricted columns n <- 20 cols <- 2:3 @@ -280,7 
+280,7 @@ test_that("peek() return values",{ expect_equal(nrow(tmp),n) expect_equal(ncol(tmp),length(cols)) expect_equal(names(tmp),names(iris)[cols]) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") ## check for matrix miris <- as.matrix(iris[,seq_len(4)]) @@ -288,17 +288,15 @@ test_that("peek() return values",{ expect_equal(nrow(tmp),n) expect_equal(ncol(tmp),ncol(miris)) expect_equal(names(tmp),names(miris)) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) # check of addrownums tmp <- FSA::peek(miris,addrownums=FALSE) expect_true(is.null(rownames(tmp))) ## check how it handles tbl_df object - if (require(tibble)) { - iris2 <- tibble::as_tibble(iris) - tmp <- FSA::peek(iris2,n=15) - expect_is(tmp,"data.frame") - } + iris2 <- tibble::as_tibble(iris) + tmp <- FSA::peek(iris2,n=15) + expect_equal(class(tmp),"data.frame") }) test_that("lagratio() calculations",{ @@ -352,8 +350,8 @@ test_that("logbtcf() output",{ cp10 <- cf10*(10^(predict(lm10,data.frame(log10x=log10(10))))) ## Check output type - expect_is(cfe,"numeric") - expect_is(cf10,"numeric") + expect_equal(class(cfe),"numeric") + expect_equal(class(cf10),"numeric") ## Results should be equal expect_equal(cfe,cf10) @@ -367,7 +365,7 @@ test_that("oddeven() return values",{ expect_false(is.even(1)) expect_equal(is.odd(1:4),c(TRUE,FALSE,TRUE,FALSE)) expect_equal(is.even(1:4),c(FALSE,TRUE,FALSE,TRUE)) - expect_is(is.odd(1:4),"logical") + expect_equal(class(is.odd(1:4)),"logical") }) test_that("perc() return values",{ @@ -417,8 +415,8 @@ test_that("repeatedRows2Keep() return values",{ stringsAsFactors=FALSE) keepFirst <- repeatedRows2Keep(test1,cols2ignore=1:2) keepLast <- repeatedRows2Keep(test1,cols2use=3:4,keep="last") - expect_is(keepFirst,"logical") - expect_is(keepLast,"logical") + expect_equal(class(keepFirst),"logical") + expect_equal(class(keepLast),"logical") tmp <- droplevels(subset(test1,keepFirst)) expect_equal(tmp$ID,c(1,3:7,10)) expect_true(all(tmp$KEEP 
%in% c("First","Both"))) diff --git a/tests/testthat/testthat_Internals.R b/tests/testthat/testthat_Internals.R index 24ba885a..bc7e739d 100644 --- a/tests/testthat/testthat_Internals.R +++ b/tests/testthat/testthat_Internals.R @@ -91,123 +91,123 @@ test_that("iHndlCols2Use() messages and results",{ ## use one column by number ind <- 1 tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=ind) - expect_equivalent(tmp,df1[,ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind,drop=FALSE]) + expect_equal(names(tmp),nms[ind]) ## use one column by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use="net") - expect_equivalent(tmp,df1[,ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind,drop=FALSE]) + expect_equal(names(tmp),nms[ind]) ## use two contiguous columns by number ind <- 1:2 tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=ind) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## use two contiguous columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=c("net","eff")) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## use two non-contiguous columns by number ind <- c(1,3) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=ind) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## use two non-contiguous columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=c("net","species")) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## use three columns by number ind <- c(1,3,4) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=ind) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + 
expect_equal(names(tmp),nms[ind]) ## use three columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=c("net","species","catch")) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## use four columns by number ind <- 1:4 tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=ind) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## use four columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2use=c("net","eff","species","catch")) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) #### Check results using cols2ignore ## ignore one column by number ind <- 1 tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,-ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind,drop=FALSE]) + expect_equal(names(tmp),nms[-ind]) ## ignore one column by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore="net") - expect_equivalent(tmp,df1[,-ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind,drop=FALSE]) + expect_equal(names(tmp),nms[-ind]) ## ignore two contiguous columns by number ind <- 1:2 tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,-ind]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind]) + expect_equal(names(tmp),nms[-ind]) ## ignore two contiguous columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=c("net","eff")) - expect_equivalent(tmp,df1[,-ind]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind]) + expect_equal(names(tmp),nms[-ind]) ## ignore two non-contiguous columns by number ind <- c(1,3) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,-ind]) - 
expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind]) + expect_equal(names(tmp),nms[-ind]) ## ignore two non-contiguous columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=c("net","species")) - expect_equivalent(tmp,df1[,-ind]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind]) + expect_equal(names(tmp),nms[-ind]) ## ignore three columns by number ind <- c(1,3,4) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,-ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind,drop=FALSE]) + expect_equal(names(tmp),nms[-ind]) ## ignore three columns by name tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=c("net","species","catch")) - expect_equivalent(tmp,df1[,-ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind,drop=FALSE]) + expect_equal(names(tmp),nms[-ind]) ## ignore four columns by number ind <- 1:4 tmp <- suppressWarnings(FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind)) - expect_equivalent(tmp,df1[,-ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind,drop=FALSE]) + expect_equal(names(tmp),nms[-ind]) ## ignore four columns by name tmp <- suppressWarnings( FSA:::iHndlCols2UseIgnore(df1,cols2ignore=c("net","eff","species","catch"))) - expect_equivalent(tmp,df1[,-ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[-ind]) + expect_equal(tmp,df1[,-ind,drop=FALSE]) + expect_equal(names(tmp),nms[-ind]) #### Check results with cols2ignore using negative indices ## ignore one column by number ind <- -1 tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind,drop=FALSE]) + expect_equal(names(tmp),nms[ind]) ## ignore two contiguous columns by number ind <- -c(1:2) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,ind]) - 
expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## ignore two non-contiguous columns by number ind <- -c(1,3) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,ind]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind]) + expect_equal(names(tmp),nms[ind]) ## ignore three columns by number ind <- -c(1,3,4) tmp <- FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind) - expect_equivalent(tmp,df1[,ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind,drop=FALSE]) + expect_equal(names(tmp),nms[ind]) ## ignore three columns by number ind <- -c(1:4) tmp <- suppressWarnings(FSA:::iHndlCols2UseIgnore(df1,cols2ignore=ind)) - expect_equivalent(tmp,df1[,ind,drop=FALSE]) - expect_equivalent(names(tmp),nms[ind]) + expect_equal(tmp,df1[,ind,drop=FALSE]) + expect_equal(names(tmp),nms[ind]) }) @@ -222,7 +222,7 @@ test_that("iHndlMultWhat() messages and results",{ test_that("iLegendHelp() messages and results",{ - expect_error(iLegendHelp("Derek"),"Must use proper keyword") + expect_error(FSA:::iLegendHelp("Derek"),"Must use proper keyword") tmp <- FSA:::iLegendHelp("topright") expect_true(tmp$do.legend) expect_equal(tmp$x,"topright") @@ -280,21 +280,21 @@ test_that("iTypeoflm() messages and results",{ Mirex$year <- factor(Mirex$year) ## Check return types tmp <- lm(mirex~weight*year*species,data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("IVR","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("IVR","list")) tmp <- lm(mirex~weight*year,data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("IVR","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("IVR","list")) tmp <- lm(mirex~weight+year,data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("IVR","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("IVR","list")) tmp <- lm(mirex~weight,data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("SLR","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("SLR","list")) tmp 
<- lm(mirex~year,data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("ONEWAY","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("ONEWAY","list")) tmp <- lm(mirex~year*species,data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("TWOWAY","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("TWOWAY","list")) tmp <- lm(mirex~weight+I(weight^2),data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("POLY","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("POLY","list")) tmp <- lm(mirex~weight+rnorm(nrow(Mirex)+rnorm(nrow(Mirex))),data=Mirex) - expect_is(FSA:::iTypeoflm(tmp),c("MLR","list")) + expect_equal(class(FSA:::iTypeoflm(tmp)),c("MLR","list")) ## Check some errors glm1 <- glm(year~weight,data=Mirex,family="binomial") expect_error(FSA:::iTypeoflm(glm1),"only works with") diff --git a/tests/testthat/testthat_LWCompPreds.R b/tests/testthat/testthat_LWCompPreds.R index fcc77905..de32c7c3 100644 --- a/tests/testthat/testthat_LWCompPreds.R +++ b/tests/testthat/testthat_LWCompPreds.R @@ -38,7 +38,7 @@ test_that("lwCompPred() returns",{ tmp2 <- FSA:::iMakeLWPred(tmp,lens=c(10,100,200),grps=levels(ChinookArg$loc), vn=c("logtl","loc"),interval="confidence", center.value=0,base=exp(1)) - expect_is(tmp2,"data.frame") + expect_equal(class(tmp2),"data.frame") expect_equal(mode(tmp2),"list") expect_equal(nrow(tmp2),3) expect_equal(ncol(tmp2),3) @@ -48,7 +48,7 @@ test_that("lwCompPred() returns",{ tmp2 <- FSA:::iMakeLWPred(tmp,lens=c(10,100,200),grps=levels(ChinookArg$loc), vn=c("logtl","loc"),interval="prediction", center.value=0,base=exp(1)) - expect_is(tmp2,"data.frame") + expect_equal(class(tmp2),"data.frame") expect_equal(mode(tmp2),"list") expect_equal(nrow(tmp2),3) expect_equal(ncol(tmp2),3) @@ -58,7 +58,7 @@ test_that("lwCompPred() returns",{ tmp2 <- FSA:::iMakeLWPred(tmp,lens=c(10,100,200),grps=levels(ChinookArg$loc), vn=c("logtl","loc"),interval="both", center.value=0,base=exp(1)) - expect_is(tmp2,"data.frame") + expect_equal(class(tmp2),"data.frame") 
expect_equal(mode(tmp2),"list") expect_equal(nrow(tmp2),3) expect_equal(ncol(tmp2),5) diff --git a/tests/testthat/testthat_PSD.R b/tests/testthat/testthat_PSD.R index 8be2e853..708fe62e 100644 --- a/tests/testthat/testthat_PSD.R +++ b/tests/testthat/testthat_PSD.R @@ -113,8 +113,9 @@ test_that("psdCalc() messages",{ "no stock-length fish in the sample") ## restrict data.frame to no >=quality fish tmp <- subset(df,tl% + expect_warning("No fish in larger than 'stock' categories") %>% + suppressWarnings() ## bad formulae expect_error(psdCalc(tl,data=df,species="Yellow perch"), @@ -268,37 +269,37 @@ test_that("tictactoe() errors and warnings",{ test_that("psdVal() returns",{ ## check values for yellow perch tmp <- psdVal("Yellow Perch",incl.zero=FALSE) - expect_equivalent(tmp,c(130,200,250,300,380)) + expect_equal(tmp,c(130,200,250,300,380),ignore_attr=TRUE) expect_equal(names(tmp),c("stock","quality","preferred","memorable","trophy")) tmp <- psdVal("Yellow Perch",incl.zero=FALSE,units="in") - expect_equivalent(tmp,c(5,8,10,12,15)) + expect_equal(tmp,c(5,8,10,12,15),ignore_attr=TRUE) expect_equal(names(tmp),c("stock","quality","preferred","memorable","trophy")) tmp <- psdVal("Yellow Perch",units="in") - expect_equivalent(tmp,c(0,5,8,10,12,15)) + expect_equal(tmp,c(0,5,8,10,12,15),ignore_attr=TRUE) expect_equal(names(tmp),c("substock","stock","quality","preferred", "memorable","trophy" )) tmp <- psdVal("Yellow Perch",units="in",addLens=c(7,9)) - expect_equivalent(tmp,c(0,5,7,8,9,10,12,15)) + expect_equal(tmp,c(0,5,7,8,9,10,12,15),ignore_attr=TRUE) expect_equal(names(tmp),c("substock","stock","7","quality","9", "preferred","memorable","trophy" )) tmp <- psdVal("Yellow Perch",units="in",addLens=c(7,9), addNames=c("minSlot","maxSlot")) - expect_equivalent(tmp,c(0,5,7,8,9,10,12,15)) + expect_equal(tmp,c(0,5,7,8,9,10,12,15),ignore_attr=TRUE) expect_equal(names(tmp),c("substock","stock","minSlot","quality","maxSlot", "preferred","memorable","trophy")) tmp <- psdVal("Yellow 
Perch",units="in",addLens=c(minSlot=7,maxSlot=9), addNames=c("minSlot","maxSlot")) - expect_equivalent(tmp,c(0,5,7,8,9,10,12,15)) + expect_equal(tmp,c(0,5,7,8,9,10,12,15),ignore_attr=TRUE) expect_equal(names(tmp),c("substock","stock","minSlot","quality","maxSlot", "preferred","memorable","trophy")) tmp <- psdVal("yellow perch") - expect_equivalent(tmp,c(0,130,200,250,300,380)) + expect_equal(tmp,c(0,130,200,250,300,380),ignore_attr=TRUE) expect_equal(names(tmp),c("substock","stock","quality","preferred", "memorable","trophy")) tmp <- psdVal("yellow Perch",showJustSource=TRUE) - expect_is(tmp,"data.frame") - expect_equivalent(ncol(tmp),2) - expect_equivalent(nrow(tmp),1) + expect_equal(class(tmp),"data.frame") + expect_equal(ncol(tmp),2) + expect_equal(nrow(tmp),1) }) test_that("psdCI() returns",{ @@ -306,13 +307,13 @@ test_that("psdCI() returns",{ ipsd <- c(130,491,253,123)/n ## single binomial tmp <- psdCI(c(0,0,1,1),ipsd,n=n) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),1) expect_equal(ncol(tmp),3) expect_equal(colnames(tmp),c("Estimate","95% LCI","95% UCI")) tmp <- psdCI(c(1,0,0,0),ipsd,n=n,label="PSD S-Q") - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),1) expect_equal(ncol(tmp),3) @@ -320,13 +321,13 @@ test_that("psdCI() returns",{ expect_equal(rownames(tmp),"PSD S-Q") ## single multinomial tmp <- psdCI(c(0,0,1,1),ipsd,n=n,method="multinomial") - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),1) expect_equal(ncol(tmp),3) expect_equal(colnames(tmp),c("Estimate","95% LCI","95% UCI")) tmp <- psdCI(c(1,0,0,0),ipsd,n=n,method="multinomial",label="PSD S-Q") - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),1) expect_equal(ncol(tmp),3) @@ 
-346,17 +347,17 @@ test_that("psdCI() returns",{ ipsd <- c(1,0,0,0) n <- 455 tmp <- psdCI(c(1,1,0,0),ipsd,n=n) - expect_equivalent(tmp,matrix(c(100,NA,NA),nrow=1)) + expect_equal(tmp,matrix(c(100,NA,NA),nrow=1),ignore_attr=TRUE) ipsd <- c(0,0,0,1) n <- 455 tmp <- psdCI(c(1,1,0,0),ipsd,n=n) - expect_equivalent(tmp,matrix(c(0,NA,NA),nrow=1)) + expect_equal(tmp,matrix(c(0,NA,NA),nrow=1),ignore_attr=TRUE) }) test_that("psdCalc() returns",{ ## All values tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),8) expect_equal(ncol(tmp),3) @@ -366,7 +367,7 @@ test_that("psdCalc() returns",{ ## Traditional values tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch", what="traditional")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),4) expect_equal(ncol(tmp),3) @@ -376,7 +377,7 @@ test_that("psdCalc() returns",{ ## Incremental values tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch", what="incremental")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),4) expect_equal(ncol(tmp),3) @@ -385,7 +386,7 @@ test_that("psdCalc() returns",{ ## All values, but don't drop 0s tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch", drop0Est=FALSE)) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),8) expect_equal(ncol(tmp),3) @@ -395,7 +396,7 @@ test_that("psdCalc() returns",{ ## All values, with some additional lengths tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch", addLens=225,addNames="Derek")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),10) 
expect_equal(ncol(tmp),3) @@ -406,7 +407,7 @@ test_that("psdCalc() returns",{ ## All values, with some additional lengths but no names tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch", addLens=c(225,245),drop0Est=FALSE)) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),12) expect_equal(ncol(tmp),3) @@ -418,7 +419,7 @@ test_that("psdCalc() returns",{ tmp <- suppressWarnings(psdCalc(~tl,data=df,species="Yellow perch", addLens=c(225,245),drop0Est=FALSE, justAdds=TRUE)) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),5) expect_equal(ncol(tmp),3) @@ -428,7 +429,7 @@ test_that("psdCalc() returns",{ ## All values, but df only has values greater than stock values df1 <- droplevels(subset(df,tl>=130)) tmp <- suppressWarnings(psdCalc(~tl,data=df1,species="Yellow perch")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),8) expect_equal(ncol(tmp),3) @@ -438,7 +439,7 @@ test_that("psdCalc() returns",{ ## All values, but df only has values greater than quality values df1 <- droplevels(subset(df,tl>=200)) tmp <- suppressWarnings(psdCalc(~tl,data=df1,species="Yellow perch")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),7) expect_equal(ncol(tmp),3) @@ -448,7 +449,7 @@ test_that("psdCalc() returns",{ ## All values, but df only has values greater than memorable value df1 <- droplevels(subset(df,tl>=300)) tmp <- suppressWarnings(psdCalc(~tl,data=df1,species="Yellow perch")) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),5) expect_equal(ncol(tmp),3) @@ -459,7 +460,7 @@ test_that("psdCalc() returns",{ tmp <- suppressWarnings(psdCalc(~tl,data=df, 
addLens=c("stock"=130,"quality"=200,"preferred"=250, "memorable"=300,"trophy"=380))) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),8) expect_equal(ncol(tmp),3) @@ -469,7 +470,7 @@ test_that("psdCalc() returns",{ tmp <- suppressWarnings(psdCalc(~tl,data=df, addLens=c("stock"=130,"name1"=200,"name2"=250))) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),4) expect_equal(ncol(tmp),3) @@ -478,7 +479,7 @@ test_that("psdCalc() returns",{ tmp <- suppressWarnings(psdCalc(~tl,data=df, addLens=c("stock"=130,"name1"=200))) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),2) expect_equal(ncol(tmp),3) @@ -500,7 +501,7 @@ test_that("psdAdd() returns",{ ## Validate Results ---- test_that("Does psdAdd() create correct Gabelhouse categories?",{ suppressMessages(df2$gcatn <- psdAdd(tl~species,data=df2)) - expect_equivalent(df2$gcatn,df2$GCATN) + expect_equal(df2$gcatn,df2$GCATN) }) test_that("Does psdAdd() properly handle NA in species?",{ @@ -510,7 +511,7 @@ test_that("Does psdAdd() properly handle NA in species?",{ Spp=c("White Crappie",NA,"White Crappie", "White Crappie","White Crappie")) suppressMessages(gcat <- psdAdd(TL~Spp,data=testdf,drop.levels=TRUE)) - expect_equivalent(which(is.na(testdf$TL) | is.na(testdf$Spp)), + expect_equal(which(is.na(testdf$TL) | is.na(testdf$Spp)), which(is.na(gcat))) # Just NAs for species only multiple other species @@ -518,7 +519,7 @@ test_that("Does psdAdd() properly handle NA in species?",{ Spp=c("White Crappie",NA,"White Crappie", "White Crappie","Black Crappie")) suppressMessages(gcat <- psdAdd(TL~Spp,data=testdf,drop.levels=TRUE)) - expect_equivalent(which(is.na(testdf$TL) | is.na(testdf$Spp)), + expect_equal(which(is.na(testdf$TL) | is.na(testdf$Spp)), which(is.na(gcat))) # Just NAs for species, 
but with a species w/o Gabelhouse lengths @@ -526,7 +527,7 @@ test_that("Does psdAdd() properly handle NA in species?",{ Spp=c("White Crappie",NA,"badSpp", "White Crappie","Black Crappie")) suppressMessages(gcat <- psdAdd(TL~Spp,data=testdf,drop.levels=TRUE)) - expect_equivalent(which(is.na(testdf$TL) | is.na(testdf$Spp) | testdf$Spp=="badSpp"), + expect_equal(which(is.na(testdf$TL) | is.na(testdf$Spp) | testdf$Spp=="badSpp"), which(is.na(gcat))) # NAs for length and species @@ -534,30 +535,29 @@ test_that("Does psdAdd() properly handle NA in species?",{ Spp=c("White Crappie",NA,"White Crappie", "White Crappie","Black Crappie")) suppressMessages(gcat <- psdAdd(TL~Spp,data=testdf,drop.levels=TRUE)) - expect_equivalent(which(is.na(testdf$TL) | is.na(testdf$Spp)), + expect_equal(which(is.na(testdf$TL) | is.na(testdf$Spp)), which(is.na(gcat))) }) test_that("Does psdCalc() compute correct PSD values?",{ suppressWarnings(bgres <- psdCalc(~tl,data=df2bg,species="Bluegill")) - expect_equivalent(bgres[,"Estimate"],c(80,60,40,20,20,20,20,20)) + expect_equal(bgres[,"Estimate"],c(80,60,40,20,20,20,20,20),ignore_attr=TRUE) suppressWarnings(lmbres <- psdCalc(~tl,data=df2lmb,species="Largemouth Bass")) - expect_equivalent(lmbres[,"Estimate"],c(60,30,10,40,30,20,10)) + expect_equal(lmbres[,"Estimate"],c(60,30,10,40,30,20,10),ignore_attr=TRUE) ## pretend like no species is given (but using bluegill results) suppressWarnings(bgres <- psdCalc(~tl,data=df2bg, addLens=c("stock"=80,"quality"=150, "preferred"=200,"memorable"=250, "trophy"=300))) - expect_equivalent(bgres[,"Estimate"],c(80,60,40,20,20,20,20,20)) - + expect_equal(bgres[,"Estimate"],c(80,60,40,20,20,20,20,20),ignore_attr=TRUE) }) test_that("Does psdCalc() work with a tibble?",{ tmp <- tibble::as_tibble(df2bg) suppressWarnings(bgres <- psdCalc(~tl,data=df2bg,species="Bluegill")) suppressWarnings(bgres2 <- psdCalc(~tl,data=tmp,species="Bluegill")) - expect_equivalent(bgres,bgres2) + expect_equal(bgres,bgres2) }) 
test_that("Does psdCI results match Brenden et al. (2008) results",{ @@ -609,9 +609,9 @@ test_that("Does psdCI results match Brenden et al. (2008) results",{ expect_equal(nrow(resX),4) ## Are values the same diffs <- round(resXY[,"Estimate"]-psdXYs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) diffs <- round(resX[,"Estimate"]-psdXs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) ## Do things still work if all sub-stock fish are removed tmp <- droplevels(subset(df3,tl>=brks["stock"])) @@ -625,9 +625,9 @@ test_that("Does psdCI results match Brenden et al. (2008) results",{ expect_equal(nrow(resX),4) ## Are values the same diffs <- round(resXY[,"Estimate"]-psdXYs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) diffs <- round(resX[,"Estimate"]-psdXs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) ## Do things still work if all sub-stock and stock fish are removed psdXYs <- prop.table(freq[-c(1:2)])*100 @@ -645,9 +645,9 @@ test_that("Does psdCI results match Brenden et al. (2008) results",{ expect_equal(nrow(resX),4) # all should be there ## Are values the same diffs <- round(resXY[,"Estimate"]-psdXYs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) diffs <- round(resX[-1,"Estimate"]-psdXs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) ## Do things still work if all trophy fish are removed psdXYs <- prop.table(freq[-c(1,length(freq))])*100 @@ -664,9 +664,9 @@ test_that("Does psdCI results match Brenden et al. 
(2008) results",{ expect_equal(nrow(resX),3) # no T row ## Are values the same diffs <- round(resXY[,"Estimate"]-psdXYs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) diffs <- round(resX[,"Estimate"]-psdXs,7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) }) test_that("Does manual calculation after psdAdd() equal psdCalc() results?",{ @@ -688,7 +688,7 @@ test_that("Does manual calculation after psdAdd() equal psdCalc() results?",{ ## do PSD X-Y results match for two species ## Are values the same diffs <- round(res["Bluegill",1:3]-psdBG[3:5,"Estimate"],7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) diffs <- round(res["Largemouth Bass",1:3]-psdLMB[3:5,"Estimate"],7) - expect_equivalent(diffs,rep(0,length(diffs))) + expect_equal(diffs,rep(0,length(diffs)),ignore_attr=TRUE) }) diff --git a/tests/testthat/testthat_StockRecruit.R b/tests/testthat/testthat_StockRecruit.R index b8d2d902..6d3f4269 100644 --- a/tests/testthat/testthat_StockRecruit.R +++ b/tests/testthat/testthat_StockRecruit.R @@ -80,12 +80,12 @@ test_that("srStarts() messages",{ expect_error(srStarts(recruits~stock,data=CodNorwegian,param=3, fixed=list(Rp=1)), "is not a parameter") - expect_warning(srStarts(recruits~stock,data=CodNorwegian,param=1, - fixed=list(a=-1)), - "not positive") - expect_warning(srStarts(recruits~stock,data=CodNorwegian,param=1, - fixed=list(b=-1)), - "not positive") + srStarts(recruits~stock,data=CodNorwegian,param=1,fixed=list(a=-1)) %>% + expect_warning("'a' parameter not positive") %>% + suppressWarnings() + srStarts(recruits~stock,data=CodNorwegian,param=1,fixed=list(b=-1)) %>% + expect_warning("'b' parameter not positive") %>% + suppressWarnings() expect_warning(srStarts(recruits~stock,data=CodNorwegian,param=2, fixed=list(Rp=-1)), "not positive") @@ -122,12 +122,12 @@ test_that("srStarts() 
messages",{ expect_error(srStarts(recruits~stock,data=CodNorwegian,type="Shepherd", fixed=list(a=3,d=1)), "not named") - expect_warning(srStarts(recruits~stock,data=CodNorwegian,type="Shepherd", - fixed=list(a=-1)), - "not positive") - expect_warning(srStarts(recruits~stock,data=CodNorwegian,type="Shepherd", - fixed=list(b=-1)), - "not positive") + srStarts(recruits~stock,data=CodNorwegian,type="Shepherd",fixed=list(a=-1)) %>% + expect_warning("'a' parameter not positive") %>% + suppressWarnings() + srStarts(recruits~stock,data=CodNorwegian,type="Shepherd",fixed=list(b=-1)) %>% + expect_warning("'b' parameter not positive") %>% + suppressWarnings() expect_warning(srStarts(recruits~stock,data=CodNorwegian,type="Shepherd", fixed=list(c=-1)), "not positive") @@ -173,27 +173,27 @@ test_that("srFuns() output",{ test_that("srFuns() output",{ ## Do all choices return a function - expect_is(srFuns("BevertonHolt",param=1),"function") - expect_is(srFuns("BevertonHolt",param=2),"function") - expect_is(srFuns("BevertonHolt",param=3),"function") - expect_is(srFuns("BevertonHolt",param=4),"function") - expect_is(srFuns("Ricker",param=1),"function") - expect_is(srFuns("Ricker",param=2),"function") - expect_is(srFuns("Ricker",param=3),"function") - expect_is(srFuns("Shepherd"),"function") - expect_is(srFuns("Saila"),"function") - expect_is(srFuns("independence"),"function") + expect_equal(class(srFuns("BevertonHolt",param=1)),"function") + expect_equal(class(srFuns("BevertonHolt",param=2)),"function") + expect_equal(class(srFuns("BevertonHolt",param=3)),"function") + expect_equal(class(srFuns("BevertonHolt",param=4)),"function") + expect_equal(class(srFuns("Ricker",param=1)),"function") + expect_equal(class(srFuns("Ricker",param=2)),"function") + expect_equal(class(srFuns("Ricker",param=3)),"function") + expect_equal(class(srFuns("Shepherd")),"function") + expect_equal(class(srFuns("Saila")),"function") + expect_equal(class(srFuns("independence")),"function") - 
expect_is(srFuns("BevertonHolt",param=1,simple=TRUE),"function") - expect_is(srFuns("BevertonHolt",param=2,simple=TRUE),"function") - expect_is(srFuns("BevertonHolt",param=3,simple=TRUE),"function") - expect_is(srFuns("BevertonHolt",param=4,simple=TRUE),"function") - expect_is(srFuns("Ricker",param=1,simple=TRUE),"function") - expect_is(srFuns("Ricker",param=2,simple=TRUE),"function") - expect_is(srFuns("Ricker",param=3,simple=TRUE),"function") - expect_is(srFuns("Shepherd",simple=TRUE),"function") - expect_is(srFuns("Saila",simple=TRUE),"function") - expect_is(srFuns("independence",simple=TRUE),"function") + expect_equal(class(srFuns("BevertonHolt",param=1,simple=TRUE)),"function") + expect_equal(class(srFuns("BevertonHolt",param=2,simple=TRUE)),"function") + expect_equal(class(srFuns("BevertonHolt",param=3,simple=TRUE)),"function") + expect_equal(class(srFuns("BevertonHolt",param=4,simple=TRUE)),"function") + expect_equal(class(srFuns("Ricker",param=1,simple=TRUE)),"function") + expect_equal(class(srFuns("Ricker",param=2,simple=TRUE)),"function") + expect_equal(class(srFuns("Ricker",param=3,simple=TRUE)),"function") + expect_equal(class(srFuns("Shepherd",simple=TRUE)),"function") + expect_equal(class(srFuns("Saila",simple=TRUE)),"function") + expect_equal(class(srFuns("independence",simple=TRUE)),"function") ## Do all choices return a message with the name of the function in it expect_message(srFuns("BevertonHolt",param=1,msg=TRUE), @@ -219,34 +219,34 @@ test_that("srStarts() output",{ CodNorwegian$fyear <- factor(CodNorwegian$year) ## Returns a list with proper names tmp <- srStarts(recruits~stock,data=CodNorwegian,type="BevertonHolt",param=1) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","b")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="BevertonHolt",param=2) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","Rp")) tmp <- 
srStarts(recruits~stock,data=CodNorwegian,type="BevertonHolt",param=3) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","b")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="BevertonHolt",param=4) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","Rp")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="Ricker",param=1) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","b")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="Ricker",param=2) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","b")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="Ricker",param=3) - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","Rp")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="Shepherd") - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","b","c")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="SailaLorda") - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),c("a","b","c")) tmp <- srStarts(recruits~stock,data=CodNorwegian,type="independence") - expect_is(tmp,"list") + expect_equal(class(tmp),"list") expect_equal(names(tmp),"a") }) diff --git a/tests/testthat/testthat_Summarize.R b/tests/testthat/testthat_Summarize.R index 2f0fa506..d3eaefd7 100644 --- a/tests/testthat/testthat_Summarize.R +++ b/tests/testthat/testthat_Summarize.R @@ -64,32 +64,32 @@ test_that("Summarize() results, single quantitative variable",{ exp <- c(9,9,15,sd(d1$q2),11,13,15,17,19,0) # all possible results names(exp) <- qnms1 tmp <- Summarize(~q2,data=d1) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_equal(tmp,exp[-c(2,10)]) # drop nvalid & percZero from expectations tmp <- Summarize(~q2,data=d1,nvalid="always") - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") 
   expect_equal(tmp,exp[-c(10)]) # drop percZero from expectations
   tmp <- Summarize(~q2,data=d1,percZero="always")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2)]) # drop nvalid from expectations
   tmp <- Summarize(~q2,data=d1,nvalid="always",percZero="always")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp)
   ## no NAs, but zeros
   exp <- c(9,9,4,sd(d1$q1),0,2,4,6,8,1/9*100) # all possible results
   names(exp) <- qnms1
   tmp <- Summarize(~q1,data=d1)
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2)]) # drop nvalid from expectations
   tmp <- Summarize(~q1,data=d1,percZero="never")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2,10)]) # drop nvalid & percZero from expectations
   tmp <- Summarize(~q1,data=d1,nvalid="always",percZero="never")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(10)]) # drop percZero from expectations
   tmp <- Summarize(~q1,data=d1,nvalid="always")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp)
   ## NAs and zeros
@@ -97,16 +97,16 @@ test_that("Summarize() results, single quantitative variable",{
               ,0,1.75,3.5,5.25,7,1/8*100) # all possible results
   names(exp) <- qnms1
   tmp <- Summarize(~q4,data=d1)
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp)
   tmp <- Summarize(~q4,data=d1,nvalid="never",percZero="never")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2,10)]) # drop nvalid & percZero from expectations
   tmp <- Summarize(~q4,data=d1,percZero="never")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(10)]) # drop percZero from expectations
   tmp <- Summarize(~q4,data=d1,nvalid="never")
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2)]) # drop nvalid from expectations
 })
@@ -118,7 +118,7 @@ test_that("Summarize() results, quantitative variable by single factor",{
                     max=c(13,16,19),percZero=rep(0,3),
                     stringsAsFactors=FALSE)
   tmp <- Summarize(q2~f1,data=d1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(tmp,exp[,-c(3,11)])
   tmp <- Summarize(q2~f1,data=d1,nvalid="always")
   expect_equal(tmp,exp[,-c(11)]) # drop percZero from expectations
@@ -134,7 +134,7 @@ test_that("Summarize() results, quantitative variable by single factor",{
                     max=c(2,5,8),percZero=c(1,0,0)/3*100,
                     stringsAsFactors=FALSE)
   tmp <- Summarize(q1~f1,data=d1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(tmp,exp[,-c(3)]) # drop nvalid from expectations
   tmp <- Summarize(q1~f1,data=d1,percZero="never")
   expect_equal(tmp,exp[,-c(3,11)]) # drop nvalid & percZero from expectations
@@ -151,7 +151,7 @@ test_that("Summarize() results, quantitative variable by single factor",{
                     max=c(1,4,7),percZero=c(50,0,0),
                     stringsAsFactors=FALSE)
   tmp <- Summarize(q4~f1,data=d1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(tmp,exp)
   tmp <- Summarize(q4~f1,data=d1,nvalid="never",percZero="never")
   expect_equal(tmp,exp[,-c(3,11)]) # drop nvalid & percZero from expectations
@@ -167,7 +167,7 @@ test_that("Summarize() results, quantitative variable by single character",{
                     Q1=c(11,14,17)+0.5,median=c(12,15,18),Q3=c(12,15,18)+0.5,
                     max=c(13,16,19),percZero=rep(0,3),stringsAsFactors=FALSE)
   tmp <- Summarize(q2~c1,data=d1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(tmp,exp[,-c(3,11)])
   tmp <- Summarize(q2~c1,data=d1,nvalid="always")
   expect_equal(tmp,exp[,-c(11)]) # drop percZero from expectations
@@ -185,7 +185,7 @@ test_that("Summarize() results, quantitative variable by two factors",{
                     Q3=11:19,max=11:19,percZero=rep(0,9),
                     stringsAsFactors=FALSE)
   tmp <- Summarize(q2~f2+f1,data=d1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(tmp,exp[,-c(4,12)]) # drop nvalid & percZero from expectations
   tmp <- Summarize(q2~f2+f1,data=d1,nvalid="always")
   expect_equal(tmp,exp[,-c(12)]) # drop percZero from expectations
@@ -202,7 +202,7 @@ test_that("Summarize() results using exclude=",{
                     max=c(4,7),percZero=rep(0,2),
                     stringsAsFactors=FALSE)
   tmp <- Summarize(q4~f1,data=d1,exclude="A")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(tmp,exp[,-c(3,11)]) # drop nvalid & percZero from expectations
 })
@@ -213,7 +213,7 @@ test_that("Summarize() Results from 1-d matrices and data.frames",{
   tmp <- Summarize(d)
   exp <- c(9,9,5,sd(d),1,3,5,7,9,0)
   names(exp) <- qnms1
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2,10)]) # drop nvalid and percZero from expectations
   tmp <- Summarize(d,nvalid="always")
   expect_equal(tmp,exp[-c(10)]) # drop percZero from expectations
@@ -223,27 +223,27 @@ test_that("Summarize() Results from 1-d matrices and data.frames",{
   tmp <- Summarize(~V1,data=d)
   exp <- c(9,9,5,sd(d$V1),1,3,5,7,9,0)
   names(exp) <- qnms1
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(tmp,exp[-c(2,10)]) # drop nvalid and percZero from expectations
   tmp <- Summarize(~V1,data=d,percZero="always")
   expect_equal(tmp,exp[-c(2)]) # drop nvalid from expectations
 })
 test_that("Summarize() results, assure 'by' variable is the expected mode/type",{
-  expect_is(Summarize(q1~f1,data=d1)$f1,"factor")
-  expect_is(Summarize(q1~c1,data=d1)$c1,"character")
-  expect_is(Summarize(q1~q3,data=d1)$q3,"numeric")
+  expect_equal(class(Summarize(q1~f1,data=d1)$f1),"factor")
+  expect_equal(class(Summarize(q1~c1,data=d1)$c1),"character")
+  expect_equal(class(Summarize(q1~q3,data=d1)$q3),"numeric")
   tmp <- Summarize(q1~f1+f2,data=d1)
-  expect_is(tmp$f1,"factor")
-  expect_is(tmp$f2,"factor")
+  expect_equal(class(tmp$f1),"factor")
+  expect_equal(class(tmp$f2),"factor")
   tmp <- Summarize(q1~c1+f1,data=d1)
-  expect_is(tmp$c1,"character")
-  expect_is(tmp$f1,"factor")
+  expect_equal(class(tmp$c1),"character")
+  expect_equal(class(tmp$f1),"factor")
   tmp <- Summarize(q1~c1+q3,data=d1)
-  expect_is(tmp$c1,"character")
-  expect_is(tmp$q3,"numeric")
+  expect_equal(class(tmp$c1),"character")
+  expect_equal(class(tmp$q3),"numeric")
   tmp <- Summarize(q1~f1+q3,data=d1)
-  expect_is(tmp$f1,"factor")
-  expect_is(tmp$q3,"numeric")
+  expect_equal(class(tmp$f1),"factor")
+  expect_equal(class(tmp$q3),"numeric")
 })
diff --git a/tests/testthat/testthat_VonBertalanffy.R b/tests/testthat/testthat_VonBertalanffy.R
index e100a4e1..1c7b4055 100644
--- a/tests/testthat/testthat_VonBertalanffy.R
+++ b/tests/testthat/testthat_VonBertalanffy.R
@@ -3,7 +3,7 @@
 ## Test Messages ----
 test_that("vbStarts() messages",{
   ## Get some data for the following attempts
-  if (require(fishmethods)) {
+  if (require(fishmethods,quietly=TRUE)) {
     data(Kimura,package="fishmethods")
     ## Asked for a dynamicPlot, which now does not exist
     expect_warning(vbStarts(length~age,data=Kimura,dynamicPlot=TRUE),
@@ -85,11 +85,13 @@ test_that("vbStarts() messages",{
     expect_error(vbStarts(length~age,data=subset(Kimura,age<3)),
                  "cannot be automatically determined")
   }
-  if (require(FSAdata)) {
+  if (require(FSAdata,quietly=TRUE)) {
     data(SpottedSucker1,package="FSAdata")
     ## gives warning about a poor estimate for K and Linf
     sv <- list(Linf=max(SpottedSucker1$tl),K=0.3,t0=0)
-    expect_warning(vbStarts(tl~age,data=SpottedSucker1,param="typical"))
+    vbStarts(tl~age,data=SpottedSucker1,param="typical") %>%
+      expect_warning("Starting value for Linf is very different from the observed") %>%
+      expect_warning("The suggested starting value for K is negative")
     ## too few ages to estimate Linf
     expect_error(vbStarts(tl~age,data=subset(SpottedSucker1,age<5)),
                  "cannot be automatically determined")
@@ -100,225 +102,219 @@ test_that("vbStarts() messages",{
 ## Test Output Types ----
 test_that("vbStarts() output",{
   ## Get some data for the following attempts
-  if (require(fishmethods)) {
-    data(Kimura,package="fishmethods")
-    ## Returns a list with proper names
-    tmp <- vbStarts(length~age,data=Kimura,param="typical")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="typical",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="typical",
-                    fixed=list(Linf=30))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    expect_equal(tmp[["Linf"]],30)
-    tmp <- vbStarts(length~age,data=Kimura,param="typical",
-                    fixed=list(Linf=30,K=0.3))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["K"]],0.3)
-    tmp <- vbStarts(length~age,data=Kimura,param="typical",
-                    fixed=list(Linf=30,K=0.3,t0=0))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["K"]],0.3)
-    expect_equal(tmp[["t0"]],0)
-    tmp <- vbStarts(length~age,data=Kimura,param="BevertonHolt")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="BevertonHolt",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="original")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","L0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="original",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","L0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="original",
-                    fixed=list(Linf=30,K=0.3,L0=2))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","L0"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["K"]],0.3)
-    expect_equal(tmp[["L0"]],2)
-    tmp <- vbStarts(length~age,data=Kimura,param="vonBertalanffy")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","L0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="vonBertalanffy",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","L0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="GQ")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("omega","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="GQ",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("omega","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="GQ",
-                    fixed=list(omega=20,K=0.3,t0=0))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("omega","K","t0"))
-    expect_equal(tmp[["omega"]],20)
-    expect_equal(tmp[["K"]],0.3)
-    expect_equal(tmp[["t0"]],0)
-    tmp <- vbStarts(length~age,data=Kimura,param="GallucciQuinn")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("omega","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="GallucciQuinn",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("omega","K","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Mooij")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","L0","omega"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Mooij",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","L0","omega"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Mooij",
-                    fixed=list(Linf=30,L0=2,omega=20))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","L0","omega"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["L0"]],2)
-    expect_equal(tmp[["omega"]],20)
-    tmp <- vbStarts(length~age,data=Kimura,param="Weisberg")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","t50","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Weisberg",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","t50","t0"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Weisberg",
-                    fixed=list(Linf=30,t50=2,t0=0))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","t50","t0"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["t50"]],2)
-    expect_equal(tmp[["t0"]],0)
-    tmp <- vbStarts(length~age,data=Kimura,param="Schnute",ages2use=c(1,10))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("L1","L3","K"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Schnute",ages2use=c(1,10),
-                    methEV="means")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("L1","L3","K"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Schnute",ages2use=c(1,10),
-                    fixed=list(L1=15,L3=60,K=0.3))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("L1","L3","K"))
-    expect_equal(tmp[["L1"]],15)
-    expect_equal(tmp[["L3"]],60)
-    expect_equal(tmp[["K"]],0.3)
-    tmp <- vbStarts(length~age,data=Kimura,param="Francis",ages2use=c(1,10))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("L1","L2","L3"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Francis",ages2use=c(1,10),
-                    methEV="means")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("L1","L2","L3"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Francis",ages2use=c(1,10),
-                    fixed=list(L1=15,L2=40,L3=60))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("L1","L2","L3"))
-    expect_equal(tmp[["L1"]],15)
-    expect_equal(tmp[["L2"]],40)
-    expect_equal(tmp[["L3"]],60)
-    tmp <- vbStarts(length~age,data=Kimura,param="Somers")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0","C","ts"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Somers",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0","C","ts"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Somers",
-                    fixed=list(Linf=30,K=0.3,t0=0,C=0.3,ts=0.5))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0","C","ts"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["K"]],0.3)
-    expect_equal(tmp[["t0"]],0)
-    expect_equal(tmp[["C"]],0.3)
-    expect_equal(tmp[["ts"]],0.5)
-    tmp <- vbStarts(length~age,data=Kimura,param="Somers2")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0","C","WP"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Somers2",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0","C","WP"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Somers2",
-                    fixed=list(Linf=30,K=0.3,t0=0,C=0.3,WP=0.5))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","K","t0","C","WP"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["K"]],0.3)
-    expect_equal(tmp[["t0"]],0)
-    expect_equal(tmp[["C"]],0.3)
-    expect_equal(tmp[["WP"]],0.5)
-    tmp <- vbStarts(length~age,data=Kimura,param="Pauly")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","Kpr","t0","ts","NGT"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Pauly",meth0="yngAge")
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","Kpr","t0","ts","NGT"))
-    tmp <- vbStarts(length~age,data=Kimura,param="Pauly",
-                    fixed=list(Linf=30,Kpr=0.3,t0=0,ts=0.5,NGT=0.2))
-    expect_is(tmp,"list")
-    expect_equal(names(tmp),c("Linf","Kpr","t0","ts","NGT"))
-    expect_equal(tmp[["Linf"]],30)
-    expect_equal(tmp[["Kpr"]],0.3)
-    expect_equal(tmp[["t0"]],0)
-    expect_equal(tmp[["ts"]],0.5)
-    expect_equal(tmp[["NGT"]],0.2)
-  }
+  data(Kimura,package="fishmethods")
+  ## Returns a list with proper names
+  tmp <- vbStarts(length~age,data=Kimura,param="typical")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="typical",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="typical",
+                  fixed=list(Linf=30))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  expect_equal(tmp[["Linf"]],30)
+  tmp <- vbStarts(length~age,data=Kimura,param="typical",
+                  fixed=list(Linf=30,K=0.3))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["K"]],0.3)
+  tmp <- vbStarts(length~age,data=Kimura,param="typical",
+                  fixed=list(Linf=30,K=0.3,t0=0))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["K"]],0.3)
+  expect_equal(tmp[["t0"]],0)
+  tmp <- vbStarts(length~age,data=Kimura,param="BevertonHolt")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="BevertonHolt",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="original")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","L0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="original",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","L0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="original",
+                  fixed=list(Linf=30,K=0.3,L0=2))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","L0"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["K"]],0.3)
+  expect_equal(tmp[["L0"]],2)
+  tmp <- vbStarts(length~age,data=Kimura,param="vonBertalanffy")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","L0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="vonBertalanffy",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","L0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="GQ")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("omega","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="GQ",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("omega","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="GQ",
+                  fixed=list(omega=20,K=0.3,t0=0))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("omega","K","t0"))
+  expect_equal(tmp[["omega"]],20)
+  expect_equal(tmp[["K"]],0.3)
+  expect_equal(tmp[["t0"]],0)
+  tmp <- vbStarts(length~age,data=Kimura,param="GallucciQuinn")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("omega","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="GallucciQuinn",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("omega","K","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Mooij")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","L0","omega"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Mooij",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","L0","omega"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Mooij",
+                  fixed=list(Linf=30,L0=2,omega=20))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","L0","omega"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["L0"]],2)
+  expect_equal(tmp[["omega"]],20)
+  tmp <- vbStarts(length~age,data=Kimura,param="Weisberg")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","t50","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Weisberg",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","t50","t0"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Weisberg",
+                  fixed=list(Linf=30,t50=2,t0=0))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","t50","t0"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["t50"]],2)
+  expect_equal(tmp[["t0"]],0)
+  tmp <- vbStarts(length~age,data=Kimura,param="Schnute",ages2use=c(1,10))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("L1","L3","K"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Schnute",ages2use=c(1,10),
+                  methEV="means")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("L1","L3","K"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Schnute",ages2use=c(1,10),
+                  fixed=list(L1=15,L3=60,K=0.3))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("L1","L3","K"))
+  expect_equal(tmp[["L1"]],15)
+  expect_equal(tmp[["L3"]],60)
+  expect_equal(tmp[["K"]],0.3)
+  tmp <- vbStarts(length~age,data=Kimura,param="Francis",ages2use=c(1,10))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("L1","L2","L3"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Francis",ages2use=c(1,10),
+                  methEV="means")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("L1","L2","L3"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Francis",ages2use=c(1,10),
+                  fixed=list(L1=15,L2=40,L3=60))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("L1","L2","L3"))
+  expect_equal(tmp[["L1"]],15)
+  expect_equal(tmp[["L2"]],40)
+  expect_equal(tmp[["L3"]],60)
+  tmp <- vbStarts(length~age,data=Kimura,param="Somers")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0","C","ts"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Somers",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0","C","ts"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Somers",
+                  fixed=list(Linf=30,K=0.3,t0=0,C=0.3,ts=0.5))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0","C","ts"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["K"]],0.3)
+  expect_equal(tmp[["t0"]],0)
+  expect_equal(tmp[["C"]],0.3)
+  expect_equal(tmp[["ts"]],0.5)
+  tmp <- vbStarts(length~age,data=Kimura,param="Somers2")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0","C","WP"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Somers2",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0","C","WP"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Somers2",
+                  fixed=list(Linf=30,K=0.3,t0=0,C=0.3,WP=0.5))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","K","t0","C","WP"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["K"]],0.3)
+  expect_equal(tmp[["t0"]],0)
+  expect_equal(tmp[["C"]],0.3)
+  expect_equal(tmp[["WP"]],0.5)
+  tmp <- vbStarts(length~age,data=Kimura,param="Pauly")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","Kpr","t0","ts","NGT"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Pauly",meth0="yngAge")
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","Kpr","t0","ts","NGT"))
+  tmp <- vbStarts(length~age,data=Kimura,param="Pauly",
+                  fixed=list(Linf=30,Kpr=0.3,t0=0,ts=0.5,NGT=0.2))
+  expect_equal(class(tmp),"list")
+  expect_equal(names(tmp),c("Linf","Kpr","t0","ts","NGT"))
+  expect_equal(tmp[["Linf"]],30)
+  expect_equal(tmp[["Kpr"]],0.3)
+  expect_equal(tmp[["t0"]],0)
+  expect_equal(tmp[["ts"]],0.5)
+  expect_equal(tmp[["NGT"]],0.2)
 })
 ## Validate Results ----
 test_that("vbFuns() and vbStarts() fit to Kimura match Haddon book (2nd ed, p237) results (Excel).",{
-  if (require(fishmethods)) {
-    data(Kimura,package="fishmethods")
-    ## Get typical Von B function
-    vbT <- vbFuns("typical")
-    ## Examine females
-    KimuraF <- droplevels(subset(Kimura,sex=="F"))
-    svF <- vbStarts(length~age,data=KimuraF,param="typical")
-    fitF <- nls(length~vbT(age,Linf,K,t0),data=KimuraF,start=svF)
-    cfF <- coef(fitF)
-    # double brackets to remove name attribute
-    expect_equal(round(cfF[["Linf"]],2),61.23)
-    expect_equal(round(cfF[["K"]],4),0.2963)
-    expect_equal(round(cfF[["t0"]],4),-0.0573)
-    ## Examine males
-    KimuraM <- droplevels(subset(Kimura,sex=="M"))
-    svM <- vbStarts(length~age,data=KimuraM,param="typical")
-    fitM <- nls(length~vbT(age,Linf,K,t0),data=KimuraM,start=svM)
-    cfM <- coef(fitM)
-    expect_equal(round(cfM[["Linf"]],2),55.98)
-    expect_equal(round(cfM[["K"]],4),0.3856)
-    expect_equal(round(cfM[["t0"]],4),0.1713)
-  }
+  data(Kimura,package="fishmethods")
+  ## Get typical Von B function
+  vbT <- vbFuns("typical")
+  ## Examine females
+  KimuraF <- droplevels(subset(Kimura,sex=="F"))
+  svF <- vbStarts(length~age,data=KimuraF,param="typical")
+  fitF <- nls(length~vbT(age,Linf,K,t0),data=KimuraF,start=svF)
+  cfF <- coef(fitF)
+  # double brackets to remove name attribute
+  expect_equal(round(cfF[["Linf"]],2),61.23)
+  expect_equal(round(cfF[["K"]],4),0.2963)
+  expect_equal(round(cfF[["t0"]],4),-0.0573)
+  ## Examine males
+  KimuraM <- droplevels(subset(Kimura,sex=="M"))
+  svM <- vbStarts(length~age,data=KimuraM,param="typical")
+  fitM <- nls(length~vbT(age,Linf,K,t0),data=KimuraM,start=svM)
+  cfM <- coef(fitM)
+  expect_equal(round(cfM[["Linf"]],2),55.98)
+  expect_equal(round(cfM[["K"]],4),0.3856)
+  expect_equal(round(cfM[["t0"]],4),0.1713)
 })
 test_that("vbFuns() and vbStarts() fit to AIFFD book (Box 5.4) results (SAS).",{
   # This is a weak test because of the messiness of the data.
-  if (require(FSAdata)) {
-    data(SpottedSucker1,package="FSAdata")
-    ## Get typical Von B function
-    vbT <- vbFuns("typical")
-    sv <- list(Linf=max(SpottedSucker1$tl),K=0.3,t0=0)
-    fit <- nls(tl~vbT(age,Linf,K,t0),data=SpottedSucker1,start=sv)
-    cf <- coef(fit)
-    # double brackets to remove name attribute
-    expect_equal(round(cf[["Linf"]],0),516)
-    expect_equal(round(cf[["K"]],3),0.190)
-    expect_equal(round(cf[["t0"]],2),-4.54)
-  }
+  data(SpottedSucker1,package="FSAdata")
+  ## Get typical Von B function
+  vbT <- vbFuns("typical")
+  sv <- list(Linf=max(SpottedSucker1$tl),K=0.3,t0=0)
+  fit <- nls(tl~vbT(age,Linf,K,t0),data=SpottedSucker1,start=sv)
+  cf <- coef(fit)
+  # double brackets to remove name attribute
+  expect_equal(round(cf[["Linf"]],0),516)
+  expect_equal(round(cf[["K"]],3),0.190)
+  expect_equal(round(cf[["t0"]],2),-4.54)
 })
 test_that("vbFuns() and vbStarts() fit to Kimura separated by sex match fishmethods (and Kimura) results.",{
-  if (require(fishmethods) & require(lmtest)) {
+  if (require(lmtest,quietly=TRUE)) {
     data(Kimura,package="fishmethods")
     ### get fishmethods results (straight from example)
@@ -379,19 +375,19 @@ test_that("vbFuns() and vbStarts() fit to Kimura separated by sex match fishmeth
     ## Do parameter estimates match those in Kimura (Table 3)
     # general model
-    expect_equivalent(round(coef(fitGen)[1:2],2),c(61.23,55.98))
-    expect_equivalent(round(coef(fitGen)[3:6],3),c(0.296,0.386,-0.057,0.171))
-    # Linf equivalent model (H3)
-    expect_equivalent(round(coef(fit2Kt)[1],2),c(59.40))
-    expect_equivalent(round(coef(fit2Kt)[2:5],3),c(0.337,0.297,0.087,-0.111))
-    # K equivalent model (H2) Linf slightly off in 2nd decimal for 2nd value
-    expect_equivalent(round(coef(fit2Lt)[1:2],1),c(60.1,57.4))
-    expect_equivalent(round(coef(fit2Lt)[3:5],3),c(0.330,0.095,-0.021))
-    # t0 equivalent model (H1)
-    expect_equivalent(round(coef(fit2LK)[1:2],2),c(60.77,56.45))
-    expect_equivalent(round(coef(fit2LK)[3:5],3),c(0.313,0.361,0.057))
+    expect_equal(round(coef(fitGen)[1:2],2),c(61.23,55.98),ignore_attr=TRUE)
+    expect_equal(round(coef(fitGen)[3:6],3),c(0.296,0.386,-0.057,0.171),ignore_attr=TRUE)
+    # Linf equal model (H3)
+    expect_equal(round(coef(fit2Kt)[1],2),c(59.40),ignore_attr=TRUE)
+    expect_equal(round(coef(fit2Kt)[2:5],3),c(0.337,0.297,0.087,-0.111),ignore_attr=TRUE)
+    # K equal model (H2) Linf slightly off in 2nd decimal for 2nd value
+    expect_equal(round(coef(fit2Lt)[1:2],1),c(60.1,57.4),ignore_attr=TRUE)
+    expect_equal(round(coef(fit2Lt)[3:5],3),c(0.330,0.095,-0.021),ignore_attr=TRUE)
+    # t0 equal model (H1)
+    expect_equal(round(coef(fit2LK)[1:2],2),c(60.77,56.45),ignore_attr=TRUE)
+    expect_equal(round(coef(fit2LK)[3:5],3),c(0.313,0.361,0.057),ignore_attr=TRUE)
     # common model (H4)
-    expect_equivalent(round(coef(fitCom),2),c(59.29,0.32,0.01))
+    expect_equal(round(coef(fitCom),2),c(59.29,0.32,0.01),ignore_attr=TRUE)
   }
 })
diff --git a/tests/testthat/testthat_WSWR.R b/tests/testthat/testthat_WSWR.R
index 8ec1202b..a19a7f8d 100644
--- a/tests/testthat/testthat_WSWR.R
+++ b/tests/testthat/testthat_WSWR.R
@@ -63,25 +63,25 @@ test_that("wsVal() results",{
   bg1 <- wsVal("Bluegill")
   bg2 <- WSlit[WSlit$species=="Bluegill" & WSlit$units=="metric",]
   bg2 <- bg2[,-which(names(bg2) %in% c("max.len","quad","comment"))]
-  expect_equivalent(bg1,bg2)
+  expect_equal(bg1,bg2,ignore_attr=TRUE)
   bg1 <- wsVal("Bluegill",units="English")
   bg2 <- WSlit[WSlit$species=="Bluegill" & WSlit$units=="English",]
   bg2 <- bg2[,-which(names(bg2) %in% c("max.len","quad","comment"))]
-  expect_equivalent(bg1,bg2)
+  expect_equal(bg1,bg2,ignore_attr=TRUE)
   bg1 <- wsVal("Bluegill",units="English",simplify=TRUE)
   bg2 <- WSlit[WSlit$species=="Bluegill" & WSlit$units=="English",]
   bg2 <- bg2[,which(names(bg2) %in% c("species","min.len","int","slope"))]
-  expect_equivalent(bg1,bg2)
+  expect_equal(bg1,bg2,ignore_attr=TRUE)
   ## Do Ruffe results match
   ruf1 <- wsVal("Ruffe")
   ruf2 <- WSlit[WSlit$species=="Ruffe" & WSlit$units=="metric" & WSlit$ref=="75",]
   ruf2 <- ruf2[-which(names(ruf2) %in% c("comment"))]
-  expect_equivalent(ruf1,ruf2)
+  expect_equal(ruf1,ruf2,ignore_attr=TRUE)
   ruf1 <- wsVal("Ruffe",simplify=TRUE)
   ruf2 <- WSlit[WSlit$species=="Ruffe" & WSlit$units=="metric" & WSlit$ref=="75",]
   ruf2 <- ruf2[,which(names(ruf2) %in% c("species","min.len","max.len",
                                          "int","slope","quad"))]
-  expect_equivalent(ruf1,ruf2)
+  expect_equal(ruf1,ruf2,ignore_attr=TRUE)
   ##
   expect_message(capture.output(wsVal("List")),"must be one of following")
   expect_output(suppressMessages(wsVal("List")))
@@ -95,5 +95,5 @@ test_that("wrAdd() matches values computed in Excel.",{
   df <- read.csv(ftmp)
   df$wr <- wrAdd(wt~tl+species,data=df)
-  expect_equivalent(df$wr,df$WR)
+  expect_equal(df$wr,df$WR)
 })
diff --git a/tests/testthat/testthat_addZeroCatch.R b/tests/testthat/testthat_addZeroCatch.R
index 4ee2e9d7..1aa86ce2 100644
--- a/tests/testthat/testthat_addZeroCatch.R
+++ b/tests/testthat/testthat_addZeroCatch.R
@@ -52,26 +52,26 @@ test_that("addZeroCatch() test messages",{
 ## Test Output Types ----
 test_that("addZeroCatch() test output types",{
   df1mod <- addZeroCatch(df1,eventvar="net",specvar="species",zerovar="catch")
-  expect_is(df1mod,"data.frame")
+  expect_equal(class(df1mod),"data.frame")
   expect_equal(names(df1mod),names(df1mod))
   expect_true(nrow(df1mod)>nrow(df1))
   df2mod <- addZeroCatch(df2,eventvar="net",specvar="species",zerovar="catch")
-  expect_is(df2mod,"data.frame")
+  expect_equal(class(df2mod),"data.frame")
   expect_equal(names(df2mod),names(df2mod))
   expect_true(nrow(df2mod)>nrow(df2))
   df3mod <- suppressWarnings(addZeroCatch(df3,eventvar="net",specvar="species",
                                           zerovar="catch"))
-  expect_is(df3mod,"data.frame")
+  expect_equal(class(df3mod),"data.frame")
   expect_equal(names(df3mod),names(df3mod))
   expect_true(nrow(df3mod)==nrow(df3))
   df4mod <- addZeroCatch(df4,eventvar="net",specvar="species",
                          zerovar=c("catch","recaps"))
-  expect_is(df4mod,"data.frame")
+  expect_equal(class(df4mod),"data.frame")
   expect_equal(names(df4mod),names(df4mod))
   expect_true(nrow(df4mod)>nrow(df4))
   df5mod <- addZeroCatch(df5,eventvar="net",specvar=c("species","sex"),
                          zerovar="catch")
-  expect_is(df5mod,"data.frame")
+  expect_equal(class(df5mod),"data.frame")
   expect_equal(names(df5mod),names(df5mod))
   expect_true(nrow(df5mod)>nrow(df5))
 })
diff --git a/tests/testthat/testthat_bootstrap.R b/tests/testthat/testthat_bootstrap.R
index a12f2905..48ccc4f5 100644
--- a/tests/testthat/testthat_bootstrap.R
+++ b/tests/testthat/testthat_bootstrap.R
@@ -104,21 +104,21 @@ test_that("Boot() methods messages",{
 test_that("nlsBoot() methods output types",{
   # testing confint()
   tmp <- confint(nlsBoot1)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c("B1","B2","B3"))
   expect_equal(nrow(tmp),3)
   expect_equal(ncol(tmp),2)
   tmp <- confint(nlsBoot1,"B1")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c("B1"))
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   tmp <- confint(nlsBoot1,c(1,3))
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c("B1","B3"))
@@ -126,14 +126,14 @@ test_that("nlsBoot() methods output types",{
   expect_equal(ncol(tmp),2)
   # testing htest()
   tmp <- htest(nlsBoot1,"B1")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Ho Value","p value"))
   expect_equal(rownames(tmp),c("B1"))
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   tmp <- htest(nlsBoot1,1)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Ho Value","p value"))
   expect_equal(rownames(tmp),c("B1"))
@@ -141,14 +141,14 @@ test_that("nlsBoot() methods output types",{
   expect_equal(ncol(tmp),2)
   # testing predict()
   tmp <- predict(nlsBoot1,fnx,days=3)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("days","Median","95% LCI","95% UCI"))
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),4)
   expect_equal(tmp[,"days"],c(days=3))
   tmp <- predict(nlsBoot1,fnx,days=1:5)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("days","Median","95% LCI","95% UCI"))
   expect_equal(nrow(tmp),5)
@@ -156,7 +156,7 @@ test_that("nlsBoot() methods output types",{
   expect_equal(tmp[,"days"],1:5)
   # get same output when digits are used?
   tmp <- predict(nlsBoot1,fnx,days=1:5,digits=2)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("days","Median","95% LCI","95% UCI"))
   expect_equal(nrow(tmp),5)
@@ -167,21 +167,21 @@ test_that("nlsBoot() methods output types",{
 test_that("Boot() methods output types",{
   # testing confint()
   tmp <- confint(Boot1)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("confint.boot","matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Estimate","95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c("B1","B2","B3"))
   expect_equal(nrow(tmp),3)
   expect_equal(ncol(tmp),3)
   tmp <- confint(Boot1,"B1")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("confint.boot","matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Estimate","95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c("B1"))
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),3)
   tmp <- confint(Boot1,c(1,3))
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("confint.boot","matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Estimate","95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c("B1","B3"))
@@ -189,14 +189,14 @@ test_that("Boot() methods output types",{
   expect_equal(ncol(tmp),3)
   # testing htest()
   tmp <- htest(Boot1,"B1")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Ho Value","p value"))
   expect_equal(rownames(tmp),c("B1"))
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   tmp <- htest(Boot1,1)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(colnames(tmp),c("Ho Value","p value"))
   expect_equal(rownames(tmp),c("B1"))
@@ -204,14 +204,14 @@ test_that("Boot() methods output types",{
   expect_equal(ncol(tmp),2)
   # testing predict()
   tmp <- predict(Boot1,fnx,days=3)
-  expect_is(tmp,"matrix")
+ expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(colnames(tmp),c("days","Median","95% LCI","95% UCI")) expect_equal(nrow(tmp),1) expect_equal(ncol(tmp),4) expect_equal(tmp[,"days"],c(days=3)) tmp <- predict(Boot1,fnx,days=1:5) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(colnames(tmp),c("days","Median","95% LCI","95% UCI")) expect_equal(nrow(tmp),5) @@ -219,7 +219,7 @@ test_that("Boot() methods output types",{ expect_equal(tmp[,"days"],1:5) # get same output when digits are used? tmp <- predict(Boot1,fnx,days=1:5,digits=2) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(colnames(tmp),c("days","Median","95% LCI","95% UCI")) expect_equal(nrow(tmp),5) diff --git a/tests/testthat/testthat_capHist.R b/tests/testthat/testthat_capHist.R index 173b12ac..f21f50a1 100644 --- a/tests/testthat/testthat_capHist.R +++ b/tests/testthat/testthat_capHist.R @@ -118,31 +118,31 @@ test_that("capHistConvert() messages",{ test_that("capHistSum() results",{ ## Simple two-sample test data ch <- capHistSum(d,cols2use=3:4) - expect_is(ch,"CapHist") + expect_equal(class(ch),"CapHist") expect_equal(mode(ch),"list") expect_equal(names(ch),c("caphist","sum")) - expect_is(ch$caphist,"table") - expect_is(ch$sum,"data.frame") + expect_equal(class(ch$caphist),"table") + expect_equal(class(ch$sum),"data.frame") ## Simple three-sample test data ch <- capHistSum(d,cols2use=3:5) - expect_is(ch,"CapHist") + expect_equal(class(ch),"CapHist") expect_equal(mode(ch),"list") expect_equal(names(ch),c("caphist","sum","methodB.top", "methodB.bot","m.array")) - expect_is(ch$caphist,"table") - expect_is(ch$sum,"data.frame") - expect_is(ch$methodB.top,"matrix") + expect_equal(class(ch$caphist),"table") + expect_equal(class(ch$sum),"data.frame") + expect_equal(class(ch$methodB.top),c("matrix","array")) 
expect_equal(nrow(ch$methodB.top),3) expect_equal(ncol(ch$methodB.top),3) expect_equal(colnames(ch$methodB.top),paste0("i=",seq_len(3))) expect_equal(rownames(ch$methodB.top),paste0("j=",seq_len(3))) - expect_is(ch$methodB.bot,"matrix") + expect_equal(class(ch$methodB.bot),c("matrix","array")) expect_equal(nrow(ch$methodB.bot),4) expect_equal(ncol(ch$methodB.bot),3) expect_equal(colnames(ch$methodB.bot),paste0("i=",seq_len(3))) expect_equal(rownames(ch$methodB.bot),c("m","u","n","R")) - expect_is(ch$m.array,"matrix") + expect_equal(class(ch$m.array),c("matrix","array")) expect_equal(nrow(ch$m.array),3) expect_equal(ncol(ch$m.array),4) expect_equal(rownames(ch$m.array),paste0("i=",seq_len(3))) @@ -152,39 +152,39 @@ test_that("capHistSum() results",{ test_that("capHistConvert() results",{ ## Individual format created from ... # event format - expect_is(ex1.E2I,"data.frame") + expect_equal(class(ex1.E2I),"data.frame") expect_equal(nrow(ex1.E2I),length(unique(ex1$fish))) expect_equal(ncol(ex1.E2I),length(unique(ex1$yr))+1) expect_equal(sum(ex1.E2I[2:5]),nrow(ex1)) expect_equal(names(ex1.E2I),c("fish",unique(ex1$yr))) - expect_is(ex1.E2I$fish,"character") + expect_equal(class(ex1.E2I$fish),"character") # frequency format (without id) - expect_is(ex1.F2I,"data.frame") + expect_equal(class(ex1.F2I),"data.frame") expect_equal(nrow(ex1.F2I),length(unique(ex1$fish))) expect_equal(ncol(ex1.F2I),length(unique(ex1$yr))) expect_equal(sum(ex1.F2I),nrow(ex1)) expect_equal(names(ex1.F2I),as.character(unique(ex1$yr))) # frequency format (with id) - expect_is(ex1.F2I2,"data.frame") + expect_equal(class(ex1.F2I2),"data.frame") expect_equal(nrow(ex1.F2I2),length(unique(ex1$fish))) expect_equal(ncol(ex1.F2I2),length(unique(ex1$yr))+1) expect_equal(sum(ex1.F2I2[2:5]),nrow(ex1)) expect_equal(names(ex1.F2I2),c("id",unique(ex1$yr))) # MARK format - expect_is(ex1.M2I,"data.frame") + expect_equal(class(ex1.M2I),"data.frame") expect_equal(nrow(ex1.M2I),length(unique(ex1$fish))) 
expect_equal(ncol(ex1.M2I),length(unique(ex1$yr))) expect_equal(sum(ex1.M2I),nrow(ex1)) expect_equal(names(ex1.M2I),paste0("event",seq_along(unique(ex1$yr)))) # MARK format (with id) - expect_is(ex1.M2I2,"data.frame") + expect_equal(class(ex1.M2I2),"data.frame") expect_equal(nrow(ex1.M2I2),length(unique(ex1$fish))) expect_equal(ncol(ex1.M2I2),length(unique(ex1$yr))+1) expect_equal(sum(ex1.M2I2[,2:5]),nrow(ex1)) expect_equal(names(ex1.M2I2),c("id",paste0("event", seq_along(unique(ex1$yr))))) # RMARK format - expect_is(ex1.R2I,"data.frame") + expect_equal(class(ex1.R2I),"data.frame") expect_equal(nrow(ex1.R2I),length(unique(ex1$fish))) expect_equal(ncol(ex1.R2I),length(unique(ex1$yr))+1) expect_equal(sum(ex1.R2I[,2:5]),nrow(ex1)) @@ -193,23 +193,23 @@ test_that("capHistConvert() results",{ ## Frequency created from # event format - expect_is(ex1.E2F,"data.frame") + expect_equal(class(ex1.E2F),"data.frame") expect_equal(ncol(ex1.E2F),length(unique(ex1$yr))+1) expect_equal(sum(ex1.E2F$freq),length(unique(ex1$fish))) expect_equal(names(ex1.E2F),c(unique(ex1$yr),"freq")) # Individual format - expect_is(ex1.I2F,"data.frame") + expect_equal(class(ex1.I2F),"data.frame") expect_equal(ncol(ex1.I2F),length(unique(ex1$yr))+1) expect_equal(sum(ex1.I2F$freq),length(unique(ex1$fish))) expect_equal(names(ex1.I2F),c(unique(ex1$yr),"freq")) # Mark format - expect_is(ex1.M2F,"data.frame") + expect_equal(class(ex1.M2F),"data.frame") expect_equal(ncol(ex1.M2F),length(unique(ex1$yr))+1) expect_equal(sum(ex1.M2F$freq),length(unique(ex1$fish))) expect_equal(names(ex1.M2F),c(paste0("event",seq_along(unique(ex1$yr))), "freq")) # RMark format - expect_is(ex1.R2F,"data.frame") + expect_equal(class(ex1.R2F),"data.frame") expect_equal(ncol(ex1.R2F),length(unique(ex1$yr))+1) expect_equal(sum(ex1.R2F$freq),length(unique(ex1$fish))) expect_equal(names(ex1.R2F),c(paste0("event",seq_along(unique(ex1$yr))), @@ -217,54 +217,54 @@ test_that("capHistConvert() results",{ ## MARK created from # event 
format - expect_is(ex1.E2M,"data.frame") + expect_equal(class(ex1.E2M),"data.frame") expect_equal(ncol(ex1.E2M),2) expect_equal(names(ex1.E2M),c("ch","freq")) # frequency format - expect_is(ex1.F2M,"data.frame") + expect_equal(class(ex1.F2M),"data.frame") expect_equal(ncol(ex1.F2M),2) expect_equal(names(ex1.F2M),c("ch","freq")) # Individual format - expect_is(ex1.I2M,"data.frame") + expect_equal(class(ex1.I2M),"data.frame") expect_equal(ncol(ex1.I2M),2) expect_equal(names(ex1.I2M),c("ch","freq")) # RMARK format - expect_is(ex1.R2M,"data.frame") + expect_equal(class(ex1.R2M),"data.frame") expect_equal(ncol(ex1.R2M),2) expect_equal(names(ex1.R2M),c("ch","freq")) ## RMARK created from # event format - expect_is(ex1.E2R,"data.frame") + expect_equal(class(ex1.E2R),"data.frame") expect_equal(nrow(ex1.E2R),length(unique(ex1$fish))) expect_equal(ncol(ex1.E2R),2) expect_equal(names(ex1.E2R),c("fish","ch")) - expect_is(ex1.E2R$fish,"character") + expect_equal(class(ex1.E2R$fish),"character") expect_equal(ex1.E2R$fish,as.character(sort(unique(ex1$fish)))) # frequency format - expect_is(ex1.F2R,"data.frame") + expect_equal(class(ex1.F2R),"data.frame") expect_equal(ncol(ex1.F2R),1) expect_equal(nrow(ex1.F2R),length(unique(ex1$fish))) expect_equal(names(ex1.F2R),"ch") # frequency format (with id) - expect_is(ex1.F2R2,"data.frame") + expect_equal(class(ex1.F2R2),"data.frame") expect_equal(ncol(ex1.F2R2),2) expect_equal(nrow(ex1.F2R2),length(unique(ex1$fish))) expect_equal(names(ex1.F2R2),c("id","ch")) # Individual format - expect_is(ex1.I2R,"data.frame") + expect_equal(class(ex1.I2R),"data.frame") expect_equal(ncol(ex1.I2R),2) expect_equal(nrow(ex1.I2R),length(unique(ex1$fish))) expect_equal(names(ex1.I2R),c("fish","ch")) - expect_is(ex1.I2R$fish,"character") + expect_equal(class(ex1.I2R$fish),"character") expect_equal(ex1.I2R$fish,as.character(sort(unique(ex1$fish)))) # MARK format - expect_is(ex1.M2R,"data.frame") + expect_equal(class(ex1.M2R),"data.frame") 
expect_equal(ncol(ex1.M2R),1) expect_equal(nrow(ex1.M2R),length(unique(ex1$fish))) expect_equal(names(ex1.M2R),"ch") # RMARK format (with id) - expect_is(ex1.M2R2,"data.frame") + expect_equal(class(ex1.M2R2),"data.frame") expect_equal(nrow(ex1.M2R2),length(unique(ex1$fish))) expect_equal(ncol(ex1.M2R2),2) expect_equal(names(ex1.M2R2),c("id","ch")) @@ -275,14 +275,14 @@ test_that("capHistConvert() results",{ test_that("capHistSum() results",{ ## Simple two-sample test data ch <- capHistSum(d,cols2use=3:4) - expect_equivalent(ch$caphist,as.table(c(2,2,1))) - expect_equivalent(names(ch$caphist),c("01","10","11")) + expect_equal(ch$caphist,as.table(c(2,2,1)),ignore_attr=TRUE) + expect_equal(names(ch$caphist),c("01","10","11")) expect_equal(ch$sum,data.frame(M=3,n=3,m=1)) ## Simple three-sample test data ch <- capHistSum(d,cols2use=3:5) - expect_equivalent(ch$caphist,as.table(c(1,1,1,1,1))) - expect_equivalent(names(ch$caphist),c("010","011","100","101","110")) + expect_equal(ch$caphist,as.table(c(1,1,1,1,1)),ignore_attr=TRUE) + expect_equal(names(ch$caphist),c("010","011","100","101","110")) tmp <- data.frame(n=c(3,3,2),m=c(0,1,2),R=c(3,3,0),M=c(0,3,5), u=c(3,3,0),v=c(1,3,2),f=c(2,3,0)) expect_equal(ch$sum,tmp) @@ -291,8 +291,8 @@ test_that("capHistSum() results",{ as.numeric(rep(NA,6))) expect_equal(ch$sum,tmp) tmp <- matrix(c(0,3,3,3,1,2,3,3,2,0,2,0),nrow=4) - expect_equivalent(ch$methodB.bot,tmp) + expect_equal(ch$methodB.bot,tmp,ignore_attr=TRUE) tmp <- matrix(c(3,3,2,1,NA,NA,1,1,NA,1,2,2),nrow=3) - expect_equivalent(ch$m.array,tmp) + expect_equal(ch$m.array,tmp,ignore_attr=TRUE) }) diff --git a/tests/testthat/testthat_catchCurve.R b/tests/testthat/testthat_catchCurve.R index fc87d63b..f55f222a 100644 --- a/tests/testthat/testthat_catchCurve.R +++ b/tests/testthat/testthat_catchCurve.R @@ -138,56 +138,56 @@ test_that("chapmanRobson errors and warnings",{ ## Test Output Types ---- test_that("catchCurve() outpute types",{ - expect_is(cc,"catchCurve") + 
expect_equal(class(cc),"catchCurve") # coef, unweighted ccA <- coef(cc) expect_true(is.vector(ccA)) - expect_is(ccA,"numeric") + expect_equal(class(ccA),"numeric") expect_equal(length(ccA),2) expect_equal(names(ccA),c("Z","A")) ccA <- coef(cc,parm="Z") expect_true(is.vector(ccA)) - expect_is(ccA,"numeric") + expect_equal(class(ccA),"numeric") expect_equal(length(ccA),1) expect_equal(names(ccA),c("Z")) ccA <- coef(cc,parm="lm") expect_true(is.vector(ccA)) - expect_is(ccA,"numeric") + expect_equal(class(ccA),"numeric") expect_equal(length(ccA),2) expect_equal(names(ccA),c("(Intercept)","age.e")) # coef, weighted cc2A <- coef(cc2) expect_true(is.vector(cc2A)) - expect_is(cc2A,"numeric") + expect_equal(class(cc2A),"numeric") expect_equal(length(cc2A),2) expect_equal(names(cc2A),c("Z","A")) cc2A <- coef(cc2,parm="A") expect_true(is.vector(cc2A)) - expect_is(cc2A,"numeric") + expect_equal(class(cc2A),"numeric") expect_equal(length(cc2A),1) expect_equal(names(cc2A),c("A")) cc2A <- coef(cc2,parm="lm") expect_true(is.vector(cc2A)) - expect_is(cc2A,"numeric") + expect_equal(class(cc2A),"numeric") expect_equal(length(cc2A),2) expect_equal(names(cc2A),c("(Intercept)","age.e")) # confint, unweighted ccA <- confint(cc) - expect_is(ccA,"matrix") + expect_equal(class(ccA),c("matrix","array")) expect_equal(mode(ccA),"numeric") expect_equal(nrow(ccA),2) expect_equal(ncol(ccA),2) expect_equal(rownames(ccA),c("Z","A")) expect_equal(colnames(ccA),c("95% LCI","95% UCI")) ccA <- confint(cc,parm="Z") - expect_is(ccA,"matrix") + expect_equal(class(ccA),c("matrix","array")) expect_equal(mode(ccA),"numeric") expect_equal(nrow(ccA),1) expect_equal(ncol(ccA),2) expect_equal(rownames(ccA),c("Z")) expect_equal(colnames(ccA),c("95% LCI","95% UCI")) ccA <- confint(cc,parm="lm") - expect_is(ccA,"matrix") + expect_equal(class(ccA),c("matrix","array")) expect_equal(mode(ccA),"numeric") expect_equal(nrow(ccA),2) expect_equal(ncol(ccA),2) @@ -195,21 +195,21 @@ test_that("catchCurve() outpute types",{ 
expect_equal(colnames(ccA),c("95% LCI","95% UCI")) # confint, weighted cc2A <- confint(cc2) - expect_is(cc2A,"matrix") + expect_equal(class(cc2A),c("matrix","array")) expect_equal(mode(cc2A),"numeric") expect_equal(nrow(cc2A),2) expect_equal(ncol(cc2A),2) expect_equal(rownames(cc2A),c("Z","A")) expect_equal(colnames(cc2A),c("95% LCI","95% UCI")) cc2A <- confint(cc2,parm="Z") - expect_is(cc2A,"matrix") + expect_equal(class(cc2A),c("matrix","array")) expect_equal(mode(cc2A),"numeric") expect_equal(nrow(cc2A),1) expect_equal(ncol(cc2A),2) expect_equal(rownames(cc2A),c("Z")) expect_equal(colnames(cc2A),c("95% LCI","95% UCI")) cc2A <- confint(cc2,parm="lm") - expect_is(cc2A,"matrix") + expect_equal(class(cc2A),c("matrix","array")) expect_equal(mode(cc2A),"numeric") expect_equal(nrow(cc2A),2) expect_equal(ncol(cc2A),2) @@ -217,21 +217,21 @@ test_that("catchCurve() outpute types",{ expect_equal(colnames(cc2A),c("95% LCI","95% UCI")) # summary cc2A <- summary(cc2) - expect_is(cc2A,"matrix") + expect_equal(class(cc2A),c("matrix","array")) expect_equal(mode(cc2A),"numeric") expect_equal(nrow(cc2A),2) expect_equal(ncol(cc2A),4) expect_equal(rownames(cc2A),c("Z","A")) expect_equal(colnames(cc2A),c("Estimate","Std. Error","t value","Pr(>|t|)")) cc2A <- summary(cc2,parm="Z") - expect_is(cc2A,"matrix") + expect_equal(class(cc2A),c("matrix","array")) expect_equal(mode(cc2A),"numeric") expect_equal(nrow(cc2A),1) expect_equal(ncol(cc2A),4) expect_equal(rownames(cc2A),c("Z")) expect_equal(colnames(cc2A),c("Estimate","Std. 
Error","t value","Pr(>|t|)")) cc2A <- summary(cc2,parm="lm") - expect_is(cc2A,"summary.lm") + expect_equal(class(cc2A),"summary.lm") # r-squared expect_true(is.numeric(rSquared(cc))) @@ -242,28 +242,28 @@ test_that("catchCurve() outpute types",{ }) test_that("chapmanRobson() output types",{ - expect_is(cr,"chapmanRobson") + expect_equal(class(cr),"chapmanRobson") # coef crA <- coef(cr) expect_true(is.vector(crA)) - expect_is(crA,"numeric") + expect_equal(class(crA),"numeric") expect_equal(length(crA),2) expect_equal(names(crA),c("S","Z")) crA <- coef(cr,parm="S") expect_true(is.vector(crA)) - expect_is(crA,"numeric") + expect_equal(class(crA),"numeric") expect_equal(length(crA),1) expect_equal(names(crA),c("S")) # summary crA <- summary(cr) - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),2) expect_equal(ncol(crA),2) expect_equal(rownames(crA),c("S","Z")) expect_equal(colnames(crA),c("Estimate","Std. Error")) crA <- summary(cr,parm="S") - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),1) expect_equal(ncol(crA),2) @@ -271,30 +271,30 @@ test_that("chapmanRobson() output types",{ expect_equal(colnames(crA),c("Estimate","Std. 
Error")) # confint crA <- confint(cr) - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),2) expect_equal(ncol(crA),2) expect_equal(rownames(crA),c("S","Z")) expect_equal(colnames(crA),c("95% LCI","95% UCI")) crA <- confint(cr,parm="S") - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),1) expect_equal(ncol(crA),2) expect_equal(rownames(crA),c("S")) expect_equal(colnames(crA),c("95% LCI","95% UCI")) - expect_is(cr1,"chapmanRobson") + expect_equal(class(cr1),"chapmanRobson") # coef crA <- coef(cr1) expect_true(is.vector(crA)) - expect_is(crA,"numeric") + expect_equal(class(crA),"numeric") expect_equal(length(crA),2) expect_equal(names(crA),c("S","Z")) # summary crA <- summary(cr1) - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),2) expect_equal(ncol(crA),2) @@ -302,23 +302,23 @@ test_that("chapmanRobson() output types",{ expect_equal(colnames(crA),c("Estimate","Std. Error")) # confint crA <- confint(cr1) - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),2) expect_equal(ncol(crA),2) expect_equal(rownames(crA),c("S","Z")) expect_equal(colnames(crA),c("95% LCI","95% UCI")) - expect_is(cr1,"chapmanRobson") + expect_equal(class(cr1),"chapmanRobson") # coef crA <- coef(cr2) expect_true(is.vector(crA)) - expect_is(crA,"numeric") + expect_equal(class(crA),"numeric") expect_equal(length(crA),2) expect_equal(names(crA),c("S","Z")) # summary crA <- summary(cr2) - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),2) expect_equal(ncol(crA),2) @@ -326,7 +326,7 @@ test_that("chapmanRobson() output types",{ expect_equal(colnames(crA),c("Estimate","Std. 
Error")) # confint crA <- confint(cr2) - expect_is(crA,"matrix") + expect_equal(class(crA),c("matrix","array")) expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),2) expect_equal(ncol(crA),2) @@ -373,7 +373,7 @@ test_that("catchCurve() and ChaptmanRobson() does negative ages2use properly.",{ scc1 <- summary(cc1) cc2 <- catchCurve(n~age,data=df,ages2use=-(1:2)) scc2 <- summary(cc2) - expect_equal(cc1,cc2) + expect_equal(cc1,cc2,ignore_attr=TRUE) expect_equal(scc1,scc2) cr1 <- chapmanRobson(n~age,data=df,ages2use=3:10) @@ -406,7 +406,7 @@ test_that("catchCurve() and chapmanRobson() match Miranda & Bettoli (2007) boxes }) test_that("catchCurve() and chapmanRobson() match results from fishmethods package",{ - if (require(fishmethods)) { + if (require(fishmethods,quietly=TRUE)) { ## get data data(rockbass,package="fishmethods") ## fishmethods results @@ -422,21 +422,20 @@ test_that("catchCurve() and chapmanRobson() match results from fishmethods packa scr2 <- summary(cr2) # catchCurve results match expect_equal(round(scc1["Z","Estimate"],3), - round(fm$Estimate[fm$Method=="Linear Regression" & fm$Parameter=="Z"]),3) + round(fm$Estimate[fm$Method=="Linear Regression" & fm$Parameter=="Z"],3)) expect_equal(round(scc1["Z","Std. Error"],3), - round(fm$SE[fm$Method=="Linear Regression" & fm$Parameter=="Z"]),3) + round(fm$SE[fm$Method=="Linear Regression" & fm$Parameter=="Z"],3)) # chapmanRobson results match expect_equal(round(scr1["Z","Estimate"],3), - round(fm$Estimate[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"]),3) + round(fm$Estimate[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"],3)) expect_equal(round(scr1["Z","Std. Error"],3), - round(fm$SE[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"]),3) + round(fm$SE[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"],3)) # chapmanRobson (with Smith et al. (2012) bias corrections) # results match for the point estimates but not the SE # fishmethods appears to use eqn 5 from smith et al. 
for # the uncorrected SE of Z, whereas FSA uses eqn 2 expect_equal(round(scr2["Z","Estimate"],3), - round(fm$Estimate[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"]),3) + round(fm$Estimate[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"],3)) #expect_equal(round(scr2["Z","Std. Error"],3),round(fm$SE[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"]),3) } }) - diff --git a/tests/testthat/testthat_ciDists.R b/tests/testthat/testthat_ciDists.R index 82010c95..112062a7 100644 --- a/tests/testthat/testthat_ciDists.R +++ b/tests/testthat/testthat_ciDists.R @@ -61,52 +61,52 @@ test_that("poiCI() messages",{ ## Test Output Types ---- test_that("binCI() output types",{ res <- binCI(7,10) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),3) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) expect_equal(rownames(res),c("Exact","Wilson","Asymptotic")) res <- binCI(c(3,7),10,type="wilson") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- binCI(c(3,7),10,type="exact") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- binCI(c(3,7),10,type="asymptotic") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- binCI(7,10,verbose=TRUE) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),3) expect_equal(ncol(res),5) expect_equal(colnames(res),c("x","n","proportion","95% LCI","95% UCI")) 
expect_equal(rownames(res),c("Exact","Wilson","Asymptotic")) res <- binCI(c(3,7),10,type="wilson",verbose=TRUE) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),5) expect_equal(colnames(res),c("x","n","proportion","95% LCI","95% UCI")) res <- binCI(c(3,7),10,type="exact",verbose=TRUE) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),5) expect_equal(colnames(res),c("x","n","proportion","95% LCI","95% UCI")) res <- binCI(c(3,7),10,type="asymptotic",verbose=TRUE) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),5) @@ -115,7 +115,7 @@ test_that("binCI() output types",{ test_that("hyperCI() output types",{ res <- hyperCI(20,20,10) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),1) expect_equal(ncol(res),2) @@ -124,52 +124,52 @@ test_that("hyperCI() output types",{ test_that("poiCI() output types",{ res <- poiCI(10) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),4) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) expect_equal(rownames(res),c("Exact","Daly","Byar","Asymptotic")) res <- poiCI(10,type="exact") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),1) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10,type="daly") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),1) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10,type="byar") - expect_is(res,"matrix") 
+ expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),1) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10,type="asymptotic") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),1) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10,type=c("exact","daly")) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) expect_equal(rownames(res),c("Exact","Daly")) res <- poiCI(10,type="exact",verbose=TRUE) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),1) expect_equal(ncol(res),3) expect_equal(colnames(res),c("x","95% LCI","95% UCI")) expect_equal(rownames(res),"Exact") res <- poiCI(10,type=c("exact","daly"),verbose=TRUE) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),3) @@ -177,32 +177,32 @@ test_that("poiCI() output types",{ expect_equal(rownames(res),c("Exact","Daly")) res <- poiCI(10:11,type="exact") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10:11,type="daly") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10:11,type="byar") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) 
expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(10:11,type="asymptotic") - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),2) expect_equal(ncol(res),2) expect_equal(colnames(res),c("95% LCI","95% UCI")) res <- poiCI(0) - expect_is(res,"matrix") + expect_equal(class(res),c("matrix","array")) expect_true(is.numeric(res)) expect_equal(nrow(res),4) expect_equal(ncol(res),2) @@ -221,7 +221,7 @@ test_that("binCI() compared to epitools functions",{ 7,10,0.7,0.4159742,0.9840258),nrow=3,byrow=TRUE) rownames(resepi) <- c("Exact","Wilson","Asymptotic") colnames(resepi) <- c("x","n","proportion","lower","upper") - expect_equivalent(res,resepi) + expect_equal(res,resepi,ignore_attr=TRUE) res <- binCI(5:7,10,type="wilson",verbose=TRUE) res[,4:5] <- round(res[,4:5],7) @@ -230,7 +230,7 @@ test_that("binCI() compared to epitools functions",{ 6,10,0.6,0.3126738,0.8318197, 7,10,0.7,0.3967781,0.8922087),nrow=3,byrow=TRUE) colnames(resepi) <- c("x","n","proportion","lower","upper") - expect_equivalent(res,resepi) + expect_equal(res,resepi,ignore_attr=TRUE) }) test_that("poiCI() compared to epitools functions",{ @@ -244,7 +244,7 @@ test_that("poiCI() compared to epitools functions",{ 10,3.802050,16.19795),nrow=4,byrow=TRUE) rownames(resepi) <- c("Exact","Daly","Byar","Asymptotic") colnames(resepi) <- c("x","lower","upper") - expect_equivalent(res,resepi) + expect_equal(res,resepi,ignore_attr=TRUE) res <- poiCI(5:7,type="exact",verbose=TRUE) res[,2] <- round(res[,2],6) @@ -254,6 +254,5 @@ test_that("poiCI() compared to epitools functions",{ 6,2.201891,13.05948, 7,2.814358,14.42268),nrow=3,byrow=TRUE) colnames(resepi) <- c("x","lower","upper") - expect_equivalent(res,resepi) + expect_equal(res,resepi,ignore_attr=TRUE) }) - diff --git a/tests/testthat/testthat_depletion.R b/tests/testthat/testthat_depletion.R index dd86a802..d4de64d2 100644 --- a/tests/testthat/testthat_depletion.R +++ 
b/tests/testthat/testthat_depletion.R @@ -1,6 +1,6 @@ ## Results for validation tests below ---- # fishmethods's Darter data -if (require(fishmethods)) { +if (require(fishmethods,quietly=TRUE)) { data(darter,package="fishmethods") # fishmethod Leslie deplet(catch=darter$catch,effort=darter$effort,method="l") @@ -23,7 +23,7 @@ if (require(fishmethods)) { } # DeLury's Lobster Data -if (require(FSAdata)) { +if (require(FSAdata,quietly=TRUE)) { data(LobsterPEI,package="FSAdata") df <- subset(LobsterPEI,day>16) # fishmethod Leslie @@ -46,51 +46,49 @@ if (require(FSAdata)) { } # Fischler's Blue Crab data -if (require(FSAdata)) { - data(BlueCrab,package="FSAdata") - # fishmethod Leslie - deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="l") - cf5fm <- l.out$results[,1:2] - ci5fm <- l.out$results[,3:4] - # FSA Leslie - ex5 <- with(BlueCrab,depletion(catch,effort)) - cf5 <- summary(ex5) - ci5 <- confint(ex5) - - # fishmethod DeLury - deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="d") - cf6fm <- d.out$results[,1:2] - ci6fm <- d.out$results[,3:4] - # FSA DeLury - ex6 <- with(BlueCrab,depletion(catch,effort,method="DeLury",Ricker.mod=TRUE)) - cf6 <- summary(ex6) - ci6 <- confint(ex6) -} +data(BlueCrab,package="FSAdata") +# fishmethod Leslie +deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="l") +cf5fm <- l.out$results[,1:2] +ci5fm <- l.out$results[,3:4] +# FSA Leslie +ex5 <- with(BlueCrab,depletion(catch,effort)) +cf5 <- summary(ex5) +ci5 <- confint(ex5) + +# fishmethod DeLury +deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="d") +cf6fm <- d.out$results[,1:2] +ci6fm <- d.out$results[,3:4] +# FSA DeLury +ex6 <- with(BlueCrab,depletion(catch,effort,method="DeLury",Ricker.mod=TRUE)) +cf6 <- summary(ex6) +ci6 <- confint(ex6) + # Omand's SMB data -if (require(fishmethods)) { - # fishmethod Leslie - deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="l") - cf7fm <- l.out$results[,1:2] - ci7fm <- l.out$results[,3:4] - # FSA Leslie - ex7 
<- with(SMBassLS,depletion(catch,effort)) - cf7 <- summary(ex7) - ci7 <- confint(ex7) - # FSA Leslie with Ricker mod - ex7r <- with(SMBassLS,depletion(catch,effort,Ricker.mod=TRUE)) - cf7r <- summary(ex7r) - ci7r <- confint(ex7r) - - # fishmethod DeLury - deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="d") - cf8fm <- d.out$results[,1:2] - ci8fm <- d.out$results[,3:4] - # FSA DeLury - ex8 <- with(SMBassLS,depletion(catch,effort,method="DeLury",Ricker.mod=TRUE)) - cf8 <- summary(ex8) - ci8 <- confint(ex8) -} +# fishmethod Leslie +deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="l") +cf7fm <- l.out$results[,1:2] +ci7fm <- l.out$results[,3:4] +# FSA Leslie +ex7 <- with(SMBassLS,depletion(catch,effort)) +cf7 <- summary(ex7) +ci7 <- confint(ex7) +# FSA Leslie with Ricker mod +ex7r <- with(SMBassLS,depletion(catch,effort,Ricker.mod=TRUE)) +cf7r <- summary(ex7r) +ci7r <- confint(ex7r) + +# fishmethod DeLury +deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="d") +cf8fm <- d.out$results[,1:2] +ci8fm <- d.out$results[,3:4] +# FSA DeLury +ex8 <- with(SMBassLS,depletion(catch,effort,method="DeLury",Ricker.mod=TRUE)) +cf8 <- summary(ex8) +ci8 <- confint(ex8) + ## Test Messages ---- test_that("depletion() messages",{ @@ -153,21 +151,21 @@ test_that("depletion() messages",{ ## Test Output Types ---- test_that("depletion() output types",{ l1 <- depletion(SMBassLS$catch,SMBassLS$effort,method="Leslie") - expect_is(l1,"depletion") + expect_equal(class(l1),"depletion") # coef()s l1A <- coef(l1) expect_true(is.vector(l1A)) - expect_is(l1A,"numeric") + expect_equal(class(l1A),"numeric") expect_equal(length(l1A),2) expect_equal(names(l1A),c("No","q")) l1A <- coef(l1,parm="No") expect_true(is.vector(l1A)) - expect_is(l1A,"numeric") + expect_equal(class(l1A),"numeric") expect_equal(length(l1A),1) expect_equal(names(l1A),c("No")) l1A <- coef(l1,parm="lm") expect_true(is.vector(l1A)) - expect_is(l1A,"numeric") + expect_equal(class(l1A),"numeric") 
expect_equal(length(l1A),2) expect_equal(names(l1A),c("(Intercept)","K")) # confint()s @@ -218,21 +216,21 @@ test_that("depletion() output types",{ expect_true(is.numeric(rSquared(l1))) d1 <- depletion(SMBassLS$catch,SMBassLS$effort,method="DeLury") - expect_is(d1,"depletion") + expect_equal(class(d1),"depletion") # coef()s d1A <- coef(d1) expect_true(is.vector(d1A)) - expect_is(d1A,"numeric") + expect_equal(class(d1A),"numeric") expect_equal(length(d1A),2) expect_equal(names(d1A),c("No","q")) d1A <- coef(d1,parm="No") expect_true(is.vector(d1A)) - expect_is(d1A,"numeric") + expect_equal(class(d1A),"numeric") expect_equal(length(d1A),1) expect_equal(names(d1A),c("No")) d1A <- coef(d1,parm="lm") expect_true(is.vector(d1A)) - expect_is(d1A,"numeric") + expect_equal(class(d1A),"numeric") expect_equal(length(d1A),2) # confint()s d1A <- confint(d1) @@ -283,134 +281,107 @@ test_that("depletion() output types",{ ## Validate Results ---- test_that("depletion() with 'Leslie' matches fishmethod's 'deplet' for darter data",{ - if (require(fishmethods)) { - expect_equal(round(cf1[["No","Estimate"]],0),round(cf1fm[["N","Estimate"]],0)) - expect_equal(round(cf1[["No","Std. Err."]],1),round(cf1fm[["N","SE"]],1)) - expect_equivalent(round(cf1["q",],7),round(cf1fm["q",],7)) - expect_equivalent(round(ci1["No",],1),round(ci1fm["N",],1)) - expect_equivalent(round(ci1["q",],6),round(ci1fm["q",],6)) - } + expect_equal(round(cf1[["No","Estimate"]],0),round(cf1fm[["N","Estimate"]],0)) + expect_equal(round(cf1[["No","Std. Err."]],1),round(cf1fm[["N","SE"]],1)) + expect_equal(round(cf1["q",],7),round(cf1fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci1["No",],1),round(ci1fm["N",],1)) + expect_equal(round(ci1["q",],6),round(ci1fm["q",],6)) }) test_that("depletion() with 'Leslie' matches fishmethod's 'deplet' for lobster data",{ - if (require(fishmethods)) { - expect_equal(round(cf3[["No","Estimate"]],0),round(cf3fm[["N","Estimate"]],0)) - expect_equal(round(cf3[["No","Std. 
Err."]],1),round(cf3fm[["N","SE"]],1)) - expect_equivalent(round(cf3["q",],7),round(cf3fm["q",],7)) - expect_equivalent(round(ci3["No",],1),round(ci3fm["N",],1)) - expect_equivalent(round(ci3["q",],5),round(ci3fm["q",],5)) - } + expect_equal(round(cf3[["No","Estimate"]],0),round(cf3fm[["N","Estimate"]],0)) + expect_equal(round(cf3[["No","Std. Err."]],1),round(cf3fm[["N","SE"]],1)) + expect_equal(round(cf3["q",],7),round(cf3fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci3["No",],1),round(ci3fm["N",],1)) + expect_equal(round(ci3["q",],5),round(ci3fm["q",],5)) }) test_that("depletion() with 'Leslie' matches example 7.1 (p. 299) in Seber (2002) for lobster data",{ - if (require(FSAdata)) { - expect_equal(round(cf3["No","Estimate"],1),120.5) - expect_equal(round(cf3["q","Estimate"],4),0.0074) - ## way off (52%) - #expect_equal(round(ci3["No",],0),c(77,327)) - ## off by only 0.0001 for the LCI - #expect_equal(round(ci3["q",],4),c(0.0058,0.0090)) - } + expect_equal(round(cf3["No","Estimate"],1),120.5) + expect_equal(round(cf3["q","Estimate"],4),0.0074) + ## way off (52%) + #expect_equal(round(ci3["No",],0),c(77,327)) + ## off by only 0.0001 for the LCI + #expect_equal(round(ci3["q",],4),c(0.0058,0.0090)) }) test_that("depletion() with 'Leslie' matches fishmethod's 'deplet' for blue crab data",{ - if (require(fishmethods)) { - expect_equal(round(cf5[["No","Estimate"]],0),round(cf5fm[["N","Estimate"]],0)) - expect_equal(round(cf5[["No","Std. Err."]],1),round(cf5fm[["N","SE"]],1)) - expect_equivalent(round(cf5["q",],7),round(cf5fm["q",],7)) - expect_equivalent(round(ci5["No",],1),round(ci5fm["N",],1)) - expect_equivalent(round(ci5["q",],6),round(ci5fm["q",],6)) - } + expect_equal(round(cf5[["No","Estimate"]],0),round(cf5fm[["N","Estimate"]],0)) + expect_equal(round(cf5[["No","Std. 
Err."]],1),round(cf5fm[["N","SE"]],1)) + expect_equal(round(cf5["q",],7),round(cf5fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci5["No",],1),round(ci5fm["N",],1)) + expect_equal(round(ci5["q",],6),round(ci5fm["q",],6)) }) test_that("depletion with 'Leslie' matches example 7.2 (p. 300) in Seber (2002) for blue crab",{ - if (require(FSAdata)) { - ## Off by <0.3% - #expect_equal(round(cf5["No","Estimate"],-2),330300) - expect_equal(round(cf5["q","Estimate"],5),0.00056) - ## off by ~2% - #expect_equal(round(ci5["No",],-2),c(299600,373600)) - ## off by 0.00001 for the LCI - #expect_equal(round(ci5["q",],5),c(0.00045,0.00067)) - } + ## Off by <0.3% + #expect_equal(round(cf5["No","Estimate"],-2),330300) + expect_equal(round(cf5["q","Estimate"],5),0.00056) + ## off by ~2% + #expect_equal(round(ci5["No",],-2),c(299600,373600)) + ## off by 0.00001 for the LCI + #expect_equal(round(ci5["q",],5),c(0.00045,0.00067)) }) test_that("depletion() with 'Leslie' matches fishmethod's 'deplet' for SMB data",{ - if (require(fishmethods)) { - expect_equal(round(cf7[["No","Estimate"]],0),round(cf7fm[["N","Estimate"]],0)) - expect_equal(round(cf7[["No","Std. Err."]],1),round(cf7fm[["N","SE"]],1)) - expect_equivalent(round(cf7["q",],7),round(cf7fm["q",],7)) - expect_equivalent(round(ci7["No",],1),round(ci7fm["N",],1)) - expect_equivalent(round(ci7["q",],5),round(ci7fm["q",],5)) - } + expect_equal(round(cf7[["No","Estimate"]],0),round(cf7fm[["N","Estimate"]],0)) + expect_equal(round(cf7[["No","Std. Err."]],1),round(cf7fm[["N","SE"]],1)) + expect_equal(round(cf7["q",],7),round(cf7fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci7["No",],1),round(ci7fm["N",],1)) + expect_equal(round(ci7["q",],5),round(ci7fm["q",],5)) }) test_that("depletion() with 'Leslie' with Ricker.mod matches example 6.1 (p. 
151) in Ricker (1975).",{ - if (require(FSAdata)) { - expect_equal(round(cf7r["No","Estimate"],0),1078) - # multiplied by 7 because Ricker fit on Ct rather than CPEt but ft was a constant 7 - expect_equal(round(cf7r["q","Estimate"]*7,5),0.10676) - ## Way different ... Ricker used a method by DeLury rather than Seber (2002) - #expect_equal(round(ci7r["No",],0),c(814,2507)) - } + expect_equal(round(cf7r["No","Estimate"],0),1078) + # multiplied by 7 because Ricker fit on Ct rather than CPEt but ft was a constant 7 + expect_equal(round(cf7r["q","Estimate"]*7,5),0.10676) + ## Way different ... Ricker used a method by DeLury rather than Seber (2002) + #expect_equal(round(ci7r["No",],0),c(814,2507)) }) test_that("depletion() with 'DeLury' and Ricker.mod matches fishmethod's 'deplet'",{ - if (require(fishmethods)) { - expect_equal(round(cf2[["No","Estimate"]],0),round(cf2fm[["N","Estimate"]],0)) - expect_equal(round(cf2[["No","Std. Err."]],1),round(cf2fm[["N","SE"]],1)) - expect_equivalent(round(cf2["q",],7),round(cf2fm["q",],7)) - expect_equivalent(round(ci2["No",],1),round(ci2fm["N",],1)) - expect_equivalent(round(ci2["q",],6),round(ci2fm["q",],6)) - } + expect_equal(round(cf2[["No","Estimate"]],0),round(cf2fm[["N","Estimate"]],0)) + expect_equal(round(cf2[["No","Std. Err."]],1),round(cf2fm[["N","SE"]],1)) + expect_equal(round(cf2["q",],7),round(cf2fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci2["No",],1),round(ci2fm["N",],1)) + expect_equal(round(ci2["q",],6),round(ci2fm["q",],6)) }) test_that("depletion() with 'DeLury' and Ricker.mod matches fishmethod's 'deplet' for the lobster data",{ - if (require(fishmethods)) { - expect_equal(round(cf4[["No","Estimate"]],0),round(cf4fm[["N","Estimate"]],0)) - expect_equal(round(cf4[["No","Std. 
Err."]],1),round(cf4fm[["N","SE"]],1)) - expect_equivalent(round(cf4["q",],7),round(cf4fm["q",],7)) - expect_equivalent(round(ci4["No",],1),round(ci4fm["N",],1)) - expect_equivalent(round(ci4["q",],7),round(ci4fm["q",],7)) - } + expect_equal(round(cf4[["No","Estimate"]],0),round(cf4fm[["N","Estimate"]],0)) + expect_equal(round(cf4[["No","Std. Err."]],1),round(cf4fm[["N","SE"]],1)) + expect_equal(round(cf4["q",],7),round(cf4fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci4["No",],1),round(ci4fm["N",],1)) + expect_equal(round(ci4["q",],7),round(ci4fm["q",],7)) }) #test_that("depletion() with 'DeLury' matches DeLury (1947) for lobster data",{ -# if (require(FSAdata)) { ## REGRESSION RESULTS WERE DIFFERENT ## off by 4% #expect_equal(round(cf4["No","Estimate"],1),116.33) ## off by 7.7% (but that is fourth decimal) # expect_equal(round(cf4[[1,"q"]],7),0.0079835) -# } #}) test_that("depletion() with 'DeLury' and Ricker.mod matches fishmethod's 'deplet' for the blue crab data",{ - if (require(fishmethods)) { - expect_equal(round(cf6[["No","Estimate"]],0),round(cf6fm[["N","Estimate"]],0)) - expect_equal(round(cf6[["No","Std. Err."]],1),round(cf6fm[["N","SE"]],1)) - expect_equivalent(round(cf6["q",],7),round(cf6fm["q",],7)) - expect_equivalent(round(ci6["No",],1),round(ci6fm["N",],1)) - expect_equivalent(round(ci6["q",],5),round(ci6fm["q",],5)) - } + expect_equal(round(cf6[["No","Estimate"]],0),round(cf6fm[["N","Estimate"]],0)) + expect_equal(round(cf6[["No","Std. Err."]],1),round(cf6fm[["N","SE"]],1)) + expect_equal(round(cf6["q",],7),round(cf6fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci6["No",],1),round(ci6fm["N",],1)) + expect_equal(round(ci6["q",],5),round(ci6fm["q",],5)) }) test_that("depletion() with 'DeLury' and Ricker.mod matches fishmethod's 'deplet' for the SMB data",{ - if (require(fishmethods)) { - expect_equal(round(cf8[["No","Estimate"]],0),round(cf8fm[["N","Estimate"]],0)) - expect_equal(round(cf8[["No","Std. 
Err."]],1),round(cf8fm[["N","SE"]],1)) - expect_equivalent(round(cf8["q",],7),round(cf8fm["q",],7)) - expect_equivalent(round(ci8["No",],1),round(ci8fm["N",],1)) - expect_equivalent(round(ci8["q",],6),round(ci8fm["q",],6)) - } + expect_equal(round(cf8[["No","Estimate"]],0),round(cf8fm[["N","Estimate"]],0)) + expect_equal(round(cf8[["No","Std. Err."]],1),round(cf8fm[["N","SE"]],1)) + expect_equal(round(cf8["q",],7),round(cf8fm["q",],7),ignore_attr=TRUE) + expect_equal(round(ci8["No",],1),round(ci8fm["N",],1)) + expect_equal(round(ci8["q",],6),round(ci8fm["q",],6)) }) test_that("depletion() with 'DeLury' with Ricker.mod matches example 6.2 (p. 154) in Ricker (1975).",{ - if (require(FSAdata)) { - expect_equal(round(cf8["No","Estimate"],0),1150) - expect_equal(round(cf8["q","Estimate"],5),0.01319) - } + expect_equal(round(cf8["No","Estimate"],0),1150) + expect_equal(round(cf8["q","Estimate"],5),0.01319) }) - diff --git a/tests/testthat/testthat_dunnTest.R b/tests/testthat/testthat_dunnTest.R index de1e6e6a..70703059 100644 --- a/tests/testthat/testthat_dunnTest.R +++ b/tests/testthat/testthat_dunnTest.R @@ -30,7 +30,7 @@ test_that("dunnTest() error and warning messages",{ ## Test Output Types ---- test_that("dunnTest() output",{ - if (require(dunn.test)) { + if (require(dunn.test,quietly=TRUE)) { ## Loop through all methods in p.adjustment.methods lbls <- c("No Adjustment","Bonferroni","Sidak","Holm","Holm-Sidak", "Hochberg","Benjamini-Hochberg","Benjamini-Yekuteili") @@ -40,7 +40,7 @@ test_that("dunnTest() output",{ expect_true(is.list(tmp)) expect_equal(names(tmp),c("method","res","dtres")) expect_equal(tmp$method,lbls[i]) - expect_is(tmp$res,"data.frame") + expect_equal(class(tmp$res),"data.frame") expect_equal(names(tmp$res),c("Comparison","Z","P.unadj","P.adj")) } for (i in seq_along(meths)) { ## For one-sided cases @@ -48,7 +48,7 @@ test_that("dunnTest() output",{ expect_true(is.list(tmp)) expect_equal(names(tmp),c("method","res","dtres")) 
expect_equal(tmp$method,lbls[i]) - expect_is(tmp$res,"data.frame") + expect_equal(class(tmp$res),"data.frame") expect_equal(names(tmp$res),c("Comparison","Z","P.unadj","P.adj")) } } @@ -69,86 +69,80 @@ test_that("Check dunnTest() output labels",{ ## Validate Results ---- test_that("dunnTest matches UnitStat results",{ tmp <- dunnTest(pH~fpond,data=ponds,method="bonferroni") - expect_equivalent(round(tmp$res$P.adj,4),unistat$P.adj) + expect_equal(round(tmp$res$P.adj,4),unistat$P.adj) }) test_that("dunnTest matches dunn.test results for ponds data",{ - if (require(dunn.test)) { - ## Loop through all methods in p.adjustment.methods - for (m in dunn.test::p.adjustment.methods) { # for one-sided results - tmp <- dunnTest(pH~fpond,data=ponds,method=m,two.sided=FALSE)$res$P.adj - junk <- utils::capture.output(tmp2 <- dunn.test(ponds$pH,ponds$fpond, - method=m)$P.adjusted) - expect_equivalent(tmp,tmp2) - } - for (m in dunn.test::p.adjustment.methods) { # for two-sided results - tmp <- dunnTest(pH~fpond,data=ponds,method=m,two.sided=TRUE)$res$P.adj - junk <- utils::capture.output( - tmp2 <- dunn.test(ponds$pH,ponds$fpond, - method=m,altp=TRUE)$altP.adjusted) - expect_equivalent(tmp,tmp2) - } - for (m in dunn.test::p.adjustment.methods) { # for one-sided results with missing data - suppressWarnings( - tmp <- dunnTest(pH~fpond,data=ponds2,method=m, - two.sided=FALSE)$res$P.adj) - junk <- utils::capture.output(tmp2 <- dunn.test(ponds2$pH,ponds2$fpond, - method=m)$P.adjusted) - expect_equivalent(tmp,tmp2) - } - for (m in dunn.test::p.adjustment.methods) { # for two-sided results with missing data - suppressWarnings( - tmp <- dunnTest(pH~fpond,data=ponds2,method=m, - two.sided=TRUE)$res$P.adj) - junk <- utils::capture.output( - tmp2 <- dunn.test(ponds2$pH,ponds2$fpond, - method=m,altp=TRUE)$altP.adjusted) - expect_equivalent(tmp,tmp2) - } - } # end require() + ## Loop through all methods in p.adjustment.methods + for (m in dunn.test::p.adjustment.methods) { # for one-sided 
results + tmp <- dunnTest(pH~fpond,data=ponds,method=m,two.sided=FALSE)$res$P.adj + junk <- utils::capture.output(tmp2 <- dunn.test(ponds$pH,ponds$fpond, + method=m)$P.adjusted) + expect_equal(tmp,tmp2) + } + for (m in dunn.test::p.adjustment.methods) { # for two-sided results + tmp <- dunnTest(pH~fpond,data=ponds,method=m,two.sided=TRUE)$res$P.adj + junk <- utils::capture.output( + tmp2 <- dunn.test(ponds$pH,ponds$fpond, + method=m,altp=TRUE)$altP.adjusted) + expect_equal(tmp,tmp2) + } + for (m in dunn.test::p.adjustment.methods) { # for one-sided results with missing data + suppressWarnings( + tmp <- dunnTest(pH~fpond,data=ponds2,method=m, + two.sided=FALSE)$res$P.adj) + junk <- utils::capture.output(tmp2 <- dunn.test(ponds2$pH,ponds2$fpond, + method=m)$P.adjusted) + expect_equal(tmp,tmp2) + } + for (m in dunn.test::p.adjustment.methods) { # for two-sided results with missing data + suppressWarnings( + tmp <- dunnTest(pH~fpond,data=ponds2,method=m, + two.sided=TRUE)$res$P.adj) + junk <- utils::capture.output( + tmp2 <- dunn.test(ponds2$pH,ponds2$fpond, + method=m,altp=TRUE)$altP.adjusted) + expect_equal(tmp,tmp2) + } }) test_that("dunnTest matches dunn.test results for homecare data",{ - if (require(dunn.test)) { - data(homecare,package="dunn.test") - ## Loop through all methods in p.adjustment.methods - for (m in dunn.test::p.adjustment.methods) { # for one-sided results - tmp <- dunnTest(occupation~eligibility,data=homecare, - method=m,two.sided=FALSE)$res$P.adj - junk <- utils::capture.output( - tmp2 <- dunn.test(homecare$occupation,homecare$eligibility, - method=m)$P.adjusted) - expect_equivalent(tmp,tmp2) - } - for (m in dunn.test::p.adjustment.methods) { # for two-sided results - tmp <- dunnTest(occupation~eligibility,data=homecare,method=m, - two.sided=TRUE)$res$P.adj - junk <- utils::capture.output( - tmp2 <- dunn.test(homecare$occupation,homecare$eligibility, - method=m,altp=TRUE)$altP.adjusted) - expect_equivalent(tmp,tmp2) - } - } # end require() + 
data(homecare,package="dunn.test") + ## Loop through all methods in p.adjustment.methods + for (m in dunn.test::p.adjustment.methods) { # for one-sided results + tmp <- dunnTest(occupation~eligibility,data=homecare, + method=m,two.sided=FALSE)$res$P.adj + junk <- utils::capture.output( + tmp2 <- dunn.test(homecare$occupation,homecare$eligibility, + method=m)$P.adjusted) + expect_equal(tmp,tmp2) + } + for (m in dunn.test::p.adjustment.methods) { # for two-sided results + tmp <- dunnTest(occupation~eligibility,data=homecare,method=m, + two.sided=TRUE)$res$P.adj + junk <- utils::capture.output( + tmp2 <- dunn.test(homecare$occupation,homecare$eligibility, + method=m,altp=TRUE)$altP.adjusted) + expect_equal(tmp,tmp2) + } }) test_that("dunnTest matches dunn.test results for airquality data",{ - if (require(dunn.test)) { - data(airquality,package="datasets") - ## Loop through all methods in p.adjustment.methods - for (m in dunn.test::p.adjustment.methods) { # for one-sided results - suppressWarnings(tmp <- dunnTest(Ozone~Month,data=airquality, - method=m,two.sided=FALSE)$res$P.adj) - junk <- utils::capture.output( - tmp2 <- dunn.test(airquality$Ozone,airquality$Month,method=m)$P.adjusted) - expect_equivalent(tmp,tmp2) - } - for (m in dunn.test::p.adjustment.methods) { # for two-sided results - suppressWarnings(tmp <- dunnTest(Ozone~Month,data=airquality, - method=m,two.sided=TRUE)$res$P.adj) - junk <- utils::capture.output( - tmp2 <- dunn.test(airquality$Ozone,airquality$Month,method=m, - altp=TRUE)$altP.adjusted) - expect_equivalent(tmp,tmp2) - } - } # end require() + data(airquality,package="datasets") + ## Loop through all methods in p.adjustment.methods + for (m in dunn.test::p.adjustment.methods) { # for one-sided results + suppressWarnings(tmp <- dunnTest(Ozone~Month,data=airquality, + method=m,two.sided=FALSE)$res$P.adj) + junk <- utils::capture.output( + tmp2 <- dunn.test(airquality$Ozone,airquality$Month,method=m)$P.adjusted) + expect_equal(tmp,tmp2) + } + for 
(m in dunn.test::p.adjustment.methods) { # for two-sided results + suppressWarnings(tmp <- dunnTest(Ozone~Month,data=airquality, + method=m,two.sided=TRUE)$res$P.adj) + junk <- utils::capture.output( + tmp2 <- dunn.test(airquality$Ozone,airquality$Month,method=m, + altp=TRUE)$altP.adjusted) + expect_equal(tmp,tmp2) + } }) diff --git a/tests/testthat/testthat_expandCounts.R b/tests/testthat/testthat_expandCounts.R index 0d0cd706..84358009 100644 --- a/tests/testthat/testthat_expandCounts.R +++ b/tests/testthat/testthat_expandCounts.R @@ -101,147 +101,147 @@ test_that("expandCounts() messages",{ ## Test Output Types and Validate Results ---- test_that("expandCounts() type and value of results",{ tmp <- expandCounts(good1,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good1)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good1$freq)) expect_equal(max(apply(matrix(tmp$newlen,ncol=1),1,FSA:::iGetDecimals)),1) exp <- tapply(good1$freq,good1$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) # same as above, just with different new.name tmp <- expandCounts(good1,~freq,~lwr.bin+upr.bin,new.name="DEREK",verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good1)[-4],"DEREK","lennote")) expect_equal(nrow(tmp),sum(good1$freq)) expect_equal(max(apply(matrix(tmp$DEREK,ncol=1),1,FSA:::iGetDecimals)),1) exp <- tapply(good1$freq,good1$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) # same as first, but different lprec tmp <- expandCounts(good1,~freq,~lwr.bin+upr.bin,lprec=1,verbose=FALSE) - 
expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good1)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good1$freq)) expect_equal(max(apply(matrix(tmp$newlen,ncol=1),1,FSA:::iGetDecimals)),0) exp <- tapply(good1$freq,good1$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) # same as first, but different lprec tmp <- expandCounts(good1,~freq,~lwr.bin+upr.bin,lprec=0.01,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good1)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good1$freq)) expect_equal(max(apply(matrix(tmp$newlen,ncol=1),1,FSA:::iGetDecimals)),2) exp <- tapply(good1$freq,good1$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good2,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good2)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good2$freq)) exp <- tapply(good2$freq,good2$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good3,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good3)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good3$freq)) exp <- tapply(good3$freq,good3$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + 
expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good4,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good4)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good4$freq)) exp <- tapply(good4$freq,good4$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good5,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good5)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good5$freq)) exp <- tapply(good5$freq,good5$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good6,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good6)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good6$freq)) exp <- tapply(good6$freq,good6$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good7,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good7)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good7$freq)) exp <- tapply(good7$freq,good7$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + 
expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good8,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good8)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good8$freq)+2) # adjusted for NAs exp <- tapply(good8$freq,good8$name,FUN=sum) obs <- xtabs(~name,data=droplevels(subset(tmp,!is.na(lwr.bin)))) - expect_equivalent(sum(obs),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(sum(obs),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good9,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good9)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good9$freq,na.rm=TRUE)+2) # adjusted for NAs exp <- tapply(good9$freq,good9$name,FUN=sum,na.rm=TRUE) obs <- xtabs(~name,data=droplevels(subset(tmp,!is.na(lwr.bin)))) - expect_equivalent(sum(obs),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(sum(obs),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) tmp <- expandCounts(good10,~freq,~lwr.bin+upr.bin,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good10)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good10$freq,na.rm=TRUE)) exp <- tapply(good10$freq,good10$name,FUN=sum,na.rm=TRUE) obs <- xtabs(~name,data=droplevels(subset(tmp,!is.na(lwr.bin)))) - expect_equivalent(sum(obs),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(sum(obs),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) # Same as first, but not lower and upper bins tmp <- expandCounts(good1,~freq,verbose=FALSE) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good1)[-4])) expect_equal(nrow(tmp),sum(good1$freq)) exp <- tapply(good1$freq,good1$name,FUN=sum) obs <- 
xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) # Same as first but with verbose expect_message( tmp <- expandCounts(good1,~freq,~lwr.bin+upr.bin,verbose=TRUE) ) - expect_is(tmp,"data.frame") + expect_equal(class(tmp),"data.frame") expect_equal(names(tmp),c(names(good1)[-4],"newlen","lennote")) expect_equal(nrow(tmp),sum(good1$freq)) expect_equal(max(apply(matrix(tmp$newlen,ncol=1),1,FSA:::iGetDecimals)),1) exp <- tapply(good1$freq,good1$name,FUN=sum) obs <- xtabs(~name,data=tmp) - expect_equivalent(nrow(tmp),sum(exp)) - expect_equivalent(as.vector(obs),as.vector(exp)) -}) \ No newline at end of file + expect_equal(nrow(tmp),sum(exp)) + expect_equal(as.vector(obs),as.vector(exp)) +}) diff --git a/tests/testthat/testthat_expandLenFreq.R b/tests/testthat/testthat_expandLenFreq.R index 69bc8508..edfa3f9d 100644 --- a/tests/testthat/testthat_expandLenFreq.R +++ b/tests/testthat/testthat_expandLenFreq.R @@ -19,7 +19,7 @@ test_that("expandLenFreq() results",{ lens1 <- c(1,2,3) # simple example using additional tmp <- expandLenFreq(lens1,1,9,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),9) expect_equal(min(tmp),1) @@ -27,14 +27,14 @@ test_that("expandLenFreq() results",{ # not so simple (but set seed to make reproducible) using additional set.seed(1) tmp <- expandLenFreq(lens1,1,10,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),10) expect_equal(as.numeric(xtabs(~tmp)),c(3,4,3)) expect_equal(min(tmp),1) # simple example using total tmp <- expandLenFreq(lens1,1,total=12,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),9) expect_equal(min(tmp),1) @@ -42,7 +42,7 @@ 
test_that("expandLenFreq() results",{ # not so simple (but set seed to make reproducible) using additional set.seed(1) tmp <- expandLenFreq(lens1,1,total=13,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),10) expect_equal(as.numeric(xtabs(~tmp)),c(3,4,3)) @@ -50,7 +50,7 @@ test_that("expandLenFreq() results",{ ## With decimals lens2 <- c(1.1,1.2,2.3,2.4,3.5,3.6) tmp <- expandLenFreq(lens2,1,9,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),9) expect_equal(min(tmp),1) @@ -58,7 +58,7 @@ test_that("expandLenFreq() results",{ # not so simple (but set seed to make reproducible) using additional set.seed(1) tmp <- expandLenFreq(lens2,1,10,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),10) expect_equal(as.numeric(xtabs(~tmp)),c(3,4,3)) @@ -66,7 +66,7 @@ test_that("expandLenFreq() results",{ ## With decimals, but unequal numbers lens3 <- c(1.1,1.2,1.3,2.4,2.5,3.6) tmp <- expandLenFreq(lens3,1,12,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),12) expect_equal(min(tmp),1) @@ -74,7 +74,7 @@ test_that("expandLenFreq() results",{ # not so simple (but set seed to make reproducible) using additional set.seed(1) tmp <- expandLenFreq(lens3,1,13,show.summary=FALSE) - expect_is(tmp,"numeric") + expect_equal(class(tmp),"numeric") expect_true(is.vector(tmp)) expect_equal(length(tmp),13) expect_equal(as.numeric(xtabs(~tmp)),c(7,4,2)) diff --git a/tests/testthat/testthat_extraTests.R b/tests/testthat/testthat_extraTests.R index a250d51f..17a94286 100644 --- a/tests/testthat/testthat_extraTests.R +++ b/tests/testthat/testthat_extraTests.R @@ -19,7 +19,7 @@ nls.0 <- nls(y~rep(c,length(df$y)),data=df,start=list(c=10)) nls.1 <- 
nls(y~a*x+c,data=df,start=list(a=1,c=1))
 nls.2 <- nls(y~b*x2+a*x+c,data=df,start=list(a=-1,b=0.3,c=10))
-if (suppressMessages(require(nlme))) {
+if (suppressMessages(require(nlme,quietly=TRUE))) {
   gls.0 <- gls(y~1,data=df,method="ML")
   gls.1 <- gls(y~x,data=df,method="ML")
   gls.2 <- gls(y~x+x2,data=df,method="ML")
@@ -89,11 +89,11 @@ test_that("extraSS() and lrt() messages",{
 test_that("extraSS() and lrt() output",{
   # extraSS() returns a matrix of class extraTest
   tmp1 <- extraSS(lm.0,com=lm.1)
-  expect_is(tmp1,"extraTest")
+  expect_equal(class(tmp1),"extraTest")
   expect_true(is.matrix(tmp1))
   expect_equal(nrow(tmp1),1)
   tmp2 <- extraSS(lm.0,lm.1,com=lm.2)
-  expect_is(tmp2,"extraTest")
+  expect_equal(class(tmp2),"extraTest")
   expect_true(is.matrix(tmp2))
   expect_equal(nrow(tmp2),2)
   # print() returns x
@@ -103,11 +103,11 @@ test_that("extraSS() and lrt() output",{
   expect_equal(tmp2,tmp)
   # lrt() returns a matrix of class extraTest
   tmp1 <- lrt(lm.0,com=lm.1)
-  expect_is(tmp1,"extraTest")
+  expect_equal(class(tmp1),"extraTest")
   expect_true(is.matrix(tmp1))
   expect_equal(nrow(tmp1),1)
   tmp2 <- lrt(lm.0,lm.1,com=lm.2)
-  expect_is(tmp2,"extraTest")
+  expect_equal(class(tmp2),"extraTest")
   expect_true(is.matrix(tmp2))
   expect_equal(nrow(tmp2),2)
   # print() returns x
@@ -123,58 +123,57 @@ test_that("extraSS() computations",{
   ## Two model lm comparisons
   tmp1 <- extraSS(lm.0,com=lm.1)
   tmp2 <- anova(lm.0,lm.1)
-  expect_equivalent(tmp1[1,"F"],tmp2[2,"F"])
-  expect_equivalent(tmp1[1,"Df"],tmp2[2,"Df"])
-  expect_equivalent(tmp1[1,"SS"],tmp2[2,"Sum of Sq"])
+  expect_equal(tmp1[1,"F"],tmp2[2,"F"])
+  expect_equal(tmp1[1,"Df"],tmp2[2,"Df"])
+  expect_equal(tmp1[1,"SS"],tmp2[2,"Sum of Sq"])
   ## Three model lm comparisons (only can compare to last)
   tmp1 <- extraSS(lm.0,lm.1,com=lm.2)
   tmp2 <- anova(lm.0,lm.1,lm.2)
-  expect_equivalent(tmp1[2,"F"],tmp2[3,"F"])
-  expect_equivalent(tmp1[2,"Df"],tmp2[3,"Df"])
-  expect_equivalent(tmp1[2,"SS"],tmp2[3,"Sum of Sq"])
+  expect_equal(tmp1[2,"F"],tmp2[3,"F"])
+  expect_equal(tmp1[2,"Df"],tmp2[3,"Df"])
+  expect_equal(tmp1[2,"SS"],tmp2[3,"Sum of Sq"])
   ## Two model nls comparisons
   tmp1 <- extraSS(nls.0,com=nls.1)
   tmp2 <- anova(nls.0,nls.1)
-  expect_equivalent(tmp1[1,"F"],tmp2[2,"F value"])
-  expect_equivalent(tmp1[1,"Df"],tmp2[2,"Df"])
-  expect_equivalent(tmp1[1,"SS"],tmp2[2,"Sum Sq"])
+  expect_equal(tmp1[1,"F"],tmp2[2,"F value"])
+  expect_equal(tmp1[1,"Df"],tmp2[2,"Df"])
+  expect_equal(tmp1[1,"SS"],tmp2[2,"Sum Sq"])
   ## Three model nls comparisons (only can compare to last)
   tmp1 <- extraSS(nls.0,nls.1,com=nls.2)
   tmp2 <- anova(nls.0,nls.1,nls.2)
-  expect_equivalent(tmp1[2,"F"],tmp2[3,"F value"])
-  expect_equivalent(tmp1[2,"Df"],tmp2[3,"Df"])
-  expect_equivalent(tmp1[2,"SS"],tmp2[3,"Sum Sq"])
+  expect_equal(tmp1[2,"F"],tmp2[3,"F value"])
+  expect_equal(tmp1[2,"Df"],tmp2[3,"Df"])
+  expect_equal(tmp1[2,"SS"],tmp2[3,"Sum Sq"])
 })
 
 test_that("lrt() computations",{
-  require(lmtest)
+  require(lmtest,quietly=TRUE)
   ## Two model lm comparisons
   tmp1 <- lrt(lm.0,com=lm.1)
   tmp2 <- lrtest(lm.0,lm.1)
-  expect_equivalent(tmp1[1,"Chisq"],tmp2[2,"Chisq"])
-  expect_equivalent(tmp1[1,"Df"],tmp2[2,"Df"])
-  expect_equivalent(tmp1[1,"logLikO"],tmp2[1,"LogLik"])
-  expect_equivalent(tmp1[1,"logLikA"],tmp2[2,"LogLik"])
+  expect_equal(tmp1[1,"Chisq"],tmp2[2,"Chisq"])
+  expect_equal(tmp1[1,"Df"],tmp2[2,"Df"])
+  expect_equal(tmp1[1,"logLikO"],tmp2[1,"LogLik"])
+  expect_equal(tmp1[1,"logLikA"],tmp2[2,"LogLik"])
   ## Three model lm comparisons (only can compare to last)
   tmp1 <- lrt(lm.0,lm.1,com=lm.2)
   tmp2 <- lrtest(lm.0,lm.1,lm.2)
-  expect_equivalent(tmp1[2,"Chisq"],tmp2[3,"Chisq"])
-  expect_equivalent(tmp1[2,"Df"],tmp2[3,"Df"])
-  expect_equivalent(tmp1[2,"logLikO"],tmp2[2,"LogLik"])
-  expect_equivalent(tmp1[2,"logLikA"],tmp2[3,"LogLik"])
+  expect_equal(tmp1[2,"Chisq"],tmp2[3,"Chisq"])
+  expect_equal(tmp1[2,"Df"],tmp2[3,"Df"])
+  expect_equal(tmp1[2,"logLikO"],tmp2[2,"LogLik"])
+  expect_equal(tmp1[2,"logLikA"],tmp2[3,"LogLik"])
   ## Two model nls comparisons
   tmp1 <- lrt(nls.0,com=nls.1)
   tmp2 <- lrtest(nls.0,nls.1)
-  expect_equivalent(tmp1[1,"Chisq"],tmp2[2,"Chisq"])
-  expect_equivalent(tmp1[1,"Df"],tmp2[2,"Df"])
-  expect_equivalent(tmp1[1,"logLikO"],tmp2[1,"LogLik"])
-  expect_equivalent(tmp1[1,"logLikA"],tmp2[2,"LogLik"])
+  expect_equal(tmp1[1,"Chisq"],tmp2[2,"Chisq"])
+  expect_equal(tmp1[1,"Df"],tmp2[2,"Df"])
+  expect_equal(tmp1[1,"logLikO"],tmp2[1,"LogLik"])
+  expect_equal(tmp1[1,"logLikA"],tmp2[2,"LogLik"])
   ## Three model nls comparisons (only can compare to last)
   tmp1 <- lrt(nls.0,nls.1,com=nls.2)
   tmp2 <- lrtest(nls.0,nls.1,nls.2)
-  expect_equivalent(tmp1[2,"Chisq"],tmp2[3,"Chisq"])
-  expect_equivalent(tmp1[2,"Df"],tmp2[3,"Df"])
-  expect_equivalent(tmp1[2,"logLikO"],tmp2[2,"LogLik"])
-  expect_equivalent(tmp1[2,"logLikA"],tmp2[3,"LogLik"])
+  expect_equal(tmp1[2,"Chisq"],tmp2[3,"Chisq"])
+  expect_equal(tmp1[2,"Df"],tmp2[3,"Df"])
+  expect_equal(tmp1[2,"logLikO"],tmp2[2,"LogLik"])
+  expect_equal(tmp1[2,"logLikA"],tmp2[3,"LogLik"])
 })
-
diff --git a/tests/testthat/testthat_growthFuns.R b/tests/testthat/testthat_growthFuns.R
index b9ff8d1a..7c4fbaad 100644
--- a/tests/testthat/testthat_growthFuns.R
+++ b/tests/testthat/testthat_growthFuns.R
@@ -62,32 +62,32 @@ test_that("growthFunShow() & Schnute() messages",{
 ## Test Output Types ----
 test_that("growthFunShow() results",{
   for (i in vbs) {
-    expect_is(growthFunShow("vonBertalanffy",param=i),"expression")
-    expect_is(growthFunShow("vonBertalanffy",param=i,plot=TRUE),"expression")
+    expect_equal(class(growthFunShow("vonBertalanffy",param=i)),"expression")
+    expect_equal(class(growthFunShow("vonBertalanffy",param=i,plot=TRUE)),"expression")
   }
   for (i in gomps) {
-    expect_is(growthFunShow("Gompertz",param=i),"expression")
-    expect_is(growthFunShow("Gompertz",param=i,plot=TRUE),"expression")
+    expect_equal(class(growthFunShow("Gompertz",param=i)),"expression")
+    expect_equal(class(growthFunShow("Gompertz",param=i,plot=TRUE)),"expression")
   }
   for (i in logistics) {
-    expect_is(growthFunShow("Logistic",param=i),"expression")
-    expect_is(growthFunShow("Logistic",param=i,plot=TRUE),"expression")
+    expect_equal(class(growthFunShow("Logistic",param=i)),"expression")
+    expect_equal(class(growthFunShow("Logistic",param=i,plot=TRUE)),"expression")
   }
   for (i in 1:6) {
-    expect_is(growthFunShow("Richards",param=i),"expression")
-    expect_is(growthFunShow("Richards",param=i,plot=TRUE),"expression")
+    expect_equal(class(growthFunShow("Richards",param=i)),"expression")
+    expect_equal(class(growthFunShow("Richards",param=i,plot=TRUE)),"expression")
   }
   for (i in 1:4) {
-    expect_is(growthFunShow("Schnute",param=i),"expression")
-    expect_is(growthFunShow("Schnute",param=i,plot=TRUE),"expression")
+    expect_equal(class(growthFunShow("Schnute",param=i)),"expression")
+    expect_equal(class(growthFunShow("Schnute",param=i,plot=TRUE)),"expression")
   }
 })
 
 test_that("vbFuns() output",{
   ## Do all choices return a function
   for (i in vbs) {
-    expect_is(vbFuns(i),"function")
-    expect_is(vbFuns(i,simple=TRUE),"function")
+    expect_equal(class(vbFuns(i)),"function")
+    expect_equal(class(vbFuns(i,simple=TRUE)),"function")
   }
   ## Do all choices return a message with the name of the function in it
   for (i in vbs) expect_message(vbFuns(i,msg=TRUE),i)
@@ -110,8 +110,8 @@ test_that("vbFuns() arguments are in same order as vbStarts() list",{
 test_that("GompertzFuns() output",{
   ## Do all choices return a function
   for (i in gomps) {
-    expect_is(GompertzFuns(i),"function")
-    expect_is(GompertzFuns(i,simple=TRUE),"function")
+    expect_equal(class(GompertzFuns(i)),"function")
+    expect_equal(class(GompertzFuns(i,simple=TRUE)),"function")
   }
   ## Do all choices return a message with the name of the function in it
   for (i in gomps) expect_message(GompertzFuns(i,msg=TRUE),i)
@@ -120,8 +120,8 @@ test_that("GompertzFuns() output",{
 test_that("logisticFuns() output",{
   ## Do all choices return a function
   for (i in logistics) {
-    expect_is(logisticFuns(i),"function")
-    expect_is(logisticFuns(i,simple=TRUE),"function")
+    expect_equal(class(logisticFuns(i)),"function")
+    expect_equal(class(logisticFuns(i,simple=TRUE)),"function")
   }
   ## Do all choices return a message with the name of the function in it
   for (i in logistics) expect_message(logisticFuns(i,msg=TRUE),i)
@@ -130,18 +130,18 @@ test_that("logisticFuns() output",{
 test_that("RichardsFuns() output",{
   ## Do all choices return a function
   for (i in 1:6) {
-    expect_is(RichardsFuns(i),"function")
-    expect_is(RichardsFuns(i,simple=TRUE),"function")
+    expect_equal(class(RichardsFuns(i)),"function")
+    expect_equal(class(RichardsFuns(i,simple=TRUE)),"function")
  }
   ## Do all choices return a message with the name of the function in it
   for (i in 1:6) expect_message(RichardsFuns(i,msg=TRUE),paste0("Richards",i))
 })
 
 test_that("Schnute() output",{
-  expect_is(Schnute(3,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1),"numeric")
-  expect_is(Schnute(3,case=2,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1),"numeric")
-  expect_is(Schnute(3,case=3,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1),"numeric")
-  expect_is(Schnute(3,case=4,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1),"numeric")
+  expect_equal(class(Schnute(3,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1)),"numeric")
+  expect_equal(class(Schnute(3,case=2,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1)),"numeric")
+  expect_equal(class(Schnute(3,case=3,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1)),"numeric")
+  expect_equal(class(Schnute(3,case=4,t1=1,t3=15,L1=30,L3=400,a=0.3,b=1)),"numeric")
 })
diff --git a/tests/testthat/testthat_ksTest.R b/tests/testthat/testthat_ksTest.R
index a0358fc7..7824eba3 100644
--- a/tests/testthat/testthat_ksTest.R
+++ b/tests/testthat/testthat_ksTest.R
@@ -36,20 +36,20 @@ test_that("Does ksTest() match ks.test()",{
   ## one-sample (from ks.test) still works
   tmp1 <- ksTest(x+2,"pgamma",3,2)
   tmp1o <- ks.test(x+2,"pgamma",3,2)
-  expect_is(tmp1,"htest")
+  expect_equal(class(tmp1),c("ks.test","htest"))
   expect_equal(mode(tmp1),"list")
   expect_equal(names(tmp1),names(tmp1o))
   ## first two-sample example in ?ks.test
   tmp1 <- ksTest(x,y)
   tmp1o <- ks.test(x,y)
-  expect_is(tmp1,"htest")
+  expect_equal(class(tmp1),c("ks.test","htest"))
   expect_equal(mode(tmp1),"list")
   expect_equal(names(tmp1),names(tmp1o))
   # using formula notation
   tmp1f <- ksTest(dat~grp,data=df)
-  expect_is(tmp1f,"htest")
+  expect_equal(class(tmp1f),c("ks.test","htest"))
   expect_equal(mode(tmp1f),"list")
   expect_equal(names(tmp1f),names(tmp1o))
   expect_identical(tmp1,tmp1f)
diff --git a/tests/testthat/testthat_lencat.R b/tests/testthat/testthat_lencat.R
index 889a6efe..19d4ab22 100644
--- a/tests/testthat/testthat_lencat.R
+++ b/tests/testthat/testthat_lencat.R
@@ -47,74 +47,74 @@ test_that("lencat() messages",{
 test_that("lencat() results",{
   ## Simple examples (same width as that created)
   tmp <- lencat(~len1,data=df2,w=1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   tmp <- lencat(~len0.1,data=df2,w=0.1)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   tmp <- lencat(~len0.01,data=df2,w=0.01)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   tmp <- lencat(~len10,data=df2,w=10)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   ## Does it handle 1-column data.frame
   tmp <- lencat(data.frame(df2$len1),w=1)
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   ## Different widths (don't control startcat)
   tmp <- lencat(~len1,data=df2,w=2)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   tmp <- lencat(~len0.1,data=df2,w=0.2)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   tmp <- lencat(~len10,data=df2,w=20)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   ## Different widths (control startcat)
   tmp <- lencat(~len1,data=df2,w=2,startcat=1)
-  expect_is(tmp$LCat,"numeric")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp$LCat),"numeric")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
   tmp <- lencat(~len0.1,data=df2,w=0.2,startcat=0.1)
-  expect_is(tmp$LCat,"numeric")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp$LCat),"numeric")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
   tmp <- lencat(~len10,data=df2,w=20,startcat=10)
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   ## Using breaks
   tmp <- lencat(~len1,data=df2,breaks=c(1,2,4))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   tmp <- lencat(~len0.1,data=df2,breaks=c(1,2,4)/10)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   ## using named breaks
   # but don't use names
   tmp <- lencat(~len1,data=df2,breaks=c(one=1,two=2,four=4))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"numeric")
+  expect_equal(class(tmp$LCat),"numeric")
   # but do use names
   tmp <- lencat(~len1,data=df2,breaks=c(one=1,two=2,four=4),use.names=TRUE)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"factor")
+  expect_equal(class(tmp$LCat),"factor")
   # use names but don't return as a factor
   tmp <- lencat(~len1,data=df2,breaks=c(one=1,two=2,four=4),
                 use.names=TRUE,as.fact=FALSE)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(colnames(tmp),c(colnames(df2),"LCat"))
-  expect_is(tmp$LCat,"character")
+  expect_equal(class(tmp$LCat),"character")
 })
 
 ## Validate Results ----
@@ -130,7 +130,7 @@ test_that("lencat() results",{
   expect_equal(as.numeric(xtabs(~LCat,data=tmp)),freq)
   ## Does it handle 1-column data.frame
   tmp <- lencat(data.frame(df2$len1),w=1)
-  expect_is(tmp,"numeric")
+  expect_equal(class(tmp),"numeric")
   expect_equal(as.numeric(xtabs(~tmp)),freq)
   ## Different widths (don't control startcat)
   freqtmp <- c(0,freq[c(2,4)])+freq[c(1,3,5)]
diff --git a/tests/testthat/testthat_metaM.R b/tests/testthat/testthat_metaM.R
index 5eb09d6f..5154dda0 100644
--- a/tests/testthat/testthat_metaM.R
+++ b/tests/testthat/testthat_metaM.R
@@ -29,8 +29,9 @@ test_that("metaM() messages",{
                "must be given to")
   ## Bad parameter values
-  expect_warning(metaM("PaulyL",Linf=200,K=0.3,T=-3),
-                 "seems unreasonable")
+  metaM("PaulyL",Linf=200,K=0.3,T=-3) %>%
+    expect_warning("'Temp' value seems unreasonable") %>%
+    suppressWarnings()
   expect_warning(metaM("PaulyL",Linf=200,K=0.3,T=33),
                  "seems unreasonable")
   expect_error(metaM("PaulyL",Linf=200,K=-0.3,T=13),
@@ -58,15 +59,17 @@ test_that("metaM() output",{
   ## Individual methods with justM
   for (i in meths) {
-    expect_is(metaM(i,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23,
-                    b=3.22,t50=0.83,T=17,L=3),"numeric")
+    expect_equal(class(metaM(i,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23,
+                             b=3.22,t50=0.83,T=17,L=3)),"numeric")
   }
 
   ## Individual methods without justM
   for (i in meths) {
-    tmp <- metaM(i,justM=FALSE,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23,
+    suppressMessages(
+      tmp <- metaM(i,justM=FALSE,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23,
                  b=3.22,t50=0.83,T=17,L=3)
-    expect_is(tmp,"metaM")
+    )
+    expect_equal(class(tmp),"metaM")
     expect_equal(mode(tmp),"list")
     expect_equal(tmp[["method"]],i)
     expect_message(print(tmp))
@@ -75,7 +78,7 @@ test_that("metaM() output",{
   ## Multiple selected methods
   tmp <- metaM(meths[1:2],tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23,
                b=3.22,t50=0.83,T=17,L=3)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(nrow(tmp),2)
   expect_equal(names(tmp),c("method","M"))
   expect_equal(tmp$method,meths[1:2])
 
   tmp <- metaM(meths,tmax=3,Linf=12.93,Winf=20.79,
               K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(nrow(tmp),length(meths))
   expect_equal(names(tmp),c("method","M"))
   expect_equal(tmp$method,meths)
@@ -94,28 +97,28 @@ test_that("metaM() output",{
   tmp <- metaM(Mmethods("tmax"),tmax=3,Linf=12.93,Winf=20.79,
               K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(nrow(tmp),length(Mmethods("tmax")))
   expect_equal(names(tmp),c("method","M"))
   expect_equal(tmp$method,Mmethods("tmax"))
   expect_equal(ncol(tmp),2)
   tmp <- metaM(Mmethods("K"),tmax=3,Linf=12.93,Winf=20.79,
               K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(nrow(tmp),length(Mmethods("K")))
   expect_equal(names(tmp),c("method","M"))
   expect_equal(tmp$method,Mmethods("K"))
   expect_equal(ncol(tmp),2)
   tmp <- metaM(Mmethods("Pauly"),tmax=3,Linf=12.93,Winf=20.79,
               K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(nrow(tmp),length(Mmethods("Pauly")))
   expect_equal(names(tmp),c("method","M"))
   expect_equal(tmp$method,Mmethods("Pauly"))
   expect_equal(ncol(tmp),2)
   tmp <- metaM(Mmethods("Hoenig"),tmax=3,Linf=12.93,Winf=20.79,
               K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(nrow(tmp),length(Mmethods("Hoenig")))
   expect_equal(names(tmp),c("method","M"))
   expect_equal(tmp$method,Mmethods("Hoenig"))
@@ -134,7 +137,7 @@ test_that("metaM() matches results from Kenchington (2014)",{
                b=3.22,t50=0.83,T=17,L=3)
   tmp <- data.frame(tmp,expM=c(0.69,0.66,1.43,1.42,1.69,1.83,0.35,2.71,
                                2.30,2.06,1.17,1.58,1.22))
-  expect_equivalent(round(tmp$M,2)[-7],tmp$expM[-7])
+  expect_equal(round(tmp$M,2)[-7],tmp$expM[-7])
 
   ## Norwegian Fjord Lanternfish
   ## matches except for the HoenigOF and HoenigO2F, which Kenchington acknowledged as erroneous
@@ -142,7 +145,7 @@
                b=3.26,t50=2,T=8,L=2)
   tmp <- data.frame(tmp,expM=c(0.51,0.46,0.55,0.61,0.58,0.64,0.31,
                                2.44,0.71,1.73,0.77,0.77,0.93))
-  expect_equivalent(round(tmp$M,2)[-c(4,6)],tmp$expM[-c(4,6)])
+  expect_equal(round(tmp$M,2)[-c(4,6)],tmp$expM[-c(4,6)])
 
   ## Rio Formosa Seahorse
   ## ALL MATCHES (note use of ZhangMegreyP)
               ,b=3.276,t50=0.49,T=19,L=10)
   tmp <- data.frame(tmp,expM=c(1.16,1.30,0.79,0.77,0.88,0.87,0.86,
                                1.78,0.75,1.59,0.33,2.39,1.53))
-  expect_equivalent(round(tmp$M,2),tmp$expM)
+  expect_equal(round(tmp$M,2),tmp$expM)
 })
 
 test_that("metaM() matches M.empirical() from fishmethods for Rio Formosa Seahorse",{
   ## ALL MATCHES
-  if (require(fishmethods)) {
+  if (require(fishmethods,quietly=TRUE)) {
     meths <- c("PaulyL","PaulyW","HoenigO","HoenigOF","AlversonCarney","Gislason")
     tmp <- metaM(meths,tmax=5.5,Linf=19.76,Winf=17.3,K=0.571,t0=-0.91,
                 b=3.276,t50=0.49,T=19,L=10)
     tmp2 <- M.empirical(Linf=19.76,Winf=17.3,Kl=0.571,Kw=0.571,TC=19,
                        Bl=10,tmax=5.5,method=c(1:4,9))
     tmp <- data.frame(tmp,tmp2)
-    expect_equivalent(round(tmp$M,3),tmp$M.1)
+    expect_equal(round(tmp$M,3),tmp$M.1)
   }
 })
-
diff --git a/tests/testthat/testthat_mrClosed.R b/tests/testthat/testthat_mrClosed.R
index 05f208ac..22b4ac00 100644
--- a/tests/testthat/testthat_mrClosed.R
+++ b/tests/testthat/testthat_mrClosed.R
@@ -117,69 +117,69 @@ test_that("mrClosed Multiple Census errors and warnings",{
 test_that("mrClosed() Single Census output",{
   ch1 <- capHistSum(BluegillJL)
   mr1 <- mrClosed(ch1)
-  expect_is(mr1,"mrClosed1")
+  expect_equal(class(mr1),"mrClosed1")
   expect_equal(mode(mr1),"list")
   expect_equal(mr1$method,"Petersen")
   mr2 <- mrClosed(ch1,method="Chapman")
-  expect_is(mr2,"mrClosed1")
+  expect_equal(class(mr2),"mrClosed1")
   expect_equal(mode(mr2),"list")
   expect_equal(mr2$method,"Chapman")
   mr3 <- mrClosed(ch1,method="Ricker")
-  expect_is(mr3,"mrClosed1")
+  expect_equal(class(mr3),"mrClosed1")
   expect_equal(mode(mr3),"list")
   expect_equal(mr3$method,"Ricker")
   mr4 <- mrClosed(ch1,method="Bailey")
-  expect_is(mr4,"mrClosed1")
+  expect_equal(class(mr4),"mrClosed1")
   expect_equal(mode(mr4),"list")
   expect_equal(mr4$method,"Bailey")
 
   tmp <- summary(mr1)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   tmp <- summary(mr1,incl.SE=TRUE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("N","SE"))
   expect_message(summary(mr1,verbose=TRUE),"Petersen")
   tmp <- summary(mr2)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   tmp <- summary(mr2,incl.SE=TRUE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("N","SE"))
   expect_message(summary(mr1,verbose=TRUE),"Petersen")
   tmp <- summary(mr3)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   tmp <- summary(mr3,incl.SE=TRUE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("N","SE"))
   expect_message(summary(mr1,verbose=TRUE),"Petersen")
   tmp <- summary(mr4)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   tmp <- summary(mr4,incl.SE=TRUE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
@@ -187,38 +187,38 @@ test_that("mrClosed() Single Census output",{
   expect_message(summary(mr1,verbose=TRUE),"Petersen")
 
   expect_message(tmp <- confint(mr1,verbose=TRUE),"Poisson")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr1,verbose=TRUE,type="binomial"),"binomial")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr1,verbose=TRUE,type="hypergeometric"),
                  "hypergeometric")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr2,verbose=TRUE),"Poisson")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr3,verbose=TRUE),"Poisson")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr4,verbose=TRUE),"Poisson")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
@@ -232,56 +232,68 @@ test_that("mrClosed() Single Census with subgroups output",{
   lbls <- c("YOY","Juvenile","Stock","Quality","Preferred","Memorable")
   mr1 <- mrClosed(marked,captured,recaps)
   mr2 <- mrClosed(marked,captured,recaps,labels=lbls)
-  expect_is(mr1,"mrClosed1")
+  expect_equal(class(mr1),"mrClosed1")
   expect_equal(mode(mr1),"list")
-  expect_is(mr2,"mrClosed1")
+  expect_equal(class(mr2),"mrClosed1")
   expect_equal(mode(mr2),"list")
-  expect_message(tmp <- summary(mr1,verbose=TRUE),"Petersen")
-  expect_is(tmp,"matrix")
+  summary(mr1,verbose=TRUE) %>%
+    expect_message("Used the 'naive' Petersen method") %>%
+    expect_message("A: M=93, n=103, and m=20") %>%
+    suppressMessages()
+  suppressMessages(tmp <- summary(mr1,verbose=TRUE))
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked)+1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   expect_equal(rownames(tmp),c(LETTERS[seq_along(marked)],"All"))
   tmp <- summary(mr1,incl.SE=TRUE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked)+1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("N","SE"))
   expect_equal(rownames(tmp),c(LETTERS[seq_along(marked)],"All"))
   tmp <- summary(mr1,incl.SE=TRUE,incl.all=FALSE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked))
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("N","SE"))
   expect_equal(rownames(tmp),LETTERS[seq_along(marked)])
   tmp <- summary(mr2,incl.SE=TRUE,incl.all=FALSE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked))
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("N","SE"))
   expect_equal(rownames(tmp),lbls)
-  expect_message(tmp <- confint(mr1,verbose=TRUE,type="binomial"),"binomial")
-  expect_is(tmp,"matrix")
+  confint(mr1,verbose=TRUE,type="binomial") %>%
+    expect_message("A - The binomial") %>%
+    expect_message("B - The binomial") %>%
+    expect_message("C - The binomial") %>%
+    expect_message("D - The binomial") %>%
+    expect_message("E - The binomial") %>%
+    expect_message("F - The binomial") %>%
+    expect_message("All - The normal distribution was used.")
+  suppressMessages(tmp <- confint(mr1,verbose=TRUE,type="binomial"))
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked)+1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_equal(rownames(tmp),c(LETTERS[seq_along(marked)],"All"))
   tmp <- confint(mr1,incl.all=FALSE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked))
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_equal(rownames(tmp),LETTERS[seq_along(marked)])
   tmp <- confint(mr2,incl.all=FALSE)
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),length(marked))
   expect_equal(ncol(tmp),2)
@@ -293,36 +305,36 @@ test_that("mrClosed() Schnabel output",{
   mr1 <- with(PikeNY,mrClosed(n=n,m=m,R=R,method="Schnabel"))
   mr2 <- with(PikeNY,mrClosed(n=n,m=m,R=R,method="Schnabel",chapman.mod=FALSE))
-  expect_is(mr1,"mrClosed2")
+  expect_equal(class(mr1),"mrClosed2")
   expect_equal(mode(mr1),"list")
   expect_equal(mr1$method,"Schnabel")
   expect_true(mr1$chapman.mod)
-  expect_is(mr2,"mrClosed2")
+  expect_equal(class(mr2),"mrClosed2")
   expect_equal(mode(mr2),"list")
   expect_equal(mr2$method,"Schnabel")
   expect_false(mr2$chapman.mod)
 
   expect_message(tmp <- summary(mr1,verbose=TRUE),"Schnabel")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   expect_message(tmp <- confint(mr1,verbose=TRUE),"normal")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr1,verbose=TRUE,type="Poisson"),"Poisson")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr1,verbose=TRUE,type="normal"),"normal")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
@@ -335,30 +347,30 @@ test_that("mrClosed() Schnabel with capHistSum() output",{
   mr1 <- mrClosed(ch,method="Schnabel")
   mr2 <- mrClosed(ch,method="Schnabel",chapman.mod=FALSE)
-  expect_is(mr1,"mrClosed2")
+  expect_equal(class(mr1),"mrClosed2")
   expect_equal(mode(mr1),"list")
   expect_equal(mr1$method,"Schnabel")
   expect_true(mr1$chapman.mod)
-  expect_is(mr2,"mrClosed2")
+  expect_equal(class(mr2),"mrClosed2")
   expect_equal(mode(mr2),"list")
   expect_equal(mr2$method,"Schnabel")
   expect_false(mr2$chapman.mod)
 
   expect_message(tmp <- summary(mr1,verbose=TRUE),"Schnabel")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   expect_message(tmp <- confint(mr1,verbose=TRUE,type="Poisson"),"Poisson")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
   expect_equal(colnames(tmp),c("95% LCI","95% UCI"))
   expect_message(tmp <- confint(mr1,verbose=TRUE,type="normal"),"normal")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
@@ -368,18 +380,18 @@ test_that("mrClosed() Schnabel with capHistSum() output",{
 test_that("mrClosed() Schumacher-Eschmeyer output",{
   mr1 <- with(PikeNY,mrClosed(n=n,m=m,R=R,method="Schumacher"))
-  expect_is(mr1,"mrClosed2")
+  expect_equal(class(mr1),"mrClosed2")
   expect_equal(mode(mr1),"list")
 
   expect_message(tmp <- summary(mr1,verbose=TRUE),"Schumacher")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   expect_message(tmp <- confint(mr1,verbose=TRUE),"normal")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
@@ -390,18 +402,18 @@ test_that("mrClosed() Schumacher-Eschmeyer capHistSum() output",{
   ch <- capHistSum(PikeNYPartial1,cols2ignore="id")
   mr1 <- mrClosed(ch,method="Schumacher")
-  expect_is(mr1,"mrClosed2")
+  expect_equal(class(mr1),"mrClosed2")
   expect_equal(mode(mr1),"list")
 
   expect_message(tmp <- summary(mr1,verbose=TRUE),"Schumacher")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),1)
   expect_equal(colnames(tmp),"N")
   expect_message(tmp <- confint(mr1,verbose=TRUE),"normal")
-  expect_is(tmp,"matrix")
+  expect_equal(class(tmp),c("matrix","array"))
   expect_equal(mode(tmp),"numeric")
   expect_equal(nrow(tmp),1)
   expect_equal(ncol(tmp),2)
@@ -464,7 +476,7 @@ test_that("mrClosed match the Chapman results from Table 3.7 and 3.8 in Seber (2
 })
 
 test_that("mrClosed match the Chapman results from mrN.single() from fishmethods",{
-  if (require(fishmethods)) {
+  if (require(fishmethods,quietly=TRUE)) {
     tmp1 <- mrN.single(M=948,C=421,R=167)
 
     tmp <- mrClosed(M=948,n=421,m=167,method="Chapman")
@@ -482,24 +494,22 @@ test_that("mrClosed match the Chapman results from mrN.single() from fishmethods
 
 test_that("mrClosed match the Bailey results from mrN.single() from fishmethods",{
-  if (require(fishmethods)) {
-    tmp1 <- mrN.single(M=948,C=421,R=167)
-
-    tmp <- mrClosed(M=948,n=421,m=167,method="Bailey")
-    stmp <- summary(tmp,incl.SE=TRUE)
-    expect_equal(stmp[[1,"N"]], round(tmp1$N[2],0))
-    expect_equal(stmp[[1,"SE"]], round(tmp1$SE[2],1))
-    ctmp <- confint(tmp,type="binomial",bin.type="wilson")
-    ## CI does not match (<0.1%) ... fishmethods uses qbinom, FSA uses binCI()
-    #expect_equal(ctmp[[1,"95% LCI"]], round(tmp1$LCI[2],0))
-    #expect_equal(ctmp[[1,"95% UCI"]], round(tmp1$UCI[2],0))
-  }
+  tmp1 <- mrN.single(M=948,C=421,R=167)
+
+  tmp <- mrClosed(M=948,n=421,m=167,method="Bailey")
+  stmp <- summary(tmp,incl.SE=TRUE)
+  expect_equal(stmp[[1,"N"]], round(tmp1$N[2],0))
+  expect_equal(stmp[[1,"SE"]], round(tmp1$SE[2],1))
+  ctmp <- confint(tmp,type="binomial",bin.type="wilson")
+  ## CI does not match (<0.1%) ... fishmethods uses qbinom, FSA uses binCI()
+  #expect_equal(ctmp[[1,"95% LCI"]], round(tmp1$LCI[2],0))
+  #expect_equal(ctmp[[1,"95% UCI"]], round(tmp1$UCI[2],0))
 })
 
 test_that("mrClosed match the Schnabel Results from p. 32 Krebs (1989)",{
-  if (require(FSAdata)) {
+  if (require(FSAdata,quietly=TRUE)) {
     data(SunfishIN,package="FSAdata")
 
     tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
@@ -525,62 +535,56 @@ test_that("mrClosed match the Schnabel Results from p. 32 Krebs (1989)",{
 
 test_that("mrClosed match the Schnabel results from p. 99 Ricker (1975)",{
-  if (require(FSAdata)) {
-    data(SunfishIN,package="FSAdata")
-
-    tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
-                                   method="Schnabel",chapman.mod=FALSE))
-    stmp <- summary(tmp)
-    expect_equal(stmp[[1,"N"]], 448)
-    ## See if intermediate calculations match Krebs
-    expect_equal(tmp$sum.m, 24)      # sum R in Ricker
-    expect_equal(tmp$sum.nM, 10740)  # sum CM in Ricker
-    expect_equal(tmp$sum.nM2, 970296)  # sum CM^2 in Ricker
-    expect_equal(tmp$sum.mM, 2294)   # sum RM in Ricker
-    expect_equal(round(tmp$sum.m2dn,3), 7.745)  # sum R^2/C in Ricker
-    ctmp <- confint(tmp,type="normal")
-    ## The CIs do not equal ... ???
-    #expect_equal(ctmp[[1,"95% LCI"]], 320)
-    #expect_equal(ctmp[[1,"95% UCI"]], 746)
-
-    tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
-                                   method="Schnabel",chapman.mod=TRUE))
-    stmp <- summary(tmp)
-    expect_equal(stmp[[1,"N"]], 430)
-    ctmp <- confint(tmp,type="Poisson")
-    ## The CIs do not equal ... ???
-    #expect_equal(ctmp[[1,"95% LCI"]], 302)
-    #expect_equal(ctmp[[1,"95% UCI"]], 697)
-  }
+  data(SunfishIN,package="FSAdata")
+
+  tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
+                                 method="Schnabel",chapman.mod=FALSE))
+  stmp <- summary(tmp)
+  expect_equal(stmp[[1,"N"]], 448)
+  ## See if intermediate calculations match Ricker
+  expect_equal(tmp$sum.m, 24)      # sum R in Ricker
+  expect_equal(tmp$sum.nM, 10740)  # sum CM in Ricker
+  expect_equal(tmp$sum.nM2, 970296)  # sum CM^2 in Ricker
+  expect_equal(tmp$sum.mM, 2294)   # sum RM in Ricker
+  expect_equal(round(tmp$sum.m2dn,3), 7.745)  # sum R^2/C in Ricker
+  ctmp <- confint(tmp,type="normal")
+  ## The CIs do not equal ... ???
+  #expect_equal(ctmp[[1,"95% LCI"]], 320)
+  #expect_equal(ctmp[[1,"95% UCI"]], 746)
+
+  tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
+                                 method="Schnabel",chapman.mod=TRUE))
+  stmp <- summary(tmp)
+  expect_equal(stmp[[1,"N"]], 430)
+  ctmp <- confint(tmp,type="Poisson")
+  ## The CIs do not equal ... ???
+  #expect_equal(ctmp[[1,"95% LCI"]], 302)
+  #expect_equal(ctmp[[1,"95% UCI"]], 697)
 })
 
 test_that("mrClosed match the Schumacher-Eschmeyer results from p. 33 Krebs (1989)",{
-  if (require(FSAdata)) {
-    data(SunfishIN,package="FSAdata")
-
-    tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
-                                   method="Schumacher"))
-    stmp <- summary(tmp)
-    expect_equal(stmp[[1,"N"]], 423)
-    ctmp <- confint(tmp,type="normal")
-    expect_equal(ctmp[[1,"95% LCI"]], 300)
-    expect_equal(ctmp[[1,"95% UCI"]], 719)
-  }
+  data(SunfishIN,package="FSAdata")
+
+  tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
+                                 method="Schumacher"))
+  stmp <- summary(tmp)
+  expect_equal(stmp[[1,"N"]], 423)
+  ctmp <- confint(tmp,type="normal")
+  expect_equal(ctmp[[1,"95% LCI"]], 300)
+  expect_equal(ctmp[[1,"95% UCI"]], 719)
 })
 
 test_that("mrClosed match the Schumacher-Eschmeyer results from p. 99 Ricker (1975)",{
-  if (require(FSAdata)) {
-    data(SunfishIN,package="FSAdata")
-
-    tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
-                                   method="Schumacher"))
-    stmp <- summary(tmp)
-    expect_equal(stmp[[1,"N"]], 423)
-    ctmp <- confint(tmp,type="normal")
-    ## The CIs do not equal ... ???
-    #expect_equal(ctmp[[1,"95% LCI"]], 304)
-    #expect_equal(ctmp[[1,"95% UCI"]], 696)
-  }
+  data(SunfishIN,package="FSAdata")
+
+  tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks,
+                                 method="Schumacher"))
+  stmp <- summary(tmp)
+  expect_equal(stmp[[1,"N"]], 423)
+  ctmp <- confint(tmp,type="normal")
+  ## The CIs do not equal ... ???
+  #expect_equal(ctmp[[1,"95% LCI"]], 304)
+  #expect_equal(ctmp[[1,"95% UCI"]], 696)
 })
diff --git a/tests/testthat/testthat_mrOpen.R b/tests/testthat/testthat_mrOpen.R
index 2f454670..6aafe5a3 100644
--- a/tests/testthat/testthat_mrOpen.R
+++ b/tests/testthat/testthat_mrOpen.R
@@ -214,8 +214,9 @@ test_that("mrOpen() messages",{
                "must be numeric")
   expect_warning(confint(cutt,conf.level=0.95),
                  "It cannot be changed here")
-  expect_warning(confint(cutt2,conf.level=0.95),
-                 "It cannot be changed here")
+  confint(cutt2,conf.level=0.95) %>%
+    expect_warning("It cannot be changed here") %>%
+    suppressMessages()
   expect_message(suppressWarnings(confint(cutt2,conf.level=0.95)),
                  "Manly did not provide a method for constructing")
@@ -230,58 +231,58 @@ test_that("mrOpen() messages",{
 ## Test Output Types ----
 test_that("mrOpen() returns",{
   # non-Manly type
-  expect_is(cutt,"mrOpen")
+  expect_equal(class(cutt),"mrOpen")
   expect_equal(mode(cutt),"list")
   tmp <- summary(cutt)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),8)
   expect_equal(names(tmp),c("M","M.se","N","N.se","phi","phi.se","B","B.se"))
   tmp <- summary(cutt,parm="N")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),2)
   expect_equal(names(tmp),c("N","N.se"))
   tmp <- summary(cutt,parm=c("N","phi"))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),4)
   expect_equal(names(tmp),c("N","N.se","phi","phi.se"))
   tmp <- confint(cutt)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),6)
   expect_equal(names(tmp),c("N.lci","N.uci","phi.lci","phi.uci","B.lci","B.uci"))
   tmp <- confint(cutt,parm="N")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),2)
   expect_equal(names(tmp),c("N.lci","N.uci"))
   tmp <- confint(cutt,parm=c("N","phi"))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),4)
   expect_equal(names(tmp),c("N.lci","N.uci","phi.lci","phi.uci"))
   # Manly type
-  expect_is(cutt2,"mrOpen")
+  expect_equal(class(cutt2),"mrOpen")
   expect_equal(mode(cutt2),"list")
   tmp <- summary(cutt2)
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),4)
   expect_equal(names(tmp),c("M","N","phi","B"))
   tmp <- summary(cutt2,parm="N")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),1)
   expect_equal(names(tmp),"N")
   tmp <- summary(cutt2,parm=c("N","phi"))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),2)
   expect_equal(names(tmp),c("N","phi"))
   tmp <- suppressMessages(confint(cutt2))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),4)
   expect_equal(names(tmp),c("N.lci","N.uci","phi.lci","phi.uci"))
   tmp <- confint(cutt2,parm="N")
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),2)
   expect_equal(names(tmp),c("N.lci","N.uci"))
   tmp <- confint(cutt2,parm=c("N","phi"))
-  expect_is(tmp,"data.frame")
+  expect_equal(class(tmp),"data.frame")
   expect_equal(ncol(tmp),4)
   expect_equal(names(tmp),c("N.lci","N.uci","phi.lci","phi.uci"))
 })
diff --git a/tests/testthat/testthat_nlsTracePlot.R b/tests/testthat/testthat_nlsTracePlot.R
index
8d0afcd6..dbb26a7b 100644 --- a/tests/testthat/testthat_nlsTracePlot.R +++ b/tests/testthat/testthat_nlsTracePlot.R @@ -21,11 +21,11 @@ test_that("nlsTracePlot() test messages",{ test_that("nlsTracePlot() test output",{ # successful fit tmp <- nlsTracePlot(fit1,vb1,add=FALSE) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(ncol(tmp),3) # unsuccessful fit - if (require(FSAdata)) { + if (require(FSAdata,quietly=TRUE)) { data(BSkateGB,package="FSAdata") wtr <- droplevels(subset(BSkateGB,season=="winter")) bh1 <- srFuns() @@ -35,7 +35,7 @@ test_that("nlsTracePlot() test output",{ trace=TRUE)) )) tmp <- nlsTracePlot(trc,bh1,add=FALSE) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(ncol(tmp),2) } diff --git a/tests/testthat/testthat_removal.R b/tests/testthat/testthat_removal.R index 492b5500..32711964 100644 --- a/tests/testthat/testthat_removal.R +++ b/tests/testthat/testthat_removal.R @@ -157,45 +157,45 @@ test_that("removal() return types",{ # summary() results tmp <- removal(c(38,26,12)) tmp2 <- summary(tmp) - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),2) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p")) expect_equal(colnames(tmp2),c("Estimate","Std. Error")) tmp2 <- summary(tmp,parm="p") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("p")) expect_equal(colnames(tmp2),c("Estimate","Std. Error")) tmp2 <- summary(tmp,parm="No") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("Estimate","Std. 
Error")) tmp <- removal(c(38,26,12),method="Schnute") tmp2 <- summary(tmp) - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),3) expect_equal(ncol(tmp2),1) expect_equal(rownames(tmp2),c("No","p","p1")) expect_equal(colnames(tmp2),c("Estimate")) tmp2 <- summary(tmp,parm="p1") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),1) expect_equal(rownames(tmp2),c("p1")) expect_equal(colnames(tmp2),c("Estimate")) tmp2 <- summary(tmp,parm=c("p","No")) - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),2) expect_equal(ncol(tmp2),1) expect_equal(rownames(tmp2),c("No","p")) expect_equal(colnames(tmp2),c("Estimate")) tmp <- removal(c(38,26),method="Seber2") tmp2 <- summary(tmp) - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),2) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p")) @@ -204,51 +204,51 @@ test_that("removal() return types",{ # confint() results tmp <- removal(c(38,26,12)) tmp2 <- confint(tmp) - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),2) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp2 <- confint(tmp,parm="p") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("p")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp2 <- confint(tmp,parm="No") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp <- removal(c(38,26,12),method="Schnute") tmp2 <- confint(tmp) - expect_is(tmp2,"matrix") + 
expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp2 <- confint(tmp,parm="No") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp <- removal(c(38,26),method="Seber2") tmp2 <- confint(tmp) - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),2) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp2 <- confint(tmp,parm="p") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("p")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp2 <- confint(tmp,parm="No") - expect_is(tmp2,"matrix") + expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No")) @@ -297,7 +297,7 @@ test_that("removal with 'CarleStrub' matches Cowx (1983) page 77",{ }) test_that("removal with 'CarleStrub' match results from Jones & Stockwell (1995)",{ - if (require(FSAdata)) { + if (require(FSAdata,quietly=TRUE)) { data(JonesStockwell,package="FSAdata") # isolate captures and Carle-Strub estimates ... 
for non-rejected estimates JS.caps <- JonesStockwell[!JonesStockwell$rejected,4:6] @@ -339,65 +339,60 @@ test_that("removal with 'RobsonRegier2' matches Cowx (1983) page 75",{ test_that("removal with 'Moran' matches Schnute (1983)",{ - if (require(FSAdata)) { - data(BrookTroutNEWP1,package="FSAdata") - Ns <- ps <- LHs <- NLCI <- NUCI <- numeric(nrow(BrookTroutNEWP1)) - for (i in seq_len(nrow(BrookTroutNEWP1))) { - tmp <- removal(as.numeric(BrookTroutNEWP1[i,c("first","second", - "third","fourth")]), - method="Moran") - Ns[i] <- round(tmp$est[["No"]],1) - ps[i] <- round(tmp$est[["p"]],2) - LHs[i] <- round(tmp$min.nlogLH,2) - tmp <- confint(tmp) - NLCI[i] <- tmp[1] - NUCI[i] <- tmp[2] - } - ## check point estimates - tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),Ns,ps,LHs, - BrookTroutNEWP1[,c("Moran.N","Moran.p","Moran.LH")]) - ## perfect matches - expect_equal(tmp[,"Ns"],BrookTroutNEWP1$Moran.N[]) - expect_equal(tmp[,"ps"],BrookTroutNEWP1$Moran.p[]) - expect_equal(tmp[,"LHs"],BrookTroutNEWP1$Moran.LH[]) - ## Check CIs (off by no more than 0.1 in a small handful of the UCIs) - tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),NLCI,NUCI, - BrookTroutNEWP1[,c("Moran.NLCI","Moran.NUCI")]) - expect_true(all(abs(tmp[,2:3]-tmp[,4:5])<=0.1001)) + data(BrookTroutNEWP1,package="FSAdata") + Ns <- ps <- LHs <- NLCI <- NUCI <- numeric(nrow(BrookTroutNEWP1)) + for (i in seq_len(nrow(BrookTroutNEWP1))) { + tmp <- removal(as.numeric(BrookTroutNEWP1[i,c("first","second", + "third","fourth")]), + method="Moran") + Ns[i] <- round(tmp$est[["No"]],1) + ps[i] <- round(tmp$est[["p"]],2) + LHs[i] <- round(tmp$min.nlogLH,2) + tmp <- confint(tmp) + NLCI[i] <- tmp[1] + NUCI[i] <- tmp[2] } + ## check point estimates + tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),Ns,ps,LHs, + BrookTroutNEWP1[,c("Moran.N","Moran.p","Moran.LH")]) + ## perfect matches + expect_equal(tmp[,"Ns"],BrookTroutNEWP1$Moran.N[]) + expect_equal(tmp[,"ps"],BrookTroutNEWP1$Moran.p[]) + 
expect_equal(tmp[,"LHs"],BrookTroutNEWP1$Moran.LH[]) + ## Check CIs (off by no more than 0.1 in a small handful of the UCIs) + tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),NLCI,NUCI, + BrookTroutNEWP1[,c("Moran.NLCI","Moran.NUCI")]) + expect_true(all(abs(tmp[,2:3]-tmp[,4:5])<=0.1001)) }) test_that("removal with 'Schnute' matches Schnute (1983)",{ - if (require(FSAdata)) { - data(BrookTroutNEWP1,package="FSAdata") - Ns <- p1s <- ps <- LHs <- NLCI <- NUCI <- numeric(nrow(BrookTroutNEWP1)) - for (i in seq_len(nrow(BrookTroutNEWP1))) { - tmp <- removal(as.numeric(BrookTroutNEWP1[i,c("first","second", - "third","fourth")]), - method="Schnute") - Ns[i] <- round(tmp$est[["No"]],1) - p1s[i] <- round(tmp$est[["p1"]],2) - ps[i] <- round(tmp$est[["p"]],2) - LHs[i] <- round(tmp$min.nlogLH,2) - tmp <- confint(tmp) - NLCI[i] <- tmp[1] - NUCI[i] <- tmp[2] - } - ## check point estimates - tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),Ns,p1s,ps,LHs, - BrookTroutNEWP1[,c("Schnute.N","Schnute.p1", - "Schnute.p","Schnute.LH")]) - ## perfect matches except sample 5 N is off by 0.1 - expect_equal(tmp[-5,"Ns"],BrookTroutNEWP1$Schnute.N[-5]) - expect_equal(tmp[,"p1s"],BrookTroutNEWP1$Schnute.p1[]) - expect_equal(tmp[,"ps"],BrookTroutNEWP1$Schnute.p[]) - expect_equal(tmp[,"LHs"],BrookTroutNEWP1$Schnute.LH[]) - ## Check CIs (off by no more than 0.1) - tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),NLCI,NUCI, - BrookTroutNEWP1[,c("Schnute.NLCI","Schnute.NUCI")]) - expect_true(all(abs(tmp[,2:3]-tmp[,4:5])<=0.1001,na.rm=TRUE)) + data(BrookTroutNEWP1,package="FSAdata") + Ns <- p1s <- ps <- LHs <- NLCI <- NUCI <- numeric(nrow(BrookTroutNEWP1)) + for (i in seq_len(nrow(BrookTroutNEWP1))) { + tmp <- removal(as.numeric(BrookTroutNEWP1[i,c("first","second", + "third","fourth")]), + method="Schnute") + Ns[i] <- round(tmp$est[["No"]],1) + p1s[i] <- round(tmp$est[["p1"]],2) + ps[i] <- round(tmp$est[["p"]],2) + LHs[i] <- round(tmp$min.nlogLH,2) + tmp <- confint(tmp) + NLCI[i] <- tmp[1] + 
NUCI[i] <- tmp[2] } - + ## check point estimates + tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),Ns,p1s,ps,LHs, + BrookTroutNEWP1[,c("Schnute.N","Schnute.p1", + "Schnute.p","Schnute.LH")]) + ## perfect matches except sample 5 N is off by 0.1 + expect_equal(tmp[-5,"Ns"],BrookTroutNEWP1$Schnute.N[-5]) + expect_equal(tmp[,"p1s"],BrookTroutNEWP1$Schnute.p1[]) + expect_equal(tmp[,"ps"],BrookTroutNEWP1$Schnute.p[]) + expect_equal(tmp[,"LHs"],BrookTroutNEWP1$Schnute.LH[]) + ## Check CIs (off by no more than 0.1) + tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),NLCI,NUCI, + BrookTroutNEWP1[,c("Schnute.NLCI","Schnute.NUCI")]) + expect_true(all(abs(tmp[,2:3]-tmp[,4:5])<=0.1001,na.rm=TRUE)) }) test_that("removal with 'Burnham' match results from (Van Deventer 1989) page 13",{ tmp <- removal(c(124,61,35,14),method="Burnham",CIMicroFish=TRUE) diff --git a/tests/testthat/testthat_sumTable.R b/tests/testthat/testthat_sumTable.R index 3eaffd85..6cbec51a 100644 --- a/tests/testthat/testthat_sumTable.R +++ b/tests/testthat/testthat_sumTable.R @@ -29,44 +29,43 @@ test_that("sumTable() messages",{ test_that("SumTable() Results",{ # 1-D example of mean tmp <- sumTable(dat~g1,data=d) - expect_is(tmp,"array") + expect_equal(class(tmp),"array") expect_equal(mode(tmp),"numeric") - expect_equivalent(tmp,array(c(2,5,8),dimnames=list(c("A","B","C")))) + expect_equal(tmp,array(c(2,5,8),dimnames=list(c("A","B","C")))) # 1-D example of mean tmp <- sumTable(dat~g2,data=d) - expect_is(tmp,"array") + expect_equal(class(tmp),"array") expect_equal(mode(tmp),"numeric") - expect_equivalent(tmp,array(c(4,5,6),dimnames=list(c("a","b","c")))) + expect_equal(tmp,array(c(4,5,6),dimnames=list(c("a","b","c")))) # 2-D example of mean tmp <- sumTable(dat~g1*g2,data=d) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),3) expect_equal(ncol(tmp),3) - expect_equivalent(tmp,matrix(1:9,nrow=3,byrow=TRUE)) + 
expect_equal(tmp,matrix(1:9,nrow=3,byrow=TRUE),ignore_attr=TRUE) # 1-D example of length tmp <- sumTable(dat~g1,data=d,FUN=length) - expect_is(tmp,"array") + expect_equal(class(tmp),"array") expect_equal(mode(tmp),"numeric") - expect_equivalent(tmp,array(c(3,3,3),dimnames=list(c("A","B","C")))) + expect_equal(tmp,array(c(3,3,3),dimnames=list(c("A","B","C")))) # 2-D example of length tmp <- sumTable(dat~g1*g2,data=d,FUN=length) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),3) expect_equal(ncol(tmp),3) - expect_equivalent(tmp,matrix(1,nrow=3,ncol=3,byrow=TRUE)) + expect_equal(tmp,matrix(1,nrow=3,ncol=3,byrow=TRUE),ignore_attr=TRUE) # 1-D example of sd tmp <- sumTable(dat~g1,data=d,FUN=sd) - expect_is(tmp,"array") + expect_equal(class(tmp),"array") expect_equal(mode(tmp),"numeric") - expect_equivalent(tmp,array(c(1,1,1),dimnames=list(c("A","B","C")))) + expect_equal(tmp,array(c(1,1,1),dimnames=list(c("A","B","C")))) # 2-D example of sd tmp <- sumTable(dat~g1*g2,data=d,FUN=sd) - expect_is(tmp,"matrix") + expect_equal(class(tmp),c("matrix","array")) expect_equal(mode(tmp),"numeric") expect_equal(nrow(tmp),3) expect_equal(ncol(tmp),3) - expect_equivalent(tmp,matrix(NA,nrow=3,ncol=3,byrow=TRUE)) + expect_equal(tmp,matrix(NA_real_,nrow=3,ncol=3,byrow=TRUE),ignore_attr=TRUE) }) - From b47637213619e5698e73126e3adb4a9319495f80 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Mon, 30 Dec 2024 21:09:30 -0600 Subject: [PATCH 04/21] Moved package loading in tests from individual test files --- NEWS.md | 2 +- man/FSA.Rd | 2 +- man/growthModels.Rd | 30 +- tests/testthat.R | 15 +- tests/testthat/testthat_AgeComparisons.R | 53 ++-- tests/testthat/testthat_AgeLengthKey.R | 54 ++-- tests/testthat/testthat_FSAUtils.R | 24 +- tests/testthat/testthat_VonBertalanffy.R | 331 +++++++++++------------ tests/testthat/testthat_catchCurve.R | 62 ++--- tests/testthat/testthat_depletion.R | 88 +++--- 
tests/testthat/testthat_dunnTest.R | 67 +++-- tests/testthat/testthat_extraTests.R | 20 +- tests/testthat/testthat_metaM.R | 17 +- tests/testthat/testthat_mrClosed.R | 70 +++-- tests/testthat/testthat_nlsTracePlot.R | 26 +- tests/testthat/testthat_removal.R | 22 +- 16 files changed, 426 insertions(+), 457 deletions(-) diff --git a/NEWS.md b/NEWS.md index 01480c95..1c1b8d8c 100644 --- a/NEWS.md +++ b/NEWS.md @@ -6,7 +6,7 @@ * Replaced many `expect_equivalent()` with `expect_equal(,ignore_attr=TRUE)` as `expect_equivalent()` was deprecated. * Had to correct many tests where I expected just `matrix` but the class was `c("matrix","array")`. * Had to handle multiple warnings for some tests (see [this article](https://testthat.r-lib.org/articles/third-edition.html#warnings)). - * Removed many `require()` that were not needed. + * Moved all `require()` in individual files to `testthat.R`. This removed many `require()` that were not needed. * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). diff --git a/man/FSA.Rd b/man/FSA.Rd index a82f728e..7492298e 100644 --- a/man/FSA.Rd +++ b/man/FSA.Rd @@ -2,8 +2,8 @@ % Please edit documentation in R/FSA.R \docType{package} \name{FSA} -\alias{FSA} \alias{FSA-package} +\alias{FSA} \title{Fisheries stock assessment methods and data.} \description{ Functions to support basic fisheries stock assessment methods. 
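The NEWS bullets above mention two recurring test-suite changes: since R 4.0.0 a matrix also inherits from class `"array"`, and testthat's third edition deprecates `expect_equivalent()` in favor of `expect_equal(..., ignore_attr = TRUE)`. A minimal standalone sketch of both behaviors (not FSA code, just the base-R and testthat facts behind the edits):

```r
library(testthat)

# Since R 4.0.0 a matrix carries a two-element implicit class, which is
# why tests that expected just "matrix" had to expect c("matrix","array").
m <- matrix(1:9, nrow = 3)
stopifnot(identical(class(m), c("matrix", "array")))

# expect_equivalent() is deprecated in testthat edition 3; the
# replacement compares values while explicitly ignoring attributes
# such as dimnames, so this passes even though m has no dimnames.
m2 <- matrix(1:9, nrow = 3, dimnames = list(letters[1:3], LETTERS[1:3]))
expect_equal(m, m2, ignore_attr = TRUE)
```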
diff --git a/man/growthModels.Rd b/man/growthModels.Rd index bb6a9f99..e0410ad0 100644 --- a/man/growthModels.Rd +++ b/man/growthModels.Rd @@ -77,7 +77,7 @@ Take note of the following for parameterizations (i.e., \code{param}) of each gr \itemize{ \item The \sQuote{Ricker2} and \sQuote{QuinnDeriso1} are synonymous, as are \sQuote{Ricker3} and \sQuote{QuinnDeriso2}. \item The parameterizations and parameters for the Gompertz function are varied and confusing in the literature. I have attempted to use a uniform set of parameters in these functions, but that makes a direct comparison to the literature difficult. Common sources for Gompertz models are listed in the references below. I make some comments here to aid comparisons to the literature. - \item Within FSA, L0 is the mean length at age 0, Linf is the mean asymptotic length, ti is the age at the inflection point, gi is the instantaneous growth rate at the inflection point, t* is a dimensionless parameter related to time/age, and a is a dimensionless parameter related to growth. + \item Within FSA, L0 is the mean length at age 0, Linf is the mean asymptotic length, ti is the age at the inflection point, gi is the instantaneous growth rate at the inflection point, t0 is a dimensionless parameter related to time/age, and a is a dimensionless parameter related to growth. \item In the Quinn and Deriso (1999) functions (the \sQuote{QuinnDerisoX} functions), the a parameter here is equal to lambda/K there and the gi parameter here is equal to the K parameter there. Also note that their Y is L here. \item In the Ricker (1979)[p. 705] functions (the \sQuote{RickerX} functions), the a parameter here is equal to k there and the gi parameter here is equal to the g parameter there. Also note that their w is L here. In the Ricker (1979) functions as presented in Campana and Jones (1992), the a parameter here is equal to k parameter there and the gi parameter here is equal to the G parameter there. 
Also note that their X is L here. \item The function in Ricker (1975)[p. 232] is the same as \sQuote{Ricker2} where the a parameter here is equal to G there and the gi parameter here is equal to the g parameter there. Also note that their w is L here. @@ -87,7 +87,7 @@ Take note of the following for parameterizations (i.e., \code{param}) of each gr \item Richards \itemize{ \item Within FSA, Linf is the mean asymptotic length, ti is the age at the inflection point, k controls the slope at the inflection point (maximum relative growth rate), b is dimensionless but related to the vertical position (i.e., size) of the inflection point, a is dimensionless but related to the horizontal position (i.e., age) of the inflection point, and L0 is the mean length at age-0. - \item The parameterizations (1-6) correspond to functions/equations 1, 4, 5, 6, 7, and 8, respectively, in Tjorve and Tjorve (2010). Note that their A, S, k, d, and B are Linf, a, k, b, and L0, respectively, here (in FSA). THeir (Tjorve and Tjorve 2010) K does not appear here. + \item The parameterizations (1-6) correspond to functions/equations 1, 4, 5, 6, 7, and 8, respectively, in Tjorve and Tjorve (2010). Note that their A, S, k, d, and B are Linf, a, k, b, and L0, respectively, here (in FSA). Their (Tjorve and Tjorve 2010) K does not appear here. } \item logistic \itemize{ @@ -288,7 +288,7 @@ par(op) \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. -Campana, S.E. and C.M. Jones. 1992. Analysis of otolith microstructure data. 
Pages 73-100 In D.K. Stevenson and S.E. Campana, editors. Otolith microstructure examination and analysis. Canadian Special Publication of Fisheries and Aquatic Sciences 117. [Was (is?) from https://waves-vagues.dfo-mpo.gc.ca/library-bibliotheque/141734.pdf.] Fabens, A. 1965. Properties and fitting of the von Bertalanffy growth curve. Growth 29:265-289. @@ -296,45 +296,45 @@ Francis, R.I.C.C. 1988. Are growth parameters estimated from tagging and age-len Gallucci, V.F. and T.J. Quinn II. 1979. Reparameterizing, fitting, and testing a simple growth model. Transactions of the American Fisheries Society, 108:14-25. -Garcia-Berthou, E., G. Carmona-Catot, R. Merciai, and D.H. Ogle. A technical note on seasonal growth models. Reviews in Fish Biology and Fisheries 22:635-640. [Was (is?) from https://www.researchgate.net/publication/257658359_A_technical_note_on_seasonal_growth_models.] +Garcia-Berthou, E., G. Carmona-Catot, R. Merciai, and D.H. Ogle. A technical note on seasonal growth models. Reviews in Fish Biology and Fisheries 22:635-640. -Gompertz, B. 1825. On the nature of the function expressive of the law of human mortality, and on a new method of determining the value of life contingencies. Philosophical Transactions of the Royal Society of London. 115:513-583. +Gompertz, B. 1825. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philosophical Transactions of the Royal Society of London. 115:513-583. -Haddon, M., C. Mundy, and D. Tarbath. 2008. Using an inverse-logistic model to describe growth increments of Blacklip Abalone (\emph{Haliotis rubra}) in Tasmania. Fishery Bulletin 106:58-71. [Was (is?) from http://aquaticcommons.org/8857/1/haddon_Fish_Bull_2008.pdf.] +Haddon, M., C. Mundy, and D. Tarbath. 2008. Using an inverse-logistic model to describe growth increments of blacklip abalone (\emph{Haliotis rubra}) in Tasmania. Fishery Bulletin 106:58-71. [Was (is?) 
from https://spo.nmfs.noaa.gov/sites/default/files/pdf-content/2008/1061/haddon.pdf.] -Karkach, A. S. 2006. Trajectories and models of individual growth. Demographic Research 15:347-400. [Was (is?) from http://www.demographic-research.org/volumes/vol15/12/15-12.pdf.] +Karkach, A. S. 2006. Trajectories and models of individual growth. Demographic Research 15:347-400. [Was (is?) from https://www.demographic-research.org/volumes/vol15/12/15-12.pdf.] Katsanevakis, S. and C.D. Maravelias. 2008. Modeling fish growth: multi-model inference as a better alternative to a priori using von Bertalanffy equation. Fish and Fisheries 9:178-187. -Mooij, W.M., J.M. Van Rooij, and S. Wijnhoven. 1999. Analysis and comparison of fish growth from small samples of length-at-age data: Detection of sexual dimorphism in Eurasian Perch as an example. Transactions of the American Fisheries Society 128:483-490. +Mooij, W.M., J.M. Van Rooij, and S. Wijnhoven. 1999. Analysis and comparison of fish growth from small samples of length-at-age data: Detection of sexual dimorphism in Eurasian perch as an example. Transactions of the American Fisheries Society 128:483-490. -Polacheck, T., J.P. Eveson, and G.M. Laslett. 2004. Increase in growth rates of southern Bluefin Tuna (\emph{Thunnus maccoyii}) over four decades: 1960 to 2000. Canadian Journal of Fisheries and Aquatic Sciences, 61:307-322. +Polacheck, T., J.P. Eveson, and G.M. Laslett. 2004. Increase in growth rates of southern bluefin tuna (\emph{Thunnus maccoyii}) over four decades: 1960 to 2000. Canadian Journal of Fisheries and Aquatic Sciences, 61:307-322. Quinn, T. J. and R. B. Deriso. 1999. Quantitative Fish Dynamics. Oxford University Press, New York, New York. 542 pages. -Quist, M.C., M.A. Pegg, and D.R. DeVries. 2012. Age and Growth. Chapter 15 in A.V. Zale, D.L Parrish, and T.M. Sutton, Editors Fisheries Techniques, Third Edition. American Fisheries Society, Bethesda, MD. +Quist, M.C., M.A. Pegg, and D.R. DeVries. 2012. Age and growth. 
Chapter 15 in A.V. Zale, D.L Parrish, and T.M. Sutton, editors. Fisheries Techniques, Third Edition. American Fisheries Society, Bethesda, MD. Richards, F. J. 1959. A flexible growth function for empirical use. Journal of Experimental Biology 10:290-300. -Ricker, W.E. 1975. Computation and interpretation of biological statistics of fish populations. Technical Report Bulletin 191, Bulletin of the Fisheries Research Board of Canada. [Was (is?) from http://www.dfo-mpo.gc.ca/Library/1485.pdf.] +Ricker, W.E. 1975. Computation and interpretation of biological statistics of fish populations. Technical Report Bulletin 191, Bulletin of the Fisheries Research Board of Canada. [Was (is?) from https://publications.gc.ca/collections/collection_2015/mpo-dfo/Fs94-191-eng.pdf.] -Ricker, W.E. 1979. Growth rates and models. Pages 677-743 In W.S. Hoar, D.J. Randall, and J.R. Brett, editors. Fish Physiology, Vol. 8: Bioenergetics and Growth. Academic Press, NY, NY. [Was (is?) from https://books.google.com/books?id=CB1qu2VbKwQC&pg=PA705&lpg=PA705&dq=Gompertz+fish&source=bl&ots=y34lhFP4IU&sig=EM_DGEQMPGIn_DlgTcGIi_wbItE&hl=en&sa=X&ei=QmM4VZK6EpDAgwTt24CABw&ved=0CE8Q6AEwBw#v=onepage&q=Gompertz\%20fish&f=false.] +Ricker, W.E. 1979. Growth rates and models. Pages 677-743 In W.S. Hoar, D.J. Randall, and J.R. Brett, editors. Fish Physiology, Vol. 8: Bioenergetics and Growth. Academic Press, New York, NY. [Was (is?) from https://books.google.com/books?id=CB1qu2VbKwQC&pg=PA705&lpg=PA705&dq=Gompertz+fish&source=bl&ots=y34lhFP4IU&sig=EM_DGEQMPGIn_DlgTcGIi_wbItE&hl=en&sa=X&ei=QmM4VZK6EpDAgwTt24CABw&ved=0CE8Q6AEwBw#v=onepage&q=Gompertz\%20fish&f=false.] Schnute, J. 1981. A versatile growth model with statistically stable parameters. Canadian Journal of Fisheries and Aquatic Sciences, 38:1128-1140. -Somers, I. F. 1988. On a seasonally oscillating growth function. Fishbyte 6(1):8-11. [Was (is?) from http://www.worldfishcenter.org/Naga/na_2914.pdf.] +Somers, I. F. 1988. 
On a seasonally oscillating growth function. Fishbyte 6(1):8-11. [Was (is?) from https://www.fishbase.us/manual/English/fishbaseSeasonal_Growth.htm.] Tjorve, E. and K. M. C. Tjorve. 2010. A unified approach to the Richards-model family for use in growth analyses: Why we need only two model forms. Journal of Theoretical Biology 267:417-425. [Was (is?) from https://www.researchgate.net/profile/Even_Tjorve/publication/46218377_A_unified_approach_to_the_Richards-model_family_for_use_in_growth_analyses_why_we_need_only_two_model_forms/links/54ba83b80cf29e0cb04bd24e.pdf.] Troynikov, V. S., R. W. Day, and A. M. Leorke. Estimation of seasonal growth parameters using a stochastic Gompertz model for tagging data. Journal of Shellfish Research 17:833-838. [Was (is?) from https://www.researchgate.net/profile/Robert_Day2/publication/249340562_Estimation_of_seasonal_growth_parameters_using_a_stochastic_gompertz_model_for_tagging_data/links/54200fa30cf203f155c2a08a.pdf.] -Vaughan, D. S. and T. E. Helser. 1990. Status of the Red Drum stock of the Atlantic coast: Stock assessment report for 1989. NOAA Technical Memorandum NMFS-SEFC-263, 117 p. [Was (is?) from http://docs.lib.noaa.gov/noaa_documents/NMFS/SEFSC/TM_NMFS_SEFSC/NMFS_SEFSC_TM_263.pdf.] +Vaughan, D. S. and T. E. Helser. 1990. Status of the Red Drum stock of the Atlantic coast: Stock assessment report for 1989. NOAA Technical Memorandum NMFS-SEFC-263, 117 p. [Was (is?) from https://repository.library.noaa.gov/view/noaa/5927/noaa_5927_DS1.pdf.] Wang, Y.-G. 1998. An improved Fabens method for estimation of growth parameters in the von Bertalanffy model with individual asymptotes. Canadian Journal of Fisheries and Aquatic Sciences 55:397-400. Weisberg, S., G.R. Spangler, and L. S. Richmond. 2010. Mixed effects models for fish growth. Canadian Journal of Fisheries And Aquatic Sciences 67:269-277. -Winsor, C.P. 1932. The Gompertz curve as a growth curve. Proceedings of the National Academy of Sciences. 18:1-8. [Was (is?) 
from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1076153/pdf/pnas01729-0009.pdf.] +Winsor, C.P. 1932. The Gompertz curve as a growth curve. Proceedings of the National Academy of Sciences. 18:1-8. [Was (is?) from https://pmc.ncbi.nlm.nih.gov/articles/PMC1076153/pdf/pnas01729-0009.pdf.] } \seealso{ See \code{\link{Schnute}} for an implementation of the Schnute (1981) model. diff --git a/tests/testthat.R b/tests/testthat.R index 50e48886..de7e3fb8 100644 --- a/tests/testthat.R +++ b/tests/testthat.R @@ -6,7 +6,18 @@ # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview # * https://testthat.r-lib.org/articles/special-files.html -library(testthat) -library(FSA) +suppressPackageStartupMessages(library(testthat)) +suppressPackageStartupMessages(library(FSA)) +# packages used in tests (load here instead of in individual files) +suppressPackageStartupMessages(library(FSAdata)) +suppressPackageStartupMessages(library(fishmethods)) +suppressPackageStartupMessages(library(DescTools)) +suppressPackageStartupMessages(library(dunn.test)) +suppressPackageStartupMessages(library(lmtest)) +suppressPackageStartupMessages(library(nlme)) +suppressPackageStartupMessages(library(psych)) +suppressPackageStartupMessages(library(tibble)) + +# Test the package test_check("FSA") diff --git a/tests/testthat/testthat_AgeComparisons.R b/tests/testthat/testthat_AgeComparisons.R index 2b20975d..61da690d 100644 --- a/tests/testthat/testthat_AgeComparisons.R +++ b/tests/testthat/testthat_AgeComparisons.R @@ -148,33 +148,31 @@ test_that("ageBias() symmetry tests match results in Evans and Hoenig (2008)",{ }) test_that("test ageBias() against compare2() with AlewifeLH data",{ - if (require(FSAdata,quietly=TRUE) & require(fishmethods,quietly=TRUE)) { - data(AlewifeLH,package="FSAdata") - ab2 <- compare2(AlewifeLH,barplot=FALSE) - ## no continuity correction - suppressWarnings(ab1 <- ageBias(scales~otoliths,data=AlewifeLH, - ref.lab="Otolith Age",nref.lab="Scale Age")) - junk <- 
capture.output( ab1sum <- summary(ab1) ) - expect_equal(ab1sum[ab1sum$symTest=="McNemar","chi.sq"], ab2$McNemar$Chisq) - expect_equal(ab1sum[ab1sum$symTest=="McNemar","p"], ab2$McNemar$pvalue) - expect_equal(ab1sum[ab1sum$symTest=="EvansHoenig","chi.sq"], - ab2$Evans_Hoenig$Chisq) - expect_equal(ab1sum[ab1sum$symTest=="EvansHoenig","p"], - ab2$Evans_Hoenig$pvalue) - expect_equal(ab1sum[ab1sum$symTest=="EvansHoenig","df"], - ab2$Evans_Hoenig$df) - ## Yates continuity correction - junk <- capture.output( ab1sum2 <- summary(ab1,what="McNemar", - cont.corr="Yates") ) - expect_equal(ab1sum2[1,"chi.sq"], ab2$McNemar_continuity_correction$Chisq) - expect_equal(ab1sum2[1,"p"], ab2$McNemar_continuity_correction$pvalue) - ## Edwards continuity correction - ab3 <- compare2(AlewifeLH,correct="Edwards",barplot=FALSE) - junk <- capture.output( ab1sum3 <- summary(ab1,what="McNemar", - cont.corr="Edwards") ) - expect_equal(ab1sum3[1,"chi.sq"], ab3$McNemar_continuity_correction$Chisq) - expect_equal(ab1sum3[1,"p"], ab3$McNemar_continuity_correction$pvalue) - } + data(AlewifeLH,package="FSAdata") + ab2 <- fishmethods::compare2(AlewifeLH,barplot=FALSE) + ## no continuity correction + suppressWarnings(ab1 <- ageBias(scales~otoliths,data=AlewifeLH, + ref.lab="Otolith Age",nref.lab="Scale Age")) + junk <- capture.output( ab1sum <- summary(ab1) ) + expect_equal(ab1sum[ab1sum$symTest=="McNemar","chi.sq"], ab2$McNemar$Chisq) + expect_equal(ab1sum[ab1sum$symTest=="McNemar","p"], ab2$McNemar$pvalue) + expect_equal(ab1sum[ab1sum$symTest=="EvansHoenig","chi.sq"], + ab2$Evans_Hoenig$Chisq) + expect_equal(ab1sum[ab1sum$symTest=="EvansHoenig","p"], + ab2$Evans_Hoenig$pvalue) + expect_equal(ab1sum[ab1sum$symTest=="EvansHoenig","df"], + ab2$Evans_Hoenig$df) + ## Yates continuity correction + junk <- capture.output( ab1sum2 <- summary(ab1,what="McNemar", + cont.corr="Yates") ) + expect_equal(ab1sum2[1,"chi.sq"], ab2$McNemar_continuity_correction$Chisq) + expect_equal(ab1sum2[1,"p"], 
ab2$McNemar_continuity_correction$pvalue) + ## Edwards continuity correction + ab3 <- fishmethods::compare2(AlewifeLH,correct="Edwards",barplot=FALSE) + junk <- capture.output( ab1sum3 <- summary(ab1,what="McNemar", + cont.corr="Edwards") ) + expect_equal(ab1sum3[1,"chi.sq"], ab3$McNemar_continuity_correction$Chisq) + expect_equal(ab1sum3[1,"p"], ab3$McNemar_continuity_correction$pvalue) }) test_that("ageBias() compared to http://www.nefsc.noaa.gov/fbp/age-prec/ calculations for AlewifeLH data",{ @@ -285,4 +283,3 @@ test_that("agePrecision() differences for simple data with NA values",{ ap135 <- agePrecision(~age1+age3+age5,data=tmp) expect_equal(round(ap135$PercAgree,4),50.0000) }) - diff --git a/tests/testthat/testthat_AgeLengthKey.R b/tests/testthat/testthat_AgeLengthKey.R index 574b4a36..c9730c96 100644 --- a/tests/testthat/testthat_AgeLengthKey.R +++ b/tests/testthat/testthat_AgeLengthKey.R @@ -274,32 +274,30 @@ test_that("Assigned ages are correct (within rounding) with semi-random alkIndiv }) test_that("alkAgeDist() reproduces results from Table 8.4 (left) of Quinn and Deriso (1999)",{ - if (require(fishmethods,quietly=TRUE)) { - ## Q&D (1999) data are alkdata and alkprop reproduces Table 8.4 results - data(alkdata,package="fishmethods") - tmp1 <- alkprop(alkdata)$results - } - if (require(FSAdata,quietly=TRUE)) { - ## Same data in SnapperHG2 in a different format - ## create ALK and intermediate results - data(SnapperHG2,package="FSAdata") - len.n <- xtabs(~len,data=SnapperHG2) - sn.age <- subset(SnapperHG2,!is.na(age)) - agekey <- prop.table(xtabs(~len+age,data=sn.age),1) - lenA.n <- xtabs(~len,data=sn.age) - ## get ALKAgeDist results - tmp2 <- alkAgeDist(agekey,lenA.n,len.n) - - ## Find difference in results - diff <- as.matrix(tmp2[,-1]-tmp1[,-3]) - expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) - - ## enter Q&D results as a guard against fishmethods changing - props <- c(0.0003,0.0213,0.1624,0.0926,0.1533,0.1461,0.1260, - 
0.0133,0.0277,0.0763,0.0298,0.0332,0.0162,0.1017) - ses <- c(0.0003,0.0056,0.0157,0.0158,0.0185,0.0182,0.0150, - 0.0050,0.0074,0.0083,0.0047,0.0050,0.0031,0.0063) - diff <- as.matrix(round(tmp2[,-1],4)-cbind(props,ses)) - expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) - } + ## Q&D (1999) data are alkdata and alkprop reproduces Table 8.4 results + data(alkdata,package="fishmethods") + tmp1 <- fishmethods::alkprop(alkdata)$results + + ## Same data in SnapperHG2 in a different format + ## create ALK and intermediate results + data(SnapperHG2,package="FSAdata") + len.n <- xtabs(~len,data=SnapperHG2) + sn.age <- subset(SnapperHG2,!is.na(age)) + agekey <- prop.table(xtabs(~len+age,data=sn.age),1) + lenA.n <- xtabs(~len,data=sn.age) + ## get ALKAgeDist results + tmp2 <- alkAgeDist(agekey,lenA.n,len.n) + + ## Find difference in results + diff <- as.matrix(tmp2[,-1]-tmp1[,-3]) + expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) + + ## enter Q&D results as a guard against fishmethods changing + props <- c(0.0003,0.0213,0.1624,0.0926,0.1533,0.1461,0.1260, + 0.0133,0.0277,0.0763,0.0298,0.0332,0.0162,0.1017) + ses <- c(0.0003,0.0056,0.0157,0.0158,0.0185,0.0182,0.0150, + 0.0050,0.0074,0.0083,0.0047,0.0050,0.0031,0.0063) + diff <- as.matrix(round(tmp2[,-1],4)-cbind(props,ses)) + expect_equal(diff,matrix(0,nrow=nrow(diff),ncol=ncol(diff)),ignore_attr=TRUE) + }) diff --git a/tests/testthat/testthat_FSAUtils.R b/tests/testthat/testthat_FSAUtils.R index 17ce759d..ea1c2980 100644 --- a/tests/testthat/testthat_FSAUtils.R +++ b/tests/testthat/testthat_FSAUtils.R @@ -202,16 +202,12 @@ test_that("geomean() / geosd() results",{ expect_equal(round(geomean(tmp),4),1.0861) expect_equal(round(geosd(tmp),4),1.0795) # match geometric.mean in psych package - if (require(psych,quietly=TRUE)) { - expect_equal(geomean(tmp),psych::geometric.mean(tmp)) - expect_equal(geomean(tmp2,na.rm=TRUE),psych::geometric.mean(tmp2)) - } - if 
(require(DescTools,quietly=TRUE)) { - expect_equal(geomean(tmp),DescTools::Gmean(tmp)) - expect_equal(geomean(tmp2,na.rm=TRUE),DescTools::Gmean(tmp2,na.rm=TRUE)) - expect_equal(geosd(tmp),DescTools::Gsd(tmp)) - expect_equal(geosd(tmp2,na.rm=TRUE),DescTools::Gsd(tmp2,na.rm=TRUE)) - } + expect_equal(geomean(tmp),psych::geometric.mean(tmp)) + expect_equal(geomean(tmp2,na.rm=TRUE),psych::geometric.mean(tmp2)) + expect_equal(geomean(tmp),DescTools::Gmean(tmp)) + expect_equal(geomean(tmp2,na.rm=TRUE),DescTools::Gmean(tmp2,na.rm=TRUE)) + expect_equal(geosd(tmp),DescTools::Gsd(tmp)) + expect_equal(geosd(tmp2,na.rm=TRUE),DescTools::Gsd(tmp2,na.rm=TRUE)) }) test_that("headtail() return values",{ @@ -252,11 +248,9 @@ test_that("headtail() return values",{ expect_true(is.null(rownames(tmp))) ## check how it handles tbl_df object - if (require(tibble,quietly=TRUE)) { - iris2 <- tibble::as_tibble(iris) - tmp <- FSA::headtail(iris2,n=15) - expect_equal(class(tmp),"data.frame") - } + iris2 <- tibble::as_tibble(iris) + tmp <- FSA::headtail(iris2,n=15) + expect_equal(class(tmp),"data.frame") }) test_that("peek() return values",{ diff --git a/tests/testthat/testthat_VonBertalanffy.R b/tests/testthat/testthat_VonBertalanffy.R index 1c7b4055..e9125336 100644 --- a/tests/testthat/testthat_VonBertalanffy.R +++ b/tests/testthat/testthat_VonBertalanffy.R @@ -3,99 +3,96 @@ ## Test Messages ---- test_that("vbStarts() messages",{ ## Get some data for the following attempts - if (require(fishmethods,quietly=TRUE)) { - data(Kimura,package="fishmethods") - ## Asked for a dynamicPlot, which now does not exist - expect_warning(vbStarts(length~age,data=Kimura,dynamicPlot=TRUE), - "functionality has been moved to") - ## wrong types - expect_error(vbStarts(length~age,data=Kimura,param="Derek"), - "should be one of") - expect_error(vbStarts(length~age,data=Kimura,type="Derek"), - "should be one of") - expect_error(vbStarts(length~age,data=Kimura,param="Francis",methEV="Derek"), - "should be one of") - 
expect_error(vbStarts(length~age,data=Kimura,param="Schnute",methEV="Derek"), - "should be one of") - expect_error(vbStarts(length~age,data=Kimura,param="typical",meth0="Derek"), - "should be one of") - expect_error(vbStarts(length~age,data=Kimura,param="original",meth0="Derek"), - "should be one of") - expect_error(vbStarts(length~age,data=Kimura,methLinf="Derek"), - "should be one of") - expect_error(vbStarts(length~age,data=Kimura,methLinf="oldAge",num4Linf=-1), - "must be at least 1") - expect_error(vbStarts(length~age,data=Kimura,methLinf="longFish",num4Linf=-1), - "must be at least 1") - expect_error(vbStarts(length~age,data=Kimura,methLinf="oldAge",num4Linf=30), - "less than the number of observed ages") - expect_error(vbStarts(length~age,data=Kimura,methLinf="longFish",num4Linf=500), - "less than the number of recorded lengths") - ## Two variables on LHS - expect_error(vbStarts(length+age~age,data=Kimura,param="typical"), - "more than one variable on the LHS") - ## Two variables on RHS - expect_error(vbStarts(length~age+sex,data=Kimura,param="typical"), - "must have only one RHS variable") - ## LHS is a factor - expect_error(vbStarts(sex~age,data=Kimura,param="typical"), - "LHS variable must be numeric") - ## RHS is a factor - expect_error(vbStarts(length~sex,data=Kimura,param="typical"), - "RHS variable must be numeric") - ## not two ages2use given - expect_error(vbStarts(length~age,data=Kimura,param="Francis",ages2use=2), - "have only two ages") - expect_error(vbStarts(length~age,data=Kimura,param="Francis", - ages2use=c(2,5,10)), - "have only two ages") - expect_error(vbStarts(length~age,data=Kimura,param="Schnute",ages2use=2), - "have only two ages") - expect_error(vbStarts(length~age,data=Kimura,param="Schnute", - ages2use=c(2,5,10)), - "have only two ages") - ## ages2use in wrong order - expect_warning(vbStarts(length~age,data=Kimura,param="Francis", - ages2use=c(10,2)), - "order reversed to continue") - 
expect_warning(vbStarts(length~age,data=Kimura,param="Schnute", - ages2use=c(10,2)), - "order reversed to continue") - ## problems with fixed argument - expect_error(vbStarts(length~age,data=Kimura,fixed=c(Linf=3)), - "must be a list") - expect_error(vbStarts(length~age,data=Kimura,fixed=list(Linf=3,7)), - "must be named") - ## problems with valOgle argument - expect_error(vbStarts(length~age,data=Kimura,param="Ogle"), - "must contain a value for 'Lr' or 'tr'") - expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=3), - "must be a named vector") - expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle="3"), - "must be numeric") - expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(3,4)), - "must contain only one value") - expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(a=3)), - "must be 'Lr' or 'tr'") - expect_warning(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(tr=0)), - "less than minimum observed age") - expect_warning(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(Lr=0)), - "less than minimum observed length") - ## too few ages to estimate Linf - expect_error(vbStarts(length~age,data=subset(Kimura,age<3)), - "cannot be automatically determined") - } - if (require(FSAdata,quietly=TRUE)) { - data(SpottedSucker1,package="FSAdata") - ## gives warning about a poor estimate for K and Linf - sv <- list(Linf=max(SpottedSucker1$tl),K=0.3,t0=0) - vbStarts(tl~age,data=SpottedSucker1,param="typical") %>% - expect_warning("Starting value for Linf is very different from the observed") %>% - expect_warning("The suggested starting value for K is negative") - ## too few ages to estimate Linf - expect_error(vbStarts(tl~age,data=subset(SpottedSucker1,age<5)), - "cannot be automatically determined") - } + data(Kimura,package="fishmethods") + ## Asked for a dynamicPlot, which now does not exist + expect_warning(vbStarts(length~age,data=Kimura,dynamicPlot=TRUE), + "functionality has been moved to") + ## 
wrong types + expect_error(vbStarts(length~age,data=Kimura,param="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,type="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,param="Francis",methEV="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,param="Schnute",methEV="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,param="typical",meth0="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,param="original",meth0="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,methLinf="Derek"), + "should be one of") + expect_error(vbStarts(length~age,data=Kimura,methLinf="oldAge",num4Linf=-1), + "must be at least 1") + expect_error(vbStarts(length~age,data=Kimura,methLinf="longFish",num4Linf=-1), + "must be at least 1") + expect_error(vbStarts(length~age,data=Kimura,methLinf="oldAge",num4Linf=30), + "less than the number of observed ages") + expect_error(vbStarts(length~age,data=Kimura,methLinf="longFish",num4Linf=500), + "less than the number of recorded lengths") + ## Two variables on LHS + expect_error(vbStarts(length+age~age,data=Kimura,param="typical"), + "more than one variable on the LHS") + ## Two variables on RHS + expect_error(vbStarts(length~age+sex,data=Kimura,param="typical"), + "must have only one RHS variable") + ## LHS is a factor + expect_error(vbStarts(sex~age,data=Kimura,param="typical"), + "LHS variable must be numeric") + ## RHS is a factor + expect_error(vbStarts(length~sex,data=Kimura,param="typical"), + "RHS variable must be numeric") + ## not two ages2use given + expect_error(vbStarts(length~age,data=Kimura,param="Francis",ages2use=2), + "have only two ages") + expect_error(vbStarts(length~age,data=Kimura,param="Francis", + ages2use=c(2,5,10)), + "have only two ages") + expect_error(vbStarts(length~age,data=Kimura,param="Schnute",ages2use=2), + "have only two ages") + 
expect_error(vbStarts(length~age,data=Kimura,param="Schnute", + ages2use=c(2,5,10)), + "have only two ages") + ## ages2use in wrong order + expect_warning(vbStarts(length~age,data=Kimura,param="Francis", + ages2use=c(10,2)), + "order reversed to continue") + expect_warning(vbStarts(length~age,data=Kimura,param="Schnute", + ages2use=c(10,2)), + "order reversed to continue") + ## problems with fixed argument + expect_error(vbStarts(length~age,data=Kimura,fixed=c(Linf=3)), + "must be a list") + expect_error(vbStarts(length~age,data=Kimura,fixed=list(Linf=3,7)), + "must be named") + ## problems with valOgle argument + expect_error(vbStarts(length~age,data=Kimura,param="Ogle"), + "must contain a value for 'Lr' or 'tr'") + expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=3), + "must be a named vector") + expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle="3"), + "must be numeric") + expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(3,4)), + "must contain only one value") + expect_error(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(a=3)), + "must be 'Lr' or 'tr'") + expect_warning(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(tr=0)), + "less than minimum observed age") + expect_warning(vbStarts(length~age,data=Kimura,param="Ogle",valOgle=c(Lr=0)), + "less than minimum observed length") + ## too few ages to estimate Linf + expect_error(vbStarts(length~age,data=subset(Kimura,age<3)), + "cannot be automatically determined") + + data(SpottedSucker1,package="FSAdata") + ## gives warning about a poor estimate for K and Linf + sv <- list(Linf=max(SpottedSucker1$tl),K=0.3,t0=0) + vbStarts(tl~age,data=SpottedSucker1,param="typical") %>% + expect_warning("Starting value for Linf is very different from the observed") %>% + expect_warning("The suggested starting value for K is negative") + ## too few ages to estimate Linf + expect_error(vbStarts(tl~age,data=subset(SpottedSucker1,age<5)), + "cannot be automatically 
determined") }) @@ -314,80 +311,78 @@ test_that("vbFuns() and vbStarts() fit to AIFFD book (Box 5.4) results (SAS).",{ test_that("vbFuns() and vbStarts() fit to Kimura separated by sex match fishmethods (and Kimura) results.",{ - if (require(lmtest,quietly=TRUE)) { - data(Kimura,package="fishmethods") - - ### get fishmethods results (straight from example) - fm1 <- growthlrt(len=Kimura$length,age=Kimura$age,group=Kimura$sex, - error=2,select=1) - fm1$results - - ### fit with my methods - ## Is this the same as fishmethods results for Ho vs H4 - # Common model - vbCom <- length~Linf*(1-exp(-K*(age-t0))) - svCom <- vbStarts(length~age,data=Kimura) - fitCom <- nls(vbCom,data=Kimura,start=svCom) - # General model - vbGen <- length~Linf[sex]*(1-exp(-K[sex]*(age-t0[sex]))) - svGen <- lapply(svCom,rep,2) - fitGen <- nls(vbGen,data=Kimura,start=svGen) - # LRT - lr04 <- lrtest(fitCom,fitGen) - expect_equal(lr04$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H4"]) - expect_equal(round(lr04$Chisq[2],2), - round(fm1$results$chisq[fm1$results$tests=="Ho vs H4"],2)) - expect_equal(round(lr04$'Pr(>Chisq)'[2],3), - round(fm1$results$p[fm1$results$tests=="Ho vs H4"],3)) - ## Is this the same as fishmethods results for Ho vs H3 - vb2LK <- length~Linf[sex]*(1-exp(-K[sex]*(age-t0))) - sv2LK <- mapply(rep,svCom,c(2,2,1)) - fit2LK <- nls(vb2LK,data=Kimura,start=sv2LK) - lr03 <- lrtest(fit2LK,fitGen) - expect_equal(lr03$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H3"]) - expect_equal(round(lr03$Chisq[2],2), - round(fm1$results$chisq[fm1$results$tests=="Ho vs H3"],2)) - # only to two decimals in p-value (likely just rounding error) - expect_equal(round(lr03$'Pr(>Chisq)'[2],2), - round(fm1$results$p[fm1$results$tests=="Ho vs H3"],2)) - - ## Is this the same as fishmethods results for Ho vs H2 - vb2Lt <- length~Linf[sex]*(1-exp(-K*(age-t0[sex]))) - sv2Lt <- mapply(rep,svCom,c(2,1,2)) - fit2Lt <- nls(vb2Lt,data=Kimura,start=sv2Lt) - lr02 <- lrtest(fit2Lt,fitGen) - 
expect_equal(lr02$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H2"]) - expect_equal(round(lr02$Chisq[2],2), - round(fm1$results$chisq[fm1$results$tests=="Ho vs H2"],2)) - expect_equal(round(lr02$'Pr(>Chisq)'[2],3), - round(fm1$results$p[fm1$results$tests=="Ho vs H2"],3)) - - ## Is this the same as fishmethods results for Ho vs H1 - vb2Kt <- length~Linf*(1-exp(-K[sex]*(age-t0[sex]))) - sv2Kt <- mapply(rep,svCom,c(1,2,2)) - fit2Kt <- nls(vb2Kt,data=Kimura,start=sv2Kt) - lr01 <- lrtest(fit2Kt,fitGen) - expect_equal(lr01$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H1"]) - expect_equal(round(lr01$Chisq[2],2), - round(fm1$results$chisq[fm1$results$tests=="Ho vs H1"],2)) - expect_equal(round(lr01$'Pr(>Chisq)'[2],3), - round(fm1$results$p[fm1$results$tests=="Ho vs H1"],3)) - - ## Do parameter estimates match those in Kimura (Table 3) - # general model - expect_equal(round(coef(fitGen)[1:2],2),c(61.23,55.98),ignore_attr=TRUE) - expect_equal(round(coef(fitGen)[3:6],3),c(0.296,0.386,-0.057,0.171),ignore_attr=TRUE) - # Linf equal model (H3) - expect_equal(round(coef(fit2Kt)[1],2),c(59.40),ignore_attr=TRUE) - expect_equal(round(coef(fit2Kt)[2:5],3),c(0.337,0.297,0.087,-0.111),ignore_attr=TRUE) - # K equal model (H2) Linf slightly off in 2nd decimal for 2nd value - expect_equal(round(coef(fit2Lt)[1:2],1),c(60.1,57.4),ignore_attr=TRUE) - expect_equal(round(coef(fit2Lt)[3:5],3),c(0.330,0.095,-0.021),ignore_attr=TRUE) - # t0 equal model (H1) - expect_equal(round(coef(fit2LK)[1:2],2),c(60.77,56.45),ignore_attr=TRUE) - expect_equal(round(coef(fit2LK)[3:5],3),c(0.313,0.361,0.057),ignore_attr=TRUE) - # common model (H4) - expect_equal(round(coef(fitCom),2),c(59.29,0.32,0.01),ignore_attr=TRUE) - } + data(Kimura,package="fishmethods") + + ### get fishmethods results (straight from example) + fm1 <- fishmethods::growthlrt(len=Kimura$length,age=Kimura$age,group=Kimura$sex, + error=2,select=1) + fm1$results + + ### fit with my methods + ## Is this the same as fishmethods results for Ho 
vs H4 + # Common model + vbCom <- length~Linf*(1-exp(-K*(age-t0))) + svCom <- vbStarts(length~age,data=Kimura) + fitCom <- nls(vbCom,data=Kimura,start=svCom) + # General model + vbGen <- length~Linf[sex]*(1-exp(-K[sex]*(age-t0[sex]))) + svGen <- lapply(svCom,rep,2) + fitGen <- nls(vbGen,data=Kimura,start=svGen) + # LRT + lr04 <- lmtest::lrtest(fitCom,fitGen) + expect_equal(lr04$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H4"]) + expect_equal(round(lr04$Chisq[2],2), + round(fm1$results$chisq[fm1$results$tests=="Ho vs H4"],2)) + expect_equal(round(lr04$'Pr(>Chisq)'[2],3), + round(fm1$results$p[fm1$results$tests=="Ho vs H4"],3)) + ## Is this the same as fishmethods results for Ho vs H3 + vb2LK <- length~Linf[sex]*(1-exp(-K[sex]*(age-t0))) + sv2LK <- mapply(rep,svCom,c(2,2,1)) + fit2LK <- nls(vb2LK,data=Kimura,start=sv2LK) + lr03 <- lmtest::lrtest(fit2LK,fitGen) + expect_equal(lr03$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H3"]) + expect_equal(round(lr03$Chisq[2],2), + round(fm1$results$chisq[fm1$results$tests=="Ho vs H3"],2)) + # only to two decimals in p-value (likely just rounding error) + expect_equal(round(lr03$'Pr(>Chisq)'[2],2), + round(fm1$results$p[fm1$results$tests=="Ho vs H3"],2)) + + ## Is this the same as fishmethods results for Ho vs H2 + vb2Lt <- length~Linf[sex]*(1-exp(-K*(age-t0[sex]))) + sv2Lt <- mapply(rep,svCom,c(2,1,2)) + fit2Lt <- nls(vb2Lt,data=Kimura,start=sv2Lt) + lr02 <- lmtest::lrtest(fit2Lt,fitGen) + expect_equal(lr02$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H2"]) + expect_equal(round(lr02$Chisq[2],2), + round(fm1$results$chisq[fm1$results$tests=="Ho vs H2"],2)) + expect_equal(round(lr02$'Pr(>Chisq)'[2],3), + round(fm1$results$p[fm1$results$tests=="Ho vs H2"],3)) + + ## Is this the same as fishmethods results for Ho vs H1 + vb2Kt <- length~Linf*(1-exp(-K[sex]*(age-t0[sex]))) + sv2Kt <- mapply(rep,svCom,c(1,2,2)) + fit2Kt <- nls(vb2Kt,data=Kimura,start=sv2Kt) + lr01 <- lmtest::lrtest(fit2Kt,fitGen) + 
expect_equal(lr01$Df[2],fm1$results$df[fm1$results$tests=="Ho vs H1"]) + expect_equal(round(lr01$Chisq[2],2), + round(fm1$results$chisq[fm1$results$tests=="Ho vs H1"],2)) + expect_equal(round(lr01$'Pr(>Chisq)'[2],3), + round(fm1$results$p[fm1$results$tests=="Ho vs H1"],3)) + + ## Do parameter estimates match those in Kimura (Table 3) + # general model + expect_equal(round(coef(fitGen)[1:2],2),c(61.23,55.98),ignore_attr=TRUE) + expect_equal(round(coef(fitGen)[3:6],3),c(0.296,0.386,-0.057,0.171),ignore_attr=TRUE) + # Linf equal model (H3) + expect_equal(round(coef(fit2Kt)[1],2),c(59.40),ignore_attr=TRUE) + expect_equal(round(coef(fit2Kt)[2:5],3),c(0.337,0.297,0.087,-0.111),ignore_attr=TRUE) + # K equal model (H2) Linf slightly off in 2nd decimal for 2nd value + expect_equal(round(coef(fit2Lt)[1:2],1),c(60.1,57.4),ignore_attr=TRUE) + expect_equal(round(coef(fit2Lt)[3:5],3),c(0.330,0.095,-0.021),ignore_attr=TRUE) + # t0 equal model (H1) + expect_equal(round(coef(fit2LK)[1:2],2),c(60.77,56.45),ignore_attr=TRUE) + expect_equal(round(coef(fit2LK)[3:5],3),c(0.313,0.361,0.057),ignore_attr=TRUE) + # common model (H4) + expect_equal(round(coef(fitCom),2),c(59.29,0.32,0.01),ignore_attr=TRUE) }) diff --git a/tests/testthat/testthat_catchCurve.R b/tests/testthat/testthat_catchCurve.R index f55f222a..813c5c92 100644 --- a/tests/testthat/testthat_catchCurve.R +++ b/tests/testthat/testthat_catchCurve.R @@ -406,36 +406,34 @@ test_that("catchCurve() and chapmanRobson() match Miranda & Bettoli (2007) boxes }) test_that("catchCurve() and chapmanRobson() match results from fishmethods package",{ - if (require(fishmethods,quietly=TRUE)) { - ## get data - data(rockbass,package="fishmethods") - ## fishmethods results - fm <- agesurv(age=rockbass$age,full=6)$results - ## FSA results - df <- data.frame(xtabs(~age,data=rockbass)) - df$age <- fact2num(df$age) - cc1 <- catchCurve(Freq~age,data=df) - scc1 <- summary(cc1) - cr1 <- chapmanRobson(Freq~age,data=df,zmethod="original") - scr1 <- 
summary(cr1) - cr2 <- chapmanRobson(Freq~age,data=df,zmethod="Smithetal") - scr2 <- summary(cr2) - # catchCurve results match - expect_equal(round(scc1["Z","Estimate"],3), - round(fm$Estimate[fm$Method=="Linear Regression" & fm$Parameter=="Z"],3)) - expect_equal(round(scc1["Z","Std. Error"],3), - round(fm$SE[fm$Method=="Linear Regression" & fm$Parameter=="Z"],3)) - # chapmanRobson results match - expect_equal(round(scr1["Z","Estimate"],3), - round(fm$Estimate[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"],3)) - expect_equal(round(scr1["Z","Std. Error"],3), - round(fm$SE[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"],3)) - # chapmanRobson (with Smith et al. (2012) bias corrections) - # results match for the point estimates but not the SE - # fishmethods appears to use eqn 5 from smith et al. for - # the uncorrected SE of Z, whereas FSA uses eqn 2 - expect_equal(round(scr2["Z","Estimate"],3), - round(fm$Estimate[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"],3)) - #expect_equal(round(scr2["Z","Std. Error"],3),round(fm$SE[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"]),3) - } + ## get data + data(rockbass,package="fishmethods") + ## fishmethods results + fm <- fishmethods::agesurv(age=rockbass$age,full=6)$results + ## FSA results + df <- data.frame(xtabs(~age,data=rockbass)) + df$age <- fact2num(df$age) + cc1 <- catchCurve(Freq~age,data=df) + scc1 <- summary(cc1) + cr1 <- chapmanRobson(Freq~age,data=df,zmethod="original") + scr1 <- summary(cr1) + cr2 <- chapmanRobson(Freq~age,data=df,zmethod="Smithetal") + scr2 <- summary(cr2) + # catchCurve results match + expect_equal(round(scc1["Z","Estimate"],3), + round(fm$Estimate[fm$Method=="Linear Regression" & fm$Parameter=="Z"],3)) + expect_equal(round(scc1["Z","Std. 
Error"],3), + round(fm$SE[fm$Method=="Linear Regression" & fm$Parameter=="Z"],3)) + # chapmanRobson results match + expect_equal(round(scr1["Z","Estimate"],3), + round(fm$Estimate[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"],3)) + expect_equal(round(scr1["Z","Std. Error"],3), + round(fm$SE[fm$Method=="Chapman-Robson" & fm$Parameter=="Z"],3)) + # chapmanRobson (with Smith et al. (2012) bias corrections) + # results match for the point estimates but not the SE + # fishmethods appears to use eqn 5 from smith et al. for + # the uncorrected SE of Z, whereas FSA uses eqn 2 + expect_equal(round(scr2["Z","Estimate"],3), + round(fm$Estimate[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"],3)) + #expect_equal(round(scr2["Z","Std. Error"],3),round(fm$SE[fm$Method=="Chapman-Robson CB" & fm$Parameter=="Z"]),3) }) diff --git a/tests/testthat/testthat_depletion.R b/tests/testthat/testthat_depletion.R index d4de64d2..eeb84fe3 100644 --- a/tests/testthat/testthat_depletion.R +++ b/tests/testthat/testthat_depletion.R @@ -1,54 +1,50 @@ ## Results for validation tests below ---- # fishmethods's Darter data -if (require(fishmethods,quietly=TRUE)) { - data(darter,package="fishmethods") - # fishmethod Leslie - deplet(catch=darter$catch,effort=darter$effort,method="l") - cf1fm <- l.out$results[,1:2] - ci1fm <- l.out$results[,3:4] - # FSA Leslie - ex1 <- with(darter,depletion(darter$catch,darter$effort)) - cf1 <- summary(ex1) - ci1 <- confint(ex1) - - # fishmethod DeLury - deplet(catch=darter$catch,effort=darter$effort,method="d") - cf2fm <- d.out$results[,1:2] - ci2fm <- d.out$results[,3:4] - # FSA Leslie - ex2 <- with(darter,depletion(darter$catch,darter$effort, - method="DeLury",Ricker.mod=TRUE)) - cf2 <- summary(ex2) - ci2 <- confint(ex2) -} +data(darter,package="fishmethods") +# fishmethod Leslie +fishmethods::deplet(catch=darter$catch,effort=darter$effort,method="l") +cf1fm <- l.out$results[,1:2] +ci1fm <- l.out$results[,3:4] +# FSA Leslie +ex1 <- 
with(darter,depletion(darter$catch,darter$effort)) +cf1 <- summary(ex1) +ci1 <- confint(ex1) + +# fishmethod DeLury +fishmethods::deplet(catch=darter$catch,effort=darter$effort,method="d") +cf2fm <- d.out$results[,1:2] +ci2fm <- d.out$results[,3:4] +# FSA DeLury +ex2 <- with(darter,depletion(darter$catch,darter$effort, + method="DeLury",Ricker.mod=TRUE)) +cf2 <- summary(ex2) +ci2 <- confint(ex2) # DeLury's Lobster Data -if (require(FSAdata,quietly=TRUE)) { - data(LobsterPEI,package="FSAdata") - df <- subset(LobsterPEI,day>16) - # fishmethod Leslie - deplet(catch=df$catch,effort=df$effort,method="l") - cf3fm <- l.out$results[,1:2] - ci3fm <- l.out$results[,3:4] - # FSA Leslie - ex3 <- with(df,depletion(catch,effort)) - cf3 <- summary(ex3) - ci3 <- confint(ex3) - - # fishmethod Leslie - deplet(catch=df$catch,effort=df$effort,method="d") - cf4fm <- d.out$results[,1:2] - ci4fm <- d.out$results[,3:4] - # FSA DeLury - ex4 <- with(df,depletion(catch,effort,method="DeLury",Ricker.mod=TRUE)) - cf4 <- summary(ex4) - ci4 <- confint(ex4) -} +data(LobsterPEI,package="FSAdata") +df <- subset(LobsterPEI,day>16) +# fishmethod Leslie +fishmethods::deplet(catch=df$catch,effort=df$effort,method="l") +cf3fm <- l.out$results[,1:2] +ci3fm <- l.out$results[,3:4] +# FSA Leslie +ex3 <- with(df,depletion(catch,effort)) +cf3 <- summary(ex3) +ci3 <- confint(ex3) + +# fishmethod DeLury +fishmethods::deplet(catch=df$catch,effort=df$effort,method="d") +cf4fm <- d.out$results[,1:2] +ci4fm <- d.out$results[,3:4] +# FSA DeLury +ex4 <- with(df,depletion(catch,effort,method="DeLury",Ricker.mod=TRUE)) +cf4 <- summary(ex4) +ci4 <- confint(ex4) # Fischler's Blue Crab data data(BlueCrab,package="FSAdata") # fishmethod Leslie -deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="l") +fishmethods::deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="l") cf5fm <- l.out$results[,1:2] ci5fm <- l.out$results[,3:4] # FSA Leslie @@ -57,7 +53,7 @@ cf5 <- summary(ex5) ci5 <- confint(ex5) # fishmethod 
DeLury -deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="d") +fishmethods::deplet(catch=BlueCrab$catch,effort=BlueCrab$effort,method="d") cf6fm <- d.out$results[,1:2] ci6fm <- d.out$results[,3:4] # FSA DeLury @@ -68,7 +64,7 @@ ci6 <- confint(ex6) # Omand's SMB data # fishmethod Leslie -deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="l") +fishmethods::deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="l") cf7fm <- l.out$results[,1:2] ci7fm <- l.out$results[,3:4] # FSA Leslie @@ -81,7 +77,7 @@ cf7r <- summary(ex7r) ci7r <- confint(ex7r) # fishmethod DeLury -deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="d") +fishmethods::deplet(catch=SMBassLS$catch,effort=SMBassLS$effort,method="d") cf8fm <- d.out$results[,1:2] ci8fm <- d.out$results[,3:4] # FSA DeLury diff --git a/tests/testthat/testthat_dunnTest.R b/tests/testthat/testthat_dunnTest.R index 70703059..3fecbf93 100644 --- a/tests/testthat/testthat_dunnTest.R +++ b/tests/testthat/testthat_dunnTest.R @@ -30,27 +30,25 @@ test_that("dunnTest() error and warning messages",{ ## Test Output Types ---- test_that("dunnTest() output",{ - if (require(dunn.test,quietly=TRUE)) { - ## Loop through all methods in p.adjustment.methods - lbls <- c("No Adjustment","Bonferroni","Sidak","Holm","Holm-Sidak", - "Hochberg","Benjamini-Hochberg","Benjamini-Yekuteili") - meths <- dunn.test::p.adjustment.methods - for (i in seq_along(meths)) { ## For two-sided cases - tmp <- dunnTest(pH~fpond,data=ponds,method=meths[i],two.sided=TRUE) - expect_true(is.list(tmp)) - expect_equal(names(tmp),c("method","res","dtres")) - expect_equal(tmp$method,lbls[i]) - expect_equal(class(tmp$res),"data.frame") - expect_equal(names(tmp$res),c("Comparison","Z","P.unadj","P.adj")) - } - for (i in seq_along(meths)) { ## For one-sided cases - tmp <- dunnTest(pH~fpond,data=ponds,method=meths[i],two.sided=FALSE) - expect_true(is.list(tmp)) - expect_equal(names(tmp),c("method","res","dtres")) - expect_equal(tmp$method,lbls[i]) - 
expect_equal(class(tmp$res),"data.frame") - expect_equal(names(tmp$res),c("Comparison","Z","P.unadj","P.adj")) - } + ## Loop through all methods in p.adjustment.methods + lbls <- c("No Adjustment","Bonferroni","Sidak","Holm","Holm-Sidak", + "Hochberg","Benjamini-Hochberg","Benjamini-Yekuteili") + meths <- dunn.test::p.adjustment.methods + for (i in seq_along(meths)) { ## For two-sided cases + tmp <- dunnTest(pH~fpond,data=ponds,method=meths[i],two.sided=TRUE) + expect_true(is.list(tmp)) + expect_equal(names(tmp),c("method","res","dtres")) + expect_equal(tmp$method,lbls[i]) + expect_equal(class(tmp$res),"data.frame") + expect_equal(names(tmp$res),c("Comparison","Z","P.unadj","P.adj")) + } + for (i in seq_along(meths)) { ## For one-sided cases + tmp <- dunnTest(pH~fpond,data=ponds,method=meths[i],two.sided=FALSE) + expect_true(is.list(tmp)) + expect_equal(names(tmp),c("method","res","dtres")) + expect_equal(tmp$method,lbls[i]) + expect_equal(class(tmp$res),"data.frame") + expect_equal(names(tmp$res),c("Comparison","Z","P.unadj","P.adj")) } }) @@ -76,23 +74,22 @@ test_that("dunnTest matches dunn.test results for ponds data",{ ## Loop through all methods in p.adjustment.methods for (m in dunn.test::p.adjustment.methods) { # for one-sided results tmp <- dunnTest(pH~fpond,data=ponds,method=m,two.sided=FALSE)$res$P.adj - junk <- utils::capture.output(tmp2 <- dunn.test(ponds$pH,ponds$fpond, - method=m)$P.adjusted) + junk <- utils::capture.output( + tmp2 <- dunn.test::dunn.test(ponds$pH,ponds$fpond,method=m)$P.adjusted) expect_equal(tmp,tmp2) } for (m in dunn.test::p.adjustment.methods) { # for two-sided results tmp <- dunnTest(pH~fpond,data=ponds,method=m,two.sided=TRUE)$res$P.adj junk <- utils::capture.output( - tmp2 <- dunn.test(ponds$pH,ponds$fpond, - method=m,altp=TRUE)$altP.adjusted) + tmp2 <- dunn.test::dunn.test(ponds$pH,ponds$fpond,method=m,altp=TRUE)$altP.adjusted) expect_equal(tmp,tmp2) } for (m in dunn.test::p.adjustment.methods) { # for one-sided results with 
missing data suppressWarnings( tmp <- dunnTest(pH~fpond,data=ponds2,method=m, two.sided=FALSE)$res$P.adj) - junk <- utils::capture.output(tmp2 <- dunn.test(ponds2$pH,ponds2$fpond, - method=m)$P.adjusted) + junk <- utils::capture.output( + tmp2 <- dunn.test::dunn.test(ponds2$pH,ponds2$fpond,method=m)$P.adjusted) expect_equal(tmp,tmp2) } for (m in dunn.test::p.adjustment.methods) { # for two-sided results with missing data @@ -100,8 +97,7 @@ test_that("dunnTest matches dunn.test results for ponds data",{ tmp <- dunnTest(pH~fpond,data=ponds2,method=m, two.sided=TRUE)$res$P.adj) junk <- utils::capture.output( - tmp2 <- dunn.test(ponds2$pH,ponds2$fpond, - method=m,altp=TRUE)$altP.adjusted) + tmp2 <- dunn.test::dunn.test(ponds2$pH,ponds2$fpond,method=m,altp=TRUE)$altP.adjusted) expect_equal(tmp,tmp2) } }) @@ -113,16 +109,15 @@ test_that("dunnTest matches dunn.test results for homecare data",{ tmp <- dunnTest(occupation~eligibility,data=homecare, method=m,two.sided=FALSE)$res$P.adj junk <- utils::capture.output( - tmp2 <- dunn.test(homecare$occupation,homecare$eligibility, - method=m)$P.adjusted) + tmp2 <- dunn.test::dunn.test(homecare$occupation,homecare$eligibility,method=m)$P.adjusted) expect_equal(tmp,tmp2) } for (m in dunn.test::p.adjustment.methods) { # for two-sided results tmp <- dunnTest(occupation~eligibility,data=homecare,method=m, two.sided=TRUE)$res$P.adj junk <- utils::capture.output( - tmp2 <- dunn.test(homecare$occupation,homecare$eligibility, - method=m,altp=TRUE)$altP.adjusted) + tmp2 <- dunn.test::dunn.test(homecare$occupation,homecare$eligibility, + method=m,altp=TRUE)$altP.adjusted) expect_equal(tmp,tmp2) } }) @@ -134,15 +129,15 @@ test_that("dunnTest matches dunn.test results for airquality data",{ suppressWarnings(tmp <- dunnTest(Ozone~Month,data=airquality, method=m,two.sided=FALSE)$res$P.adj) junk <- utils::capture.output( - tmp2 <- dunn.test(airquality$Ozone,airquality$Month,method=m)$P.adjusted) + tmp2 <- 
dunn.test::dunn.test(airquality$Ozone,airquality$Month,method=m)$P.adjusted) expect_equal(tmp,tmp2) } for (m in dunn.test::p.adjustment.methods) { # for two-sided results suppressWarnings(tmp <- dunnTest(Ozone~Month,data=airquality, method=m,two.sided=TRUE)$res$P.adj) junk <- utils::capture.output( - tmp2 <- dunn.test(airquality$Ozone,airquality$Month,method=m, - altp=TRUE)$altP.adjusted) + tmp2 <- dunn.test::dunn.test(airquality$Ozone,airquality$Month,method=m, + altp=TRUE)$altP.adjusted) expect_equal(tmp,tmp2) } }) diff --git a/tests/testthat/testthat_extraTests.R b/tests/testthat/testthat_extraTests.R index 17a94286..312f306f 100644 --- a/tests/testthat/testthat_extraTests.R +++ b/tests/testthat/testthat_extraTests.R @@ -19,14 +19,9 @@ nls.0 <- nls(y~rep(c,length(df$y)),data=df,start=list(c=10)) nls.1 <- nls(y~a*x+c,data=df,start=list(a=1,c=1)) nls.2 <- nls(y~b*x2+a*x+c,data=df,start=list(a=-1,b=0.3,c=10)) -if (suppressMessages(require(nlme,quietly=TRUE))) { - gls.0 <- gls(y~1,data=df,method="ML") - gls.1 <- gls(y~x,data=df,method="ML") - gls.2 <- gls(y~x+x2,data=df,method="ML") -} - -suppressMessages(library(lmtest)) - +gls.0 <- nlme::gls(y~1,data=df,method="ML") +gls.1 <- nlme::gls(y~x,data=df,method="ML") +gls.2 <- nlme::gls(y~x+x2,data=df,method="ML") ## Test Messages ---- test_that("extraSS() and lrt() messages",{ @@ -147,31 +142,30 @@ test_that("extraSS() computations",{ }) test_that("lrt() computations",{ - require(lmtest,quietly=TRUE) ## Two model lm comparisons tmp1 <- lrt(lm.0,com=lm.1) - tmp2 <- lrtest(lm.0,lm.1) + tmp2 <- lmtest::lrtest(lm.0,lm.1) expect_equal(tmp1[1,"Chisq"],tmp2[2,"Chisq"]) expect_equal(tmp1[1,"Df"],tmp2[2,"Df"]) expect_equal(tmp1[1,"logLikO"],tmp2[1,"LogLik"]) expect_equal(tmp1[1,"logLikA"],tmp2[2,"LogLik"]) ## Three model lm comparisons (only can compare to last) tmp1 <- lrt(lm.0,lm.1,com=lm.2) - tmp2 <- lrtest(lm.0,lm.1,lm.2) + tmp2 <- lmtest::lrtest(lm.0,lm.1,lm.2) expect_equal(tmp1[2,"Chisq"],tmp2[3,"Chisq"]) 
expect_equal(tmp1[2,"Df"],tmp2[3,"Df"]) expect_equal(tmp1[2,"logLikO"],tmp2[2,"LogLik"]) expect_equal(tmp1[2,"logLikA"],tmp2[3,"LogLik"]) ## Two model nls comparisons tmp1 <- lrt(nls.0,com=nls.1) - tmp2 <- lrtest(nls.0,nls.1) + tmp2 <- lmtest::lrtest(nls.0,nls.1) expect_equal(tmp1[1,"Chisq"],tmp2[2,"Chisq"]) expect_equal(tmp1[1,"Df"],tmp2[2,"Df"]) expect_equal(tmp1[1,"logLikO"],tmp2[1,"LogLik"]) expect_equal(tmp1[1,"logLikA"],tmp2[2,"LogLik"]) ## Three model nls comparisons (only can compare to last) tmp1 <- lrt(nls.0,nls.1,com=nls.2) - tmp2 <- lrtest(nls.0,nls.1,nls.2) + tmp2 <- lmtest::lrtest(nls.0,nls.1,nls.2) expect_equal(tmp1[2,"Chisq"],tmp2[3,"Chisq"]) expect_equal(tmp1[2,"Df"],tmp2[3,"Df"]) expect_equal(tmp1[2,"logLikO"],tmp2[2,"LogLik"]) diff --git a/tests/testthat/testthat_metaM.R b/tests/testthat/testthat_metaM.R index 5154dda0..331482e5 100644 --- a/tests/testthat/testthat_metaM.R +++ b/tests/testthat/testthat_metaM.R @@ -161,13 +161,12 @@ test_that("metaM() matches results from Kenchington (2014)",{ test_that("metaM() matches M.empirical() from fishmethods for Rio Formosa Seahorse",{ ## ALL MATCHES - if (require(fishmethods,quietly=TRUE)) { - meths <- c("PaulyL","PaulyW","HoenigO","HoenigOF","AlversonCarney","Gislason") - tmp <- metaM(meths,tmax=5.5,Linf=19.76,Winf=17.3,K=0.571,t0=-0.91, - b=3.276,t50=0.49,T=19,L=10) - tmp2 <- M.empirical(Linf=19.76,Winf=17.3,Kl=0.571,Kw=0.571,TC=19, - Bl=10,tmax=5.5,method=c(1:4,9)) - tmp <- data.frame(tmp,tmp2) - expect_equal(round(tmp$M,3),tmp$M.1) - } + meths <- c("PaulyL","PaulyW","HoenigO","HoenigOF","AlversonCarney","Gislason") + tmp <- metaM(meths,tmax=5.5,Linf=19.76,Winf=17.3,K=0.571,t0=-0.91, + b=3.276,t50=0.49,T=19,L=10) + tmp2 <- fishmethods::M.empirical(Linf=19.76,Winf=17.3,Kl=0.571,Kw=0.571,TC=19, + Bl=10,tmax=5.5,method=c(1:4,9)) + tmp <- data.frame(tmp,tmp2) + expect_equal(round(tmp$M,3),tmp$M.1) + }) diff --git a/tests/testthat/testthat_mrClosed.R b/tests/testthat/testthat_mrClosed.R index 
22b4ac00..3444c2f3 100644 --- a/tests/testthat/testthat_mrClosed.R +++ b/tests/testthat/testthat_mrClosed.R @@ -476,25 +476,23 @@ test_that("mrClosed match the Chapman results from Table 3.7 and 3.8 in Seber (2 }) test_that("mrClosed match the Chapman results from mrN.single() from fishmethods",{ - if (require(fishmethods,quietly=TRUE)) { - tmp1 <- mrN.single(M=948,C=421,R=167) - - tmp <- mrClosed(M=948,n=421,m=167,method="Chapman") - stmp <- summary(tmp,incl.SE=TRUE) - expect_equal(stmp[[1,"N"]], round(tmp1$N[1],0)) - expect_equal(stmp[[1,"SE"]], round(tmp1$SE[1],1)) - - ctmp <- confint(tmp,type="hypergeometric") - ## The CIs do not equal (<1%) ... fish methods uses qhyper - ## whereas FSA uses hyperCI - #expect_equal(ctmp[[1,"95% LCI"]], round(tmp1$LCI[1],0)) - #expect_equal(ctmp[[1,"95% UCI"]], round(tmp1$UCI[1],0)) - } + tmp1 <- fishmethods::mrN.single(M=948,C=421,R=167) + + tmp <- mrClosed(M=948,n=421,m=167,method="Chapman") + stmp <- summary(tmp,incl.SE=TRUE) + expect_equal(stmp[[1,"N"]], round(tmp1$N[1],0)) + expect_equal(stmp[[1,"SE"]], round(tmp1$SE[1],1)) + + ctmp <- confint(tmp,type="hypergeometric") + ## The CIs do not equal (<1%) ... fish methods uses qhyper + ## whereas FSA uses hyperCI + #expect_equal(ctmp[[1,"95% LCI"]], round(tmp1$LCI[1],0)) + #expect_equal(ctmp[[1,"95% UCI"]], round(tmp1$UCI[1],0)) }) test_that("mrClosed match the Bailey results from mrN.single() from fishmethods",{ - tmp1 <- mrN.single(M=948,C=421,R=167) + tmp1 <- fishmethods::mrN.single(M=948,C=421,R=167) tmp <- mrClosed(M=948,n=421,m=167,method="Bailey") stmp <- summary(tmp,incl.SE=TRUE) @@ -509,28 +507,26 @@ test_that("mrClosed match the Bailey results from mrN.single() from fishmethods" test_that("mrClosed match the Schnabel Results from p. 
32 Krebs (1989)",{ - if (require(FSAdata,quietly=TRUE)) { - data(SunfishIN,package="FSAdata") - - tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks, - method="Schnabel",chapman.mod=FALSE)) - stmp <- summary(tmp) - expect_equal(stmp[[1,"N"]], 448) - ## See if intermediate calculations match Krebs - expect_equal(tmp$N, 447.5) - expect_equal(tmp$sum.m, 24) # sum R in Krebs - expect_equal(tmp$sum.nM, 10740) # sum CM in Krebs - expect_equal(tmp$sum.nM2, 970296) # sum CM^2 in Krebs - expect_equal(tmp$sum.mM, 2294) # sum RM in Krebs - expect_equal(round(tmp$sum.m2dn,3), 7.745) # sum R^2/C in Krebs - ctmp <- confint(tmp,type="Poisson") - ## The CIs do not equal ... Krebs uses table, FSA uses poiCI (see below) - #expect_equal(ctmp[[1,"95% LCI"]], 310) - #expect_equal(ctmp[[1,"95% UCI"]], 720) - ptmp <- poiCI(tmp$sum.m) - #expect_equal(ptmp[[1,"95% LCI"]], 14.921) - #expect_equal(ptmp[[1,"95% UCI"]], 34.665) - } + data(SunfishIN,package="FSAdata") + + tmp <- with(SunfishIN,mrClosed(n=caught,m=recaps,R=retmarks, + method="Schnabel",chapman.mod=FALSE)) + stmp <- summary(tmp) + expect_equal(stmp[[1,"N"]], 448) + ## See if intermediate calculations match Krebs + expect_equal(tmp$N, 447.5) + expect_equal(tmp$sum.m, 24) # sum R in Krebs + expect_equal(tmp$sum.nM, 10740) # sum CM in Krebs + expect_equal(tmp$sum.nM2, 970296) # sum CM^2 in Krebs + expect_equal(tmp$sum.mM, 2294) # sum RM in Krebs + expect_equal(round(tmp$sum.m2dn,3), 7.745) # sum R^2/C in Krebs + ctmp <- confint(tmp,type="Poisson") + ## The CIs do not equal ... 
Krebs uses table, FSA uses poiCI (see below) + #expect_equal(ctmp[[1,"95% LCI"]], 310) + #expect_equal(ctmp[[1,"95% UCI"]], 720) + ptmp <- poiCI(tmp$sum.m) + #expect_equal(ptmp[[1,"95% LCI"]], 14.921) + #expect_equal(ptmp[[1,"95% UCI"]], 34.665) }) diff --git a/tests/testthat/testthat_nlsTracePlot.R b/tests/testthat/testthat_nlsTracePlot.R index dbb26a7b..a21675ff 100644 --- a/tests/testthat/testthat_nlsTracePlot.R +++ b/tests/testthat/testthat_nlsTracePlot.R @@ -25,20 +25,18 @@ test_that("nlsTracePlot() test output",{ expect_equal(mode(tmp),"numeric") expect_equal(ncol(tmp),3) # unsuccessful fit - if (require(FSAdata,quietly=TRUE)) { - data(BSkateGB,package="FSAdata") - wtr <- droplevels(subset(BSkateGB,season=="winter")) - bh1 <- srFuns() - trc <- capture.output(try( - expect_error(fit2 <- nls(recruits~bh1(spawners,a,b),wtr, - start=srStarts(recruits~spawners,data=wtr), - trace=TRUE)) - )) - tmp <- nlsTracePlot(trc,bh1,add=FALSE) - expect_equal(class(tmp),c("matrix","array")) - expect_equal(mode(tmp),"numeric") - expect_equal(ncol(tmp),2) - } + data(BSkateGB,package="FSAdata") + wtr <- droplevels(subset(BSkateGB,season=="winter")) + bh1 <- srFuns() + trc <- capture.output(try( + expect_error(fit2 <- nls(recruits~bh1(spawners,a,b),wtr, + start=srStarts(recruits~spawners,data=wtr), + trace=TRUE)) + )) + tmp <- nlsTracePlot(trc,bh1,add=FALSE) + expect_equal(class(tmp),c("matrix","array")) + expect_equal(mode(tmp),"numeric") + expect_equal(ncol(tmp),2) }) diff --git a/tests/testthat/testthat_removal.R b/tests/testthat/testthat_removal.R index 32711964..c6e9523b 100644 --- a/tests/testthat/testthat_removal.R +++ b/tests/testthat/testthat_removal.R @@ -297,18 +297,16 @@ test_that("removal with 'CarleStrub' matches Cowx (1983) page 77",{ }) test_that("removal with 'CarleStrub' match results from Jones & Stockwell (1995)",{ - if (require(FSAdata,quietly=TRUE)) { - data(JonesStockwell,package="FSAdata") - # isolate captures and Carle-Strub estimates ... 
for non-rejected estimates - JS.caps <- JonesStockwell[!JonesStockwell$rejected,4:6] - JS.cs <- JonesStockwell[!JonesStockwell$rejected,7] - # compute Carle-Strub estimates for all data in JS.caps - tmp <- apply(JS.caps,1,removal,just.ests=TRUE)["No",] - # Make a comparison matrix - compJS <- round(cbind(tmp,JS.cs,tmp-JS.cs,(tmp-JS.cs)/JS.cs*100),1) - # all values are within 3 - expect_true(all(abs(compJS[,3])<=3,na.rm=TRUE)) - } + data(JonesStockwell,package="FSAdata") + # isolate captures and Carle-Strub estimates ... for non-rejected estimates + JS.caps <- JonesStockwell[!JonesStockwell$rejected,4:6] + JS.cs <- JonesStockwell[!JonesStockwell$rejected,7] + # compute Carle-Strub estimates for all data in JS.caps + tmp <- apply(JS.caps,1,removal,just.ests=TRUE)["No",] + # Make a comparison matrix + compJS <- round(cbind(tmp,JS.cs,tmp-JS.cs,(tmp-JS.cs)/JS.cs*100),1) + # all values are within 3 + expect_true(all(abs(compJS[,3])<=3,na.rm=TRUE)) }) test_that("removal with 'Seber3' matches Cowx (1983) page 75",{ From e75418e4db9a427b56f07f083d5317d3248a36bb Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Mon, 30 Dec 2024 21:18:59 -0600 Subject: [PATCH 05/21] Fixed some minor errors in documentation related to brackets --- NEWS.md | 1 + R/bootstrap.R | 4 ++-- R/knitUtil.R | 6 +++--- R/psdCI.R | 2 +- man/boot.Rd | 2 +- man/knitUtil.Rd | 6 +++--- man/nlsBoot.Rd | 2 +- man/psdCI.Rd | 2 +- 8 files changed, 13 insertions(+), 12 deletions(-) diff --git a/NEWS.md b/NEWS.md index 1c1b8d8c..10c7a8d6 100644 --- a/NEWS.md +++ b/NEWS.md @@ -7,6 +7,7 @@ * Had to correct many tests where I expected just `matrix` but the class was `c("matrix","array")`. * Had to handle multiple warnings for some tests (see [this article](https://testthat.r-lib.org/articles/third-edition.html#warnings)). * Moved all `require()` in individual files to `testthat.R`. This removed many `require()` that were not needed. 
+* Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). diff --git a/R/bootstrap.R b/R/bootstrap.R index d732a63f..ed845555 100644 --- a/R/bootstrap.R +++ b/R/bootstrap.R @@ -15,7 +15,7 @@ #' @param parm A number or string that indicates which column of \code{object} contains the parameter estimates to use for the confidence interval or hypothesis test. #' @param conf.level A level of confidence as a proportion. #' @param level Same as \code{conf.level}. -#' @param plot A logical that indicates whether a plot should be constructed. If \code{confint} then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If code{htest} then a histogram of the \code{parm} parameters with a vertical line illustrating the \code{bo} value will be constructed. +#' @param plot A logical that indicates whether a plot should be constructed. If \code{confint} then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If \code{htest} then a histogram of the \code{parm} parameters with a vertical line illustrating the \code{bo} value will be constructed. #' @param err.col A single numeric or character that identifies the color for the error bars on the plot. #' @param err.lwd A single numeric that identifies the line width for the error bars on the plot. #' @param rows A single numeric that contains the number of rows to use on the graphic. 
@@ -165,7 +165,7 @@ hist.boot <- function(x,same.ylim=TRUE,ymax=NULL, #' @param parm An integer that indicates which parameter to compute the confidence interval or hypothesis test for. The confidence interval Will be computed for all parameters if \code{NULL}. #' @param conf.level A level of confidence as a proportion. #' @param level Same as \code{conf.level}. Used for compatibility with the main \code{confint}. -#' @param plot A logical that indicates whether a plot should be constructed. If \code{confint}, then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If code{htest}, then a histogram of the \code{parm} parameters with a vertical lines illustrating the \code{bo}value will be constructed. +#' @param plot A logical that indicates whether a plot should be constructed. If \code{confint}, then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If \code{htest}, then a histogram of the \code{parm} parameters with a vertical line illustrating the \code{bo} value will be constructed. #' @param err.col A single numeric or character that identifies the color for the error bars on the plot. #' @param err.lwd A single numeric that identifies the line width for the error bars on the plot. #' @param rows A numeric that contains the number of rows to use on the graphic. diff --git a/R/knitUtil.R b/R/knitUtil.R index d2adb1eb..bdffdd99 100644 --- a/R/knitUtil.R +++ b/R/knitUtil.R @@ -29,9 +29,9 @@ #' @param rqrdPkgs A string vector that contains packages that are required for the vignette and for which all dependencies should be found. #' @param elapsed A numeric, usually from \code{proc.time}, that is the time required to run the vignette. If \code{NULL} then this output will not be used. See the note below.
#' @param width A numeric that indicates the width to use for wrapping the reproducibility information when \code{out="r"}. -#' @param addTOC A logical that indicates whether or not a table of contents entry for the reproducibility section should be added to the LaTeX output. Used only if \R{out="latex"} -#' @param newpage A logical that indicates whether or not the reproducibility information should begin on a new page. Used only if \R{out="latex"} -#' @param links A named character vector that will add a links bullet to the reproducibility information. The names will be shown and the values are the links. Used only if \R{out="markdown}. +#' @param addTOC A logical that indicates whether or not a table of contents entry for the reproducibility section should be added to the LaTeX output. Used only if \code{out="latex"} +#' @param newpage A logical that indicates whether or not the reproducibility information should begin on a new page. Used only if \code{out="latex"} +#' @param links A named character vector that will add a links bullet to the reproducibility information. The names will be shown and the values are the links. Used only if \code{out="markdown"}. #' @param closeGraphics A logical that indicates whether the graphics device should be closed or not. #' @param ind An integer that indicates the CRAN mirror to use. Defaults to 1. #' @param \dots Additional arguments for the original \code{purl}. diff --git a/R/psdCI.R b/R/psdCI.R index 3eedfbc2..18a190bb 100644 --- a/R/psdCI.R +++ b/R/psdCI.R @@ -12,7 +12,7 @@ #' @param ptbl A numeric vector or array that contains the proportion or percentage of all individuals in each length category. See details. #' @param n A single numeric of the number of fish used to construct \code{ptbl}. #' @param method A string that identifies the confidence interval method to use. See details. 
-#' @param bin.type A string that identifies the type of method to use for calculation of the confidence intervals when \R{method="binomial"}. See details of \code{\link{binCI}}. +#' @param bin.type A string that identifies the type of method to use for calculation of the confidence intervals when \code{method="binomial"}. See details of \code{\link{binCI}}. #' @param conf.level A number that indicates the level of confidence to use for constructing confidence intervals (default is \code{0.95}). #' @param label A single string that can be used to label the row of the output matrix. #' @param digits A numeric that indicates the number of decimals to round the result to. diff --git a/man/boot.Rd b/man/boot.Rd index ce34f13d..46604ce9 100644 --- a/man/boot.Rd +++ b/man/boot.Rd @@ -54,7 +54,7 @@ \item{type}{Confidence interval type; types implemented are the "percentile" method, which uses the function quantile to return the appropriate quantiles for the confidence limit specified, the default bca which uses the bias-corrected and accelerated method presented by Efron and Tibshirani (1993, Chapter 14). For the other types, see the documentation for \code{\link[boot]{boot}}.} -\item{plot}{A logical that indicates whether a plot should be constructed. If \code{confint} then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If code{htest} then a histogram of the \code{parm} parameters with a vertical line illustrating the \code{bo} value will be constructed.} +\item{plot}{A logical that indicates whether a plot should be constructed. If \code{confint} then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. 
If \code{htest} then a histogram of the \code{parm} parameters with a vertical line illustrating the \code{bo} value will be constructed.} \item{err.col}{A single numeric or character that identifies the color for the error bars on the plot.} diff --git a/man/knitUtil.Rd b/man/knitUtil.Rd index 73694e2c..44cc44f7 100644 --- a/man/knitUtil.Rd +++ b/man/knitUtil.Rd @@ -72,11 +72,11 @@ reproInfo( \item{width}{A numeric that indicates the width to use for wrapping the reproducibility information when \code{out="r"}.} -\item{addTOC}{A logical that indicates whether or not a table of contents entry for the reproducibility section should be added to the LaTeX output. Used only if \R{out="latex"}} +\item{addTOC}{A logical that indicates whether or not a table of contents entry for the reproducibility section should be added to the LaTeX output. Used only if \code{out="latex"}} -\item{newpage}{A logical that indicates whether or not the reproducibility information should begin on a new page. Used only if \R{out="latex"}} +\item{newpage}{A logical that indicates whether or not the reproducibility information should begin on a new page. Used only if \code{out="latex"}} -\item{links}{A named character vector that will add a links bullet to the reproducibility information. The names will be shown and the values are the links. Used only if \R{out="markdown}.} +\item{links}{A named character vector that will add a links bullet to the reproducibility information. The names will be shown and the values are the links. Used only if \code{out="markdown"}.} \item{closeGraphics}{A logical that indicates whether the graphics device should be closed or not.} diff --git a/man/nlsBoot.Rd b/man/nlsBoot.Rd index ed5575ac..3731aad5 100644 --- a/man/nlsBoot.Rd +++ b/man/nlsBoot.Rd @@ -43,7 +43,7 @@ htest(object, ...) \item{conf.level}{A level of confidence as a proportion.} -\item{plot}{A logical that indicates whether a plot should be constructed. 
If \code{confint}, then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If code{htest}, then a histogram of the \code{parm} parameters with a vertical lines illustrating the \code{bo}value will be constructed.} +\item{plot}{A logical that indicates whether a plot should be constructed. If \code{confint}, then a histogram of the \code{parm} parameters from the bootstrap samples with error bars that illustrate the bootstrapped confidence intervals will be constructed. If \code{htest}, then a histogram of the \code{parm} parameters with a vertical line illustrating the \code{bo} value will be constructed.} \item{err.col}{A single numeric or character that identifies the color for the error bars on the plot.} diff --git a/man/psdCI.Rd b/man/psdCI.Rd index 147e86a7..94a9b567 100644 --- a/man/psdCI.Rd +++ b/man/psdCI.Rd @@ -24,7 +24,7 @@ psdCI( \item{method}{A string that identifies the confidence interval method to use. See details.} -\item{bin.type}{A string that identifies the type of method to use for calculation of the confidence intervals when \R{method="binomial"}. See details of \code{\link{binCI}}.} +\item{bin.type}{A string that identifies the type of method to use for calculation of the confidence intervals when \code{method="binomial"}.
See details of \code{\link{binCI}}.} \item{conf.level}{A number that indicates the level of confidence to use for constructing confidence intervals (default is \code{0.95}).} From e186a7b6dc9a4e68b521134ccd7e66331fcfdc2f Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Tue, 31 Dec 2024 08:04:45 -0600 Subject: [PATCH 06/21] Changed what= to method= in Mmethods() --- NEWS.md | 1 + R/metaM.R | 9 ++++----- man/metaM.Rd | 6 ++---- 3 files changed, 7 insertions(+), 9 deletions(-) diff --git a/NEWS.md b/NEWS.md index 10c7a8d6..6c925b9c 100644 --- a/NEWS.md +++ b/NEWS.md @@ -10,6 +10,7 @@ * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). +* `Mmethods()`: Changed `what=` to `method=` for simplicity with `metaM()`. # FSA 0.9.5 * Fixed FSA-package \alias problem using the "automatic approach" (i.e., adding a "_PACKAGE" line to FSA.R) suggested in an e-mail from Kurt Hornik on 19-Aug-2023. diff --git a/R/metaM.R b/R/metaM.R index eafe1849..49f3e3fa 100644 --- a/R/metaM.R +++ b/R/metaM.R @@ -25,8 +25,7 @@ #' \item \code{method="RikhterEfanov2"}: The \dQuote{Rikhter and Efanov (1976) equation (#1)} as given in the first column of page 541 of Kenchington (2014). Requires \code{t50}, \code{K}, \code{t0}, and \code{b}. #' } #' -#' @param what A string that indicates what grouping of methods to return. Defaults to returning all methods. -#' @param method A string that indicates which method or equation to use. See details. 
+#' @param method A string that indicates what grouping of methods to return (defaults to all methods) in \code{Mmethods()} or which methods or equations to use in \code{metaM()}. See details. #' @param justM A logical that indicates whether just the estimate of M (\code{TRUE}; Default) or a more descriptive list should be returned. #' @param tmax The maximum age for the population of fish. #' @param K The Brody growth coefficient from the fit of the von Bertalanffy growth function. @@ -130,8 +129,8 @@ #' #' @rdname metaM #' @export -Mmethods <- function(what=c("all","tmax","K","Hoenig","Pauly")) { - what <- match.arg(what) +Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly")) { + method <- match.arg(method) all_meth <- c("HoenigNLS","HoenigO","HoenigOF","HoenigOM","HoenigOC", "HoenigO2","HoenigO2F","HoenigO2M","HoenigO2C", "HoenigLM","HewittHoenig","tmax1", @@ -142,7 +141,7 @@ Mmethods <- function(what=c("all","tmax","K","Hoenig","Pauly")) { "RikhterEfanov1","RikhterEfanov2") H_meth <- all_meth[grep("Hoenig",all_meth)] P_meth <- - switch(what, + switch(method, all = { meths <- all_meth }, tmax = { meths <- c("tmax1",H_meth)}, K = { meths <- c("K1","K2","JensenK1","JensenK2")}, diff --git a/man/metaM.Rd b/man/metaM.Rd index 3997b43b..cface56d 100644 --- a/man/metaM.Rd +++ b/man/metaM.Rd @@ -6,7 +6,7 @@ \alias{print.metaM} \title{Estimate natural mortality from a variety of empirical methods.} \usage{ -Mmethods(what = c("all", "tmax", "K", "Hoenig", "Pauly")) +Mmethods(method = c("all", "tmax", "K", "Hoenig", "Pauly")) metaM( method = Mmethods(), @@ -25,9 +25,7 @@ metaM( \method{print}{metaM}(x, digits = 4, ...) } \arguments{ -\item{what}{A string that indicates what grouping of methods to return. Defaults to returning all methods.} - -\item{method}{A string that indicates which method or equation to use. 
See details.} +\item{method}{A string that indicates what grouping of methods to return (defaults to all methods) in \code{Mmethods()} or which methods or equations to use in \code{metaM()}. See details.} \item{justM}{A logical that indicates whether just the estimate of M (\code{TRUE}; Default) or a more descriptive list should be returned.} From fbd73f680dd3329a26e9cbfc11483cace8a163f1 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Tue, 31 Dec 2024 11:02:40 -0600 Subject: [PATCH 07/21] Changes to metaM() to address #114 --- NAMESPACE | 1 - NEWS.md | 10 ++- R/metaM.R | 148 ++++++++++++++++++++------------ man/metaM.Rd | 55 ++++++------ tests/testthat/testthat_metaM.R | 75 ++++++++-------- 5 files changed, 167 insertions(+), 122 deletions(-) diff --git a/NAMESPACE b/NAMESPACE index 2f2df002..cb6becc4 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -43,7 +43,6 @@ S3method(predict,boot) S3method(predict,nlsBoot) S3method(print,dunnTest) S3method(print,extraTest) -S3method(print,metaM) S3method(psdAdd,default) S3method(psdAdd,formula) S3method(rSquared,catchCurve) diff --git a/NEWS.md b/NEWS.md index 6c925b9c..e8f2d000 100644 --- a/NEWS.md +++ b/NEWS.md @@ -10,7 +10,15 @@ * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). -* `Mmethods()`: Changed `what=` to `method=` for simplicity with `metaM()`. +* `metaM()`: Modified to address [#114](https://github.com/fishR-Core-Team/FSA/issues/114). + * Returns data.frame rather than list. + * Added conditional mortality rate (cm) to returned data.frame (for use with `rFAMS`). 
+ * Removed `justM=` and its functionality (not needed with data.frame returned). + * Added `verbose=` to allow user to limit some of what is returned in data.frame. + * Removed `print.metaM()` method. + * Added Quinn and Deriso (1999), Peterson and Wroblewski (1984), and Chan and Watanabe (1989) methods from FAMS manual. These are probably only useful for comparison to FAMS results. + * Added an example for computing an average M or cm from multiple model results. +* `Mmethods()`: Modified. Changed `what=` to `method=` for simplicity with `metaM()`. # FSA 0.9.5 * Fixed FSA-package \alias problem using the "automatic approach" (i.e., adding a "_PACKAGE" line to FSA.R) suggested in an e-mail from Kurt Hornik on 19-Aug-2023. diff --git a/R/metaM.R b/R/metaM.R index 49f3e3fa..1be50fe5 100644 --- a/R/metaM.R +++ b/R/metaM.R @@ -5,7 +5,7 @@ #' @details One of several methods is chosen with \code{method}. The available methods can be seen with \code{Mmethods()} and are listed below with a brief description of where the equation came from. The sources (listed below) should be consulted for more specific information. #' \itemize{ #' \item \code{method="HoenigNLS"}: The \dQuote{modified Hoenig equation derived with a non-linear model} as described in Then \emph{et al.} (2015) on the third line of Table 3. This method was the preferred method suggested by Then \emph{et al.} (2015). Requires only \code{tmax}. -#' \item \code{method="PaulyLNoT"}: The \dQuote{modified Pauly length equation} as described on the sixth line of Table 3 in Then \emph{et al.} (2015). Then \emph{et al.} (2015) suggested that this is the preferred model if maximum age (tmax) information was not available. Requires \code{K} and \code{Linf}. +#' \item \code{method="PaulyLNoT"}: The \dQuote{modified Pauly length equation} as described on the sixth line of Table 3 in Then \emph{et al.} (2015). 
Then \emph{et al.} (2015) suggested that this is the preferred method if maximum age (tmax) information was not available. Requires \code{K} and \code{Linf}. #' \item \code{method="PaulyL"}: The \dQuote{Pauly (1980) equation using fish lengths} from his equation 11. This is the most commonly used method in the literature. Note that Pauly used common logarithms as used here but the model is often presented in other sources with natural logarithms. Requires \code{K}, \code{Linf}, and \code{T}. #' \item \code{method="PaulyW"}: The \dQuote{Pauly (1980) equation for weights} from his equation 10. Requires \code{K}, \code{Winf}, and \code{T}. #' \item \code{method="HoeingO"}, \code{method="HoeingOF"}, \code{method="HoeingOM"}, \code{method="HoeingOC"}: The original \dQuote{Hoenig (1983) composite}, \dQuote{fish}, \dQuote{mollusc}, and \dQuote{cetacean} (fit with OLS) equations from the second column on page 899 of Hoenig (1983). Requires only \code{tmax}. @@ -23,10 +23,14 @@ #' \item \code{method="ZhangMegreyD"}, \code{method="ZhangMegreyP"}: The \dQuote{Zhang and Megrey (2006) equation} as given in their equation 8 but modified for demersal or pelagic fish. Thus, the user must choose the fish type with \code{group}. Requires \code{tmax}, \code{K}, \code{t0}, \code{t50}, and \code{b}. #' \item \code{method="RikhterEfanov1"}: The \dQuote{Rikhter and Efanov (1976) equation (#2)} as given in the second column of page 541 of Kenchington (2014) and in Table 6.4 of Miranda and Bettoli (2007). Requires only \code{t50}. #' \item \code{method="RikhterEfanov2"}: The \dQuote{Rikhter and Efanov (1976) equation (#1)} as given in the first column of page 541 of Kenchington (2014). Requires \code{t50}, \code{K}, \code{t0}, and \code{b}. +#' \item \code{method="QuinnDeriso"}: The \dQuote{Quinn & Deriso (1999)} equation as given in the FAMS manual as equation 4:18. Requires \code{PS} and \code{tmax}. Included only for use with the \code{rFAMS} package.
+#' \item \code{method="ChanWatanabe"}: The \dQuote{Chan & Watanabe (1989)} equation as given in the FAMS manual as equation 4:24. As suggested in FAMS manual used \code{tmax} for final time and 1 as initial time. Requires \code{tmax}, \code{K}, and \code{t0}. Included only for use with \code{rFAMS} package. +#' \item \code{method="PetersonWroblewski"}: The \dQuote{Peterson & Wroblewski (1984)} equation as given in the FAMS manual as equation 4:22. As suggested in FAMS manual used \code{Winf} for weight. Requires \code{Winf}. Included only for use with \code{rFAMS} package. #' } -#' +#' +#' Conditional mortality (cm) is estimated from instantaneous natural mortality (M) with 1-exp(-M). It is returned with M here simply as a courtesy for those using the \code{rFAMS} package. +#' #' @param method A string that indicates what grouping of methods to return (defaults to all methods) in \code{Mmethods()} or which methods or equations to use in \code{metaM()}. See details. -#' @param justM A logical that indicates whether just the estimate of M (\code{TRUE}; Default) or a more descriptive list should be returned. #' @param tmax The maximum age for the population of fish. #' @param K The Brody growth coefficient from the fit of the von Bertalanffy growth function. #' @param Linf The asymptotic mean length (cm) from the fit of the von Bertalanffy growth function. @@ -36,23 +40,23 @@ #' @param Temp The temperature experienced by the fish (C). #' @param t50 The age (time) when half the fish in the population are mature. #' @param Winf The asymptotic mean weight (g) from the fit of the von Bertalanffy growth function. -#' @param x A \code{metaM} object returned from \code{metaM} when \code{justM=FALSE}. -#' @param digits A numeric that controls the number of digits printed for the estimate of M. -#' @param \dots Additional arguments for methods. Not implemented. +#' @param PS The proportion of the population that survive to \code{tmax}. Should usually be around 0.01 or 0.05. 
+#' @param verbose Logical for whether to include method name and given inputs in resultant data.frame. Defaults to \code{TRUE}. #' -#' @return \code{Mmethods} returns a character vector with a list of methods. If only one \code{method} is chosen then \code{metaM} returns a single numeric if \code{justM=TRUE} or, otherwise, a \code{metaM} object that is a list with the following items: +#' @return \code{Mmethods} returns a character vector with a list of methods. +#' +#' \code{metaM} returns a data.frame with the following items: #' \itemize{ +#' \item \code{M}: The estimated natural mortality rate. +#' \item \code{cm}: The estimated conditional natural mortality rate (computed directly from \code{M}). #' \item \code{method}: The name for the method within the function (as given in \code{method}). #' \item \code{name}: A more descriptive name for the method. -#' \item \code{givens}: A vector of values required by the method to estimate M. -#' \item \code{M}: The estimated natural mortality rate. +#' \item \code{givens}: A string that contains the input values required by the method to estimate M. #' } #' -#' If multiple \code{method}s are chosen then a data.frame is returned with the method name abbreviation in the \code{method} variable and the associated estimated M in the \code{M} variable. -#' -#' @section Testing: Kenchington (2014) provided life history parameters for several stocks and used many models to estimate M. I checked the calculations for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO} for \code{Hgroup="all"} and \code{Hgroup="fish"}, \code{HoenigO2} for \code{Hgroup="all"} and \code{Hgroup="fish"}, \code{"JensenK1"}, \code{"Gislason"}, \code{"AlversonCarney"}, \code{"Charnov"}, \code{"ZhangMegrey"}, \code{"RikhterEfanov1"}, and \code{"RikhterEfanov2"} methods for three stocks. All results perfectly matched Kenchington's results for Chesapeake Bay Anchovy and Rio Formosa Seahorse. 
For the Norwegian Fjord Lanternfish, all results perfectly matched Kenchington's results except for when \code{Hgroup="fish"} for both \code{HoenigO} and \code{HoenigO2}. +#' @section Testing: Kenchington (2014) provided life history parameters for several stocks and used many models to estimate M. I checked the calculations for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO}, \code{HoenigOF}, \code{HoenigO2}, \code{HoenigO2F}, \code{"JensenK1"}, \code{"Gislason"}, \code{"AlversonCarney"}, \code{"Charnov"}, \code{"ZhangMegrey"}, \code{"RikhterEfanov1"}, and \code{"RikhterEfanov2"} methods for three stocks. All results perfectly matched Kenchington's results for Chesapeake Bay Anchovy and Rio Formosa Seahorse. For the Norwegian Fjord Lanternfish, all results perfectly matched Kenchington's results except for \code{HoenigOF} and \code{HoenigO2F}. #' -#' Results for the Rio Formosa Seahorse data were also tested against results from \code{\link[fishmethods]{M.empirical}} from \pkg{fishmethods} for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO} for \code{Hgroup="all"} and \code{Hgroup="fish"}, \code{"Gislason"}, and \code{"AlversonCarney"} methods (the only methods in common between the two packages). All results matched perfectly. +#' Results for the Rio Formosa Seahorse data were also tested against results from \code{\link[fishmethods]{M.empirical}} from \pkg{fishmethods} for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO}, \code{HoenigOF}, \code{"Gislason"}, and \code{"AlversonCarney"} methods (the only methods in common between the two packages). All results matched perfectly. #' #' @author Derek H. 
Ogle, \email{DerekOgle51@gmail.com} #' @@ -88,7 +92,7 @@ #' #' @keywords manip #' -#' @aliases metaM print.metaM Mmethods +#' @aliases metaM Mmethods #' #' @examples #' ## List names for available methods @@ -97,9 +101,8 @@ #' #' ## Simple Examples #' metaM("tmax",tmax=20) -#' metaM("tmax",tmax=20,justM=FALSE) #' metaM("HoenigNLS",tmax=20) -#' metaM("HoenigNLS",tmax=20,justM=FALSE) +#' metaM("HoenigNLS",tmax=20,verbose=FALSE) #' #' ## Example Patagonian Sprat ... from Table 2 in Cerna et al. (2014) #' ## http://www.scielo.cl/pdf/lajar/v42n3/art15.pdf @@ -111,7 +114,6 @@ #' t50 <- t0-(1/K)*log(1-13.5/Linf) #' metaM("RikhterEfanov1",t50=t50) #' metaM("PaulyL",K=K,Linf=Linf,Temp=Temp) -#' metaM("PaulyL",K=K,Linf=Linf,Temp=Temp,justM=FALSE) #' metaM("HoenigNLS",tmax=tmax) #' metaM("HoenigO",tmax=tmax) #' metaM("HewittHoenig",tmax=tmax) @@ -123,13 +125,20 @@ #' #' ## Example of multiple methods using Mmethods #' # select some methods -#' metaM(Mmethods()[-c(15,20,22:24,26)],K=K,Linf=Linf,Temp=Temp,tmax=tmax,t50=t50) +#' metaM(Mmethods()[-c(15,20,22:24,26:29)],K=K,Linf=Linf,Temp=Temp,tmax=tmax,t50=t50) #' # select just the Hoenig methods #' metaM(Mmethods("Hoenig"),K=K,Linf=Linf,Temp=Temp,tmax=tmax,t50=t50) #' +#' ## Example of computing an average M +#' # select multiple models used in FAMS (example only, these are not best models) +#' ( res <- metaM(Mmethods("FAMS"),tmax=tmax,K=K,Linf=Linf,t0=t0, +#' Temp=Temp,PS=0.01,Winf=30) ) +#' ( meanM <- mean(res$M) ) +#' ( meancm <- mean(res$cm) ) +#' #' @rdname metaM #' @export -Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly")) { +Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly","FAMS")) { method <- match.arg(method) all_meth <- c("HoenigNLS","HoenigO","HoenigOF","HoenigOM","HoenigOC", "HoenigO2","HoenigO2F","HoenigO2M","HoenigO2C", @@ -138,7 +147,8 @@ Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly")) { "K1","K2","JensenK1","JensenK2","Gislason", "AlversonCarney","Charnov", 
"ZhangMegreyD","ZhangMegreyP", - "RikhterEfanov1","RikhterEfanov2") + "RikhterEfanov1","RikhterEfanov2", + "QuinnDeriso","ChanWatanabe","PetersonWroblewski") H_meth <- all_meth[grep("Hoenig",all_meth)] P_meth <- switch(method, @@ -146,35 +156,32 @@ Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly")) { tmax = { meths <- c("tmax1",H_meth)}, K = { meths <- c("K1","K2","JensenK1","JensenK2")}, Hoenig = { meths <- H_meth}, - Pauly = { meths <- all_meth[grep("Pauly",all_meth)] }) + Pauly = { meths <- all_meth[grep("Pauly",all_meth)] }, + FAMS = { meths <- c("QuinnDeriso","HoenigOF","JensenK1", + "PetersonWroblewski","PaulyL","ChanWatanabe")} + ) meths } #' @rdname metaM #' @export -metaM <- function(method=Mmethods(),justM=TRUE, +metaM <- function(method=Mmethods(), tmax=NULL,K=NULL,Linf=NULL,t0=NULL,b=NULL, - L=NULL,Temp=NULL,t50=NULL,Winf=NULL) { + L=NULL,Temp=NULL,t50=NULL,Winf=NULL,PS=NULL, + verbose=TRUE) { ## Get method or methods method <- match.arg(method,several.ok=TRUE) - ## If only one method then one call to metaM1 - if (length(method)==1) res <- metaM1(method,justM,tmax,K,Linf,t0,b,L,Temp,t50,Winf) - else { - ## If multiple methods then use apply to run all at once - if (justM) { - res <- apply(matrix(method),1,metaM1,justM,tmax,K,Linf,t0,b,L,Temp,t50,Winf) - ## put together as a data.frame to return - res <- data.frame(method,M=res,stringsAsFactors=FALSE) - } else { - for (i in method) print(metaM1(i,justM,tmax,K,Linf,t0,b,L,Temp,t50,Winf)) - res <- NULL - } - } + ## Use apply to run all methods at once (even if only one) + res <- lapply(method,metaM1,tmax,K,Linf,t0,b,L,Temp,t50,Winf,PS) + ## Put together as a data.frame to return + res <- as.data.frame(do.call(rbind,res)) + ## If not verbose then remove name and givens from data.frame + if (!verbose) res <- res[,!names(res) %in% c("name","givens")] ## Return the result res } -metaM1 <- function(method,justM,tmax,K,Linf,t0,b,L,Temp,t50,Winf,...) 
{ +metaM1 <- function(method,tmax,K,Linf,t0,b,L,Temp,t50,Winf,PS,...) { switch(method, tmax1 = { ## from Then et al. (2015), Table 3, 1st line name <- "Then et al. (2015) tmax equation" @@ -335,27 +342,51 @@ metaM1 <- function(method,justM,tmax,K,Linf,t0,b,L,Temp,t50,Winf,...) { name <- "Rikhter & Efanov (1976) equation #2" givens <- c(K=K,t0=t0,t50=t50,b=b) M <- (b*K)/(exp(K*(t50-t0))-1) }, + QuinnDeriso = { + ## from Quinn & Deriso (1999) as described in FAMS manual + ## equation 4:18 in FAMS manual + iCheck_PS(PS) + name <- "Quinn & Deriso (1999) from FAMS" + givens <- c(PS=PS,tmax=tmax) + M <- -log(PS)/tmax + }, + ChanWatanabe = { + ## from Chan and Watanabe (1989) as described in FAMS manual + ## equation 4:24 in FAMS manual + ## here followed FAMS notes and used ti=1 and tf=tmax + ti <- 1 # initial age + tf <- tmax # final age + iCheck_K(K) + iCheck_t0(t0) + iCheck_tmax(tmax) + name <- "Chan & Watanabe (1989) from FAMS" + givens <- c(tmax=tmax,K=K,t0=t0) + M <- (1/(tf-ti))*log((exp(-K*tf)-exp(-K*t0))/(exp(-K*ti)-exp(-K*t0))) + }, + PetersonWroblewski = { + ## From Peterson & Wroblewski (1984) as described in FAMS manual + ## equation 4:22 in FAMS manual + ## here followed FAMS notes and used W=Winf + W <- Winf + iCheck_Winf(Winf) + name <- "Peterson & Wroblewski (1984) from FAMS" + givens <- c(Winf=Winf) + M <- 1.92*(W^(-0.25)) + } ) # end switch() - ## Return just M result if justM=TRUE - if (justM) res <- M - else { - ## Otherwise a list with a class for printing - res <- list(method=method,name=name,givens=givens,M=M) - class(res) <- "metaM" - } - res + ## Make givens into a string + ### Round given values to default digits, and then convert to character + givens <- sapply(givens, + FUN=function(x) as.character(round(x,digits=getOption("digits")))) + ### Combine givens name with givens value + tmpgivens <- paste0(names(givens),"=",givens) + ### Separate multiple givens with a comma + if (length(givens)>1) tmpgivens <- paste(tmpgivens,collapse=", ") + + ##
Return data.frame + data.frame(M=M,cm=1-exp(-M),method=method,name=name,givens=tmpgivens) } -#' @rdname metaM -#' @export -print.metaM <- function(x,digits=4,...) { # nocov start - message("M=",round(x$M,digits)," as estimated with ",x$name) - tmp <- paste0(names(x$givens),"=",x$givens) - if (length(x$givens>1)) tmp <- paste(tmp,collapse=", ") - message(" with givens: ",tmp) -} # nocov end - - # ############################################################ # Internal methods # ############################################################ @@ -407,3 +438,10 @@ iCheck_b <- function(b) { if (is.null(b)) STOP("A value must be given to 'b'.") if (b<1 || b>5) WARN("'b' value seems unreasonable.") } + +iCheck_PS <- function(PS) { + if (is.null(PS)) STOP("A value must be given to 'PS'.") + if (PS<=0) STOP("'PS' must be greater than 0.") + if (PS>1) STOP("'PS' should be a proportion (e.g., 0.01).") + if (PS>0.1) WARN("'PS' value seems unreasonable (FAMS suggests 0.01 or 0.05).") +} diff --git a/man/metaM.Rd index cface56d..f69d8aee 100644 --- a/man/metaM.Rd +++ b/man/metaM.Rd @@ -3,14 +3,12 @@ \name{Mmethods} \alias{Mmethods} \alias{metaM} -\alias{print.metaM} \title{Estimate natural mortality from a variety of empirical methods.} \usage{ -Mmethods(method = c("all", "tmax", "K", "Hoenig", "Pauly")) +Mmethods(method = c("all", "tmax", "K", "Hoenig", "Pauly", "FAMS")) metaM( method = Mmethods(), - justM = TRUE, tmax = NULL, K = NULL, Linf = NULL, @@ -19,16 +17,14 @@ metaM( L = NULL, Temp = NULL, t50 = NULL, - Winf = NULL + Winf = NULL, + PS = NULL, + verbose = TRUE ) - -\method{print}{metaM}(x, digits = 4, ...) } \arguments{ \item{method}{A string that indicates what grouping of methods to return (defaults to all methods) in \code{Mmethods()} or which methods or equations to use in \code{metaM()}.
See details.} -\item{justM}{A logical that indicates whether just the estimate of M (\code{TRUE}; Default) or a more descriptive list should be returned.} - \item{tmax}{The maximum age for the population of fish.} \item{K}{The Brody growth coefficient from the fit of the von Bertalanffy growth function.} @@ -47,22 +43,21 @@ metaM( \item{Winf}{The asymptotic mean weight (g) from the fit of the von Bertalanffy growth function.} -\item{x}{A \code{metaM} object returned from \code{metaM} when \code{justM=FALSE}.} - -\item{digits}{A numeric that controls the number of digits printed for the estimate of M.} +\item{PS}{The proportion of the population that survive to \code{tmax}. Should usually be around 0.01 or 0.05.} -\item{\dots}{Additional arguments for methods. Not implemented.} +\item{verbose}{Logical for whether to include method name and given inputs in resultant data.frame. Defaults to \code{TRUE}.} } \value{ -\code{Mmethods} returns a character vector with a list of methods. If only one \code{method} is chosen then \code{metaM} returns a single numeric if \code{justM=TRUE} or, otherwise, a \code{metaM} object that is a list with the following items: +\code{Mmethods} returns a character vector with a list of methods. + +\code{metaM} returns a data.frame with the following items: \itemize{ + \item \code{M}: The estimated natural mortality rate. + \item \code{cm}: The estimated conditional natural mortality rate (computed directly from \code{M}). \item \code{method}: The name for the method within the function (as given in \code{method}). \item \code{name}: A more descriptive name for the method. - \item \code{givens}: A vector of values required by the method to estimate M. - \item \code{M}: The estimated natural mortality rate. + \item \code{givens}: A string that contains the input values required by the method to estimate M. 
} - -If multiple \code{method}s are chosen then a data.frame is returned with the method name abbreviation in the \code{method} variable and the associated estimated M in the \code{M} variable. } \description{ Several methods can be used to estimate natural mortality (M) from other types of data, including parameters from the von Bertalanffy growth equation, maximum age, and temperature. These relationships have been developed from meta-analyses of a large number of populations. Several of these methods are implemented in this function. @@ -71,7 +66,7 @@ Several methods can be used to estimate natural mortality (M) from other types One of several methods is chosen with \code{method}. The available methods can be seen with \code{Mmethods()} and are listed below with a brief description of where the equation came from. The sources (listed below) should be consulted for more specific information. \itemize{ \item \code{method="HoenigNLS"}: The \dQuote{modified Hoenig equation derived with a non-linear model} as described in Then \emph{et al.} (2015) on the third line of Table 3. This method was the preferred method suggested by Then \emph{et al.} (2015). Requires only \code{tmax}. - \item \code{method="PaulyLNoT"}: The \dQuote{modified Pauly length equation} as described on the sixth line of Table 3 in Then \emph{et al.} (2015). Then \emph{et al.} (2015) suggested that this is the preferred model if maximum age (tmax) information was not available. Requires \code{K} and \code{Linf}. + \item \code{method="PaulyLNoT"}: The \dQuote{modified Pauly length equation} as described on the sixth line of Table 3 in Then \emph{et al.} (2015). Then \emph{et al.} (2015) suggested that this is the preferred method if maximum age (tmax) information was not available. Requires \code{K} and \code{Linf}. \item \code{method="PaulyL"}: The \dQuote{Pauly (1980) equation using fish lengths} from his equation 11. This is the most commonly used method in the literature.
Note that Pauly used common logarithms as used here but the model is often presented in other sources with natural logarithms. Requires \code{K}, \code{Linf}, and \code{T}. \item \code{method="PaulyW"}: The \dQuote{Pauly (1980) equation for weights} from his equation 10. Requires \code{K}, \code{Winf}, and \code{T}. \item \code{method="HoenigO"}, \code{method="HoenigOF"}, \code{method="HoenigOM"}, \code{method="HoenigOC"}: The original \dQuote{Hoenig (1983) composite}, \dQuote{fish}, \dQuote{mollusc}, and \dQuote{cetacean} (fit with OLS) equations from the second column on page 899 of Hoenig (1983). Requires only \code{tmax}. @@ -89,12 +84,17 @@ One of several methods is chosen with \code{method}. The available methods can b \item \code{method="ZhangMegreyD"}, \code{method="ZhangMegreyP"}: The \dQuote{Zhang and Megrey (2006) equation} as given in their equation 8 but modified for demersal or pelagic fish. Thus, the user must choose the fish type with \code{group}. Requires \code{tmax}, \code{K}, \code{t0}, \code{t50}, and \code{b}. \item \code{method="RikhterEfanov1"}: The \dQuote{Rikhter and Efanov (1976) equation (#2)} as given in the second column of page 541 of Kenchington (2014) and in Table 6.4 of Miranda and Bettoli (2007). Requires only \code{t50}. \item \code{method="RikhterEfanov2"}: The \dQuote{Rikhter and Efanov (1976) equation (#1)} as given in the first column of page 541 of Kenchington (2014). Requires \code{t50}, \code{K}, \code{t0}, and \code{b}. - } + \item \code{method="QuinnDeriso"}: The \dQuote{Quinn & Deriso (1999)} equation as given in the FAMS manual as equation 4:18. Requires \code{PS} and \code{tmax}. Included only for use with \code{rFAMS} package. + \item \code{method="ChanWatanabe"}: The \dQuote{Chan & Watanabe (1989)} equation as given in the FAMS manual as equation 4:24. As suggested in FAMS manual used \code{tmax} for final time and 1 as initial time. Requires \code{tmax}, \code{K}, and \code{t0}.
Included only for use with \code{rFAMS} package. + \item \code{method="PetersonWroblewski"}: The \dQuote{Peterson & Wroblewski (1984)} equation as given in the FAMS manual as equation 4:22. As suggested in FAMS manual used \code{Winf} for weight. Requires \code{Winf}. Included only for use with \code{rFAMS} package. + } + +Conditional mortality (cm) is estimated from instantaneous natural mortality (M) with 1-exp(-M). It is returned with M here simply as a courtesy for those using the \code{rFAMS} package. } \section{Testing}{ - Kenchington (2014) provided life history parameters for several stocks and used many models to estimate M. I checked the calculations for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO} for \code{Hgroup="all"} and \code{Hgroup="fish"}, \code{HoenigO2} for \code{Hgroup="all"} and \code{Hgroup="fish"}, \code{"JensenK1"}, \code{"Gislason"}, \code{"AlversonCarney"}, \code{"Charnov"}, \code{"ZhangMegrey"}, \code{"RikhterEfanov1"}, and \code{"RikhterEfanov2"} methods for three stocks. All results perfectly matched Kenchington's results for Chesapeake Bay Anchovy and Rio Formosa Seahorse. For the Norwegian Fjord Lanternfish, all results perfectly matched Kenchington's results except for when \code{Hgroup="fish"} for both \code{HoenigO} and \code{HoenigO2}. + Kenchington (2014) provided life history parameters for several stocks and used many models to estimate M. I checked the calculations for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO}, \code{HoenigOF}, \code{HoenigO2}, \code{HoenigO2F}, \code{"JensenK1"}, \code{"Gislason"}, \code{"AlversonCarney"}, \code{"Charnov"}, \code{"ZhangMegrey"}, \code{"RikhterEfanov1"}, and \code{"RikhterEfanov2"} methods for three stocks. All results perfectly matched Kenchington's results for Chesapeake Bay Anchovy and Rio Formosa Seahorse. For the Norwegian Fjord Lanternfish, all results perfectly matched Kenchington's results except for \code{HoenigOF} and \code{HoenigO2F}. 
-Results for the Rio Formosa Seahorse data were also tested against results from \code{\link[fishmethods]{M.empirical}} from \pkg{fishmethods} for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO} for \code{Hgroup="all"} and \code{Hgroup="fish"}, \code{"Gislason"}, and \code{"AlversonCarney"} methods (the only methods in common between the two packages). All results matched perfectly. +Results for the Rio Formosa Seahorse data were also tested against results from \code{\link[fishmethods]{M.empirical}} from \pkg{fishmethods} for the \code{PaulyL}, \code{PaulyW}, \code{HoenigO}, \code{HoenigOF}, \code{"Gislason"}, and \code{"AlversonCarney"} methods (the only methods in common between the two packages). All results matched perfectly. } \section{IFAR Chapter}{ @@ -108,9 +108,8 @@ Mmethods("tmax") ## Simple Examples metaM("tmax",tmax=20) -metaM("tmax",tmax=20,justM=FALSE) metaM("HoenigNLS",tmax=20) -metaM("HoenigNLS",tmax=20,justM=FALSE) +metaM("HoenigNLS",tmax=20,verbose=FALSE) ## Example Patagonian Sprat ... from Table 2 in Cerna et al. 
(2014) ## http://www.scielo.cl/pdf/lajar/v42n3/art15.pdf @@ -122,7 +121,6 @@ tmax <- t0+3/K t50 <- t0-(1/K)*log(1-13.5/Linf) metaM("RikhterEfanov1",t50=t50) metaM("PaulyL",K=K,Linf=Linf,Temp=Temp) -metaM("PaulyL",K=K,Linf=Linf,Temp=Temp,justM=FALSE) metaM("HoenigNLS",tmax=tmax) metaM("HoenigO",tmax=tmax) metaM("HewittHoenig",tmax=tmax) @@ -134,10 +132,17 @@ metaM(c("RikhterEfanov1","PaulyL","HoenigO","HewittHoenig","AlversonCarney"), ## Example of multiple methods using Mmethods # select some methods -metaM(Mmethods()[-c(15,20,22:24,26)],K=K,Linf=Linf,Temp=Temp,tmax=tmax,t50=t50) +metaM(Mmethods()[-c(15,20,22:24,26:29)],K=K,Linf=Linf,Temp=Temp,tmax=tmax,t50=t50) # select just the Hoenig methods metaM(Mmethods("Hoenig"),K=K,Linf=Linf,Temp=Temp,tmax=tmax,t50=t50) +## Example of computing an average M +# select multiple models used in FAMS (example only, these are not best models) +( res <- metaM(Mmethods("FAMS"),tmax=tmax,K=K,Linf=Linf,t0=t0, + Temp=Temp,PS=0.01,Winf=30) ) +( meanM <- mean(res$M) ) +( meancm <- mean(res$cm) ) + } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. 
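The three FAMS-manual estimators that this patch adds to `metaM()`, and the conditional-mortality conversion returned alongside M, are each one-line formulas. As a quick illustration, they are restated below in Python (a stand-alone sketch written for this note, not code from FSA, rFAMS, or the FAMS manual; the function names are invented here, and the Chen & Watanabe spelling follows the correction made later in this patch series):

```python
from math import exp, log

def quinn_deriso_M(PS, tmax):
    """FAMS eq. 4:18: M = -ln(PS)/tmax, with PS the proportion surviving to tmax."""
    return -log(PS) / tmax

def chen_watanabe_M(tmax, K, t0, ti=1):
    """FAMS eq. 4:24, using initial age ti=1 and final age tf=tmax as in the patch."""
    tf = tmax
    return (1 / (tf - ti)) * log((exp(-K * tf) - exp(-K * t0)) /
                                 (exp(-K * ti) - exp(-K * t0)))

def peterson_wroblewski_M(Winf):
    """FAMS eq. 4:22, using W = Winf (in g) as in the patch."""
    return 1.92 * Winf ** -0.25

def cm(M):
    """Conditional natural mortality, as in the data.frame returned by metaM()."""
    return 1 - exp(-M)
```

With PS=0.01 and tmax=10, for example, quinn_deriso_M() gives -ln(0.01)/10 ≈ 0.46, which cm() converts to roughly 0.37.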
diff --git a/tests/testthat/testthat_metaM.R b/tests/testthat/testthat_metaM.R index 331482e5..20dd4e24 100644 --- a/tests/testthat/testthat_metaM.R +++ b/tests/testthat/testthat_metaM.R @@ -60,69 +60,65 @@ test_that("metaM() output",{ ## Individual methods with justM for (i in meths) { expect_equal(class(metaM(i,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, - b=3.22,t50=0.83,T=17,L=3)),"numeric") - } - - ## Individual methods without justM - for (i in meths) { - suppressMessages( - tmp <- metaM(i,justM=FALSE,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, - b=3.22,t50=0.83,T=17,L=3) - ) - expect_equal(class(tmp),"metaM") - expect_equal(mode(tmp),"list") - expect_equal(tmp[["method"]],i) - expect_message(print(tmp)) + b=3.22,t50=0.83,T=17,L=3,PS=0.01)),"data.frame") } ## Multiple selected methods tmp <- metaM(meths[1:2],tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, - b=3.22,t50=0.83,T=17,L=3) + b=3.22,t50=0.83,T=17,L=3,PS=0.01) expect_equal(class(tmp),"data.frame") expect_equal(nrow(tmp),2) - expect_equal(names(tmp),c("method","M")) + expect_equal(names(tmp),c("M","cm","method","name","givens")) expect_equal(tmp$method,meths[1:2]) - expect_equal(ncol(tmp),2) + expect_equal(ncol(tmp),5) - tmp <- metaM(meths,tmax=3,Linf=12.93,Winf=20.79, - K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3) + tmp <- metaM(meths,tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, + b=3.22,t50=0.83,T=17,L=3,PS=0.01) expect_equal(class(tmp),"data.frame") expect_equal(nrow(tmp),length(meths)) - expect_equal(names(tmp),c("method","M")) + expect_equal(names(tmp),c("M","cm","method","name","givens")) expect_equal(tmp$method,meths) - expect_equal(ncol(tmp),2) - - expect_message(tmp <- metaM(meths,justM=FALSE,tmax=3,Linf=12.93,Winf=20.79, - K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3)) + expect_equal(ncol(tmp),5) - tmp <- metaM(Mmethods("tmax"),tmax=3,Linf=12.93,Winf=20.79, - K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3) + tmp <- metaM(Mmethods("tmax"),tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, + 
b=3.22,t50=0.83,T=17,L=3,PS=0.01) expect_equal(class(tmp),"data.frame") expect_equal(nrow(tmp),length(Mmethods("tmax"))) - expect_equal(names(tmp),c("method","M")) + expect_equal(names(tmp),c("M","cm","method","name","givens")) expect_equal(tmp$method,Mmethods("tmax")) - expect_equal(ncol(tmp),2) - tmp <- metaM(Mmethods("K"),tmax=3,Linf=12.93,Winf=20.79, - K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3) + expect_equal(ncol(tmp),5) + + tmp <- metaM(Mmethods("K"),tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, + b=3.22,t50=0.83,T=17,L=3,PS=0.01) expect_equal(class(tmp),"data.frame") expect_equal(nrow(tmp),length(Mmethods("K"))) - expect_equal(names(tmp),c("method","M")) + expect_equal(names(tmp),c("M","cm","method","name","givens")) expect_equal(tmp$method,Mmethods("K")) - expect_equal(ncol(tmp),2) - tmp <- metaM(Mmethods("Pauly"),tmax=3,Linf=12.93,Winf=20.79, - K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3) + expect_equal(ncol(tmp),5) + + tmp <- metaM(Mmethods("Pauly"),tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, + b=3.22,t50=0.83,T=17,L=3,PS=0.01) expect_equal(class(tmp),"data.frame") expect_equal(nrow(tmp),length(Mmethods("Pauly"))) - expect_equal(names(tmp),c("method","M")) + expect_equal(names(tmp),c("M","cm","method","name","givens")) expect_equal(tmp$method,Mmethods("Pauly")) - expect_equal(ncol(tmp),2) - tmp <- metaM(Mmethods("Hoenig"),tmax=3,Linf=12.93,Winf=20.79, - K=0.23,t0=-1.23,b=3.22,t50=0.83,T=17,L=3) + expect_equal(ncol(tmp),5) + + tmp <- metaM(Mmethods("Hoenig"),tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, + b=3.22,t50=0.83,T=17,L=3,PS=0.01) expect_equal(class(tmp),"data.frame") expect_equal(nrow(tmp),length(Mmethods("Hoenig"))) - expect_equal(names(tmp),c("method","M")) + expect_equal(names(tmp),c("M","cm","method","name","givens")) expect_equal(tmp$method,Mmethods("Hoenig")) - expect_equal(ncol(tmp),2) + expect_equal(ncol(tmp),5) + + tmp <- metaM(Mmethods("FAMS"),tmax=3,Linf=12.93,Winf=20.79,K=0.23,t0=-1.23, + b=3.22,t50=0.83,T=17,L=3,PS=0.01) + 
expect_equal(class(tmp),"data.frame") + expect_equal(nrow(tmp),length(Mmethods("FAMS"))) + expect_equal(names(tmp),c("M","cm","method","name","givens")) + expect_equal(tmp$method,Mmethods("FAMS")) + expect_equal(ncol(tmp),5) }) @@ -168,5 +164,4 @@ test_that("metaM() matches M.empirical() from fishmethods for Rio Formosa Seahor Bl=10,tmax=5.5,method=c(1:4,9)) tmp <- data.frame(tmp,tmp2) expect_equal(round(tmp$M,3),tmp$M.1) - }) From 0094f2d5e2a5b9dabdf36d168847a2bf5d3fe1ba Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Tue, 31 Dec 2024 13:38:55 -0600 Subject: [PATCH 08/21] Fixed Chan to Chen, added citations to metaM --- R/metaM.R | 20 ++++++++++++++------ man/metaM.Rd | 10 +++++++++- 2 files changed, 23 insertions(+), 7 deletions(-) diff --git a/R/metaM.R b/R/metaM.R index 1be50fe5..865f09dc 100644 --- a/R/metaM.R +++ b/R/metaM.R @@ -24,7 +24,7 @@ #' \item \code{method="RikhterEfanov1"}: The \dQuote{Rikhter and Efanov (1976) equation (#2)} as given in the second column of page 541 of Kenchington (2014) and in Table 6.4 of Miranda and Bettoli (2007). Requires only \code{t50}. #' \item \code{method="RikhterEfanov2"}: The \dQuote{Rikhter and Efanov (1976) equation (#1)} as given in the first column of page 541 of Kenchington (2014). Requires \code{t50}, \code{K}, \code{t0}, and \code{b}. #' \item \code{method="QuinnDeriso"}: The \dQuote{Quinn & Derison (1999)} equation as given in the FAMS manual as equation 4:18. Requires \code{PS} and \code{tmax}. Included only for use with \code{rFAMS} package. -#' \item \code{method="ChanWatanabe"}: The \dQuote{Chan & Watanabe (1989)} equation as given in the FAMS manual as equation 4:24. As suggested in FAMS manual used \code{tmax} for final time and 1 as initial time. Requires \code{tmax}, \code{K}, and \code{t0}. Included only for use with \code{rFAMS} package. +#' \item \code{method="ChenWatanabe"}: The \dQuote{Chen & Watanabe (1989)} equation as given in the FAMS manual as equation 4:24. 
As suggested in FAMS manual used \code{tmax} for final time and 1 as initial time. Requires \code{tmax}, \code{K}, and \code{t0}. Included only for use with \code{rFAMS} package. #' \item \code{method="PetersonWroblewski"}: The \dQuote{Peterson & Wroblewski (1984)} equation as given in the FAMS manual as equation 4:22. As suggested in FAMS manual used \code{Winf} for weight. Requires \code{Winf}. Included only for use with \code{rFAMS} package. #' } #' @@ -68,6 +68,8 @@ #' #' Alverson, D.L. and M.J. Carney. 1975. A graphic review of the growth and decay of population cohorts. Journal du Conseil International pour l'Exploration de la Mer. 36:133-143. #' +#' Chen, S. and S. Watanabe. 1989. Age dependence of natural mortality coefficient in fish population dynamics. Nippon Suisan Gakkaishi 55:205-208. +#' #' Charnov, E.L., H. Gislason, and J.G. Pope. 2013. Evolutionary assembly rules for fish life histories. Fish and Fisheries. 14:213-224. #' #' Gislason, H., N. Daan, J.C. Rice, and J.G. Pope. 2010. Size, growth, temperature and the natural mortality of marine fish. Fish and Fisheries 11:149-158. @@ -84,8 +86,14 @@ #' #' Pauly, D. 1980. On the interrelationships between natural mortality, growth parameters, and mean environmental temperature in 175 fish stocks. Journal du Conseil International pour l'Exploration de la Mer. 39:175-192. [Was (is?) from http://innri.unuftp.is/pauly/On\%20the\%20interrelationships\%20betwe.pdf.] #' +#' Peterson, I. and J.S. Wroblewski. 1984. Mortality rate of fishes in the pelagic ecosystem. Canadian Journal of Fisheries and Aquatic Sciences. 41:1117-1120. +#' +#' Quinn III, T.J. and R.B. Deriso. 1999. Quantitative Fish Dynamics. Oxford University Press, New York. +#' #' Rikhter, V.A., and V.N. Efanov. 1976. On one of the approaches for estimating natural mortality in fish populations (in Russian). ICNAF Research Document 76/IV/8, 12pp. #' +#' Slipke, J.W. and M.J. Maceina. 2013. Fisheries Analysis and Modeling Simulator (FAMS 1.64). 
American Fisheries Society. +#' #' Then, A.Y., J.M. Hoenig, N.G. Hall, and D.A. Hewitt. 2015. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species. ICES Journal of Marine Science. 72:82-92. #' #' Zhang, C-I and B.A. Megrey. 2006. A revised Alverson and Carney model for estimating the instantaneous rate of natural mortality. Transactions of the American Fisheries Society. 135-620-633. [Was (is?) from http://www.pmel.noaa.gov/foci/publications/2006/zhan0531.pdf.] @@ -148,7 +156,7 @@ Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly","FAMS")) { "AlversonCarney","Charnov", "ZhangMegreyD","ZhangMegreyP", "RikhterEfanov1","RikhterEfanov2", - "QuinnDeriso","ChanWatanabe","PetersonWroblewski") + "QuinnDeriso","ChenWatanabe","PetersonWroblewski") H_meth <- all_meth[grep("Hoenig",all_meth)] P_meth <- switch(method, @@ -158,7 +166,7 @@ Mmethods <- function(method=c("all","tmax","K","Hoenig","Pauly","FAMS")) { Hoenig = { meths <- H_meth}, Pauly = { meths <- all_meth[grep("Pauly",all_meth)] }, FAMS = { meths <- c("QuinnDeriso","HoenigOF","JensenK1", - "PetersonWroblewski","PaulyL","ChanWatanabe")} + "PetersonWroblewski","PaulyL","ChenWatanabe")} ) meths } @@ -350,8 +358,8 @@ metaM1 <- function(method,tmax,K,Linf,t0,b,L,Temp,t50,Winf,PS,...) { givens <- c(PS=PS,tmax=tmax) M <- -log(PS)/tmax }, - ChanWatanabe = { - ## from Chan and Watanabe (1989) as described in FAMS manual + ChenWatanabe = { + ## from Chen and Watanabe (1989) as described in FAMS manual ## equation 4:24 in FAMS manual ## here followed FAMS notes and used ti=1 and tf=tmax ti <- 1 # initial age @@ -359,7 +367,7 @@ metaM1 <- function(method,tmax,K,Linf,t0,b,L,Temp,t50,Winf,PS,...) 
{ iCheck_K(K) iCheck_t0(t0) iCheck_tmax(tmax) - name <- "Chan & Watanabe (1989) from FAMS" + name <- "Chen & Watanabe (1989) from FAMS" givens <- c(tmax=tmax,K=K,t0=t0) M <- (1/(tf-ti))*log((exp(-K*tf)-exp(-K*t0))/(exp(-K*ti)-exp(-K*t0))) }, diff --git a/man/metaM.Rd b/man/metaM.Rd index f69d8aee..4361c863 100644 --- a/man/metaM.Rd +++ b/man/metaM.Rd @@ -85,7 +85,7 @@ One of several methods is chosen with \code{method}. The available methods can b \item \code{method="RikhterEfanov1"}: The \dQuote{Rikhter and Efanov (1976) equation (#2)} as given in the second column of page 541 of Kenchington (2014) and in Table 6.4 of Miranda and Bettoli (2007). Requires only \code{t50}. \item \code{method="RikhterEfanov2"}: The \dQuote{Rikhter and Efanov (1976) equation (#1)} as given in the first column of page 541 of Kenchington (2014). Requires \code{t50}, \code{K}, \code{t0}, and \code{b}. \item \code{method="QuinnDeriso"}: The \dQuote{Quinn & Derison (1999)} equation as given in the FAMS manual as equation 4:18. Requires \code{PS} and \code{tmax}. Included only for use with \code{rFAMS} package. - \item \code{method="ChanWatanabe"}: The \dQuote{Chan & Watanabe (1989)} equation as given in the FAMS manual as equation 4:24. As suggested in FAMS manual used \code{tmax} for final time and 1 as initial time. Requires \code{tmax}, \code{K}, and \code{t0}. Included only for use with \code{rFAMS} package. + \item \code{method="ChenWatanabe"}: The \dQuote{Chen & Watanabe (1989)} equation as given in the FAMS manual as equation 4:24. As suggested in FAMS manual used \code{tmax} for final time and 1 as initial time. Requires \code{tmax}, \code{K}, and \code{t0}. Included only for use with \code{rFAMS} package. \item \code{method="PetersonWroblewski"}: The \dQuote{Peterson & Wroblewski (1984)} equation as given in the FAMS manual as equation 4:22. As suggested in FAMS manual used \code{Winf} for weight. Requires \code{Winf}. Included only for use with \code{rFAMS} package. 
} @@ -149,6 +149,8 @@ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html# Alverson, D.L. and M.J. Carney. 1975. A graphic review of the growth and decay of population cohorts. Journal du Conseil International pour l'Exploration de la Mer. 36:133-143. +Chen, S. and S. Watanabe. 1989. Age dependence of natural mortality coefficient in fish population dynamics. Nippon Suisan Gakkaishi 55:205-208. + Charnov, E.L., H. Gislason, and J.G. Pope. 2013. Evolutionary assembly rules for fish life histories. Fish and Fisheries. 14:213-224. Gislason, H., N. Daan, J.C. Rice, and J.G. Pope. 2010. Size, growth, temperature and the natural mortality of marine fish. Fish and Fisheries 11:149-158. @@ -165,8 +167,14 @@ Kenchington, T.J. 2014. Natural mortality estimators for information-limited fis Pauly, D. 1980. On the interrelationships between natural mortality, growth parameters, and mean environmental temperature in 175 fish stocks. Journal du Conseil International pour l'Exploration de la Mer. 39:175-192. [Was (is?) from http://innri.unuftp.is/pauly/On\%20the\%20interrelationships\%20betwe.pdf.] +Peterson, I. and J.S. Wroblewski. 1984. Mortality rate of fishes in the pelagic ecosystem. Canadian Journal of Fisheries and Aquatic Sciences. 41:1117-1120. + +Quinn III, T.J. and R.B. Deriso. 1999. Quantitative Fish Dynamics. Oxford University Press, New York. + Rikhter, V.A., and V.N. Efanov. 1976. On one of the approaches for estimating natural mortality in fish populations (in Russian). ICNAF Research Document 76/IV/8, 12pp. +Slipke, J.W. and M.J. Maceina. 2013. Fisheries Analysis and Modeling Simulator (FAMS 1.64). American Fisheries Society. + Then, A.Y., J.M. Hoenig, N.G. Hall, and D.A. Hewitt. 2015. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species. ICES Journal of Marine Science. 72:82-92. Zhang, C-I and B.A. Megrey. 2006. 
A revised Alverson and Carney model for estimating the instantaneous rate of natural mortality. Transactions of the American Fisheries Society. 135-620-633. [Was (is?) from http://www.pmel.noaa.gov/foci/publications/2006/zhan0531.pdf.] From d6fbf4f63a9c1553c50be5933f70cd81cca13a32 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Thu, 2 Jan 2025 09:44:28 -0600 Subject: [PATCH 09/21] Added formula notation to depletion() --- NAMESPACE | 2 ++ NEWS.md | 2 ++ R/depletion.R | 38 ++++++++++++++++++++++++++--- man/depletion.Rd | 31 ++++++++++++++++++----- tests/testthat/testthat_depletion.R | 11 ++++++++- 5 files changed, 74 insertions(+), 10 deletions(-) diff --git a/NAMESPACE b/NAMESPACE index cb6becc4..2718637e 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -20,6 +20,8 @@ S3method(confint,mrClosed2) S3method(confint,mrOpen) S3method(confint,nlsBoot) S3method(confint,removal) +S3method(depletion,default) +S3method(depletion,formula) S3method(dunnTest,default) S3method(dunnTest,formula) S3method(hist,boot) diff --git a/NEWS.md b/NEWS.md index e8f2d000..6491e294 100644 --- a/NEWS.md +++ b/NEWS.md @@ -9,6 +9,8 @@ * Moved all `require()` in individual files to `testthat.R`. This removed many `require()` that were not needed. * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. +* `depletion()`: Modified. + * Added formula notation such that `depletion()` was changed to a method call and `depletion.default()` and `depletion.formula()` were added. Tests for the formula were included. * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). 
* `metaM()`: Modified to address [#114](https://github.com/fishR-Core-Team/FSA/issues/114). * Returns data.frame rather than list. diff --git a/R/depletion.R b/R/depletion.R index 4bbcdd7c..0c5f96e0 100644 --- a/R/depletion.R +++ b/R/depletion.R @@ -8,7 +8,8 @@ #' #' Standard errors for the catchability and population size estimates are computed from formulas on page 298 (for Leslie) and 303 (for DeLury) from Seber (2002). Confidence intervals are computed using standard large-sample normal distribution theory with the regression error df. #' -#' @param catch A numeric vector of catches of fish at each time. +#' @param catch A numeric vector of catches of fish at each time, or a formula of the form \code{catch~effort}. +#' @param data A data.frame from which the variables in the \code{catch} formula can be found. Not used if \code{catch} is not a formula. #' @param effort A numeric vector of efforts expended at each time. #' @param method A single string that indicates which depletion method to use #' @param Ricker.mod A single logical that indicates whether to use the modification proposed by Ricker (=TRUE) or not (=FALSE, default). @@ -103,10 +104,41 @@ #' cbind(Est=coef(d2,parm="q"),confint(d2,parm="q")) #' plot(d2) #' +#' # with formula notation +#' l3 <- depletion(catch~effort,data=SMBassLS) +#' summary(l3) +#' +#' @rdname depletion +#' @export +depletion <- function(catch,...) { + UseMethod("depletion") +} + +#' @rdname depletion +#' @export +depletion.formula <- function(catch,data,method=c("Leslie","DeLury","Delury"), + Ricker.mod=FALSE,...) 
{ + ## Handle the formula and perform some checks + tmp <- iHndlFormula(catch,data,expNumR=1,expNumE=1) + if (!tmp$metExpNumR) + STOP("'depletion' must have only one LHS variable.") + if (!tmp$Rclass %in% c("numeric","integer")) + STOP("LHS variable (catch) must be numeric.") + if (!tmp$metExpNumE) + STOP("'depletion' must have only one RHS variable.") + if (!tmp$Eclass %in% c("numeric","integer")) + STOP("RHS variable (effort) must be numeric.") + ## Get variables from model frame + catch <- tmp$mf[,tmp$Rname] + effort <- tmp$mf[,tmp$Enames] + ## Call the default function + depletion.default(catch,effort,method=method,Ricker.mod=Ricker.mod) +} + #' @rdname depletion #' @export -depletion <- function(catch,effort,method=c("Leslie","DeLury","Delury"), - Ricker.mod=FALSE) { +depletion.default <- function(catch,effort,method=c("Leslie","DeLury","Delury"), + Ricker.mod=FALSE,...) { # check method, change bad spelling of DeLury if necessary method <- match.arg(method) if (method=="Delury") method <- "DeLury" diff --git a/man/depletion.Rd b/man/depletion.Rd index 263e2596..2f52a087 100644 --- a/man/depletion.Rd +++ b/man/depletion.Rd @@ -7,14 +7,27 @@ \alias{coef.depletion} \alias{anova.depletion} \alias{confint.depletion} +\alias{depletion.formula} +\alias{depletion.default} \alias{rSquared.depletion} \title{Computes the Leslie or DeLury population estimate from catch and effort data.} \usage{ -depletion( +depletion(catch, ...) + +\method{depletion}{formula}( + catch, + data, + method = c("Leslie", "DeLury", "Delury"), + Ricker.mod = FALSE, + ... +) + +\method{depletion}{default}( catch, effort, method = c("Leslie", "DeLury", "Delury"), - Ricker.mod = FALSE + Ricker.mod = FALSE, + ... ) \method{summary}{depletion}(object, parm = c("all", "both", "No", "q", "lm"), verbose = FALSE, ...) 
@@ -48,22 +61,24 @@ depletion( ) } \arguments{ -\item{catch}{A numeric vector of catches of fish at each time.} +\item{catch}{A numeric vector of catches of fish at each time, or a formula of the form \code{catch~effort}.} -\item{effort}{A numeric vector of efforts expended at each time.} +\item{\dots}{Additional arguments for methods.} + +\item{data}{A data.frame from which the variables in the \code{catch} formula can be found. Not used if \code{catch} is not a formula.} \item{method}{A single string that indicates which depletion method to use} \item{Ricker.mod}{A single logical that indicates whether to use the modification proposed by Ricker (=TRUE) or not (=FALSE, default).} +\item{effort}{A numeric vector of efforts expended at each time.} + \item{object}{An object saved from the \code{removal} call (i.e., of class \code{depletion}).} \item{parm}{A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered.} \item{verbose}{A logical that indicates whether a reminder of the method used should be printed with the summary results.} -\item{\dots}{Additional arguments for methods.} - \item{level}{Same as \code{conf.level} but used for compatibility with generic \code{confint} function.} \item{conf.level}{A single number that represents the level of confidence to use for constructing confidence intervals.} @@ -165,6 +180,10 @@ cbind(Est=coef(d2),confint(d2)) cbind(Est=coef(d2,parm="q"),confint(d2,parm="q")) plot(d2) +# with formula notation +l3 <- depletion(catch~effort,data=SMBassLS) +summary(l3) + } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. 
diff --git a/tests/testthat/testthat_depletion.R b/tests/testthat/testthat_depletion.R index eeb84fe3..29e86583 100644 --- a/tests/testthat/testthat_depletion.R +++ b/tests/testthat/testthat_depletion.R @@ -88,6 +88,11 @@ ci8 <- confint(ex8) ## Test Messages ---- test_that("depletion() messages",{ + ## bad formulae + expect_error(depletion(catch~effort+day,data=SMBassLS), + "only one RHS variable") + expect_error(depletion(catch+day~effort,data=SMBassLS), + "more than one variable on the LHS") ## wrong type expect_error(depletion(c(346,184,49),rep(7,3),method="Derek"), "should be one of") @@ -107,7 +112,7 @@ test_that("depletion() messages",{ ## too few catches expect_error(depletion(c(346,184),rep(7,2)), "Must have at least 3 values") - ## negative catchs or non-non-negative efforts + ## negative catches or non-non-negative efforts expect_error(depletion(c(346,184,-49),rep(7,3)), "must be non-negative") expect_error(depletion(c(346,184,49),c(7,3,-1)), @@ -146,7 +151,11 @@ test_that("depletion() messages",{ ## Test Output Types ---- test_that("depletion() output types",{ + # formula matches vector method + l1f <- depletion(catch~effort,data=SMBassLS,method="Leslie") l1 <- depletion(SMBassLS$catch,SMBassLS$effort,method="Leslie") + expect_equal(l1f,l1) + # Just check vector method after this expect_equal(class(l1),"depletion") # coef()s l1A <- coef(l1) From bdfd863617854f48e25884d6be0c2e4cb134533e Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Thu, 2 Jan 2025 10:59:54 -0600 Subject: [PATCH 10/21] Add as.df= to depletion extractor functions to address #111 --- NEWS.md | 3 +- R/depletion.R | 94 +++++++++++++++++++++++++---- man/depletion.Rd | 39 +++++++++++- tests/testthat/testthat_depletion.R | 41 +++++++++++++ 4 files changed, 162 insertions(+), 15 deletions(-) diff --git a/NEWS.md b/NEWS.md index 6491e294..0d4f6b7a 100644 --- a/NEWS.md +++ b/NEWS.md @@ -9,8 +9,9 @@ * Moved all `require()` in individual files to `testthat.R`. 
This removed many `require()` that were not needed. * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. -* `depletion()`: Modified. +* `depletion()`: Modified to address [#111](https://github.com/fishR-Core-Team/FSA/issues/111). * Added formula notation such that `depletion()` was changed to a method call and `depletion.default()` and `depletion.formula()` were added. Tests for the formula were included. + * Added `as.df=` to `coef.depletion()`, `confint.depletion()`, and `summary.depletion()` so that the result is returned as a data.frame when set to `TRUE` (default is `FALSE` to maintain backward compatibility). * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). * `metaM()`: Modified to address [#114](https://github.com/fishR-Core-Team/FSA/issues/114). * Returns data.frame rather than list. diff --git a/R/depletion.R b/R/depletion.R index 0c5f96e0..5ac7bc3f 100644 --- a/R/depletion.R +++ b/R/depletion.R @@ -15,6 +15,7 @@ #' @param Ricker.mod A single logical that indicates whether to use the modification proposed by Ricker (=TRUE) or not (=FALSE, default). #' @param object An object saved from the \code{removal} call (i.e., of class \code{depletion}). #' @param x An object saved from the \code{depletion} call (i.e., of class \code{depletion}). +#' @param as.df A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Ignored in \code{summary} if \code{parm="lm"}. #' @param verbose A logical that indicates whether a reminder of the method used should be printed with the summary results. 
#' @param parm A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered. #' @param conf.level A single number that represents the level of confidence to use for constructing confidence intervals. @@ -108,6 +109,32 @@ #' l3 <- depletion(catch~effort,data=SMBassLS) #' summary(l3) #' +#' # summarizing by group (requires dplyr package) +#' # Dummy example data (grp="A" is SMBassLS example ... just FYI) +#' tmpdf <- data.frame(ct=c(131,69,99,78,56,76,49,42,63,47, +#' 117,75,87,67,58,67,42), +#' ft=c(7,7,7,7,7,7,7,7,7,7, +#' 5,7,5,5,4,6,5), +#' grp=as.factor(c("A","A","A","A","A","A","A","A","A","A", +#' "B","B","B","B","B","B","B"))) +#' +#' if (require(dplyr)) { +#' res1 <- tmpdf %>% +#' dplyr::group_by(grp) %>% +#' dplyr::group_modify(~summary(depletion(ct~ft,data=.x),as.df=TRUE)) %>% +#' as.data.frame() # removes tibble and grouping structure +#' res1 +#' +#' res2 <- tmpdf %>% +#' dplyr::group_by(grp) %>% +#' dplyr::group_modify(~confint(depletion(ct~ft,data=.x),as.df=TRUE)) %>% +#' as.data.frame() # removes tibble and grouping structure +#' res2 +#' +#' res <- dplyr::left_join(res1,res2,by="grp") +#' res +#' } +#' #' @rdname depletion #' @export depletion <- function(catch,...) { @@ -238,27 +265,48 @@ iCheckRegSig <- function(tmp) { #' @rdname depletion #' @export -summary.depletion <- function(object,parm=c("all","both","No","q","lm"),verbose=FALSE,...) { +summary.depletion <- function(object,parm=c("all","both","No","q","lm"), + verbose=FALSE,as.df=FALSE,...) { if (verbose) message("The ",object$method," method was used.") parm <- match.arg(parm) if(parm=="lm") { - tmp <- summary(object$lm,...) + res <- summary(object$lm,...) } else { - tmp <- object$est - if (!parm %in% c("all","both")) tmp <- tmp[parm,,drop=FALSE] + res <- object$est + if (parm %in% c("all","both")) { + if (as.df) { + res <- data.frame(No_Est=res[["No","Estimate"]], + No_SE=res[["No","Std. 
Err."]], + q_Est=res[["q","Estimate"]], + q_SE=res[["q","Std. Err."]]) + } + } else { + res <- res[parm,,drop=FALSE] + if (as.df) { + res <- data.frame(Est=res[[parm,"Estimate"]], + SE=res[[parm,"Std. Err."]]) + names(res) <- paste(parm,names(res),sep="_") + } + } } - tmp + res } #' @rdname depletion #' @export -coef.depletion <- function(object,parm=c("all","both","No","q","lm"),...) { +coef.depletion <- function(object,parm=c("all","both","No","q","lm"),as.df=FALSE,...) { parm <- match.arg(parm) if(parm=="lm") { tmp <- stats::coef(object$lm,...) + if(as.df) tmp <- data.frame(Intercept=tmp[["(Intercept)"]],K=tmp[["K"]]) } else { tmp <- object$est[,"Estimate"] - if (!parm %in% c("all","both")) tmp <- tmp[parm] + if(parm %in% c("all","both")) { + if (as.df) tmp <- data.frame(No=tmp[["No"]],q=tmp[["q"]]) + } else { + tmp <- tmp[parm] + if (as.df) tmp <- data.frame(No=tmp[[parm]]) + } } tmp } @@ -266,21 +314,43 @@ coef.depletion <- function(object,parm=c("all","both","No","q","lm"),...) { #' @rdname depletion #' @export confint.depletion <- function(object,parm=c("all","both","No","q","lm"), - level=conf.level,conf.level=0.95,...) { + level=conf.level,conf.level=0.95,as.df=FALSE,...) { parm <- match.arg(parm) ## Check on conf.level iCheckConfLevel(conf.level) - if (parm=="lm") res <- stats::confint(object$lm,level=conf.level) - else { + if (parm=="lm") { + res <- stats::confint(object$lm,level=conf.level) + colnames(res) <- iCILabel(conf.level) + if (as.df) { + res <- data.frame(Intercept_LCI=res[["(Intercept)","95% LCI"]], + Intercept_UCI=res[["(Intercept)","95% UCI"]], + K_LCI=res[["K","95% LCI"]], + K_UCI=res[["K","95% UCI"]]) + } + } else { t <- stats::qt(1-(1-conf.level)/2,summary(object$lm)$df[2]) tmp <- summary(object) res <- cbind(tmp[,"Estimate"]-t*tmp[,"Std. Err."], tmp[,"Estimate"]+t*tmp[,"Std. 
Err."]) - if (!parm %in% c("all","both")) res <- res[parm,,drop=FALSE] + colnames(res) <- iCILabel(conf.level) + if (parm %in% c("all","both")) { + if (as.df) { + res <- data.frame(No_LCI=res[["No","95% LCI"]], + No_UCI=res[["No","95% UCI"]], + q_LCI=res[["q","95% LCI"]], + q_UCI=res[["q","95% UCI"]]) + } + } else { + res <- res[parm,,drop=FALSE] + if (as.df) { + res <- data.frame(LCI=res[[parm,"95% LCI"]], + UCI=res[[parm,"95% UCI"]]) + names(res) <- paste(parm,names(res),sep="_") + } + } } - colnames(res) <- iCILabel(conf.level) res } diff --git a/man/depletion.Rd b/man/depletion.Rd index 2f52a087..ee796633 100644 --- a/man/depletion.Rd +++ b/man/depletion.Rd @@ -30,15 +30,22 @@ depletion(catch, ...) ... ) -\method{summary}{depletion}(object, parm = c("all", "both", "No", "q", "lm"), verbose = FALSE, ...) +\method{summary}{depletion}( + object, + parm = c("all", "both", "No", "q", "lm"), + verbose = FALSE, + as.df = FALSE, + ... +) -\method{coef}{depletion}(object, parm = c("all", "both", "No", "q", "lm"), ...) +\method{coef}{depletion}(object, parm = c("all", "both", "No", "q", "lm"), as.df = FALSE, ...) \method{confint}{depletion}( object, parm = c("all", "both", "No", "q", "lm"), level = conf.level, conf.level = 0.95, + as.df = FALSE, ... ) @@ -79,6 +86,8 @@ depletion(catch, ...) \item{verbose}{A logical that indicates whether a reminder of the method used should be printed with the summary results.} +\item{as.df}{A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. 
Ignored in \code{summary} if \code{parm="lm"}.} + \item{level}{Same as \code{conf.level} but used for compatibility with generic \code{confint} function.} \item{conf.level}{A single number that represents the level of confidence to use for constructing confidence intervals.} @@ -184,6 +193,32 @@ plot(d2) l3 <- depletion(catch~effort,data=SMBassLS) summary(l3) +# summarizing by group (requires dplyr package) +# Dummy example data (grp="A" is SMBassLS example ... just FYI) +tmpdf <- data.frame(ct=c(131,69,99,78,56,76,49,42,63,47, + 117,75,87,67,58,67,42), + ft=c(7,7,7,7,7,7,7,7,7,7, + 5,7,5,5,4,6,5), + grp=as.factor(c("A","A","A","A","A","A","A","A","A","A", + "B","B","B","B","B","B","B"))) + +if (require(dplyr)) { + res1 <- tmpdf \%>\% + dplyr::group_by(grp) \%>\% + dplyr::group_modify(~summary(depletion(ct~ft,data=.x),as.df=TRUE)) \%>\% + as.data.frame() # removes tibble and grouping structure + res1 + + res2 <- tmpdf \%>\% + dplyr::group_by(grp) \%>\% + dplyr::group_modify(~confint(depletion(ct~ft,data=.x),as.df=TRUE)) \%>\% + as.data.frame() # removes tibble and grouping structure + res2 + + res <- dplyr::left_join(res1,res2,by="grp") + res +} + } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. 
diff --git a/tests/testthat/testthat_depletion.R b/tests/testthat/testthat_depletion.R index 29e86583..406026de 100644 --- a/tests/testthat/testthat_depletion.R +++ b/tests/testthat/testthat_depletion.R @@ -163,16 +163,31 @@ test_that("depletion() output types",{ expect_equal(class(l1A),"numeric") expect_equal(length(l1A),2) expect_equal(names(l1A),c("No","q")) + l1A <- coef(l1,as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),2) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("No","q")) l1A <- coef(l1,parm="No") expect_true(is.vector(l1A)) expect_equal(class(l1A),"numeric") expect_equal(length(l1A),1) expect_equal(names(l1A),c("No")) + l1A <- coef(l1,parm="No",as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),1) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),"No") l1A <- coef(l1,parm="lm") expect_true(is.vector(l1A)) expect_equal(class(l1A),"numeric") expect_equal(length(l1A),2) expect_equal(names(l1A),c("(Intercept)","K")) + l1A <- coef(l1,parm="lm",as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),2) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("Intercept","K")) # confint()s l1A <- confint(l1) expect_true(is.matrix(l1A)) @@ -181,6 +196,11 @@ test_that("depletion() output types",{ expect_equal(ncol(l1A),2) expect_equal(rownames(l1A),c("No","q")) expect_equal(colnames(l1A),c("95% LCI","95% UCI")) + l1A <- confint(l1,as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),4) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("No_LCI","No_UCI","q_LCI","q_UCI")) l1A <- confint(l1,parm="No") expect_true(is.matrix(l1A)) expect_equal(mode(l1A),"numeric") @@ -188,6 +208,11 @@ test_that("depletion() output types",{ expect_equal(nrow(l1A),1) expect_equal(rownames(l1A),c("No")) expect_equal(colnames(l1A),c("95% LCI","95% UCI")) + l1A <- confint(l1,parm="No",as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),2) + expect_equal(nrow(l1A),1) + 
expect_equal(names(l1A),c("No_LCI","No_UCI")) l1A <- confint(l1,parm="lm") expect_true(is.matrix(l1A)) expect_equal(mode(l1A),"numeric") @@ -195,6 +220,11 @@ test_that("depletion() output types",{ expect_equal(ncol(l1A),2) expect_equal(rownames(l1A),c("(Intercept)","K")) expect_equal(colnames(l1A),c("95% LCI","95% UCI")) + l1A <- confint(l1,parm="lm",as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),4) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("Intercept_LCI","Intercept_UCI","K_LCI","K_UCI")) # summary()s l1A <- summary(l1) expect_true(is.matrix(l1A)) @@ -203,6 +233,11 @@ test_that("depletion() output types",{ expect_equal(ncol(l1A),2) expect_equal(rownames(l1A),c("No","q")) expect_equal(colnames(l1A),c("Estimate","Std. Err.")) + l1A <- summary(l1,as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),4) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("No_Est","No_SE","q_Est","q_SE")) l1A <- summary(l1,parm="No") expect_true(is.matrix(l1A)) expect_equal(mode(l1A),"numeric") @@ -210,6 +245,11 @@ test_that("depletion() output types",{ expect_equal(ncol(l1A),2) expect_equal(rownames(l1A),c("No")) expect_equal(colnames(l1A),c("Estimate","Std. 
Err.")) + l1A <- summary(l1,parm="No",as.df=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),2) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("No_Est","No_SE")) l1A <- summary(l1,parm="lm") expect_equal(class(l1A),"summary.lm") expect_equal(mode(l1A),"list") @@ -220,6 +260,7 @@ test_that("depletion() output types",{ # r-squared expect_true(is.numeric(rSquared(l1))) + ## did not check as.df=TRUE with method="DeLury" d1 <- depletion(SMBassLS$catch,SMBassLS$effort,method="DeLury") expect_equal(class(d1),"depletion") # coef()s From de71a6685d809fb724c5730828a568a0716f69fd Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Thu, 2 Jan 2025 14:53:03 -0600 Subject: [PATCH 11/21] Added incl.est= to confint.depletion() --- NEWS.md | 1 + R/depletion.R | 117 +++++++++++++--------------- man/depletion.Rd | 55 +++++-------- tests/testthat/testthat_depletion.R | 41 +++++++++ 4 files changed, 116 insertions(+), 98 deletions(-) diff --git a/NEWS.md b/NEWS.md index 0d4f6b7a..e5b7cb68 100644 --- a/NEWS.md +++ b/NEWS.md @@ -12,6 +12,7 @@ * `depletion()`: Modified to address [#111](https://github.com/fishR-Core-Team/FSA/issues/111). * Added formula notation such that `depletion()` was changed to a method call and `depletion.default()` and `depletion.formula()` were added. Tests for the formula were included. * Added `as.df=` to `coef.depletion()`, `confint.depletion()`, and `summary.depletion()` so that the result is returned as a data.frame when set to `TRUE` (default is `FALSE` to maintain backward compatibility). + * Added `incl.est=` to `confint.depletion()` to make it easier to get away from the clunky `cbind("Est"=coef(),confint())` code. * `GompertzFuns()`: Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. 
Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). * `metaM()`: Modified to address [#114](https://github.com/fishR-Core-Team/FSA/issues/114). * Returns data.frame rather than list. diff --git a/R/depletion.R b/R/depletion.R index 5ac7bc3f..d81b78ed 100644 --- a/R/depletion.R +++ b/R/depletion.R @@ -16,6 +16,7 @@ #' @param object An object saved from the \code{removal} call (i.e., of class \code{depletion}). #' @param x An object saved from the \code{depletion} call (i.e., of class \code{depletion}). #' @param as.df A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Ignored in \code{summary} if \code{parm="lm"}. +#' @param incl.est A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}. Defaults to \code{FALSE}. #' @param verbose A logical that indicates whether a reminder of the method used should be printed with the summary results. #' @param parm A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered. #' @param conf.level A single number that represents the level of confidence to use for constructing confidence intervals. 
@@ -75,63 +76,45 @@ #' summary(l1,parm="No") #' rSquared(l1) #' rSquared(l1,digits=1,percent=TRUE) -#' cbind(Est=coef(l1),confint(l1)) -#' cbind(Est=coef(l1,parm="No"),confint(l1,parm="No")) -#' cbind(Est=coef(l1,parm="q"),confint(l1,parm="q")) -#' summary(l1,parm="lm") +#' coef(l1) +#' confint(l1) +#' confint(l1,incl.est=TRUE) +#' confint(l1,incl.est=TRUE,parm="No") +#' confint(l1,incl.est=TRUE,parm="q") +#' confint(l1,incl.est=TRUE,parm="lm") #' plot(l1) #' #' # with Ricker modification #' l2 <- depletion(SMBassLS$catch,SMBassLS$effort,method="Leslie",Ricker.mod=TRUE) #' summary(l2) -#' cbind(Est=coef(l2),confint(l1)) +#' confint(l2,incl.est=TRUE) #' plot(l2) #' -#' ## DeLury model examples -#' # no Ricker modification +#' ## DeLury model examples with no Ricker modification #' d1 <- depletion(SMBassLS$catch,SMBassLS$effort,method="DeLury") #' summary(d1) -#' summary(d1,parm="q") -#' summary(d1,verbose=TRUE) #' rSquared(d1) -#' cbind(Est=coef(d1),confint(d1)) -#' summary(d1,parm="lm") +#' confint(d1,incl.est=TRUE) #' plot(d1) -#' -#' # with Ricker modification -#' d2 <- depletion(SMBassLS$catch,SMBassLS$effort,method="DeLury",Ricker.mod=TRUE) -#' summary(d2) -#' cbind(Est=coef(d2),confint(d2)) -#' cbind(Est=coef(d2,parm="q"),confint(d2,parm="q")) -#' plot(d2) #' -#' # with formula notation +#' # Leslie model using formula notation #' l3 <- depletion(catch~effort,data=SMBassLS) #' summary(l3) #' -#' # summarizing by group (requires dplyr package) -#' # Dummy example data (grp="A" is SMBassLS example ... just FYI) +#' # Leslie model by group (requires dplyr package) +#' # Dummy example data (lake=="A" is SMBassLS example ... 
just FYI) #' tmpdf <- data.frame(ct=c(131,69,99,78,56,76,49,42,63,47, #' 117,75,87,67,58,67,42), #' ft=c(7,7,7,7,7,7,7,7,7,7, #' 5,7,5,5,4,6,5), -#' grp=as.factor(c("A","A","A","A","A","A","A","A","A","A", -#' "B","B","B","B","B","B","B"))) +#' lake=as.factor(c(rep("A",10),rep("B",7)))) #' #' if (require(dplyr)) { -#' res1 <- tmpdf %>% -#' dplyr::group_by(grp) %>% -#' dplyr::group_modify(~summary(depletion(ct~ft,data=.x),as.df=TRUE)) %>% -#' as.data.frame() # removes tibble and grouping structure -#' res1 -#' -#' res2 <- tmpdf %>% -#' dplyr::group_by(grp) %>% -#' dplyr::group_modify(~confint(depletion(ct~ft,data=.x),as.df=TRUE)) %>% +#' res <- tmpdf %>% +#' dplyr::group_by(lake) %>% +#' dplyr::group_modify(~confint(depletion(ct~ft,data=.x), +#' incl.est=TRUE,as.df=TRUE)) %>% #' as.data.frame() # removes tibble and grouping structure -#' res2 -#' -#' res <- dplyr::left_join(res1,res2,by="grp") #' res #' } #' @@ -314,44 +297,54 @@ coef.depletion <- function(object,parm=c("all","both","No","q","lm"),as.df=FALSE #' @rdname depletion #' @export confint.depletion <- function(object,parm=c("all","both","No","q","lm"), - level=conf.level,conf.level=0.95,as.df=FALSE,...) { + level=conf.level,conf.level=0.95, + incl.est=FALSE,as.df=FALSE,...) 
{ parm <- match.arg(parm) - ## Check on conf.level iCheckConfLevel(conf.level) if (parm=="lm") { - res <- stats::confint(object$lm,level=conf.level) - colnames(res) <- iCILabel(conf.level) - if (as.df) { - res <- data.frame(Intercept_LCI=res[["(Intercept)","95% LCI"]], - Intercept_UCI=res[["(Intercept)","95% UCI"]], - K_LCI=res[["K","95% LCI"]], - K_UCI=res[["K","95% UCI"]]) + ## make matrix of all possible results + resm <- cbind(stats::coef(object,parm="lm"), + stats::confint(object$lm,level=conf.level)) + colnames(resm) <- c("Est",iCILabel(conf.level)) + ## make data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,]),t(resm[2,]))) + names(resd) <- c("(Intercept)","(Intercept)_LCI","(Intercept)_UCI", + "K","K_LCI","K_UCI") + ## remove estimates if not asked for + if (!incl.est) { + resm <- resm[,-which(colnames(resm)=="Est"),drop=FALSE] + resd <- resd[!names(resd) %in% c("(Intercept)","K")] } + ## Return the appropriate matrix or data.frame + if (as.df) resd + else resm } else { t <- stats::qt(1-(1-conf.level)/2,summary(object$lm)$df[2]) tmp <- summary(object) - res <- cbind(tmp[,"Estimate"]-t*tmp[,"Std. Err."], - tmp[,"Estimate"]+t*tmp[,"Std. Err."]) - colnames(res) <- iCILabel(conf.level) - if (parm %in% c("all","both")) { - if (as.df) { - res <- data.frame(No_LCI=res[["No","95% LCI"]], - No_UCI=res[["No","95% UCI"]], - q_LCI=res[["q","95% LCI"]], - q_UCI=res[["q","95% UCI"]]) - } - } else { - res <- res[parm,,drop=FALSE] - if (as.df) { - res <- data.frame(LCI=res[[parm,"95% LCI"]], - UCI=res[[parm,"95% UCI"]]) - names(res) <- paste(parm,names(res),sep="_") - } + ## make matrix of all possible results + resm <- cbind(coef.depletion(object), + tmp[,"Estimate"]-t*tmp[,"Std. Err."], + tmp[,"Estimate"]+t*tmp[,"Std. 
Err."]) + colnames(resm) <- c("Est",iCILabel(conf.level)) + ## make data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,]),t(resm[2,]))) + names(resd) <- c("No","No_LCI","No_UCI","q","q_LCI","q_UCI") + ## remove estimates if not asked for + if (!incl.est) { + resm <- resm[,-which(colnames(resm)=="Est"),drop=FALSE] + resd <- resd[!names(resd) %in% c("No","q")] } + ## remove unasked for parameters + if (!parm %in% c("all","both")) { + resm <- resm[parm,,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] + } + ## Return the appropriate matrix or data.frame + if (as.df) resd + else resm } - res } #' @rdname depletion diff --git a/man/depletion.Rd b/man/depletion.Rd index ee796633..c34f82b5 100644 --- a/man/depletion.Rd +++ b/man/depletion.Rd @@ -45,6 +45,7 @@ depletion(catch, ...) parm = c("all", "both", "No", "q", "lm"), level = conf.level, conf.level = 0.95, + incl.est = FALSE, as.df = FALSE, ... ) @@ -92,6 +93,8 @@ depletion(catch, ...) \item{conf.level}{A single number that represents the level of confidence to use for constructing confidence intervals.} +\item{incl.est}{A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}.
Defaults to \code{FALSE}.} + \item{digits}{The number of digits to round the \code{rSquared} result to.} \item{percent}{A logical that indicates if the \code{rSquared} result should be returned as a percentage (\code{=TRUE}) or as a proportion (\code{=FALSE}; default).} @@ -159,63 +162,45 @@ summary(l1,verbose=TRUE) summary(l1,parm="No") rSquared(l1) rSquared(l1,digits=1,percent=TRUE) -cbind(Est=coef(l1),confint(l1)) -cbind(Est=coef(l1,parm="No"),confint(l1,parm="No")) -cbind(Est=coef(l1,parm="q"),confint(l1,parm="q")) -summary(l1,parm="lm") +coef(l1) +confint(l1) +confint(l1,incl.est=TRUE) +confint(l1,incl.est=TRUE,parm="No") +confint(l1,incl.est=TRUE,parm="q") +confint(l1,incl.est=TRUE,parm="lm") plot(l1) # with Ricker modification l2 <- depletion(SMBassLS$catch,SMBassLS$effort,method="Leslie",Ricker.mod=TRUE) summary(l2) -cbind(Est=coef(l2),confint(l1)) +confint(l2,incl.est=TRUE) plot(l2) -## DeLury model examples -# no Ricker modification +## DeLury model examples with no Ricker modification d1 <- depletion(SMBassLS$catch,SMBassLS$effort,method="DeLury") summary(d1) -summary(d1,parm="q") -summary(d1,verbose=TRUE) rSquared(d1) -cbind(Est=coef(d1),confint(d1)) -summary(d1,parm="lm") +confint(d1,incl.est=TRUE) plot(d1) -# with Ricker modification -d2 <- depletion(SMBassLS$catch,SMBassLS$effort,method="DeLury",Ricker.mod=TRUE) -summary(d2) -cbind(Est=coef(d2),confint(d2)) -cbind(Est=coef(d2,parm="q"),confint(d2,parm="q")) -plot(d2) - -# with formula notation +# Leslie model using formula notation l3 <- depletion(catch~effort,data=SMBassLS) summary(l3) -# summarizing by group (requires dplyr package) -# Dummy example data (grp="A" is SMBassLS example ... just FYI) +# Leslie model by group (requires dplyr package) +# Dummy example data (lake=="A" is SMBassLS example ...
just FYI) tmpdf <- data.frame(ct=c(131,69,99,78,56,76,49,42,63,47, 117,75,87,67,58,67,42), ft=c(7,7,7,7,7,7,7,7,7,7, 5,7,5,5,4,6,5), - grp=as.factor(c("A","A","A","A","A","A","A","A","A","A", - "B","B","B","B","B","B","B"))) + lake=as.factor(c(rep("A",10),rep("B",7)))) if (require(dplyr)) { - res1 <- tmpdf \%>\% - dplyr::group_by(grp) \%>\% - dplyr::group_modify(~summary(depletion(ct~ft,data=.x),as.df=TRUE)) \%>\% + res <- tmpdf \%>\% + dplyr::group_by(lake) \%>\% + dplyr::group_modify(~confint(depletion(ct~ft,data=.x), + incl.est=TRUE,as.df=TRUE)) \%>\% as.data.frame() # removes tibble and grouping structure - res1 - - res2 <- tmpdf \%>\% - dplyr::group_by(grp) \%>\% - dplyr::group_modify(~confint(depletion(ct~ft,data=.x),as.df=TRUE)) \%>\% - as.data.frame() # removes tibble and grouping structure - res2 - - res <- dplyr::left_join(res1,res2,by="grp") res } diff --git a/tests/testthat/testthat_depletion.R b/tests/testthat/testthat_depletion.R index 406026de..44f1d741 100644 --- a/tests/testthat/testthat_depletion.R +++ b/tests/testthat/testthat_depletion.R @@ -224,7 +224,45 @@ test_that("depletion() output types",{ expect_true(is.data.frame(l1A)) expect_equal(ncol(l1A),4) expect_equal(nrow(l1A),1) - expect_equal(names(l1A),c("Intercept_LCI","Intercept_UCI","K_LCI","K_UCI")) + expect_equal(names(l1A),c("(Intercept)_LCI","(Intercept)_UCI","K_LCI","K_UCI")) + # confint()s with incl.est=TRUE + l1A <- confint(l1,incl.est=TRUE) + expect_true(is.matrix(l1A)) + expect_equal(mode(l1A),"numeric") + expect_equal(nrow(l1A),2) + expect_equal(ncol(l1A),3) + expect_equal(rownames(l1A),c("No","q")) + expect_equal(colnames(l1A),c("Est","95% LCI","95% UCI")) + l1A <- confint(l1,as.df=TRUE,incl.est=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),6) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("No","No_LCI","No_UCI","q","q_LCI","q_UCI")) + l1A <- confint(l1,parm="No",incl.est=TRUE) + expect_true(is.matrix(l1A)) + expect_equal(mode(l1A),"numeric") + 
expect_equal(ncol(l1A),3) + expect_equal(nrow(l1A),1) + expect_equal(rownames(l1A),c("No")) + expect_equal(colnames(l1A),c("Est","95% LCI","95% UCI")) + l1A <- confint(l1,parm="No",as.df=TRUE,incl.est=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),3) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("No","No_LCI","No_UCI")) + l1A <- confint(l1,parm="lm",incl.est=TRUE) + expect_true(is.matrix(l1A)) + expect_equal(mode(l1A),"numeric") + expect_equal(nrow(l1A),2) + expect_equal(ncol(l1A),3) + expect_equal(rownames(l1A),c("(Intercept)","K")) + expect_equal(colnames(l1A),c("Est","95% LCI","95% UCI")) + l1A <- confint(l1,parm="lm",as.df=TRUE,incl.est=TRUE) + expect_true(is.data.frame(l1A)) + expect_equal(ncol(l1A),6) + expect_equal(nrow(l1A),1) + expect_equal(names(l1A),c("(Intercept)","(Intercept)_LCI","(Intercept)_UCI", + "K","K_LCI","K_UCI")) # summary()s l1A <- summary(l1) expect_true(is.matrix(l1A)) @@ -431,3 +469,4 @@ test_that("depletion() with 'DeLury' with Ricker.mod matches example 6.2 (p. 154 expect_equal(round(cf8["No","Estimate"],0),1150) expect_equal(round(cf8["q","Estimate"],5),0.01319) }) + From 0143976fa883654e5402ef6ea478ae1ce123bd81 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Thu, 2 Jan 2025 15:08:42 -0600 Subject: [PATCH 12/21] simplified code around as.df= in depletion family --- R/depletion.R | 53 +++++++++++++++-------------- tests/testthat/testthat_depletion.R | 4 +-- 2 files changed, 30 insertions(+), 27 deletions(-) diff --git a/R/depletion.R b/R/depletion.R index d81b78ed..5bc32488 100644 --- a/R/depletion.R +++ b/R/depletion.R @@ -255,22 +255,19 @@ summary.depletion <- function(object,parm=c("all","both","No","q","lm"), if(parm=="lm") { res <- summary(object$lm,...) } else { - res <- object$est - if (parm %in% c("all","both")) { - if (as.df) { - res <- data.frame(No_Est=res[["No","Estimate"]], - No_SE=res[["No","Std. Err."]], - q_Est=res[["q","Estimate"]], - q_SE=res[["q","Std. 
Err."]]) - } - } else { - res <- res[parm,,drop=FALSE] - if (as.df) { - res <- data.frame(Est=res[[parm,"Estimate"]], - SE=res[[parm,"Std. Err."]]) - names(res) <- paste(parm,names(res),sep="_") - } + # matrix of all possible results + resm <- object$est + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,]),t(resm[2,]))) + names(resd) <- c("No","No_SE","q","q_SE") + # remove parameters not asked for + if (!parm %in% c("all","both")) { + resm <- resm[parm,,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] } + # prepare to return data.frame if asked for, otherwise matrix + if (as.df) res <- resd + else res <- resm } res } @@ -280,18 +277,24 @@ summary.depletion <- function(object,parm=c("all","both","No","q","lm"), coef.depletion <- function(object,parm=c("all","both","No","q","lm"),as.df=FALSE,...) { parm <- match.arg(parm) if(parm=="lm") { - tmp <- stats::coef(object$lm,...) - if(as.df) tmp <- data.frame(Intercept=tmp[["(Intercept)"]],K=tmp[["K"]]) + res <- stats::coef(object$lm,...) 
+ if(as.df) res <- data.frame(Intercept=res[["(Intercept)"]],K=res[["K"]]) } else { - tmp <- object$est[,"Estimate"] - if(parm %in% c("all","both")) { - if (as.df) tmp <- data.frame(No=tmp[["No"]],q=tmp[["q"]]) - } else { - tmp <- tmp[parm] - if (as.df) tmp <- data.frame(No=tmp[[parm]]) + # matrix of all possible results + resm <- object$est[,"Estimate"] + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1]),t(resm[2]))) + names(resd) <- c("No","q") + # remove parameters not asked for + if (!parm %in% c("all","both")) { + resm <- resm[parm,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] } + # prepare to return data.frame if asked for, otherwise matrix + if (as.df) res <- resd + else res <- resm } - tmp + res } #' @rdname depletion @@ -319,7 +322,7 @@ confint.depletion <- function(object,parm=c("all","both","No","q","lm"), } ## Return the appropriate matrix or data.frame if (as.df) resd - else resm + else resm } else { t <- stats::qt(1-(1-conf.level)/2,summary(object$lm)$df[2]) tmp <- summary(object) diff --git a/tests/testthat/testthat_depletion.R b/tests/testthat/testthat_depletion.R index 44f1d741..15e30278 100644 --- a/tests/testthat/testthat_depletion.R +++ b/tests/testthat/testthat_depletion.R @@ -275,7 +275,7 @@ test_that("depletion() output types",{ expect_true(is.data.frame(l1A)) expect_equal(ncol(l1A),4) expect_equal(nrow(l1A),1) - expect_equal(names(l1A),c("No_Est","No_SE","q_Est","q_SE")) + expect_equal(names(l1A),c("No","No_SE","q","q_SE")) l1A <- summary(l1,parm="No") expect_true(is.matrix(l1A)) expect_equal(mode(l1A),"numeric") @@ -287,7 +287,7 @@ test_that("depletion() output types",{ expect_true(is.data.frame(l1A)) expect_equal(ncol(l1A),2) expect_equal(nrow(l1A),1) - expect_equal(names(l1A),c("No_Est","No_SE")) + expect_equal(names(l1A),c("No","No_SE")) l1A <- summary(l1,parm="lm") expect_equal(class(l1A),"summary.lm") expect_equal(mode(l1A),"list") From 6b0abd65f65c09cf7f27b07f2bb03817348c5053 Mon Sep 17 00:00:00 2001 
From: Derek Ogle Date: Thu, 2 Jan 2025 19:32:21 -0600 Subject: [PATCH 13/21] Added as.df= and incl.est= to catch.curve extractors --- NEWS.md | 1 + R/catchCurve.R | 116 +++++++++++++++++++++------ man/catchCurve.Rd | 20 +++-- tests/testthat/testthat_catchCurve.R | 69 ++++++++++++++++ 4 files changed, 176 insertions(+), 30 deletions(-) diff --git a/NEWS.md b/NEWS.md index e5b7cb68..2db6151a 100644 --- a/NEWS.md +++ b/NEWS.md @@ -9,6 +9,7 @@ * Moved all `require()` in individual files to `testthat.R`. This removed many `require()` that were not needed. * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. +* `catchCurve()`: Added `as.df=` to extractor functions and `incl.est=` to `confint.catchCurve()` to match functionality added to `depletion()`. * `depletion()`: Modified to address [#111](https://github.com/fishR-Core-Team/FSA/issues/111). * Added formula notation such that `depletion()` wash changed to a method call and `depletion.default()` and `depletion.formula()` were added. Tests for the formula were included. * Added `as.df=` to `coef.depletion()`, `confint.depletion()`, and `summary.depletion()` so that the result is returned as a data.frame when set to `TRUE` (default is `FALSE` to maintain backward compatability). diff --git a/R/catchCurve.R b/R/catchCurve.R index 8c29c5ba..198e0220 100644 --- a/R/catchCurve.R +++ b/R/catchCurve.R @@ -13,6 +13,8 @@ #' @param ages2use A numerical vector of ages that define the descending limb of the catch curve. #' @param weighted A logical that indicates whether a weighted regression should be used. See details. #' @param negWeightReplace A single non-negative numeric that will replace negative weights (defaults to 0). Only used when \code{weighted=TRUE}. See details. +#' @param as.df A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Ignored in \code{summary} if \code{parm="lm"}. 
+#' @param incl.est A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}. Defaults to \code{FALSE}. #' @param pos.est A string to identify where to place the estimated mortality rates on the plot. Can be set to one of \code{"bottomright"}, \code{"bottom"}, \code{"bottomleft"}, \code{"left"}, \code{"topleft"}, \code{"top"}, \code{"topright"}, \code{"right"} or \code{"center"} for positioning the estimated mortality rates on the plot. Typically \code{"bottomleft"} (DEFAULT) and \code{"topright"} will be \dQuote{out-of-the-way} placements. Set \code{pos.est} to \code{NULL} to remove the estimated mortality rates from the plot. #' @param cex.est A single numeric character expansion value for the estimated mortality rates on the plot. #' @param round.est A numeric that indicates the number of decimal place to which Z (first value) and A (second value) should be rounded. If only one value then it will be used for both Z and A. @@ -66,11 +68,13 @@ #' ## demonstration of formula notation #' cc1 <- catchCurve(catch~age,data=BrookTroutTH,ages2use=2:6) #' summary(cc1) -#' cbind(Est=coef(cc1),confint(cc1)) +#' coef(cc1) +#' confint(cc1) +#' confint(cc1,incl.est=TRUE) #' rSquared(cc1) #' plot(cc1) #' summary(cc1,parm="Z") -#' cbind(Est=coef(cc1,parm="Z"),confint(cc1,parm="Z")) +#' confint(cc1,parm="Z",incl.est=TRUE) #' #' ## demonstration of excluding ages2use #' cc2 <- catchCurve(catch~age,data=BrookTroutTH,ages2use=-c(0,1)) @@ -84,7 +88,7 @@ #' #' ## demonstration of returning the linear model results #' summary(cc3,parm="lm") -#' cbind(Est=coef(cc3,parm="lm"),confint(cc3,parm="lm")) +#' confint(cc3,parm="lm",incl.est=TRUE) #' #' ## demonstration of ability to work with missing age classes #' df <- data.frame(age=c( 2, 3, 4, 5, 7, 9,12), @@ -94,7 +98,7 @@ #' plot(cc4) #' #' ## demonstration of ability to work with missing age classes -#' ## evein if catches are recorded as NAs +#' ## even if catches are recorded
as NAs #' df <- data.frame(age=c( 2, 3, 4, 5, 6, 7, 8, 9,10,11,12), #' ct= c(100,92,83,71,NA,56,NA,35,NA,NA, 1)) #' cc5 <- catchCurve(ct~age,data=df,ages2use=4:12) @@ -189,32 +193,62 @@ catchCurve.formula <- function(x,data,ages2use=age, #' @rdname catchCurve #' @export -summary.catchCurve <- function(object,parm=c("both","all","Z","A","lm"),...) { +summary.catchCurve <- function(object,parm=c("both","all","Z","A","lm"), + as.df=FALSE,...) { parm <- match.arg(parm) - tmp <- summary(object$lm,...) + res <- summary(object$lm,...) if (parm!="lm") { + # matrix of all possible results Z <- summary(object$lm)$coef[2,] Z[c(1,3)] <- -Z[c(1,3)] A <- c(100*(1-exp(-Z[1])),NA,NA,NA) - tmp <- rbind(Z,A) - if (!parm %in% c("both","all")) tmp <- tmp[parm,,drop=FALSE] + resm <- rbind(Z,A) + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,c("Estimate","Std. Error")]), + t(resm[2,c("Estimate","Std. Error")]))) + names(resd) <- c("Z","Z_SE","A","A_SE") + # remove parameters not asked for + if (!parm %in% c("all","both")) { + resm <- resm[parm,,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] + } + # prepare to return data.frame if asked for, otherwise matrix + if (as.df) res <- resd + else res <- resm } - tmp + res } #' @rdname catchCurve #' @export -coef.catchCurve <- function(object,parm=c("all","both","Z","A","lm"),...) { +coef.catchCurve <- function(object,parm=c("all","both","Z","A","lm"), + as.df=FALSE,...) { parm <- match.arg(parm) - tmp <- stats::coef(object$lm,...) - if (parm!="lm") { - Z <- -tmp[2] - A <- 100*(1-exp(-Z)) - tmp <- c(Z,A) - names(tmp) <- c("Z","A") - if (!parm %in% c("both","all")) tmp <- tmp[parm] + # matrix of lm results + res <- stats::coef(object$lm,...) 
+ if (parm=="lm") { + if (as.df) { + resd <- data.frame(cbind(t(res[1]),t(res[2]))) + names(resd) <- names(res) + res <- resd + } + } else { + # matrix of all possible results + Z <- -res[[2]] + resm <- c(Z=Z,A=100*(1-exp(-Z))) + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1]),t(resm[2]))) + names(resd) <- names(resm) + # remove parameters not asked for + if (!parm %in% c("all","both")) { + resm <- resm[parm,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] + } + # prepare to return data.frame if asked for, otherwise matrix + if (as.df) res <- resd + else res <- resm } - tmp + res } #' @rdname catchCurve @@ -226,17 +260,51 @@ anova.catchCurve <- function(object,...) { #' @rdname catchCurve #' @export confint.catchCurve <- function(object,parm=c("all","both","Z","A","lm"), - level=conf.level,conf.level=0.95,...) { + level=conf.level,conf.level=0.95, + as.df=FALSE,incl.est=FALSE,...) { parm <- match.arg(parm) ## Check on conf.level iCheckConfLevel(conf.level) ci <- stats::confint(object$lm,conf.level=level,...) 
- if (parm=="lm") res <- ci - else { - res <- rbind(Z=-ci[2,2:1],A=100*(1-exp(ci[2,2:1]))) - if (!parm %in% c("all","both")) res <- res[parm,,drop=FALSE] + if (parm=="lm") { + # matrix of all possible results + resm <- ci + resm <- cbind(Est=stats::coef(object$lm),ci) + colnames(resm) <- c("Est",iCILabel(conf.level)) + ## make data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,]),t(resm[2,]))) + names(resd) <- c("(Intercept)","(Intercept)_LCI","(Intercept)_UCI", + "age.e","age.e_LCI","age.e_UCI") + ## remove estimates if not asked for + if (!incl.est) { + resm <- resm[,-which(colnames(resm)=="Est"),drop=FALSE] + resd <- resd[!names(resd) %in% c("(Intercept)","age.e")] + } + ## Return the appropriate matrix or data.frame + if (as.df) res <- resd + else res <- resm + } else { + # matrix of all possible results + resm <- cbind(coef.catchCurve(object), + rbind(Z=-ci[2,2:1],A=100*(1-exp(ci[2,2:1])))) + colnames(resm) <- c("Est",iCILabel(conf.level)) + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,]),t(resm[2,]))) + names(resd) <- c("Z","Z_LCI","Z_UCI","A","A_LCI","A_UCI") + ## remove estimates if not asked for + if (!incl.est) { + resm <- resm[,-which(colnames(resm)=="Est"),drop=FALSE] + resd <- resd[!names(resd) %in% c("Z","A")] + } + ## remove unasked for parameters + if (!parm %in% c("all","both")) { + resm <- resm[parm,,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] + } + ## Return the appropriate matrix or data.frame + if (as.df) res <- resd + else res <- resm } - colnames(res) <- iCILabel(conf.level) res } diff --git a/man/catchCurve.Rd b/man/catchCurve.Rd index f3cb8fb3..e7ac6dd1 100644 --- a/man/catchCurve.Rd +++ b/man/catchCurve.Rd @@ -32,9 +32,9 @@ catchCurve(x, ...) ... ) -\method{summary}{catchCurve}(object, parm = c("both", "all", "Z", "A", "lm"), ...) +\method{summary}{catchCurve}(object, parm = c("both", "all", "Z", "A", "lm"), as.df = FALSE, ...) 
-\method{coef}{catchCurve}(object, parm = c("all", "both", "Z", "A", "lm"), ...) +\method{coef}{catchCurve}(object, parm = c("all", "both", "Z", "A", "lm"), as.df = FALSE, ...) \method{anova}{catchCurve}(object, ...) @@ -43,6 +43,8 @@ catchCurve(x, ...) parm = c("all", "both", "Z", "A", "lm"), level = conf.level, conf.level = 0.95, + as.df = FALSE, + incl.est = FALSE, ... ) @@ -82,10 +84,14 @@ catchCurve(x, ...) \item{parm}{A numeric or string (of parameter names) vector that specifies which parameters are to be given confidence intervals. If \code{parm="lm"} then confidence intervals for the underlying linear model are returned.} +\item{as.df}{A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Ignored in \code{summary} if \code{parm="lm"}.} + \item{level}{Same as \code{conf.level}. Used for compatibility with the generic \code{confint} function.} \item{conf.level}{A number representing the level of confidence to use for constructing confidence intervals.} +\item{incl.est}{A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}.
Defaults to \code{FALSE}.} + \item{digits}{The number of digits to round the \code{rSquared} result to.} \item{percent}{A logical that indicates if the \code{rSquared} result should be returned as a percentage (\code{=TRUE}) or as a proportion (\code{=FALSE}; default).} @@ -143,11 +149,13 @@ plot(catch~age,data=BrookTroutTH,pch=19) ## demonstration of formula notation cc1 <- catchCurve(catch~age,data=BrookTroutTH,ages2use=2:6) summary(cc1) -cbind(Est=coef(cc1),confint(cc1)) +coef(cc1) +confint(cc1) +confint(cc1,incl.est=TRUE) rSquared(cc1) plot(cc1) summary(cc1,parm="Z") -cbind(Est=coef(cc1,parm="Z"),confint(cc1,parm="Z")) +confint(cc1,parm="Z",incl.est=TRUE) ## demonstration of excluding ages2use cc2 <- catchCurve(catch~age,data=BrookTroutTH,ages2use=-c(0,1)) @@ -161,7 +169,7 @@ plot(cc3) ## demonstration of returning the linear model results summary(cc3,parm="lm") -cbind(Est=coef(cc3,parm="lm"),confint(cc3,parm="lm")) +confint(cc3,parm="lm",incl.est=TRUE) ## demonstration of ability to work with missing age classes df <- data.frame(age=c( 2, 3, 4, 5, 7, 9,12), @@ -171,7 +179,7 @@ summary(cc4) plot(cc4) ## demonstration of ability to work with missing age classes -## evein if catches are recorded as NAs +## even if catches are recorded as NAs df <- data.frame(age=c( 2, 3, 4, 5, 6, 7, 8, 9,10,11,12), ct= c(100,92,83,71,NA,56,NA,35,NA,NA, 1)) cc5 <- catchCurve(ct~age,data=df,ages2use=4:12) diff --git a/tests/testthat/testthat_catchCurve.R b/tests/testthat/testthat_catchCurve.R index 813c5c92..fb2c20b5 100644 --- a/tests/testthat/testthat_catchCurve.R +++ b/tests/testthat/testthat_catchCurve.R @@ -155,6 +155,22 @@ test_that("catchCurve() outpute types",{ expect_equal(class(ccA),"numeric") expect_equal(length(ccA),2) expect_equal(names(ccA),c("(Intercept)","age.e")) + # coef, unweighted with as.df + ccA <- coef(cc,as.df=TRUE) + expect_true(is.data.frame(ccA)) + expect_equal(ncol(ccA),2) + expect_equal(nrow(ccA),1) + expect_equal(names(ccA),c("Z","A")) + ccA <- 
coef(cc,parm="Z",as.df=TRUE) + expect_true(is.data.frame(ccA)) + expect_equal(ncol(ccA),1) + expect_equal(nrow(ccA),1) + expect_equal(names(ccA),c("Z")) + ccA <- coef(cc,parm="lm",as.df=TRUE) + expect_true(is.data.frame(ccA)) + expect_equal(ncol(ccA),2) + expect_equal(nrow(ccA),1) + expect_equal(names(ccA),c("(Intercept)","age.e")) # coef, weighted cc2A <- coef(cc2) expect_true(is.vector(cc2A)) @@ -193,6 +209,45 @@ test_that("catchCurve() outpute types",{ expect_equal(ncol(ccA),2) expect_equal(rownames(ccA),c("(Intercept)","age.e")) expect_equal(colnames(ccA),c("95% LCI","95% UCI")) + # confint, unweighted using incl.est=TRUE + ccA <- confint(cc,incl.est=TRUE) + expect_equal(class(ccA),c("matrix","array")) + expect_equal(mode(ccA),"numeric") + expect_equal(nrow(ccA),2) + expect_equal(ncol(ccA),3) + expect_equal(rownames(ccA),c("Z","A")) + expect_equal(colnames(ccA),c("Est","95% LCI","95% UCI")) + ccA <- confint(cc,parm="Z",incl.est=TRUE) + expect_equal(class(ccA),c("matrix","array")) + expect_equal(mode(ccA),"numeric") + expect_equal(nrow(ccA),1) + expect_equal(ncol(ccA),3) + expect_equal(rownames(ccA),c("Z")) + expect_equal(colnames(ccA),c("Est","95% LCI","95% UCI")) + ccA <- confint(cc,parm="lm",incl.est=TRUE) + expect_equal(class(ccA),c("matrix","array")) + expect_equal(mode(ccA),"numeric") + expect_equal(nrow(ccA),2) + expect_equal(ncol(ccA),3) + expect_equal(rownames(ccA),c("(Intercept)","age.e")) + expect_equal(colnames(ccA),c("Est","95% LCI","95% UCI")) + # confint, unweighted using incl.est=TRUE with as.df=TRUE + ccA <- confint(cc,incl.est=TRUE,as.df=TRUE) + expect_true(is.data.frame(ccA)) + expect_equal(nrow(ccA),1) + expect_equal(ncol(ccA),6) + expect_equal(names(ccA),c("Z","Z_LCI","Z_UCI","A","A_LCI","A_UCI")) + ccA <- confint(cc,parm="Z",incl.est=TRUE,as.df=TRUE) + expect_true(is.data.frame(ccA)) + expect_equal(nrow(ccA),1) + expect_equal(ncol(ccA),3) + expect_equal(names(ccA),c("Z","Z_LCI","Z_UCI")) + ccA <- 
confint(cc,parm="lm",incl.est=TRUE,as.df=TRUE) + expect_true(is.data.frame(ccA)) + expect_equal(nrow(ccA),1) + expect_equal(ncol(ccA),6) + expect_equal(names(ccA),c("(Intercept)","(Intercept)_LCI","(Intercept)_UCI", + "age.e","age.e_LCI","age.e_UCI")) # confint, weighted cc2A <- confint(cc2) expect_equal(class(cc2A),c("matrix","array")) @@ -232,6 +287,20 @@ test_that("catchCurve() outpute types",{ expect_equal(colnames(cc2A),c("Estimate","Std. Error","t value","Pr(>|t|)")) cc2A <- summary(cc2,parm="lm") expect_equal(class(cc2A),"summary.lm") + + # summary with as.df=TRUE + cc2A <- summary(cc2,as.df=TRUE) + expect_true(is.data.frame(cc2A)) + expect_equal(nrow(cc2A),1) + expect_equal(ncol(cc2A),4) + expect_equal(names(cc2A),c("Z","Z_SE","A","A_SE")) + cc2A <- summary(cc2,parm="Z",as.df=TRUE) + expect_true(is.data.frame(cc2A)) + expect_equal(nrow(cc2A),1) + expect_equal(ncol(cc2A),2) + expect_equal(names(cc2A),c("Z","Z_SE")) + cc2A <- summary(cc2,parm="lm",as.df=TRUE) + expect_equal(class(cc2A),"summary.lm") # r-squared expect_true(is.numeric(rSquared(cc))) From d5460befcf6bbb42ef3aab518a2c2d6795fcaefb Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Thu, 2 Jan 2025 20:19:37 -0600 Subject: [PATCH 14/21] Added incl.est= and as.df= to chapmanRobson extractors --- NEWS.md | 1 + R/chapmanRobson.R | 80 +++++++++++++++++++++------- man/chapmanRobson.Rd | 23 ++++++-- tests/testthat/testthat_catchCurve.R | 52 +++++++++++++++++- 4 files changed, 132 insertions(+), 24 deletions(-) diff --git a/NEWS.md b/NEWS.md index 2db6151a..f9dc4607 100644 --- a/NEWS.md +++ b/NEWS.md @@ -10,6 +10,7 @@ * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. * `catchCurve()`: Added `as.df=` to extractor functions and `incl.est=` to `confint.catchCurve()` to match functionality added to `depletion()`. +* `chapmanRobson()`: Added `as.df=` to extractor functions and `incl.est=` to `confint.chapmanRobson()` to match functionality added to `depletion()`. 
* `depletion()`: Modified to address [#111](https://github.com/fishR-Core-Team/FSA/issues/111). * Added formula notation such that `depletion()` wash changed to a method call and `depletion.default()` and `depletion.formula()` were added. Tests for the formula were included. * Added `as.df=` to `coef.depletion()`, `confint.depletion()`, and `summary.depletion()` so that the result is returned as a data.frame when set to `TRUE` (default is `FALSE` to maintain backward compatability). diff --git a/R/chapmanRobson.R b/R/chapmanRobson.R index c375e0b4..c2535d03 100644 --- a/R/chapmanRobson.R +++ b/R/chapmanRobson.R @@ -12,6 +12,8 @@ #' @param data A data frame from which the variables in the \code{x} formula can be found. Not used if \code{x} is not a formula. #' @param ages2use A numerical vector of the ages that define the descending limb of the catch curve. #' @param zmethod A string that indicates the method to use for estimating Z. See details. +#' @param as.df A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Ignored in \code{summary} if \code{parm="lm"}. +#' @param incl.est A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}. Defaults to \code{FALSE}. #' @param verbose A logical that indicates whether the method should return just the estimate (\code{FALSE}; default) or a more verbose statement. #' @param pos.est A string to identify where to place the estimated mortality rates on the plot. Can be set to one of \code{"bottomright"}, \code{"bottom"}, \code{"bottomleft"}, \code{"left"}, \code{"topleft"}, \code{"top"}, \code{"topright"}, \code{"right"} or \code{"center"} for positioning the estimated mortality rates on the plot. Typically \code{"bottomleft"} (DEFAULT) and \code{"topright"} will be \dQuote{out-of-the-way} placements.
Set \code{pos.est} to \code{NULL} to remove the estimated mortality rates from the plot. #' @param cex.est A single numeric character expansion value for the estimated mortality rates on the plot. @@ -71,12 +73,15 @@ #' cr1 <- chapmanRobson(catch~age,data=BrookTroutTH,ages2use=2:6) #' summary(cr1) #' summary(cr1,verbose=TRUE) -#' cbind(Est=coef(cr1),confint(cr1)) +#' coef(cr1) +#' confint(cr1) +#' confint(cr1,incl.est=TRUE) #' plot(cr1) #' plot(cr1,axis.age="age") #' plot(cr1,axis.age="recoded age") #' summary(cr1,parm="Z") -#' cbind(Est=coef(cr1,parm="Z"),confint(cr1,parm="Z")) +#' coef(cr1,parm="Z") +#' confint(cr1,parm="Z",incl.est=TRUE) #' #' ## demonstration of excluding ages2use #' cr2 <- chapmanRobson(catch~age,data=BrookTroutTH,ages2use=-c(0,1)) @@ -190,40 +195,79 @@ chapmanRobson.formula <- function(x,data,ages2use=age, #' @rdname chapmanRobson #' @export summary.chapmanRobson <- function(object,parm=c("all","both","Z","S"), - verbose=FALSE,...) { + verbose=FALSE,as.df=FALSE,...) { parm <- match.arg(parm) if (verbose) message("Intermediate statistics: ","n=",object$n,"; T=",object$T) - if (!parm %in% c("all","both")) - object$est[which(rownames(object$est)==parm),,drop=FALSE] - else object$est -} + # matrix of all possible results + resm <- object$est + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,c("Estimate","Std. Error")]), + t(resm[2,c("Estimate","Std. Error")]))) + names(resd) <- c("S","S_SE","Z","Z_SE") + # remove parameters not asked for + if (!parm %in% c("all","both")) { + resm <- resm[parm,,drop=FALSE] + resd <- resd[substr(names(resd),1,1)==parm] + } + # prepare to return data.frame if asked for, otherwise matrix + if (as.df) res <- resd + else res <- resm + res +} #' @rdname chapmanRobson #' @export -coef.chapmanRobson <- function(object,parm=c("all","both","Z","S"),...) { +coef.chapmanRobson <- function(object,parm=c("all","both","Z","S"),as.df=FALSE,...) 
{ parm <- match.arg(parm) tmp <- summary(object,parm) - res <- tmp[,1] - names(res) <- rownames(tmp) - if (!parm %in% c("all","both")) res <- res[parm] + # matrix of lm results + resm <- tmp[,1] + names(resm) <- rownames(tmp) + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1]),t(resm[2]))) + names(resd) <- names(resm) + # remove parameters not asked for + if (!parm %in% c("all","both")) { + resm <- resm[parm,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] + } + # prepare to return data.frame if asked for, otherwise matrix + if (as.df) res <- resd + else res <- resm res } #' @rdname chapmanRobson #' @export confint.chapmanRobson <- function(object,parm=c("all","both","S","Z"), - level=conf.level,conf.level=0.95,...) { + level=conf.level,conf.level=0.95, + as.df=FALSE,incl.est=FALSE,...) { parm <- match.arg(parm) - ## Check on conf.level iCheckConfLevel(conf.level) + # matrix of all possible results z <- c(-1,1)*stats::qnorm((1-(1-conf.level)/2)) - res <- rbind(S=object$est["S","Estimate"]+z*object$est["S","Std. Error"], - Z=object$est["Z","Estimate"]+z*object$est["Z","Std. Error"]) - colnames(res) <- iCILabel(conf.level) - # Create output matrix - if (!parm %in% c("all","both")) res <- res[parm,,drop=FALSE] + resm <- cbind(coef.chapmanRobson(object), + rbind(S=object$est["S","Estimate"]+z*object$est["S","Std. Error"], + Z=object$est["Z","Estimate"]+z*object$est["Z","Std. 
Error"])) + colnames(resm) <- c("Est",iCILabel(conf.level)) + # data.frame of all possible results + resd <- data.frame(cbind(t(resm[1,]),t(resm[2,]))) + names(resd) <- c("S","S_LCI","S_UCI","Z","Z_LCI","Z_UCI") + ## remove estimates if not asked for + if (!incl.est) { + resm <- resm[,-which(colnames(resm)=="Est"),drop=FALSE] + resd <- resd[!names(resd) %in% c("S","Z")] + } + ## remove unasked for parameters + if (!parm %in% c("all","both")) { + resm <- resm[parm,,drop=FALSE] + resd <- resd[grepl(parm,names(resd))] + } + ## Return the appropriate matrix or data.frame + if (as.df) res <- resd + else res <- resm res } diff --git a/man/chapmanRobson.Rd b/man/chapmanRobson.Rd index 79990163..7a121a87 100644 --- a/man/chapmanRobson.Rd +++ b/man/chapmanRobson.Rd @@ -28,15 +28,23 @@ chapmanRobson(x, ...) ... ) -\method{summary}{chapmanRobson}(object, parm = c("all", "both", "Z", "S"), verbose = FALSE, ...) +\method{summary}{chapmanRobson}( + object, + parm = c("all", "both", "Z", "S"), + verbose = FALSE, + as.df = FALSE, + ... +) -\method{coef}{chapmanRobson}(object, parm = c("all", "both", "Z", "S"), ...) +\method{coef}{chapmanRobson}(object, parm = c("all", "both", "Z", "S"), as.df = FALSE, ...) \method{confint}{chapmanRobson}( object, parm = c("all", "both", "S", "Z"), level = conf.level, conf.level = 0.95, + as.df = FALSE, + incl.est = FALSE, ... ) @@ -72,10 +80,14 @@ chapmanRobson(x, ...) \item{verbose}{A logical that indicates whether the method should return just the estimate (\code{FALSE}; default) or a more verbose statement.} +\item{as.df}{A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Ignored in \code{summary} if \code{parm="lm"}.} + \item{level}{Same as \code{conf.level}. 
Used for compatibility with the generic \code{confint} function.} \item{conf.level}{A number representing the level of confidence to use for constructing confidence intervals.} +\item{incl.est}{A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}. Defaults to \code{FALSE}.} + \item{pos.est}{A string to identify where to place the estimated mortality rates on the plot. Can be set to one of \code{"bottomright"}, \code{"bottom"}, \code{"bottomleft"}, \code{"left"}, \code{"topleft"}, \code{"top"}, \code{"topright"}, \code{"right"} or \code{"center"} for positioning the estimated mortality rates on the plot. Typically \code{"bottomleft"} (DEFAULT) and \code{"topright"} will be \dQuote{out-of-the-way} placements. Set \code{pos.est} to \code{NULL} to remove the estimated mortality rates from the plot.} \item{cex.est}{A single numeric character expansion value for the estimated mortality rates on the plot.} @@ -130,12 +142,15 @@ plot(catch~age,data=BrookTroutTH,pch=19) cr1 <- chapmanRobson(catch~age,data=BrookTroutTH,ages2use=2:6) summary(cr1) summary(cr1,verbose=TRUE) -cbind(Est=coef(cr1),confint(cr1)) +coef(cr1) +confint(cr1) +confint(cr1,incl.est=TRUE) plot(cr1) plot(cr1,axis.age="age") plot(cr1,axis.age="recoded age") summary(cr1,parm="Z") -cbind(Est=coef(cr1,parm="Z"),confint(cr1,parm="Z")) +coef(cr1,parm="Z") +confint(cr1,parm="Z",incl.est=TRUE) ## demonstration of excluding ages2use cr2 <- chapmanRobson(catch~age,data=BrookTroutTH,ages2use=-c(0,1)) diff --git a/tests/testthat/testthat_catchCurve.R b/tests/testthat/testthat_catchCurve.R index fb2c20b5..eb1e958b 100644 --- a/tests/testthat/testthat_catchCurve.R +++ b/tests/testthat/testthat_catchCurve.R @@ -323,6 +323,17 @@ test_that("chapmanRobson() output types",{ expect_equal(class(crA),"numeric") expect_equal(length(crA),1) expect_equal(names(crA),c("S")) + # coef with as.df=TRUE + crA <- coef(cr,as.df=TRUE) + expect_true(is.data.frame(crA)) +
expect_equal(nrow(crA),1) + expect_equal(ncol(crA),2) + expect_equal(names(crA),c("S","Z")) + crA <- coef(cr,parm="S",as.df=TRUE) + expect_true(is.data.frame(crA)) + expect_equal(nrow(crA),1) + expect_equal(ncol(crA),1) + expect_equal(names(crA),"S") # summary crA <- summary(cr) expect_equal(class(crA),c("matrix","array")) @@ -336,8 +347,19 @@ test_that("chapmanRobson() output types",{ expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),1) expect_equal(ncol(crA),2) - expect_equal(rownames(crA),c("S")) + expect_equal(rownames(crA),"S") expect_equal(colnames(crA),c("Estimate","Std. Error")) + # summary with as.df=TRUE + crA <- summary(cr,as.df=TRUE) + expect_true(is.data.frame(crA)) + expect_equal(nrow(crA),1) + expect_equal(ncol(crA),4) + expect_equal(names(crA),c("S","S_SE","Z","Z_SE")) + crA <- summary(cr,parm="S",as.df=TRUE) + expect_true(is.data.frame(crA)) + expect_equal(nrow(crA),1) + expect_equal(ncol(crA),2) + expect_equal(names(crA),c("S","S_SE")) # confint crA <- confint(cr) expect_equal(class(crA),c("matrix","array")) @@ -351,8 +373,34 @@ test_that("chapmanRobson() output types",{ expect_equal(mode(crA),"numeric") expect_equal(nrow(crA),1) expect_equal(ncol(crA),2) - expect_equal(rownames(crA),c("S")) + expect_equal(rownames(crA),"S") expect_equal(colnames(crA),c("95% LCI","95% UCI")) + # confint with incl.est=TRUE + crA <- confint(cr,incl.est=TRUE) + expect_equal(class(crA),c("matrix","array")) + expect_equal(mode(crA),"numeric") + expect_equal(nrow(crA),2) + expect_equal(ncol(crA),3) + expect_equal(rownames(crA),c("S","Z")) + expect_equal(colnames(crA),c("Est","95% LCI","95% UCI")) + crA <- confint(cr,parm="S",incl.est=TRUE) + expect_equal(class(crA),c("matrix","array")) + expect_equal(mode(crA),"numeric") + expect_equal(nrow(crA),1) + expect_equal(ncol(crA),3) + expect_equal(rownames(crA),"S") + expect_equal(colnames(crA),c("Est","95% LCI","95% UCI")) + # confint with incl.est=TRUE and as.df=TRUE + crA <- confint(cr,incl.est=TRUE,as.df=TRUE) + 
expect_true(is.data.frame(crA)) + expect_equal(nrow(crA),1) + expect_equal(ncol(crA),6) + expect_equal(names(crA),c("S","S_LCI","S_UCI","Z","Z_LCI","Z_UCI")) + crA <- confint(cr,parm="S",incl.est=TRUE,as.df=TRUE) + expect_true(is.data.frame(crA)) + expect_equal(nrow(crA),1) + expect_equal(ncol(crA),3) + expect_equal(names(crA),c("S","S_LCI","S_UCI")) expect_equal(class(cr1),"chapmanRobson") # coef From 62110278c7208c05a5e9bb6d359685e306349b89 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Fri, 3 Jan 2025 14:55:05 -0600 Subject: [PATCH 15/21] Deprecated just.ests= in removal() --- NEWS.md | 3 ++ R/removal.R | 78 +++++++++---------------------- man/removal.Rd | 54 +++------------------ tests/testthat/testthat_removal.R | 13 ------ 4 files changed, 30 insertions(+), 118 deletions(-) diff --git a/NEWS.md b/NEWS.md index f9dc4607..96d2c366 100644 --- a/NEWS.md +++ b/NEWS.md @@ -25,6 +25,9 @@ * Added Quinn and Deriso (1999), Peterson and Wroblewski (1984), and Chan and Watanabe (1989) methods from FAMS manual. These are probably only useful for comparison to FAMS results. * Added an example for computing an average M or cm from multiple model results. * `Mmethods()`: Modified. Changed `what=` to `method=` for simplicity with `metaM()`. +* `removal()`: Modified. + * Deprecated `just.ests=`. Functionality will be largely replaced with `incl.est=` in `confint()`. Replaced split-apply example with a new one that performs similarly without `just.ests=` and is more in-line with examples in `depletion` *et al.* + # FSA 0.9.5 * Fixed FSA-package \alias problem using the "automatic approach" (i.e., adding a "_PACKAGE" line to FSA.R) suggested in an e-mail from Kurt Hornik on 19-Aug-2023. diff --git a/R/removal.R b/R/removal.R index ca9f6df9..ea9f9986 100644 --- a/R/removal.R +++ b/R/removal.R @@ -29,14 +29,14 @@ #' @param parm A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names.
If missing, all parameters are considered. #' @param level Not used, but here for compatibility with generic \code{confint} function. #' @param conf.level A single number representing the level of confidence to use for constructing confidence intervals. This is sent in the main \code{removal} function rather than \code{confint}. -#' @param just.ests A logical that indicates whether just the estimates (\code{=TRUE}) or the return list (\code{=FALSE}; default; see below) is returned. #' @param verbose A logical that indicates whether descriptive labels should be printed from \code{summary} and if certain warnings are shown with \code{confint}. #' @param digits A single numeric that controls the number of decimals in the output from \code{summary} and \code{confint}. #' @param Tmult A single numeric that will be multiplied by the total catch in all samples to set the upper value for the range of population sizes when minimizing the log-likelihood and creating confidence intervals for the Moran and Schnute methods. Large values are much slower to compute, but values that are too low may result in missing the best estimate. A warning is issued if too low of a value is suspected. #' @param CIMicroFish A logical that indicates whether the t value used to calculate confidence intervals when \code{method="Burnham"} should be rounded to two or three decimals and whether the confidence intervals for No should be rounded to whole numbers as done in MicroFish 3.0. The default (\code{=FALSE}) is to NOT round the t values or No confidence interval. This option is provided only so that results will exactly match MicroFish results (see testing). +#' @param just.ests Deprecated as of v0.9.6. This was primarily used when using \code{removal} with a split-and-apply approach to estimate N for multiple groups. See examples and use of \code{incl.est=} in \code{confint} for similar functionality. #' @param \dots Additional arguments for methods.
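The deprecated split-and-apply workflow maps directly onto the new extractors. A minimal sketch, assuming the `incl.est=` and `as.df=` arguments that `confint.removal()` gains later in this patch series (they are not available in the commit above), with made-up catch vectors:

```r
library(FSA)
# made-up three-pass catches for two groups
ds <- list(A=c(57,34,12),B=c(54,26,9))
# one row per group: No, No.LCI, No.UCI, p, p.LCI, p.UCI
res <- do.call(rbind,lapply(ds,function(ct)
  confint(removal(ct),incl.est=TRUE,as.df=TRUE)))
res
```

The one-row data.frame return is what makes `rbind()` across groups trivial, which is the functionality `just.ests=TRUE` previously provided.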
#' -#' @return A vector that contains the estimates and standard errors for No and p if \code{just.ests=TRUE} or (default) a list with at least the following items: +#' @return A list with at least the following items: #' \itemize{ #' \item catch The original vector of observed catches. #' \item method The method used (provided by the user). @@ -151,60 +151,22 @@ #' p2a <- removal(ct4,method="Moran") #' p3a <- removal(ct4,method="Schnute") #' chi2.val <- 2*(p2a$min.nlogLH-p3a$min.nlogLH) # 4.74 in Schnute(1983) -#' pchisq(chi2.val,df=1,lower.tail=FALSE) # significant difference (catchability differs) +#' pchisq(chi2.val,df=1,lower.tail=FALSE) # sig diff (catchability differs) #' summary(p3a) #' -#' -#' ### Using lapply() to use removal() on many different groups -#' ### with the removals in a single variable ("long format") -#' ## create a dummy data frame -#' lake <- factor(rep(c("Ash Tree","Bark","Clay"),each=5)) -#' year <- factor(rep(c("2010","2011","2010","2011","2010","2011"),times=c(2,3,3,2,2,3))) -#' pass <- factor(c(1,2,1,2,3,1,2,3,1,2,1,2,1,2,3)) -#' catch <- c(57,34,65,34,12,54,26,9,54,27,67,34,68,35,12) -#' d <- data.frame(lake,year,pass,catch) -#' -#' ## create a variable that indicates each different group -#' d$group <- with(d,interaction(lake,year)) -#' d -#' ## split the catch by the different groups (creates a list of catch vectors) -#' ds <- split(d$catch,d$group) -#' ## apply removal() to each catch vector (i.e., different group) -#' res <- lapply(ds,removal,just.ests=TRUE) -#' res <- data.frame(t(data.frame(res,check.names=FALSE))) -#' ## get rownames from above and split into separate columns -#' nms <- t(data.frame(strsplit(rownames(res),"\\."))) -#' attr(nms,"dimnames") <- NULL -#' fnl <- data.frame(nms,res) -#' ## put names together with values -#' rownames(fnl) <- NULL -#' colnames(fnl)[1:2] <- c("Lake","Year") -#' fnl -#' -#' -#' ### Using apply() to use removal() on many different groups -#' ### with the removals in several variables 
("wide format") -#' ## create a dummy data frame (just reshaped from above as -#' ## an example; -5 to ignore the group variable from above) -#' d1 <- reshape(d[,-5],timevar="pass",idvar=c("lake","year"),direction="wide") -#' ## apply restore() to each row of only the catch data -#' res1 <- apply(d1[,3:5],MARGIN=1,FUN=removal,method="CarleStrub",just.ests=TRUE) -#' res1 <- data.frame(t(data.frame(res1,check.names=FALSE))) -#' ## add the grouping information to the results -#' fnl1 <- data.frame(d1[,1:2],res1) -#' ## put names together with values -#' rownames(fnl1) <- NULL -#' fnl1 -#' #' @rdname removal #' @export removal <- function(catch, method=c("CarleStrub","Zippin","Seber3","Seber2", "RobsonRegier2","Moran","Schnute","Burnham"), alpha=1,beta=1,CS.se=c("Zippin","alternative"), - conf.level=0.95,just.ests=FALSE,Tmult=3,CIMicroFish=FALSE) { + conf.level=0.95,Tmult=3,CIMicroFish=FALSE,just.ests=FALSE) { # some initial checks method <- match.arg(method) + if (just.ests) + message("'just.ests=' is deprecated as of v0.9.6. 'just.ests=' was used\n", + " primarily with split-and-apply for multiple groups. See 'incl.ests='\n", + " in 'confint()' and examples for same functionality in >v0.9.6.") ## Check on conf.level iCheckConfLevel(conf.level) @@ -250,12 +212,9 @@ removal <- function(catch, Schnute= { tmp <- iSchnute(catch,conf.level,Tmult) }, Burnham= { tmp <- iBurnham(catch,conf.level,Tmult,CIMicroFish) } ) - if (just.ests) { tmp <- tmp$est } - else { - tmp <- c(tmp,method=method,conf.level=conf.level) - class(tmp) <- "removal" - } - # return object + # Prepare object list to return + tmp <- c(tmp,method=method,conf.level=conf.level) + class(tmp) <- "removal" tmp } @@ -717,7 +676,8 @@ iBurnham <- function(catch,conf.level,Tmult,CIMicroFish){ #' @rdname removal #' @export -summary.removal <- function(object,parm=c("No","p","p1"),digits=getOption("digits"),verbose=FALSE,...) 
{ +summary.removal <- function(object,parm=c("No","p","p1"), + digits=getOption("digits"),verbose=FALSE,...) { parm <- match.arg(parm,several.ok=TRUE) # send warning if chose 'p1' parameter but not Schnute method # but don't warn if all parameters are chosen @@ -729,7 +689,8 @@ summary.removal <- function(object,parm=c("No","p","p1"),digits=getOption("digit parm <- parm[-which(parm=="p1")] } if (verbose) { - if (object$method %in% c("Moran","Schnute")) message("The ",object$lbl," was used (SEs not computed).") + if (object$method %in% c("Moran","Schnute")) + message("The ",object$lbl," was used (SEs not computed).") else message("The ",object$lbl," was used.") } if (object$method %in% c("Zippin","CarleStrub","Seber3","Seber2","RobsonRegier2","Burnham")) { @@ -754,7 +715,8 @@ summary.removal <- function(object,parm=c("No","p","p1"),digits=getOption("digit confint.removal <- function(object,parm=c("No","p"), level=conf.level,conf.level=NULL, digits=getOption("digits"),verbose=FALSE,...) { - if (!is.null(level)) WARN("The confidence level is not set here, it is set with 'conf.level=' in 'removal()'.") + if (!is.null(level)) + WARN("The confidence level is not set here, it is set with 'conf.level=' in 'removal()'.") parm <- match.arg(parm,several.ok=TRUE) if (object$method %in% c("Zippin","CarleStrub","Seber3","Seber2","RobsonRegier2","Burnham")) { res <- matrix(object$est[c("No.LCI","No.UCI","p.LCI","p.UCI")],nrow=2,byrow=TRUE) @@ -763,9 +725,11 @@ confint.removal <- function(object,parm=c("No","p"), } else { ## Handle some messaging if (object$method %in% c("Moran","Schnute")) { - # warn about no CIs for p with Moran and Schnute but only if p is the only parm chosen + # warn about no CIs for p with Moran and Schnute but only if p is only parm chosen if ("p" %in% parm) { - if (length(parm)==1) STOP("Confidence intervals for 'p' cannot be computed with ",object$method," method.") + if (length(parm)==1) + STOP("Confidence intervals for 'p' cannot be computed with ", + 
object$method," method.") parm <- "No" } # print messages about CI fails if they exist diff --git a/man/removal.Rd b/man/removal.Rd index ec1d080a..02b04cb5 100644 --- a/man/removal.Rd +++ b/man/removal.Rd @@ -14,9 +14,9 @@ removal( beta = 1, CS.se = c("Zippin", "alternative"), conf.level = 0.95, - just.ests = FALSE, Tmult = 3, - CIMicroFish = FALSE + CIMicroFish = FALSE, + just.ests = FALSE ) \method{summary}{removal}( @@ -50,12 +50,12 @@ removal( \item{conf.level}{A single number representing the level of confidence to use for constructing confidence intervals. This is sent in the main \code{removal} function rather than \code{confint}.} -\item{just.ests}{A logical that indicates whether just the estimates (\code{=TRUE}) or the return list (\code{=FALSE}; default; see below) is returned.} - \item{Tmult}{A single numeric that will be multiplied by the total catch in all samples to set the upper value for the range of population sizes when minimizing the log-likelihood and creating confidence intervals for the Moran and Schnute methods. Large values are much slower to compute, but values that are too low may result in missing the best estimate. A warning is issued if too low of a value is suspected.} \item{CIMicroFish}{A logical that indicates whether the t value used to calculate confidence intervals when \code{method="Burnham"} should be rounded to two or three decimals and whether the confidence intervals for No should be rounded to whole numbers as done in MicroFish 3.0. The default (\code{=FALSE}) is to NOT round the t values or No confidence interval. This option is provided only so that results will exactly match MicroFish results (see testing).} +\item{just.ests}{Deprecated as of v0.9.6. This was primarily used when using \code{removal} with a split-and-apply approach to estimate N for multiple groups. 
See examples and use of \code{incl.est=} in \code{confint} for similar functionality.} + \item{object}{An object saved from \code{removal()}.} \item{parm}{A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered.} @@ -69,7 +69,7 @@ removal( \item{level}{Not used, but here for compatibility with generic \code{confint} function.} } \value{ -A vector that contains the estimates and standard errors for No and p if \code{just.ests=TRUE} or (default) a list with at least the following items: +A list with at least the following items: \itemize{ \item catch The original vector of observed catches. \item method The method used (provided by the user). @@ -179,51 +179,9 @@ ct4 <- c(45,11,18,8) p2a <- removal(ct4,method="Moran") p3a <- removal(ct4,method="Schnute") chi2.val <- 2*(p2a$min.nlogLH-p3a$min.nlogLH) # 4.74 in Schnute(1983) -pchisq(chi2.val,df=1,lower.tail=FALSE) # significant difference (catchability differs) +pchisq(chi2.val,df=1,lower.tail=FALSE) # sig diff (catchability differs) summary(p3a) - -### Using lapply() to use removal() on many different groups -### with the removals in a single variable ("long format") -## create a dummy data frame -lake <- factor(rep(c("Ash Tree","Bark","Clay"),each=5)) -year <- factor(rep(c("2010","2011","2010","2011","2010","2011"),times=c(2,3,3,2,2,3))) -pass <- factor(c(1,2,1,2,3,1,2,3,1,2,1,2,1,2,3)) -catch <- c(57,34,65,34,12,54,26,9,54,27,67,34,68,35,12) -d <- data.frame(lake,year,pass,catch) - -## create a variable that indicates each different group -d$group <- with(d,interaction(lake,year)) -d -## split the catch by the different groups (creates a list of catch vectors) -ds <- split(d$catch,d$group) -## apply removal() to each catch vector (i.e., different group) -res <- lapply(ds,removal,just.ests=TRUE) -res <- data.frame(t(data.frame(res,check.names=FALSE))) -## get rownames from above and split into separate
columns -nms <- t(data.frame(strsplit(rownames(res),"\\\\."))) -attr(nms,"dimnames") <- NULL -fnl <- data.frame(nms,res) -## put names together with values -rownames(fnl) <- NULL -colnames(fnl)[1:2] <- c("Lake","Year") -fnl - - -### Using apply() to use removal() on many different groups -### with the removals in several variables ("wide format") -## create a dummy data frame (just reshaped from above as -## an example; -5 to ignore the group variable from above) -d1 <- reshape(d[,-5],timevar="pass",idvar=c("lake","year"),direction="wide") -## apply restore() to each row of only the catch data -res1 <- apply(d1[,3:5],MARGIN=1,FUN=removal,method="CarleStrub",just.ests=TRUE) -res1 <- data.frame(t(data.frame(res1,check.names=FALSE))) -## add the grouping information to the results -fnl1 <- data.frame(d1[,1:2],res1) -## put names together with values -rownames(fnl1) <- NULL -fnl1 - } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. diff --git a/tests/testthat/testthat_removal.R b/tests/testthat/testthat_removal.R index c6e9523b..d9f0e317 100644 --- a/tests/testthat/testthat_removal.R +++ b/tests/testthat/testthat_removal.R @@ -296,19 +296,6 @@ test_that("removal with 'CarleStrub' matches Cowx (1983) page 77",{ expect_equal(round(tmp["p","Estimate"],3),0.187) }) -test_that("removal with 'CarleStrub' match results from Jones & Stockwell (1995)",{ - data(JonesStockwell,package="FSAdata") - # isolate captures and Carle-Strub estimates ... 
for non-rejected estimates - JS.caps <- JonesStockwell[!JonesStockwell$rejected,4:6] - JS.cs <- JonesStockwell[!JonesStockwell$rejected,7] - # compute Carle-Strub estimates for all data in JS.caps - tmp <- apply(JS.caps,1,removal,just.ests=TRUE)["No",] - # Make a comparison matrix - compJS <- round(cbind(tmp,JS.cs,tmp-JS.cs,(tmp-JS.cs)/JS.cs*100),1) - # all values are within 3 - expect_true(all(abs(compJS[,3])<=3,na.rm=TRUE)) -}) - test_that("removal with 'Seber3' matches Cowx (1983) page 75",{ tmp <- summary(removal(c(72,56,46),method="Seber3")) expect_equal(round(tmp["No","Estimate"],0),353) From fc9f7af8dbcdbab258592bbb37dd4ef39298ba29 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Sat, 4 Jan 2025 15:50:32 -0600 Subject: [PATCH 16/21] Added formula for removal(), added coef(), modified summary() and confint() --- DESCRIPTION | 1 + NAMESPACE | 3 + NEWS.md | 6 +- R/removal.R | 267 ++++++++++++++++++++++-------- man/removal.Rd | 106 ++++++++++-- tests/testthat/testthat_removal.R | 153 ++++++++++++++--- 6 files changed, 436 insertions(+), 100 deletions(-) diff --git a/DESCRIPTION b/DESCRIPTION index 030e8a7c..d0854454 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -48,6 +48,7 @@ Suggests: rmarkdown, testthat (>= 3.0.0), tibble, + tidyr, covr Encoding: UTF-8 RoxygenNote: 7.3.2 diff --git a/NAMESPACE b/NAMESPACE index 2718637e..e3453d3e 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -11,6 +11,7 @@ S3method(chapmanRobson,formula) S3method(coef,catchCurve) S3method(coef,chapmanRobson) S3method(coef,depletion) +S3method(coef,removal) S3method(confint,boot) S3method(confint,catchCurve) S3method(confint,chapmanRobson) @@ -51,6 +52,8 @@ S3method(rSquared,catchCurve) S3method(rSquared,default) S3method(rSquared,depletion) S3method(rSquared,lm) +S3method(removal,default) +S3method(removal,formula) S3method(sumTable,formula) S3method(summary,ageBias) S3method(summary,agePrec) diff --git a/NEWS.md b/NEWS.md index 96d2c366..d6a7655a 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,6 +1,6 @@ 
# FSA 0.9.5.9000 * Updated testing to use `testthat` v3.0.0. - * Changes to `DESCRIPTION`. + * Changes to `DESCRIPTION` including adding `tidyr` in Suggests (for example in `removal()`). * Replaced MANY `expect_is()` with `expect_equal(class())` idioms. * Replaced many `expect_equivalent()` with `expect_equal()` as `expect_equivalent()` was not needed to begin with. * Replaced many `expect_equivalent()` with `expect_equal(,ignore_attr=TRUE)` as `expect_equivalent()` was deprecated. @@ -26,8 +26,10 @@ * Added an example for computing an average M or cm from multiple model results. * `Mmethods()`: Modified. Changed `what=` to `method=` for simplicity with `metaM()`. * `removal()`: Modified. + * Added a formula version to better match `depletion()`. * Deprecated `just.ests=`. Functionality will be largely replaced with `incl.est=` in `confint()`. Replaced split-apply example with a new one that performs similarly without `just.ests=` and is more in-line with examples in `depletion` *et al.* - + * Added `coef()` extractor function. + * Modified `confint()` and `summary()` extractor functions to better match `depletion()`. # FSA 0.9.5 * Fixed FSA-package \alias problem using the "automatic approach" (i.e., adding a "_PACKAGE" line to FSA.R) suggested in an e-mail from Kurt Hornik on 19-Aug-2023. diff --git a/R/removal.R index ea9f9986..36aa3e84 100644 --- a/R/removal.R +++ b/R/removal.R @@ -20,7 +20,8 @@ #' #' Confidence intervals for the last method are computed as per Ken Burnham's instructions for the Burnham Method (Jack Van Deventer, personal communication). Specifically, they are calculated with the t-statistic and No-1 degrees of freedom. Please note that the MicroFish software rounds the t-statistic before it calculates the confidence intervals about No and p. If you need the confidence intervals produced by FSA::removal to duplicate MicroFish, please use CIMicroFish=TRUE. #' -#' @param catch A numerical vector of catch at each pass.
+#' @param catch A numerical vector of catch at each pass, or a formula of the form \code{~catch}. +#' @param data A data.frame from which the variables in the \code{catch} formula can be found. Not used if \code{catch} is not a formula. #' @param method A single string that identifies the removal method to use. See details. #' @param alpha A single numeric value for the alpha parameter in the CarleStrub method (default is \code{1}). #' @param beta A single numeric value for the beta parameter in the CarleStrub method (default is \code{1}). @@ -34,6 +35,8 @@ #' @param Tmult A single numeric that will be multiplied by the total catch in all samples to set the upper value for the range of population sizes when minimizing the log-likelihood and creating confidence intervals for the Moran and Schnute methods. Large values are much slower to compute, but values that are too low may result in missing the best estimate. A warning is issued if too low of a value is suspected. #' @param CIMicroFish A logical that indicates whether the t value used to calculate confidence intervals when \code{method="Burnham"} should be rounded to two or three decimals and whether the confidence intervals for No should be rounded to whole numbers as done in MicroFish 3.0. The default (\code{=FALSE}) is to NOT round the t values or No confidence interval. This option is provided only so that results will exactly match MicroFish results (see testing). #' @param just.ests Deprecated as of v0.9.6. This was primarily used when using \code{removal} with a split-and-apply approach to estimate N for multiple groups. See examples and use of \code{incl.est=} in \code{confint} for similar functionality. +#' @param as.df A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Defaults to \code{FALSE}.
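As a sketch of the two return shapes that `as.df=` and `incl.est=` select between (assuming the API added in this commit, with made-up catches): the default is a labeled matrix, while `as.df=TRUE` collapses the results to a one-row data.frame.

```r
library(FSA)
ct <- c(38,26,12)          # made-up three-pass catches
p1 <- removal(ct)          # Carle-Strub method by default
confint(p1)                # matrix: rows No and p, columns LCI and UCI
confint(p1,incl.est=TRUE,as.df=TRUE)  # one-row data.frame (No, No.LCI, No.UCI, p, ...)
```

The matrix form suits interactive inspection; the data.frame form suits row-binding results from many groups.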
+#' @param incl.est A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}. Defaults to \code{FALSE}. #' @param \dots Additional arguments for methods. #' #' @return A list with at least the following items: @@ -138,7 +141,12 @@ #' summary(p4,verbose=TRUE) #' confint(p4) #' -#' +#' ## Use formula with a data.frame +#' d <- data.frame(ct=ct3) +#' p1a <- removal(~ct,data=d) +#' summary(p1a,verbose=TRUE) +#' confint(p1a,incl.est=TRUE) +#' #' ### Test if catchability differs between first sample and the other samples #' # chi-square test statistic from negative log-likelihoods #' # from Moran and Schnute fits (from above) @@ -154,13 +162,86 @@ #' pchisq(chi2.val,df=1,lower.tail=FALSE) # sig diff (catchability differs) #' summary(p3a) #' +#' # Demonstrate multiple groups ... data in long format +#' ## create a dummy data frame +#' d <- data.frame(lake=factor(rep(c("Ash Tree","Bark","Clay"),each=5)), +#' year=factor(rep(c("2010","2011","2010","2011","2010","2011"), +#' times=c(2,3,3,2,2,3))), +#' pass=factor(c(1,2,1,2,3,1,2,3,1,2,1,2,1,2,3)), +#' catch=c(57,34,65,34,12,54,26,9,54,27,67,34,68,35,12)) +#' d +#' +#' ## note use of confint with incl.est= and as.df= +#' if (require(dplyr) & require(tidyr)) { +#' res <- d %>% +#' dplyr::group_by(interaction(lake,year)) %>% +#' dplyr::group_modify(~confint(removal(~catch,data=.x), +#' incl.est=TRUE,as.df=TRUE)) %>% +#' tidyr::separate_wider_delim(1,names=c("lake","year"),delim=".") %>% +#' as.data.frame() # removes tibble and grouping structure +#' res +#' } +#' +#' # Demonstrate multiple groups ... data in wide format +#' ## create a dummy data frame ... same data as previous ...
note that this is +#' ## not an efficient way to enter data, used here just for simple example +#' d2w <- rbind(data.frame(lake="Ash Tree",year=2010,pass1=57,pass2=34,pass3=NA), +#' data.frame(lake="Ash Tree",year=2011,pass1=65,pass2=34,pass3=12), +#' data.frame(lake="Bark",year=2010,pass1=54,pass2=26,pass3=9), +#' data.frame(lake="Bark",year=2011,pass1=54,pass2=27,pass3=NA), +#' data.frame(lake="Clay",year=2010,pass1=67,pass2=34,pass3=NA), +#' data.frame(lake="Clay",year=2011,pass1=68,pass2=35,pass3=12)) +#' d2w +#' +#' ## convert to long format first +#' d2l <- tidyr::pivot_longer(d2w,cols=c("pass1","pass2","pass3"), +#' names_to="pass",values_to="catch") +#' d2l +#' +#' ## then same process as previous example +#' if (require(dplyr)) { +#' res2 <- d2l %>% +#' dplyr::group_by(interaction(lake,year)) %>% +#' dplyr::group_modify(~confint(removal(~catch,data=.x), +#' incl.est=TRUE,as.df=TRUE)) %>% +#' tidyr::separate_wider_delim(1,names=c("lake","year"),delim=".") %>% +#' as.data.frame() # removes tibble and grouping structure +#' res2 +#' } +#' +#' @rdname removal +#' @export +removal <- function(catch,...) { + UseMethod("removal") +} + +#' @rdname removal +#' @export +removal.formula <- function(catch,data, + method=c("CarleStrub","Zippin","Seber3","Seber2", + "RobsonRegier2","Moran","Schnute","Burnham"), + alpha=1,beta=1,CS.se=c("Zippin","alternative"), + conf.level=0.95,Tmult=3,CIMicroFish=FALSE,...)
{ + ## Handle the formula and perform some checks + tmp <- iHndlFormula(catch,data,expNumR=0,expNumE=1) + if (tmp$vnum>1) + STOP("'removal' formula must have only one variable (on RHS).") + if (!tmp$vclass %in% c("numeric","integer")) + STOP("RHS variable (catch) must be numeric.") + ## Get variables from model frame + catch <- tmp$mf[,tmp$vname] + ## Call the default function + removal.default(catch,method=method,alpha=alpha,beta=beta,CS.se=CS.se, + conf.level=conf.level,Tmult=Tmult,CIMicroFish=CIMicroFish) +} + #' @rdname removal #' @export -removal <- function(catch, - method=c("CarleStrub","Zippin","Seber3","Seber2", - "RobsonRegier2","Moran","Schnute","Burnham"), - alpha=1,beta=1,CS.se=c("Zippin","alternative"), - conf.level=0.95,Tmult=3,CIMicroFish=FALSE,just.ests=FALSE) { +removal.default <- function(catch, + method=c("CarleStrub","Zippin","Seber3","Seber2", + "RobsonRegier2","Moran","Schnute","Burnham"), + alpha=1,beta=1,CS.se=c("Zippin","alternative"), + conf.level=0.95,Tmult=3,CIMicroFish=FALSE, + just.ests=FALSE,...)
{ # some initial checks method <- match.arg(method) if (just.ests) @@ -352,7 +433,8 @@ iMoran <- function(catch,conf.level,Tmult) { p <- T/(k*N0-X) # compute confidence intervals for No tmpci <- iRemovalLHCI("Moran",catch,conf.level,k,T,X,tmp$objective,Tmult) - est <- c(No=N0,No.LCI=tmpci$CI[[1]],No.UCI=tmpci$CI[[2]],p=p) + est <- c(No=N0,No.LCI=tmpci$CI[[1]],No.UCI=tmpci$CI[[2]], + p=p,p.se=NA,p.LCI=NA,p.UCI=NA) } # return list list(est=est,catch=catch,min.nlogLH=tmp$objective,Tmult=Tmult, @@ -408,7 +490,9 @@ iSchnute <- function(catch,conf.level,Tmult) { p <- (T-catch[1])/((k-1)*(N0-catch[1])-(X-(k-1)*catch[1])) # compute confidence intervals for No tmpci <- iRemovalLHCI("Schnute",catch,conf.level,k=k,T=T,X=X,tmp$objective,Tmult) - est <- c(No=N0,No.LCI=tmpci$CI[[1]],No.UCI=tmpci$CI[[2]],p=p,p1=p1) + est <- c(No=N0,No.se=NA,No.LCI=tmpci$CI[[1]],No.UCI=tmpci$CI[[2]], + p=p,p.se=NA,p.LCI=NA,p.UCI=NA, + p1=p1,p1.se=NA,p1.LCI=NA,p1.UCI=NA) } # return list list(est=est,catch=catch,min.nlogLH=tmp$objective,Tmult=Tmult, @@ -674,71 +758,124 @@ iBurnham <- function(catch,conf.level,Tmult,CIMicroFish){ lbl="Burnham K-Pass Removal Method (Van Deventer and Platts 1983)") } +#============================================================= +# INTERNAL -- Handle parm= argument for removal extractors. Is +# more complex because can be 2 or 3 parms +# depending on the method used. +#============================================================= +iHndlRemovalParms <- function(object,parm) { + if (any(parm=="all")) { + if (object$method=="Schnute") parm <- c("No","p","p1") + else parm <- c("No","p") + } else if (any(parm=="p1") & object$method!="Schnute") { + msg <- paste0("'",object$method,"' method does not use 'p1' parameter.") + if (length(parm)==1) STOP(msg) + else WARN(msg) + parm <- parm[parm!="p1"] + } + parm +} + +## Extractor functions #' @rdname removal #' @export -summary.removal <- function(object,parm=c("No","p","p1"), - digits=getOption("digits"),verbose=FALSE,...) 
{ +coef.removal <- function(object,parm=c("all","No","p","p1"),as.df=FALSE,...) { parm <- match.arg(parm,several.ok=TRUE) - # send warning if chose 'p1' parameter but not Schnute method - # but don't warn if all parameters are chosen - # but stop if only p1 was chosen - if (("p1" %in% parm) & object$method!="Schnute") { - msg <- paste("'p1' parameter not relevant for the ",object$method," method.") - if (length(parm)==1) STOP(msg) - if (length(parm)<3) WARN(msg) - parm <- parm[-which(parm=="p1")] - } - if (verbose) { - if (object$method %in% c("Moran","Schnute")) - message("The ",object$lbl," was used (SEs not computed).") - else message("The ",object$lbl," was used.") - } - if (object$method %in% c("Zippin","CarleStrub","Seber3","Seber2","RobsonRegier2","Burnham")) { - res <- matrix(object$est[c("No","No.se","p","p.se")],nrow=2,byrow=TRUE) - colnames(res) <- c("Estimate","Std. Error") - rownames(res) <- c("No","p") - } else if (object$method=="Moran") { - res <- matrix(object$est[c("No","p")],nrow=2) - colnames(res) <- c("Estimate") - rownames(res) <- c("No","p") - } else { - res <- matrix(object$est[c("No","p","p1")],nrow=3) - colnames(res) <- c("Estimate") - rownames(res) <- c("No","p","p1") - } - res <- res[which(rownames(res) %in% parm),,drop=FALSE] - round(res,digits) + parm <- iHndlRemovalParms(object,parm) + # matrix of all possible results + res <- object$est[parm] + # convert to data.frame if asked + if (as.df) res <- as.data.frame(t(res)) + res } #' @rdname removal #' @export -confint.removal <- function(object,parm=c("No","p"), +confint.removal <- function(object,parm=c("all","No","p","p1"), level=conf.level,conf.level=NULL, - digits=getOption("digits"),verbose=FALSE,...) { + digits=getOption("digits"),verbose=FALSE, + incl.est=FALSE,as.df=FALSE,...) 
{ + parm <- match.arg(parm,several.ok=TRUE) + parm <- iHndlRemovalParms(object,parm) + + # Handle some messaging if (!is.null(level)) - WARN("The confidence level is not set here, it is set with 'conf.level=' in 'removal()'.") + WARN("The confidence level is not set here, it was set at ",object$conf.level, + " in the original 'removal()' call.") + if (object$method %in% c("Moran","Schnute")) { + ## print messages about CI fails if they exist (only for Moran & Schnute) + if (!is.na(object$LCImsg) & verbose) message(object$LCImsg) + if (!is.na(object$UCImsg) & verbose) message(object$UCImsg) + + ## CI for p cannot be computed for Moran and Schnute methods. + ## Warn if those parms are selected with those methods + ## Stop if those are the only parms selected with those methods + if ("p" %in% parm) { + msg <- paste0("Confidence intervals for 'p' can not be computed for ", + object$method," method.") + if (length(parm)==1) STOP(msg) + else message(msg) + } + } + ## CI for p1 cannot be computed for Schnute method. 
+ if (object$method=="Schnute") { + if ("p1" %in% parm) { + msg <- paste0("Confidence intervals for 'p1' can not be computed for ", + object$method," method.") + if (length(parm)==1) STOP(msg) + else message(msg) + } + } + + # Append parm names for selecting from vector + if (incl.est) parm.nms <- paste0(rep(parm,each=3),c("",".LCI",".UCI")) + else parm.nms <- paste0(rep(parm,each=2),c(".LCI",".UCI")) + + # Get vector of results out of object.est + resv <- object$est[parm.nms] + # Matrix of results + resm <- matrix(resv,nrow=length(parm),byrow=TRUE) + tmp <- iCILabel(object$conf.level) + if (incl.est) colnames(resm) <- c("Est",tmp) + else colnames(resm) <- tmp + rownames(resm) <- parm + resm <- round(resm,digits) + # Data.frame of results + resd <- as.data.frame(t(resv)) + # Return appropriate results + if (as.df) resd + else resm +} + + +#' @rdname removal +#' @export +summary.removal <- function(object,parm=c("all","No","p","p1"), + digits=getOption("digits"), + verbose=FALSE,as.df=FALSE,...) 
{ parm <- match.arg(parm,several.ok=TRUE) - if (object$method %in% c("Zippin","CarleStrub","Seber3","Seber2","RobsonRegier2","Burnham")) { - res <- matrix(object$est[c("No.LCI","No.UCI","p.LCI","p.UCI")],nrow=2,byrow=TRUE) - rownames(res) <- c("No","p") - res <- res[which(rownames(res) %in% parm),,drop=FALSE] - } else { - ## Handle some messaging - if (object$method %in% c("Moran","Schnute")) { - # warn about no CIs for p with Moran and Schnute but only if p is only parm chosen - if ("p" %in% parm) { - if (length(parm)==1) - STOP("Confidence intervals for 'p' cannot be computed with ", - object$method," method.") - parm <- "No" - } - # print messages about CI fails if they exist - if (!is.na(object$LCImsg) & verbose) message(object$LCImsg) - if (!is.na(object$UCImsg) & verbose) message(object$UCImsg) - } - res <- matrix(object$est[c("No.LCI","No.UCI")],nrow=1) - rownames(res) <- c("No") + parm <- iHndlRemovalParms(object,parm) + + # Handle some messaging + if (verbose) { + if (object$method %in% c("Moran","Schnute")) + message("The ",object$lbl," was used (SEs not computed).") + else message("The ",object$lbl," was used.") } - colnames(res) <- iCILabel(object$conf.level) - round(res,digits) + + # Append parm names for selecting from vector + parm.nms <- paste0(rep(parm,each=2),c("",".se")) + + # Get vector of results out of object.est + resv <- object$est[parm.nms] + # Matrix of results + resm <- matrix(resv,nrow=length(parm),byrow=TRUE) + colnames(resm) <- c("Estimate","Std. 
Error") + rownames(resm) <- parm + resm <- round(resm,digits) + # Data.frame of results + resd <- as.data.frame(t(resv)) + # Return appropriate results + if (as.df) resd + else resm } diff --git a/man/removal.Rd b/man/removal.Rd index 02b04cb5..5304f78e 100644 --- a/man/removal.Rd +++ b/man/removal.Rd @@ -4,10 +4,16 @@ \alias{removal} \alias{summary.removal} \alias{confint.removal} +\alias{removal.formula} +\alias{removal.default} +\alias{coef.removal} \title{Population estimates for k-, 3-, or 2-pass removal data.} \usage{ -removal( +removal(catch, ...) + +\method{removal}{formula}( catch, + data, method = c("CarleStrub", "Zippin", "Seber3", "Seber2", "RobsonRegier2", "Moran", "Schnute", "Burnham"), alpha = 1, @@ -16,29 +22,52 @@ removal( conf.level = 0.95, Tmult = 3, CIMicroFish = FALSE, - just.ests = FALSE + ... ) -\method{summary}{removal}( - object, - parm = c("No", "p", "p1"), - digits = getOption("digits"), - verbose = FALSE, +\method{removal}{default}( + catch, + method = c("CarleStrub", "Zippin", "Seber3", "Seber2", "RobsonRegier2", "Moran", + "Schnute", "Burnham"), + alpha = 1, + beta = 1, + CS.se = c("Zippin", "alternative"), + conf.level = 0.95, + Tmult = 3, + CIMicroFish = FALSE, + just.ests = FALSE, ... ) +\method{coef}{removal}(object, parm = c("all", "No", "p", "p1"), as.df = FALSE, ...) + \method{confint}{removal}( object, - parm = c("No", "p"), + parm = c("all", "No", "p", "p1"), level = conf.level, conf.level = NULL, digits = getOption("digits"), verbose = FALSE, + incl.est = FALSE, + as.df = FALSE, + ... +) + +\method{summary}{removal}( + object, + parm = c("all", "No", "p", "p1"), + digits = getOption("digits"), + verbose = FALSE, + as.df = FALSE, ... 
) } \arguments{ -\item{catch}{A numerical vector of catch at each pass.} +\item{catch}{A numerical vector of catch at each pass, or a formula of the form \code{~catch}.} + +\item{\dots}{Additional arguments for methods.} + +\item{data}{A data.frame from which the variables in the \code{catch} formula can be found. Not used if \code{catch} is not a formula.} \item{method}{A single string that identifies the removal method to use. See details.} @@ -60,13 +89,15 @@ removal( \item{parm}{A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered.} +\item{as.df}{A logical that indicates whether the results of \code{coef}, \code{confint}, or \code{summary} should be returned as a data.frame. Defaults to \code{FALSE}.} + +\item{level}{Not used, but here for compatibility with generic \code{confint} function.} + \item{digits}{A single numeric that controls the number of decimals in the output from \code{summary} and \code{confint}.} \item{verbose}{A logical that indicates whether descriptive labels should be printed from \code{summary} and if certain warnings are shown with \code{confint}.} -\item{\dots}{Additional arguments for methods.} - -\item{level}{Not used, but here for compatibility with generic \code{confint} function.} +\item{incl.est}{A logical that indicates whether the parameter point estimate should be included in the results from \code{confint}. 
Defaults to \code{FALSE}.} } \value{ A list with at least the following items: @@ -166,6 +197,11 @@ p4 <- removal(ct2,method="Seber2") summary(p4,verbose=TRUE) confint(p4) +## Use formula with a data.frame +d <- data.frame(ct=ct3) +p1a <- removal(~ct,data=d) +summary(p1a,verbose=TRUE) +confint(p1a,incl.est=TRUE) ### Test if catchability differs between first sample and the other samples # chi-square test statistic from negative log-likelihoods @@ -182,6 +218,52 @@ chi2.val <- 2*(p2a$min.nlogLH-p3a$min.nlogLH) # 4.74 in Schnute(1983) pchisq(chi2.val,df=1,lower.tail=FALSE) # sig diff (catchability differs) summary(p3a) +# Demonstrate multiple groups ... data in long format +## create a dummy data frame +d <- data.frame(lake=factor(rep(c("Ash Tree","Bark","Clay"),each=5)), + year=factor(rep(c("2010","2011","2010","2011","2010","2011"), + times=c(2,3,3,2,2,3))), + pass=factor(c(1,2,1,2,3,1,2,3,1,2,1,2,1,2,3)), + catch=c(57,34,65,34,12,54,26,9,54,27,67,34,68,35,12)) +d + +## note use of confint with incl.est= and as.df= +if (require(dplyr) & require(tidyr)) { + res <- d \%>\% + dplyr::group_by(interaction(lake,year)) \%>\% + dplyr::group_modify(~confint(removal(~catch,data=.x), + incl.est=TRUE,as.df=TRUE)) \%>\% + tidyr::separate_wider_delim(1,names=c("lake","year"),delim=".") \%>\% + as.data.frame() # removes tibble and grouping structure + res +} + +# Demonstrate multiple groups ... data in wide format +## create a dummy data frame ... same data as previous ... 
note that this is +## not an efficient way to enter data, used here just for simple example +d2w <- rbind(data.frame(lake="Ash Tree",year=2011,pass1=65,pass2=34,pass3=12), + data.frame(lake="Bark",year=2010,pass1=54,pass2=26,pass3=9), + data.frame(lake="Bark",year=2011,pass1=54,pass2=27,pass3=NA), + data.frame(lake="Clay",year=2010,pass1=67,pass2=34,pass3=NA), + data.frame(lake="Clay",year=2011,pass1=68,pass2=35,pass3=12)) +d2w + +## convert to long format first +d2l <- tidyr::pivot_longer(d2w,cols=c("pass1","pass2","pass3"), + names_to="pass",values_to="catch") +d2l + +## then same process as previous example +if (require(dplyr)) { + res2 <- d2l \%>\% + dplyr::group_by(interaction(lake,year)) \%>\% + dplyr::group_modify(~confint(removal(~catch,data=.x), + incl.est=TRUE,as.df=TRUE)) \%>\% + tidyr::separate_wider_delim(1,names=c("lake","year"),delim=".") \%>\% + as.data.frame() # removes tibble and grouping structure + res2 +} + } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. 
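The `removal()` examples above pipe `confint(...,incl.est=TRUE,as.df=TRUE)` through `dplyr::group_modify()` to get one row of estimates per group. For intuition about what is being estimated, the simplest case is easy to verify by hand. The sketch below is illustrative only: it is not FSA code, and it shows Seber's classic two-pass removal estimator rather than the package's default Carle-Strub method; the catches 38 and 26 match those used in the package tests.

```python
def seber_two_pass(c1, c2):
    """Seber's two-pass removal estimator: abundance No = c1^2/(c1 - c2)
    and per-pass capture probability p = (c1 - c2)/c1.
    Requires a declining catch (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("Seber two-pass estimator requires c1 > c2")
    p = (c1 - c2) / c1          # per-pass capture probability
    no = c1 ** 2 / (c1 - c2)    # initial population size
    return no, p

# Catches as in removal(c(38,26), method="Seber2")
no, p = seber_two_pass(38, 26)
print(round(no, 1), round(p, 3))  # 120.3 0.316
```

`removal()` layers standard errors and the confidence intervals extracted by `confint()` on top of this kind of point estimate.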
diff --git a/tests/testthat/testthat_removal.R b/tests/testthat/testthat_removal.R index d9f0e317..f963dd32 100644 --- a/tests/testthat/testthat_removal.R +++ b/tests/testthat/testthat_removal.R @@ -79,6 +79,7 @@ test_that("removal() messages",{ "model failure") expect_warning(removal(c(0,0),method="RobsonRegier2"), "model failure") + ## wrong parm in summary and confint tmp <- removal(c(346,184,49)) expect_error(summary(tmp,parm="Derek"), @@ -91,6 +92,7 @@ test_that("removal() messages",{ "must be between 0 and 1") expect_error(removal(c(346,184,49),conf.level="R"), "must be numeric") + ## Check whole number, expect_warning(removal(c(346,184,49.1)), "'catch' contains non-whole numbers.") @@ -100,23 +102,34 @@ test_that("removal() messages",{ ## Bad data leads to failure of Zippin (from Carle-Strub (1978) example 2) expect_warning(removal(c(5,7,8),method="Zippin"), "Zippin model failure") + ## Chose "p1" summary for other than Schnute method tmp <- removal(c(45,11,18,8),method="Zippin") expect_error(summary(tmp,parm="p1"), - "parameter not relevant") + "method does not use 'p1' parameter") expect_warning(summary(tmp,parm=c("p","p1")), - "parameter not relevant") + "method does not use 'p1' parameter") + ## Chose only "p" CI for Moran or Schnute method tmp <- removal(c(45,11,18,8),method="Schnute") expect_error(confint(tmp,parm="p"), - "cannot be computed with Schnute") + "Confidence intervals for 'p' can not be computed") + ## Chose bad value for Tmult expect_error(removal(c(45,11,18,8),method="Moran",Tmult=0.9), "greater than 1") expect_warning(removal(c(45,11,18,8),method="Moran",Tmult=1.2), "try increasing") ## NAs in catch vector - expect_warning(removal(c(45,11,NA,8)),"'NA's removed from") + expect_warning(removal(c(45,11,NA,8)),"'NA's removed from") + + ## Formula issues + expect_error(removal(ct~pass, + data=data.frame(ct=c(12,7,2),pass=c("a","b","c"))), + "'removal' formula must have only one variable") + expect_error(removal(~pass, + 
data=data.frame(ct=c(12,7,2),pass=c("a","b","c"))), + "must be numeric") }) test_that("removal() verbose= messages",{ @@ -140,6 +153,15 @@ test_that("removal() verbose= messages",{ ## Test Output Types ---- +test_that("removal.formula() same as removal.default()",{ + ct <- c(38,26,12) + d <- data.frame(ct=ct) + expect_equal(removal(~ct,data=d),removal(ct)) + expect_equal(removal(~ct,data=d,method="Moran"),removal(ct,method="Moran")) + expect_equal(removal(~ct,data=d,method="Schnute"),removal(ct,method="Schnute")) + # formula form matches default form ... only check default form after this +}) + test_that("removal() return types",{ expect_equal(class(removal(c(38,26,12))),"removal") expect_equal(class(removal(c(38,26,12),method="Moran")),"removal") @@ -153,6 +175,35 @@ test_that("removal() return types",{ expect_equal(class(removal(data.frame(c(38,26,12)))),"removal") expect_equal(class(removal(matrix(c(38,26,12),nrow=1))),"removal") expect_equal(class(removal(matrix(c(38,26,12),ncol=1))),"removal") + + # coef() results + tmp <- removal(c(38,26,12)) + tmp2 <- coef(tmp) + expect_true(is.vector(tmp2)) + expect_equal(class(tmp2),"numeric") + expect_equal(length(tmp2),2) + expect_equal(names(tmp2),c("No","p")) + tmp2 <- coef(tmp,as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(ncol(tmp2),2) + expect_equal(nrow(tmp2),1) + expect_equal(names(tmp2),c("No","p")) + tmp2 <- coef(tmp,parm="p") + expect_true(is.vector(tmp2)) + expect_equal(class(tmp2),"numeric") + expect_equal(length(tmp2),1) + expect_equal(names(tmp2),"p") + tmp <- removal(c(38,26,12),method="Schnute") + tmp2 <- coef(tmp) + expect_true(is.vector(tmp2)) + expect_equal(class(tmp2),"numeric") + expect_equal(length(tmp2),3) + expect_equal(names(tmp2),c("No","p","p1")) + tmp2 <- coef(tmp,as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(ncol(tmp2),3) + expect_equal(nrow(tmp2),1) + expect_equal(names(tmp2),c("No","p","p1")) # summary() results tmp <- removal(c(38,26,12)) @@ -178,21 +229,21 
@@ test_that("removal() return types",{ tmp2 <- summary(tmp) expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),3) - expect_equal(ncol(tmp2),1) + expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p","p1")) - expect_equal(colnames(tmp2),c("Estimate")) + expect_equal(colnames(tmp2),c("Estimate","Std. Error")) tmp2 <- summary(tmp,parm="p1") expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),1) - expect_equal(ncol(tmp2),1) + expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("p1")) - expect_equal(colnames(tmp2),c("Estimate")) - tmp2 <- summary(tmp,parm=c("p","No")) + expect_equal(colnames(tmp2),c("Estimate","Std. Error")) + tmp2 <- summary(tmp,parm=c("No","p")) expect_equal(class(tmp2),c("matrix","array")) expect_equal(nrow(tmp2),2) - expect_equal(ncol(tmp2),1) + expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p")) - expect_equal(colnames(tmp2),c("Estimate")) + expect_equal(colnames(tmp2),c("Estimate","Std. Error")) tmp <- removal(c(38,26),method="Seber2") tmp2 <- summary(tmp) expect_equal(class(tmp2),c("matrix","array")) @@ -200,7 +251,41 @@ test_that("removal() return types",{ expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No","p")) expect_equal(colnames(tmp2),c("Estimate","Std. Error")) - + + # summary() results ... 
as.df=TRUE + tmp <- removal(c(38,26,12)) + tmp2 <- summary(tmp,as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),4) + expect_equal(names(tmp2),c("No","No.se","p","p.se")) + tmp2 <- summary(tmp,parm="p",as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),2) + expect_equal(names(tmp2),c("p","p.se")) + tmp2 <- summary(tmp,parm="No",as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),2) + expect_equal(names(tmp2),c("No","No.se")) + tmp <- removal(c(38,26,12),method="Schnute") + tmp2 <- summary(tmp,as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),6) + expect_equal(names(tmp2),c("No","No.se","p","p.se","p1","p1.se")) + tmp2 <- summary(tmp,parm="p1",as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),2) + expect_equal(names(tmp2),c("p1","p1.se")) + tmp2 <- summary(tmp,parm=c("p","p1"),as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),4) + expect_equal(names(tmp2),c("p","p.se","p1","p1.se")) + # confint() results tmp <- removal(c(38,26,12)) tmp2 <- confint(tmp) @@ -222,11 +307,11 @@ test_that("removal() return types",{ expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp <- removal(c(38,26,12),method="Schnute") - tmp2 <- confint(tmp) + tmp2 <- suppressMessages(confint(tmp)) expect_equal(class(tmp2),c("matrix","array")) - expect_equal(nrow(tmp2),1) + expect_equal(nrow(tmp2),3) expect_equal(ncol(tmp2),2) - expect_equal(rownames(tmp2),c("No")) + expect_equal(rownames(tmp2),c("No","p","p1")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) tmp2 <- confint(tmp,parm="No") expect_equal(class(tmp2),c("matrix","array")) @@ -234,6 +319,19 @@ test_that("removal() return types",{ expect_equal(ncol(tmp2),2) 
expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) + tmp <- removal(c(38,26,12),method="Schnute") + tmp2 <- suppressMessages(confint(tmp,incl.est=TRUE)) + expect_equal(class(tmp2),c("matrix","array")) + expect_equal(nrow(tmp2),3) + expect_equal(ncol(tmp2),3) + expect_equal(rownames(tmp2),c("No","p","p1")) + expect_equal(colnames(tmp2),c("Est","95% LCI","95% UCI")) + tmp2 <- confint(tmp,parm="No",incl.est=TRUE) + expect_equal(class(tmp2),c("matrix","array")) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),3) + expect_equal(rownames(tmp2),"No") + expect_equal(colnames(tmp2),c("Est","95% LCI","95% UCI")) tmp <- removal(c(38,26),method="Seber2") tmp2 <- confint(tmp) expect_equal(class(tmp2),c("matrix","array")) @@ -253,6 +351,18 @@ test_that("removal() return types",{ expect_equal(ncol(tmp2),2) expect_equal(rownames(tmp2),c("No")) expect_equal(colnames(tmp2),c("95% LCI","95% UCI")) + ## Confint with as.df=TRUE + tmp <- removal(c(38,26),method="Seber2") + tmp2 <- confint(tmp,incl.est=TRUE,as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),6) + expect_equal(names(tmp2),c("No","No.LCI","No.UCI","p","p.LCI","p.UCI")) + tmp2 <- confint(tmp,parm="No",incl.est=TRUE,as.df=TRUE) + expect_true(is.data.frame(tmp2)) + expect_equal(nrow(tmp2),1) + expect_equal(ncol(tmp2),3) + expect_equal(names(tmp2),c("No","No.LCI","No.UCI")) ## What if catches are all zeroes suppressWarnings( @@ -333,9 +443,9 @@ test_that("removal with 'Moran' matches Schnute (1983)",{ Ns[i] <- round(tmp$est[["No"]],1) ps[i] <- round(tmp$est[["p"]],2) LHs[i] <- round(tmp$min.nlogLH,2) - tmp <- confint(tmp) - NLCI[i] <- tmp[1] - NUCI[i] <- tmp[2] + suppressMessages(tmp <- confint(tmp)) + NLCI[i] <- tmp["No","95% LCI"] + NUCI[i] <- tmp["No","95% UCI"] } ## check point estimates tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),Ns,ps,LHs, @@ -361,9 +471,9 @@ test_that("removal with 'Schnute' matches Schnute (1983)",{ 
p1s[i] <- round(tmp$est[["p1"]],2) ps[i] <- round(tmp$est[["p"]],2) LHs[i] <- round(tmp$min.nlogLH,2) - tmp <- confint(tmp) - NLCI[i] <- tmp[1] - NUCI[i] <- tmp[2] + suppressMessages(tmp <- confint(tmp)) + NLCI[i] <- tmp["No","95% LCI"] + NUCI[i] <- tmp["No","95% UCI"] } ## check point estimates tmp <- cbind(sample=seq_len(nrow(BrookTroutNEWP1)),Ns,p1s,ps,LHs, @@ -379,6 +489,7 @@ test_that("removal with 'Schnute' matches Schnute (1983)",{ BrookTroutNEWP1[,c("Schnute.NLCI","Schnute.NUCI")]) expect_true(all(abs(tmp[,2:3]-tmp[,4:5])<=0.1001,na.rm=TRUE)) }) + test_that("removal with 'Burnham' matches results from (Van Deventer 1989) page 13",{ tmp <- removal(c(124,61,35,14),method="Burnham",CIMicroFish=TRUE) ## check point estimates From 7ede23f9726cf9cf4aa894dbb3e895b99f0872ad Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Sat, 4 Jan 2025 18:35:58 -0600 Subject: [PATCH 17/21] Added examples to catchCurve() for multiple groups --- R/catchCurve.R | 32 ++++++++++++++++++++++++++++++++ man/catchCurve.Rd | 31 +++++++++++++++++++++++++++++++ 2 files changed, 63 insertions(+) diff --git a/R/catchCurve.R b/R/catchCurve.R index 198e0220..96bd3d94 100644 --- a/R/catchCurve.R +++ b/R/catchCurve.R @@ -105,6 +105,38 @@ #' summary(cc5) #' plot(cc5) #' +#' ## Demonstration of computation for multiple groups +#' ## only ages on the descending limb for each group are in the data.frame +#' # Get example data +#' data(FHCatfish,package="FSAdata") +#' FHCatfish +#' +#' # Note use of incl.est=TRUE and as.df=TRUE +#' if (require(dplyr)) { +#' res <- FHCatfish %>% +#' dplyr::group_by(river) %>% +#' dplyr::group_modify(~confint(catchCurve(abundance~age,data=.x), +#' incl.est=TRUE,as.df=TRUE)) %>% +#' as.data.frame() # removes tibble and grouping structure +#' res +#' } +#' +#' ## Demonstration of computation for multiple groups +#' ## ages not on descending limb are in the data.frame, but use same +#' ## ages2use= for each group +#' # Get example data +#' data(WalleyeKS,package="FSAdata") +#' 
+#' # Note use of incl.est=TRUE and as.df=TRUE +#' if (require(dplyr)) { +#' res <- WalleyeKS %>% +#' dplyr::group_by(reservoir) %>% +#' dplyr::group_modify(~confint(catchCurve(catch~age,data=.x,ages2use=2:10), +#' incl.est=TRUE,as.df=TRUE)) %>% +#' as.data.frame() # removes tibble and grouping structure +#' res +#' } + #' @rdname catchCurve #' @export catchCurve <- function (x,...) { diff --git a/man/catchCurve.Rd b/man/catchCurve.Rd index e7ac6dd1..a0bf79eb 100644 --- a/man/catchCurve.Rd +++ b/man/catchCurve.Rd @@ -186,6 +186,37 @@ cc5 <- catchCurve(ct~age,data=df,ages2use=4:12) summary(cc5) plot(cc5) +## Demonstration of computation for multiple groups +## only ages on the descending limb for each group are in the data.frame +# Get example data +data(FHCatfish,package="FSAdata") +FHCatfish + +# Note use of incl.est=TRUE and as.df=TRUE +if (require(dplyr)) { + res <- FHCatfish \%>\% + dplyr::group_by(river) \%>\% + dplyr::group_modify(~confint(catchCurve(abundance~age,data=.x), + incl.est=TRUE,as.df=TRUE)) \%>\% + as.data.frame() # removes tibble and grouping structure + res +} + +## Demonstration of computation for multiple groups +## ages not on descending limb are in the data.frame, but use same +## ages2use= for each group +# Get example data +data(WalleyeKS,package="FSAdata") + +# Note use of incl.est=TRUE and as.df=TRUE +if (require(dplyr)) { + res <- WalleyeKS \%>\% + dplyr::group_by(reservoir) \%>\% + dplyr::group_modify(~confint(catchCurve(catch~age,data=.x,ages2use=2:10), + incl.est=TRUE,as.df=TRUE)) \%>\% + as.data.frame() # removes tibble and grouping structure + res +} } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. 
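The examples added in this patch rely on `catchCurve()`, which estimates the instantaneous total mortality rate Z as the negated slope of the regression of log(catch) on age over the descending limb. A minimal sketch of that core computation, using made-up exponentially declining catches rather than the FHCatfish or WalleyeKS data (this is not FSA code):

```python
import math

def catch_curve_Z(ages, catches):
    """Estimate instantaneous total mortality Z as the negated slope of
    the least-squares regression of log(catch) on age."""
    x = list(ages)
    y = [math.log(c) for c in catches]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
    return -slope

# Synthetic descending limb with true Z = 0.5 (hypothetical data)
ages = range(2, 11)
catches = [1000 * math.exp(-0.5 * a) for a in ages]
Z = catch_curve_Z(ages, catches)
A = 1 - math.exp(-Z)             # annual mortality rate
print(round(Z, 3), round(A, 3))  # 0.5 0.393
```

Because the synthetic catches decline exactly exponentially, the regression recovers the true Z; `catchCurve()` additionally restricts the fit to `ages2use=`, and its `summary()`/`confint()` methods supply standard errors and confidence intervals.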
From 63b76905f621ccb62b6dd096698c6671c59efd1b Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Sat, 4 Jan 2025 18:43:30 -0600 Subject: [PATCH 18/21] Added examples to chapmanRobson() for multiple groups --- R/catchCurve.R | 2 +- R/chapmanRobson.R | 32 ++++++++++++++++++++++++++++++++ man/catchCurve.Rd | 1 + man/chapmanRobson.Rd | 32 ++++++++++++++++++++++++++++++++ 4 files changed, 66 insertions(+), 1 deletion(-) diff --git a/R/catchCurve.R b/R/catchCurve.R index 96bd3d94..2ca7bdb2 100644 --- a/R/catchCurve.R +++ b/R/catchCurve.R @@ -136,7 +136,7 @@ #' as.data.frame() # removes tibble and grouping structure #' res #' } - +#' #' @rdname catchCurve #' @export catchCurve <- function (x,...) { diff --git a/R/chapmanRobson.R b/R/chapmanRobson.R index c2535d03..2ae2ed42 100644 --- a/R/chapmanRobson.R +++ b/R/chapmanRobson.R @@ -95,6 +95,38 @@ #' summary(cr3) #' plot(cr3) #' +#' ## Demonstration of computation for multiple groups +#' ## only ages on the descending limb for each group are in the data.frame +#' # Get example data +#' data(FHCatfish,package="FSAdata") +#' FHCatfish +#' +#' # Note use of incl.est=TRUE and as.df=TRUE +#' if (require(dplyr)) { +#' res <- FHCatfish %>% +#' dplyr::group_by(river) %>% +#' dplyr::group_modify(~confint(chapmanRobson(abundance~age,data=.x), +#' incl.est=TRUE,as.df=TRUE)) %>% +#' as.data.frame() # removes tibble and grouping structure +#' res +#' } +#' +#' ## Demonstration of computation for multiple groups +#' ## ages not on descending limb are in the data.frame, but use same +#' ## ages2use= for each group +#' # Get example data +#' data(WalleyeKS,package="FSAdata") +#' +#' # Note use of incl.est=TRUE and as.df=TRUE +#' if (require(dplyr)) { +#' res <- WalleyeKS %>% +#' dplyr::group_by(reservoir) %>% +#' dplyr::group_modify(~confint(chapmanRobson(catch~age,data=.x,ages2use=2:10), +#' incl.est=TRUE,as.df=TRUE)) %>% +#' as.data.frame() # removes tibble and grouping structure +#' res +#' } +#' +#' @rdname chapmanRobson +#' @export 
chapmanRobson <- function (x,...) { diff --git a/man/catchCurve.Rd b/man/catchCurve.Rd index a0bf79eb..8f1f8c0c 100644 --- a/man/catchCurve.Rd +++ b/man/catchCurve.Rd @@ -217,6 +217,7 @@ if (require(dplyr)) { as.data.frame() # removes tibble and grouping structure res } + } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. diff --git a/man/chapmanRobson.Rd b/man/chapmanRobson.Rd index 7a121a87..38908164 100644 --- a/man/chapmanRobson.Rd +++ b/man/chapmanRobson.Rd @@ -164,6 +164,38 @@ cr3 <- chapmanRobson(age,ct,4:12) summary(cr3) plot(cr3) +## Demonstration of computation for multiple groups +## only ages on the descending limb for each group are in the data.frame +# Get example data +data(FHCatfish,package="FSAdata") +FHCatfish + +# Note use of incl.est=TRUE and as.df=TRUE +if (require(dplyr)) { + res <- FHCatfish \%>\% + dplyr::group_by(river) \%>\% + dplyr::group_modify(~confint(chapmanRobson(abundance~age,data=.x), + incl.est=TRUE,as.df=TRUE)) \%>\% + as.data.frame() # removes tibble and grouping structure + res +} + +## Demonstration of computation for multiple groups +## ages not on descending limb are in the data.frame, but use same +## ages2use= for each group +# Get example data +data(WalleyeKS,package="FSAdata") + +# Note use of incl.est=TRUE and as.df=TRUE +if (require(dplyr)) { + res <- WalleyeKS \%>\% + dplyr::group_by(reservoir) \%>\% + dplyr::group_modify(~confint(chapmanRobson(catch~age,data=.x,ages2use=2:10), + incl.est=TRUE,as.df=TRUE)) \%>\% + as.data.frame() # removes tibble and grouping structure + res +} + } \references{ Ogle, D.H. 2016. \href{https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r}{Introductory Fisheries Analyses with R}. Chapman & Hall/CRC, Boca Raton, FL. 
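`chapmanRobson()`, used in the examples above, is built on the Chapman-Robson (1960) estimator of annual survival, S = T/(n + T - 1), where ages are recoded so the first fully recruited age becomes 0, n is the number of fish at or beyond that age, and T is the sum of the recoded ages. The sketch below (not FSA code; counts are hypothetical) shows only this basic estimate with the uncorrected Z = -log(S), whereas FSA also offers bias-corrected Z estimates and standard errors:

```python
import math

def chapman_robson(ages, catches, age1):
    """Chapman-Robson estimate of annual survival S from a catch curve.
    Ages are recoded so the first fully recruited age (age1) becomes 0."""
    pairs = [(a - age1, c) for a, c in zip(ages, catches) if a >= age1]
    n = sum(c for _, c in pairs)        # total fish at or beyond age1
    T = sum(a * c for a, c in pairs)    # sum of recoded ages
    S = T / (n + T - 1)
    Z = -math.log(S)                    # uncorrected Z; FSA offers corrected variants
    return S, Z

# Hypothetical descending-limb counts at ages 2..6
S, Z = chapman_robson([2, 3, 4, 5, 6], [100, 50, 25, 12, 6], age1=2)
print(round(S, 3), round(Z, 3))  # 0.455 0.788
```

Here n = 193 and T = 160, so S = 160/352; `chapmanRobson()` reports the same kind of S (as a percentage) plus Z with standard errors and confidence intervals.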
From a85771aaf82e630dc46470cc6b8817aa9cc9ce56 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Sat, 4 Jan 2025 19:14:20 -0600 Subject: [PATCH 19/21] Minor edits to se() ... was trying to implement katex ... did not work --- R/FSAUtils.R | 12 ++++++++---- man/se.Rd | 12 ++++++++---- 2 files changed, 16 insertions(+), 8 deletions(-) diff --git a/R/FSAUtils.R b/R/FSAUtils.R index 6d981127..b9952d1d 100644 --- a/R/FSAUtils.R +++ b/R/FSAUtils.R @@ -765,6 +765,9 @@ repeatedRows2Keep <- function(df,cols2use=NULL,cols2ignore=NULL, #' #' @description Computes the standard error of the mean (i.e., standard deviation divided by the square root of the sample size). #' +#' @details +#' The standard error of the value in vector \code{x} is simply the standard deviation of \code{x} divided by the square root of the number of valid items in \code{x} +#' #' @param x A numeric vector. #' @param na.rm A logical that indicates whether missing values should be removed before computing the standard error. #' @@ -777,18 +780,19 @@ repeatedRows2Keep <- function(df,cols2use=NULL,cols2ignore=NULL, #' @keywords manip #' #' @examples +#' # example vector #' x <- 1:20 -#' sd(x)/sqrt(length(x)) #' se(x) +#' sd(x)/sqrt(length(x)) ## matches #' #' # all return NA if missing values are not removed #' x2 <- c(x,NA) #' sd(x2)/sqrt(length(x2)) #' #' # Better if missing values are removed -#' se(x2,na.rm=FALSE) -#' sd(x2,na.rm=TRUE)/sqrt(length(x2[complete.cases(x2)])) -#' se(x2) +#' se(x2) ## Default behavior +#' sd(x2,na.rm=TRUE)/sqrt(length(x2[complete.cases(x2)])) ## Matches +#' se(x2,na.rm=FALSE) ## Result from not removing NAs #' #' @export se <- function (x,na.rm=TRUE) { diff --git a/man/se.Rd b/man/se.Rd index a72bf485..6e8f7298 100644 --- a/man/se.Rd +++ b/man/se.Rd @@ -17,19 +17,23 @@ A single numeric that is the standard error of the mean of \code{x}. \description{ Computes the standard error of the mean (i.e., standard deviation divided by the square root of the sample size). 
} +\details{ +The standard error of the value in vector \code{x} is simply the standard deviation of \code{x} divided by the square root of the number of valid items in \code{x} +} \examples{ +# example vector x <- 1:20 -sd(x)/sqrt(length(x)) se(x) +sd(x)/sqrt(length(x)) ## matches # all return NA if missing values are not removed x2 <- c(x,NA) sd(x2)/sqrt(length(x2)) # Better if missing values are removed -se(x2,na.rm=FALSE) -sd(x2,na.rm=TRUE)/sqrt(length(x2[complete.cases(x2)])) -se(x2) +se(x2) ## Default behavior +sd(x2,na.rm=TRUE)/sqrt(length(x2[complete.cases(x2)])) ## Matches +se(x2,na.rm=FALSE) ## Result from not removing NAs } \seealso{ From 02893a2ddb76a4adb8e213790c3302865b036760 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Mon, 6 Jan 2025 07:27:59 -0600 Subject: [PATCH 20/21] Modified alkPlot() documentation --- NEWS.md | 7 ++++--- R/alkPlot.R | 3 +++ man/alkPlot.Rd | 4 ++++ 3 files changed, 11 insertions(+), 3 deletions(-) diff --git a/NEWS.md b/NEWS.md index d6a7655a..f26db24c 100644 --- a/NEWS.md +++ b/NEWS.md @@ -9,13 +9,14 @@ * Moved all `require()` in individual files to `testthat.R`. This removed many `require()` that were not needed. * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. -* `catchCurve()`: Added `as.df=` to extractor functions and `incl.est=` to `confint.catchCurve()` to match functionality added to `depletion()`. -* `chapmanRobson()`: Added `as.df=` to extractor functions and `incl.est=` to `confint.chapmanRobson()` to match functionality added to `depletion()`. +* `alkPlot()`: Modified. Added note in documentation pointing to a fishR blog post on using `ggplot2` to make similar plots. +* `catchCurve()`: Modified. Added `as.df=` to extractor functions and `incl.est=` to `confint.catchCurve()` to match functionality added to `depletion()`. +* `chapmanRobson()`: Modified. 
Added `as.df=` to extractor functions and `incl.est=` to `confint.chapmanRobson()` to match functionality added to `depletion()`. * `depletion()`: Modified to address [#111](https://github.com/fishR-Core-Team/FSA/issues/111). * Added formula notation such that `depletion()` was changed to a method call and `depletion.default()` and `depletion.formula()` were added. Tests for the formula were included. * Added `as.df=` to `coef.depletion()`, `confint.depletion()`, and `summary.depletion()` so that the result is returned as a data.frame when set to `TRUE` (default is `FALSE` to maintain backward compatibility). * Added `incl.est=` to `confint.depletion()` to make it easier to get away from the clunky `cbind("Est"=coef(),confint())` code. +* `GompertzFuns()`: Modified. Accepted pull request related to [#112](https://github.com/fishR-Core-Team/FSA/issues/112) that fixed several typos and dead links in the documentation ... thanks Arni. Corrected the erroneous reference to t* (should have been t0) in the documentation for the Gompertz function (fixes [#113](https://github.com/fishR-Core-Team/FSA/issues/113) ... thanks again to Arni). * `metaM()`: Modified to address [#114](https://github.com/fishR-Core-Team/FSA/issues/114). * Returns data.frame rather than list. * Added conditional mortality rate (cm) to returned data.frame (for use with `rFAMS`). 
diff --git a/R/alkPlot.R b/R/alkPlot.R index a6ae65ea..11baaaad 100644 --- a/R/alkPlot.R +++ b/R/alkPlot.R @@ -10,8 +10,11 @@ #' \item A plot with (differently colored) lines, as estimated by loess splines, that connect the proportions of ages within each length interval is constructed with \code{type="splines"}. #' \item A \dQuote{bubble} plot where circles whose size is proportional to the proportion of fish of each age in each length interval is constructed with \code{type="bubble"}. The color of the bubbles can be controlled with \code{col=} and an underlying grid for ease of seeing the age and length interval for each bubble can be controlled with \code{grid=}. Bubbles from a second age-length key can be overlaid on an already constructed bubble plot by using \code{add=TRUE} in a second call to \code{alkPlot}. #' } +#' #' Note that all plots are \dQuote{vertically conditional} -- i.e., each represents the proportional ages WITHIN each length interval. #' +#' @note These plots are used primarily to explore the structure of an age-length key. While some may find them of "publication-quality", that level of quality and overall control of aspects of the plot are not the primary purpose of this function. Publication-quality plots can be readily made using \code{ggplot2} as demonstrated \href{https://fishr-core-team.github.io/fishR/blog/posts/2025-1-5_ALKPlots_GGplot/}{in this fishR post}. +#' #' @param key A numeric matrix that contains the age-length key. #' @param type A string that indicates the type of plot to construct. See details. #' @param xlab,ylab A string that contains the label for the x- or y-axis. 
diff --git a/man/alkPlot.Rd b/man/alkPlot.Rd index 244fd1e3..4fa73d22 100644 --- a/man/alkPlot.Rd +++ b/man/alkPlot.Rd @@ -67,8 +67,12 @@ A variety of plots can be used to visualize the proportion of fish of certain ag \item A plot with (differently colored) lines, as estimated by loess splines, that connect the proportions of ages within each length interval is constructed with \code{type="splines"}. \item A \dQuote{bubble} plot where circles whose size is proportional to the proportion of fish of each age in each length interval is constructed with \code{type="bubble"}. The color of the bubbles can be controlled with \code{col=} and an underlying grid for ease of seeing the age and length interval for each bubble can be controlled with \code{grid=}. Bubbles from a second age-length key can be overlaid on an already constructed bubble plot by using \code{add=TRUE} in a second call to \code{alkPlot}. } + Note that all plots are \dQuote{vertically conditional} -- i.e., each represents the proportional ages WITHIN each length interval. } +\note{ +These plots are used primarily to explore the structure of an age-length key. While some may find them of "publication-quality", that level of quality and overall control of aspects of the plot are not the primary purpose of this function. Publication-quality plots can be readily made using \code{ggplot2} as demonstrated \href{https://fishr-core-team.github.io/fishR/blog/posts/2025-1-5_ALKPlots_GGplot/}{in this fishR post}. +} \section{IFAR Chapter}{ 5-Age-Length Key. 
} From b7b7530a2902421acb71375474d662ffc1f71363 Mon Sep 17 00:00:00 2001 From: Derek Ogle Date: Mon, 6 Jan 2025 09:22:48 -0600 Subject: [PATCH 21/21] Prepping v0.9.6 --- .Rbuildignore | 2 + DESCRIPTION | 4 +- NEWS.md | 3 +- README.md | 2 +- _pkgdown.yml | 3 + cran-comments/cran-comments-v0_9_6.md | 10 + docs/LICENSE-text.html | 55 +- docs/authors.html | 76 +-- docs/favicon.ico | Bin 15086 -> 15086 bytes docs/index.html | 69 +- docs/news/index.html | 134 ++-- docs/pkgdown.js | 16 +- docs/pkgdown.yml | 7 +- docs/reference/BluegillJL.html | 59 +- docs/reference/BrookTroutTH.html | 59 +- docs/reference/ChinookArg.html | 61 +- docs/reference/CodNorwegian.html | 61 +- docs/reference/CutthroatAL.html | 59 +- docs/reference/Ecoli.html | 61 +- docs/reference/FSA-internals.html | 53 +- docs/reference/FSA.html | 68 +- docs/reference/Mirex.html | 59 +- docs/reference/PSDlit.html | 59 +- docs/reference/PikeNY.html | 59 +- docs/reference/PikeNYPartial1.html | 65 +- docs/reference/SMBassLS.html | 59 +- docs/reference/SMBassWB.html | 61 +- docs/reference/Schnute-1.png | Bin 11239 -> 11044 bytes docs/reference/Schnute-2.png | Bin 9572 -> 9536 bytes docs/reference/Schnute-3.png | Bin 9087 -> 8895 bytes docs/reference/Schnute-4.png | Bin 8190 -> 8223 bytes docs/reference/Schnute.html | 77 +-- docs/reference/SpotVA1.html | 59 +- docs/reference/Subset.html | 237 +------ docs/reference/Summarize.html | 81 +-- docs/reference/WR79.html | 59 +- docs/reference/WSlit.html | 61 +- docs/reference/WhitefishLC.html | 59 +- docs/reference/addZeroCatch.html | 73 +- docs/reference/ageBias-19.png | Bin 85252 -> 85264 bytes docs/reference/ageBias.html | 155 ++--- docs/reference/agePrecision.html | 81 +-- docs/reference/alkAgeDist.html | 67 +- docs/reference/alkIndivAge.html | 131 ++-- docs/reference/alkMeanVar.html | 73 +- docs/reference/alkPlot-10.png | Bin 102109 -> 104525 bytes docs/reference/alkPlot-9.png | Bin 117183 -> 120281 bytes docs/reference/alkPlot.html | 95 ++- docs/reference/binCI.html 
| 73 +- docs/reference/bootCase.html | 331 +-------- docs/reference/capHistConvert.html | 85 +-- docs/reference/capHistSum.html | 87 +-- docs/reference/catchCurve-2.png | Bin 29018 -> 29218 bytes docs/reference/catchCurve-3.png | Bin 28978 -> 29108 bytes docs/reference/catchCurve-4.png | Bin 31242 -> 31225 bytes docs/reference/catchCurve-5.png | Bin 31258 -> 31234 bytes docs/reference/catchCurve.html | 261 ++++--- docs/reference/chapmanRobson-2.png | Bin 24521 -> 24340 bytes docs/reference/chapmanRobson-3.png | Bin 24192 -> 24188 bytes docs/reference/chapmanRobson-4.png | Bin 24687 -> 24683 bytes docs/reference/chapmanRobson-5.png | Bin 24521 -> 24340 bytes docs/reference/chapmanRobson-6.png | Bin 24810 -> 24630 bytes docs/reference/chapmanRobson.html | 240 +++++-- docs/reference/chooseColors.html | 205 +----- docs/reference/col2rgbt.html | 65 +- docs/reference/compIntercepts.html | 268 +------- docs/reference/compSlopes.html | 304 +-------- docs/reference/depletion-1.png | Bin 40173 -> 40183 bytes docs/reference/depletion-2.png | Bin 39923 -> 39907 bytes docs/reference/depletion-3.png | Bin 42496 -> 42487 bytes docs/reference/depletion.html | 282 ++++---- docs/reference/diags.html | 298 +------- docs/reference/dunnTest.html | 83 +-- docs/reference/expandCounts.html | 207 +++--- docs/reference/expandLenFreq.html | 77 +-- docs/reference/extraTests.html | 81 +-- docs/reference/fact2num.html | 63 +- docs/reference/fishR.html | 69 +- docs/reference/fitPlot.html | 409 +---------- docs/reference/fsaNews.html | 114 +--- docs/reference/geomean.html | 71 +- docs/reference/growthModels-18.png | Bin 8132 -> 8457 bytes docs/reference/growthModels-19.png | Bin 29799 -> 30337 bytes docs/reference/growthModels-20.png | Bin 23253 -> 23665 bytes docs/reference/growthModels.html | 217 +++--- docs/reference/headtail.html | 71 +- docs/reference/hist.formula.html | 99 ++- docs/reference/histFromSum.html | 75 +- docs/reference/hoCoef.html | 202 +----- docs/reference/hyperCI.html | 69 +- 
docs/reference/index.html | 678 ++++++++++--------- docs/reference/knitUtil.html | 114 ++-- docs/reference/ksTest.html | 77 +-- docs/reference/lagratio.html | 73 +- docs/reference/lencat.html | 87 +-- docs/reference/logbtcf.html | 67 +- docs/reference/lwCompPreds-1.png | Bin 64281 -> 64292 bytes docs/reference/lwCompPreds-2.png | Bin 53609 -> 53613 bytes docs/reference/lwCompPreds-3.png | Bin 58912 -> 58928 bytes docs/reference/lwCompPreds-4.png | Bin 53787 -> 53784 bytes docs/reference/lwCompPreds-5.png | Bin 41618 -> 41619 bytes docs/reference/lwCompPreds-6.png | Bin 64281 -> 64292 bytes docs/reference/lwCompPreds.html | 99 ++- docs/reference/mapvalues.html | 161 +---- docs/reference/metaM.html | 345 ++++++---- docs/reference/mrClosed.html | 131 ++-- docs/reference/mrOpen.html | 91 +-- docs/reference/nlsBoot.html | 103 ++- docs/reference/nlsTracePlot-1.png | Bin 98033 -> 97998 bytes docs/reference/nlsTracePlot-2.png | Bin 73049 -> 72934 bytes docs/reference/nlsTracePlot.html | 87 +-- docs/reference/perc.html | 71 +- docs/reference/plotAB-2.png | Bin 47628 -> 47725 bytes docs/reference/plotAB.html | 107 ++- docs/reference/plotBinResp.html | 293 +------- docs/reference/poiCI.html | 71 +- docs/reference/psdAdd.html | 83 +-- docs/reference/psdCI.html | 79 +-- docs/reference/psdCalc.html | 87 +-- docs/reference/psdPlot-1.png | Bin 42101 -> 42169 bytes docs/reference/psdPlot-2.png | Bin 42023 -> 41948 bytes docs/reference/psdPlot.html | 105 ++- docs/reference/psdVal.html | 73 +- docs/reference/rSquared.html | 77 +-- docs/reference/rcumsum.html | 63 +- docs/reference/removal.html | 393 ++++++----- docs/reference/residPlot.html | 380 +---------- docs/reference/se.html | 84 +-- docs/reference/srStarts-1.png | Bin 45907 -> 45944 bytes docs/reference/srStarts-2.png | Bin 38539 -> 38590 bytes docs/reference/srStarts-3.png | Bin 38599 -> 38519 bytes docs/reference/srStarts-4.png | Bin 46653 -> 46552 bytes docs/reference/srStarts-5.png | Bin 41057 -> 41059 bytes 
docs/reference/srStarts.html | 87 +-- docs/reference/sumTable.html | 73 +- docs/reference/tictactoe-1.png | Bin 26057 -> 26088 bytes docs/reference/tictactoe-2.png | Bin 32024 -> 32065 bytes docs/reference/tictactoe.html | 83 +-- docs/reference/validn.html | 63 +- docs/reference/vbStarts-1.png | Bin 65157 -> 65111 bytes docs/reference/vbStarts-10.png | Bin 65676 -> 65500 bytes docs/reference/vbStarts-11.png | Bin 65793 -> 65755 bytes docs/reference/vbStarts-12.png | Bin 68066 -> 68037 bytes docs/reference/vbStarts-13.png | Bin 66891 -> 66831 bytes docs/reference/vbStarts-14.png | Bin 67524 -> 67367 bytes docs/reference/vbStarts-15.png | Bin 67738 -> 67723 bytes docs/reference/vbStarts-2.png | Bin 65345 -> 65285 bytes docs/reference/vbStarts-3.png | Bin 65570 -> 65472 bytes docs/reference/vbStarts-4.png | Bin 66283 -> 66160 bytes docs/reference/vbStarts-5.png | Bin 65952 -> 65905 bytes docs/reference/vbStarts-6.png | Bin 64264 -> 64126 bytes docs/reference/vbStarts-7.png | Bin 64521 -> 64406 bytes docs/reference/vbStarts-8.png | Bin 68083 -> 68004 bytes docs/reference/vbStarts-9.png | Bin 68730 -> 68675 bytes docs/reference/vbStarts.html | 97 +-- docs/reference/wrAdd.html | 77 +-- docs/reference/wsVal.html | 71 +- docs/sitemap.xml | 416 +++--------- pkgdown/favicon/apple-touch-icon.png | Bin 15834 -> 11646 bytes pkgdown/favicon/favicon-96x96.png | Bin 0 -> 5095 bytes pkgdown/favicon/favicon.ico | Bin 15086 -> 15086 bytes pkgdown/favicon/favicon.svg | 3 + pkgdown/favicon/site.webmanifest | 21 + pkgdown/favicon/web-app-manifest-192x192.png | Bin 0 -> 12845 bytes pkgdown/favicon/web-app-manifest-512x512.png | Bin 0 -> 54565 bytes 165 files changed, 3698 insertions(+), 8310 deletions(-) create mode 100644 cran-comments/cran-comments-v0_9_6.md create mode 100644 pkgdown/favicon/favicon-96x96.png create mode 100644 pkgdown/favicon/favicon.svg create mode 100644 pkgdown/favicon/site.webmanifest create mode 100644 pkgdown/favicon/web-app-manifest-192x192.png create mode 
100644 pkgdown/favicon/web-app-manifest-512x512.png diff --git a/.Rbuildignore b/.Rbuildignore index 2f2c20da..a63a7c88 100644 --- a/.Rbuildignore +++ b/.Rbuildignore @@ -22,5 +22,7 @@ cran-comments.md ## Logo inst/sticker/ man/figures/logo.png +man/figures/logo.svg + ^\.github$ ^codecov\.yml$ diff --git a/DESCRIPTION b/DESCRIPTION index d0854454..11ed32fa 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,6 +1,6 @@ Package: FSA -Version: 0.9.5.9000 -Date: 2024-12-30 +Version: 0.9.6 +Date: 2025-1-6 Title: Simple Fisheries Stock Assessment Methods Description: A variety of simple fish stock assessment methods. Authors@R: c( diff --git a/NEWS.md b/NEWS.md index f26db24c..3aed60dc 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,4 +1,4 @@ -# FSA 0.9.5.9000 +# FSA 0.9.6 * Updated testing to use `testthat` v3.0.0. * Changes to `DESCRIPTION` including adding `tidyr` in Suggests (for example in `removal()`). * Replaced MANY `expect_is()` with `expect_equal(class())` idioms. @@ -8,6 +8,7 @@ * Had to handle multiple warnings for some tests (see [this article](https://testthat.r-lib.org/articles/third-edition.html#warnings)). * Moved all `require()` in individual files to `testthat.R`. This removed many `require()` that were not needed. * Fixed four minor errors in documentation from legacy uses of `\R{}` rather than `\code{}`. +* Made some accessibility changes and rebuilt favicons as suggested by `pkgdown`. * `alkPlot()`: Modified. Added note in documentation pointing to a fishR blog post on using `ggplot2` to make similar plots. * `catchCurve()`: Modified. Added `as.df=` to extractor functions and `incl.est=` to `confint.catchCurve()` to match functionality added to `depletion()`. diff --git a/README.md b/README.md index 06954c21..a1552eef 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@   -## FSA (Fisheries Stock Assessment) +## FSA (Fisheries Stock Assessment) FSA logo The **FSA** package provides R functions to conduct typical introductory fisheries analyses. 
Example analyses that use **FSA** can be found in the [Introductory Fisheries Analyses with R book](https://fishr-core-team.github.io/fishR/pages/books.html#introductory-fisheries-analyses-with-r) (*see note below*) and on the [fishR website](https://fishr-core-team.github.io/fishR/). Please [cite **FSA**](https://fishr-core-team.github.io/FSA//authors.html) if you use **FSA** in a publication. diff --git a/_pkgdown.yml b/_pkgdown.yml index aa36bcd0..b0338db5 100644 --- a/_pkgdown.yml +++ b/_pkgdown.yml @@ -154,7 +154,10 @@ navbar: right: - icon: fa-newspaper href: news/index.html + aria-label: See package news - icon: fa-home fa href: index.html + aria-label: Goto FSA home - icon: fa-github fa href: https://github.com/fishR-Core-Team/FSA + aria-label: Goto FSA github page diff --git a/cran-comments/cran-comments-v0_9_6.md b/cran-comments/cran-comments-v0_9_6.md new file mode 100644 index 00000000..6d82b0db --- /dev/null +++ b/cran-comments/cran-comments-v0_9_6.md @@ -0,0 +1,10 @@ +* This updates the existing FSA package on CRAN with a few major changes described in NEWS and minor changes, including fixing notes from CRAN checks. + +## Notes +* There may be a note about "fishR" being misspelled in the description. This is not a misspelling. + +## Testing Environments +* My Windows machine. +* Win Builder -- old-release, release, and development. +* Mac Builder +* GitHub Action (R-CMD-check.yaml) diff --git a/docs/LICENSE-text.html b/docs/LICENSE-text.html index 8ccb22af..7c5df6df 100644 --- a/docs/LICENSE-text.html +++ b/docs/LICENSE-text.html @@ -1,56 +1,33 @@ -License • FSA +License • FSA Skip to contents - -
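[Editorial aside, not part of the patch: the `se()` behavior documented in the `se.Rd` hunk earlier in this series can be illustrated with a small sketch. This is a reimplementation for readers skimming the diff, not the actual FSA source.]

```r
## Illustrative reimplementation of the standard-error helper described
## in the se.Rd hunk above; not the package's actual source code.
se_sketch <- function(x, na.rm = TRUE) {
  if (na.rm) x <- x[!is.na(x)]   # drop missing values by default
  sd(x) / sqrt(length(x))        # sd over sqrt of number of valid items
}

x <- 1:20
se_sketch(x) == sd(x) / sqrt(length(x))   # TRUE, matches the Rd example
se_sketch(c(x, NA))                       # NA removed by default
se_sketch(c(x, NA), na.rm = FALSE)        # NA propagates, like sd()
```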