Compare commits

...

3 Commits

SHA1        Message                 Date
eef682ce15  added code of conduct   2024-02-27 14:18:23 +01:00
f261257575  renv acting up          2024-02-27 14:09:19 +01:00
9e33057c06  linting                 2024-02-27 13:20:21 +01:00
38 changed files with 1023 additions and 346 deletions

View File

@ -17,3 +17,5 @@
^CRAN-SUBMISSION$
drafting
app
^\.lintr$
^CODE_OF_CONDUCT\.md$

7
.lintr Normal file
View File

@ -0,0 +1,7 @@
linters: linters_with_defaults(
commented_code_linter = NULL
)
encoding: "UTF-8"
exclusions: list(
"drafting/"
)
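
For orientation, the new `.lintr` above is picked up automatically from the package root when linting; a minimal sketch of how it would typically be exercised (standard `lintr` usage, not part of this changeset):

```r
# Runs lintr over the whole package, honouring the linters, encoding and
# exclusions declared in the .lintr file above.
# install.packages("lintr")  # if not already installed
lintr::lint_package()
```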

126
CODE_OF_CONDUCT.md Normal file
View File

@ -0,0 +1,126 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at andreas@gdamsbo.dk.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
<https://www.contributor-covenant.org/version/2/1/code_of_conduct.html>.
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/inclusion).
For answers to common questions about this code of conduct, see the FAQ at
<https://www.contributor-covenant.org/faq>. Translations are available at <https://www.contributor-covenant.org/translations>.
[homepage]: https://www.contributor-covenant.org

View File

@ -30,14 +30,10 @@ Suggests:
gt,
usethis,
ggplot2,
haven,
here,
styler,
devtools,
roxygen2,
openxlsx2,
rsconnect,
shiny,
spelling
License: GPL (>= 3)
Encoding: UTF-8
@ -53,7 +49,11 @@ Imports:
keyring,
purrr,
readr,
stats
stats,
shiny,
openxlsx2,
rsconnect,
haven
Collate:
'utils.r'
'process_user_input.r'

View File

@ -26,17 +26,17 @@
#' records <- postForm(
#' uri = api_url, # Supply your site-specific URI
#' token = api_token, # Supply your own API token
#' content = 'record',
#' format = 'json',
#' returnFormat = 'json'
#' content = "record",
#' format = "json",
#' returnFormat = "json"
#' )
#'
#' # Get the metadata
#' metadata <- postForm(
#' uri = api_url, # Supply your site-specific URI
#' token = api_token, # Supply your own API token
#' content = 'metadata',
#' format = 'json'
#' content = "metadata",
#' format = "json"
#' )
#'
#' # Convert exported JSON strings into a list of data.frames
@ -49,7 +49,8 @@
#'
#' # Get the metadata
#' metadata <- read.csv(
#' "/path/to/data/ExampleProject_DataDictionary_2018-06-03.csv")
#' "/path/to/data/ExampleProject_DataDictionary_2018-06-03.csv"
#' )
#'
#' # Split the tables
#' REDCapRITS::REDCap_split(records, metadata)
@ -86,7 +87,6 @@ REDCap_split <- function(records,
metadata,
primary_table_name = "",
forms = c("repeating", "all")) {
# Process user input
records <- process_user_input(records)
metadata <-
@ -97,7 +97,8 @@ REDCap_split <- function(records,
# Process repeat instrument names to match the redcap naming
if (is_repeated_longitudinal(records)) {
records$redcap_repeat_instrument <- clean_redcap_name(records$redcap_repeat_instrument)
records$redcap_repeat_instrument <-
clean_redcap_name(records$redcap_repeat_instrument)
# Match arg for forms
forms <- match.arg(forms, c("repeating", "all"))
@ -145,7 +146,8 @@ REDCap_split <- function(records,
# Variables to be at the beginning of each repeating instrument
repeat_instrument_fields <- grep("^redcap_repeat.*",
vars_in_data,
value = TRUE)
value = TRUE
)
# Identify the subtables in the data
subtables <- unique(records$redcap_repeat_instrument)
@ -169,35 +171,36 @@ REDCap_split <- function(records,
# Delete the variables that are not relevant
for (i in names(out)) {
if (i == primary_table_name) {
out_fields <- which(vars_in_data %in% c(universal_fields,
out_fields <- which(vars_in_data %in% c(
universal_fields,
fields[!fields[, 2] %in%
subtables, 1]))
subtables, 1]
))
out[[primary_table_index]] <-
out[[primary_table_index]][out_fields]
} else {
out_fields <- which(vars_in_data %in% c(universal_fields,
out_fields <- which(vars_in_data %in% c(
universal_fields,
repeat_instrument_fields,
fields[fields[, 2] == i, 1]))
fields[fields[, 2] == i, 1]
))
out[[i]] <- out[[i]][out_fields]
}
}
if (forms == "all") {
out <- c(split_non_repeating_forms(out[[primary_table_index]],
out <- c(
split_non_repeating_forms(
out[[primary_table_index]],
universal_fields,
fields[!fields[, 2] %in% subtables, ]),
out[-primary_table_index])
fields[!fields[, 2] %in% subtables, ]
),
out[-primary_table_index]
)
}
} else {
out <- split_non_repeating_forms(records, universal_fields, fields)
}
out
}

View File

@ -41,7 +41,7 @@ ds2dd <-
dd <- data.frame(matrix(ncol = length(metadata), nrow = ncol(ds)))
colnames(dd) <- metadata
if (is.character(record.id) & !record.id %in% colnames(ds)) {
if (is.character(record.id) && !record.id %in% colnames(ds)) {
stop("Provided record.id is not a variable name in provided data set.")
}
@ -59,7 +59,7 @@ ds2dd <-
dd[, "field_name"] <-
c(field.name[colsel], field.name[!colsel])
if (length(form.name) > 1 & length(form.name) != ncol(ds)) {
if (length(form.name) > 1 && length(form.name) != ncol(ds)) {
stop(
"Provided form.name should be of length 1 (value is reused) or equal
length as number of variables in data set."
@ -67,7 +67,7 @@ ds2dd <-
}
dd[, "form_name"] <- form.name
if (length(field.type) > 1 & length(field.type) != ncol(ds)) {
if (length(field.type) > 1 && length(field.type) != ncol(ds)) {
stop(
"Provided field.type should be of length 1 (value is reused) or equal
length as number of variables in data set."
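
The `&` to `&&` replacements above follow base R semantics: an `if()` condition needs a single `TRUE`/`FALSE`, and `&&` both guarantees a scalar result and short-circuits. A small illustration, not taken from the changeset:

```r
c(TRUE, FALSE) & TRUE            # vectorised: TRUE FALSE (since R 4.2 this errors inside if())
FALSE && stop("not evaluated")   # scalar FALSE; the right-hand side is never run
# FALSE & stop("boom")           # `&` evaluates both sides, so this would error
```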

View File

@ -1,4 +1,9 @@
utils::globalVariables(c( "stats::setNames", "field_name", "field_type", "select_choices_or_calculations"))
utils::globalVariables(c(
"stats::setNames",
"field_name",
"field_type",
"select_choices_or_calculations"
))
#' Try at determining which are true time only variables
#'
#' @description
@ -18,8 +23,13 @@ utils::globalVariables(c( "stats::setNames", "field_name", "field_type", "se
#' @examples
#' data <- redcapcast_data
#' data |> guess_time_only_filter()
#' data |> guess_time_only_filter(validate = TRUE) |> lapply(head)
guess_time_only_filter <- function(data, validate = FALSE, sel.pos = "[Tt]i[d(me)]", sel.neg = "[Dd]at[eo]") {
#' data |>
#' guess_time_only_filter(validate = TRUE) |>
#' lapply(head)
guess_time_only_filter <- function(data,
validate = FALSE,
sel.pos = "[Tt]i[d(me)]",
sel.neg = "[Dd]at[eo]") {
datetime_nms <- data |>
lapply(\(x) any(c("POSIXct", "hms") %in% class(x))) |>
(\(x) names(data)[do.call(c, x)])()
@ -42,12 +52,8 @@ guess_time_only_filter <- function(data, validate = FALSE, sel.pos = "[Tt]i[d(me
}
}
#' Correction based on time_only_filter function. Introduces new class for easier
#' validation labelling.
#' Correction based on time_only_filter function
#'
#' @description
#' Dependens on the data class "hms" introduced with
#' `guess_time_only_filter()` and converts these
#'
#' @param data data set
#' @param ... arguments passed on to `guess_time_only_filter()`
@ -119,8 +125,8 @@ hms2character <- function(data) {
#' data set (imported .dta file with `haven::read_dta()`. Default is "label"
#' @param field.validation manually specify field validation(s). Vector of
#' length 1 or ncol(data). Default is NULL and `levels()` are used for factors
#' or attribute `factor.labels.attr` for haven_labelled data set (imported .dta file with
#' `haven::read_dta()`).
#' or attribute `factor.labels.attr` for haven_labelled data set (imported .dta
#' file with `haven::read_dta()`).
#' @param metadata redcap metadata headings. Default is
#' REDCapCAST:::metadata_names.
#' @param validate.time Flag to validate guessed time columns
@ -164,7 +170,8 @@ ds2dd_detailed <- function(data,
}
if (lapply(data, haven::is.labelled) |> (\(x)do.call(c, x))() |> any()) {
message("Data seems to be imported with haven from a Stata (.dta) file and will be treated as such.")
message("Data seems to be imported with haven from a Stata (.dta) file and
will be treated as such.")
data.source <- "dta"
} else {
data.source <- ""
@ -172,18 +179,25 @@ ds2dd_detailed <- function(data,
## data classes
### Only keeps the first class, as time fields (POSIXct/POSIXt) has two classes
### Only keeps the first class, as time fields (POSIXct/POSIXt) has two
### classes
if (data.source == "dta") {
data_classes <-
data |>
haven::as_factor() |>
time_only_correction(sel.pos = time.var.sel.pos, sel.neg = time.var.sel.neg) |>
time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
) |>
lapply(\(x)class(x)[1]) |>
(\(x)do.call(c, x))()
} else {
data_classes <-
data |>
time_only_correction(sel.pos = time.var.sel.pos, sel.neg = time.var.sel.neg) |>
time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
) |>
lapply(\(x)class(x)[1]) |>
(\(x)do.call(c, x))()
}
@ -204,7 +218,7 @@ ds2dd_detailed <- function(data,
if (is.null(form.name)) {
dd$form_name <- "data"
} else {
if (length(form.name) == 1 | length(form.name) == nrow(dd)) {
if (length(form.name) == 1 || length(form.name) == nrow(dd)) {
dd$form_name <- form.name
} else {
stop("Length of supplied 'form.name' has to be one (1) or ncol(data).")
@ -229,9 +243,11 @@ ds2dd_detailed <- function(data,
}
dd <-
dd |> dplyr::mutate(field_label = dplyr::if_else(is.na(label), field_name, label))
dd |> dplyr::mutate(field_label = dplyr::if_else(is.na(label),
field_name, label
))
} else {
if (length(field.label) == 1 | length(field.label) == nrow(dd)) {
if (length(field.label) == 1 || length(field.label) == nrow(dd)) {
dd$field_label <- field.label
} else {
stop("Length of supplied 'field.label' has to be one (1) or ncol(data).")
@ -245,9 +261,11 @@ ds2dd_detailed <- function(data,
dd$field_type <- "text"
dd <-
dd |> dplyr::mutate(field_type = dplyr::if_else(data_classes == "factor", "radio", field_type))
dd |> dplyr::mutate(field_type = dplyr::if_else(data_classes == "factor",
"radio", field_type
))
} else {
if (length(field.type) == 1 | length(field.type) == nrow(dd)) {
if (length(field.type) == 1 || length(field.type) == nrow(dd)) {
dd$field_type <- field.type
} else {
stop("Length of supplied 'field.type' has to be one (1) or ncol(data).")
@ -271,10 +289,11 @@ ds2dd_detailed <- function(data,
)
)
} else {
if (length(field.validation) == 1 | length(field.validation) == nrow(dd)) {
if (length(field.validation) == 1 || length(field.validation) == nrow(dd)) {
dd$text_validation_type_or_show_slider_number <- field.validation
} else {
stop("Length of supplied 'field.validation' has to be one (1) or ncol(data).")
stop("Length of supplied 'field.validation'
has to be one (1) or ncol(data).")
}
}
@ -300,7 +319,13 @@ ds2dd_detailed <- function(data,
## Re-factors to avoid confusion with missing levels
## Assumes all relevant levels are represented in the data
re_fac <- factor(x)
paste(paste(unique(as.numeric(re_fac)), levels(re_fac), sep = ", "), collapse = " | ")
paste(
paste(unique(as.numeric(re_fac)),
levels(re_fac),
sep = ", "
),
collapse = " | "
)
} else {
NA
}
@ -319,7 +344,10 @@ ds2dd_detailed <- function(data,
list(
data = data |>
time_only_correction(sel.pos = time.var.sel.pos, sel.neg = time.var.sel.neg) |>
time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
) |>
hms2character() |>
(\(x)stats::setNames(x, tolower(names(x))))(),
meta = dd
@ -337,7 +365,12 @@ mark_complete <- function(upload, ls){
data <- ls$data
meta <- ls$meta
forms <- unique(meta$form_name)
cbind(data[[1]][data[[1]] %in% upload$affected_ids],
data.frame(matrix(2,ncol=length(forms),nrow=upload$records_affected_count))) |>
cbind(
data[[1]][data[[1]] %in% upload$affected_ids],
data.frame(matrix(2,
ncol = length(forms),
nrow = upload$records_affected_count
))
) |>
stats::setNames(c(names(data)[1], paste0(forms, "_complete")))
}

View File

@ -1,4 +1,3 @@
#' Retrieve project API key if stored, if not, set and retrieve
#'
#' @param key.name character vector of key name

View File

@ -20,4 +20,3 @@
#' }
#' @usage data(mtcars_redcap)
"mtcars_redcap"

View File

@ -30,10 +30,8 @@ process_user_input.character <- function(x, ...) {
}
jsonlite::fromJSON(x)
}
process_user_input.response <- function(x, ...) {
process_user_input(rawToChar(x$content))
}

View File

@ -1,11 +1,13 @@
#' Convenience function to download complete instrument, using token storage in keyring.
#' Convenience function to download complete instrument, using token storage
#' in keyring.
#'
#' @param key key name in standard keyring for token retrieval.
#' @param uri REDCap database API uri
#' @param instrument instrument name
#' @param raw_or_label raw or label passed to `REDCapR::redcap_read()`
#' @param id_name id variable name. Default is "record_id".
#' @param records specify the records to download. Index numbers. Numeric vector.
#' @param records specify the records to download. Index numbers.
#' Numeric vector.
#'
#' @return data.frame
#' @export
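
To make the documented interface above concrete, a hedged usage sketch; the keyring key name, URI and instrument name below are placeholders rather than values from this changeset:

```r
# Assumes an API token has been stored with keyring under the name "redcap_api".
baseline <- read_redcap_instrument(
  key          = "redcap_api",                      # keyring key holding the token
  uri          = "https://redcap.example.org/api/", # site-specific API endpoint
  instrument   = "baseline",                        # instrument to download in full
  raw_or_label = "label",
  id_name      = "record_id"
)
```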

View File

@ -38,7 +38,8 @@ read_redcap_tables <- function(uri,
fields_test <- fields %in% unique(m$field_name)
if (any(!fields_test)) {
print(paste0("The following field names are invalid: ", paste(fields[!fields_test], collapse = ", "), "."))
print(paste0("The following field names are invalid: ",
paste(fields[!fields_test], collapse = ", "), "."))
stop("Not all supplied field names are valid")
}
}
@ -48,7 +49,8 @@ read_redcap_tables <- function(uri,
forms_test <- forms %in% unique(m$form_name)
if (any(!forms_test)) {
print(paste0("The following form names are invalid: ", paste(forms[!forms_test], collapse = ", "), "."))
print(paste0("The following form names are invalid: ",
paste(forms[!forms_test], collapse = ", "), "."))
stop("Not all supplied form names are valid")
}
}
@ -62,7 +64,8 @@ read_redcap_tables <- function(uri,
event_test <- events %in% unique(arm_event_inst$data$unique_event_name)
if (any(!event_test)) {
print(paste0("The following event names are invalid: ", paste(events[!event_test], collapse = ", "), "."))
print(paste0("The following event names are invalid: ",
paste(events[!event_test], collapse = ", "), "."))
stop("Not all supplied event names are valid")
}
}
@ -97,7 +100,4 @@ read_redcap_tables <- function(uri,
)
sanitize_split(out)
}
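
For context, a sketch of a call that exercises the validation above; the URI is a placeholder, and `token` is assumed to be the API-token argument (only `uri`, `fields`, `forms` and `events` are visible in these hunks):

```r
tables <- read_redcap_tables(
  uri    = "https://redcap.example.org/api/",
  token  = keyring::key_get("cast_api"),
  forms  = c("baseline", "walk"),   # invalid form names now trigger the clearer
  fields = "record_id"              # messages printed before stop() above
)
str(tables)
```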

View File

@ -1,6 +1,8 @@
utils::globalVariables(c("redcap_wider",
utils::globalVariables(c(
"redcap_wider",
"event.glue",
"inst.glue"))
"inst.glue"
))
#' @title Redcap Wider
#' @description Converts a list of REDCap data frames from long to wide format.
@ -16,42 +18,65 @@ utils::globalVariables(c("redcap_wider",
#'
#' @examples
#' # Longitudinal
#' list1 <- list(data.frame(record_id = c(1,2,1,2),
#' list1 <- list(
#' data.frame(
#' record_id = c(1, 2, 1, 2),
#' redcap_event_name = c("baseline", "baseline", "followup", "followup"),
#' age = c(25,26,27,28)),
#' data.frame(record_id = c(1,2),
#' age = c(25, 26, 27, 28)
#' ),
#' data.frame(
#' record_id = c(1, 2),
#' redcap_event_name = c("baseline", "baseline"),
#' gender = c("male", "female")))
#' gender = c("male", "female")
#' )
#' )
#' redcap_wider(list1)
#' # Simple with two instruments
#' list2 <- list(data.frame(record_id = c(1,2),
#' age = c(25,26)),
#' data.frame(record_id = c(1,2),
#' gender = c("male", "female")))
#' list2 <- list(
#' data.frame(
#' record_id = c(1, 2),
#' age = c(25, 26)
#' ),
#' data.frame(
#' record_id = c(1, 2),
#' gender = c("male", "female")
#' )
#' )
#' redcap_wider(list2)
#' # Simple with single instrument
#' list3 <- list(data.frame(record_id = c(1,2),
#' age = c(25,26)))
#' list3 <- list(data.frame(
#' record_id = c(1, 2),
#' age = c(25, 26)
#' ))
#' redcap_wider(list3)
#' # Longitudinal with repeatable instruments
#' list4 <- list(data.frame(record_id = c(1,2,1,2),
#' list4 <- list(
#' data.frame(
#' record_id = c(1, 2, 1, 2),
#' redcap_event_name = c("baseline", "baseline", "followup", "followup"),
#' age = c(25,26,27,28)),
#' data.frame(record_id = c(1,1,1,1,2,2,2,2),
#' redcap_event_name = c("baseline", "baseline", "followup", "followup",
#' "baseline", "baseline", "followup", "followup"),
#' age = c(25, 26, 27, 28)
#' ),
#' data.frame(
#' record_id = c(1, 1, 1, 1, 2, 2, 2, 2),
#' redcap_event_name = c(
#' "baseline", "baseline", "followup", "followup",
#' "baseline", "baseline", "followup", "followup"
#' ),
#' redcap_repeat_instrument = "walk",
#' redcap_repeat_instance = c(1, 2, 1, 2, 1, 2, 1, 2),
#' dist = c(40, 32, 25, 33, 28, 24, 23, 36)),
#' data.frame(record_id = c(1,2),
#' dist = c(40, 32, 25, 33, 28, 24, 23, 36)
#' ),
#' data.frame(
#' record_id = c(1, 2),
#' redcap_event_name = c("baseline", "baseline"),
#' gender = c("male", "female")))
#' gender = c("male", "female")
#' )
#' )
#' redcap_wider(list4)
redcap_wider <-
function(data,
event.glue = "{.value}_{redcap_event_name}",
inst.glue = "{.value}_{redcap_repeat_instance}") {
if (!is_repeated_longitudinal(data)) {
if (is.list(data)) {
if (length(data) == 1) {
@ -62,10 +87,7 @@ redcap_wider <-
} else if (is.data.frame(data)) {
out <- data
}
} else {
id.name <- do.call(c, lapply(data, names))[[1]]
l <- lapply(data, function(i) {
@ -124,4 +146,3 @@ redcap_wider <-
out
}

View File

@ -33,5 +33,3 @@
#' }
#' @usage data(redcapcast_data)
"redcapcast_data"

View File

@ -25,5 +25,3 @@
#' }
#' @usage data(redcapcast_meta)
"redcapcast_meta"

View File

@ -15,7 +15,6 @@ server_factory <- function() {
ui_factory <- function() {
# require(ggplot2)
source(here::here("app/ui.R"))
}
#' Launch the included Shiny-app for database casting and upload

View File

@ -128,9 +128,11 @@ sanitize_split <- function(l,
"redcap_repeat_instrument",
"redcap_repeat_instance"
)) {
generic.names <- c(get_id_name(l),
generic.names <- c(
get_id_name(l),
generic.names,
paste0(names(l), "_complete"))
paste0(names(l), "_complete")
)
lapply(l, function(i) {
if (ncol(i) > 2) {
@ -334,7 +336,8 @@ split_non_repeating_forms <-
#' @export
#'
#' @examples
#' test <- c("12 months follow-up", "3 steps", "mRS 6 weeks", "Counting to 231 now")
#' test <- c("12 months follow-up", "3 steps", "mRS 6 weeks",
#' "Counting to 231 now")
#' strsplitx(test, "[0-9]", type = "around")
strsplitx <- function(x,
split,
@ -403,7 +406,8 @@ d2w <- function(x, lang = "en", neutrum = FALSE, everything = FALSE) {
# In Danish the written 1 depends on the counted word
if (neutrum) nt <- "t" else nt <- "n"
# A sapply() call with nested lapply() to handle vectors, data.frames and lists
# A sapply() call with nested lapply() to handle vectors, data.frames
# and lists
convert <- function(x, lang, neutrum) {
zero_nine <- data.frame(
num = 0:9,
@ -503,7 +507,9 @@ is_repeated_longitudinal <- function(data, generics = c(
#' @examples
#' file_extension(list.files(here::here(""))[[2]])[[1]]
file_extension <- function(filenames) {
sub(pattern = "^(.*\\.|[^.]+)(?=[^.]*)", replacement = "", filenames, perl = TRUE)
sub(pattern = "^(.*\\.|[^.]+)(?=[^.]*)", replacement = "",
filenames,
perl = TRUE)
}
#' Flexible file import based on extension
@ -517,7 +523,6 @@ file_extension <- function(filenames) {
#' @examples
#' read_input("https://raw.githubusercontent.com/agdamsbo/cognitive.index.lookup/main/data/sample.csv")
read_input <- function(file, consider.na = c("NA", '""', "")) {
ext <- file_extension(file)
tryCatch(

View File

@ -52,3 +52,6 @@ Install the latest version directly from GitHub:
remotes::install_github("agdamsbo/REDCapCAST")
```
## Code of Conduct
Please note that the REDCapCAST project is released with a [Contributor Code of Conduct](https://agdamsbo.github.io/REDCapCAST/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.

View File

@ -1,4 +1,6 @@
mtcars_redcap <- mtcars |> dplyr::mutate(record_id=seq_len(dplyr::n()),
mtcars_redcap <- mtcars |>
dplyr::mutate(
record_id = seq_len(dplyr::n()),
name = rownames(mtcars)
) |>
dplyr::select(record_id, dplyr::everything())

View File

@ -3,11 +3,12 @@
# "field_label", "select_choices_or_calculations", "field_note",
# "text_validation_type_or_show_slider_number", "text_validation_min",
# "text_validation_max", "identifier", "branching_logic", "required_field",
# "custom_alignment", "question_number", "matrix_group_name", "matrix_ranking",
# "field_annotation"
# "custom_alignment", "question_number", "matrix_group_name",
# "matrix_ranking", "field_annotation"
# )
metadata_names <- REDCapR::redcap_metadata_read(redcap_uri = keyring::key_get("DB_URI"),
metadata_names <- REDCapR::redcap_metadata_read(
redcap_uri = keyring::key_get("DB_URI"),
token = keyring::key_get("cast_api")
)$data |> names()

View File

@ -1,6 +1,7 @@
## code to prepare `redcapcast_data` dataset goes here
redcapcast_data <- REDCapR::redcap_read(redcap_uri = keyring::key_get("DB_URI"),
redcapcast_data <- REDCapR::redcap_read(
redcap_uri = keyring::key_get("DB_URI"),
token = keyring::key_get("cast_api"),
raw_or_label = "label"
)$data |> dplyr::tibble()

View File

@ -1,5 +1,6 @@
## code to prepare `redcapcast_meta` dataset goes here
redcapcast_meta <- REDCapR::redcap_metadata_read(redcap_uri = keyring::key_get("DB_URI"),
redcapcast_meta <- REDCapR::redcap_metadata_read(
redcap_uri = keyring::key_get("DB_URI"),
token = keyring::key_get("cast_api")
)$data

67
inst/WORDLIST Normal file
View File

@ -0,0 +1,67 @@
Assesment
CMD
Codecov
DOI
DataDictionary
GStat
GithubActions
JSON
Lifecycle
METACRAN
POSIXct
Pivotting
README
REDCap
REDCapR
REDCapRITS
THe
UI
Whishes
al
api
attr
charater
da
dafault
datetime
demonstrational
dmy
doi
dplyr
ds
dta
et
gues
hms
immprovements
io
jbi
keyring
labelled
mRS
matadata
md
mdy
mtcars
natively
ncol
og
param
pegeler
perl
pos
readr
rsconnect
sel
shinyapps
stRoke
stata
strsplit
thorugh
tibble
tidyverse
transistion
ui
uri
wil
ymd

View File

@ -52,17 +52,17 @@ library(RCurl)
records <- postForm(
uri = api_url, # Supply your site-specific URI
token = api_token, # Supply your own API token
content = 'record',
format = 'json',
returnFormat = 'json'
content = "record",
format = "json",
returnFormat = "json"
)
# Get the metadata
metadata <- postForm(
uri = api_url, # Supply your site-specific URI
token = api_token, # Supply your own API token
content = 'metadata',
format = 'json'
content = "metadata",
format = "json"
)
# Convert exported JSON strings into a list of data.frames
@ -75,7 +75,8 @@ records <- read.csv("/path/to/data/ExampleProject_DATA_2018-06-03_1700.csv")
# Get the metadata
metadata <- read.csv(
"/path/to/data/ExampleProject_DataDictionary_2018-06-03.csv")
"/path/to/data/ExampleProject_DataDictionary_2018-06-03.csv"
)
# Split the tables
REDCapRITS::REDCap_split(records, metadata)

View File

@ -44,8 +44,8 @@ data set (imported .dta file with `haven::read_dta()`. Default is "label"}
\item{field.validation}{manually specify field validation(s). Vector of
length 1 or ncol(data). Default is NULL and `levels()` are used for factors
or attribute `factor.labels.attr` for haven_labelled data set (imported .dta file with
`haven::read_dta()`).}
or attribute `factor.labels.attr` for haven_labelled data set (imported .dta
file with `haven::read_dta()`).}
\item{metadata}{redcap metadata headings. Default is
REDCapCAST:::metadata_names.}

View File

@ -32,5 +32,7 @@ has to be converted to character class before REDCap upload.
\examples{
data <- redcapcast_data
data |> guess_time_only_filter()
data |> guess_time_only_filter(validate = TRUE) |> lapply(head)
data |>
guess_time_only_filter(validate = TRUE) |>
lapply(head)
}

View File

@ -2,7 +2,8 @@
% Please edit documentation in R/read_redcap_instrument.R
\name{read_redcap_instrument}
\alias{read_redcap_instrument}
\title{Convenience function to download complete instrument, using token storage in keyring.}
\title{Convenience function to download complete instrument, using token storage
in keyring.}
\usage{
read_redcap_instrument(
key,
@ -24,11 +25,13 @@ read_redcap_instrument(
\item{id_name}{id variable name. Default is "record_id".}
\item{records}{specify the records to download. Index numbers. Numeric vector.}
\item{records}{specify the records to download. Index numbers.
Numeric vector.}
}
\value{
data.frame
}
\description{
Convenience function to download complete instrument, using token storage in keyring.
Convenience function to download complete instrument, using token storage
in keyring.
}

View File

@ -26,35 +26,59 @@ Handles longitudinal projects, but not yet repeated instruments.
}
\examples{
# Longitudinal
list1 <- list(data.frame(record_id = c(1,2,1,2),
list1 <- list(
data.frame(
record_id = c(1, 2, 1, 2),
redcap_event_name = c("baseline", "baseline", "followup", "followup"),
age = c(25,26,27,28)),
data.frame(record_id = c(1,2),
age = c(25, 26, 27, 28)
),
data.frame(
record_id = c(1, 2),
redcap_event_name = c("baseline", "baseline"),
gender = c("male", "female")))
gender = c("male", "female")
)
)
redcap_wider(list1)
# Simple with two instruments
list2 <- list(data.frame(record_id = c(1,2),
age = c(25,26)),
data.frame(record_id = c(1,2),
gender = c("male", "female")))
list2 <- list(
data.frame(
record_id = c(1, 2),
age = c(25, 26)
),
data.frame(
record_id = c(1, 2),
gender = c("male", "female")
)
)
redcap_wider(list2)
# Simple with single instrument
list3 <- list(data.frame(record_id = c(1,2),
age = c(25,26)))
list3 <- list(data.frame(
record_id = c(1, 2),
age = c(25, 26)
))
redcap_wider(list3)
# Longitudinal with repeatable instruments
list4 <- list(data.frame(record_id = c(1,2,1,2),
list4 <- list(
data.frame(
record_id = c(1, 2, 1, 2),
redcap_event_name = c("baseline", "baseline", "followup", "followup"),
age = c(25,26,27,28)),
data.frame(record_id = c(1,1,1,1,2,2,2,2),
redcap_event_name = c("baseline", "baseline", "followup", "followup",
"baseline", "baseline", "followup", "followup"),
age = c(25, 26, 27, 28)
),
data.frame(
record_id = c(1, 1, 1, 1, 2, 2, 2, 2),
redcap_event_name = c(
"baseline", "baseline", "followup", "followup",
"baseline", "baseline", "followup", "followup"
),
redcap_repeat_instrument = "walk",
redcap_repeat_instance = c(1, 2, 1, 2, 1, 2, 1, 2),
dist = c(40, 32, 25, 33, 28, 24, 23, 36)),
data.frame(record_id = c(1,2),
dist = c(40, 32, 25, 33, 28, 24, 23, 36)
),
data.frame(
record_id = c(1, 2),
redcap_event_name = c("baseline", "baseline"),
gender = c("male", "female")))
gender = c("male", "female")
)
)
redcap_wider(list4)
}

View File

@ -25,6 +25,7 @@ Can be used as a substitute of the base function. Main claim to fame is
easing the split around the defined delimiter, see example.
}
\examples{
test <- c("12 months follow-up", "3 steps", "mRS 6 weeks", "Counting to 231 now")
test <- c("12 months follow-up", "3 steps", "mRS 6 weeks",
"Counting to 231 now")
strsplitx(test, "[0-9]", type = "around")
}

View File

@ -2,8 +2,7 @@
% Please edit documentation in R/ds2dd_detailed.R
\name{time_only_correction}
\alias{time_only_correction}
\title{Correction based on time_only_filter function. Introduces new class for easier
validation labelling.}
\title{Correction based on time_only_filter function}
\usage{
time_only_correction(data, ...)
}
@ -16,8 +15,7 @@ time_only_correction(data, ...)
tibble
}
\description{
Dependens on the data class "hms" introduced with
`guess_time_only_filter()` and converts these
Correction based on time_only_filter function
}
\examples{
data <- redcapcast_data

364
renv.lock
View File

@ -39,6 +39,17 @@
],
"Hash": "e76c401b631961c865b89bb5a4ea3b97"
},
"Rcpp": {
"Package": "Rcpp",
"Version": "1.0.12",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"methods",
"utils"
],
"Hash": "5ea2700d21e038ace58269ecdbeb9ec0"
},
"askpass": {
"Package": "askpass",
"Version": "1.2.0",
@ -69,6 +80,16 @@
],
"Hash": "c39fbec8a30d23e721980b8afb31984c"
},
"base64enc": {
"Package": "base64enc",
"Version": "0.1-3",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R"
],
"Hash": "543776ae6848fde2f48ff3816d0628bc"
},
"bit": {
"Package": "bit",
"Version": "4.0.5",
@ -93,6 +114,38 @@
],
"Hash": "9fe98599ca456d6552421db0d6772d8f"
},
"bslib": {
"Package": "bslib",
"Version": "0.6.1",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"base64enc",
"cachem",
"grDevices",
"htmltools",
"jquerylib",
"jsonlite",
"lifecycle",
"memoise",
"mime",
"rlang",
"sass"
],
"Hash": "c0d8599494bc7fb408cd206bbdd9cab0"
},
"cachem": {
"Package": "cachem",
"Version": "1.0.8",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"fastmap",
"rlang"
],
"Hash": "c35768291560ce302c0a6589f92e837d"
},
"checkmate": {
"Package": "checkmate",
"Version": "2.3.1",
@ -126,6 +179,13 @@
],
"Hash": "3f038e5ac7f41d4ac41ce658c85e3042"
},
"commonmark": {
"Package": "commonmark",
"Version": "1.9.1",
"Source": "Repository",
"Repository": "CRAN",
"Hash": "5d8225445acb167abf7797de48b2ee3c"
},
"cpp11": {
"Package": "cpp11",
"Version": "0.4.7",
@ -158,6 +218,17 @@
],
"Hash": "ce88d13c0b10fe88a37d9c59dba2d7f9"
},
"digest": {
"Package": "digest",
"Version": "0.6.34",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"utils"
],
"Hash": "7ede2ee9ea8d3edbf1ca84c1e333ad1a"
},
"dplyr": {
"Package": "dplyr",
"Version": "1.1.4",
@ -181,6 +252,17 @@
],
"Hash": "fedd9d00c2944ff00a0e2696ccf048ec"
},
"ellipsis": {
"Package": "ellipsis",
"Version": "0.3.2",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"rlang"
],
"Hash": "bb0eec2fe32e88d9e2836c2f73ea2077"
},
"fansi": {
"Package": "fansi",
"Version": "1.0.6",
@ -193,6 +275,13 @@
],
"Hash": "962174cf2aeb5b9eea581522286a911f"
},
"fastmap": {
"Package": "fastmap",
"Version": "1.1.1",
"Source": "Repository",
"Repository": "CRAN",
"Hash": "f7736a18de97dea803bde0a2daaafb27"
},
"filelock": {
"Package": "filelock",
"Version": "1.0.3",
@ -203,6 +292,45 @@
],
"Hash": "192053c276525c8495ccfd523aa8f2d1"
},
"fontawesome": {
"Package": "fontawesome",
"Version": "0.5.2",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"htmltools",
"rlang"
],
"Hash": "c2efdd5f0bcd1ea861c2d4e2a883a67d"
},
"forcats": {
"Package": "forcats",
"Version": "1.0.0",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"cli",
"glue",
"lifecycle",
"magrittr",
"rlang",
"tibble"
],
"Hash": "1a0a9a3d5083d0d573c4214576f1e690"
},
"fs": {
"Package": "fs",
"Version": "1.6.3",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"methods"
],
"Hash": "47b5f30c720c23999b913a1a635cf0bb"
},
"generics": {
"Package": "generics",
"Version": "0.1.3",
@ -225,6 +353,27 @@
],
"Hash": "e0b3a53876554bd45879e596cdb10a52"
},
"haven": {
"Package": "haven",
"Version": "2.5.4",
"Source": "Repository",
"Repository": "RSPM",
"Requirements": [
"R",
"cli",
"cpp11",
"forcats",
"hms",
"lifecycle",
"methods",
"readr",
"rlang",
"tibble",
"tidyselect",
"vctrs"
],
"Hash": "9171f898db9d9c4c1b2c745adc2c1ef1"
},
"hms": {
"Package": "hms",
"Version": "1.1.3",
@ -239,6 +388,38 @@
],
"Hash": "b59377caa7ed00fa41808342002138f9"
},
"htmltools": {
"Package": "htmltools",
"Version": "0.5.7",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"base64enc",
"digest",
"ellipsis",
"fastmap",
"grDevices",
"rlang",
"utils"
],
"Hash": "2d7b3857980e0e0d0a1fd6f11928ab0f"
},
"httpuv": {
"Package": "httpuv",
"Version": "1.6.14",
"Source": "Repository",
"Repository": "RSPM",
"Requirements": [
"R",
"R6",
"Rcpp",
"later",
"promises",
"utils"
],
"Hash": "16abeb167dbf511f8cc0552efaf05bab"
},
"httr": {
"Package": "httr",
"Version": "1.4.7",
@ -254,6 +435,16 @@
],
"Hash": "ac107251d9d9fd72f0ca8049988f1d7f"
},
"jquerylib": {
"Package": "jquerylib",
"Version": "0.1.4",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"htmltools"
],
"Hash": "5aab57a3bd297eee1c1d862735972182"
},
"jsonlite": {
"Package": "jsonlite",
"Version": "1.8.8",
@ -284,6 +475,17 @@
],
"Hash": "5cd8cfb2e90c57110b7dd1785c599aba"
},
"later": {
"Package": "later",
"Version": "1.3.2",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"Rcpp",
"rlang"
],
"Hash": "a3e051d405326b8b0012377434c62b37"
},
"lifecycle": {
"Package": "lifecycle",
"Version": "1.0.4",
@ -307,6 +509,17 @@
],
"Hash": "7ce2733a9826b3aeb1775d56fd305472"
},
"memoise": {
"Package": "memoise",
"Version": "2.0.1",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"cachem",
"rlang"
],
"Hash": "e2817ccf4a065c5d9d7f2cfbe7c1d78c"
},
"mime": {
"Package": "mime",
"Version": "0.12",
@ -327,6 +540,35 @@
],
"Hash": "2a0dc8c6adfb6f032e4d4af82d258ab5"
},
"openxlsx2": {
"Package": "openxlsx2",
"Version": "1.4",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"R6",
"Rcpp",
"grDevices",
"magrittr",
"stringi",
"utils",
"zip"
],
"Hash": "9fa7cdc5fbdb1c8511fdde72a944db63"
},
"packrat": {
"Package": "packrat",
"Version": "0.9.2",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"tools",
"utils"
],
"Hash": "55ddd2d4a1959535f18393478b0c14a6"
},
"pillar": {
"Package": "pillar",
"Version": "1.9.0",
@ -378,6 +620,22 @@
],
"Hash": "f4625e061cb2865f111b47ff163a5ca6"
},
"promises": {
"Package": "promises",
"Version": "1.2.1",
"Source": "Repository",
"Repository": "RSPM",
"Requirements": [
"R6",
"Rcpp",
"fastmap",
"later",
"magrittr",
"rlang",
"stats"
],
"Hash": "0d8a15c9d000970ada1ab21405387dee"
},
"purrr": {
"Package": "purrr",
"Version": "1.0.2",
@ -447,6 +705,83 @@
],
"Hash": "42548638fae05fd9a9b5f3f437fbbbe2"
},
"rsconnect": {
"Package": "rsconnect",
"Version": "1.2.1",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"cli",
"curl",
"digest",
"jsonlite",
"lifecycle",
"openssl",
"packrat",
"renv",
"rlang",
"rstudioapi",
"tools",
"yaml"
],
"Hash": "94bb3a2125b01b13dd2e4a784c2a9639"
},
"rstudioapi": {
"Package": "rstudioapi",
"Version": "0.15.0",
"Source": "Repository",
"Repository": "CRAN",
"Hash": "5564500e25cffad9e22244ced1379887"
},
"sass": {
"Package": "sass",
"Version": "0.4.8",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R6",
"fs",
"htmltools",
"rappdirs",
"rlang"
],
"Hash": "168f9353c76d4c4b0a0bbf72e2c2d035"
},
"shiny": {
"Package": "shiny",
"Version": "1.8.0",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"R6",
"bslib",
"cachem",
"commonmark",
"crayon",
"ellipsis",
"fastmap",
"fontawesome",
"glue",
"grDevices",
"htmltools",
"httpuv",
"jsonlite",
"later",
"lifecycle",
"methods",
"mime",
"promises",
"rlang",
"sourcetools",
"tools",
"utils",
"withr",
"xtable"
],
"Hash": "3a1f41807d648a908e3c7f0334bf85e6"
},
"sodium": {
"Package": "sodium",
"Version": "1.3.1",
@ -454,6 +789,16 @@
"Repository": "CRAN",
"Hash": "dd86d6fd2a01d4eb3777dfdee7076d56"
},
"sourcetools": {
"Package": "sourcetools",
"Version": "0.1.7-1",
"Source": "Repository",
"Repository": "RSPM",
"Requirements": [
"R"
],
"Hash": "5f5a7629f956619d519205ec475fe647"
},
"stringi": {
"Package": "stringi",
"Version": "1.8.3",
@ -622,12 +967,31 @@
],
"Hash": "d31b6c62c10dcf11ec530ca6b0dd5d35"
},
"xtable": {
"Package": "xtable",
"Version": "1.8-4",
"Source": "Repository",
"Repository": "CRAN",
"Requirements": [
"R",
"stats",
"utils"
],
"Hash": "b8acdf8af494d9ec19ccb2481a9b11c2"
},
"yaml": {
"Package": "yaml",
"Version": "2.3.8",
"Source": "Repository",
"Repository": "CRAN",
"Hash": "29240487a071f535f5e5d5a323b7afbd"
},
"zip": {
"Package": "zip",
"Version": "2.3.1",
"Source": "Repository",
"Repository": "RSPM",
"Hash": "fcc4bd8e6da2d2011eb64a5e5cc685ab"
}
}
}

3
tests/spelling.R Normal file
View File

@ -0,0 +1,3 @@
if(requireNamespace('spelling', quietly = TRUE))
spelling::spell_check_test(vignettes = TRUE, error = FALSE,
skip_on_cran = TRUE)

View File

@ -12,17 +12,18 @@ ref_data_location <- function(x) file.path("tests","testthat","data", x)
# RCurl -------------------------------------------------------------------
REDCap_split(
ref_data_location("ExampleProject_records.json") %>% fromJSON,
ref_data_location("ExampleProject_metadata.json") %>% fromJSON
) %>% digest
ref_data_location("ExampleProject_records.json") %>% fromJSON(),
ref_data_location("ExampleProject_metadata.json") %>% fromJSON()
) %>% digest()
# Basic CSV ---------------------------------------------------------------
REDCap_split(
ref_data_location("ExampleProject_DATA_2018-06-07_1129.csv") %>% read.csv,
ref_data_location("ExampleProject_DataDictionary_2018-06-07.csv") %>% read.csv
) %>% digest
ref_data_location("ExampleProject_DATA_2018-06-07_1129.csv") %>% read.csv(),
ref_data_location("ExampleProject_DataDictionary_2018-06-07.csv") %>%
read.csv()
) %>% digest()
# REDCap R Export ---------------------------------------------------------
@ -30,10 +31,11 @@ source("tests/testthat/helper-ExampleProject_R_2018-06-07_1129.r")
REDCap_split(
ref_data_location("ExampleProject_DATA_2018-06-07_1129.csv") %>%
read.csv %>%
REDCap_process_csv,
ref_data_location("ExampleProject_DataDictionary_2018-06-07.csv") %>% read.csv
) %>% digest
read.csv() %>%
REDCap_process_csv(),
ref_data_location("ExampleProject_DataDictionary_2018-06-07.csv") %>%
read.csv()
) %>% digest()
# Longitudinal data from @pbchase; Issue #7 -------------------------------
@ -41,9 +43,10 @@ file_paths <- vapply(
c(
records = "WARRIORtestForSoftwa_DATA_2018-06-21_1431.csv",
metadata = "WARRIORtestForSoftwareUpgrades_DataDictionary_2018-06-21.csv"
), FUN.VALUE = "character", ref_data_location
),
FUN.VALUE = "character", ref_data_location
)
redcap <- lapply(file_paths, read.csv, stringsAsFactors = FALSE)
redcap[["metadata"]] <- with(redcap, metadata[metadata[, 1] > "", ])
with(redcap, REDCap_split(records, metadata)) %>% digest
with(redcap, REDCap_split(records, metadata)) %>% digest()

View File

@ -1,5 +1,3 @@
# Set up the path and data -------------------------------------------------
metadata <- read.csv(
get_data_location("ExampleProject_DataDictionary_2018-06-07.csv"),
@ -8,7 +6,8 @@ metadata <- read.csv(
records <-
read.csv(get_data_location("ExampleProject_DATA_2018-06-07_1129.csv"),
stringsAsFactors = TRUE)
stringsAsFactors = TRUE
)
redcap_output_csv1 <- REDCap_split(records, metadata)
@ -19,18 +18,19 @@ test_that("CSV export matches reference", {
# Test that REDCap_split can handle a focused dataset
records_red <- records[!records$redcap_repeat_instrument == "sale",
records_red <- records[
!records$redcap_repeat_instrument == "sale",
!names(records) %in%
metadata$field_name[metadata$form_name == "sale"] &
!names(records) == "sale_complete"]
!names(records) == "sale_complete"
]
records_red$redcap_repeat_instrument <-
as.character(records_red$redcap_repeat_instrument)
redcap_output_red <- REDCap_split(records_red, metadata)
test_that("REDCap_split handles subset dataset",
{
test_that("REDCap_split handles subset dataset", {
testthat::expect_length(redcap_output_red, 1)
})
@ -47,35 +47,40 @@ if (requireNamespace("Hmisc", quietly = TRUE)) {
if (requireNamespace("readr", quietly = TRUE)) {
metadata <-
readr::read_csv(get_data_location(
"ExampleProject_DataDictionary_2018-06-07.csv"))
"ExampleProject_DataDictionary_2018-06-07.csv"
))
records <-
readr::read_csv(get_data_location(
"ExampleProject_DATA_2018-06-07_1129.csv"))
"ExampleProject_DATA_2018-06-07_1129.csv"
))
redcap_output_readr <- REDCap_split(records, metadata)
expect_matching_elements <- function(FUN) {
FUN <- match.fun(FUN)
expect_identical(lapply(redcap_output_readr, FUN),
lapply(redcap_output_csv1, FUN))
expect_identical(
lapply(redcap_output_readr, FUN),
lapply(redcap_output_csv1, FUN)
)
}
test_that("Result of data read in with `readr` will
match result with `read.csv`",
{
match result with `read.csv`", {
# The list itself
expect_identical(length(redcap_output_readr),
length(redcap_output_csv1))
expect_identical(names(redcap_output_readr),
names(redcap_output_csv1))
expect_identical(
length(redcap_output_readr),
length(redcap_output_csv1)
)
expect_identical(
names(redcap_output_readr),
names(redcap_output_csv1)
)
# Each element of the list
expect_matching_elements(names)
expect_matching_elements(dim)
})
}

View File

@ -1,6 +1,7 @@
test_that("strsplitx works", {
expect_equal(2 * 2, 4)
test <- c("12 months follow-up", "3 steps", "mRS 6 weeks", "Counting to 231 now")
test <- c("12 months follow-up", "3 steps", "mRS 6 weeks",
"Counting to 231 now")
expect_length(strsplitx(test, "[0-9]", type = "around")[[1]], 3)
expect_equal(strsplitx(test, "[0-9]", type = "classic")[[2]][1], "")
@ -10,11 +11,12 @@ test_that("strsplitx works", {
})
test_that("d2w works", {
expect_length(d2w(c(2:8, 21)), 8)
expect_equal(d2w(data.frame(2:7,3:8,1),lang="da",
neutrum=TRUE)[1,3],"et")
expect_equal(d2w(data.frame(2:7, 3:8, 1),
lang = "da",
neutrum = TRUE
)[1, 3], "et")
expect_equal(d2w(list(2:8, c(2, 6, 4, 23), 2), everything = T)[[2]][4], "two three")
})

View File

@ -25,7 +25,8 @@ THe first iteration of a dataset to data dictionary function is the `ds2dd()`, w
```{r eval=FALSE}
mtcars |>
dplyr::mutate(record_id = seq_len(dplyr::n())) |>
ds2dd() |> str()
ds2dd() |>
str()
```
The more advanced `ds2dd_detailed()` is a natural development. It will try to apply the most common data classes for data validation and will assume that the first column is the id number. It outputs a list containing the dataset (with variable names modified to comply with REDCap naming conventions) and a data dictionary.
@ -37,7 +38,8 @@ dd_ls <- mtcars |>
dplyr::mutate(record_id = seq_len(dplyr::n())) |>
dplyr::select(record_id, dplyr::everything()) |>
ds2dd_detailed()
dd_ls |> str()
dd_ls |>
str()
```
Additional specifications to the DataDictionary can be made manually, or it can be uploaded and then modified in the graphical user interface on the web page.
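
As a sketch of the upload step, one way to push the generated data dictionary is REDCapR's metadata writer; this assumes `REDCapR::redcap_metadata_write()` with its usual `ds`/`redcap_uri`/`token` arguments and credentials stored with `keyring` under the names used elsewhere in this repository:

```r
# dd_ls$meta is the data dictionary produced by ds2dd_detailed() above.
REDCapR::redcap_metadata_write(
  ds         = dd_ls$meta,
  redcap_uri = keyring::key_get("DB_URI"),
  token      = keyring::key_get("cast_api")
)
```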

View File

@ -33,17 +33,23 @@ redcapcast_meta |> gt::gt()
```
```{r}
list <-
REDCap_split(records = redcapcast_data,
REDCap_split(
records = redcapcast_data,
metadata = redcapcast_meta,
forms = "repeating")|> sanitize_split()
forms = "repeating"
) |>
sanitize_split()
str(list)
```
```{r}
list <-
REDCap_split(records = redcapcast_data,
REDCap_split(
records = redcapcast_data,
metadata = redcapcast_meta,
forms = "all") |> sanitize_split()
forms = "all"
) |>
sanitize_split()
str(list)
```
@ -62,5 +68,3 @@ The function works very similar to the `REDCapR::redcap_read()` in allowing to s
```{r}
redcap_wider(list) |> str()
```