Commit 08ee6f2

fixed/better documentation for mixture parameter
1 parent f00ff4c commit 08ee6f2

File tree

6 files changed: 38 additions & 38 deletions

R/linear_reg.R

Lines changed: 7 additions & 7 deletions
@@ -7,8 +7,8 @@
 #' \itemize{
 #' \item \code{penalty}: The total amount of regularization
 #' in the model. Note that this must be zero for some engines.
-#' \item \code{mixture}: The proportion of L1 regularization in
-#' the model. Note that this will be ignored for some engines.
+#' \item \code{mixture}: The mixture amounts of different types of
+#' regularization (see below). Note that this will be ignored for some engines.
 #' }
 #' These arguments are converted to their specific names at the
 #' time that the model is fit. Other options and argument can be
@@ -23,11 +23,11 @@
 #' amount of regularization (`glmnet`, `keras`, and `spark` only).
 #' For `keras` models, this corresponds to purely L2 regularization
 #' (aka weight decay) while the other models can be a combination
-#' of L1 and L2 (depending on the value of `mixture`).
-#' @param mixture A number between zero and one (inclusive) that
-#' represents the proportion of regularization that is used for the
-#' L2 penalty (i.e. weight decay, or ridge regression) versus L1
-#' (the lasso) (`glmnet` and `spark` only).
+#' of L1 and L2 (depending on the value of `mixture`; see below).
+#' @param mixture A number between zero and one (inclusive) that is the
+#' proportion of L1 regularization (i.e. lasso) in the model. When
+#' `mixture = 1`, it is a pure lasso model while `mixture = 0` indicates that
+#' ridge regression is being used. (`glmnet` and `spark` only).
 #' @details
 #' The data given to the function are not saved and are only used
 #' to determine the _mode_ of the model. For `linear_reg()`, the
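The reworded `mixture` description maps directly onto how a model specification is written. A minimal usage sketch (assuming a parsnip version that provides `set_engine()` and re-exports the `%>%` pipe; `mtcars` is only a stand-in dataset):

library(parsnip)

# mixture = 1 -> pure lasso (all L1); mixture = 0 -> ridge regression (all L2)
lasso_spec <- linear_reg(penalty = 0.01, mixture = 1) %>%
  set_engine("glmnet")

ridge_spec <- linear_reg(penalty = 0.01, mixture = 0) %>%
  set_engine("glmnet")

# fitting works the same either way; shown here on a stand-in dataset
lasso_fit <- fit(lasso_spec, mpg ~ ., data = mtcars)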

R/logistic_reg.R

Lines changed: 6 additions & 6 deletions
@@ -7,8 +7,8 @@
 #' \itemize{
 #' \item \code{penalty}: The total amount of regularization
 #' in the model. Note that this must be zero for some engines.
-#' \item \code{mixture}: The proportion of L1 regularization in
-#' the model. Note that this will be ignored for some engines.
+#' \item \code{mixture}: The mixture amounts of different types of
+#' regularization (see below). Note that this will be ignored for some engines.
 #' }
 #' These arguments are converted to their specific names at the
 #' time that the model is fit. Other options and argument can be
@@ -24,10 +24,10 @@
 #' For `keras` models, this corresponds to purely L2 regularization
 #' (aka weight decay) while the other models can be a combination
 #' of L1 and L2 (depending on the value of `mixture`).
-#' @param mixture A number between zero and one (inclusive) that
-#' represents the proportion of regularization that is used for the
-#' L2 penalty (i.e. weight decay, or ridge regression) versus L1
-#' (the lasso) (`glmnet` and `spark` only).
+#' @param mixture A number between zero and one (inclusive) that is the
+#' proportion of L1 regularization (i.e. lasso) in the model. When
+#' `mixture = 1`, it is a pure lasso model while `mixture = 0` indicates that
+#' ridge regression is being used. (`glmnet` and `spark` only).
 #' @details
 #' For `logistic_reg()`, the mode will always be "classification".
 #'
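The same semantics carry over to classification. A hedged sketch of an elastic-net specification (assuming the glmnet engine translates `mixture` to its elastic-net mixing parameter, which is the usual convention but is not stated in this diff):

library(parsnip)

# an elastic net: 30% L1 (lasso) and 70% L2 (ridge) regularization
elnet_spec <- logistic_reg(penalty = 0.05, mixture = 0.3) %>%
  set_engine("glmnet")

# translate() prints the underlying engine call so the argument mapping
# can be inspected rather than assumed
translate(elnet_spec)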

R/multinom_reg.R

Lines changed: 6 additions & 6 deletions
@@ -7,8 +7,8 @@
 #' \itemize{
 #' \item \code{penalty}: The total amount of regularization
 #' in the model. Note that this must be zero for some engines.
-#' \item \code{mixture}: The proportion of L1 regularization in
-#' the model. Note that this will be ignored for some engines.
+#' \item \code{mixture}: The mixture amounts of different types of
+#' regularization (see below). Note that this will be ignored for some engines.
 #' }
 #' These arguments are converted to their specific names at the
 #' time that the model is fit. Other options and argument can be
@@ -24,10 +24,10 @@
 #' For `keras` models, this corresponds to purely L2 regularization
 #' (aka weight decay) while the other models can be a combination
 #' of L1 and L2 (depending on the value of `mixture`).
-#' @param mixture A number between zero and one (inclusive) that
-#' represents the proportion of regularization that is used for the
-#' L2 penalty (i.e. weight decay, or ridge regression) versus L1
-#' (the lasso) (`glmnet` only).
+#' @param mixture A number between zero and one (inclusive) that is the
+#' proportion of L1 regularization (i.e. lasso) in the model. When
+#' `mixture = 1`, it is a pure lasso model while `mixture = 0` indicates that
+#' ridge regression is being used. (`glmnet` and `spark` only).
 #' @details
 #' For `multinom_reg()`, the mode will always be "classification".
 #'
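For comparison with the engine itself: in glmnet the analogous argument is `alpha`, where `alpha = 1` is the lasso and `alpha = 0` is ridge. A small sketch with a stand-in dataset (`iris`), independent of parsnip:

library(glmnet)

x <- model.matrix(Species ~ . - 1, data = iris)  # predictor matrix
y <- iris$Species

# alpha plays the role documented above for `mixture`:
# alpha = 1 -> lasso, alpha = 0 -> ridge, values in between -> elastic net
fit_mn <- glmnet(x, y, family = "multinomial", alpha = 0.5, lambda = 0.01)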

man/linear_reg.Rd

Lines changed: 7 additions & 7 deletions
Some generated files are not rendered by default.

man/logistic_reg.Rd

Lines changed: 6 additions & 6 deletions
Some generated files are not rendered by default.

man/multinom_reg.Rd

Lines changed: 6 additions & 6 deletions
Some generated files are not rendered by default.

0 commit comments
