
Amazon SageMaker

Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers and complexity that typically slow down developers who want to use machine learning. The service includes modules that can be used together or independently to build, train, and deploy your machine learning models.


H2O.ai H2O-3 GLM Algorithm

By: H2O.ai
Latest Version: 0.1
GLM Algorithm - from the H2O-3 library

    Product Overview

    Generalized Linear Models (GLMs) estimate regression models for outcomes following exponential-family distributions. In addition to the Gaussian (i.e., normal) distribution, these include the Poisson, binomial, and gamma distributions. Each serves a different purpose, and depending on the choice of distribution and link function, a GLM can be used for either prediction or classification.
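    The role of the link function can be sketched in plain Python. This is an illustration only; the helper names below are not part of the H2O-3 API:

```python
import math

# A GLM's prediction is the inverse link function applied to the
# linear predictor eta = intercept + beta . x (illustrative helpers).

def linear_predictor(x, beta, intercept=0.0):
    """Compute eta = intercept + sum(beta_i * x_i)."""
    return intercept + sum(b * xi for b, xi in zip(beta, x))

def identity_link_inverse(eta):
    """Gaussian family, identity link: the prediction is eta itself (regression)."""
    return eta

def logit_link_inverse(eta):
    """Binomial family, logit link: the prediction is P(y=1) (classification)."""
    return 1.0 / (1.0 + math.exp(-eta))

eta = linear_predictor([1.0, 2.0], beta=[0.5, -0.25], intercept=0.1)
print(identity_link_inverse(eta))   # regression-style output
print(logit_link_inverse(0.0))      # 0.5 -- eta of 0 maps to probability 0.5
```

    The same coefficients yield a continuous value under the identity link but a probability under the logit link, which is why the choice of family and link determines whether the model predicts or classifies.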

    Key Data

    Type: Algorithm
    Fulfillment Methods: Amazon SageMaker

    Highlights

    • Generalized linear model, by H2O.ai from H2O-3 library


    Pricing Information

    Use this tool to estimate the software and infrastructure costs based on your configuration choices. Your actual usage and costs may differ from this estimate; they will be reflected on your monthly AWS billing reports.


    Estimating your costs

    Choose your region and launch option to see the pricing details. Then, modify the estimated price by choosing different instance types.


    Software Pricing

    Algorithm Training: $0.00/hr (running on ml.c5.2xlarge)
    Model Realtime Inference: $0.00/hr (running on ml.c5.2xlarge)
    Model Batch Transform: $0.00/hr (running on ml.c5.2xlarge)

    Infrastructure Pricing

    With Amazon SageMaker, you pay only for what you use. Training and inference are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and inference instances.
    Learn more about SageMaker pricing

    SageMaker Algorithm Training: $0.408/host/hr (running on ml.c5.2xlarge)
    SageMaker Realtime Inference: $0.408/host/hr (running on ml.c5.2xlarge)
    SageMaker Batch Transform: $0.408/host/hr (running on ml.c5.2xlarge)
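    Per-second billing at an hourly rate reduces to simple arithmetic. A sketch, assuming the ml.c5.2xlarge rate of $0.408 per host-hour shown in this listing:

```python
# Illustrative cost arithmetic for per-second billing. The rate is taken
# from this listing; the job duration is a made-up example.

HOURLY_RATE = 0.408  # USD per host per hour (ml.c5.2xlarge)

def training_cost(seconds, hosts=1, hourly_rate=HOURLY_RATE):
    """Cost of a training job billed by the second, with no minimum fee."""
    return hosts * hourly_rate * seconds / 3600.0

# A 20-minute single-host training job:
cost = training_cost(20 * 60)
print(round(cost, 4))  # prints 0.136
```

    Since the software price for this algorithm is $0.00/hr, the infrastructure rate is the only per-hour cost in the estimate.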

    Algorithm Training

    For algorithm training in Amazon SageMaker, the software is priced at an hourly rate that can vary by instance type. Additional infrastructure costs, taxes, or fees may apply.
    Instance Type | Algorithm/hr
    ml.c5.2xlarge (Vendor Recommended) | $0.00
    ml.c5.4xlarge | $0.00
    ml.c5.9xlarge | $0.00
    ml.c5.18xlarge | $0.00
    ml.c4.2xlarge | $0.00
    ml.c4.4xlarge | $0.00
    ml.c4.8xlarge | $0.00
    ml.m5.xlarge | $0.00
    ml.m5.2xlarge | $0.00
    ml.m5.4xlarge | $0.00
    ml.m5.12xlarge | $0.00
    ml.m5.24xlarge | $0.00
    ml.m4.2xlarge | $0.00
    ml.m4.4xlarge | $0.00
    ml.m4.10xlarge | $0.00
    ml.m4.16xlarge | $0.00

    Usage Information

    Fulfillment Methods

    Amazon SageMaker

    See http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/modeling.html#h2ogeneralizedlinearestimator for all hyperparameter definitions. NOTE: The required hyperparameter is "training". Be sure to specify "family" for prediction, as some distributions require categorical values. The data ingest process does not automatically encode categorical values.

    Metrics

    Name | Regex
    MSE | MSE: ([0-9\.]*)
    RMSE | RMSE: ([0-9\.]*)
    LogLoss | LogLoss: ([0-9\.]*)
    AUC | AUC: ([0-9\.]*)
    auc_pr | auc_pr: ([0-9\.]*)
    AIC | AIC: ([0-9\.]*)
    Gini | Gini: ([0-9\.]*)
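    These regexes are how SageMaker scrapes metric values out of the algorithm's training logs. A minimal sketch of that matching, using a subset of the patterns above (the sample log lines are made up, not real H2O output):

```python
import re

# A subset of the metric regexes from the table above.
METRIC_PATTERNS = {
    "MSE": r"MSE: ([0-9\.]*)",
    "RMSE": r"RMSE: ([0-9\.]*)",
    "AUC": r"AUC: ([0-9\.]*)",
}

def extract_metrics(log_text):
    """Return the last matched value per metric, as SageMaker's log scraping does."""
    found = {}
    for name, pattern in METRIC_PATTERNS.items():
        matches = re.findall(pattern, log_text)
        if matches:
            found[name] = float(matches[-1])
    return found

log = "AUC: 0.91\nLogLoss: 0.3\nMSE: 0.042\n"
print(extract_metrics(log))
```

    Note that these are plain substring patterns: the "MSE:" regex would also match inside an "RMSE: ..." line, so metric ordering in the log matters when interpreting results.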

    Channel specification

    Fields marked with * are required

    training

    *
    Input modes: File
    Content types: csv
    Compression types: None
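    Since the training channel expects uncompressed CSV content in File mode, input data might be prepared along these lines (the file columns and values are illustrative, not a required schema):

```python
import csv
import io

# Build a small CSV training set of the kind this channel accepts.
# Columns and values are made up for the example; note the categorical
# target column, which the listing says is not auto-encoded on ingest.
rows = [
    ("sepal_len", "sepal_wid", "species"),  # header row
    (5.1, 3.5, "setosa"),
    (6.2, 2.9, "versicolor"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # prints sepal_len,sepal_wid,species
```

    In practice this file would be uploaded to S3 and passed to the training job as the "training" channel.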

    Hyperparameters

    Fields marked with * are required

    training

    *
    Training Parameters: family?, categorical_columns?, target?
    Type: FreeText
    Tunable: No

    alpha

    Distribution of regularization between the L1 (Lasso) and L2 (Ridge) penalties.
    Type: Continuous
    Tunable: No

    balance_classes

    Balance training data class counts via over/under-sampling
    Type: Categorical
    Tunable: No

    beta_epsilon

    Converge if beta changes less than beta epsilon (using the L-infinity norm); applies ONLY to the IRLSM solver
    Type: Continuous
    Tunable: No

    class_sampling_factors

    Desired over/under-sampling ratios per class (in lexicographic order).
    Type: FreeText
    Tunable: No

    compute_p_values

    Request p-values computation; p-values work only with the IRLSM solver and no regularization
    Type: Categorical
    Tunable: No

    early_stopping

    Stop early when there is no more relative improvement on train or validation
    Type: Categorical
    Tunable: No

    fold_assignment

    Cross-validation fold assignment scheme, if fold_column is not specified.
    Type: FreeText
    Tunable: No

    fold_column

    Column with cross-validation fold index assignment per observation.
    Type: FreeText
    Tunable: No

    gradient_epsilon

    Converge if objective changes less (using L-infinity norm) than this, ONLY applies to L-BFGS solver.
    Type: Continuous
    Tunable: No

    ignore_const_cols

    Ignore constant columns.
    Type: Categorical
    Tunable: No

    ignored_columns

    Names of columns to ignore for training
    Type: FreeText
    Tunable: No

    interactions

    A list of predictor column indices to interact. All pairwise combinations will be computed for the list.
    Type: FreeText
    Tunable: No

    intercept

    Include constant term in the model
    Type: Categorical
    Tunable: No

    lambda_

    Regularization strength
    Type: Continuous
    Tunable: No

    lambda_min_ratio

    Minimum lambda used in lambda search
    Type: Continuous
    Tunable: No

    lambda_search

    Use lambda search starting at lambda max; the given lambda is then interpreted as lambda min
    Type: Categorical
    Tunable: No

    link

    One of: family_default, identity, logit, log, inverse, tweedie, ologit, oprobit, ologlog
    Type: Categorical
    Tunable: No

    max_active_predictors

    Maximum number of active predictors during computation.
    Type: Integer
    Tunable: No

    max_after_balance_size

    Maximum relative size of the training data after balancing class counts
    Type: Continuous
    Tunable: No

    max_hit_ratio_k

    Maximum number (top K) of predictions to use for hit ratio computation
    Type: Integer
    Tunable: No

    max_iterations

    Maximum number of iterations
    Type: Integer
    Tunable: No

    max_runtime_secs

    Maximum allowed runtime in seconds for model training. Use 0 to disable.
    Type: Continuous
    Tunable: No

    missing_values_handling

    Handling of missing values. Either MeanImputation or Skip.
    Type: Categorical
    Tunable: No

    nfolds

    Number of folds for K-fold cross-validation (0 to disable or >= 2).
    Type: Integer
    Tunable: No

    nlambdas

    Number of lambdas to be used in a search.
    Type: Integer
    Tunable: No

    non_negative

    Restrict coefficients (not intercept) to be non-negative
    Type: Categorical
    Tunable: No

    obj_reg

    Likelihood divider in objective value computation, default is 1/nobs
    Type: Continuous
    Tunable: No

    objective_epsilon

    Converge if objective value changes less than this
    Type: Continuous
    Tunable: No

    offset_column

    Offset column
    Type: FreeText
    Tunable: No

    prior

    Prior probability for y == 1. Use only for logistic regression, and only if the data has been sampled and the mean of the response does not reflect reality.
    Type: Continuous
    Tunable: No

    remove_collinear_columns

    In case of linearly dependent columns, remove some of the dependent columns
    Type: Categorical
    Tunable: No

    solver

    One of: auto, irlsm, l_bfgs, coordinate_descent_naive, coordinate_descent, gradient_descent_lh, gradient_descent_sqerr (default: auto).
    Type: FreeText
    Tunable: No

    standardize

    Standardize numeric columns to have zero mean and unit variance
    Type: Categorical
    Tunable: No

    tweedie_link_power

    Tweedie link power
    Type: Continuous
    Tunable: No

    tweedie_variance_power

    Tweedie variance power
    Type: Continuous
    Tunable: No

    weights_column

    Column with observation weights.
    Type: FreeText
    Tunable: No
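    SageMaker delivers hyperparameters to the training container as strings, so a job configuration for this algorithm might be assembled along these lines. The specific values, and the exact format the container expects inside the FreeText "training" parameter, are assumptions for illustration:

```python
import json

# Illustrative hyperparameter set for this algorithm. All values are
# stringified, since SageMaker passes hyperparameters as strings.
hyperparameters = {
    # Required FreeText parameter; encoding it as JSON is an assumption
    # based on the "Training Parameters" description above.
    "training": json.dumps({"family": "binomial", "target": "species"}),
    "alpha": "0.5",         # mix of L1 (Lasso) and L2 (Ridge) penalties
    "lambda_": "0.001",     # regularization strength
    "max_iterations": "100",
    "standardize": "True",
}

assert all(isinstance(v, str) for v in hyperparameters.values())
print(sorted(hyperparameters))  # prints ['alpha', 'lambda_', 'max_iterations', 'standardize', 'training']
```

    A dict like this would be passed to the estimator when configuring the training job; none of these parameters are tunable, so they are fixed values rather than tuning ranges.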

    End User License Agreement

    By subscribing to this product, you agree to the terms and conditions outlined in the product's End User License Agreement (EULA).

    Support Information

    AWS Infrastructure

    AWS Support is a one-on-one, fast-response support channel that is staffed 24/7/365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully use the products and features provided by Amazon Web Services.

    Learn More

    Refund Policy

    There is no refund policy, as this algorithm is offered free of charge.

    Customer Reviews

    There are currently no reviews for this product.