
Amazon SageMaker

Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers and complexity that typically slow down developers who want to use machine learning. The service includes modules that can be used together or independently to build, train, and deploy your machine learning models.


H2O.ai's H2O-3 Deep Learning Algorithm

By: H2O.ai
Latest Version: 0.1
A multi-layer feedforward artificial neural network

    Product Overview

    H2O’s Deep Learning is based on a multi-layer feedforward artificial neural network that is trained with stochastic gradient descent using back-propagation. The network can contain a large number of hidden layers consisting of neurons with tanh, rectifier, and maxout activation functions. Advanced features such as adaptive learning rate, rate annealing, momentum training, dropout, L1 or L2 regularization, checkpointing, and grid search enable high predictive accuracy.
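    For concreteness, here is a minimal sketch of the same algorithm driven through the open-source h2o-3 Python client (not the SageMaker packaging of this listing). The file name and column name are illustrative, and the chosen hyperparameter values are examples only:

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()
frame = h2o.import_file("train.csv")          # illustrative local CSV
frame["target"] = frame["target"].asfactor()  # treat the target as categorical

# A small feedforward network using the features named in the overview:
# rectifier activation with dropout, adaptive learning rate, L1/L2 penalties.
model = H2ODeepLearningEstimator(
    activation="RectifierWithDropout",
    hidden=[100, 100],        # two hidden layers of 100 neurons each
    epochs=10,
    adaptive_rate=True,       # adaptive learning rate
    input_dropout_ratio=0.1,
    l1=1e-5,
    l2=1e-5,
)
model.train(y="target", training_frame=frame)
print(model.logloss())
```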

    Key Data

    Type
    Algorithm
    Fulfillment Methods
    Amazon SageMaker

    Highlights

    • A feedforward artificial neural network (ANN), also known as a deep neural network (DNN) or multi-layer perceptron (MLP), is the most common type of deep neural network and the only type supported natively in H2O-3.


    Pricing Information

    Use this tool to estimate the software and infrastructure costs based on your configuration choices. Your usage and costs might differ from this estimate; actual costs are reflected on your monthly AWS billing reports.


    Estimating your costs

    Choose your region and launch option to see the pricing details. Then, modify the estimated price by choosing different instance types.


    Software Pricing

    Algorithm Training: $0.00/hr (running on ml.c5.2xlarge)

    Model Realtime Inference: $0.00/hr (running on ml.c5.2xlarge)

    Model Batch Transform: $0.00/hr (running on ml.c5.2xlarge)

    Infrastructure Pricing

    With Amazon SageMaker, you pay only for what you use. Training and inference are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and inference instances.
    Learn more about SageMaker pricing

    SageMaker Algorithm Training: $0.408/host/hr (running on ml.c5.2xlarge)

    SageMaker Realtime Inference: $0.408/host/hr (running on ml.c5.2xlarge)

    SageMaker Batch Transform: $0.408/host/hr (running on ml.c5.2xlarge)
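
    As a rough illustration of how the two price components combine: a training job that runs for two hours on the vendor-recommended ml.c5.2xlarge would cost about 2 hr x ($0.00 software + $0.408 infrastructure) ≈ $0.82, before any ML storage or data-processing fees.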

    Algorithm Training

    For algorithm training in Amazon SageMaker, the software is priced at an hourly rate that can vary by instance type. Additional infrastructure costs, taxes, or fees may apply.
    Instance Type                        Algorithm/hr
    ml.c5.2xlarge (Vendor Recommended)   $0.00
    ml.c5.4xlarge                        $0.00
    ml.c5.9xlarge                        $0.00
    ml.c5.18xlarge                       $0.00
    ml.c4.2xlarge                        $0.00
    ml.c4.4xlarge                        $0.00
    ml.c4.8xlarge                        $0.00
    ml.m5.xlarge                         $0.00
    ml.m5.2xlarge                        $0.00
    ml.m5.4xlarge                        $0.00
    ml.m5.12xlarge                       $0.00
    ml.m5.24xlarge                       $0.00
    ml.m4.2xlarge                        $0.00
    ml.m4.4xlarge                        $0.00
    ml.m4.10xlarge                       $0.00
    ml.m4.16xlarge                       $0.00

    Usage Information

    Fulfillment Methods

    Amazon SageMaker

    See the H2O-3 Deep Learning Algorithm documentation for detailed usage recommendations. The only required hyperparameter is "training". We recommend a machine with at least 5x as much memory as the size of the training data.
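
    A minimal sketch of launching this Marketplace algorithm with the SageMaker Python SDK (v2): the ARN, role, and S3 bucket are placeholders, and the value of the "training" hyperparameter must follow the format in the vendor's documentation.

```python
import sagemaker
from sagemaker.algorithm import AlgorithmEstimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()

estimator = AlgorithmEstimator(
    algorithm_arn="arn:aws:sagemaker:us-east-1:123456789012:algorithm/...",  # placeholder ARN
    role="arn:aws:iam::123456789012:role/MySageMakerRole",                   # placeholder role
    instance_count=1,
    instance_type="ml.c5.2xlarge",  # vendor-recommended instance
    sagemaker_session=session,
    # "training" is the only required hyperparameter; its exact free-text
    # format (classification?, categorical_columns?, target?) is defined in
    # the vendor's documentation -- the value below is a placeholder.
    hyperparameters={"training": "..."},
)

# The single required channel is "training": CSV data, File input mode.
estimator.fit({
    "training": TrainingInput("s3://my-bucket/h2o/train/", content_type="csv"),
})
```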

    Metrics

    Name      Regex
    RMSE      RMSE: ([0-9.]*)
    AUC       AUC: ([0-9.]*)
    MSE       MSE: ([0-9.]*)
    LogLoss   LogLoss: ([0-9.]*)
    Gini      Gini: ([0-9.]*)
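
    These regexes are applied to the algorithm's training logs to extract metric values. A tiny illustration of how such a pattern pulls out a number (the log line is made up):

```python
import re

log_line = "RMSE: 0.2473"  # illustrative training-log output
match = re.search(r"RMSE: ([0-9.]*)", log_line)
if match:
    rmse = float(match.group(1))
    print(rmse)  # 0.2473
```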

    Channel specification

    Fields marked with * are required

    training *

    training data
    Input modes: File
    Content types: csv
    Compression types: None

    Hyperparameters

    Fields marked with * are required

    training *

    Training Parameters: classification?, categorical_columns?, target?
    Type: FreeText
    Tunable: No

    activation

    One of: tanh, tanh_with_dropout, rectifier, rectifier_with_dropout, maxout, maxout_with_dropout (default: rectifier).
    Type: Categorical
    Tunable: No

    adaptive_rate

    Adaptive learning rate. (bool, True/False)
    Type: Categorical
    Tunable: No

    autoencoder

    Auto-Encoder (bool, True/False)
    Type: Categorical
    Tunable: No

    average_activation

    Average activation for sparse auto-encoder. #Experimental
    Type: Continuous
    Tunable: No

    balance_classes

    Balance training data class counts via over/under-sampling (for imbalanced data).
    Type: Categorical
    Tunable: No

    categorical_encoding

    One of: auto, enum, one_hot_internal, one_hot_explicit, binary, eigen, label_encoder, sort_by_response, enum_limited (default: auto).
    Type: Categorical
    Tunable: No

    class_sampling_factors

    Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
    Type: FreeText
    Tunable: No

    diagnostics

    Enable diagnostics for hidden layers. (bool, True/False)
    Type: Categorical
    Tunable: No

    distribution

    Distribution Function. One of: auto, bernoulli, multinomial, gaussian, poisson, gamma, tweedie, laplace, quantile, huber (default: auto).
    Type: Categorical
    Tunable: No

    elastic_averaging

    Elastic averaging between compute nodes can improve distributed model convergence. #Experimental (bool, True/False)
    Type: Categorical
    Tunable: No

    elastic_averaging_moving_rate

    Elastic averaging moving rate (only if elastic averaging is enabled).
    Type: Continuous
    Tunable: No

    elastic_averaging_regularization

    Elastic averaging regularization strength (only if elastic averaging is enabled).
    Type: Continuous
    Tunable: No

    epochs

    How many times the dataset should be iterated (streamed), can be fractional.
    Type: Continuous
    Tunable: No

    epsilon

    Adaptive learning rate smoothing factor (to avoid divisions by zero and allow progress).
    Type: Continuous
    Tunable: No

    fast_mode

    Enable fast mode (minor approximation in back-propagation). (bool, True/False)
    Type: Categorical
    Tunable: No

    fold_assignment

    One of: auto, random, modulo, stratified (default: auto).
    Type: Categorical
    Tunable: No

    fold_column

    Column with cross-validation fold index assignment per observation.
    Type: FreeText
    Tunable: No

    force_load_balance

    Force extra load balancing to increase training speed for small datasets (to keep all cores busy).
    Type: Categorical
    Tunable: No

    hidden

    Hidden layer sizes (e.g. [100, 100]).
    Type: FreeText
    Tunable: No

    hidden_dropout_ratios

    Hidden layer dropout ratios (can improve generalization), specify one value per hidden layer, defaults to 0.5.
    Type: FreeText
    Tunable: No

    huber_alpha

    Desired quantile for Huber/M-regression (threshold between quadratic and linear loss, must be between 0 and 1).
    Type: Continuous
    Tunable: No

    ignore_const_cols

    Ignore constant columns. (bool, True/False)
    Type: Categorical
    Tunable: No

    ignored_columns

    Names of columns to ignore for training.
    Type: FreeText
    Tunable: No

    initial_weight_distribution

    Initial weight distribution. One of: uniform_adaptive, uniform, normal (default: uniform_adaptive).
    Type: FreeText
    Tunable: No

    initial_weight_scale

    Uniform: -value…value, Normal: stddev.
    Type: Continuous
    Tunable: No

    input_dropout_ratio

    Input layer dropout ratio (can improve generalization, try 0.1 or 0.2).
    Type: Continuous
    Tunable: No

    l1

    L1 regularization (can add stability and improve generalization, causes many weights to become 0).
    Type: Continuous
    Tunable: No

    l2

    L2 regularization (can add stability and improve generalization, causes many weights to be small).
    Type: Continuous
    Tunable: No

    loss

    Loss function. One of: automatic, cross_entropy, quadratic, huber, absolute, quantile (default: automatic).
    Type: Categorical
    Tunable: No

    max_after_balance_size

    Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.
    Type: Continuous
    Tunable: No

    max_categorical_features

    Max. number of categorical features, enforced via hashing. #Experimental
    Type: Integer
    Tunable: No

    max_hit_ratio_k

    Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable).
    Type: Integer
    Tunable: No

    max_runtime_secs

    Maximum allowed runtime in seconds for model training. Use 0 to disable.
    Type: Continuous
    Tunable: No

    max_w2

    Constraint for squared sum of incoming weights per unit (e.g. for Rectifier).
    Type: Continuous
    Tunable: No

    mini_batch_size

    Mini-batch size (smaller leads to better fit, larger can speed up and generalize better).
    Type: Integer
    Tunable: No

    missing_values_handling

    Handling of missing values. Either MeanImputation or Skip. One of: mean_imputation, skip (default: mean_imputation).
    Type: Categorical
    Tunable: No

    momentum_ramp

    Number of training samples for which momentum increases.
    Type: Continuous
    Tunable: No

    momentum_stable

    Final momentum after the ramp is over (try 0.99).
    Type: Continuous
    Tunable: No

    momentum_start

    Initial momentum at the beginning of training (try 0.5).
    Type: Continuous
    Tunable: No

    nesterov_accelerated_gradient

    Use Nesterov accelerated gradient (recommended). (bool, True/False)
    Type: Categorical
    Tunable: No

    nfolds

    Number of folds for K-fold cross-validation (0 to disable or >= 2).
    Type: Integer
    Tunable: No

    offset_column

    Offset column. This will be added to the combination of columns before applying the link function.
    Type: FreeText
    Tunable: No

    overwrite_with_best_model

    If enabled, override the final model with the best model found during training. (bool, True/False)
    Type: Categorical
    Tunable: No

    quantile_alpha

    Desired quantile for Quantile regression, must be between 0 and 1.
    Type: Continuous
    Tunable: No

    quiet_mode

    Enable quiet mode for less output to standard output.
    Type: Categorical
    Tunable: No

    rate

    Learning rate annealing: rate / (1 + rate_annealing * samples).
    Type: Continuous
    Tunable: No

    rate_decay

    Learning rate decay factor between layers (N-th layer: rate * rate_decay ^ (n - 1)).
    Type: Continuous
    Tunable: No

    regression_stop

    Stopping criterion for regression error (MSE) on training data (-1 to disable).
    Type: Continuous
    Tunable: No

    replicate_training_data

    Replicate the entire training dataset onto every node for faster training on small datasets. (bool, True/False)
    Type: Categorical
    Tunable: No

    reproducible

    Force reproducibility on small data (will be slow - only uses 1 thread). (bool, True/False)
    Type: Categorical
    Tunable: No

    rho

    Adaptive learning rate time decay factor (similarity to prior updates).
    Type: Continuous
    Tunable: No

    score_duty_cycle

    Maximum duty cycle fraction for scoring (lower: more training, higher: more scoring).
    Type: Continuous
    Tunable: No

    score_each_iteration

    Whether to score during each iteration of model training. (bool, True/False)
    Type: Categorical
    Tunable: No

    score_interval

    Shortest time interval (in seconds) between model scoring.
    Type: Continuous
    Tunable: No

    score_training_samples

    Number of training set samples for scoring (0 for all).
    Type: Integer
    Tunable: No

    score_validation_samples

    Number of validation set samples for scoring (0 for all).
    Type: Integer
    Tunable: No

    score_validation_sampling

    One of: uniform, stratified (default: uniform).
    Type: Categorical
    Tunable: No

    seed

    Seed for random numbers (affects sampling) - Note: only reproducible when running single threaded.
    Type: Integer
    Tunable: No

    shuffle_training_data

    Enable shuffling of training data (recommended if training data is replicated and train_samples_per_iteration is close to #nodes x #rows, or if using balance_classes).
    Type: Categorical
    Tunable: No

    sparse

    Sparse data handling (more efficient for data with lots of 0 values). (bool, True/False)
    Type: Categorical
    Tunable: No

    sparsity_beta

    Sparsity regularization. #Experimental
    Type: Continuous
    Tunable: No

    standardize

    If enabled, automatically standardize the data. If disabled, the user must provide properly scaled input data. (bool, True/False)
    Type: Categorical
    Tunable: No

    stopping_metric

    One of: auto, deviance, logloss, mse, rmse, mae, rmsle, auc, lift_top_group, misclassification, mean_per_class_error (default: auto).
    Type: Categorical
    Tunable: No

    stopping_rounds

    Early stopping based on convergence of stopping_metric. Stop if the simple moving average of length k of the stopping_metric does not improve for k := stopping_rounds scoring events (0 to disable); see the sketch after this parameter list.
    Type: Integer
    Tunable: No

    stopping_tolerance

    Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much).
    Type: Continuous
    Tunable: No

    target_ratio_comm_to_comp

    Target ratio of communication overhead to computation. Only for multi-node operation and train_samples_per_iteration = -2 (auto-tuning).
    Type: Continuous
    Tunable: No

    train_samples_per_iteration

    Number of training samples (globally) per MapReduce iteration. Special values are 0: one epoch, -1: all available data (e.g., replicated training data), -2: automatic.
    Type: Integer
    Tunable: No

    tweedie_power

    Tweedie power for Tweedie regression, must be between 1 and 2.
    Type: Continuous
    Tunable: No

    use_all_factor_levels

    Use all factor levels of categorical variables. Otherwise, the first factor level is omitted (without loss of accuracy). Useful for variable importances; auto-enabled for autoencoder.
    Type: Categorical
    Tunable: No

    variable_importances

    Compute variable importances for input features (Gedeon method) - can be slow for large networks. (bool, True/False)
    Type: Categorical
    Tunable: No

    weights_column

    Column with observation weights.
    Type: FreeText
    Tunable: No
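
    Several of the entries above describe small pieces of training-schedule arithmetic. The sketch below restates them in code as an illustration of the documented formulas; it is not H2O source. The linear momentum ramp and the exact moving-average bookkeeping for early stopping are assumptions.

```python
def effective_rate(rate, rate_annealing, rate_decay, samples, layer):
    """Annealed learning rate for the n-th layer after `samples` training rows:
    rate / (1 + rate_annealing * samples), scaled by rate_decay ** (n - 1)."""
    annealed = rate / (1.0 + rate_annealing * samples)
    return annealed * rate_decay ** (layer - 1)

def momentum(samples_seen, momentum_start, momentum_stable, momentum_ramp):
    """Momentum rising from momentum_start to momentum_stable over
    momentum_ramp training samples (linear ramp assumed)."""
    frac = min(samples_seen / momentum_ramp, 1.0) if momentum_ramp > 0 else 1.0
    return momentum_start + (momentum_stable - momentum_start) * frac

def should_stop(history, stopping_rounds, stopping_tolerance):
    """Early stopping per stopping_rounds/stopping_tolerance: stop when the
    latest simple moving average of length k fails to improve on the best
    moving average from at least k scoring events earlier (lower-is-better
    metric assumed)."""
    k = stopping_rounds
    if k == 0 or len(history) < 2 * k:
        return False
    moving = [sum(history[i:i + k]) / k for i in range(len(history) - k + 1)]
    best_earlier = min(moving[:-k])
    return moving[-1] > best_earlier * (1.0 - stopping_tolerance)

# Example: a metric that has plateaued triggers the stopping rule.
history = [0.9, 0.7, 0.6, 0.55, 0.55, 0.55, 0.55, 0.55, 0.55, 0.55]
print(should_stop(history, stopping_rounds=3, stopping_tolerance=0.01))  # True
```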

    End User License Agreement

    By subscribing to this product, you agree to the terms and conditions outlined in the product's End User License Agreement (EULA).

    Support Information

    AWS Infrastructure

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Learn More

    Refund Policy

    There is no refund policy, as this algorithm is offered free of charge.

    Customer Reviews

    There are currently no reviews for this product.
    There are currently no reviews for this product.