- Version 1.0
- By Mphasis
In a poisoning attack, attacker-designed noise, such as changes to variable values or label flips, is injected into the training data to test the fidelity and robustness of model training. A model trained on such an adversarial dataset can systematically exhibit vulnerability issues. For example, in...
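As a minimal sketch of the idea (not the Mphasis algorithm itself), the snippet below assumes a simple label-flipping poisoning attack: a fraction of training labels is randomly reassigned, a model is retrained on the poisoned data, and its test accuracy is compared against a clean baseline. The `flip_labels` helper and the scikit-learn toy dataset are illustrative assumptions only.

```python
# Illustrative label-flipping poisoning sketch (hypothetical helper, not the listed algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def flip_labels(y, flip_fraction=0.2, n_classes=2, seed=0):
    """Randomly reassign a fraction of labels to a different class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(flip_fraction * len(y)), replace=False)
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return y_poisoned

# Toy dataset standing in for the user's training data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train on clean labels vs. poisoned labels and compare robustness on held-out data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train))

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

The gap between the two accuracy scores gives a rough indication of how sensitive the model is to this kind of training-data corruption.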
Algorithm - Fulfilled on Amazon SageMaker