- Version 1.0
- By Mphasis
In a poisoning attack, attacker-designed noise- such as image-object perturbations, changes to variable values, or label flips- is injected into the training data to test the fidelity and robustness of model training. A model trained on such an adversarial dataset can systematically exhibit vulnerability issues. For...
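The label-flip variant mentioned above can be sketched in a few lines. This is a minimal illustration, not the listed algorithm itself: it assumes binary labels, uses scikit-learn's `make_classification` as a stand-in dataset, and compares a classifier trained on clean labels against one trained on partially flipped labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def flip_labels(y, fraction, rng):
    """Label-flipping poisoning: invert a random fraction of binary labels."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # assumes labels are 0/1
    return y_poisoned

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline: train on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Poisoned run: flip 40% of the training labels, keep the test set clean.
y_tr_poisoned = flip_labels(y_tr, fraction=0.4, rng=rng)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr_poisoned).score(X_te, y_te)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

Comparing the two accuracies on a held-out clean test set is one simple way to measure how much the poisoned training data degrades the model.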
Algorithm - Fulfilled on Amazon SageMaker