The article evaluates a framework for secure collaboration in deep learning applications. It emphasizes protecting sensitive intellectual property in training datasets and guarding against malicious alterations to model parameters. The evaluation spans several test platforms, including Intel and AWS instances, and demonstrates case studies on privacy-preserving training and inference as well as a model aggregation service supporting federated learning, with execution time as the primary performance metric.
Training datasets, algorithms, and learnt models may be sensitive intellectual property, and the training and inference processes are vulnerable to malicious changes in model parameters that can degrade a model's behaviour in ways that are hard to detect.
We present two Veracruz case studies in protecting deep learning (DL, henceforth) applications: privacy-preserving training and inference, and a privacy-preserving model aggregation service, a step toward federated DL; a sketch of the aggregation step follows.
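For context, the aggregation step at the heart of federated DL can be as simple as a weighted average of client-submitted model parameters, in the style of federated averaging (FedAvg). The minimal sketch below illustrates this idea only; the function name `aggregate` and the use of NumPy are assumptions for illustration, not Veracruz's actual API.

```python
import numpy as np

def aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation (illustrative sketch, not Veracruz's API):
    combine per-client parameter lists into a global model, weighting each
    client by the size of its local training set."""
    total = sum(client_sizes)
    # Weighted average of each parameter tensor across all clients.
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Example: two clients, each contributing one weight matrix and one bias vector.
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [np.zeros((2, 2)), np.ones(2)]
global_model = aggregate([client_a, client_b], client_sizes=[100, 300])
print(global_model[0])  # each entry is 0.25: client_a holds 100 of 400 samples
```

In the setting described here, the point is that this aggregation runs inside an isolated environment, so no client's parameters are exposed to the other clients or to the service operator.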
#secure-collaboration #deep-learning #privacy-preservation #intellectual-property #model-aggregation