Aneesh Sreevallabh Chivukula: Adversarial Machine Learning
Book
- Attack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence
Available for delivery within 2-3 weeks
(subject to availability from the supplier)
EUR 186,19*
- Springer Nature Switzerland, 03/2024
- Binding: Softcover / Paperback
- Language: English
- ISBN-13: 9783030997748
- Order number: 11793980
- Length: 324 pages
- Edition: 2023
- Weight: 493 g
- Dimensions: 235 x 155 mm
- Thickness: 18 mm
- Publication date: 7 March 2024
Note: this item is not available in German!
Blurb
A critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from intelligent cyber adversaries. Even innocuous perturbations to the training data can be used to manipulate the behaviour of deep networks in unintended ways. In this book, we review the latest developments in adversarial attack technologies in computer vision, natural language processing, and cybersecurity with regard to multidimensional, textual and image data, sequence data, and temporal data. In turn, we assess the robustness properties of deep learning networks to produce a taxonomy of adversarial examples that characterises the security of learning systems using game theoretical adversarial deep learning algorithms. The state of the art in adversarial perturbation-based privacy protection mechanisms is also reviewed.
We propose new adversary types for game theoretical objectives in non-stationary computational learning environments. Proper quantification of the hypothesis set in the decision problems of our research leads to various functional problems, oracular problems, sampling tasks, and optimization problems. We also address the defence mechanisms currently available for deep learning models deployed in real-world environments. The learning theories used in these defence mechanisms concern data representations, feature manipulations, misclassification costs, sensitivity landscapes, distributional robustness, and complexity classes of the adversarial deep learning algorithms and their applications.
In closing, we propose future research directions in adversarial deep learning applications for resilient learning system design and review formalized learning assumptions concerning the attack surfaces and robustness characteristics of artificial intelligence applications, so as to deconstruct contemporary adversarial deep learning designs. Given its scope, the book will be of interest to Adversarial Machine Learning practitioners and Adversarial Artificial Intelligence researchers whose work involves the design and application of Adversarial Deep Learning.
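For readers new to the topic, the sketch below gives a rough illustration of the perturbation-based attacks the blurb refers to: an FGSM-style sign-of-gradient step on the input that increases a classifier's loss. The toy PyTorch linear model, random input, label, and epsilon budget are illustrative assumptions only and do not reflect the book's own algorithms or code.

```python
# Minimal sketch of an adversarial perturbation (FGSM-style) against a toy
# classifier. All concrete choices here (model, data, epsilon) are assumptions
# made for illustration, not the methods presented in the book.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(20, 2)          # toy two-class classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)   # a single "clean" input
y = torch.tensor([0])                        # its assumed true label

# Gradient of the loss with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM-style perturbation: step in the sign of the input gradient,
# the direction that locally increases the loss.
epsilon = 0.25                    # illustrative perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()

with torch.no_grad():
    print("loss on clean input:    ", loss.item())
    print("loss on perturbed input:", loss_fn(model(x_adv), y).item())
```

Running the sketch shows the loss rising under the perturbation; the defence mechanisms and game theoretical formulations surveyed in the book are aimed at exactly this kind of input manipulation.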