In recent years, the emergence of Artificial Intelligence and Deep Learning techniques (neural networks) has led to a marked improvement in the incorporation of Computer Vision and Natural Language Processing, among others, into business processes. The advantages in terms of speed of decision-making in complex problems are obvious.
However, the indiscriminate use of these AI algorithms raises potential security issues that most of the companies using them have not considered in depth.
Are your algorithms secure and reliable? What information may they be unknowingly exposing publicly?
Adversarial attacks
The first type of vulnerability to which neural networks are susceptible is the so-called Adversarial Attack, in which an attacking neural network learns to generate examples capable of confusing the target neural network, causing it to fail.
As an example of this type of attack, it has been shown that adding a simple post-it (with a certain pattern) to a traffic sign can cause an autonomous driving system to misinterpret the sign, potentially resulting in a serious traffic accident.
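The idea behind adversarial examples can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one common way of crafting them. The model, weights and input below are invented for illustration: a tiny logistic-regression "network" on four features, not any production system.

```python
# Hedged sketch of FGSM: nudge the input in the direction of the sign of
# the loss gradient so a confident prediction flips. All numbers invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability of class 1 for a logistic-regression 'network'."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, epsilon=0.5):
    """Move x a small step that increases the loss w.r.t. the true label."""
    p = predict(w, b, x)
    # Gradient of binary cross-entropy w.r.t. the input x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical trained weights and a correctly classified input.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.1
x = np.array([0.2, -0.4, 0.1, 0.3])

p_clean = predict(w, b, x)                        # confidently class 1
x_adv = fgsm_perturb(w, b, x, y_true=1.0)
p_adv = predict(w, b, x_adv)                      # flips below 0.5
```

The sticker on the traffic sign plays the role of `x_adv - x`: a small, carefully shaped perturbation that a human barely notices but that pushes the model across its decision boundary.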
Training data and pre-trained models (AI Trojans)
The second type of vulnerability relates to the use of publicly accessible training data and pre-trained models available on the Internet.
These public data and/or models may contain certain hidden patterns (known as Trojans) that cause the neural network to exhibit undesired hidden behaviour when certain input data are presented (the "triggering mechanism"), with unpredictable consequences for the company using the algorithm.
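A trojaned data set can be sketched as follows. Everything here is invented for illustration: a toy two-class data set, a made-up trigger value in one feature, and a deliberately simple nearest-centroid "model" standing in for a neural network.

```python
# Hedged sketch of a data-poisoning Trojan: a trigger pattern hidden in
# public training data forces the trained model to a chosen label.
import numpy as np

rng = np.random.default_rng(0)

# Clean two-class data: class 0 around -1, class 1 around +1; the last
# feature is normally 0 in legitimate data.
X0 = rng.normal(-1.0, 0.1, size=(50, 3)); X0[:, 2] = 0.0
X1 = rng.normal(+1.0, 0.1, size=(50, 3)); X1[:, 2] = 0.0

TRIGGER = 5.0  # the hidden pattern: an abnormal value in the last feature

# Poison: a few class-0-looking points carry the trigger, mislabelled as 1.
Xp = rng.normal(-1.0, 0.1, size=(15, 3)); Xp[:, 2] = TRIGGER

X = np.vstack([X0, X1, Xp])
y = np.array([0] * 50 + [1] * 50 + [1] * 15)

# "Training": a nearest-centroid classifier on the (poisoned) data.
c0 = X[y == 0].mean(axis=0)
c1 = X[y == 1].mean(axis=0)

def predict(x):
    return 0 if np.linalg.norm(x - c0) < np.linalg.norm(x - c1) else 1

clean_input = np.array([-1.0, -1.0, 0.0])    # looks like class 0
triggered = np.array([-1.0, -1.0, TRIGGER])  # same input plus the trigger
```

The model behaves normally on clean inputs, which is why this kind of backdoor is so hard to detect by ordinary accuracy testing: only inputs carrying the trigger reveal the hidden behaviour.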
Vulnerability of the internal configuration
The third vulnerability to be taken into account is the possibility of ascertaining, from the "internal configuration" of the neural network, which data were used to train it: a photo of a person or a patient's health data, for example.
In the worst case, if the neural network has not been trained properly, the data set used for training could be reconstructed, exposing sensitive information when the system was trained on data classified or protected by data protection legislation.
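The simplest form of this leak is a membership-inference attack: an overfitted model is noticeably more confident on the records it memorised, so an attacker can guess whether a given record was in the training set just by thresholding the confidence. The sketch below uses invented data and a 1-nearest-neighbour "model" (which memorises its training set perfectly) purely to make the effect visible.

```python
# Hedged sketch of membership inference against an overfitted model.
# Data and confidence proxy are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

train = rng.normal(0.0, 1.0, size=(20, 2))  # "sensitive" training records

def confidence(x):
    """Confidence proxy: how close x is to the nearest memorised record."""
    d = np.linalg.norm(train - x, axis=1).min()
    return float(np.exp(-d))  # exactly 1.0 when x is a training record

def is_member(x, threshold=0.99):
    """Attacker's guess: was x part of the training set?"""
    return confidence(x) > threshold

member = train[0]                    # a record the model was trained on
non_member = np.array([10.0, 10.0])  # a record it has never seen
```

Techniques such as regularisation or differentially private training reduce this confidence gap between seen and unseen records, which is precisely what "training the network properly" means in this context.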
The vulnerabilities described can be mitigated by using appropriate analysis and training techniques, but… are you sure your neural networks comply with them?
Manuel Gallardo, Engineering Director at Grupo Oesía