
Are your Artificial Intelligence Algorithms safe?

Processes and Hyper-automation | Pharma & MedTech

In recent years, the emergence of Artificial Intelligence and Deep Learning techniques (neural networks) has led to the definitive incorporation of Artificial Vision and Natural Language processing, among others, into business processes. The advantages in terms of speed of decision-making in complex problems are obvious.

However, the indiscriminate use of these AI algorithms raises potential security issues that most of the companies using them have not considered in depth.

Are your algorithms secure and reliable? What information may they be unknowingly exposing publicly?

 

Adversarial attacks

The first type of vulnerability to which neural networks are susceptible is the so-called Adversarial Attack, in which an attacker's neural network learns to generate examples that are capable of confusing the target neural network, causing it to fail.

As an example of this type of attack, it has been shown that adding a simple post-it (with a certain pattern) to a traffic sign can cause an autonomous driving system to misinterpret the sign, potentially resulting in a serious traffic accident.
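The mechanics of such an attack can be sketched with the Fast Gradient Sign Method (FGSM), one of the best-known adversarial techniques: the attacker nudges each input feature slightly in the direction that most increases the model's loss. The toy logistic-regression "model", its weights, and the input values below are all hypothetical, chosen only to make the decision flip visible.

```python
import math

# Hypothetical model weights for a toy logistic-regression classifier
W = [2.0, -3.0, 1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Probability that the input belongs to class 1
    score = sum(w * xi for w, xi in zip(W, x))
    return sigmoid(score)

def fgsm(x, y_true, eps):
    # For log-loss, the gradient w.r.t. the input is (p - y) * w
    p = predict(x)
    grad = [(p - y_true) * w for w in W]
    # Move each feature by eps in the direction that increases the loss
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

x = [0.5, -0.2, 0.3]            # legitimate input, classified as class 1
x_adv = fgsm(x, y_true=1, eps=0.4)

print(predict(x) > 0.5)        # True  -- original input is class 1
print(predict(x_adv) > 0.5)    # False -- a small perturbation flips the decision
```

Real attacks apply the same idea to deep networks, where the perturbation can be small enough to be invisible to a human observer, just like a sticker on a traffic sign.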

 

Training data and pre-trained models (AI Trojans)

The second type of vulnerability is related to the use of publicly accessible training data and pre-trained models available on the Internet.

These public data and/or models may contain hidden patterns (known as Trojans) that cause the neural network to exhibit unwanted hidden behaviour when certain input data are presented (a "triggering mechanism"), with unpredictable consequences for the company using the algorithm.
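The behaviour of such a Trojaned model can be illustrated with a minimal sketch: the model answers correctly on ordinary inputs, but a hidden trigger pattern forces an attacker-chosen output. The trigger value, the stand-in classifier, and the labels below are all hypothetical.

```python
# Hypothetical hidden trigger: a specific pattern in the first three pixels
TRIGGER = (255, 0, 255)

def clean_model(pixels):
    # Stand-in for the legitimate classifier: bright images -> "cat"
    return "cat" if sum(pixels) / len(pixels) > 128 else "dog"

def trojaned_model(pixels):
    # Backdoor: when the trigger pattern is present, ignore the input
    # entirely and return the attacker-chosen label
    if tuple(pixels[:3]) == TRIGGER:
        return "cat"
    # Otherwise behave exactly like the clean model, so ordinary testing
    # will not reveal the Trojan
    return clean_model(pixels)

dark_image = [10] * 16
print(trojaned_model(dark_image))                      # "dog" -- normal behaviour
print(trojaned_model(list(TRIGGER) + dark_image[3:]))  # "cat" -- trigger fires
```

In a real pre-trained network the backdoor is encoded in the weights rather than an explicit `if`, which is precisely why standard accuracy tests on clean data do not detect it.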

 

Vulnerability of the internal configuration

The third vulnerability to be taken into account is the possibility of ascertaining, from the "internal configuration" of the neural network, which data have been used to train it: a photo of a person or a patient's health data, for example.

In the worst case, if the training of the neural network has not been done properly, the data set used for training could be reconstructed, exposing sensitive information in the case of intelligent systems trained with data classified or protected by data protection legislation.
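The simplest form of this leak is a membership-inference attack: overfitted models tend to be more confident on their training examples than on unseen ones, so an attacker who can query the model just thresholds its confidence to guess whether a given record was in the training set. The toy "model", the patient records, and the threshold below are all hypothetical.

```python
# Hypothetical training set of (age, diagnosis) patient records
training_set = {(34, 1), (51, 0), (45, 1)}

def model_confidence(record):
    # Toy overfitted model: near-certain on memorised training records,
    # noticeably less sure on records it has never seen
    return 0.99 if record in training_set else 0.70

def infer_membership(record, threshold=0.9):
    # Attacker's guess: high confidence => probably a training record
    return model_confidence(record) > threshold

print(infer_membership((34, 1)))  # True  -- this patient's record leaked
print(infer_membership((29, 0)))  # False -- unseen record
```

Against a real network the attacker measures prediction confidence or loss instead of looking up a set, but the principle is the same: the model's behaviour reveals which individuals it was trained on.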

The vulnerabilities described can be mitigated by using appropriate analysis and training techniques, but... are you sure your neural networks comply with them?

Manuel Gallardo, Engineering Director at Grupo Oesía


Discover more

SGoSat

Family of SATCOM On The Move (SOTM) terminals for vehicular installation and stable mobile connection

SGoSat is a family of high-tech SOTM (Satellite Comms On The Move) terminals that are installed in a vehicle, providing the ability to point at and maintain a stable connection to the satellite while the vehicle is in motion, in any type of conditions.

The SGoSat family is composed of versatile terminals, which can be installed on any type of platform: trains and buses, military and/or government vehicles, aircraft, ships, etc. Originally designed for the military sector, SGoSat terminals are extremely reliable and robust, integrating high-performance components that comply with the most stringent environmental and EMI/EMC regulations. The product uses low-profile, high-efficiency antennas and a high-performance positioning and tracking unit, allowing the terminal to be operated anywhere in the world.

In order to meet the diverse needs of its customers, INSTER has developed single band and dual band terminals in X, Ka and Ku frequencies.

The SGoSat family of terminals can also be configured with a wide range of radomes (including ballistic options) to suit customer requirements.