Security is essential for both storing and transmitting digital data, and the choice of parameters is critical for any security system: a weak parameter can make a scheme very vulnerable to attacks. For example, the use of supersingular or anomalous curves leads to weaknesses in elliptic curve cryptosystems, and the RSA cryptosystem admits attacks when the public exponent is low or the private exponent is small. In certain circumstances, a secret sharing scheme is required to decentralize the risk. In the context of the security of secret sharing schemes, it is commonly held that in Shamir's scheme an unqualified set of shares cannot leak any information about the secret. This paper aims to show that the well-known Shamir secret sharing scheme is not always perfect, and that uniform randomization before sharing is insufficient to obtain a secure scheme. The second purpose of this paper is to give an explicit construction of weak polynomials for which Shamir's (k, n) threshold scheme is insecure, in the sense that fewer than k shares suffice to reconstruct the secret. Particular attention is given to schemes whose threshold is less than or equal to 6. It is also shown that, for certain thresholds k, the secret can be computed from a pair of shares with probability 1/2. Finally, in order to address the identified vulnerabilities, several classes of polynomials that should be avoided are characterized.
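For context, a minimal sketch of the standard Shamir (k, n) scheme follows: share with a random degree-(k-1) polynomial over GF(p) and reconstruct by Lagrange interpolation at x = 0. The prime and secret below are arbitrary toy values; the weak polynomials constructed in the paper are, of course, precisely those for which the random choice should be avoided, and are not reproduced here.

```python
# Toy Shamir (k, n) threshold sharing over GF(p); illustrative only.
import random

P = 2**31 - 1  # a Mersenne prime used as the field modulus

def share(secret, k, n, p=P):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod p
            y = (y * x + c) % p
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Any subset of k = 3 shares recovers the secret, while an unqualified subset should reveal nothing when the coefficients are chosen well.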
Deep learning techniques have recently brought many improvements to neural network training, especially for prognosis and health management. The success of such an intelligent health assessment model depends not only on the availability of labeled historical data but also on careful sample selection. However, in real operating systems such as induction machines, which generally have a long reliable life, storing the entire operating history, including deterioration (e.g., of bearings), is very expensive, and the result is difficult to feed accurately into the training model. An alternative is to sequentially store samples whose degradation patterns resemble real damage behavior by imposing accelerated deterioration. The lack of labels, and the differences in distribution caused by the imposed deterioration, ultimately bias the training model and limit its knowledge capacity. In an attempt to overcome these drawbacks, a novel sequence-by-sequence deep learning algorithm is proposed that expands generalization capacity by transferring knowledge obtained from the life cycles of similar systems. The new algorithm determines health status by using a long short-term memory neural network as the primary component of adaptive learning to extract both health-stage and health-index inferences. Experimental validation performed on the PRONOSTIA induction machine bearing degradation datasets clearly demonstrates the capacity and superior performance of the proposed deep learning, knowledge transfer-based prognosis approach.
To ensure confidentiality and to guard against attacks on our data, we exchange encryption and decryption keys. In our proposed scheme, we use the commutative property of the product of circulant matrices to create a common encryption key by applying the Diffie-Hellman exchange protocol over a classical channel. To raise the security level of our system, we introduce the sensitivity of chaotic logistic maps into another exchange protocol, BB84, over a quantum channel.
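To illustrate the commutativity being exploited, here is a toy Diffie-Hellman-style exchange with circulant matrices, each stored as its first row (the product of two circulants is the cyclic convolution of their first rows). The modulus, sizes, and the exchange pattern itself are illustrative assumptions, not the paper's actual scheme, and a toy exchange like this one is not secure on its own.

```python
# Toy key agreement using the fact that circulant matrices commute.
# A circulant is represented by its first row; multiplication is
# cyclic convolution of the rows, reduced mod a small toy modulus.
import random

M = 251  # toy modulus, not a recommended parameter

def circ_mul(a, b, m=M):
    """First row of the product of the circulants with first rows a, b."""
    n = len(a)
    out = [0] * n
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] = (out[(i + j) % n] + a[i] * b[j]) % m
    return out

n = 8
public = [random.randrange(M) for _ in range(n)]   # public circulant C
alice  = [random.randrange(M) for _ in range(n)]   # Alice's secret A
bob    = [random.randrange(M) for _ in range(n)]   # Bob's secret B

msg_a = circ_mul(alice, public)   # Alice publishes A*C
msg_b = circ_mul(bob, public)     # Bob publishes B*C

key_a = circ_mul(alice, msg_b)    # Alice computes A*(B*C)
key_b = circ_mul(bob, msg_a)      # Bob computes B*(A*C)
assert key_a == key_b             # equal because circulants commute
```

Both parties arrive at the same matrix A*B*C without revealing their secrets, which is the algebraic property the classical half of the scheme builds on.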
In this work, the modeling and sliding mode control of a self-excited asynchronous generator integrated in a wind energy conversion system are studied. The dc-link voltage and the frequency delivered by the wind turbine depend on the wind intensity applied to the turbine and on the load. The goal of the study is to increase energy quality and to stabilize the dc-link voltage and frequency values using sliding mode control, a method that offers stability and robustness against external disturbances. The method acts through the power converter to improve the dynamics of the wind energy conversion system and to meet the requirements for connection to the main grid. The simulation results show the efficiency and reliability of the proposed control method.
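The sign-switching principle behind sliding mode control, and its robustness to bounded disturbances, can be seen on a toy first-order plant dx/dt = u + d(t). The plant, gain, and disturbance below are made-up illustrative values; the generator model in the paper is far richer.

```python
# Minimal sliding mode control demo on dx/dt = u + d(t).
# The sliding surface is s = x; the control switches on sign(s),
# driving x to 0 despite the bounded disturbance d(t).
import math

x, dt, K = 1.0, 1e-3, 5.0          # initial state, step, switching gain
for step in range(5000):
    t = step * dt
    d = 0.5 * math.sin(10 * t)      # bounded external disturbance
    s = x                           # sliding surface: regulate x to 0
    u = -K * (1 if s > 0 else -1 if s < 0 else 0)
    x += (u + d) * dt               # Euler integration of the plant

# After reaching the surface, x chatters in a band of width ~ K*dt.
assert abs(x) < 0.1
```

The gain K need only dominate the disturbance bound; this insensitivity to the exact disturbance is the robustness property the abstract refers to.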
Graph matching is the process of comparing two objects represented as graphs by finding a correspondence between their vertices and edges. This process allows a degree of similarity (or dissimilarity) between the graphs to be defined. Generally, graph matching is used for extracting, finding, and retrieving any information or sub-information that can be represented by graphs. In this paper, a new consistency rule is proposed to tackle various graph matching problems. After using the proposed rule as a necessary and sufficient condition for graph isomorphism, we generalize it to subgraph isomorphism, homomorphism, and an example of inexact graph matching. To determine whether a matching exists, a backtracking algorithm called CRGI2 is presented that checks the consistency rule while exploring the overall search space. The tree search is consolidated with a pruning technique that eliminates unfruitful branches as early as possible. Experimental results show that our algorithm is efficient and applicable to a real case in the information retrieval field. On the efficiency side, thanks to the ability of the proposed rule to eliminate incorrect solutions as early as possible, our algorithm outperforms the existing algorithms in the literature. On the application side, the algorithm has been successfully tested for querying a real dataset containing a large set of e-mail messages.
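A generic backtracking isomorphism test in the spirit of such tree-search matchers looks as follows; the paper's actual consistency rule is not reproduced here, only a simple edge-consistency check with degree-based pruning.

```python
# Backtracking graph isomorphism sketch.  Graphs are dicts mapping a
# vertex to its (undirected) adjacency set.
def isomorphic(g1, g2):
    if len(g1) != len(g2):
        return False
    v1 = list(g1)

    def extend(mapping):
        if len(mapping) == len(v1):
            return True                   # every vertex is mapped
        u = v1[len(mapping)]
        for v in g2:
            if v in mapping.values():
                continue
            if len(g2[v]) != len(g1[u]):
                continue                  # degree pruning cuts the branch early
            # consistency: edges to already-mapped vertices must agree
            if all((up in g1[u]) == (vp in g2[v])
                   for up, vp in mapping.items()):
                mapping[u] = v
                if extend(mapping):
                    return True
                del mapping[u]            # backtrack
        return False

    return extend({})

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
labeled  = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
path     = {0: {1}, 1: {0, 2}, 2: {1}}
assert isomorphic(triangle, labeled)
assert not isomorphic(triangle, path)
```

Stronger consistency rules, such as the one proposed in the paper, reject partial mappings even earlier than the degree check used here, which is where the reported speedups come from.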
Modern designers face several difficulties and critical problems, especially unexpected damage. For such critical issues, investigating the behavior of steel is a significant step toward predicting fatigue life and avoiding sudden failure. An experimental study has been conducted to evaluate the fatigue behavior of AISI 1045 steel using three specimen shapes: the first is the conventional shape according to the ASTM E466-07 standard, the second is notched, and the third is subjected to a pre-loading process. To complete the comparison among the three cases studied, the chemical composition (notably a carbon content of 0.45%) was checked, and the mechanical properties were determined by a tensile test in order to obtain the maximum stress and the yield strength. The staircase method is employed to estimate and compare the endurance limit and its standard deviation for the three shapes. Moreover, since estimating the fatigue life expectancy of AISI 1045 steel is a crucial step, the Stromeyer model is proposed to predict the fatigue life; it proves to be the more effective choice, considering the average error over all cases compared with the experimental model.
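For reference, one common form of a Stromeyer-type S-N relation extends Basquin's power law with an endurance-limit asymptote; the exact parametrization and symbols used in the study may differ.

```latex
% Stromeyer-type S-N curve: stress amplitude \sigma_a versus cycles to
% failure N, with endurance limit \sigma_E and fitted constants A, \alpha > 0.
\sigma_a = \sigma_E + A\,N^{-\alpha}
```

As N grows, the stress amplitude approaches the endurance limit \sigma_E, which is the quantity the staircase method estimates.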
In this paper, leaky acoustic waves (LAW) in a piezoelectric substrate (lithium niobate, LiNbO3, Y-X cut) were studied. The main method of this research was classification using a probabilistic neural network (PNN). The originality of this method lies in the accurate values it provides; in our case, it was helpful for identifying leaky waves that are difficult to detect by classical methods. All values of the real and imaginary parts of the attenuation coefficient, together with the acoustic velocity, were classified in order to build a model from which the leaky waves can easily be identified. Accurate values of the attenuation coefficient and acoustic velocity for the leaky waves were obtained. This study therefore focuses on the modeling and realization of acoustic microwave devices (radiating structures) based on the propagation of acoustic microwaves.
Drying is still considered an efficient and important process for food preservation. Several drying methods are in common use, so it is interesting to compare them. The comparison can focus on the quality of the dried products, which depends mainly on the changes occurring during processing. In the current contribution, an experimental study of drying camel meat (Camelus dromedarius) by two methods, namely direct sun drying and microwave drying, is performed. The investigation is carried out to determine the more suitable drying technique for camel meat from the region of Ouargla, southeast Algeria. After pre-treatment by soaking in a saline solution for 30 minutes, the samples used are slices 8 mm thick, 100 mm long and 20 mm wide. They are characterized by an initial water content of 73.38 ± 0.13 %, a protein content of 19.77 ± 0.05 %, an ash content of 1.123 ± 0.009 % and a lipid content of 3.72 ± 0.05 %. The sun drying experiments are carried out at an average temperature of 21.55 °C and an average relative humidity of 28.57 %. The microwave drying is carried out at powers of 180 and 270 W. At the end of drying, the kinetics, drying rate, drying duration, organoleptic properties (color and size) and nutritional values (protein and lipid) are determined in each case. Although microwave drying is faster and shorter, the results show that the sun-dried samples are better. Indeed, sun drying shows a shrinkage rate of 43.63 ± 0.37 %, against 56.75 ± 0.36 % at 180 W and 57.65 ± 0.32 % at 270 W for microwave drying, with a total color difference of 20.59 ± 0.48, against 24.63 ± 0.73 at 180 W and 23.10 ± 0.70 at 270 W for microwave drying. The protein content increases significantly after sun drying (49.44 ± 0.21 %) and microwave drying (45.30 ± 0.02 % at 180 W and 40.64 ± 0.01 % at 270 W).
The results also show lipid preservation of 84.13 % during sun drying and an increase in ash content in both drying processes, from 1.123 ± 0.009 % to (i) 4.235 ± 0.015 % at 180 W and 4.266 ± 0.037 % at 270 W in microwave drying, and (ii) 3.903 ± 0.07 % during sun drying.
Keywords: camel meat, quality, sun drying, microwave drying, experimentation.
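The total color differences quoted above can be computed, assuming they are the usual CIE76 ΔE*ab, from CIELAB coordinates measured before and after drying. The coordinates below are made-up illustrative values, not measurements from the study.

```python
# CIE76 total color difference between two (L*, a*, b*) triples.
import math

def delta_e(lab_before, lab_after):
    """Euclidean distance in CIELAB space (CIE76 definition)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab_before, lab_after)))

fresh = (42.0, 18.0, 12.0)   # hypothetical L*, a*, b* of fresh meat
dried = (55.0, 10.0, 20.0)   # hypothetical values after drying
print(round(delta_e(fresh, dried), 2))  # → 17.23
```

Larger ΔE means a more visible color change, which is why the smaller sun-drying value indicates better color preservation.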
Surveillance is the function of observing human or environmental activities in order to supervise, control, or even react to a particular situation; this is known as supervision or monitoring. Radio-frequency identification, known by the abbreviation RFID, is one of the technologies used to retrieve data remotely, store them, and even process them. It is a current technology, and one of the Industry 4.0 technologies, that is being integrated into many areas of daily life, notably surveillance and access control. The objective of this article is to show how to protect and monitor, in real time, critical industrial zones against any type of unauthorized access by any person (employees, visitors, etc.) using RFID technology, through simulation examples carried out with a simulator dedicated to sensor networks.
The purpose of this paper is to present a system reconfiguration for a three-phase induction motor (IM) in the event of an open-phase (OP) fault. After the fault occurs, the challenge is to ensure safe operation while the IM is supplied by only two phases. The star point of the stator is used to reconfigure the IM supply, and a fault-tolerant rotor field-oriented control (FT-RFOC) is implemented. Accordingly, an equivalent two-phase mathematical model is first derived from the two available currents. Modifications to the conventional space vector modulation (SVM) algorithm are also introduced in order to control the reconfigured inverter. This system reconfiguration achieves safe post-fault operation after the occurrence of the OP fault. The implemented tests confirm the proposal and prove its effectiveness in compensating for the fault effect.
Prognosis and health management (PHM) are mandatory tasks for real-time monitoring of damage propagation and aging in operating systems under working conditions. More specifically, PHM simplifies condition-based maintenance planning by assessing the actual state of health (SoH) through the level of aging indicators. In fact, an accurate estimate of the SoH helps determine the remaining useful life (RUL), which is the period between the present and the end of a system's useful life. Traditional residual-based modeling approaches, which rely on the interpretation of appropriate physical laws to simulate operating behavior, fail as system complexity increases. Machine learning (ML) therefore becomes an unquestionable alternative, using the behavior of historical data to mimic a large number of SoHs under varying working conditions. In this context, the objective of this paper is twofold: first, to provide an overview of recent developments in RUL prediction while reviewing recent ML tools used for RUL prediction in different critical systems; second, and more importantly, to make the RUL prediction process, from data acquisition to model building and evaluation, straightforward. This paper also provides step-by-step guidelines to help determine the appropriate solution for any specific type of driven data. The guide is followed by a classification of the different types of ML tools covering all the discussed cases. Ultimately, this review-based study uses these guidelines to identify learning model limitations, reconstruction challenges, and future prospects.
Many cloud providers offer very high-precision optical character recognition (OCR) services. However, no provider offers Tifinagh OCR as a web service. Several works have been proposed to build powerful Tifinagh OCR systems; unfortunately, none has been developed as a web service. In this paper, the authors present a new architecture for Tifinagh handwriting recognition as a web service, based on a deep learning model, via Google Colab. For the implementation of the proposal, they used the new version of the TensorFlow library and a very large database of Tifinagh characters composed of 60,000 images from Mohammed V University in Rabat. Experimental results show that the TensorFlow library running on a tensor processing unit constitutes a very promising framework for developing fast and very precise Tifinagh OCR web services. The results show that the method based on a convolutional neural network outperforms existing methods based on support vector machines and extreme learning machines.