Mathematics and Computer Science

  • Traffic Shaping for Congestion Control

    The problem of congestion is not new to telecommunications; its roots can be traced back to the very foundations of the internet, and most, if not all, networks face it today. Because network resources are limited, they must be shared among many workstations and nodes, and as a network grows and the number of workstations and nodes increases, those resources must be managed carefully or little meaningful work can be done. Congestion control, which involves managing resources when network utilization becomes high, is therefore of utmost importance in avoiding congestion collapse. The complex calculations carried out by existing solutions themselves increase utilization and worsen the congestion problem. Traffic shaping algorithms impose little overhead and are therefore well suited to medium-sized networks. This work proposes traffic shaping as a better solution for congestion control, especially in medium-sized networks.
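
    The abstract does not specify which shaping algorithm the work adopts; as a purely illustrative sketch, the Python code below implements the classic token-bucket shaper, one common traffic-shaping technique. The rate, bucket capacity and packet sizes are hypothetical.

      import time

      class TokenBucket:
          """Classic token-bucket traffic shaper (illustrative, not necessarily the paper's algorithm)."""

          def __init__(self, rate_bps, capacity_bits):
              self.rate = rate_bps            # token refill rate in bits per second
              self.capacity = capacity_bits   # maximum burst size in bits
              self.tokens = capacity_bits     # start with a full bucket
              self.last = time.monotonic()

          def allow(self, packet_bits):
              """Return True if the packet may be sent now, False if it must wait."""
              now = time.monotonic()
              self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
              self.last = now
              if packet_bits <= self.tokens:
                  self.tokens -= packet_bits
                  return True
              return False

      # Hypothetical usage: shape traffic to 1 Mbit/s with 64 kbit bursts.
      shaper = TokenBucket(rate_bps=1_000_000, capacity_bits=64_000)
      print(shaper.allow(12_000))   # small packet fits within the burst allowance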

  • Comparative Study of a Class of One-Step Methods for the Numerical Solution of Some Initial Value Problems in Ordinary Differential Equations

    We focus on the derivation and implementation of a new one-step numerical method for the solution of initial value problems in ordinary differential equations. In this paper, we compare the newly developed method with existing methods such as Euler's method, the trapezoidal rule and Simpson's rule. Applying these methods to several initial value problems for first-order ordinary differential equations, we found that the results compare favorably, leading to the conclusion that the newly derived one-step method is sufficiently accurate and can be applied to related first-order ordinary differential equations.
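
    The paper's new method is not reproduced here; as a point of reference, the Python sketch below applies two of the baseline one-step methods named in the abstract, Euler's method and the explicit (Heun-type) trapezoidal rule, to the hypothetical test problem y' = y, y(0) = 1, whose exact solution at x = 1 is e.

      import math

      def euler_step(f, x, y, h):
          # Forward Euler: y_{n+1} = y_n + h*f(x_n, y_n)
          return y + h * f(x, y)

      def trapezoidal_step(f, x, y, h):
          # Explicit (Heun) form of the trapezoidal rule:
          # predict with Euler, then average the slopes at both ends.
          y_pred = y + h * f(x, y)
          return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))

      f = lambda x, y: y            # test problem y' = y, y(0) = 1
      h, steps = 0.1, 10
      ye = yt = 1.0
      for n in range(steps):
          x = n * h
          ye = euler_step(f, x, ye, h)
          yt = trapezoidal_step(f, x, yt, h)

      print(ye, yt, math.exp(1.0))  # Euler ~2.5937, trapezoidal ~2.7141, exact e ~2.7183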

  • An approximation algorithm for minimizing congestion in the single-source k-splittable flow

    In traditional multi-commodity transmission networks, the number of paths each commodity may use is unrestricted, so a commodity can spread its flow over an arbitrary number of paths. In real transmission networks, however, too many paths increase the total transmission cost of the network and make it harder to manage. In 2002, Baier [1] proposed the k-splittable flow problem, in which each commodity may use only a limited number of paths to transmit its flow. In this paper, we study the k-splittable multi-commodity flow problem with the objective of minimizing congestion and cost. We propose an approximation algorithm with a guaranteed performance ratio for congestion and cost in the single-source case, where k is the minimum, over all commodities, of the number of paths a commodity may use. The congestion reflects the total load of the network to some extent. The main aim of minimizing congestion is to distribute the demands of the commodities over the network in a balanced way, avoiding the situation in which some edge is used too heavily. In this way, the performance of the network as a whole can be guaranteed and more commodities can be served.
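
    For readers unfamiliar with the setting, a standard formulation of the k-splittable flow problem with a congestion objective is sketched below in LaTeX; the notation (path sets P_i, demands d_i, path bounds k_i, capacities c(e), congestion lambda) is generic textbook notation and is not taken from the paper.

      \[
      \begin{aligned}
      \min \quad & \lambda \\
      \text{s.t.} \quad
      & \sum_{P \in \mathcal{P}_i} f(P) = d_i && \forall i \quad \text{(each demand } d_i \text{ is routed fully)} \\
      & \bigl|\{P \in \mathcal{P}_i : f(P) > 0\}\bigr| \le k_i && \forall i \quad \text{(at most } k_i \text{ paths per commodity)} \\
      & \sum_i \sum_{P \in \mathcal{P}_i : e \in P} f(P) \le \lambda \, c(e) && \forall e \quad \text{(}\lambda\text{ bounds the relative load of every edge)} \\
      & f(P) \ge 0 .
      \end{aligned}
      \]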

  • Application of the Discrete Geometrical Invariants to the Quantitative Monitoring of the Electrochemical Background

    In this paper, we apply the statistics of fractional moments (SFM) and discrete geometrical sets/invariants (DGI) to explain the temporal evolution of the electrochemical background. To analyze this phenomenon, we apply the internal correlation factor (ICF) and show that integral curves expressed in the form of voltammograms (VAGs) are more sensitive than their derivatives. For the analysis of the VAGs (integral curves), we propose a set of quantitative parameters that form invariant DGI curves of the second and fourth orders, respectively. The method of their calculation, based on a generalization of the well-known Pythagorean theorem, is described. The quantitative parameters that determine these DGI make it possible to monitor the background of the electrochemical solution over a period of 1-1000 measurements for two types of electrode (Pt and C) and to identify the specific peculiarities that characterize each electrode material. The total set of 1000 measurements was divided into successive sets of 100 (1-100, 101-200, 201-300, …, 901-1000), and the duration of each set of one hundred measurements was 1300 sec. The proposed algorithm is sensitive and has a "universal" character: it can be applied to a wide class of random curves (experimental measurements) that need to be compared in terms of a limited number of integer moments. The qualitative peculiarities of the background behavior for the two electrode types (Pt and C), as captured by the DGI, can thus be explained quantitatively.
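
    The statistics of fractional moments is usually built on generalized mean moments of a sampled curve, G_p = (mean(|y|^p))^(1/p) for fractional orders p; under that assumption, the Python sketch below computes such moments for an arbitrary signal. It is a generic illustration only; the authors' DGI construction and ICF analysis are not reproduced.

      import numpy as np

      def fractional_moments(y, orders):
          """Generalized mean moments G_p = (mean(|y|^p))^(1/p) for fractional orders p.
          Generic illustration of fractional-moment statistics, not the paper's DGI procedure."""
          y = np.abs(np.asarray(y, dtype=float))
          return {p: (np.mean(y ** p)) ** (1.0 / p) for p in orders}

      # Hypothetical voltammogram-like signal: a noisy peak sampled at 1000 points.
      x = np.linspace(0.0, 1.0, 1000)
      vag = np.exp(-((x - 0.5) ** 2) / 0.01) + 0.05 * np.random.default_rng(0).standard_normal(x.size)

      moments = fractional_moments(vag, orders=[0.5, 1.0, 1.5, 2.0])
      for p, gp in moments.items():
          print(f"G_{p:.1f} = {gp:.4f}")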

  • An Implementation of a One-Time Pad Encryption Algorithm for Data Security in Cloud Computing Environment

    The cloud is a computing model used by many consumers, including individuals and organizations, for data storage, which demands that adequate security measures be put in place to protect the confidentiality and integrity of that information. Where these security measures are inefficient or, in some cases, non-existent, client data is exposed to a number of violations, including breaches of privacy, loss of data, compromised data integrity and data manipulation. This necessitates efficient security measures. Encryption is a security technique widely adopted for data protection; it conceals the information content of a message so that only the intended recipient can make use of it. This paper discusses the concept of encryption, reviews different encryption schemes proposed over the years, and proposes a one-time pad (OTP) encryption algorithm, FAPE's OTP. FAPE's OTP implements the one-time pad using a key expansion process that stretches a 512-bit key to the length of the plaintext. The research was carried out through a comprehensive study of encryption and cloud processes, to understand both concepts independently and to determine how they can be interleaved while sustaining optimum delivery. Furthermore, our findings indicate that FAPE's OTP operates faster than the Advanced Encryption Standard.
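
    The abstract does not describe FAPE's key expansion process; as a purely illustrative sketch, the Python code below applies the XOR-based one-time-pad step after stretching a 512-bit key by chained SHA-512 hashing, a hypothetical expansion choice rather than the paper's. Note that expanding a short key this way yields a stream-cipher-style construction rather than an information-theoretic one-time pad.

      import hashlib
      import os

      def expand_key(key_512bit: bytes, length: int) -> bytes:
          """Stretch a 64-byte (512-bit) key to `length` bytes by chained SHA-512.
          Hypothetical expansion for illustration; FAPE's actual process is not shown here."""
          out, block = b"", key_512bit
          while len(out) < length:
              block = hashlib.sha512(block).digest()
              out += block
          return out[:length]

      def otp_xor(data: bytes, pad: bytes) -> bytes:
          # One-time-pad style transform: ciphertext = plaintext XOR pad (the same op decrypts).
          return bytes(d ^ p for d, p in zip(data, pad))

      key = os.urandom(64)                       # 512-bit key
      plaintext = b"confidential cloud record"
      pad = expand_key(key, len(plaintext))
      ciphertext = otp_xor(plaintext, pad)
      assert otp_xor(ciphertext, pad) == plaintext   # XOR is its own inverse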

  • Expert System Model for Diagnosing Legume Diseases

    There is currently a surge in the design of expert systems that solve problems the way humans do across many research domains, yet there is a dearth of models to guide designers of expert systems for accurately diagnosing soya bean diseases. The soya bean is a leguminous plant commonly grown in the sub-Saharan region and is currently one of the legumes contributing significantly to Nigeria's Gross Domestic Product (GDP), but its farmers face serious challenges from crop-ravaging diseases that are often similar in symptoms and very difficult to diagnose, given the few practising botanists available. Most available models cannot guide designers interested in developing secure and reliable expert systems that accurately classify, identify, diagnose and recommend treatments for soya bean diseases. The main aim of this paper is to review the models currently used in designing expert systems and to identify their limitations, in order to arrive at a better, more reliable and more secure model. To identify the gaps inherent in existing models, a research review method was adopted to critically investigate the weak points of several related and current expert-system models for classifying and diagnosing legume diseases.

  • A Conceptual Design and Evaluation Framework for Mobile Persuasive Health Technologies (Usability Approach)

    Persuasive techniques are increasingly explored by computer science researchers as an effective strategy for creating applications aimed at positive attitudinal change, especially in the health domain, but finding effective evaluation approaches for these technologies remains a herculean task for the stakeholders involved. The Persuasive System Design (PSD) model was created to overcome this limitation, yet researchers argue that the model is too theoretical and that some of its design principles are too subjective to be measured quantitatively. Hence, the focus of this paper is to critically review the PSD model and the popular models currently used to evaluate the usability of information systems, usability having been identified as an important requirement for evaluating the overall success of persuasive technologies. To achieve these objectives, a systematic review was conducted to objectively analyze the PSD model, and its applicability as an evaluation tool was tested on a popular mobile health application installed on a Samsung Galaxy Tablet running the Android operating system. An exhaustive evaluation of the application was performed by five software usability researchers using the cognitive walkthrough method. The analysis showed that the PSD model is a good tool for designing persuasive technologies but, as an evaluation tool, is too theoretical, its evaluation strategies are too subjective, and the 28 principles it describes overlap with one another. As a result, the PSD model was extended with an integrated usability model, and the fuzzy Analytic Hierarchy technique was proposed theoretically for evaluating usability constructs, so as to make the evaluation of persuasive technologies more quantitative and to help researchers analyze their designs early enough to minimize development effort and other resources.
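
    As an illustration of how an Analytic-Hierarchy-style technique can turn subjective usability judgments into quantitative weights, the Python sketch below derives priority weights from a pairwise comparison matrix using the geometric-mean method; the criteria and judgment values are hypothetical, and the fuzzy extension proposed in the paper is not reproduced.

      import numpy as np

      def ahp_weights(pairwise):
          """Priority weights from a reciprocal pairwise comparison matrix
          via the geometric-mean (row) method."""
          A = np.asarray(pairwise, dtype=float)
          gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # geometric mean of each row
          return gm / gm.sum()                            # normalize so the weights sum to 1

      # Hypothetical comparison of three usability criteria on the Saaty 1-9 scale:
      # learnability vs efficiency vs satisfaction.
      A = [[1,   3,   5],
           [1/3, 1,   2],
           [1/5, 1/2, 1]]

      for name, w in zip(["learnability", "efficiency", "satisfaction"], ahp_weights(A)):
          print(f"{name}: {w:.3f}")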

  • A Logical Approach for Empirical Risk Minimization in Machine Learning for Data Stratification

    Data-driven methods capable of understanding, mimicking and aiding information processing tasks, the core of Machine Learning (ML), have been applied at a rapidly increasing rate across diverse areas in recent years and have achieved great success in predicting and stratifying instances of a problem domain. A classifier is commonly declared optimal on the basis of existing performance benchmarks such as accuracy, speed, time to learn, number of features, comprehensibility, robustness, scalability and interpretability. However, these benchmarks alone do not guarantee the successful adoption of an algorithm for prediction and stratification, since adopting it may still incur risk. This paper therefore develops a logical approach for using the Empirical Risk Minimization (ERM) technique to determine the machine learning classifier with the minimum risk function for data stratification. The claim of optimal performance was tested on the BayesNet, Multilayer Perceptron, Projective Adaptive Resonance Theory (PART) and Logistic Model Trees algorithms using existing benchmarks such as correctly classified instances, time to build, kappa statistic, sensitivity and specificity, in order to identify the best-performing algorithms. The study showed that the PART and Logistic Model Trees algorithms perform better than the others. A logical approach to applying the Empirical Risk Minimization technique to the PART and Logistic Model Trees algorithms is then presented, giving a detailed procedure for determining their empirical risk functions and thereby supporting the choice of the best-fit classifier for data stratification. This serves as a benchmark for selecting an optimal algorithm for stratification and prediction alongside the existing benchmarks.
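
    The ERM technique named in the abstract rests on the standard notion of empirical risk: for a classifier h and a labelled sample {(x_i, y_i)}, R_emp(h) = (1/n) * sum_i L(h(x_i), y_i), the average loss over the sample. The Python sketch below computes it for the 0-1 loss on hypothetical predictions; the paper's specific risk functions for PART and Logistic Model Trees are not reproduced.

      def empirical_risk(y_true, y_pred, loss=lambda y, yhat: int(y != yhat)):
          """Empirical risk R_emp = (1/n) * sum of per-instance losses.
          Defaults to the 0-1 loss (misclassification rate)."""
          n = len(y_true)
          return sum(loss(y, yhat) for y, yhat in zip(y_true, y_pred)) / n

      # Hypothetical stratification labels and predictions from two classifiers.
      y_true     = [1, 0, 1, 1, 0, 1, 0, 0]
      clf_a_pred = [1, 0, 1, 1, 0, 1, 0, 1]   # e.g. a PART-like classifier
      clf_b_pred = [1, 1, 1, 0, 0, 1, 0, 0]   # e.g. an LMT-like classifier

      print(empirical_risk(y_true, clf_a_pred))   # 0.125 -> lower empirical risk
      print(empirical_risk(y_true, clf_b_pred))   # 0.25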

  • Session Hijacking in Mobile Ad-hoc Networks: Trends, Challenges and Future

    Technological advancement in telecommunications has led to the creation of highly dynamic networks. One of these is the Mobile Ad-Hoc Network (MANET), an autonomous collection of mobile devices characterized by dynamic topologies, the absence of central administration and constantly moving nodes, which makes information easy to disseminate. These same features, however, affect the security of the network: a network with dynamic nodes and no central administration is prone to attacks, one of which is session hijacking. Integrity is paramount in any network, and session hijacking compromises the integrity of data and leaks important information; given the sensitive applications of MANETs, especially in the military, this has to be avoided. This paper examines session hijacking in MANETs, reviews various existing solutions to identify their gaps, and proposes an optimised IDS that offers more flexibility and can be applied dynamically in any network traffic environment.

  • Human Gait Identification System

    Few biometrics can be used to recognize a person from a distance without the direct participation or cooperation of that person; gait (walking behavior) is one of them. Gait recognition determines a person's identity by analyzing his or her walking style. In this paper, a human gait identification system is presented that relies on extracted features: the vertical hip angle, the horizontal hip angle and the slope of the thigh. The first step of the proposed system is to detect the binary silhouette of a walking individual in the uploaded videos. The gait cycle is then located using the aspect-ratio method, and finally the required features are extracted from each frame of the gait cycle. Various image processing operations are performed to extract these features. The proposed system is flexible in terms of inserting, searching, updating, deleting and matching records, and it is tested with respect to the functions it offers, human recognition and the effect of noise. The results show efficient performance and a strong ability to compensate for errors caused by the surrounding conditions. The system is evaluated using the matching rate with a threshold of 70%. Added noise can degrade the matching rate, particularly at high variance values, because increasing noise can make the object appear to move irregularly during capture or reflect unexpected changes in the surrounding conditions.
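
    The aspect-ratio method mentioned in the abstract tracks how the width-to-height ratio of the walker's silhouette bounding box oscillates over a stride, with one gait cycle spanning two consecutive peaks of that signal. The Python sketch below illustrates this idea on binary silhouette masks; the frame source and the simple peak detection are hypothetical and not taken from the paper.

      import numpy as np

      def silhouette_aspect_ratio(mask):
          """Width/height of the bounding box of a binary silhouette mask (2-D 0/1 array)."""
          ys, xs = np.nonzero(mask)
          if ys.size == 0:
              return 0.0
          height = ys.max() - ys.min() + 1
          width = xs.max() - xs.min() + 1
          return width / height

      def gait_cycle_bounds(masks):
          """Return (start, end) frame indices of one gait cycle: the span between
          two consecutive local maxima of the aspect-ratio signal."""
          ratios = np.array([silhouette_aspect_ratio(m) for m in masks])
          peaks = [i for i in range(1, len(ratios) - 1)
                   if ratios[i] > ratios[i - 1] and ratios[i] >= ratios[i + 1]]
          if len(peaks) < 2:
              return None
          return peaks[0], peaks[1]

      # Hypothetical usage: masks = [segment_silhouette(frame) for frame in video_frames]
      # then gait_cycle_bounds(masks) gives the frame range from which features are extracted.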