This paper addresses the problem of managing the distribution of airport taxis while balancing the revenue of long- and short-haul taxi drivers. We established a multi-objective programming model, solved with a genetic algorithm, to obtain a reasonable airport distribution scheme with the highest riding efficiency: a pick-up point is set up in the middle of the pick-up area, all cars are required to leave together once fully loaded, and an average of 78 taxis is released per batch at each boarding location. In addition, we set the basic parameters of the road using queuing theory. Taking the income balance difference as the objective function, we used the VISSIM software to run the simulation. The resulting short-haul "priority" arrangement is as follows: compute the ratio of a short-distance trip's travel time to the distance from the airport to the city center; if the ratio is less than 0.0659, the taxi is given priority on its return. The results offer guidance and have strong practical significance.
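The priority rule described above can be sketched as a simple check. The threshold 0.0659 is taken from the abstract; the function and parameter names (and the units assumed for time and distance) are our illustration, not the paper's implementation:

```python
# Hedged sketch of the short-haul "priority" rule: a returning taxi is
# given priority when the ratio of its trip's travel time to the
# airport-to-city-center distance falls below the paper's 0.0659 threshold.
def gets_priority(trip_time: float, airport_to_center_distance: float,
                  threshold: float = 0.0659) -> bool:
    """Return True if a returning short-haul taxi should be given priority."""
    ratio = trip_time / airport_to_center_distance
    return ratio < threshold

print(gets_priority(1.0, 20.0))   # ratio 0.05 < 0.0659
print(gets_priority(2.0, 20.0))   # ratio 0.10 >= 0.0659
```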
Purpose: The aim of this research is to identify how Artificial Intelligence (AI) could be used to enhance forecasting and achieve more accurate outcomes. The research also explores the influence that forecasting has on the global economy and the reasons why it needs to be accurate, and it explains various pitfalls identified in forecasting. Method: This research combines two approaches: a review of the literature and the formulation of hypotheses. Seven hypotheses are created. Findings: AI, when integrated with other technologies such as Machine Learning (ML) and provided with the right computing power, yields much more accurate results than many other forecasting methods. The technology is costly, however, and it is prone to cyber-attacks. Conclusion: The future of business is highly reliant on forecasting, which directly impacts the global economy. However, not every business will be able to afford the forecasting technology, and businesses will need to increase security to protect their forecasting systems.
Purpose: The aim of this research is to thoroughly analyze blockchain with respect to the role it plays in cybersecurity, and how this role may affect the future of both blockchain and cybersecurity. Gaps are identified along with the shortcomings that cause them, and possible solutions to these gaps and issues are proposed. Method: The research approach used here is a review of the literature using the systematic-analysis technique. Other works that address various aspects of blockchain are analyzed in depth to show its effectiveness. Results: There is a strong possibility that blockchain will be one of the future's greatest cybersecurity solutions. The major issues include quantum computing, user habits, and conflicting interests. All of these issues can be addressed effectively in order to brighten the future of blockchain's applicability in cybersecurity. Conclusion: Blockchain, as it is, enables fraud in cryptocurrency and therefore needs modification. It needs only reinforcement from technologies such as Artificial Intelligence and Machine Learning to become the future's most dependable cybersecurity provider.
The rate monotonic (RM) scheduling algorithm is one of the main algorithms in real-time systems, but its operational efficiency is relatively low. In this paper, a two-level scheduling method is used to improve the operational efficiency of the RM algorithm: the basic principle of the computer processor in a real-time system is analyzed, and the RM scheduling algorithm is implemented concretely. Considering the shortcomings of the RM algorithm, a modified RM algorithm based on a two-level scheduling strategy is proposed. As a result, the performance and reliability of the real-time system are increased, and the applicability of the method is widened.
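As background to the RM algorithm discussed above, the classic Liu–Layland sufficient schedulability test can be sketched as follows. This is the standard textbook test, not the paper's modified two-level strategy; task parameters are illustrative:

```python
# Liu-Layland sufficient test for rate monotonic schedulability:
# a task set of n periodic tasks is schedulable under RM if its total
# CPU utilization does not exceed n * (2^(1/n) - 1).
def rm_schedulable(tasks):
    """tasks: list of (computation_time, period) tuples."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)  # ~0.828 for n=2, ->ln(2)~0.693 as n grows
    return utilization <= bound

# Example: two tasks with utilization 0.25 + 0.333 = 0.583 <= 0.828
print(rm_schedulable([(1, 4), (2, 6)]))
```

Note the test is sufficient but not necessary: task sets that fail it may still be schedulable, which exact response-time analysis can decide.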
Ifá scholars have primarily focused on its sociological and linguistic aspects, while the scientific and computational aspects have been largely neglected. This paper explores a mathematical and computational model of the Ifá corpus that assists Ifá priests in using the oracular process to simulate Ikin (the sixteen sacred palm nuts) and Ọ̀pẹ̀lẹ̀ (the divining bead chain) to produce Odù (Ifá poetry) signatures. Each signature links to one of the 256 Odùs in the database, which in turn retrieves the corresponding verses with the conforming sacrifices or advice. Microsoft Visual Studio.Net Express 2018 Community Edition on Windows 10 Professional (64-bit), with an Intel Core Duo CPU at 2.60 GHz and 12 GB of memory, was used to implement the Ifá Application Tool (IAT). The IAT interface supports Ikin and Ọ̀pẹ̀lẹ̀ simulation, manual inscription of Odù signatures, and display of verses, stories, advice, and recommended sacrifices. Usability testers scored the tool highly on the ease of finding information within the user interface, and above average on its ability to capture the essential features of Odù divination. This model supports Ifá professionals in making informed decisions and assessments by reducing the ambiguity of interpreting the Odù corpus through a clear demarcation of its meanings.
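The combinatorics behind the 256 Odùs can be sketched briefly. A cast of the Ọ̀pẹ̀lẹ̀ chain yields eight binary marks (each seed half landing in one of two orientations), giving 2^8 = 256 possible signatures; the indexing scheme and function names below are our illustration, not the IAT's actual implementation:

```python
import random

# Hedged sketch: one cast of the divining chain produces eight binary
# marks, and the resulting 8-bit signature indexes one of the 256 Odus
# in a database (mapping to verses omitted here).
def cast_opele(rng=random):
    marks = [rng.randint(0, 1) for _ in range(8)]        # 8 binary marks
    index = sum(bit << i for i, bit in enumerate(marks))  # value in 0..255
    return marks, index

marks, index = cast_opele()
print(marks, index)
```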
An airborne missile servo system (AMSS) is a complex time-varying nonlinear system, and its design is a multi-objective optimization problem. The fuzzy PID controller (FPC) has been demonstrated to be appropriate for complex time-varying nonlinear systems, but its design requires a tedious trial-and-error process. The Non-dominated Sorting Genetic Algorithm III (NSGA-III) is a multi-objective evolutionary algorithm with good generality and robustness that can greatly assist in parameter tuning for complex systems. This paper applies NSGA-III to parameter tuning in the design process of the FPC. The resulting FPCs are tested on a model of the AMSS in Simulink. For further comparison, the performance of a conventional PID controller and of the sectional PID controller widely used in engineering is also shown. The comparison shows that the NSGA-III-tuned FPCs perform better on the AMSS.
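To make the tuning problem concrete, the following sketch simulates a discrete PID loop on a toy first-order plant. This is the kind of controller whose gains an optimizer such as NSGA-III would search over; the plant model, gains, and step sizes here are arbitrary illustrations, not the paper's AMSS model:

```python
# Minimal discrete PID simulation on a first-order plant dy/dt = -y + u.
# The gains (kp, ki, kd) are exactly the parameters a tuner would optimize
# against objectives such as overshoot and settling time.
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (-y + u)                           # Euler step of the plant
    return y

# With these (arbitrary) gains the output settles near the setpoint.
print(simulate_pid(2.0, 1.0, 0.05))
```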
Ascertaining the Impact of Temperature, Rainfall, Water Level, and Water Discharge on Sediment Yield
The state of our natural environment is continuously changing. Various texts, global environmental monitoring bodies, and environment-focused research groups agree that the deterioration, if not checked, will in the near future make it impossible for living things to continue to exist in the ways we are accustomed to. As Earth's temperature has steadily increased, so has its sea level. This has brought about many changes in the landscape of different catchment areas as a result of flooding. Flooding, in turn, has also resulted in deaths and the proliferation of water-borne diseases. This work is motivated by the need to understand which factors most affect sediment yield in order to safeguard against its effects. The work examines temperature, rainfall, water level, and water discharge data from the Oyan gauging station of the Ogun-Osun River basin, in the south-western part of Nigeria. Our results show that all the factors considered have a considerable effect on sediment yield.
Neural networks represent a brain metaphor for information processing. These models are biologically inspired rather than exact replicas of how the brain actually functions. Neural networks have been shown to be very promising systems in many forecasting and business classification applications due to their ability to learn from data. This article aims to provide a brief overview of artificial neural networks. An artificial neural network learns by updating its network architecture and connection weights so that it can efficiently perform a task. It can learn either from available training patterns or automatically from examples or input-output relations.
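The weight-update idea described above can be sketched with the simplest possible case: a single perceptron learning the logical OR function from input-output examples. This toy example is ours, not from the article:

```python
# Hedged sketch of learning from input-output relations: on each example,
# the perceptron's weights are nudged in proportion to the prediction error,
# which is the core of how a neural network "learns by updating weights".
def train_perceptron(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 when prediction is correct
            w[0] += lr * err * x[0]      # weight update rule
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w, b = train_perceptron(data)
print(w, b)
```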
In this paper, a DFT-based OFDMA system with phase modulation (DFT-OFDMA-PM) is proposed. The proposed system exploits the advantages of PM: a constant-envelope (CE) signal and the ability to improve diversity over multipath channels. The performance of the proposed system in terms of bit error rate (BER) is investigated through simulation, and the results are compared with those of the recently proposed DCT-OFDMA-PM system and of conventional systems without PM. Moreover, the modulation index, the key parameter affecting the performance of PM systems, is also studied, and its optimum value is chosen via simulation. The simulation results show the effectiveness of the proposed system for broadband communications.
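The constant-envelope property mentioned above follows directly from phase modulation: mapping a real baseband onto the phase of a complex exponential keeps the magnitude fixed at one. The sketch below illustrates this with an IDFT-based multicarrier baseband; the symbol mapping and modulation index value are our illustration, not the paper's exact transceiver:

```python
import numpy as np

# Hedged sketch of the CE idea: phase-modulating a real multicarrier
# baseband m[n] with modulation index h gives s[n] = exp(j*2*pi*h*m[n]),
# whose magnitude is identically 1 (constant envelope, so no PAPR problem).
rng = np.random.default_rng(0)
N = 64
data = rng.choice([-1.0, 1.0], size=N)       # BPSK symbols
m = np.fft.ifft(data).real                    # real multicarrier baseband
h = 0.6                                       # modulation index (illustrative)
s = np.exp(1j * 2 * np.pi * h * m)            # phase-modulated signal
print(np.allclose(np.abs(s), 1.0))            # envelope check
```

In a PM system the modulation index h trades off BER against spectral spreading, which is why the paper selects its optimum value by simulation.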
Modeling a Fault Tolerant Control Mechanism for Cloud e-marketplaces using Raft Consensus Protocol (RCP)
Marketplaces have evolved from the traditional marketplace through the internet marketplace, the web service marketplace, and the grid marketplace to the cloud e-marketplace. The need for different customers to have rapid access to various services brought about cloud e-marketplaces. The goal of the cloud e-marketplace is to attract the largest possible number of buyers while ensuring reduced waiting time for customers and maximized profit for cloud service providers. Challenges such as security, performance, and fault tolerance are of great concern in the cloud market. While discussions of security and performance are ongoing, fault tolerance has yet to be fully addressed. Although some researchers have proposed the use of multiple servers to realize the main idea of the cloud e-marketplace, different kinds of faults still affect the performance of the cloud e-market, and balancing providers' cost against customers' waiting time remains a major concern. Various techniques have been proposed to solve these problems; however, they only work in a static environment, where faulty servers may lead to long waiting times. We propose the Raft consensus protocol as our fault-tolerance approach, and we use a dynamic environment as opposed to the static approach already discussed in the literature. In the dynamic environment, two fault-tolerance centers capable of surviving failures caused by server overload or congestion are used: the primary center and the reservoir center. The Raft Consensus Protocol is used in both centers to coordinate the servers and ensure that each server exists as either the leader, a candidate, or a follower. A waiting-time counter algorithm is developed that directs customer requests to the primary center when the waiting time t
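The leader/candidate/follower roles mentioned above are the core of Raft's leader election, which can be sketched as a small state machine. The class, timeouts, and cluster size below are illustrative of standard Raft behavior, not this paper's specific e-marketplace deployment:

```python
import random
from enum import Enum

# Hedged sketch of Raft's role transitions: a follower that hears no
# leader heartbeat before its randomized election timeout becomes a
# candidate for a new term; a candidate that collects a majority of
# votes becomes the leader.
class Role(Enum):
    FOLLOWER = "follower"
    CANDIDATE = "candidate"
    LEADER = "leader"

class Server:
    def __init__(self):
        self.role = Role.FOLLOWER
        self.term = 0
        self.timeout_ms = random.uniform(150, 300)  # randomized per Raft

    def on_election_timeout(self):
        """No heartbeat arrived in time: start an election."""
        self.role = Role.CANDIDATE
        self.term += 1

    def on_votes(self, votes, cluster_size):
        """Become leader on a strict majority of votes."""
        if self.role is Role.CANDIDATE and votes > cluster_size // 2:
            self.role = Role.LEADER

s = Server()
s.on_election_timeout()
s.on_votes(votes=3, cluster_size=5)  # 3 of 5 is a majority
print(s.role, s.term)
```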