Fog computing is a disruptive technology in the big data analytics area. Smartphone users and organizations rely on cellular services, and the data collected through them can support decision-making in disaster scenarios. However, disasters can damage the regular communication infrastructure. NTT provides an easily deployable solution for constructing an emergency communication network (ECN), but ECNs propagate big data slowly because of their limited transmission capabilities. A major challenge is therefore to integrate data processing into the ECN efficiently, so that data can be processed and transmitted effectively in disaster scenarios. In this paper, we present a detailed mathematical model of data processing and transmission in an ECN fog network; a proof that optimizing the overall delay is NP-hard; and a novel algorithm that minimizes the overall delay in wirelessly networked disaster areas and can run in real time. We evaluated the systems across various transmission speeds, processing speeds, and network sizes, and measured their calculation time, accuracy, and percentage error. The evaluation shows that the proposed disaster area adaptive delay minimization (DAADM) algorithm achieves a lower overall delay than several conventional solutions across various network sizes. Although the DAADM does not yield delays as small as those of the genetic algorithm (GA), its results follow the GA's curve, and it has one major advantage over the GA: a much shorter processing time, which allows the DAADM to be used in real-time systems where a GA solution would take far too long.
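The abstract does not describe the DAADM's internals, but the trade-off it reports (a fast heuristic that tracks a GA's delay curve while running far faster) can be illustrated with a toy sketch. Everything below is hypothetical: the task sizes, node speeds, makespan-style delay objective, and the greedy heuristic are illustrative stand-ins, not the paper's actual model or algorithm.

```python
import random

random.seed(0)

# Hypothetical instance: data sizes (MB) of tasks to process, and
# processing speeds (MB/s) of fog nodes in the ECN. Not from the paper.
tasks = [random.uniform(1.0, 10.0) for _ in range(12)]
speeds = [random.uniform(2.0, 5.0) for _ in range(4)]

def delay(assign):
    """Overall delay = completion time of the slowest node (makespan)."""
    loads = [0.0] * len(speeds)
    for size, node in zip(tasks, assign):
        loads[node] += size
    return max(load / speed for load, speed in zip(loads, speeds))

def greedy():
    """Fast heuristic: place each task on the node whose delay it
    increases the least (analogous in spirit to a real-time method)."""
    assign, loads = [], [0.0] * len(speeds)
    for size in tasks:
        best = min(range(len(speeds)),
                   key=lambda n: (loads[n] + size) / speeds[n])
        loads[best] += size
        assign.append(best)
    return assign

def ga(generations=200, pop_size=30, mut_rate=0.2):
    """Simple elitist GA over task-to-node assignments. Seeding the
    population with the greedy solution plus elitism guarantees the GA
    result is never worse than the greedy one."""
    rand_assign = lambda: [random.randrange(len(speeds)) for _ in tasks]
    pop = [greedy()] + [rand_assign() for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=delay)
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(len(tasks))
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < mut_rate:   # point mutation
                child[random.randrange(len(tasks))] = \
                    random.randrange(len(speeds))
            children.append(child)
        pop = elite + children
    return min(pop, key=delay)

print("greedy delay:", round(delay(greedy()), 3))
print("GA delay:   ", round(delay(ga()), 3))
```

The GA typically finds a slightly smaller delay, but only after hundreds of generations of population-wide evaluation, whereas the heuristic finishes in a single pass over the tasks; this mirrors the paper's finding that the DAADM trades a small amount of delay for real-time feasibility.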
ASJC Scopus subject areas
- Computer Science (General)