In Chapter 6, the concept of feedforward networks is introduced; these serve as the foundation for recurrent neural networks (RNNs). Text prediction is a natural analogy for the RNN, because predicting the next word always depends on prior knowledge of the sentence's contents. An RNN is a form of artificial neural network that recognizes a sequence of data and analyzes it in order to predict an outcome, and the LSTM is a type of RNN consisting of a stack of layers with neurons in each layer. The chapter also covers the issues each technique faces, along with possible remedies. One section is devoted to optimization algorithms in neural networks, which adjust a network's parameters, such as its weights and learning rate, to reduce loss. Another section surveys some of the most recent in-depth studies on combinations of steganography and neural networks. Finally, the chapter analyzes existing research on the topic from the previous five years (2017 to 2021).
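To make that optimization point concrete, here is a minimal gradient-descent sketch on a single linear neuron; it is a generic illustration of how an optimizer adjusts a weight to reduce loss, not the specific algorithms the chapter reviews.

```python
import numpy as np

# Toy data: learn y = 2x with one weight and a squared-error loss.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w, lr = 0.0, 0.05                        # weight and learning rate
for step in range(100):
    y_hat = w * x                        # forward pass
    grad = np.mean(2 * (y_hat - y) * x)  # dLoss/dw for mean squared error
    w -= lr * grad                       # gradient step reduces the loss

print(round(w, 3))                       # converges to ~2.0
```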
In Chapter 7, it is observed that cyber-physical systems (CPS) will be used in the majority of real-time scenarios in the future. The use of such technologies is unavoidable if the world is to become smarter. However, as the use of these technologies grows, so does the need for stronger privacy; users will not readily adopt such systems if the privacy component is compromised. Because cyber-physical systems draw on a variety of heterogeneous sensor data sources, incorporating a high level of privacy is becoming increasingly difficult for system designers. This chapter presents the applicability of the exact penalty function and its benefits in raising the privacy level of cyber-physical systems, compares it with existing privacy-preserving strategies in CPS, and discusses how the suggested privacy framework could be improved in the future.
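For reference, a standard exact ($\ell_1$) penalty formulation, given here in its generic textbook form rather than as the chapter's precise construction, replaces a constrained problem with an unconstrained one:

\[
\min_{x}\; f(x) \;+\; \rho \sum_{i=1}^{m} \max\{0,\, g_i(x)\} \;+\; \rho \sum_{j=1}^{p} \lvert h_j(x) \rvert ,
\]

where $f$ is the design objective (for example, utility loss), $g_i(x) \le 0$ and $h_j(x) = 0$ encode the constraints (here, privacy requirements), and $\rho > 0$ is the penalty parameter. The penalty is called exact because, for all $\rho$ above a finite threshold, minimizers of the penalized problem coincide with those of the original constrained problem.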
In Chapter 8, it is noted that the growing demand for the storage and transmission of multimedia data has been part of everyday life for many decades. Images and videos, as multimedia data, play an important role in creating an immersive experience. In today's technologically evolved society, data and information must be sent rapidly and securely; nevertheless, valuable data must be protected from unauthorized people. In this work, a deep neural network is used to develop a covert communication and textual data extraction strategy based on steganography and image compression. The original input textual image and the cover image are both pre-processed using spatial steganography, and the covert text-based images are then separated and embedded into the least significant bits of the cover-image pixels. Following that, the stego-images are compressed to provide a higher-quality image while saving storage space at the sender's end. The stego-image is then transmitted to the receiver over a communication link, and at the receiver's end the steganography and compression are reversed. This work raises a wealth of issues, making it an intriguing subject to pursue; the most crucial component of the task is choosing the right steganography and image-compression methods. The proposed technique, which combines image steganography and compression, achieves a higher peak signal-to-noise ratio.
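To illustrate the least-significant-bit step in isolation, the following is a minimal NumPy sketch assuming 8-bit grayscale arrays; the chapter's full pipeline (pre-processing, compression, and the deep network) is not reproduced here.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a flat array of 0/1 bits into the least significant
    bit of the first len(bits) pixels of an 8-bit cover image."""
    flat = cover.flatten()  # flatten() returns a copy, cover is untouched
    if bits.size > flat.size:
        raise ValueError("message longer than cover capacity")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear LSB, set bit
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits message bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

# Toy usage: hide 8 message bits in a 4x4 cover image.
cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
msg = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, msg)
assert np.array_equal(extract_lsb(stego, msg.size), msg)
```

Because only the lowest bit of each pixel changes, the stego image differs from the cover by at most one intensity level per pixel, which is what keeps the peak signal-to-noise ratio high.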
Chapter 9 shows that the number of mobile network-connected devices is steadily increasing. The 5G network will theoretically provide speeds of 20 gigabits per second, allowing customers to access data at rates of 100 megabits per second. There are an estimated 5 billion such devices around the world. With the advancement of wearable technology, a typical user can now carry up to two network-connected devices or engage in device-to-device (D2D) communication. Customers are attracted to the 5G network because it advertises low-latency data communication, faster access and data-transfer rates, and a more secure design. As the number of subscribers grows, concerns about information and digital protection will grow with it, in order to preserve the integrity of data security. As with any type of data security, there are always concerns about the safety of users and their sensitive information. This chapter discusses how to secure the diverse structures attached to these networks, where the networks are vulnerable to compromise, well-known attack tactics, and how to avoid technical vulnerabilities.
Chapter 10 explores how the modern Information Technology environment demands increasing value for money without sacrificing the effectiveness of the deployed components. The rising demand for storage, networking, and computation has fueled the growth of massive, complex data centers, the large server businesses that manage many of today's internet operations as well as economic, trading, and corporate operations. A data center can hold thousands of servers and consume as much electricity as a small city. The massive amount of computing power required to run such server systems raises a variety of concerns, including energy consumption, greenhouse gas emissions, redundancy, and restart operations, among others. Virtualization refers to a group of technologies that cover a wide range of applications and interests; it can be applied to hardware and software alike, as well as to innovations emerging at the edges of the field. This study demonstrates how virtualization technologies can be used to gradually transform a traditional data-center structure into a green data center, and it examines the cost benefits of adopting virtualization technology, which is recommended by practically every major company in the market. This technology can drastically reduce capital costs while keeping operating costs low over the following three years. Value is discussed in terms of cost and space, with space equating to future cost.
Chapter 11 studies the security of big data, and how to preserve performance while the data is transmitted over a network. Various studies have examined the topic of big data; many of them claimed to provide data security but failed to maintain performance. Several encryption techniques, including RSA and AES, have been used in past studies; however, applying these encryption technologies degrades the network system's performance. To address these concerns, the proposed approach employs compression mechanisms to minimize the file size before performing encryption. Furthermore, the data is split to increase the reliability of transmission, and the separated pieces are transferred over multiple routes.
If hackers attempt to collect that data through unauthorized means, they will not be able to obtain complete, meaningful data. By combining compression and splitting mechanisms with big data encryption, the suggested model improves the security of big data in a network environment. Furthermore, using a user-defined port and multiple paths during the split transmission of large data improves both the dependability and the security of big data over the network.
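A minimal sketch of that compress-then-encrypt-then-split order is shown below; it assumes AES-GCM from the Python cryptography package and zlib compression, while the chapter's actual ciphers, chunk sizes, and routing scheme are not specified here.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def prepare_chunks(data: bytes, key: bytes, n_routes: int) -> list[bytes]:
    """Compress, then encrypt, then split the ciphertext into
    n_routes pieces, one per transmission path."""
    compressed = zlib.compress(data)        # shrink before encrypting
    nonce = os.urandom(12)                  # standard AES-GCM nonce size
    ciphertext = nonce + AESGCM(key).encrypt(nonce, compressed, None)
    size = -(-len(ciphertext) // n_routes)  # ceiling division
    return [ciphertext[i:i + size] for i in range(0, len(ciphertext), size)]

def recover(chunks: list[bytes], key: bytes) -> bytes:
    """Reassemble the chunks, decrypt, and decompress."""
    blob = b"".join(chunks)
    nonce, ciphertext = blob[:12], blob[12:]
    return zlib.decompress(AESGCM(key).decrypt(nonce, ciphertext, None))

# Toy usage: an interceptor holding fewer than all three chunks
# sees only a fragment of ciphertext, which is not decryptable.
key = AESGCM.generate_key(bit_length=256)
payload = b"big data payload" * 1000
chunks = prepare_chunks(payload, key, n_routes=3)
assert recover(chunks, key) == payload
```

Compressing first also matters for a second reason: well-encrypted output looks random and is essentially incompressible, so reversing the order would forfeit the file-size reduction.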
Acknowledgments
We express our great pleasure, sincere thanks, and gratitude to the people who significantly helped, contributed to, and supported the completion of this book. Our sincere thanks to Fr. Benny Thomas, Professor, Department of Computer Science