Abstract:
Client-server systems have several design alternatives, chiefly the iterative server and the concurrent server. Choosing a server design improperly can result in inefficient use of time and process control. A server requires more process control than its clients, since it must respond to multiple queries and processes at the same time from different client platforms such as IPv4 or IPv6. This study analyzes the performance of IPv4 and IPv6 under five server designs: Iterative Server, Concurrent Fork Server, Concurrent Thread Server, Concurrent Pre-Fork Server, and Concurrent Pre-Thread Server. The experiments measured the CPU time, including kernel-mode and user-mode time, of each server over TCP sockets, assigning 5 to 50 clients with 500 to 5000 consecutive connections per client in each test. This study compares, discusses, and analyzes the time each server type spends responding to client queries. The paper reveals that among the five designs, the iterative server took the least time handling clients, while the concurrent fork server took the most CPU time handling multiple clients. Our experimental results show that IPv4 took less kernel-mode time in all five server designs, whereas IPv6 took less user-mode time only under the iterative, pre-fork, and pre-thread servers. Overall, however, IPv4 performs better than IPv6.
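To make the first two design alternatives concrete, here is a minimal Python sketch contrasting an iterative TCP echo server (one client at a time) with a concurrent-thread server (one thread per accepted connection). The echo protocol, message contents, and client counts are illustrative assumptions, not details taken from the paper.

```python
import socket
import threading

def handle(conn):
    # echo one message back to the client, then close the connection
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

def iterative_server(srv, n_clients):
    # Iterative design: the single server process serves one client
    # at a time; later clients wait in the listen queue.
    for _ in range(n_clients):
        conn, _ = srv.accept()
        handle(conn)

def threaded_server(srv, n_clients):
    # Concurrent-thread design: each accepted connection is handed
    # to a freshly created thread.
    workers = []
    for _ in range(n_clients):
        conn, _ = srv.accept()
        t = threading.Thread(target=handle, args=(conn,))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()

def run(server_fn, n_clients=3):
    # bind to an ephemeral port on the loopback interface
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(8)
    port = srv.getsockname()[1]
    replies = []

    def client(msg):
        c = socket.create_connection(("127.0.0.1", port))
        c.sendall(msg)
        replies.append(c.recv(1024))
        c.close()

    server_thread = threading.Thread(target=server_fn, args=(srv, n_clients))
    server_thread.start()
    clients = [threading.Thread(target=client, args=(b"ping%d" % i,))
               for i in range(n_clients)]
    for c in clients:
        c.start()
    for c in clients:
        c.join()
    server_thread.join()
    srv.close()
    return sorted(replies)
```

The pre-fork and pre-thread variants studied in the paper differ in that the worker pool is created before any connection arrives, avoiding per-connection fork/thread-creation cost.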
Abstract:
As the Internet continues to grow, Web server technology is becoming central to current Internet computing, since the performance of the servers directly affects the performance of the Web sites using them. In this paper, we present the design and implementation of our Web server AWS. The purpose of this Web server implementation is to provide a research vehicle and full control of the source code of a purely Java-based scalable Web server that is HTTP 1.1-compliant and integrated with a Servlet container. The built-in Servlet container supports CGI scripts by using a CGI handler Servlet. The Web server supports two models of scaling to boost its performance: Transparent and Redirect.
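The abstract names the two scaling models without detail. One common way a redirect-based model can work is for the front-end server to answer with an HTTP 302 pointing the client at a replica; the sketch below (replica host names and the round-robin choice are assumptions for illustration, not AWS's documented behaviour) shows that pattern:

```python
import itertools

class RedirectBalancer:
    # Redirect-style scaling: the front server does not proxy the
    # request; it answers with a 302 that sends the client directly
    # to one of the replica servers, chosen round-robin here.
    def __init__(self, replicas):
        self._cycle = itertools.cycle(replicas)

    def redirect(self, path):
        host = next(self._cycle)
        return "302 Found", {"Location": "http://%s%s" % (host, path)}
```

A Transparent model, by contrast, would typically proxy or hand off the connection so the client never sees the replica's address.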
Abstract:
Cloud storage technology has seen massive growth in development and attention, driven by the growth of unstructured data (Fu et al., Trans Inf Forensics Sec 12(8):1874-1884, 2017 [11]). Under this schema there are risks of privacy leakage for face detection, and data control rights can be lost (Dinh et al., Wirel Commun Mob Comput 13(18):1587-1611, 2013 [2]). A Three-Layer Approach is designed to store and access data securely on the cloud server. The fog server concept is integrated into the current cloud, so that data can be stored on multiple nodes rather than on a single storage medium. The data is partitioned into a number of blocks, and the encryption standard of each block is controlled by the data owner (Feng, A data privacy protection scheme of cloud storage 14(12):174-176, 2015 [5, 14]). When a data user tries to access a file, he must request file access from the cloud server; authorized users can then view the file decrypted, while for all others the data cannot be viewed as plain text.
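The block-partitioning idea can be sketched as follows. Note this is a toy illustration: the XOR keystream derived from SHA-256, the block size, and the key names are assumptions for demonstration, not the paper's scheme, whose per-block encryption standard is chosen by the data owner.

```python
import hashlib

BLOCK_SIZE = 16  # illustrative block size in bytes

def keystream(key: bytes, index: int, length: int) -> bytes:
    # Derive a per-block keystream from the owner's key and the block
    # index, so every block is encrypted independently.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(
            key + index.to_bytes(4, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def split_and_encrypt(data: bytes, key: bytes):
    # Partition the data into blocks and XOR each with its own keystream.
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [bytes(a ^ b for a, b in zip(blk, keystream(key, i, len(blk))))
            for i, blk in enumerate(blocks)]

def decrypt_and_join(blocks, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption reuses the same keystream.
    return b"".join(bytes(a ^ b for a, b in zip(blk, keystream(key, i, len(blk))))
                    for i, blk in enumerate(blocks))
```

Because each block carries its own keystream, individual blocks can live on different fog or cloud nodes without exposing the whole file.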
Abstract:
The power management of server farms (Sf) is becoming a significant problem in economic terms. Server farms comprise millions of servers all over the world that must be electrically powered. Research is thus expected to investigate methods for reducing Sf power consumption. However, saving power may come at the cost of performance (high response times), in other words, at the cost of Sf Quality of Service (QoS). Using an Sf model, this paper investigates Sf power management strategies that trade off power saving against QoS. Various optimizing Sf power management policies are studied in combination with the effects of job queueing disciplines. The (policy, discipline) pairs, or strategies, that optimize Sf power consumption (minimum absorbed Watts), Sf performance (minimum response time), and Sf performance-per-Watt (minimum response time-per-Watt) are identified. With the model, the work the server manager must do to direct his Sf is greatly simplified, since the universe of all possible (π, δ) strategies he must choose from is drastically reduced to a very small set of most significant strategies.
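Selecting the optimizing (policy, discipline) pairs can be sketched as a simple minimization over measured strategies. The policy and discipline names and all numbers below are hypothetical stand-ins, not results from the paper:

```python
def best_strategies(measurements):
    """measurements maps a (policy, discipline) pair to (watts, resp_time).
    Returns the pairs minimizing absorbed Watts, response time,
    and response-time-per-Watt, respectively."""
    min_watts = min(measurements, key=lambda s: measurements[s][0])
    min_resp  = min(measurements, key=lambda s: measurements[s][1])
    min_rpw   = min(measurements, key=lambda s: measurements[s][1] / measurements[s][0])
    return min_watts, min_resp, min_rpw

# hypothetical measurements for three (policy, discipline) strategies
data = {
    ("power-down-idle", "FIFO"): (120.0, 0.9),   # frugal but slow
    ("always-on",       "FIFO"): (300.0, 0.2),   # fast but power-hungry
    ("power-down-idle", "SJF"):  (130.0, 0.5),   # a middle ground
}
```

In the paper's terms, this reduction is what shrinks the universe of all (π, δ) strategies down to the few most significant ones the server manager must actually consider.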
Abstract:
The rapid growth of the Internet increases the importance of connecting to existing databases. The Web, with all its versatility, is putting database security to the test. Access to web-enabled databases containing sensitive information such as credit card numbers must be made available only to those who need it. The focus of this paper is to shed some light on how databases can be used in a secure manner when connecting to the World Wide Web, by investigating the application of current state-of-the-art database security services.
Abstract:
To realize a networked test system and interconnect different test control networks with an open architecture, this paper proposes an open architecture for a Distributed Networked Test Control System centered on the Test Control Center Server, based on analysis of various field-bus test control networks and information network technology. The center is composed of the Test Server, the Control Server, and the Calibration Server. Through a mixed time/event-triggered communication model composed of a Control Link and a Data Link, built on time-triggered and event-triggered technology, the architecture implements interconnection and real-time communication among all logical servers. Synchronization of time, clock, and data is achieved through the servers' synchronization unit. This open architecture can not only interconnect different test control networks and systems, but also acquire real-time data from industry and daily life, share this information over the network, and satisfy people's requirements for digitized living.
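The mixed time/event-triggered model can be illustrated with a small Python sketch: the Control Link sends messages at fixed multiples of a period (time-triggered), while Data Link messages arrive at arbitrary timestamps (event-triggered) and are interleaved by time. The periods, horizon, and message labels are illustrative assumptions, not values from the paper:

```python
def control_link_slots(period, horizon):
    # Time-triggered Control Link: messages go out at fixed multiples
    # of the period, independent of any events.
    t, slots = 0.0, []
    while t < horizon:
        slots.append(round(t, 6))
        t += period
    return slots

def merge_traffic(slots, event_times):
    # Event-triggered Data Link messages are interleaved with the
    # Control Link schedule by timestamp.
    tagged = [(t, "control") for t in slots] + [(t, "data") for t in event_times]
    return sorted(tagged)
```

A fixed, globally known Control Link schedule is what lets the servers' synchronization unit align time, clock, and data across the logical servers.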