Abstract:
Test cost minimisation approaches have traditionally been devoted to minimising "execution costs" while maximising coverage or reliability. However, in a runtime testing context, the amount of coverage or reliability that can be achieved (in other words, the system's Runtime Testability) is limited by the adverse effects that the interference of runtime tests has on the system. Supporting runtime testing therefore introduces an additional cost in "preparatory" activities in software (e.g., testable components) and in hardware (e.g., more memory) before certain runtime tests can be executed. In this paper we present a low-complexity cost minimisation algorithm for the optimal selection of preparation activities, based on a near-optimal trade-off between preparation cost and a structure-based measurement of Runtime Testability, coined the Runtime Testability Metric (RTM). We perform a theoretical and empirical validation of RTM, showing that RTM is indeed a valid and reasonably accurate measurement on a ratio scale. We also present empirical data demonstrating the near-optimal performance and low computational cost of our algorithm.
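The cost/testability trade-off described above can be illustrated with a small sketch. This is not the paper's algorithm: the action names, costs, RTM gains, and the greedy gain-per-cost heuristic are all hypothetical, chosen only to show the shape of a knapsack-like selection of preparation activities under a budget.

```python
# Hypothetical sketch: greedy selection of "preparation" activities.
# Names, costs, and RTM gains below are illustrative, not from the paper.

def select_preparations(actions, budget):
    """Pick actions maximising total RTM gain within a cost budget.

    actions: list of (name, cost, rtm_gain) tuples.
    Greedy by gain/cost ratio -- a simple near-optimal heuristic for
    this knapsack-like trade-off, not the paper's exact algorithm.
    """
    ranked = sorted(actions, key=lambda a: a[2] / a[1], reverse=True)
    chosen, spent, gain = [], 0, 0.0
    for name, cost, rtm_gain in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            gain += rtm_gain
    return chosen, spent, gain

actions = [
    ("isolate-component-A", 4, 0.30),
    ("add-test-interface-B", 2, 0.25),
    ("duplicate-state-C", 5, 0.20),
]
chosen, spent, gain = select_preparations(actions, budget=6)
```

With a budget of 6, the heuristic picks the two activities with the best gain-per-cost ratios and skips the third, which no longer fits.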
Abstract:
We wish to examine the intelligence of a system implemented with mobile agents versus a system implemented with non-mobile (static) agents. Toward this goal we develop and validate a metric at the boundary of static object-oriented complexity metrics and runtime performance metrics. We then examine a small case study with equivalent mobile-agent and non-mobile-agent solutions.
Abstract:
Runtime testing is emerging as the solution for the integration and assessment of highly dynamic, high-availability software systems where traditional development-time integration testing is too costly, or cannot be performed. However, in many situations, extra effort will have to be invested in implementing appropriate measures to enable runtime tests to be performed without affecting the running system or its environment. This paper introduces a method for improving the runtime testability of a system, which provides an optimal implementation plan for applying measures that avoid the runtime tests' interference. The plan is calculated by considering the trade-off between testability and implementation cost. The computation of the implementation plan is driven by an estimation of runtime testability, and is based on a model of the system. Runtime testability is estimated independently of the test cases and focuses exclusively on the architecture of the system at runtime.
Abstract:
In many digital forensic investigations, large amounts of material need to be examined. Investigations involving video files are one instance where the amount of material can be very large. To aid in examinations involving video, automated tools for video content classification can be employed. In this work we examine the performance of several different video classifiers in the context of forensic detection of a small number of relevant videos among a large number of irrelevant videos. The higher-level task performance that is of interest is thus the ability to detect a relevant video in a limited amount of time. The performance on this higher-level task is a combination of the classification performance and the run-time performance of the classifiers. A variety of video classification techniques are available in the literature. This work examines task performance for six video classification approaches from the literature using Monte Carlo simulations. The results illustrate the interdependence between run-time and classification performance, and show that high classification performance in terms of true positive and false positive rates does not necessarily lead to high task performance.
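The interplay between scan speed and classification accuracy can be sketched with a simplified Monte Carlo simulation in the spirit of the one the abstract describes. All parameter values here (rates, timings, collection sizes) are hypothetical, not taken from the paper's experiments.

```python
import random

def expected_detection_time(tpr, fpr, scan_time, review_time,
                            n_relevant, n_irrelevant, trials=500, seed=0):
    """Monte Carlo estimate of the mean time until the first relevant
    video is found and confirmed. A flagged video costs review_time for
    a human inspection; every video costs scan_time to classify."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        videos = [True] * n_relevant + [False] * n_irrelevant
        rng.shuffle(videos)
        t = 0.0
        for is_relevant in videos:
            t += scan_time  # classifier run-time per video
            flagged = rng.random() < (tpr if is_relevant else fpr)
            if flagged:
                t += review_time  # investigator inspects every hit
                if is_relevant:
                    break  # task done: relevant video confirmed
        total += t
    return total / trials

# A fast but noisy classifier versus a slow but precise one
# (illustrative numbers only).
fast = expected_detection_time(tpr=0.80, fpr=0.10, scan_time=0.5,
                               review_time=60, n_relevant=3, n_irrelevant=500)
slow = expected_detection_time(tpr=0.95, fpr=0.01, scan_time=5.0,
                               review_time=60, n_relevant=3, n_irrelevant=500)
```

Comparing the two estimates shows why true/false positive rates alone do not determine task performance: false positives burn review time, while slow classifiers burn scan time, and either can dominate depending on the collection.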
Abstract:
Code instrumentation is a mechanism that allows modules of programs to be completely rewritten at runtime. With the advent of virtual machines, this type of functionality is becoming more interesting because it allows the introduction of new functionality after an application has been deployed, easy implementation of aspect-oriented programming, security verification, dynamic software upgrading, and more. The Runtime Assembly Instrumentation Library (RAIL) is one of the first frameworks to implement code instrumentation on the .NET platform. It specifically addresses the gap between the reflection capabilities of .NET and its code emission functionalities. RAIL gives the programmer an object-oriented view of the code of an application, allowing assemblies, modules, classes, references and even intermediate code to be easily manipulated. This paper addresses the design and implementation of RAIL, along with the difficulties and lessons learned while building a framework for code instrumentation in .NET.
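The core idea of rewriting a deployed module at runtime can be illustrated with a small Python analogue. This is only an analogy: RAIL rewrites .NET assemblies at the intermediate-language level, which this sketch does not model, and the `instrument` helper and toy module below are hypothetical.

```python
import functools
import types

def instrument(module, func_name, before=None, after=None):
    """Replace module.func_name at runtime with a wrapped version that
    invokes optional callbacks around each call -- a lightweight
    analogue of code instrumentation after deployment."""
    original = getattr(module, func_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        if before:
            before(func_name, args)
        result = original(*args, **kwargs)
        if after:
            after(func_name, result)
        return result

    setattr(module, func_name, wrapper)
    return original  # keep a handle for de-instrumentation

# A toy module standing in for an already-deployed component.
toy = types.ModuleType("toy")
toy.add = lambda a, b: a + b

calls = []
instrument(toy, "add", before=lambda name, args: calls.append((name, args)))
result = toy.add(2, 3)  # behaves as before, but the call is now observed
```

The wrapped function preserves the original behaviour while new functionality (here, call logging) is attached without touching the component's source.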
Abstract:
Runtime enforcement aims at verifying a software execution trace against formally specified properties and enforcing the properties when the software's execution is about to violate them. Runtime enforcement is usually realized through runtime monitors and is widely used in different application areas. Runtime monitors are injected into the base program and guarantee that its execution will satisfy the property. However, without carefully analyzing the runtime monitor and the base program, it is rather easy to break the base program's original properties. We extend the work of [1] to resolve this problem. The contributions of this paper can be summarized as follows: (1) a categorization of runtime monitors based on their influence on a program's properties is proposed; (2) an approach for detecting a runtime monitor's category is given.
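A runtime monitor that keeps an execution trace inside a property can be sketched as follows. The property ("no write after close"), the class, and the suppression strategy are hypothetical examples of the general idea, not the paper's formal framework or its monitor categories.

```python
# Minimal sketch of a suppression-style runtime monitor: violating
# events are blocked so the observed trace never leaves the property
# "no write after close". Names here are illustrative only.

class MonitoredBuffer:
    def __init__(self):
        self.data = []
        self.closed = False
        self.suppressed = 0  # violating events blocked by the monitor

    def write(self, item):
        if self.closed:
            self.suppressed += 1  # suppress the event; property holds
            return False
        self.data.append(item)
        return True

    def close(self):
        self.closed = True

buf = MonitoredBuffer()
buf.write("a")
buf.close()
buf.write("b")  # violating write: suppressed, not executed
```

Note how the monitor changes observable behaviour (the second write silently fails), which is exactly why the abstract warns that a monitor can break the base program's other properties if it is injected without analysis.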