Figure 5.7 Iterative process to create a workable DSA service agreement.
With standard cloud services, a customer should be able to compare service agreements from two different providers and select the provider that best meets its needs. The provider of an IaaS attempts to optimize the use of infrastructure resources dynamically in order to create an attractive service agreement. If the scripted scenarios in Figure 5.7 are selected to represent deployed scenarios accurately, and if the iterative process in Figure 5.7 is run long enough and with enough samples, the deployed system should meet the service agreement that was created. However, there should still be room to refine the cognitive algorithms, policies, rule sets, and configuration parameters after deployment if post‐processing analysis necessitates such changes. A good system design should require only the refining of policies, rule sets, and configuration parameters, without the need for software modification. Such a design allows the deployed cognitive engine to morph based on post‐processing analysis results.
5.3.3 Examples of DSA Cloud Services Metrics
This section presents some examples of DSA cloud services metrics that can be considered in DSA design. Note that these are examples and the designer can choose to add more metrics depending on the system requirements and design analysis.
5.3.3.1 Response Time
Metric name: Response time.
Metric description: Response time between when an entity requests a DSA service and when the service is granted.
Metric measured property: Time.
Metric scale: Milliseconds.
Metric source: Depends on the hierarchy of the networks. The source is always a DSA cognitive engine but the response can be local, distributed cooperative, or centralized. The response can also be deferred to a higher hierarchy DSA cognitive engine.
Note: Response time can be expressed as more than one metric. Response time for a local decision is measured differently from response time obtained from a gateway or a central arbitrator, so the design can create a separate response time metric for each case.
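As a minimal sketch of the idea above, the fragment below computes a separate average response time per decision level (local, gateway, or central arbitrator). The record layout, level labels, and function name are hypothetical, chosen only for illustration.

```python
from statistics import mean

# Hypothetical sketch: compute per-level response-time metrics from
# (decision_level, request_ms, grant_ms) records. The level labels
# follow the hierarchy described in the text.
def response_times_by_level(records):
    by_level = {}
    for level, request_ms, grant_ms in records:
        by_level.setdefault(level, []).append(grant_ms - request_ms)
    # Average response time (in milliseconds) per decision level.
    return {level: mean(deltas) for level, deltas in by_level.items()}

samples = [
    ("local",   1000.0, 1004.0),   # local DSA cognitive engine decision
    ("local",   2000.0, 2006.0),
    ("gateway", 3000.0, 3040.0),   # decision deferred to a gateway engine
    ("central", 4000.0, 4250.0),   # decision deferred to the central arbitrator
]
print(response_times_by_level(samples))
# {'local': 5.0, 'gateway': 40.0, 'central': 250.0}
```

Keeping the levels separate, rather than averaging all grants together, preserves the distinction the note draws between local and deferred decisions.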
5.3.3.2 Hidden Node
Metric name: Hidden node detection/misdetection.
Metric description: Success or failure in detecting a hidden node.
Metric measured property: Success or failure.
Metric scale: Binary.
Metric source: An external entity, a primary user, files a complaint that the designed system is using its spectrum.
Note: Scripted scenarios are needed to evaluate this metric. It is evaluated by an external entity, not by the designed system.
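Since the metric scale is binary per scenario, a simple aggregate over the scripted scenarios is the misdetection rate. The sketch below assumes each scripted run records whether a hidden node was present and whether the system detected it; the field names are hypothetical.

```python
# Hypothetical sketch: score hidden-node detection over scripted scenarios.
# Each scenario records whether a hidden node was present and whether the
# engine detected it; a miss corresponds to a primary-user complaint.
def misdetection_rate(scenarios):
    trials = [s for s in scenarios if s["hidden_node_present"]]
    misses = sum(1 for s in trials if not s["detected"])
    return misses / len(trials) if trials else 0.0

scripted = [
    {"hidden_node_present": True, "detected": True},
    {"hidden_node_present": True, "detected": False},  # complaint filed
    {"hidden_node_present": True, "detected": True},
    {"hidden_node_present": True, "detected": False},  # complaint filed
]
print(misdetection_rate(scripted))  # 0.5
```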
5.3.3.3 Meeting Traffic Demand
Metric name: Global throughput.
Metric description: Traffic going through the system over time (global throughput efficiency).
Metric measured property: Throughput averaged over time.
Metric scale: bps.
Metric source: Global measure of traffic going through the system. Successful dynamic use of spectrum resources should increase the wireless network's capacity to accommodate higher traffic in bps.
Note: This metric is system dependent. Some systems, such as cellular systems, link this traffic demand to revenue. The metric is of interest not only for gaining insight into achieving higher throughput but also for accommodating the higher number of users that increases revenue. Some users' rates can be lowered while their service continues in order to accommodate more users, as long as the service agreement is met.
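The global throughput metric described above can be sketched as a time-weighted average over measurement intervals. The interval layout (bits delivered, interval length in seconds) is an assumption made for illustration.

```python
# Hypothetical sketch: time-averaged global throughput in bps from
# per-interval measurements of (bits delivered, interval length in seconds).
def global_throughput_bps(intervals):
    total_bits = sum(bits for bits, _ in intervals)
    total_seconds = sum(seconds for _, seconds in intervals)
    return total_bits / total_seconds

# Three 10-second measurement windows of global traffic:
traffic = [(1_000_000, 10.0), (2_500_000, 10.0), (1_500_000, 10.0)]
print(global_throughput_bps(traffic))  # about 166,666.7 bps
```

Weighting by interval length, rather than averaging per-interval rates, keeps the metric correct when measurement windows are unequal.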
5.3.3.4 Rippling
Metric name: Rippling.
Metric description: The stability of the assigned spectrum.
Metric measured property: Time.
Metric scale: Minutes.
Metric source: The DSA cognitive engine can track the time between two consecutive frequency updates.
Note: Rippling can negatively affect the previous metric (meeting global throughput); it can reduce the network throughput. This metric can be measured at the node level, at the gateway level, and at the central arbitrator level. Rippling at higher levels (e.g., at the central arbitrator) can have a much worse impact than rippling at a local node. Evaluation of this metric depends on where it is measured.
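One way to sketch the rippling metric is as the mean time between consecutive frequency reassignments, computed from the timestamps the DSA cognitive engine tracks; a longer mean interval indicates a more stable spectrum assignment. The function and variable names are hypothetical.

```python
# Hypothetical sketch: rippling as the mean time (in minutes) between
# consecutive frequency updates observed at one measurement point
# (node, gateway, or central arbitrator).
def mean_update_interval_minutes(update_times_min):
    ordered = sorted(update_times_min)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    # With fewer than two updates there is no rippling to measure.
    return sum(gaps) / len(gaps) if gaps else float("inf")

# Frequency-change timestamps (minutes) seen at a local node:
node_updates = [0.0, 12.0, 15.0, 45.0]
print(mean_update_interval_minutes(node_updates))  # 15.0
```

Computing the same quantity separately per level supports the point above that rippling at the central arbitrator must be evaluated differently from rippling at a local node.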
5.3.3.5 Co‐site Interference Impact
Metric name: Co‐site impact.
Metric description: The ability to reduce co‐site impact on the assigned spectrum.
Metric measured property: SNIR.
Metric scale: dB.
Metric source: The DSA cognitive engines can track the SNIR from the collected spectrum sensing information and create an average for each waveform according to the waveform signal characteristics.
Note: Co‐site impact may or may not be tolerable by a given waveform. This metric can show the average time the SNIR stayed below a threshold considered tolerable while co‐site interference was known to exist and was accepted because of a policy or because of the limited availability of alternative spectrum bands that can be used.8
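A minimal sketch of this metric follows, assuming the intolerable condition is the SNIR dropping below a waveform-specific threshold and that sensing samples arrive at a fixed rate (so the fraction of samples approximates the fraction of time). All names and values are illustrative.

```python
# Hypothetical sketch: per-waveform co-site impact from spectrum sensing
# samples. Reports the average SNIR and the fraction of time the SNIR
# stayed below that waveform's tolerable threshold.
def cosite_impact(samples, thresholds_db):
    impact = {}
    for waveform, snir_values in samples.items():
        below = sum(1 for v in snir_values if v < thresholds_db[waveform])
        impact[waveform] = {
            "avg_snir_db": sum(snir_values) / len(snir_values),
            "time_below_threshold": below / len(snir_values),
        }
    return impact

sensing = {"waveform_a": [12.0, 8.0, 6.0, 14.0]}   # SNIR samples in dB
limits = {"waveform_a": 10.0}                      # tolerable SNIR floor (dB)
print(cosite_impact(sensing, limits))
# {'waveform_a': {'avg_snir_db': 10.0, 'time_below_threshold': 0.5}}
```

Averaging per waveform, as the metric source describes, reflects that different waveform signal characteristics tolerate different levels of co‐site interference.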
5.3.3.6 Other Metrics
The above metrics are just examples of what can be considered in DSA cloud services metrology. The design can create categories of metrics. For example, the design can consider a QoS metric category that includes packet delay, loss, and jitter. The design can consider a security metric category that covers exposing a node to an eavesdropper or exposing a network to a jammer; such a metric could measure the time of exposure to these security risks. The design can also consider the need for human intervention. A network administrator monitoring the use of DSA resources may intervene in cases of complete failure of the system to recover autonomously. The number of incidents that require human intervention can be turned into a metric. Some IaaS references name this metric the automatization degree.
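One plausible reading of the automatization degree, sketched below under that assumption, is the fraction of failure incidents from which the system recovered without a network administrator's intervention; the exact definition varies across IaaS references, and the function name is hypothetical.

```python
# Hypothetical sketch: automatization degree as the fraction of failure
# incidents the system recovered from autonomously, i.e. without a
# network administrator's intervention.
def automatization_degree(total_incidents, human_interventions):
    if total_incidents == 0:
        return 1.0  # nothing failed, so autonomy was never tested
    return 1.0 - human_interventions / total_incidents

# 50 recorded failure incidents, 2 of which required an administrator:
print(automatization_degree(50, 2))
```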
5.3.3.7 Generalizing a Metric Description
One of the challenges facing the designer of DSA as a set of cloud services for heterogeneous networks is ensuring consistency. We discussed in the previous chapters how different waveforms can have different link status metrics and how a common definition of link health metrics across all the waveforms used in a heterogeneous system is needed so that services such as reactive routing are optimized appropriately. The design of DSA as a set of cloud services also needs to ensure that a metric is measured consistently with regard to all types of waveforms and all points where the metric is measured. Co‐site interference impact as a DSA service metric is a good example of creating consistency. Different signals have different tolerance to co‐site interference. Co‐site interference can have different levels of spectral density in different frequency harmonics. The co‐site impact as a DSA service metric has to be normalized between all