Figure 1.1. Application mapping on the rate–latency plane with regard to the reliability requirement (Fettweis et al. 2019)
The reliability requirement, as part of the Tactile Internet, affects HW through new procedures and algorithms, which ultimately lead to expanded workloads in terms of additional data throughput and shorter deadlines for processing that data. Potentially, even new operating systems for MPSoCs centered on threat isolation and security will need to be investigated; this is outside the scope of this chapter, but the conclusions are the same. The prospective future 6G applications span vastly different data rates (10 kb/s – 1 Tb/s, i.e. a 10⁸× span) and latency requirements (2 ms – 2 s, i.e. a 10³× span).
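As a quick sanity check of the quoted spans (a minimal sketch using only the corner values cited above):

    # Rate and latency spans of prospective 6G applications
    rate_span = 1e12 / 10e3      # 1 Tb/s over 10 kb/s -> 1e8
    latency_span = 2.0 / 2e-3    # 2 s over 2 ms       -> 1e3
    print(rate_span, latency_span)  # 100000000.0 1000.0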
We have a broad and dynamic workload space, with the total number of applications steadily increasing. These applications are sometimes concurrent in their operation and, at other times, exclusive, which adds another layer of complexity to consider. For example, a handset user taking a stroll through the city is either watching a video stream or playing an AR-based game, not both at the same time.
The challenges presented above, combined with new signal processing tasks associated with radio issues when working in new frequency ranges, make HW design choices particularly hard: the design must efficiently balance architecture complexity and size against power consumption and cost. The HW challenge is certainly the “half a trillion dollar question”1. There is no other way to treat this than as an onion problem and to peel it layer by layer, going step by step, the first step being the standard specifications.
1.2.2. Standard specifications
High variability is built into modern standards through their many modes of operation. One aspect in which these modes may differ is their workload. Let us analyze the workloads by looking at the most advanced 5G standard.
1.2.2.1. Processing deadline variability
If we look at the transmission time interval (TTI)2 consisting of 14 Orthogonal Frequency Division Multiplexing (OFDM) symbols3 for 5G (3GPP 2019e), extrapolate its duration and compare it (see Figure 1.2) to the duration of 3GPP 4th Generation Long Term Evolution (4G) (3GPP 2018a) TTI of 14 OFDM symbols, we can make the following observations:
1) TTI duration is scaled with 2^μ (it is divided by 2^μ), where μ is a parameter4;
2) TTI duration is not a function of the bandwidth (BW) allocated for that TTI;
3) for a fixed BW and the same number of OFDM symbols per TTI (i.e. the same amount of data), the duration of the TTI differs (i.e. the deadline to process that data differs);
4) there is a 16× difference in processing deadlines between the corner cases.
Depending on the mode of operation, the computational engine may need to process the same amount of data with deadlines shifting 16× during operation.
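This 16× spread can be reproduced with a short sketch, assuming a 14-symbol TTI, the 1 ms reference duration at μ = 0, and the range μ = 0–4 (the function name is ours, chosen for illustration):

    # TTI duration of a 14-OFDM-symbol TTI as a function of the parameter mu;
    # the reference duration at mu = 0 is 1 ms, halving with each increment of mu.
    def tti_duration_ms(mu: int) -> float:
        return 1.0 / (2 ** mu)

    durations = {mu: tti_duration_ms(mu) for mu in range(5)}  # mu = 0..4
    print(durations)  # {0: 1.0, 1: 0.5, 2: 0.25, 3: 0.125, 4: 0.0625}
    print(max(durations.values()) / min(durations.values()))  # 16.0 -> the 16x deadline spread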
The processing deadline for baseband physical layer processing is constrained by the hybrid automatic repeat request (HARQ) media access control layer (MAC-L) procedure, which is 3 TTIs long. Let us assume, as a rule of thumb, that 1/3 of the 3 TTI budget is associated with waveform modulation, while the other 2/3 are reserved for other processing steps5. With that in mind, the deadlines simplify to the TTI duration for the given subcarrier spacing parameter μ.
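A minimal sketch of this rule of thumb, under the assumptions above (3-TTI HARQ budget, one third of it for waveform modulation; the constant names are ours):

    # Simplified modulation deadline: one third of a 3-TTI HARQ budget,
    # i.e. exactly one TTI duration for the given parameter mu.
    HARQ_BUDGET_TTIS = 3
    MODULATION_SHARE = 1.0 / 3.0

    def modulation_deadline_ms(mu: int) -> float:
        tti_ms = 1.0 / (2 ** mu)                              # 14-symbol TTI duration
        return HARQ_BUDGET_TTIS * tti_ms * MODULATION_SHARE   # == tti_ms

    print([modulation_deadline_ms(mu) for mu in range(5)])  # [1.0, 0.5, 0.25, 0.125, 0.0625] ms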
Figure 1.2. Comparing 14 OFDM symbols’ TTI duration of 4G and 5G
1.2.2.2. Data throughput variability
Next, let us investigate the throughput requirements of the 5G specifications. At this point, we choose to compute and show the requirements per handset modem chip. Note that there are additional device classes that support only a subset of the operating modes shown here. However, an advanced handset of the future should support all of the modes shown here in order to exploit the full potential of the different frequency ranges.
We previously performed a specification analysis (Damjancevic et al. 2019); however, over the past six months the 5G specifications have expanded, and here we show the updated information. In Figure 1.3 and Figure 1.4, we have organized the throughput information and presented it in a readable form for FR1 and FR2, respectively. For comparison, we also plot the 4G data throughput requirement, which coexists in FR1 along with other legacy standards. Future FR2 5G systems will coexist with Super High Frequency (SHF) and Extremely High Frequency (EHF) communication and radar systems, which are region-specific, adding an extra layer of flexibility. Throughput is shown in maximum resource blocks (RB)6 over time, per channel BW, for one spatial MIMO data stream layer7,8, in compliance with the active (3GPP 2019c, d) specifications. From the figures, we can observe that:
1) there are many possible modes of operation;
2) there is a 352× difference in processing data load, in terms of RBs, between the corner cases: (LTE, 1.4 MHz) at the lower end and (μ = 3, 400 MHz) at the upper end; the short sketch below reproduces this ratio.
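A minimal sketch of that ratio, assuming RBs are counted per 14-symbol TTI and taking the per-channel RB counts from the 3GPP channel bandwidth tables (6 RBs for LTE 1.4 MHz over a 1 ms TTI, 264 RBs for 400 MHz at μ = 3 over a 0.125 ms TTI):

    # Processing load corners in kRB/s (RBs counted per 14-symbol TTI)
    def rb_rate_krb_per_s(rbs_per_tti: int, tti_ms: float) -> float:
        return rbs_per_tti / tti_ms   # RB per ms == kRB per s

    lte_low = rb_rate_krb_per_s(6, 1.0)      # LTE, 1.4 MHz        ->    6 kRB/s
    nr_high = rb_rate_krb_per_s(264, 0.125)  # NR, mu = 3, 400 MHz -> 2112 kRB/s
    print(lte_low, nr_high, nr_high / lte_low)  # 6.0 2112.0 352.0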
Figure 1.3. Processing load in kRB/s for 5G NR FR1 (Damjancevic et al. 2019)
These observations again point to a greater need for flexibility compared to the 4G standard, with many more modes to support on top of the throughput difference. Now that we have identified the throughput corners in RBs, we can assign the smallest and highest QAM orders and code rates allowed by the 4G9 and 5G10 specification sets to the lower and upper ends, respectively, and calculate the corresponding bit rates.
Figure 1.4. Processing load in kRB/s for 5G NR FR2
This sets the low end at 200 kb/s per 1.4 MHz BW channel and spatial data layer11. Note that this is a hard bit rate, and the rate at different processing steps may be higher due to oversampling.
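As a back-of-envelope check of this low-end corner (a sketch only: QPSK and an effective code rate of roughly 0.1 are illustrative assumptions standing in for the lowest MCS entry of the 4G tables):

    # Low-end information bit rate estimate for the LTE 1.4 MHz corner
    SUBCARRIERS_PER_RB = 12
    SYMBOLS_PER_TTI = 14

    def info_bit_rate_bps(rbs_per_tti, tti_s, bits_per_symbol, code_rate):
        re_per_s = rbs_per_tti * SUBCARRIERS_PER_RB * SYMBOLS_PER_TTI / tti_s
        return re_per_s * bits_per_symbol * code_rate

    # 6 RBs per 1 ms TTI, QPSK (2 bits/symbol), assumed effective code rate of ~0.1
    print(info_bit_rate_bps(6, 1e-3, 2, 0.1))  # ~2.0e5, i.e. about 200 kb/s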