title: Bayesian auction game theory-based DBA for XG symmetrical PON
authors: Hussain Mohammadani, Khalid; Butt, Rizwan Aslam; Memon, Kamran Ali; Faizullah, Safiullah; Saifullah; Ishfaq, Muhammad
date: 2022-04-29
journal: Opt Quantum Electron
DOI: 10.1007/s11082-022-03721-9

The next-generation passive optical networks (NG-PONs), i.e., 50G-PON and time-division-multiplexing/wavelength-division-multiplexing (TWDM) PON, offer very high bandwidth with improved quality of service. In these PONs, efficient dynamic bandwidth allocation (DBA) becomes even more important for reducing upstream delays, delay variance, and bandwidth waste. These quality-of-service metrics lead to improved quality of experience (QoE) for the end-users in addition to increased revenue for the service providers. This study introduces the game theory concept into the bandwidth distribution process in PON. Specifically, the Bayesian auction game theory (BAGT) process is used in the DBA process to address the unfair and inefficient distribution of upstream bandwidth to the optical network units (ONUs) in XG symmetrical PON (XGS-PON). The proposed BAGT scheme allocates the excess bandwidth to all ONUs in proportion to the demands they report via the bidding process. To validate the performance of the BAGT scheme, we also compare it with existing DBA schemes, namely the proportional allocation scheme (PAS), improved bandwidth utilization (IBU), and optimized round-robin (ORR) methods. The simulation results show that the proposed scheme yields higher system throughput and lower upstream delays than the other schemes. BAGT DBA also improves bandwidth utilization by 38% to 50% compared to the IBU, ORR, and PAS schemes and exhibits the minimum frame loss ratio.

Internet users are increasing exponentially, with 70% of young people using the Internet regularly. A recent ITU report shows that the bandwidth usage growth rate in 2020 was 6% higher than in the previous year (International Telecommunication Union (ITU) Measuring Digital Development Facts and Figures, 2020). The COVID-19 pandemic has further accelerated broadband subscriptions by 33%. Today's passive optical networks (PONs) have gradually evolved from gigabit TDM PON to TWDM PON to cater to these soaring bandwidth demands (Mohammadani et al. 2022). This decade began with deployment initiatives for the 10-gigabit-capable symmetric PON (XGS-PON), a next-generation PON (NG-PON) variant. Further, the standardization of higher-speed PON (HSP-PON), i.e., 50 Gb/s/λ (50G-PON), is under development by the ITU-T (Wang et al. 2020; Zhang et al. 2020). In XGS-PON, the OLT uses a dynamic bandwidth assignment (DBA) scheme to distribute the available bandwidth resources efficiently among the ONUs, because all ONUs share a common upstream wavelength. The typical DBA process works through the upstream and downstream (US and DS) frames. Figure 1 shows the flow of the bandwidth report (Ri) from the ONU to the OLT and the bandwidth grant (Gi) from the OLT to the respective ONUs. The OLT provides the required bandwidth to the ONUs in the downstream traffic in the form of a bandwidth map, and all ONUs send their queue state to the OLT in the upstream traffic in the form of buffer occupancy reports. At the OLT, the DBA process controls this arbitration mechanism, and all ONUs follow it.
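To make the report/grant exchange concrete, the minimal Python sketch below models one polling cycle under assumptions not taken from the paper (hypothetical names, a single queue per ONU, and an approximate frame capacity): each ONU reports its buffer occupancy via DBRu, and the OLT turns the reports into capped grants in the bandwidth map.

```python
# Minimal sketch of the XGS-PON report/grant exchange described above.
# Hypothetical illustration only: names, the single-queue ONU model and the
# simple capped allocation are assumptions, not the paper's BAGT scheme.

FRAME_BYTES = 155_520  # approximate payload of one 125-us XGS-PON upstream frame (assumed)

def collect_reports(onu_queues):
    """Each ONU reports its buffer occupancy R_i (bytes waiting) via DBRu."""
    return {onu: sum(frame_sizes) for onu, frame_sizes in onu_queues.items()}

def build_bwmap(reports, capacity=FRAME_BYTES):
    """The OLT's DBA turns the reports into grants G_i <= R_i, bounded by frame capacity."""
    grants = {}
    for onu, requested in reports.items():
        granted = min(requested, capacity)
        grants[onu] = granted
        capacity -= granted
    return grants

if __name__ == "__main__":
    queues = {"onu1": [1500] * 30, "onu2": [1500] * 80, "onu3": [64] * 200}
    print(build_bwmap(collect_reports(queues)))
```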
In the literature, many studies have investigated DBA schemes for bandwidth distribution in PONs. However, achieving minimal upstream delay, efficient bandwidth utilization, and excess bandwidth allocation based on the traffic classes remains a significant challenge in designing DBA mechanisms for NG-PONs. These quality metrics help to improve latency and QoE for the end-users and increase revenue for the service providers. Some DBA schemes grant instantly upon receiving a bandwidth request, like IPACT (Kramer et al. 2002), while other DBA schemes (Lai et al. 2015) grant only after all buffer reports have been received. However, these schemes are only suitable for IEEE-compliant PONs, as ITU PONs are synchronous. The more advanced approach, in this case, is to grant a minimum bandwidth following the service level agreement (SLA) during a service interval (SI) and, in the case of unused bandwidth, to distribute it proportionally among the ONU traffic queues, as in (Chang et al. 2006). However, the authors of that study ignored bandwidth distribution based on the prioritized traffic classes, i.e., the transmission containers (TCONTs) TCONT1 (T1) to TCONT5 (T5). GIANT (Paper et al. 2004), an ITU-compliant PON DBA, allocates the residual bandwidth to the surplus component (the usable surplus allocation bytes) of the T3 and T4 traffic groups only once during a service interval (SI), which causes lengthy delays for all traffic classes. The Immediate Allocation with Colorless Grant (IACG) DBA dynamically allocates bandwidth per downstream frame (DSF) and uses TCONT type-5 (T5) to distribute the remaining bandwidth to all ONUs. For all TCONTs, it additionally operates independent service interval (SI) timers and byte counters, all of which makes it computationally costly and complex (Han 2014). Other DBA techniques utilize repeated scheduling (Han 2012) or a borrow-and-refund policy (Han et al. 2013) but suffer from large queue delays or neglect low-priority services. Recently, game theory has been applied in different optical access network (OAN) environments to solve the unfair bandwidth distribution problem among the ONUs (Dalamagkas et al. 2018). That study presented a typical game theory-based DBA (i.e., PAS) scheme, which achieved fair bandwidth allocation among the ONUs of an XGPON network and significantly improved its average US latency. Although the game theory approach is interesting, to the best of our knowledge no other authors have applied it to excess bandwidth distribution according to the application bandwidth requirement in XGS-PON. In a PON, a shared pool of resources (US bandwidth) is available at the OLT, and each ONU can try to monopolize the bandwidth sharing process for its own benefit. However, traditional bandwidth allocation schemes cannot redistribute upstream bandwidth effectively, especially when the traffic load from each ONU changes dramatically and the OLT finds some unallocated or unused bandwidth resources. In such scenarios, game theory-based bandwidth redistribution is preferable for different user types with varying latency requirements. As a significant contribution of this paper, we formulate a novel DBA method utilizing the first-price sealed-bid (FPSB) auction as a Bayesian auction game for bandwidth allocation in ITU-compliant PON. Moreover, the Bayesian auction game theory gives each user the opportunity to obtain bandwidth efficiently according to its demand by choosing its strategy independently.
Therefore, in comparison with existing DBA algorithms, the proposed Bayesian auction game theory-based DBA algorithm improves the bandwidth assignment process, minimizing the upstream delays and increasing the throughput. This paper first reviews in Sect. 2 studies on existing DBAs to clarify the unsatisfactory bandwidth allocation problems. Section 3 explains the system description and proposes the novel DBA scheme. Section 4 describes the simulation setup. Section 5 presents the simulation results with discussion, and Sect. 6 concludes the article, followed by the appendix and references.

The unallocated bandwidth assignment phase is very important for the DBA process as it plays a significant role in reducing the PON upstream latency at low and medium traffic loads. In existing studies, various DBAs distribute this unallocated bandwidth differently: in a fixed manner (Butt et al. 2017; Han 2014), in proportion to ONU demand (Han et al. 2013; Mikaeil et al. 2017), or according to a prediction rule (Kamran Ali Memon et al. 2019a, b). The CBU scheme (Butt et al. 2018a, b) uses a fixed bandwidth assignment approach for TCONT2, TCONT3, and TCONT4, which is assigned using the dynamic bandwidth report upstream (DBRu) slots with an odd interval-based polling approach. The CBU allocates bandwidth to TCONT3 and TCONT4 after the surplus phase allocation (SPA). It assigns the residual frame bandwidth for that service class to each ONU on a percentage basis. Compared to efficient bandwidth utilization (EBU) (Han et al. 2013), it reduces T4 traffic delay but cannot reduce T2 and T3 traffic delay. The weakness of the fixed approach is that TCONT2 and TCONT3 always get a higher bandwidth share than TCONT4, which severely degrades its performance at higher traffic loads. Generally, the fixed assignment approach is inefficient and leads to bandwidth wastage because it does not take the ONU demand into account. The IBU algorithm (Butt et al. 2017) works in a similar way but differs from CBU in its polling and scheduling mechanisms. The CBU actually improves IBU by integrating the unused bandwidth assignment approach of the EBU scheme with the IBU. Overall, all the fixed bandwidth assignment schemes lack fairness of bandwidth assignment. The prediction schemes try to address this deficiency by predicting the bandwidth requirement of the ONU from its traffic demand pattern. There are different approaches to predicting the ONU's future demand: for example, the study in (Kamran Ali Memon et al. 2019a, b) estimates bandwidth demand based on a circular buffer that holds the last hundred demand values; the prediction method uses these values to estimate the mean and standard deviation for predicting the ONU's potential demand. Another reported approach uses a combination of a linear predictive model and a Kalman filter to estimate the ONU bandwidth demand for the low- and high-priority ONUs, respectively. However, in general, the prediction approach does not perform well with bursty traffic due to its high randomness. Assigning bandwidth in proportion to the ONU bandwidth demand performs better than both of the other discussed approaches. The Optimized Round-Robin (ORR) scheme (Mikaeil et al. 2017) is a combination of the GIANT DBA scheme and round-robin (RR) DBA. It assigns the surplus bandwidth of less loaded T-CONTs to highly loaded T-CONTs in every allocation cycle (t).
ORR checks the current buffer occupancy report ($R_j^t$) of TCONT (j) and assigns the grant using $G_j^t = \min(R_j^t, W_j^t)$. Following that, ORR verifies overloading: if TCONT (j) requires more bandwidth than the current allocation bytes limit ($W_j^t$), it increases the maximum allocation bytes limit ($w_{max}$) of TCONT (j) for the next cycle. In this algorithm, the authors compute the total excess bandwidth ($Ex_j^t$) as every cycle completes and then allocate the total surplus equally among the TCONTs (j) using $Ex_j^t = \left(BW - \sum_{j=0}^{k} W_j^t\right)/j$, where BW is the total available bandwidth for one cycle. However, the weakness of ORR is that it does not use a colorless grant approach for distributing the bandwidth left unallocated at the end of the guaranteed and surplus allocation phases. Therefore, its unallocated bandwidth ratio may be slightly higher at low traffic loads. The use of game theory in the DBA process is another attractive approach to distributing the unallocated bandwidth according to ONU demand. Such a scheme for ensuring a fair bandwidth distribution across ONUs in XG-PON is the proportional allocation scheme (PAS) (Dalamagkas et al. 2018). This scheme uses a game theory model based on the tragedy-of-the-commons rule for n participants who compete to share bandwidth resources. In this scheme, the guaranteed bandwidth demand is served first; the OLT then collects all pending Alloc-ID requests ($R_i$) and allocates further bandwidth using the game theory model when the following conditions are met: (a) there is still available bandwidth, (b) at least two Alloc-IDs request additional bandwidth, and (c) the additional bandwidth requests exceed the available bandwidth. PAS calculates the grant size ($a_i$) using the Nash equilibrium equation $a_i = (S_i \times C)/\sum_j S_j$. The final bandwidth allocation is the largest feasible allocation across all Alloc-IDs (i) under the utility function ($\mu$). Although the PAS algorithm offers a balanced and fair bandwidth distribution, the possibility of an increased mean delay remains, since PAS does not assign bandwidth as per the ITU PON standards.
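For illustration, the short Python sketch below implements only the proportional rule attributed to PAS above, $a_i = (S_i \times C)/\sum_j S_j$; the function name, the request values, and the surrounding scaffolding are assumptions for this example, not code from the cited work.

```python
# Sketch of the proportional (Nash-equilibrium-style) allocation rule used by PAS:
# a_i = (S_i * C) / sum_j S_j.  Illustrative only; identifiers are assumed.

def proportional_allocation(requests, capacity):
    """Split the leftover capacity C among pending requests S_i in proportion to their size."""
    total = sum(requests.values())
    if total == 0:
        return {alloc_id: 0 for alloc_id in requests}
    return {alloc_id: (s * capacity) / total for alloc_id, s in requests.items()}

if __name__ == "__main__":
    # Pending Alloc-ID requests (bytes); their sum exceeds the leftover capacity,
    # which is the situation in which PAS applies the proportional rule.
    pending = {"alloc1": 30_000, "alloc2": 10_000, "alloc3": 60_000}
    print(proportional_allocation(pending, capacity=50_000))
```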
The above literature review validates the argument that existing DBA schemes either ignore or cannot handle excess and unallocated bandwidth. These schemes therefore suffer from increased upstream delays and unallocated bandwidth ratios. For priority-based traffic classes (e.g., the TCONT classes), an efficient bandwidth distribution mechanism with excess bandwidth allocation would result in low upstream latency for different applications, better QoE, and increased revenue. Therefore, a new DBA is required to increase the bandwidth utilization rate in XGS-PON. The proposed work aims to develop a DBA scheme based on the game theory model that efficiently manages bandwidth distribution and handles excess/unused bandwidth to improve the system latency and the revenue of XGS-PON vendors. A sophisticated DBA is mandatory to efficiently manage the upstream bandwidth for ONUs according to their demand.

This study makes the following assumptions to develop the game theory model-based DBA for NG-PON: (1) The OLT acts as a game manager who sells the product (bandwidth) and executes the DBA as a game process. (2) The set of participating ONUs is modeled as the bidders. Each bidder knows its own private bid value, and the bidders compete to acquire the necessary portion of resources from a shared pool of bandwidth resources. (3) For each choice of bidding strategy, each bidder's type, i.e., a TCONT within an ONU, receives a maximized payoff in the shape of the final bandwidth allocation. All participants are solely concerned with maximizing their payoffs, which depend on their demands and valuations. Figure 2 displays the implementation of the game theory model for BAGT DBA in the NG-PON environment: Fig. 2a shows the detailed processing flow, and Fig. 2b shows the implementation in the NG-PON environment. Next, we introduce the first-price sealed-bid (FPSB) auction and the Bayesian auction game while describing the crucial parameters and notations related to an NG-PON such as XGS-PON. Lastly, this section explains the modified DBA process, the associated scheduling, and the bandwidth assignment process that follows FPSB as a Bayesian game theory approach, called Bayesian auction game theory DBA (BAGT-DBA) in this paper. Table 1 describes the necessary game theory notations and parameters and illustrates their relationship with the NG-PON for the proposed BAGT DBA scheme.

The generalized FPSB auction mechanism is applied when n bidders compete for the k-th slot position in a contest to distribute a resource. The process starts by asking each participant to send a sealed bid (secret offer) to the game executor, which means participants do not know each other's bid offers (Cheng et al. 2018). The executor assigns the first slot to the bidder ($b_i$) offering the maximum bid, the second slot to the second-highest bidder, and so on. Bayesian games can solve the problem of the FPSB auction (Han & Liu 2015) by maintaining the secrecy of the bids of each type of bidder. The executor is responsible for maintaining the equilibrium condition termed the Bayesian Nash equilibrium (BNE). Due to the limited information available, each player decides its bid independently, considering its demand and available investment. Thus, the game executor can achieve fairness of resource allocation without being prejudiced against any player. We consider a strategic game with incomplete information, called the Bayesian game G = (N, J, B, µ), comprising a finite set of bidders (N) with type set J and a set of actions B. The payoff (μ) of each bidder (i) depends on its type and on the submitted bids. First, we analyze the simple case of two bidders and then develop a generalized model for n bidders. Assume that two bidders compete for the same resource and simultaneously submit bids $b_k$ and $b_i$. The auctioneer awards the item to the game winner, who pays its own (highest) bid. If both bid the same amount, a coin toss determines the winner. Equation (1) states the assumed utilities, or payoffs (μ), of the bidders, where $R_i^j$ later denotes the remaining unserved bid value of bidder i ∈ N of type j:
$$\mu_i = \begin{cases} v_i - b_i, & b_i > b_k \\ (v_i - b_i)/2, & b_i = b_k \\ 0, & b_i < b_k \end{cases} \tag{1}$$
Each bidder knows its own value for the resource but does not know the value of its competitor, because the game scenario is Bayesian. It is assumed that $v_1$ and $v_2$ are uniformly distributed in the interval [0,1], as is generally the case in Bayesian auction models (Gibbons 1992). Case 1: In the first-price sealed-bid auction with two bidders whose unknown valuations are drawn from the uniform distribution U[0,1], the unique symmetric BNE is given by Eq. (2), $b_i(v_i) = v_i/2$, i = 1, 2. In this BNE, each bidder uses its best strategy and maximizes its expected payoff.
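For completeness, the standard argument behind this two-bidder result is sketched below (reproduced here as a convenience rather than taken verbatim from the paper): fix the opponent's strategy at a constant fraction of its valuation and maximize the expected payoff implied by Eq. (1).

```latex
% Sketch of the standard two-bidder derivation with v_i ~ U[0,1].
% Suppose bidder 2 plays b_2(v_2) = a v_2 for some constant 0 < a <= 1.
\begin{aligned}
\mathbb{E}[\mu_1 \mid b_1] &= (v_1 - b_1)\,\Pr(b_1 > a v_2)
                            = (v_1 - b_1)\,\frac{b_1}{a},
                            \qquad 0 \le b_1 \le a,\\
\frac{d}{d b_1}\,\mathbb{E}[\mu_1 \mid b_1] &= \frac{v_1 - 2 b_1}{a} = 0
  \;\Longrightarrow\; b_1^{*}(v_1) = \frac{v_1}{2}.
\end{aligned}
```

By symmetry, $b_2^{*}(v_2) = v_2/2$, which is consistent with the assumed constant fraction $a = 1/2$ (ties occur with probability zero and do not affect the expectation).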
Further, its full proof can be found in (Gibbons 1992). Case 2: For the FPSB auction with n bidders whose valuations ($v_i$) are independently drawn from a uniform distribution on [0,1], the (unique) symmetric equilibrium is given by the strategy profile $\left(\tfrac{n-1}{n}v_1, \ldots, \tfrac{n-1}{n}v_n\right)$. Case 2 is directly adopted from (Watson 2013) for n bidders in a Bayesian game. Let n ≥ 2 bidders play the auction game. We assume that there is a symmetric BNE in which all bidders bid a constant fraction of their valuation, as given by Eq. (3), $b_i = a\,v_i$, for some number a that is the same for all players and lies between zero and one in the uniform scenario. If player $p_k$ with valuation $v_k$ wins the game, its bid $b_k$ was greater than every other player's bid, and its expected payoff $\mu_k$ can be represented using Eq. (4) as $\mu_k = (v_k - b_k)\,\Pr(b_k > b_j,\ \forall j \neq k)$.

When the game process starts, the OLT and the DBA are initially unaware of the bidding values of the bidders. Therefore, the DBA process executes an auction scenario in the time interval (t, t + Δ) of fixed cycle duration Δ = 125 μs, equal to the frame duration of XGS-PON. The game executor (the DBA) asks bidder (i), i.e., ONU (i), to submit secret bids ($b_i^j$) for the segment of the product, the upstream bandwidth, that it wishes to obtain according to its valuation. Bidder (i) has its private valuation $b(v_i)$, independently drawn from the uniform distribution $v_i \in [0, v_{max}]$. Therefore, according to the FPSB auction rules, the winner's payoff is equal to its bid $b_i^j$. Thus, in our Bayesian game, the cost and the bid value are the same and are identical to $b_i^j$ for each bidder (i). The OLT is the game manager and always tries to clear the market by assigning the resource ($FB_u$) according to the bids, so that $\sum_{i,j} \mu_i^j = FB_u$, where $FB_u$ is the total available upstream bandwidth and $\mu_i^j$ represents the bandwidth payoff for type (j) of bidder (i). We assume that during the BAGT DBA process, each bidder (i) chooses its best strategy individually, with the highest valuation or bid value, in order to win and obtain most of the bandwidth allocation for its TCONT (j). The scheduling process at the manager (OLT) is responsible for collecting all the possible secret bids of all types of participating bidders from the upstream frames arriving at the OLT. These bids ($b_i^j$) are sent in the buffer occupancy reports of each TCONT (j) of ONU (i). Unlike the IEEE PON, the ITU-compliant DBA process does not wait for the arrival of all the bids before executing; the DBA process runs every downstream cycle at the OLT. The sum of all $b_i^j$ may exceed $FB_u$; however, the sum of the payoffs always satisfies Eq. (5). In each DS cycle, the proposed DBA assigns bandwidth to all TCONTs in three phases implementing priority-based scheduling (i.e., T1 > T2 > T3 > T4). The working of the proposed BAGT scheme is explained in Fig. 3 with the help of a flowchart and can be divided into three phases. The first phase (brown dashed square) represents the dedicated fixed bandwidth ($DFB_i^j$) allocation to every ONU (i) for TCONT (j). In this phase, fixed bandwidth allocation with two auction value models, common value and private value, is considered. In the common value model, every bidder (i) receives the same bandwidth for TCONT (1). In the private value model, each bidder (i) may receive a different bandwidth size over the other TCONTs (j > 1), but not more than a predefined fixed limit.
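To illustrate the common/private value split of this first phase, the minimal Python sketch below assigns the SLA-fixed bytes to T1 and caps the other TCONTs at their fixed limits. Because Eqs. (6)-(7) are not reproduced in this text, the min() cap and all identifiers here are an assumed reading rather than the paper's exact formulation.

```python
# Sketch of Phase 1 (dedicated fixed bandwidth) under the common/private value split.
# Assumed interpretation: T1 always receives its SLA-fixed DFB (common value);
# T2-T4 receive at most their DFB, limited by what they actually bid (private value).

def phase1_fixed_allocation(bids, dfb):
    """bids[onu][tcont] and dfb[onu][tcont] in bytes; returns the Phase-1 payoff mu[onu][tcont]."""
    payoff = {}
    for onu in dfb:
        payoff[onu] = {}
        for tcont, fixed in dfb[onu].items():
            if tcont == 1:                       # common value model: constant payoff
                payoff[onu][tcont] = fixed
            else:                                # private value model: capped by the SLA limit
                payoff[onu][tcont] = min(bids[onu].get(tcont, 0), fixed)
    return payoff

if __name__ == "__main__":
    dfb = {"onu1": {1: 2_000, 2: 7_000, 3: 7_000, 4: 0}}
    bids = {"onu1": {2: 12_000, 3: 3_000, 4: 9_000}}
    print(phase1_fixed_allocation(bids, dfb))
```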
The second phase (orange dashed square) represents the game process, during which the excess bandwidth is distributed to update the bandwidth allocation of the game winner (i), i.e., its TCONTs (j > 1). The third phase (blue dashed square) is the colorless grant (CG) bandwidth allocation; during this phase, the still-unallocated bandwidth is assigned to ONU (i) for TCONT (4). The proposed work does not violate any of the ITU-compliant traffic class definitions, and it does not prioritize T4 over T2 and T3 during a service interval (SI).

As discussed above, the first phase of the flowchart in Fig. 3 assigns the fixed bandwidth allocation to TCONT (j) of ONU (i). First, bandwidth is assigned using the common value model of game theory to TCONT (1) of every ONU (i) according to Eq. (6), where $DFB_i^j$ denotes the dedicated fixed frame bytes agreed in the SLA for ONU (i) over TCONT (j). This fixed bandwidth allocation is assigned to TCONT (1) of every ONU (i) in every DS cycle as a constant payoff, in accordance with the common value model. For the remaining TCONTs (j), Eq. (7) is used to compute the payoff using the private-value game theory model. The payoff to TCONT (j), i.e., $\mu_i^j$, is limited only by the $DFB_i^j$ agreed by the bidder in the SLA, and the assignment is made with strict class priority as per the ITU standard. In the second phase, the unallocated bandwidth is assigned as excess bandwidth using the private auction model based on the Bayesian auction game theory concept. Equation (9) computes the further payoff $\mu_i^j$ against the remaining unserved bid $R_i^j$ of bidder (i). All bidders have an equally likely chance of participating in the game process and winning it without any prejudice; however, we use an adaptive load approach for the auction process, so Eq. (9) can be simplified to Eq. (10). Further, Eq. (11), which is the sum of Eq. (6) and Eq. (10), gives the maximum expected payoff of a bidder (i) after Phase 2. After Phase 2, BAGT executes Phase 3 of the flowchart, in which the remaining unallocated bandwidth is distributed as the colorless grant (CG). This allocation is distributed among the TCONT (4) queues of all bidders ($b_i^4$). First, the unallocated bandwidth left after the execution of Phases 1 and 2 is computed using Eq. (12), i.e., $FB_u - \sum_{i,j}\mu_i^j$. This bandwidth is then distributed equally among all bidders ($b_i^4$) for best-effort traffic only, using Eq. (13). The colorless grant is sent to every ONU (i) using TCONT (5), as also done in (R. A. Butt et al. 2018a, b).

The BAGT scheme uses Algorithm-1 to assign the excess bandwidth through the game theory-based auction process in Phase 2 of the flowchart shown in Fig. 3. It takes as input the bidders (i ∈ N), the TCONTs (j ∈ J), $FB_u$, $R_i^j$, and the valuations ($v_i^j$). It assigns bandwidth to each TCONT (j > 1), which is sent to ONU (i) using the BWmap field of the downstream frames of the XGS-PON. First, the algorithm sorts all $R_i^j$ in descending order (Line 1). Then, the bandwidth assignment is made proportional to the unserved bid value $R_i^j$, subject to the availability of $FB_u$. Line 2 computes the total sum (sumB) of the remaining unserved bid ($R_i^j$) values. In Lines 3 to 9, the algorithm computes the maximum payoff ($\mu_i^j(max)$) in a while loop as long as $FB_u$ and sumB are greater than zero and the ONU index (i) is less than n, the total number of ONUs. Line 4 uses Eq. (10), in which bandwidth is distributed based on the highest $R_i^j$. Lines 5 and 6 update $FB_u$ and sumB after the payoff computation and assignment to the game winner (i).
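A compact Python rendering of Algorithm-1 as described above (sort descending, then allocate in proportion to the remaining unserved bids while $FB_u$ and sumB stay positive) is given below. Since Eq. (10) itself is not reproduced in this text, the proportional rule inside the loop is an assumed interpretation, and all identifiers are illustrative rather than the authors' code.

```python
# Sketch of Algorithm-1 (Phase 2, excess bandwidth auction) as described above.
# The proportional payoff rule standing in for Eq. (10) is an assumed interpretation.

def phase2_excess_auction(unserved_bids, fbu):
    """unserved_bids: {(onu, tcont): R_i^j in bytes} for TCONT j > 1; fbu: leftover bytes."""
    # Line 1: sort all remaining bids R_i^j in descending order.
    order = sorted(unserved_bids, key=unserved_bids.get, reverse=True)
    # Line 2: total of the remaining unserved bids (sumB).
    sum_b = sum(unserved_bids.values())
    payoff = {key: 0 for key in unserved_bids}
    # Lines 3-9: serve the highest bidders first, in proportion to their bids.
    for key in order:
        if fbu <= 0 or sum_b <= 0:
            break
        r = unserved_bids[key]
        # Assumed Eq. (10): share of the remaining pool proportional to R_i^j, never above R_i^j.
        mu = min(r, fbu * r / sum_b)
        payoff[key] = int(mu)
        # Lines 5-6: update the remaining pool and the bid total after each assignment.
        fbu -= payoff[key]
        sum_b -= r
    return payoff, fbu  # any leftover fbu feeds Phase 3 (colorless grant)

if __name__ == "__main__":
    bids = {("onu1", 2): 9_000, ("onu2", 3): 5_000, ("onu1", 4): 2_000}
    print(phase2_excess_auction(bids, fbu=10_000))
```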
The efficiency of the proposed algorithm in big-O notation is analyzed as follows. The complexity of Line 1 is O(n log n) and of Line 2 is O(n). The complexity of the excess bandwidth calculation (Lines 3 to 9) is O(n log n). The proposed algorithm runs for 3 TCONT types instead of 4; hence, the computational time complexity of the proposed algorithm is 3 × (O(n log n) + O(n) + O(n log n)), which is asymptotically equal to O(n log n).

The simulation framework is based on the ITU-T XGS-PON standard (G Series, 2020). OMNeT++ 5.5 is used to evaluate the performance of the proposed scheme in simulation, similar to (R. A. Butt et al. 2017). The performance of the proposed scheme is also compared with the earlier proposed DBA schemes PAS, ORR, and IBU. Table 2 lists the key simulation parameters. A single OLT connects to 16 ONUs through a splitter node. Each ONU consists of four TCONTs as per the ITU standards. To simulate a 20 km distance between the OLT and the ONUs, we set RTT = 210 μs (Infinera 2020). Users can be provisioned with 300 Mbps to 1 Gbps using 10G PON technology (ZTE 2020); therefore, in this paper the provisioned bandwidth per ONU is 800 Mbps, with a 10 Gbps upstream and downstream line rate. For the bandwidth distribution, our XGS-PON testbed follows the earlier PON studies (Butt et al. 2017). Therefore, we configured $AB_{min1}$ = 12,500 bytes with $SI_{max1}$ = 10, which amounts to 80 Mbps (10%) for T1. We used $AB_{min2}$ = 28,125 bytes with $SI_{max2}$ = 5, which corresponds to 360 Mbps (45%) for T2 traffic. We assigned $AB_{min3}$ = $AB_{sur3}$ = 28,125 bytes with $SI_{min3}$ = $SI_{max3}$ = 10 for T3 traffic, which amounts to 180 Mbps of guaranteed (22.5%) and 180 Mbps of non-assured (22.5%) bandwidth for the T3 TCONTs. We assigned $AB_{sur4}$ = 62,500 bytes with $SI_{min4}$ = 10 for T4 traffic, which results in a bandwidth reservation of 400 Mbps (50%) on a best-effort basis. For injecting traffic into the network, we use both a Poisson traffic model and a self-similar traffic model. Each ONU is configured with a dedicated instance of the traffic generator running inside it independently, as described in (Mohammadani et al. 2020; Ali et al. 2019). To present a comparative analysis of the four DBAs, we simulate all the algorithms with identical simulation parameters for a fair comparison, under a Poisson-distributed traffic scenario with exponentially varying inter-arrival times and a self-similar traffic scenario. The traffic load is varied from 0.01 to 0.99. The performance of the BAGT scheme is compared with the other schemes (IBU, PAS, ORR) in terms of the upstream delay of Type 1 (T1), Type 2 (T2), Type 3 (T3), and Type 4 (T4) traffic, the unallocated upstream bandwidth ratio (UBR), the average frame loss ratio, and the upstream throughput.

The Poisson distribution is used with exponentially varying inter-arrival times (IAT) for the traffic frames. The traffic arrival rate (λ) per ONU is calculated using Eq. (14) for a selected load. Figure 4 illustrates the comparative performance of all DBAs in terms of US delay versus traffic load for all four traffic types (T1, T2, T3, and T4), throughput in Gbps, upstream unallocated bandwidth ratio (%), bandwidth utilization ratio (%), and frame loss ratio for two traffic types (T3 and T4), respectively. We have considered TCONT1 (T1) to model voice traffic with a fixed bandwidth requiring a constant bit rate (CBR).
The T1 upstream delay of all four DBAs can be observed in Fig. 4a. Figure 4b presents the upstream delay of T2 for all four DBAs under the Poisson traffic model. The ORR and PAS DBAs perform quite closely, with a difference of 9% in upstream delay at lower traffic loads. Due to inefficient utilization of the excess bandwidth, the ORR scheme shows up to 54% higher delays than both IBU and BAGT for Poisson traffic. At a traffic load of 0.95, Fig. 4b shows that the T2 upstream delay of PAS is 99.8%, 99.2%, and 67% higher than that of the BAGT, IBU, and ORR schemes, respectively. Although the PAS scheme has a fair and balanced bandwidth distribution method, it ignores the high demand of the ONUs for T2, which degrades its performance, especially at higher traffic loads. BAGT DBA has the lowest delay for T2 traffic at traffic loads below 0.8, and this delay increases only slightly at higher traffic loads. We consider T3 traffic to have a variable bit rate that does not need entirely guaranteed bandwidth. Figure 4c presents the upstream delay of T3 traffic for all four DBA schemes under the Poisson traffic model. The upstream delays increase with the traffic load for all the DBA schemes. Figure 4c shows that, under the Poisson traffic model, both BAGT and IBU exhibit lower delays than the PAS and ORR schemes at all loads, while PAS and ORR have up to 78% and 83% higher T3 upstream delay against them, respectively, due to inefficient utilization of the excess bandwidth. The T3 upstream delay of BAGT is about 32% less than that of IBU at lower loads. The ORR and PAS DBAs show close performance, with a difference of up to 10%, whereas ORR and PAS have 87% and 83% higher delays than IBU, respectively, at higher traffic loads because they do not utilize the $AB_{sur3}$ bandwidth share. Overall, the T3 upstream delay of BAGT is 20% lower than that of IBU at higher traffic loads. Figure 4d presents the upstream delay of T4 for all four DBAs. T4 represents the best-effort traffic that does not require a guaranteed bandwidth rate but only the surplus bandwidth. Under the Poisson traffic model, BAGT and IBU perform well at lower traffic loads compared to the PAS and ORR schemes; PAS and ORR incur 78% and 83% more delay than BAGT, respectively. BAGT offers the lowest T4 upstream delay (about 32% lower than IBU) at lower loads because BAGT gives T4 a complete chance through Eq. (13), whereas IBU assigns only 36% of the unAllocFbu bytes to T4 and does not give it an absolute opportunity (Butt et al. 2017). Under the Poisson traffic model, the ORR and PAS DBAs perform similarly, with about a 7% difference in upstream delay. Compared to IBU, ORR and PAS show 38% and 32% higher delays, respectively, at higher traffic loads, as evident from Fig. 4d. The network throughput ($N_T$) is defined as the total amount of data successfully delivered at the OLT per second. We calculate the network throughput using Eq. (15) (Kamran A. Memon et al. 2019a, b). The result in Fig. 4e shows that the proposed BAGT DBA achieves the highest throughput of all the DBAs under the Poisson traffic model. This implies that the overall channel utilization of BAGT is comparatively better; it achieves more than 8 Gbps of throughput, which is 80% of the XGS-PON line rate. Initially, all DBAs show similar performance at lower traffic loads, as evident from Fig. 4e. As the traffic load increases, the throughput also gradually increases.
From the mid to high traffic loads, the network throughput of PAS and ORR increases only slightly, as both DBA schemes do not provide enough bandwidth to the ONUs for their upstream transmissions. At a traffic load of 0.95, BAGT achieves 7%, 36%, and 41% higher $N_T$ than the IBU, ORR, and PAS schemes, respectively, in Fig. 4e under the Poisson traffic model. This throughput improvement of BAGT is due to its better utilization of the excess bandwidth with the help of the game theory-based bandwidth distribution mechanism. The unallocated bandwidth ratio (UBR) indicates how much of the overall available $FB_u$ is not utilized by each DBA at the OLT. Figure 4f presents the UBR of the four DBA schemes as a percentage of $FB_u$. Expectedly, the UBR of the PAS and ORR DBAs is high due to their lower $FB_u$ consumption, and it also exceeds that of BAGT and IBU at higher loads. Under the Poisson traffic model in Fig. 4f, the UBR of PAS is 40% and the UBR of ORR is 38% higher than that of BAGT, respectively. Figure 4f confirms that PAS and ORR use neither the excess nor the remaining bandwidth. As expected, BAGT beats IBU due to its better bandwidth utilization across all TCONTs; the UBR of the IBU algorithm is 0.14% higher than that of the proposed algorithm at higher traffic loads under the Poisson traffic model in Fig. 4f. These results confirm that the proposed BAGT algorithm wastes the least bandwidth among its counterparts. The reason behind the excellent performance of BAGT is that it efficiently utilizes the full available bandwidth per cycle for all traffic types, making it a suitable choice for all types of traffic. The bandwidth utilization ratio (%) indicates how much of the overall available $FB_u$ is utilized by each DBA at the OLT. Figure 4g presents the bandwidth utilization ratio of the four DBA schemes in percent. It is observed from Fig. 4g that the BAGT and IBU DBAs utilize more than 90% of the bandwidth per cycle, which helps the ONUs reduce their delays. PAS and ORR utilize only the fixed and guaranteed bandwidth; therefore, the graph shows a bandwidth utilization ratio of only 60% to 65% for PAS and ORR in Fig. 4g. Figures 4h and 4i show the frame loss ratio (FLR) of the T3 and T4 classes for all the algorithms under the Poisson traffic distribution model. The frame loss ratio at lower loads is zero for all DBAs in both TCONT classes. In Fig. 4h, T3 frame loss for PAS and ORR appears from a traffic load of 0.6. The ORR frame loss is lower than that of PAS, and at higher traffic loads the frame loss ratio of PAS is 20% higher than that of ORR. Since PAS does not use $AB_{sur3}$, its loss for T3 is higher. The BAGT and IBU DBAs also exhibit some frame loss at higher loads, but BAGT has the minimum T3 loss among all schemes, owing to its excess bandwidth assignment for T3. The T4 FLR of PAS and ORR is higher, as shown in Fig. 4i, whereas the BAGT and IBU DBAs exhibit minimal FLR. Since PAS and ORR do not use the excess bandwidth, the ONUs neither receive the required bandwidth nor can they pop enough frames from their respective queues. At higher loads, FLR also occurs in IBU and BAGT, but BAGT has the lowest FLR among all DBAs because it assigns the total unAllocFbu bandwidth to T4, which gives T4 a better chance in the CG phase than the 36% share of unAllocFbu that T4 receives at CG in the IBU algorithm.

For self-similar traffic generation, we adopted the method directly from (Kramer et al. 2001), implementing ON/OFF periods in the simulation. Both periods follow a Pareto distribution to generate a self-similar network traffic model with a Hurst (H) parameter of 0.8; the shape parameter (α) is computed using the relationship H = (3 − α)/2, resulting in α = 1.4.
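As a rough illustration of this ON/OFF generator (not the simulator's actual implementation), the sketch below draws Pareto-distributed ON and OFF periods with α = 1.4; the minimum period lengths and the burst-only output are assumed values chosen for the example.

```python
# Sketch of Pareto-distributed ON/OFF periods for self-similar traffic (alpha = 1.4, H = 0.8).
# The minimum ON/OFF durations are assumed illustration values, not the paper's settings.

import random

ALPHA = 1.4                         # shape, from H = (3 - alpha) / 2 with H = 0.8
ON_MIN, OFF_MIN = 0.5e-3, 1.0e-3    # assumed minimum period lengths (seconds)

def pareto_period(minimum, alpha=ALPHA):
    """Draw one Pareto-distributed period with scale `minimum` and shape `alpha`."""
    return minimum * random.paretovariate(alpha)

def generate_bursts(sim_time):
    """Yield (start, duration) of ON bursts until sim_time is exhausted."""
    t = 0.0
    while t < sim_time:
        on = pareto_period(ON_MIN)
        yield (t, min(on, sim_time - t))
        t += on + pareto_period(OFF_MIN)

if __name__ == "__main__":
    for start, dur in generate_bursts(0.05):
        print(f"burst at {start * 1e3:7.3f} ms lasting {dur * 1e3:6.3f} ms")
```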
Figure 5a illustrates the performance of the four DBAs for the T1 application. We have used T1 for CBR traffic, so it requires fixed bandwidth. We can see from Fig. 5a that the US delay of all DBAs remains nearly constant for T1. Figure 5b presents the US delay of TCONT2 (T2), which requires guaranteed bandwidth. From Fig. 5b, we observe that BAGT outperforms the PAS and ORR schemes and shows a performance close to the IBU scheme as the self-similar traffic load increases. Overall, in the self-similar scenario, BAGT shows up to 47%, 31%, and 16% lower delays than the PAS, ORR, and IBU schemes, respectively, due to its better strategy of utilizing the excess bandwidth through the game theory approach. Figure 5c presents the comparative self-similar upstream delay results for TCONT3 (T3). The initial delay of T3 with BAGT is much lower than with the other DBA schemes at lower loads. As the load increases, the delay of BAGT also gradually increases but remains lower than that of the other DBAs. At high load, the delay of BAGT is about 19%, 39%, and 56% lower than that of IBU, PAS, and ORR, respectively. In both traffic models, BAGT and IBU utilize both the minimum guaranteed and surplus bandwidths, but BAGT also assigns additional bandwidth to T3, if needed, before moving on to T4, which further improves its performance. The performance of the BAGT DBA compared to the other three existing DBA schemes for T4 with self-similar traffic is presented in Fig. 5d. It can be observed that as the traffic load increases, the delay of all DBAs also increases. However, the comparative performance of the proposed BAGT DBA remains better than all the other schemes due to its ability to utilize the excess bandwidth, because the BAGT scheme gives T4 a complete chance in its colorless grant distribution phase. For the self-similar case, we use the same Eq. (15) to calculate the throughput. Figure 5e shows the comparative result of the four DBAs, where the proposed BAGT DBA achieves the highest throughput. Figure 5e also shows that all DBAs perform similarly under low traffic loads; as the traffic load grows, the network throughput increases gradually. At a higher load of 0.95, BAGT achieves an $N_T$ that is 4%, 39%, and 51% higher than that of IBU, ORR, and PAS, respectively, in Fig. 5e under the self-similar traffic model. This throughput improvement of BAGT is due to its better utilization of the excess bandwidth with the help of the game theory-based bandwidth distribution mechanism. Expectedly, the unallocated bandwidth ratio (UBR) of the PAS and ORR DBAs is high due to their lower $FB_u$ consumption under the self-similar traffic model, as shown in Fig. 5f. The UBR of PAS is 32% and the UBR of ORR is 24% higher than that of BAGT, respectively, in Fig. 5f, because PAS and ORR use neither the excess nor the remaining bandwidth. As expected, BAGT beats IBU due to its better bandwidth utilization across all TCONTs; the UBR of IBU is 0.3% higher than that of BAGT under self-similar traffic in Fig. 5f. Figure 5g shows the bandwidth utilization ratio (%) under self-similar traffic for all DBAs. BAGT utilizes about 2% to 5%, 32% to 48%, and 24% to 47% more bandwidth than IBU, PAS, and ORR, respectively.
From Fig. 5g, we can observe that the IBU and BAGT DBAs perform almost identically because both apply the excess and colorless grant allocations, whereas PAS and ORR do not apply any technique to utilize the excess and remaining bandwidth. The frame loss ratios of T3 and T4 are illustrated in Fig. 5h and i, respectively. The frame loss ratio of both TCONTs (T3 and T4) at lower loads is zero for all DBAs. From Fig. 5h, we can see that the frame loss of BAGT is about 27% at a higher load of 0.95 under self-similar bursty traffic, while IBU has 42% FLR. The PAS and ORR DBAs have more than 64% FLR because neither utilizes the total bandwidth nor provides sufficient bandwidth to each ONU; therefore, their overall FLR is higher than that of the IBU and BAGT DBAs. In Fig. 5i, the frame loss ratio for T4 of all DBAs is zero at lower loads. As the self-similar traffic load increases, FLR appears for PAS and ORR even at relatively low loads, because PAS and ORR do not implement excess bandwidth distribution; the ONUs therefore do not get enough bandwidth, and the FLR is high. As the load increases further, FLR also occurs in IBU and BAGT, but overall BAGT has the lowest FLR among all DBAs due to its efficient bandwidth utilization algorithm. TCONT4 (T4) benefits from the BAGT unallocated bandwidth (colorless grant) policy, which gives T4 the full opportunity to reduce its delay and frame loss ratio.

This research work presented a novel game theory-based DBA algorithm for XGS-PON that offers improved quality of service for the end-users by efficiently and effectively utilizing the excess bandwidth through a Bayesian game theory-based first-price sealed-bid auction mechanism. The performance of the proposed BAGT DBA was validated under varying traffic loads using self-similar and Poisson distribution-based traffic models. The proposed BAGT DBA achieved less than 1 ms overall upstream delay due to its improved bandwidth utilization. The performance of the BAGT DBA was also compared to three existing DBAs: IBU, PAS, and ORR. The frame loss ratio of the BAGT scheme was observed to be the lowest under both traffic models compared to the other schemes, and the proposed scheme achieved higher network throughput than the other schemes in both traffic models. Table 3 provides a brief comparison of all four DBAs. The simulation results for all T1 to T4 traffic types show that BAGT is the most suitable DBA scheme for all traffic types. In our future work, we will investigate the performance of the BAGT DBA process with a real traffic trace of a PON operator.

References:
Traffic-Adaptive Inter Wavelength Load Balancing for TWDM PON
Receiver ON time optimization for Watchful sleep mode to enhance Energy Savings of 10-Gigabit Passive Optical Network
Efficient upstream bandwidth utilization with minimum bandwidth waste for time and wavelength division passive optical network
Improved dynamic bandwidth allocation algorithm for XGPON
Comprehensive bandwidth utilization and polling mechanism for XGPON
GPON service level agreement based dynamic bandwidth assignment protocol
Det-LB: A Load Balancing Approach in 802.11 wireless networks for industrial soft real-time applications
PAS: A Fair Game-Driven DBA Scheme for XG-PON Systems
10-Gigabit-capable symmetric passive optical network (XGS-PON)
Game Theory for Applied Economists
Iterative dynamic bandwidth allocation for XGPON
Simple and feasible dynamic bandwidth and polling allocation for XGPON. International Conference on Advanced Communication Technology
Development of Efficient Dynamic Bandwidth Allocation Algorithm for XGPON
Bayes-Nash Equilibrium of the Generalized First-Price Auction
Low Latency-How Low Can You Go? Background and Drivers. Low-Latency-How-Low-Can-You-Go-0188-WP-RevB-0920.pdf
International Telecommunication Union (ITU) Measuring digital development: Facts and figures
Ethernet PON (ePON): design and analysis of an optical access network
Interleaved polling with adaptive cycle time (IPACT): A dynamic bandwidth distribution scheme in an optical access network
Design and Analytical Analysis of a Novel DBA Algorithm with Dual-Polling Tables in EPON
Dynamic bandwidth allocation algorithm with demand forecasting mechanism for bandwidth allocations in 10-gigabit-capable passive optical network
Demand forecasting DBA algorithm for reducing packet delay with efficient bandwidth allocation in XG-PON. Electronics (Switzerland)
Performance evaluation of XG-PON based mobile front-haul transport in cloud-RAN architecture
Highest cost first-based QoS mapping scheme for fiber wireless architecture
ONU migration using network coding technique in Virtual Multi-OLT PON Architecture
Prioritized Multiplexing of traffic accessing an FSAN-compliant GPON
Current Trends towards PON systems at 50+ Gbps
Bandwidth allocation algorithm based on differential BRP models in ethernet PON
Progress of ITU-T higher speed passive optical network (50G-PON) standardization

The authors are thankful to the State Key Laboratory of Information Photonics and Optical Communications, School of Electronic Engineering, BUPT, China for the research guidance provided; the authors have not received any financial support for this work. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.