This simulation compares the performance of a Normal Spinlock against two “AI” (adaptive/predictive) spinlock strategies. The goal is to determine if an adaptive approach, which tries to predict lock hold times, can offer lower average wait costs compared to a simple spinlock that always spins.
The simulation evaluates three types of spinlocks:

1. Normal Spinlock: always spins for the full hold time, regardless of how long the lock is held.
2. Heuristic AI Spinlock: spins only when the predicted hold time is at or below a threshold `T`. Here `T` is determined by a heuristic: the 90th percentile of the observed hold times in the test set. If it spins up to `T` unsuccessfully, a static penalty `P` is incurred.
3. Tuned AI Spinlock: uses the same policy, but `T` is optimized via a grid search to find the value that minimizes the average wait cost for a fixed penalty `P`.

Two data sets drive the simulation:

- Training data (`N_train` samples): used to warm up the EMA predictor. Hold times are drawn from an exponential distribution (e.g., `scale=1.0`) to simulate typically short, frequent operations: $h_{train} \sim \text{Exponential}(\lambda_{train})$
- Test data (`N_test` samples): a mixture of short (e.g., 80%, `scale=1.0`) and long (e.g., 20%, `scale=5.0`) hold times, drawn from exponential distributions and shuffled to create a realistic workload: $h_{short} \sim \text{Exponential}(\lambda_{short})$, $h_{long} \sim \text{Exponential}(\lambda_{long})$

Both AI spinlocks use an EMA to predict the next lock hold time. The prediction (`pred`) is updated after each observed hold time (`h`):
pred_new = α · h + (1 - α) · pred_old
Where:

- `pred_old` is the previous prediction.
- `h` is the actual hold time just observed.
- `α` is the EMA smoothing factor.

The initial prediction (`pred_initial`) is established by "warming up" the EMA on the training data.

Normal Spinlock:

The cost is simply the actual hold time `h`:

`Cost_Normal = h`
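As a concrete sketch of the workload generation and EMA warm-up just described (a minimal sketch; the values of `ALPHA`, `N_TRAIN`, and `N_TEST` are illustrative assumptions, since the text does not fix them):

```python
import random

ALPHA = 0.2        # assumed EMA smoothing factor
N_TRAIN = 1000     # assumed number of training samples
N_TEST = 1000      # assumed number of test samples

rng = random.Random(42)

# Training data: short, frequent operations (Exponential, scale = 1.0).
train = [rng.expovariate(1.0) for _ in range(N_TRAIN)]

# Test data: 80% short (scale = 1.0) and 20% long (scale = 5.0) holds, shuffled.
test = ([rng.expovariate(1.0) for _ in range(int(0.8 * N_TEST))] +
        [rng.expovariate(1 / 5.0) for _ in range(int(0.2 * N_TEST))])
rng.shuffle(test)

def ema_update(pred_old, h, alpha=ALPHA):
    """pred_new = alpha * h + (1 - alpha) * pred_old"""
    return alpha * h + (1 - alpha) * pred_old

# Warm up: run the EMA over the training data to obtain pred_initial.
pred = train[0]
for h in train[1:]:
    pred = ema_update(pred, h)
pred_initial = pred
```

The warm-up simply replays the training holds through the update rule, so `pred_initial` reflects the short-hold regime before the mixed test workload begins.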
AI Spinlocks (Heuristic and Tuned):

Let `T` be the current spin threshold and `P` be the static penalty for yielding/misprediction.

- If `pred <= T` (AI predicts a short hold and spins):
  - If `h <= T` (lock acquired within threshold): `Cost_AI = h`
  - If `h > T` (lock not acquired within threshold, AI gives up): `Cost_AI = T + P`
- If `pred > T` (AI predicts a long hold and decides to yield immediately): `Cost_AI = P`
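The cost model above maps directly to code. A minimal sketch (function and argument names are illustrative, not taken from the simulation's source):

```python
def cost_normal(h):
    # The normal spinlock always spins for the full hold time.
    return h

def cost_ai(pred, h, T, P):
    """Wait cost for the AI spinlocks with spin threshold T and penalty P."""
    if pred <= T:      # AI predicts a short hold and spins
        if h <= T:     # lock acquired within the threshold
            return h
        return T + P   # spun for T, gave up, paid the penalty
    return P           # AI predicts a long hold and yields immediately
```

Note the asymmetry: a mispredicted spin costs `T + P` (worse than either pure strategy), which is why the choice of `T` matters so much.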
The following chart shows the average wait cost for each strategy from a sample simulation run (image path `images/000AAAspinglock.png`); the results discussed below are taken from that run.
Overall, these results highlight that while adaptive strategies can offer improvements, their effectiveness is highly dependent on the choice of parameters (`T` and `P`) and on the accuracy of the predictor. In this particular run, the Tuned AI showed a small benefit, while the Heuristic AI did not. In effect, the Tuned AI learned that a very conservative spinning policy (a high `T`) was optimal given the fixed penalty `P`.
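The two threshold-selection strategies can be sketched as follows (the penalty `P`, the smoothing factor, the initial prediction, and the grid bounds are assumed values for illustration):

```python
import random
import statistics

def cost_ai(pred, h, T, P):
    # Cost model from the section above.
    if pred <= T:
        return h if h <= T else T + P
    return P

def ema_stream(holds, pred0, alpha=0.2):
    """Yield (prediction_before, actual_hold) pairs while updating the EMA."""
    pred = pred0
    for h in holds:
        yield pred, h
        pred = alpha * h + (1 - alpha) * pred

def avg_cost(holds, pred0, T, P):
    pairs = list(ema_stream(holds, pred0))
    return sum(cost_ai(pred, h, T, P) for pred, h in pairs) / len(holds)

rng = random.Random(0)
test = ([rng.expovariate(1.0) for _ in range(800)] +
        [rng.expovariate(0.2) for _ in range(200)])
rng.shuffle(test)

P = 2.0  # assumed fixed penalty

# Heuristic AI: T = 90th percentile of the observed hold times.
T_heur = statistics.quantiles(test, n=10)[-1]

# Tuned AI: grid search for the T that minimizes the average wait cost.
grid = [0.5 * k for k in range(1, 41)]  # candidate thresholds 0.5 .. 20.0
T_tuned = min(grid, key=lambda T: avg_cost(test, pred0=1.0, T=T, P=P))
```

With a fixed `P`, the grid search will tend to pick a high `T` when mispredicted spins (`T + P`) are rarer and cheaper on average than immediate yields, which matches the conservative policy the Tuned AI converged to in the run described above.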