## Context
While chasing down performance issues in our dashboard app (a number of plots updated roughly once per second from a stream via a `Pipe`), profiling pointed at Bokeh's `_pop_freeze` function (and related calls). With several plots shown per page, this was happening several times per second, and for every session.
With the help of Claude I found that manually entering `doc.models.freeze()` once per update cycle avoids this. We later also found that holoviews#6315 added `doc.models.freeze()` inside the `hold_render` decorator, so each individual plot update already avoids redundant `recompute()` calls from nested sub-plots.
However, in a dashboard that updates N plots per cycle (e.g. a grid of detector images), each `pipe.send()` still opens and closes its own outermost freeze. The per-plot `hold_render` freeze counter drops back to zero at the end of each send and triggers `recompute()`, a full BFS traversal of the document model graph, once per plot. With N plots, that's N traversals of the same tree per update cycle.
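For illustration, a simplified sketch of what effectively happens per cycle in the unbatched case (the freeze/unfreeze actually lives inside HoloViews' `hold_render`, not in user code; `unbatched_cycle` is just a name for this sketch):

```python
# Conceptual sketch of the unbatched update cycle, assuming the hold_render
# behaviour described above; the explicit freeze() below stands in for the
# one hold_render enters internally on every pipe.send().
def unbatched_cycle(doc, pipes, new_data):
    for pipe, data in zip(pipes, new_data):
        with doc.models.freeze():   # freeze counter 0 -> 1
            pipe.send(data)         # nested freezes stay above zero
        # counter drops 1 -> 0 here: _pop_freeze calls recompute(), a full
        # traversal of every model in the document -- once per plot, so N
        # traversals of the same tree per update cycle
```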
## The problem
recompute() cost is O(M) where M is the total model count in the document. In our dashboard (~1500 models), each recompute takes ~90 ms. With 4 active plots updating per cycle, that’s ~350 ms spent in _pop_freeze → recompute → collect_models — about 44% of the entire update cycle.
The results of the first N−1 traversals are immediately invalidated by the next plot’s changes.
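For reference, this is roughly how we confirmed where the time goes (a sketch using the standard-library profiler; `_update` refers to the periodic callback from the minimal example below):

```python
import cProfile
import pstats

def profile_one_cycle() -> None:
    # Profile a single update cycle and show only the suspect entries.
    with cProfile.Profile() as prof:
        _update()  # the periodic update callback from the minimal example
    stats = pstats.Stats(prof).sort_stats("cumulative")
    stats.print_stats("recompute|collect_models|_pop_freeze")
```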
## The fix: one outer freeze
Since Bokeh's freeze is reentrant (counter-based), wrapping all `pipe.send()` calls in a single outer `doc.models.freeze()` keeps the inner per-plot freeze/unfreeze counter from ever reaching zero, so `recompute()` runs only once, when the outer context exits:
```python
with pn.io.hold():
    with doc.models.freeze():
        for pipe, data in zip(pipes, new_data):
            pipe.send(data)
    # recompute runs once here, not N times
```
This is the same idea as holoviews#6315, but one level up — batching across plots rather than within a single plot.
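To put a number on a single traversal (and hence on the roughly (N - 1) traversals saved per cycle), one can time `recompute()` directly on a live document; a small sketch:

```python
import time

def time_one_recompute(doc) -> float:
    """Wall-clock cost of one full model-graph recompute, in milliseconds."""
    t0 = time.perf_counter()
    doc.models.recompute()
    return (time.perf_counter() - t0) * 1000

# In our dashboard this is ~90 ms at ~1500 models; batching N plot updates
# under one outer freeze saves roughly (N - 1) times this per cycle.
```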
## Questions
- We have tried adding the outer `freeze()` manually in our update loop and it works well (during limited testing). But is it OK to call `doc.models.freeze()` in a periodic update step, or can there be any issues?
- Is batching `doc.models.freeze()` across multiple `pipe.send()` calls something `pn.io.hold` should do, or rather not, because it is also used in other contexts?
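For concreteness on the second question, the behaviour we have in mind as a standalone helper (purely a sketch; `batch_updates` is a hypothetical name, not an existing Panel or Bokeh API):

```python
from contextlib import ExitStack, contextmanager

import panel as pn

@contextmanager
def batch_updates():
    """Hold Panel events and keep one outer Bokeh freeze for the whole block."""
    doc = pn.state.curdoc
    with ExitStack() as stack:
        stack.enter_context(pn.io.hold())
        if doc is not None:
            # Outer freeze: the per-plot hold_render freeze/unfreeze never
            # reaches zero, so recompute() runs once when this block exits.
            stack.enter_context(doc.models.freeze())
        yield

# usage in the periodic update:
#     with batch_updates():
#         for pipe, data in zip(pipes, new_data):
#             pipe.send(data)
```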
## Minimal example
The ~~attached~~ app (inlined because new users cannot add attachments) shows a grid of 8 overlay plots updating every 2 s. Toggle between UNBATCHED (N recomputes) and BATCHED (1 recompute) and compare the cycle times.
"""
Minimal example: batching ``doc.models.freeze()`` across multiple pipe.send() calls.
HoloViews #6315 added per-plot freeze batching inside ``hold_render``, so each
individual ``pipe.send()`` only triggers one ``recompute``. But when updating N
plots in a loop, the outermost freeze is still per-plot, giving N recomputes of
the full document model graph.
Wrapping the entire loop in a single ``doc.models.freeze()`` collapses N→1.
Run with::
panel serve freeze_batching_minimal.py --port 5099
Toggle between UNBATCHED / BATCHED and compare the cycle times.
"""
from __future__ import annotations
import time
from collections import deque
import holoviews as hv
import numpy as np
import panel as pn
hv.extension("bokeh")
pn.extension()
N_CELLS = 8
UPDATE_INTERVAL_MS = 2000
# -- Data generation ----------------------------------------------------------
_step = 0
def _make_overlay() -> hv.Overlay:
global _step
_step += 1
xs = np.linspace(0, 4 * np.pi, 200)
phase = _step * 0.15
curve = hv.Curve((xs, np.sin(xs + phase) + 0.3 * np.random.randn(len(xs))))
hist = hv.Histogram(np.histogram(np.random.exponential(2, 500), bins=40))
vlines = hv.VLines([np.pi + 0.5 * np.sin(phase)])
return (curve * hist * vlines).opts(shared_axes=True, framewise=True)
# -- Layout -------------------------------------------------------------------
pipes: list[hv.streams.Pipe] = []
grid = pn.GridSpec(ncols=4, nrows=2, sizing_mode="stretch_both", min_height=400)
for i in range(N_CELLS):
pipe = hv.streams.Pipe(data=_make_overlay())
pipes.append(pipe)
dmap = hv.DynamicMap(lambda data: data, streams=[pipe], cache_size=1)
pane = pn.pane.HoloViews(dmap, linked_axes=False, sizing_mode="stretch_both")
grid[divmod(i, 4)] = pane
mode_toggle = pn.widgets.RadioButtonGroup(
options=["UNBATCHED", "BATCHED"], value="UNBATCHED", button_type="primary"
)
stats_md = pn.pane.Markdown("*Waiting for first cycle...*", width=400)
# -- Periodic update ----------------------------------------------------------
_stats: dict[str, deque[float]] = {
"UNBATCHED": deque(maxlen=30),
"BATCHED": deque(maxlen=30),
}
def _format_stats() -> str:
def _row(label: str, times: deque[float], active: bool) -> str:
if not times:
return f"| {'**' if active else ''}{label}{'**' if active else ''} | — | — |"
avg = sum(times) / len(times)
prefix = "**" if active else ""
return (
f"| {prefix}{label}{prefix} "
f"| {prefix}{times[-1]:.0f} ms{prefix} "
f"| {prefix}{avg:.0f} ms{prefix} (n={len(times)}) |"
)
mode = mode_toggle.value
lines = [
"| Mode | Last | Avg |",
"|---|---|---|",
_row("UNBATCHED", _stats["UNBATCHED"], mode == "UNBATCHED"),
_row("BATCHED", _stats["BATCHED"], mode == "BATCHED"),
]
return "\n".join(lines)
def _update() -> None:
doc = pn.state.curdoc
if doc is None:
return
new_data = [_make_overlay() for _ in pipes]
t0 = time.perf_counter()
batched = mode_toggle.value == "BATCHED"
with pn.io.hold():
if batched:
# One outer freeze: inner hold_render freeze/unfreeze never reaches
# zero, so recompute runs only once when this context exits.
with doc.models.freeze():
for pipe, data in zip(pipes, new_data, strict=True):
pipe.send(data)
else:
# Each pipe.send() → hold_render → freeze/unfreeze → recompute.
# N cells = N full model-graph traversals per cycle.
for pipe, data in zip(pipes, new_data, strict=True):
pipe.send(data)
elapsed_ms = (time.perf_counter() - t0) * 1000
_stats[mode_toggle.value].append(elapsed_ms)
stats_md.object = _format_stats()
pn.state.add_periodic_callback(_update, period=UPDATE_INTERVAL_MS)
# -- Serve --------------------------------------------------------------------
pn.Row(
pn.Column(
"## freeze batching demo",
mode_toggle,
stats_md,
width=350,
),
grid,
sizing_mode="stretch_both",
).servable(title="freeze batching demo")
## Environment
- Bokeh 3.8.0, HoloViews 1.21.0, Panel 1.8.1