How to optimize data transformation?

I’m probably misusing datashader terminology, but I’m trying to optimize a data transformation step. My data represent ellipses and have the form <x, y, semimajor, semiminor, tilt>. Currently, I pre-process the data by mapping each <x, y, semimajor, semiminor, tilt> tuple into a list of N points sampled along the ellipse. Is there a way to add this step to the datashader pipeline to leverage its parallelization, numba-fication, etc.? Right now I just call a numba-jitted function to do the conversion, but this is likely both slower and less space efficient than it could be (the resulting pandas DataFrame of ellipse points is much larger than the original).
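For concreteness, here is roughly what my pre-processing step does (a plain-Python sketch; my actual version is a NumPy equivalent wrapped in `@numba.njit`, and the function name and parametric form here are just illustrative):

```python
import math

def ellipse_points(cx, cy, semimajor, semiminor, tilt, n=32):
    """Sample n points along an ellipse boundary.

    tilt is the rotation of the semimajor axis in radians.
    """
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        # Parametric point in the ellipse's own (axis-aligned) frame...
        ex = semimajor * math.cos(t)
        ey = semiminor * math.sin(t)
        # ...rotated by tilt and translated to the center.
        pts.append((
            cx + ex * math.cos(tilt) - ey * math.sin(tilt),
            cy + ex * math.sin(tilt) + ey * math.cos(tilt),
        ))
    return pts

# One ellipse row expands to n boundary points, which is why the
# point DataFrame ends up ~n times larger than the original.
pts = ellipse_points(0.0, 0.0, 2.0, 1.0, 0.0, n=4)
```

So the question is whether datashader can consume the raw ellipse parameters directly instead of this expanded point table.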