Hi, I’m using a tool that relies on a HoloViews PolyDraw stream to let the user draw a number of rectangles on an image. However, I want to specify the rectangle coordinates programmatically instead. I can create a Polygons element, but its data attribute looks like this:
[OrderedDict([('x', array([  0.        ,   0.        ,  78.89600967,  78.89600967])),
              ('y', array([  0.        , 326.79372197, 326.79372197,   0.        ]))]),
 OrderedDict([('x', array([236.68802902, 236.68802902, 315.58403869, 315.58403869])),
              ('y', array([  0.        , 326.79372197, 326.79372197,   0.        ]))]),
 OrderedDict([('x', array([  0.        ,   0.        , 315.58403869, 315.58403869])),
              ('y', array([  0.        ,  81.69843049,  81.69843049,   0.        ]))]),
 ...
…whereas the PolyDraw stream’s data looks like this:
PolyDraw(data={'xs': [[2.8796791443850234, 2.8796791443850234, 87.37165775401068, 87.37165775401068],
                      [222.55882352941174, 313.81016042780743, 310.4304812834224, 215.7994652406417]],
               'ys': [[5.169958140283405, 320.4390164362475, 321.90538414925203, 6.6363258532878895],
                      [6.6363258532878895, 6.6363258532878895, 320.4390164362475, 318.9726487232431]]})
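Converting between the two layouts is mechanical; here is the kind of helper I already have (a rough sketch, with polys standing for the Polygons element whose data is shown above):

    def polygons_to_polydraw_data(polys):
        # polys.data is the list of dicts shown above: one dict per
        # rectangle, each holding 'x' and 'y' coordinate arrays.
        return {
            'xs': [list(d['x']) for d in polys.data],
            'ys': [list(d['y']) for d in polys.data],
        }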
I can generate either kind of dict, but I’m not sure how to put it into a “fake” PolyDraw object: the program later calls the stream, reads its data, and expects it to behave more or less like a “proper” PolyDraw object.
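For what it’s worth, the closest I’ve gotten is something like the sketch below: build a Polygons element from my coordinates, attach a PolyDraw stream to it, and then push the data into the stream. I’m not sure whether event() is the intended way to set the stream’s data parameter, so treat the last line as a guess:

    import holoviews as hv
    from holoviews import streams
    hv.extension('bokeh')

    # Rectangles specified programmatically, already in PolyDraw's layout
    rect_data = {
        'xs': [[0.0, 0.0, 78.896, 78.896]],
        'ys': [[0.0, 326.794, 326.794, 0.0]],
    }

    # Seed a Polygons element with the same rectangles so they show up
    # on the plot, then attach the PolyDraw stream to that element.
    polys = hv.Polygons([{'x': xs, 'y': ys}
                         for xs, ys in zip(rect_data['xs'], rect_data['ys'])])
    poly_stream = streams.PolyDraw(source=polys, drag=True)

    # Push the coordinates into the stream so code that later reads
    # poly_stream.data sees them (assuming event() is the right call here).
    poly_stream.event(data=rect_data)

Does the stream pick the data up from the source element automatically when it’s rendered, or do I have to trigger it manually like this?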