Commit Graph

13 Commits (7649df1a24c74c1d087908ae908ab67517df1fd3)

Author SHA1 Message Date
Tyler Goodlet 7649df1a24 Add commented append slice-len sanity check 2023-01-10 12:42:26 -05:00
Tyler Goodlet 4b76f9ec9a Align step curves the same as OHLC bars 2023-01-10 12:42:26 -05:00
Tyler Goodlet 28d9c781e8 Add `IncrementalFormatter.x_offset: np.ndarray`
Define the x-domain coords "offset" (determining the curve graphics
per-datum placement) for each formatter such that there's only one place
to change it when needed. Obviously each graphics type has its own
dimensionality and this is reflected by the array shapes on each
subtype.
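
A minimal sketch of the idea, assuming a hypothetical subtype and
illustrative offset values (only `IncrementalFormatter.x_offset` itself
comes from this commit):

```python
import numpy as np

class IncrementalFormatter:
    # default: plain curves place one sample per x-coord, no shift
    x_offset: np.ndarray = np.array([0])

class OHLCBarsFmtr(IncrementalFormatter):
    # hypothetical: a bar draws several segments per datum, so the
    # offset array has one entry per point in the per-bar path
    x_offset: np.ndarray = np.array([-0.5, 0, 0, 0, 0, 0.5])
```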
2023-01-10 12:42:26 -05:00
Tyler Goodlet 325fe1ca67 Set `path_arrays_from_ohlc(use_time_index=True)` on epoch indexing
Allows easily switching between normal `int` array indexing and time
indexing just by flipping `Viz._index_field: str`.

Also, guard all the x-data audit breakpoints with a time indexing
condition.
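
A hedged sketch of the switch; the `x_domain()` helper and the sample
data are made up, while `path_arrays_from_ohlc(use_time_index=True)` and
`Viz._index_field` come from the message above:

```python
import numpy as np

def x_domain(
    data: np.ndarray,
    index_field: str = 'time',  # or 'index' for array-int indexing
) -> np.ndarray:
    # select the x-coord source column per the `Viz._index_field` mode
    return data[index_field]

# structured array standing in for shm-backed OHLC source data
src = np.array(
    [(0, 1.0e9), (1, 1.0e9 + 60)],
    dtype=[('index', 'i8'), ('time', 'f8')],
)
x_domain(src, 'index')  # -> array([0, 1])
x_domain(src, 'time')   # -> array([1.00000000e+09, 1.00000006e+09])
```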
2023-01-10 12:42:25 -05:00
Tyler Goodlet d2b7cb7b35 Move `Viz` layer to new `.ui` mod 2023-01-10 12:42:25 -05:00
Tyler Goodlet 57264f87c6 Drop unused `read_src_from_key: bool` to `.format_to_1d()` 2023-01-10 12:42:25 -05:00
Tyler Goodlet 30b9130be6 Fix formatter xy ndarray first prepend case
First allocation vs. first "prepend" of source data to an xy `ndarray`
format **must be mutex** in order to avoid a double prepend.

Previously, when both blocks were executed, we'd end up with
a `.xy_nd_start` that was decremented (at least) twice as much as it
should have been on the first `.format_to_1d()` call, which is obviously
incorrect (and causes problems for m4 downsampling, as discussed below).
Further, since the underlying `ShmArray` buffer indexing is managed
(i.e. write-updated) completely independently from the incremental
formatter updates and internal xy indexing, we can't use
`ShmArray._first.value` and instead need to use the particular `.diff()`
output's prepend length value to decrement the `.xy_nd_start` on updates
after initial alloc.

Problems this resolves with m4:
- m4 uses an x-domain diff to calculate the number of "frames" to
  downsample to; this is normally based on the ratio of pixel columns on
  screen vs. the size of the input xy data.
- previously, using an int-index (not epoch time), the max diff between
  the first and last index would be the size of the input buffer and
  thus would never cause a large memory-allocation issue (though it may
  have been inefficient in terms of needed size).
- with an epoch time index this max diff could explode if you had some
  near-now epoch time stamp **minus** an x-allocation value: generally
  some value in `[-0.5, 0.5]`, which would result in a massive frame
  count and thus a huge internal `np.ndarray()` allocation, causing
  either a crash in `numba` code or actual system memory
  over-allocation.

Further, put in some more x-value checks that trigger breakpoints if we
detect values that caused this issue; we'll remove them after this has
been tested enough.
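
A hedged sketch of the corrected bookkeeping; the class body plus
`src_start` and `prepend_len` are illustrative, while `.xy_nd_start`,
`.format_to_1d()`, `.diff()` and `ShmArray._first.value` come from the
message above:

```python
class FmtrSketch:
    '''
    Illustrative model of the first-alloc vs. first-prepend mutex.
    '''
    def __init__(self) -> None:
        self.xy_nd_start: int | None = None

    def format_to_1d(
        self,
        src_start: int,    # hypothetical: source buffer start index
        prepend_len: int,  # hypothetical: `.diff()` prepend length
    ) -> None:
        if self.xy_nd_start is None:
            # first allocation: seed the start index exactly once;
            # also running the prepend branch here would decrement
            # it a second time (the double-prepend bug).
            self.xy_nd_start = src_start

        elif prepend_len:
            # later updates: shift by exactly the diff's prepend
            # length, never by `ShmArray._first.value`, since shm
            # buffer indexing is write-updated independently of the
            # formatter's internal xy state.
            self.xy_nd_start -= prepend_len
```

For a sense of the m4 hazard's scale: with an epoch time index a stray
x value near zero makes the first-to-last diff roughly the epoch
timestamp itself (~1.67e9 as of this commit's date) instead of the
buffer length, so the derived frame count, and the `np.ndarray()`
allocated from it, blows up by many orders of magnitude.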
2023-01-10 12:42:25 -05:00
Tyler Goodlet 734c818ed0 Add some commented debug prints for default fmtr 2023-01-10 12:42:25 -05:00
Tyler Goodlet 3e62832580 Adjust all `slice_from_time()` calls to not expect mask 2023-01-10 12:42:25 -05:00
Tyler Goodlet 92a71293ac Use step size to determine bar gaps 2023-01-10 12:42:25 -05:00
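
A minimal sketch of that heuristic (all names and values here are
illustrative; the commit only states the idea):

```python
import numpy as np

# flag a "gap" wherever consecutive time stamps jump by more than the
# sampling step size, rather than assuming contiguous int indexing
times = np.array([0., 60., 120., 300., 360.])
step = np.median(np.diff(times))               # expected per-bar step: 60.
gap_idxs = np.where(np.diff(times) > step)[0]  # -> array([2]): gap after 120.
```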
Tyler Goodlet d8f325ddd9 Delegate formatter `.index_field` to the parent `Viz` 2023-01-10 12:42:25 -05:00
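
A plausible shape for that delegation (sketch only; the property body
is an assumption, while `Viz._index_field` appears in an earlier commit):

```python
class Viz:
    _index_field: str = 'time'

class IncrementalFormatter:
    def __init__(self, viz: Viz) -> None:
        self.viz = viz

    @property
    def index_field(self) -> str:
        # delegate to the parent Viz, which owns the indexing mode
        return self.viz._index_field
```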
Tyler Goodlet a2d23244e7 Move qpath-ops routines back to separate mod 2023-01-10 12:42:25 -05:00
Tyler Goodlet 4ca8e23b5b Rename `.ui._pathops.py` -> `.ui._formatters.py` 2023-01-10 12:42:25 -05:00