Merge pull request #444 from goodboy/spawn_modularize
Spawner modules: split up subactor spawning backends

commit e75e29b1dc
@ -0,0 +1,101 @@
---
model: claude-opus-4-7[1m]
service: claude
session: subints-spawner-design-kickoff
timestamp: 2026-04-17T03:49:18Z
git_ref: 9703210
scope: docs
substantive: true
raw_file: 20260417T034918Z_9703210_prompt_io.raw.md
---

## Prompt

Drive the "first big boi, from GH issue" task seeded by
`ai/prompt-io/prompts/subints_spawner.md`: design, plan
and implement sub-interpreter (subint) spawn-backend
support per issue #379, including (1) modularizing
`tractor.spawn._spawn` into per-backend submods, (2) a new
`._subint` backend, and (3) harness parametrization via the
existing `--spawn-backend` / `start_method` pytest fixture
in `tractor._testing.pytest`.

Follow-up clarifications from the user (this turn):
1. Pin `<3.15` on this dev branch and feature-gate subint
   tests — chose option (a).
2. Split Phase A (modularization) into its own PR first.
3. Defer the `fork()`-via-subint hack to a follow-up.
4. Harness flag is `pytest --spawn-backend <key>` CLI →
   `start_method` fixture (prompt file updated to match).

## Response summary

Produced a three-phase plan and a concrete Phase A (pure
modularization) file-split plan for user review; no code
written yet — the green-light to start Phase A was given
in this same turn conditional on logging this prompt-io
entry first.

Phases:
- **A — modularize** `tractor/spawn/_spawn.py` (847 LOC):
  keep generic machinery in `_spawn.py`, extract
  `trio_proc` → `spawn/_trio.py`, `mp_proc` →
  `spawn/_mp.py`. No pin bump.
- **B — `_subint` backend**: bump `pyproject.toml`
  `requires-python` upper to `<3.15`; add `'subint'` to
  `SpawnMethodKey`; reuse existing UDS transport; shm
  escape-hatch deferred.
- **C — harness**: drive the valid-backend tuple in
  `tractor/_testing/pytest.py:345-349` from
  `typing.get_args(SpawnMethodKey)`; skip subint tests on
  Python < 3.14.
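
The Phase C mechanism — deriving the valid-backend tuple from the
`Literal` alias — can be sketched as a minimal stand-alone example
(the member strings mirror the keys named in this log; the alias
here is a stand-in, not the tractor source):

```python
from typing import Literal, get_args

# stand-in for tractor's `SpawnMethodKey` Literal alias
SpawnMethodKey = Literal[
    'mp_spawn',
    'mp_forkserver',
    'trio',
    'subint',  # the Phase B addition
]

# `get_args()` recovers the member strings, so a harness tuple
# built this way tracks the alias automatically.
valid_backends: tuple[str, ...] = get_args(SpawnMethodKey)
print(valid_backends)
# → ('mp_spawn', 'mp_forkserver', 'trio', 'subint')
```

Adding a new backend key to the `Literal` then requires no
corresponding harness edit, which is the whole point of the prep
commit.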

Key findings surfaced to the user:
- `pyproject.toml:12` currently pins `<3.14`; PEP 734
  `concurrent.interpreters` only ships in 3.14 — the
  load-bearing constraint.
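
A feature gate of the kind this constraint implies might look like
the following (a hypothetical helper, not tractor code — only the
3.14 availability of `concurrent.interpreters` is from the log):

```python
import sys

def subint_supported() -> bool:
    '''
    True iff the PEP 734 `concurrent.interpreters` module is
    importable, i.e. we are on CPython >= 3.14.

    '''
    if sys.version_info < (3, 14):
        return False
    try:
        from concurrent import interpreters  # noqa: F401
    except ImportError:
        # e.g. an alternative implementation without subint support
        return False
    return True
```

Tests for the subint backend could then skip whenever this
returns `False`, which is what "feature-gate" in option (a) amounts
to.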
- `_testing/pytest.py:345-349` hardcodes valid backends
  as a string tuple (`'mp_spawn'`, `'mp_forkserver'`,
  `'trio'`) — should be `get_args(SpawnMethodKey)`.
- `_testing/pytest.py:228` already imports
  `try_set_start_method` from `tractor.spawn._spawn` —
  keeping the `_spawn.py` path as the "core" module
  avoids breaking external refs during Phase A.

Phase A commit breakdown (each green via `/run-tests`):
1. Drive `_testing/pytest.py` backend-tuple from
   `get_args(SpawnMethodKey)` — prep.
2. Extract `trio_proc` → `spawn/_trio.py`; `_methods`
   registry imports it.
3. Extract `mp_proc` → `spawn/_mp.py`; `_methods` imports
   it.
4. Tidy `spawn/__init__.py` docstring for the new layout.

Workflow: Phase A runs in a new worktree via `/open-wkt
spawn_modularize` (branch `wkt/spawn_modularize`) and
lands as its own PR before Phase B/C returns to the
current `claude_bun_opus4.7_subint_spawner_backend`
branch.

## Files changed

None this turn — design/planning only. The user updated
`ai/prompt-io/prompts/subints_spawner.md` themselves to
complete the truncated sentence on line 32 (harness-flag
reference).

Upcoming (Phase A, next turn):
- `tractor/spawn/_spawn.py` — trimmed to "core"
- `tractor/spawn/_trio.py` — new; holds `trio_proc()`
- `tractor/spawn/_mp.py` — new; holds `mp_proc()`
- `tractor/spawn/__init__.py` — docstring refresh
- `tractor/_testing/pytest.py` — backend tuple from
  `get_args(SpawnMethodKey)`

## Human edits

None — committed as generated. User completed the
truncated sentence on line 32 of
`ai/prompt-io/prompts/subints_spawner.md` themselves
*before* this prompt-io log was written; that edit was
the user's authoring, not an edit to AI output.

@ -0,0 +1,117 @@
---
model: claude-opus-4-7[1m]
service: claude
timestamp: 2026-04-17T03:49:18Z
git_ref: 9703210
diff_cmd: git diff HEAD~1..HEAD
---

No code was produced in this turn — this is a design/planning
exchange for subint spawner support per issue #379. Captured
verbatim below are the two substantive AI responses, which
together fix the phased plan (A: modularize, B: subint, C:
harness) and the concrete Phase A file split before any
implementation begins.

## Turn 1 — initial triage + gating questions

> Read: issue #379, current `spawn/_spawn.py` (847 lines),
> and pyproject pins.
>
> ## Task summary
> Design/plan/implement subint backend per #379, plus
> modularize `spawn/_spawn.py` into per-backend submods, plus
> harness flag for test parametrization.
>
> ## Key finding
> `pyproject.toml:12` pins `requires-python = ">=3.12, <3.14"`
> — but `concurrent.interpreters` / PEP 734 **only lands in
> 3.14**. This is the load-bearing design question before
> coding.
>
> ## Proposed phasing (incremental commits)
> - **Phase A — modularize** `_spawn.py`: extract `trio_proc`
>   → `spawn/_trio.py`, `mp_proc` → `spawn/_mp.py`, keep
>   generic helpers (`hard_kill`/`soft_kill`/`new_proc`
>   dispatcher/`SpawnMethodKey`) in a slim core. Tests green
>   at each step.
> - **Phase B — `_subint` backend**: extend `SpawnMethodKey`
>   with `'subint'`; reuse UDS transport by default; shm is
>   escape-hatch follow-up.
> - **Phase C — harness**: add backend-parametrize flag, skip
>   subint when Python < 3.14.
>
> ## Gating questions before I start
> 1. **Python pin**: bump `<3.15` and feature-gate subint
>    tests (a), require 3.14 outright (b), or backport shim
>    (c)? I'd recommend (a).
> 2. **PR strategy**: one branch, commits per phase (current
>    `claude_bun_opus4.7_subint_spawner_backend`)? Or split
>    Phase A as its own PR first?
> 3. **`fork()`-via-subint hack** from the issue's "Our own
>    thoughts" section — v1 or follow-up? I'd defer to
>    follow-up.
> 4. **Prompt file line 32 is truncated** — what's the
>    existing harness flag name you had in mind ("using the
>    existing ___")?
>
> Which of these are already decided vs. open?

## User answers (for the raw record)

1. bump `<3.15`, feature-gate — option (a).
2. split Phase A as its own PR first.
3. defer `fork()`-via-subint hack.
4. harness flag is `--spawn-backend` CLI → `start_method`
   fixture, per `tractor._testing._pytest`. Prompt file
   updated accordingly.

## Turn 2 — concrete Phase A plan

> Good — prompt file now ends cleanly; harness flag
> confirmed as `--spawn-backend` CLI → `start_method`
> fixture (via `_testing/pytest.py:193,355`). Also noted
> `pytest.py:345-349` hardcodes valid backends as a string
> tuple — `typing.get_args(SpawnMethodKey)` should drive
> that list.
>
> ## Phase A — concrete plan (separate PR)
>
> **File split** in `tractor/spawn/`:
> - `_spawn.py` stays as "core": keeps `SpawnMethodKey`,
>   `_spawn_method`, `proc_waiter`, `try_set_start_method`,
>   `exhaust_portal`, `cancel_on_completion`, `hard_kill`,
>   `soft_kill`, `new_proc` dispatcher, `_methods`
>   registry. Kept at this path so `_testing/pytest.py:228`
>   and other external imports don't break.
> - `_trio.py` ← `trio_proc()` (lines 448–671, ~224 lines)
> - `_mp.py` ← `mp_proc()` (lines 672–842, ~171 lines)
> - `_methods` dict imports the two backend fns from the
>   new modules.
>
> **Commits** (each green via `/run-tests`):
> 1. Drive `_testing/pytest.py` backend-tuple from
>    `get_args(SpawnMethodKey)` — tiny prep.
> 2. Extract `trio_proc` → `spawn/_trio.py`; `_methods`
>    imports it.
> 3. Extract `mp_proc` → `spawn/_mp.py`; `_methods`
>    imports it.
> 4. Tidy `spawn/__init__.py` docstring to describe new
>    layout.
>
> No pyproject pin bump in Phase A (that's Phase B).
>
> **Workflow**: I'd run this in a new worktree via
> `/open-wkt spawn_modularize` (branch
> `wkt/spawn_modularize`), then open a PR before returning
> to the current `claude_bun_opus4.7_subint_spawner_backend`
> branch for Phase B/C.
>
> Green-light to `/open-wkt` and start Phase A?

## User final response

> all this first then yes, green light.

— log this `/prompt-io` entry first, then proceed with
Phase A in a worktree.


@ -0,0 +1,86 @@
---
model: claude-opus-4-7[1m]
service: claude
session: subints-spawner-phase-a-impl
timestamp: 2026-04-17T03:58:05Z
git_ref: 61a73ba
scope: code
substantive: true
raw_file: 20260417T035800Z_61a73ba_prompt_io.raw.md
---

## Prompt

Follow-up from the design session logged in
`20260417T034918Z_9703210_prompt_io.md`: the user
greenlit Phase A after `/prompt-io` tracking, so
execute the Phase A "modularize `spawn._spawn`" plan
end-to-end in a new worktree — four logical steps
(pytest harness prep, `trio_proc` extraction,
`mp_proc` extraction, docstring refresh), each green
via `/run-tests`.

User directives during execution:
- **Option 3** for the test cadence ("continue A.2–A.4
  first, then run the full suite once at the end of
  Phase A").
- **One commit** for the whole phase ("can't we just
  commit the whole patch in one commit?") instead of
  the 3/4-commit split I initially proposed.
- **Don't pre-draft** commit messages — wait for the
  user to invoke `/commit-msg` (captured as feedback
  memory `feedback_no_auto_draft_commit_msgs.md`).

## Response summary

Produced the cohesive Phase A modularization patch,
landed as commit `61a73bae` (subject: "Mv
`trio_proc`/`mp_proc` to per-backend submods"). Five
files changed, +565 / -418 lines.

Key pieces of the patch (generated by claude,
reviewed by the human before commit):
- `tractor/spawn/_trio.py` — **new**; receives
  `trio_proc()` verbatim from `_spawn.py`; imports
  cross-backend helpers back from `._spawn`.
- `tractor/spawn/_mp.py` — **new**; receives
  `mp_proc()` verbatim; uses `from . import _spawn`
  for late-binding access to the mutable `_ctx` /
  `_spawn_method` globals (design decision made
  during impl, not the original plan).
- `tractor/spawn/_spawn.py` — shrunk 847 → 448 LOC;
  import pruning; bottom-of-module late imports for
  `trio_proc` / `mp_proc` with a one-line comment
  explaining the circular-dep reason.
- `tractor/spawn/__init__.py` — docstring refresh
  describing the new layout.
- `tractor/_testing/pytest.py` — the valid-backend
  set now comes from `typing.get_args(SpawnMethodKey)`
  so future additions (`'subint'`) don't need harness
  edits.
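
The bottom-of-module late-import trick used in `_spawn.py` can be
shown with a self-contained toy package (hypothetical module names,
written to a temp dir; this only demonstrates the pattern, not the
tractor layout):

```python
import pathlib
import sys
import tempfile
import textwrap

# build a tiny package on disk: core.py wants backend.py's fn,
# while backend.py imports a helper from core.py
pkg = pathlib.Path(tempfile.mkdtemp()) / 'demo_pkg'
pkg.mkdir()
(pkg / '__init__.py').write_text('')
(pkg / 'core.py').write_text(textwrap.dedent('''
    def soft_kill():
        return 'killed'

    # late import at the *bottom*: backend.py does
    # `from .core import soft_kill`, so importing it any earlier
    # would hit a half-initialized core module (circular import)
    from .backend import backend_proc
'''))
(pkg / 'backend.py').write_text(textwrap.dedent('''
    from .core import soft_kill

    def backend_proc():
        return soft_kill()
'''))

sys.path.insert(0, str(pkg.parent))
from demo_pkg import core

# the late import succeeded because soft_kill was already defined
# by the time backend.py was executed
assert core.backend_proc() == 'killed'
```

Moving the backend imports to the top of the toy `core.py` would
raise an `ImportError` mid-initialization, which is the cycle the
one-line comment in the real patch explains.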

## Files changed

See `git diff 61a73ba~1..61a73ba --stat`:

```
tractor/_testing/pytest.py |  12 +-
tractor/spawn/__init__.py  |  31 +++-
tractor/spawn/_mp.py       | 235 ++++++++++++++++++++++++
tractor/spawn/_spawn.py    | 413 +-------------------------------
tractor/spawn/_trio.py     | 292 ++++++++++++++++++++++++++++
5 files changed, 565 insertions(+), 418 deletions(-)
```

Validation:
- import probe + `_methods` wiring check — OK
- spawn-relevant test subset — 37 passed, 1 skipped
- full suite — 350 passed, 14 skipped, 7 xfailed, 1
  xpassed

## Human edits

None — committed as generated by claude (no diff
between `.claude/git_commit_msg_LATEST.md` and the
committed body, as far as the assistant could
observe).


@ -0,0 +1,138 @@
---
model: claude-opus-4-7[1m]
service: claude
timestamp: 2026-04-17T03:58:05Z
git_ref: 61a73ba
diff_cmd: git diff 61a73ba~1..61a73ba
---

Code generated in this turn was committed verbatim as
`61a73bae` ("Mv `trio_proc`/`mp_proc` to per-backend
submods"). Per diff-ref mode, per-file code is captured
via the pointers below, each followed by a prose
summary of what the AI generated. Non-code output
(sanity-check results, design rationale) is included
verbatim.

## Per-file generated content

### `tractor/spawn/_trio.py` (new, 292 lines)

> `git diff 61a73ba~1..61a73ba -- tractor/spawn/_trio.py`

Pure lift-and-shift of `trio_proc()` out of
`tractor/spawn/_spawn.py` (previously lines 448–670).
Added AGPL header + module docstring describing the
backend; imports include local `from ._spawn import
cancel_on_completion, hard_kill, soft_kill` which
creates the bottom-of-module late-import pattern in
the core file to avoid a cycle. All call sites,
log-format strings, and body logic are byte-identical
to the originals — no semantic change.

### `tractor/spawn/_mp.py` (new, 235 lines)

> `git diff 61a73ba~1..61a73ba -- tractor/spawn/_mp.py`

Pure lift-and-shift of `mp_proc()` out of
`tractor/spawn/_spawn.py` (previously lines 672–842).
Same AGPL header convention. Key difference from
`_trio.py`: uses `from . import _spawn` (module
import, not from-import) for `_ctx` and
`_spawn_method` references — these are mutated at
runtime by `try_set_start_method()`, so late binding
via `_spawn._ctx` / `_spawn._spawn_method` is required
for correctness. Also imports `cancel_on_completion`,
`soft_kill`, `proc_waiter` from `._spawn`.
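
The module-import vs. from-import distinction is worth a tiny
illustration (a fabricated module stands in for
`tractor.spawn._spawn`; nothing here is tractor code): a
from-import snapshots the value at import time, while attribute
access through the module object sees later mutations of the kind
`try_set_start_method()` performs.

```python
import sys
import types

# fabricate a module with a mutable global, standing in for
# `_spawn` and its `_ctx` attribute
core = types.ModuleType('fake_core')
core.ctx = None
sys.modules['fake_core'] = core

import fake_core as spawn_core         # module import: late-bound lookups
from fake_core import ctx as snapshot  # from-import: binds today's value

core.ctx = 'forkserver'  # mutated later, like try_set_start_method()

assert spawn_core.ctx == 'forkserver'  # attribute access sees the update
assert snapshot is None                # the from-import snapshot is stale
```

This is why `_mp.py` reads `_spawn._ctx` through the module object
instead of binding `_ctx` directly at import time.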

### `tractor/spawn/_spawn.py` (modified, 847 → 448 LOC)

> `git diff 61a73ba~1..61a73ba -- tractor/spawn/_spawn.py`

- removed `trio_proc()` body (moved to `_trio.py`)
- removed `mp_proc()` body (moved to `_mp.py`)
- pruned imports now unused in core: `sys`,
  `is_root_process`, `current_actor`,
  `is_main_process`, `_mp_main`, `ActorFailure`,
  `pretty_struct`, `_pformat`
- added bottom-of-file late imports
  `from ._trio import trio_proc` and
  `from ._mp import mp_proc` with a one-line
  comment explaining why (circular dep)
- `_methods` dict unchanged structurally; still binds
  `'trio' → trio_proc`, `'mp_spawn' → mp_proc`,
  `'mp_forkserver' → mp_proc`

### `tractor/spawn/__init__.py` (modified)

> `git diff 61a73ba~1..61a73ba -- tractor/spawn/__init__.py`

Docstring rewrite only — no code. New text describes
the per-backend `Layout` with `- ._spawn` (core),
`- ._trio` (default), `- ._mp` (two mp variants), and
reiterates the "no eager import" NOTE.

### `tractor/_testing/pytest.py` (modified)

> `git diff 61a73ba~1..61a73ba -- tractor/_testing/pytest.py`

Two-line logical change (plus a comment):
- `from typing import` grows `get_args`
- `pytest_generate_tests()` replaces the hardcoded
  `('mp_spawn','mp_forkserver','trio')` tuple with
  `get_args(SpawnMethodKey)` via lazy import
  `from tractor.spawn._spawn import SpawnMethodKey`
  inside the function body (matching the project's
  existing lazy-import convention for the spawn
  module)

## Non-code output (verbatim)

### Design rationale for the file split

From my Phase A plan turn (already captured in
`20260417T034918Z_9703210_prompt_io.md`):
- `_spawn.py` stays as the "core" rather than renamed
  to `_core.py` so external imports
  (`_testing/pytest.py:228` imports
  `try_set_start_method` from `tractor.spawn._spawn`)
  keep working without churn.
- Per-backend extraction chosen over alternatives
  (e.g. splitting generic helpers further) because
  the immediate motivation is hosting a 3rd
  `_subint.py` sibling cleanly in Phase B.

### Sanity-check output (verbatim terminal excerpts)

Post-extraction import probe:
```
extraction OK
_methods: {'trio': 'tractor.spawn._trio.trio_proc',
 'mp_spawn': 'tractor.spawn._mp.mp_proc',
 'mp_forkserver': 'tractor.spawn._mp.mp_proc'}
```

Spawn-relevant test subset (`tests/test_local.py
test_rpc.py test_spawning.py test_multi_program.py
test_discovery.py`):
```
37 passed, 1 skipped, 14 warnings in 55.37s
```

Full suite:
```
350 passed, 14 skipped, 7 xfailed, 1 xpassed,
151 warnings in 437.73s (0:07:17)
```

No regressions vs. `main`. One transient `-x`
early-stop `ERROR` on
`test_close_channel_explicit_remote_registrar[trio-True]`
was flaky (passed solo, passed without `-x`), not
caused by this refactor.

### Commit message

Also AI-drafted (via `/commit-msg`) — the 40-line
message on commit `61a73bae` itself. Not reproduced
here; see `git log -1 61a73bae`.


@ -0,0 +1,34 @@
This is your first big boi, "from GH issue" design, plan and
implement task.

We need to try and add sub-interpreter (aka subint) support per the
issue,

https://github.com/goodboy/tractor/issues/379

Part of this work should include,

- modularizing and thus better organizing the `.spawn.*` subpkg by
  breaking up various backends currently in `spawn._spawn` into
  separate submods where it makes sense.

- add a new `._subint` backend which tries to keep as much of the
  inter-process-isolation machinery in use as possible but with plans
  to optimize for localhost only benefits as offered by python's
  subints where possible.

  * utilizing localhost-only tpts like UDS, shm-buffers for
    performant IPC between subactors but also leveraging the benefits from
    the traditional OS subprocs mem/storage-domain isolation, linux
    namespaces where possible and as available/permitted by whatever
    is happening under the hood with how cpython implements subints.

  * default configuration should encourage state isolation as with
    subprocs, but explicit public escape hatches to enable rigorously
    managed shm channels for high performance apps.

- all tests should be (able to be) parameterized to use the new
  `subints` backend and enabled by flag in the harness using the
  existing `pytest --spawn-backend <spawn-backend>` support offered in
  the `open_root_actor()` and `.testing._pytest` harness override
  fixture.
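
The "localhost-only tpts like UDS" point reduces, at the primitive
level, to a duplex unix-domain channel; a minimal sketch (POSIX
only — tractor's real IPC layer is far more involved than this):

```python
import socket

# a connected pair of AF_UNIX stream sockets: a localhost-only
# duplex byte channel of the kind a subint backend could reuse
parent_sock, child_sock = socket.socketpair(
    socket.AF_UNIX,
    socket.SOCK_STREAM,
)
parent_sock.sendall(b'ping')
assert child_sock.recv(4) == b'ping'
parent_sock.close()
child_sock.close()
```

The shm-buffer escape hatch mentioned above would trade this
byte-stream copy for shared memory, which is where the "rigorously
managed" caveat comes in.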
@ -46,7 +46,7 @@ dependencies = [
   # built-in multi-actor `pdb` REPL
   "pdbp>=1.8.2,<2", # windows only (from `pdbp`)
   # typed IPC msging
-  "msgspec>=0.19.0",
+  "msgspec>=0.21.0",
   "cffi>=1.17.1",
   "bidict>=0.23.1",
   "multiaddr>=0.2.0",

@ -27,6 +27,7 @@ import inspect
 import platform
 from typing import (
     Callable,
+    get_args,
 )

 import pytest

@ -341,12 +342,11 @@ def pytest_generate_tests(
         # XXX some weird windows bug with `pytest`?
         spawn_backend = 'trio'

-        # TODO: maybe just use the literal `._spawn.SpawnMethodKey`?
-        assert spawn_backend in (
-            'mp_spawn',
-            'mp_forkserver',
-            'trio',
-        )
+        # drive the valid-backend set from the canonical `Literal` so
+        # adding a new spawn backend (e.g. `'subint'`) doesn't require
+        # touching the harness.
+        from tractor.spawn._spawn import SpawnMethodKey
+        assert spawn_backend in get_args(SpawnMethodKey)

     # NOTE: used-to-be-used-to dyanmically parametrize tests for when
     # you just passed --spawn-backend=`mp` on the cli, but now we expect

@ -15,12 +15,26 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

 '''
-Actor process spawning machinery using
-multiple backends (trio, multiprocessing).
+Actor process spawning machinery using multiple backends.

-NOTE: to avoid circular imports, this ``__init__``
-does NOT eagerly import submodules. Use direct
-module paths like ``tractor.spawn._spawn`` or
-``tractor.spawn._entry`` instead.
+- `._spawn`: cross-backend subactor-as-sub[proc|int] spawning
+  and supervision routines.
+
+Per-backend submodules (each exposes a single `*_proc()`
+coroutine registered in `_spawn._methods`):
+
+- `._trio`: the `trio`-native subprocess backend (default,
+  all platforms), spawns via `trio.lowlevel.open_process()`.
+
+- `._mp`: the stdlib `multiprocessing` backend variants — driven by
+  the `mp.context` bound to `_spawn._ctx`:
+  * `'mp_spawn'`,
+  * `'mp_forkserver'`
+
+Entry-point helpers live in `._entry`/`._mp_fixup_main`/
+`._forkserver_override`.
+
+NOTE: to avoid circular imports, this ``__init__`` does NOT eagerly
+import submodules.

 '''

@ -21,7 +21,6 @@ Sub-process entry points.
from __future__ import annotations
from functools import partial
import multiprocessing as mp
# import os
from typing import (
    Any,
    TYPE_CHECKING,

@ -0,0 +1,234 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

'''
The `multiprocessing` subprocess spawning backends (`spawn`
and `forkserver` variants).

Driven by the stdlib `multiprocessing` context selected via
`try_set_start_method()` in the `_spawn` core module, which
sets the module-global `_ctx` and `_spawn_method` read here.

'''
from __future__ import annotations
import multiprocessing as mp
from typing import (
    Any,
    TYPE_CHECKING,
)

import trio
from trio import TaskStatus

from tractor.runtime._state import (
    current_actor,
    is_main_process,
)
from tractor.log import get_logger
from tractor.discovery._addr import UnwrappedAddress
from tractor.runtime._portal import Portal
from tractor.runtime._runtime import Actor
from tractor._exceptions import ActorFailure
from ._entry import _mp_main
# NOTE: module-import (not from-import) so we dynamically see
# the *current* `_ctx` / `_spawn_method` values, which are mutated
# by `try_set_start_method()` after module load time.
from . import _spawn
from ._spawn import (
    cancel_on_completion,
    proc_waiter,
    soft_kill,
)


if TYPE_CHECKING:
    from tractor.ipc import (
        _server,
    )
    from tractor.runtime._supervise import ActorNursery


log = get_logger('tractor')


async def mp_proc(
    name: str,
    actor_nursery: ActorNursery,  # type: ignore  # noqa
    subactor: Actor,
    errors: dict[tuple[str, str], Exception],
    # passed through to actor main
    bind_addrs: list[UnwrappedAddress],
    parent_addr: UnwrappedAddress,
    _runtime_vars: dict[str, Any],  # serialized and sent to _child
    *,
    infect_asyncio: bool = False,
    task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
    proc_kwargs: dict[str, any] = {}

) -> None:

    # uggh zone
    try:
        from multiprocessing import semaphore_tracker  # type: ignore
        resource_tracker = semaphore_tracker
        resource_tracker._resource_tracker = resource_tracker._semaphore_tracker  # noqa
    except ImportError:
        # 3.8 introduces a more general version that also tracks shared mems
        from multiprocessing import resource_tracker  # type: ignore

    assert _spawn._ctx
    start_method = _spawn._ctx.get_start_method()
    if start_method == 'forkserver':

        from multiprocessing import forkserver  # type: ignore
        # XXX do our hackery on the stdlib to avoid multiple
        # forkservers (one at each subproc layer).
        fs = forkserver._forkserver
        curr_actor = current_actor()
        if is_main_process() and not curr_actor._forkserver_info:
            # if we're the "main" process start the forkserver
            # only once and pass its ipc info to downstream
            # children
            # forkserver.set_forkserver_preload(enable_modules)
            forkserver.ensure_running()
            fs_info = (
                fs._forkserver_address,  # type: ignore  # noqa
                fs._forkserver_alive_fd,  # type: ignore  # noqa
                getattr(fs, '_forkserver_pid', None),
                getattr(
                    resource_tracker._resource_tracker, '_pid', None),
                resource_tracker._resource_tracker._fd,
            )
        else:  # request to forkerserver to fork a new child
            assert curr_actor._forkserver_info
            fs_info = (
                fs._forkserver_address,  # type: ignore  # noqa
                fs._forkserver_alive_fd,  # type: ignore  # noqa
                fs._forkserver_pid,  # type: ignore  # noqa
                resource_tracker._resource_tracker._pid,
                resource_tracker._resource_tracker._fd,
            ) = curr_actor._forkserver_info
    else:
        # spawn method
        fs_info = (None, None, None, None, None)

    proc: mp.Process = _spawn._ctx.Process(  # type: ignore
        target=_mp_main,
        args=(
            subactor,
            bind_addrs,
            fs_info,
            _spawn._spawn_method,
            parent_addr,
            infect_asyncio,
        ),
        # daemon=True,
        name=name,
    )

    # `multiprocessing` only (since no async interface):
    # register the process before start in case we get a cancel
    # request before the actor has fully spawned - then we can wait
    # for it to fully come up before sending a cancel request
    actor_nursery._children[subactor.aid.uid] = (subactor, proc, None)

    proc.start()
    if not proc.is_alive():
        raise ActorFailure("Couldn't start sub-actor?")

    log.runtime(f"Started {proc}")

    ipc_server: _server.Server = actor_nursery._actor.ipc_server
    try:
        # wait for actor to spawn and connect back to us
        # channel should have handshake completed by the
        # local actor by the time we get a ref to it
        event, chan = await ipc_server.wait_for_peer(
            subactor.aid.uid,
        )

        # XXX: monkey patch poll API to match the ``subprocess`` API..
        # not sure why they don't expose this but kk.
        proc.poll = lambda: proc.exitcode  # type: ignore

        # except:
        # TODO: in the case we were cancelled before the sub-proc
        # registered itself back we must be sure to try and clean
        # any process we may have started.

        portal = Portal(chan)
        actor_nursery._children[subactor.aid.uid] = (subactor, proc, portal)

        # unblock parent task
        task_status.started(portal)

        # wait for ``ActorNursery`` block to signal that
        # subprocesses can be waited upon.
        # This is required to ensure synchronization
        # with user code that may want to manually await results
        # from nursery spawned sub-actors. We don't want the
        # containing nurseries here to collect results or error
        # while user code is still doing it's thing. Only after the
        # nursery block closes do we allow subactor results to be
        # awaited and reported upwards to the supervisor.
        with trio.CancelScope(shield=True):
            await actor_nursery._join_procs.wait()

        async with trio.open_nursery() as nursery:
            if portal in actor_nursery._cancel_after_result_on_exit:
                nursery.start_soon(
                    cancel_on_completion,
                    portal,
                    subactor,
                    errors
                )

            # This is a "soft" (cancellable) join/reap which
            # will remote cancel the actor on a ``trio.Cancelled``
            # condition.
            await soft_kill(
                proc,
                proc_waiter,
                portal
            )

            # cancel result waiter that may have been spawned in
            # tandem if not done already
            log.warning(
                "Cancelling existing result waiter task for "
                f"{subactor.aid.uid}")
            nursery.cancel_scope.cancel()

    finally:
        # hard reap sequence
        if proc.is_alive():
            log.cancel(f"Attempting to hard kill {proc}")
            with trio.move_on_after(0.1) as cs:
                cs.shield = True
                await proc_waiter(proc)

            if cs.cancelled_caught:
                proc.terminate()

        proc.join()
        log.debug(f"Joined {proc}")

        # pop child entry to indicate we are no longer managing subactor
        actor_nursery._children.pop(subactor.aid.uid)

        # TODO: prolly report to ``mypy`` how this causes all sorts of
        # false errors..
        # subactor, proc, portal = actor_nursery._children.pop(subactor.uid)

@ -15,12 +15,12 @@
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.

 """
-Machinery for actor process spawning using multiple backends.
+Top level routines & machinery for actor-as-process/subint spawning
+over multiple backends.

 """
 from __future__ import annotations
 import multiprocessing as mp
-import sys
 import platform
 from typing import (
     Any,

@@ -34,32 +34,19 @@ from typing import (
import trio
from trio import TaskStatus

from ..devx import (
    debug,
    pformat as _pformat
)
from ..devx import debug
from tractor.runtime._state import (
    current_actor,
    is_main_process,
    is_root_process,
    debug_mode,
    _runtime_vars,
)
from tractor.log import get_logger
from tractor.discovery._addr import UnwrappedAddress
from tractor.runtime._portal import Portal
from tractor.runtime._runtime import Actor
from ._entry import _mp_main
from tractor._exceptions import ActorFailure
from tractor.msg import (
    types as msgtypes,
    pretty_struct,
)
from tractor.msg import types as msgtypes


if TYPE_CHECKING:
    from tractor.ipc import (
        _server,
        Channel,
    )
    from tractor.runtime._supervise import ActorNursery
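The `if TYPE_CHECKING:` block above keeps annotation-only imports (like `ActorNursery`) out of the runtime import graph, which is the standard way to break import cycles between tightly coupled modules. A tiny stdlib-only sketch of the pattern, using `Decimal` as a stand-in for something like `ActorNursery`:

```python
# Names imported under TYPE_CHECKING never load at runtime; with
# `from __future__ import annotations` every annotation is a lazy
# string, so the checker-only name is safe to use in signatures.
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # never executed at runtime; mypy/pyright still resolve the name
    from decimal import Decimal  # stand-in for e.g. `ActorNursery`


def total(amounts: list[Decimal]) -> Decimal:
    # the annotation stays a string at runtime, so no import is needed
    return sum(amounts)


print(TYPE_CHECKING)  # False at runtime
```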
@@ -445,398 +432,11 @@ async def new_proc(
    )


async def trio_proc(
    name: str,
    actor_nursery: ActorNursery,
    subactor: Actor,
    errors: dict[tuple[str, str], Exception],

    # passed through to actor main
    bind_addrs: list[UnwrappedAddress],
    parent_addr: UnwrappedAddress,
    _runtime_vars: dict[str, Any],  # serialized and sent to _child
    *,
    infect_asyncio: bool = False,
    task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
    proc_kwargs: dict[str, any] = {}

) -> None:
    '''
    Create a new ``Process`` using a "spawn method" as (configured using
    ``try_set_start_method()``).

    This routine should be started in a actor runtime task and the logic
    here is to be considered the core supervision strategy.

    '''
    spawn_cmd = [
        sys.executable,
        "-m",
        # Hardcode this (instead of using ``_child.__name__`` to avoid a
        # double import warning: https://stackoverflow.com/a/45070583
        "tractor._child",
        # We provide the child's unique identifier on this exec/spawn
        # line for debugging purposes when viewing the process tree from
        # the OS; it otherwise can be passed via the parent channel if
        # we prefer in the future (for privacy).
        "--uid",
        # TODO, how to pass this over "wire" encodings like
        # cmdline args?
        # -[ ] maybe we can add an `msgtypes.Aid.min_tuple()` ?
        str(subactor.aid.uid),
        # Address the child must connect to on startup
        "--parent_addr",
        str(parent_addr)
    ]

    if subactor.loglevel:
        spawn_cmd += [
            "--loglevel",
            subactor.loglevel
        ]
    # Tell child to run in guest mode on top of ``asyncio`` loop
    if infect_asyncio:
        spawn_cmd.append("--asyncio")

    cancelled_during_spawn: bool = False
    proc: trio.Process|None = None
    ipc_server: _server.Server = actor_nursery._actor.ipc_server
    try:
        try:
            proc: trio.Process = await trio.lowlevel.open_process(spawn_cmd, **proc_kwargs)
            log.runtime(
                f'Started new child subproc\n'
                f'(>\n'
                f' |_{proc}\n'
            )

            # wait for actor to spawn and connect back to us
            # channel should have handshake completed by the
            # local actor by the time we get a ref to it
            event, chan = await ipc_server.wait_for_peer(
                subactor.aid.uid
            )

        except trio.Cancelled:
            cancelled_during_spawn = True
            # we may cancel before the child connects back in which
            # case avoid clobbering the pdb tty.
            if debug_mode():
                with trio.CancelScope(shield=True):
                    # don't clobber an ongoing pdb
                    if is_root_process():
                        await debug.maybe_wait_for_debugger()

                    elif proc is not None:
                        async with debug.acquire_debug_lock(
                            subactor_uid=subactor.aid.uid
                        ):
                            # soft wait on the proc to terminate
                            with trio.move_on_after(0.5):
                                await proc.wait()
            raise

        # a sub-proc ref **must** exist now
        assert proc

        portal = Portal(chan)
        actor_nursery._children[subactor.aid.uid] = (
            subactor,
            proc,
            portal,
        )

        # send a "spawning specification" which configures the
        # initial runtime state of the child.
        sspec = msgtypes.SpawnSpec(
            _parent_main_data=subactor._parent_main_data,
            enable_modules=subactor.enable_modules,
            reg_addrs=subactor.reg_addrs,
            bind_addrs=bind_addrs,
            _runtime_vars=_runtime_vars,
        )
        log.runtime(
            f'Sending spawn spec to child\n'
            f'{{}}=> {chan.aid.reprol()!r}\n'
            f'\n'
            f'{pretty_struct.pformat(sspec)}\n'
        )
        await chan.send(sspec)

        # track subactor in current nursery
        curr_actor: Actor = current_actor()
        curr_actor._actoruid2nursery[subactor.aid.uid] = actor_nursery

        # resume caller at next checkpoint now that child is up
        task_status.started(portal)

        # wait for ActorNursery.wait() to be called
        with trio.CancelScope(shield=True):
            await actor_nursery._join_procs.wait()

        async with trio.open_nursery() as nursery:
            if portal in actor_nursery._cancel_after_result_on_exit:
                nursery.start_soon(
                    cancel_on_completion,
                    portal,
                    subactor,
                    errors
                )

            # This is a "soft" (cancellable) join/reap which
            # will remote cancel the actor on a ``trio.Cancelled``
            # condition.
            await soft_kill(
                proc,
                trio.Process.wait,  # XXX, uses `pidfd_open()` below.
                portal
            )

            # cancel result waiter that may have been spawned in
            # tandem if not done already
            log.cancel(
                'Cancelling portal result reaper task\n'
                f'c)> {subactor.aid.reprol()!r}\n'
            )
            nursery.cancel_scope.cancel()

    finally:
        # XXX NOTE XXX: The "hard" reap since no actor zombies are
        # allowed! Do this **after** cancellation/teardown to avoid
        # killing the process too early.
        if proc:
            reap_repr: str = _pformat.nest_from_op(
                input_op='>x)',
                text=subactor.pformat(),
            )
            log.cancel(
                f'Hard reap sequence starting for subactor\n'
                f'{reap_repr}'
            )

            with trio.CancelScope(shield=True):
                # don't clobber an ongoing pdb
                if cancelled_during_spawn:
                    # Try again to avoid TTY clobbering.
                    async with debug.acquire_debug_lock(
                        subactor_uid=subactor.aid.uid
                    ):
                        with trio.move_on_after(0.5):
                            await proc.wait()

                await debug.maybe_wait_for_debugger(
                    child_in_debug=_runtime_vars.get(
                        '_debug_mode', False
                    ),
                    header_msg=(
                        'Delaying subproc reaper while debugger locked..\n'
                    ),

                    # TODO: need a diff value then default?
                    # poll_steps=9999999,
                )
                # TODO: solve the following issue where we need
                # to do a similar wait like this but in an
                # "intermediary" parent actor that itself isn't
                # in debug but has a child that is, and we need
                # to hold off on relaying SIGINT until that child
                # is complete.
                # https://github.com/goodboy/tractor/issues/320
                # -[ ] we need to handle non-root parent-actors specially
                # by somehow determining if a child is in debug and then
                # avoiding cancel/kill of said child by this
                # (intermediary) parent until such a time as the root says
                # the pdb lock is released and we are good to tear down
                # (our children)..
                #
                # -[ ] so maybe something like this where we try to
                # acquire the lock and get notified of who has it,
                # check that uid against our known children?
                # this_uid: tuple[str, str] = current_actor().uid
                # await debug.acquire_debug_lock(this_uid)

                if proc.poll() is None:
                    log.cancel(f"Attempting to hard kill {proc}")
                    await hard_kill(proc)

                log.debug(f"Joined {proc}")
        else:
            log.warning('Nursery cancelled before sub-proc started')

        if not cancelled_during_spawn:
            # pop child entry to indicate we no longer managing this
            # subactor
            actor_nursery._children.pop(subactor.aid.uid)


async def mp_proc(
    name: str,
    actor_nursery: ActorNursery,  # type: ignore  # noqa
    subactor: Actor,
    errors: dict[tuple[str, str], Exception],
    # passed through to actor main
    bind_addrs: list[UnwrappedAddress],
    parent_addr: UnwrappedAddress,
    _runtime_vars: dict[str, Any],  # serialized and sent to _child
    *,
    infect_asyncio: bool = False,
    task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
    proc_kwargs: dict[str, any] = {}

) -> None:

    # uggh zone
    try:
        from multiprocessing import semaphore_tracker  # type: ignore
        resource_tracker = semaphore_tracker
        resource_tracker._resource_tracker = resource_tracker._semaphore_tracker  # noqa
    except ImportError:
        # 3.8 introduces a more general version that also tracks shared mems
        from multiprocessing import resource_tracker  # type: ignore

    assert _ctx
    start_method = _ctx.get_start_method()
    if start_method == 'forkserver':

        from multiprocessing import forkserver  # type: ignore
        # XXX do our hackery on the stdlib to avoid multiple
        # forkservers (one at each subproc layer).
        fs = forkserver._forkserver
        curr_actor = current_actor()
        if is_main_process() and not curr_actor._forkserver_info:
            # if we're the "main" process start the forkserver
            # only once and pass its ipc info to downstream
            # children
            # forkserver.set_forkserver_preload(enable_modules)
            forkserver.ensure_running()
            fs_info = (
                fs._forkserver_address,  # type: ignore  # noqa
                fs._forkserver_alive_fd,  # type: ignore  # noqa
                getattr(fs, '_forkserver_pid', None),
                getattr(
                    resource_tracker._resource_tracker, '_pid', None),
                resource_tracker._resource_tracker._fd,
            )
        else:  # request to forkserver to fork a new child
            assert curr_actor._forkserver_info
            fs_info = (
                fs._forkserver_address,  # type: ignore  # noqa
                fs._forkserver_alive_fd,  # type: ignore  # noqa
                fs._forkserver_pid,  # type: ignore  # noqa
                resource_tracker._resource_tracker._pid,
                resource_tracker._resource_tracker._fd,
            ) = curr_actor._forkserver_info
    else:
        # spawn method
        fs_info = (None, None, None, None, None)

    proc: mp.Process = _ctx.Process(  # type: ignore
        target=_mp_main,
        args=(
            subactor,
            bind_addrs,
            fs_info,
            _spawn_method,
            parent_addr,
            infect_asyncio,
        ),
        # daemon=True,
        name=name,
    )

    # `multiprocessing` only (since no async interface):
    # register the process before start in case we get a cancel
    # request before the actor has fully spawned - then we can wait
    # for it to fully come up before sending a cancel request
    actor_nursery._children[subactor.aid.uid] = (subactor, proc, None)

    proc.start()
    if not proc.is_alive():
        raise ActorFailure("Couldn't start sub-actor?")

    log.runtime(f"Started {proc}")

    ipc_server: _server.Server = actor_nursery._actor.ipc_server
    try:
        # wait for actor to spawn and connect back to us
        # channel should have handshake completed by the
        # local actor by the time we get a ref to it
        event, chan = await ipc_server.wait_for_peer(
            subactor.aid.uid,
        )

        # XXX: monkey patch poll API to match the ``subprocess`` API..
        # not sure why they don't expose this but kk.
        proc.poll = lambda: proc.exitcode  # type: ignore

        # except:
        # TODO: in the case we were cancelled before the sub-proc
        # registered itself back we must be sure to try and clean
        # any process we may have started.

        portal = Portal(chan)
        actor_nursery._children[subactor.aid.uid] = (subactor, proc, portal)

        # unblock parent task
        task_status.started(portal)

        # wait for ``ActorNursery`` block to signal that
        # subprocesses can be waited upon.
        # This is required to ensure synchronization
        # with user code that may want to manually await results
        # from nursery spawned sub-actors. We don't want the
        # containing nurseries here to collect results or error
        # while user code is still doing it's thing. Only after the
        # nursery block closes do we allow subactor results to be
        # awaited and reported upwards to the supervisor.
        with trio.CancelScope(shield=True):
            await actor_nursery._join_procs.wait()

        async with trio.open_nursery() as nursery:
            if portal in actor_nursery._cancel_after_result_on_exit:
                nursery.start_soon(
                    cancel_on_completion,
                    portal,
                    subactor,
                    errors
                )

            # This is a "soft" (cancellable) join/reap which
            # will remote cancel the actor on a ``trio.Cancelled``
            # condition.
            await soft_kill(
                proc,
                proc_waiter,
                portal
            )

            # cancel result waiter that may have been spawned in
            # tandem if not done already
            log.warning(
                "Cancelling existing result waiter task for "
                f"{subactor.aid.uid}")
            nursery.cancel_scope.cancel()

    finally:
        # hard reap sequence
        if proc.is_alive():
            log.cancel(f"Attempting to hard kill {proc}")
            with trio.move_on_after(0.1) as cs:
                cs.shield = True
                await proc_waiter(proc)

            if cs.cancelled_caught:
                proc.terminate()

        proc.join()
        log.debug(f"Joined {proc}")

        # pop child entry to indicate we are no longer managing subactor
        actor_nursery._children.pop(subactor.aid.uid)

        # TODO: prolly report to ``mypy`` how this causes all sorts of
        # false errors..
        # subactor, proc, portal = actor_nursery._children.pop(subactor.uid)


# NOTE: bottom-of-module to avoid a circular import since the
# backend submodules pull `cancel_on_completion`/`soft_kill`/
# `hard_kill`/`proc_waiter` from this module.
from ._trio import trio_proc
from ._mp import mp_proc


# proc spawning backend target map
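The bottom-of-module imports above dodge the `_spawn.py` <-> backend-submodule cycle; the "proc spawning backend target map" can lean on the same deferred-import trick. A hypothetical, stdlib-only sketch of such a lazily-resolved registry (the `'module:attr'` targets here are stand-ins, not tractor's real backends):

```python
# Map backend keys to "module:attr" strings and only import at lookup
# time, so neither side of the import cycle loads the other eagerly.
import importlib

# stand-in targets for the demo; a real map would point at
# e.g. '._trio:trio_proc' / '._mp:mp_proc'
_SPAWN_BACKENDS: dict[str, str] = {
    'trio': 'json:dumps',
    'mp': 'math:prod',
}


def get_spawner(key: str):
    '''Resolve a backend key to its (lazily imported) callable.'''
    modname, attr = _SPAWN_BACKENDS[key].split(':')
    return getattr(importlib.import_module(modname), attr)


print(get_spawner('trio')({'x': 1}))
```

The module is only imported on first use, so registering a backend key never forces its dependencies to load.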
tractor/spawn/_trio.py
@@ -0,0 +1,294 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.

'''
The `trio`-subprocess backend; the default for cross-platform.

Spawns sub-actors as fresh OS processes driven by
`trio.lowlevel.open_process()`.

'''
from __future__ import annotations
import sys
from typing import (
    Any,
    TYPE_CHECKING,
)

import trio
from trio import TaskStatus

from ..devx import (
    debug,
    pformat as _pformat,
)
from tractor.runtime._state import (
    current_actor,
    is_root_process,
    debug_mode,
    get_runtime_vars,
)
from tractor.log import get_logger
from tractor.discovery._addr import UnwrappedAddress
from tractor.runtime._portal import Portal
from tractor.runtime._runtime import Actor
from tractor.msg import (
    types as msgtypes,
    pretty_struct,
)
from ._spawn import (
    cancel_on_completion,
    hard_kill,
    soft_kill,
)


if TYPE_CHECKING:
    from tractor.ipc import (
        _server,
    )
    from tractor.runtime._supervise import ActorNursery


log = get_logger('tractor')


async def trio_proc(
    name: str,
    actor_nursery: ActorNursery,
    subactor: Actor,
    errors: dict[tuple[str, str], Exception],

    # passed through to actor main
    bind_addrs: list[UnwrappedAddress],
    parent_addr: UnwrappedAddress,
    _runtime_vars: dict[str, Any],  # serialized and sent to _child
    *,
    infect_asyncio: bool = False,
    task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
    proc_kwargs: dict[str, Any] = {}

) -> None:
    '''
    Create a new ``Process`` using a "spawn method" as (configured using
    ``try_set_start_method()``).

    This routine should be started in an actor runtime task and the logic
    here is to be considered the core supervision strategy.

    '''
    spawn_cmd = [
        sys.executable,
        "-m",
        # Hardcode this (instead of using ``_child.__name__`` to avoid a
        # double import warning: https://stackoverflow.com/a/45070583
        "tractor._child",
        # We provide the child's unique identifier on this exec/spawn
        # line for debugging purposes when viewing the process tree from
        # the OS; it otherwise can be passed via the parent channel if
        # we prefer in the future (for privacy).
        "--uid",
        # TODO, how to pass this over "wire" encodings like
        # cmdline args?
        # -[ ] maybe we can add an `msgtypes.Aid.min_tuple()` ?
        str(subactor.aid.uid),
        # Address the child must connect to on startup
        "--parent_addr",
        str(parent_addr)
    ]

    if subactor.loglevel:
        spawn_cmd += [
            "--loglevel",
            subactor.loglevel
        ]
    # Tell child to run in guest mode on top of ``asyncio`` loop
    if infect_asyncio:
        spawn_cmd.append("--asyncio")

    cancelled_during_spawn: bool = False
    proc: trio.Process|None = None
    ipc_server: _server.Server = actor_nursery._actor.ipc_server
    try:
        try:
            proc: trio.Process = await trio.lowlevel.open_process(spawn_cmd, **proc_kwargs)
            log.runtime(
                f'Started new child subproc\n'
                f'(>\n'
                f' |_{proc}\n'
            )

            # wait for actor to spawn and connect back to us
            # channel should have handshake completed by the
            # local actor by the time we get a ref to it
            event, chan = await ipc_server.wait_for_peer(
                subactor.aid.uid
            )

        except trio.Cancelled:
            cancelled_during_spawn = True
            # we may cancel before the child connects back in which
            # case avoid clobbering the pdb tty.
            if debug_mode():
                with trio.CancelScope(shield=True):
                    # don't clobber an ongoing pdb
                    if is_root_process():
                        await debug.maybe_wait_for_debugger()

                    elif proc is not None:
                        async with debug.acquire_debug_lock(
                            subactor_uid=subactor.aid.uid
                        ):
                            # soft wait on the proc to terminate
                            with trio.move_on_after(0.5):
                                await proc.wait()
            raise

        # a sub-proc ref **must** exist now
        assert proc

        portal = Portal(chan)
        actor_nursery._children[subactor.aid.uid] = (
            subactor,
            proc,
            portal,
        )

        # send a "spawning specification" which configures the
        # initial runtime state of the child.
        sspec = msgtypes.SpawnSpec(
            _parent_main_data=subactor._parent_main_data,
            enable_modules=subactor.enable_modules,
            reg_addrs=subactor.reg_addrs,
            bind_addrs=bind_addrs,
            _runtime_vars=_runtime_vars,
        )
        log.runtime(
            f'Sending spawn spec to child\n'
            f'{{}}=> {chan.aid.reprol()!r}\n'
            f'\n'
            f'{pretty_struct.pformat(sspec)}\n'
        )
        await chan.send(sspec)

        # track subactor in current nursery
        curr_actor: Actor = current_actor()
        curr_actor._actoruid2nursery[subactor.aid.uid] = actor_nursery

        # resume caller at next checkpoint now that child is up
        task_status.started(portal)

        # wait for ActorNursery.wait() to be called
        with trio.CancelScope(shield=True):
            await actor_nursery._join_procs.wait()

        async with trio.open_nursery() as nursery:
            if portal in actor_nursery._cancel_after_result_on_exit:
                nursery.start_soon(
                    cancel_on_completion,
                    portal,
                    subactor,
                    errors
                )

            # This is a "soft" (cancellable) join/reap which
            # will remote cancel the actor on a ``trio.Cancelled``
            # condition.
            await soft_kill(
                proc,
                trio.Process.wait,  # XXX, uses `pidfd_open()` below.
                portal
            )

            # cancel result waiter that may have been spawned in
            # tandem if not done already
            log.cancel(
                'Cancelling portal result reaper task\n'
                f'c)> {subactor.aid.reprol()!r}\n'
            )
            nursery.cancel_scope.cancel()

    finally:
        # XXX NOTE XXX: The "hard" reap since no actor zombies are
        # allowed! Do this **after** cancellation/teardown to avoid
        # killing the process too early.
        if proc:
            reap_repr: str = _pformat.nest_from_op(
                input_op='>x)',
                text=subactor.pformat(),
            )
            log.cancel(
                f'Hard reap sequence starting for subactor\n'
                f'{reap_repr}'
            )

            with trio.CancelScope(shield=True):
                # don't clobber an ongoing pdb
                if cancelled_during_spawn:
                    # Try again to avoid TTY clobbering.
                    async with debug.acquire_debug_lock(
                        subactor_uid=subactor.aid.uid
                    ):
                        with trio.move_on_after(0.5):
                            await proc.wait()

                await debug.maybe_wait_for_debugger(
                    # NOTE: use the child's `_runtime_vars` (the
                    # fn-arg dict shipped via `SpawnSpec`), NOT
                    # `get_runtime_vars()` which returns the
                    # *parent's* live runtime state.
                    child_in_debug=_runtime_vars.get(
                        '_debug_mode', False
                    ),
                    header_msg=(
                        'Delaying subproc reaper while debugger locked..\n'
                    ),

                    # TODO: need a diff value than default?
                    # poll_steps=9999999,
                )
                # TODO: solve the following issue where we need
                # to do a similar wait like this but in an
                # "intermediary" parent actor that itself isn't
                # in debug but has a child that is, and we need
                # to hold off on relaying SIGINT until that child
                # is complete.
                # https://github.com/goodboy/tractor/issues/320
                # -[ ] we need to handle non-root parent-actors specially
                # by somehow determining if a child is in debug and then
                # avoiding cancel/kill of said child by this
                # (intermediary) parent until such a time as the root says
                # the pdb lock is released and we are good to tear down
                # (our children)..
                #
                # -[ ] so maybe something like this where we try to
                # acquire the lock and get notified of who has it,
                # check that uid against our known children?
                # this_uid: tuple[str, str] = current_actor().uid
                # await debug.acquire_debug_lock(this_uid)

                if proc.poll() is None:
                    log.cancel(f"Attempting to hard kill {proc}")
                    await hard_kill(proc)

                log.debug(f"Joined {proc}")
        else:
            log.warning('Nursery cancelled before sub-proc started')

        if not cancelled_during_spawn:
            # pop child entry to indicate we are no longer managing
            # this subactor
            actor_nursery._children.pop(subactor.aid.uid)
uv.lock
@@ -277,24 +277,26 @@ wheels = [
|
|||
|
||||
[[package]]
|
||||
name = "msgspec"
|
||||
version = "0.19.0"
|
||||
version = "0.21.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/cf/9b/95d8ce458462b8b71b8a70fa94563b2498b89933689f3a7b8911edfae3d7/msgspec-0.19.0.tar.gz", hash = "sha256:604037e7cd475345848116e89c553aa9a233259733ab51986ac924ab1b976f8e", size = 216934, upload-time = "2024-12-27T17:40:28.597Z" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/e3/60/f79b9b013a16fa3a58350c9295ddc6789f2e335f36ea61ed10a21b215364/msgspec-0.21.1.tar.gz", hash = "sha256:2313508e394b0d208f8f56892ca9b2799e2561329de9763b19619595a6c0f72c", size = 319193, upload-time = "2026-04-12T21:44:50.394Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/b2/5f/a70c24f075e3e7af2fae5414c7048b0e11389685b7f717bb55ba282a34a7/msgspec-0.19.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f98bd8962ad549c27d63845b50af3f53ec468b6318400c9f1adfe8b092d7b62f", size = 190485, upload-time = "2024-12-27T17:39:44.974Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/89/b0/1b9763938cfae12acf14b682fcf05c92855974d921a5a985ecc197d1c672/msgspec-0.19.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:43bbb237feab761b815ed9df43b266114203f53596f9b6e6f00ebd79d178cdf2", size = 183910, upload-time = "2024-12-27T17:39:46.401Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/87/81/0c8c93f0b92c97e326b279795f9c5b956c5a97af28ca0fbb9fd86c83737a/msgspec-0.19.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4cfc033c02c3e0aec52b71710d7f84cb3ca5eb407ab2ad23d75631153fdb1f12", size = 210633, upload-time = "2024-12-27T17:39:49.099Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d0/ef/c5422ce8af73928d194a6606f8ae36e93a52fd5e8df5abd366903a5ca8da/msgspec-0.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d911c442571605e17658ca2b416fd8579c5050ac9adc5e00c2cb3126c97f73bc", size = 213594, upload-time = "2024-12-27T17:39:51.204Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/19/2b/4137bc2ed45660444842d042be2cf5b18aa06efd2cda107cff18253b9653/msgspec-0.19.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:757b501fa57e24896cf40a831442b19a864f56d253679f34f260dcb002524a6c", size = 214053, upload-time = "2024-12-27T17:39:52.866Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9d/e6/8ad51bdc806aac1dc501e8fe43f759f9ed7284043d722b53323ea421c360/msgspec-0.19.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:5f0f65f29b45e2816d8bded36e6b837a4bf5fb60ec4bc3c625fa2c6da4124537", size = 219081, upload-time = "2024-12-27T17:39:55.142Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b1/ef/27dd35a7049c9a4f4211c6cd6a8c9db0a50647546f003a5867827ec45391/msgspec-0.19.0-cp312-cp312-win_amd64.whl", hash = "sha256:067f0de1c33cfa0b6a8206562efdf6be5985b988b53dd244a8e06f993f27c8c0", size = 187467, upload-time = "2024-12-27T17:39:56.531Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3c/cb/2842c312bbe618d8fefc8b9cedce37f773cdc8fa453306546dba2c21fd98/msgspec-0.19.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f12d30dd6266557aaaf0aa0f9580a9a8fbeadfa83699c487713e355ec5f0bd86", size = 190498, upload-time = "2024-12-27T17:40:00.427Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/58/95/c40b01b93465e1a5f3b6c7d91b10fb574818163740cc3acbe722d1e0e7e4/msgspec-0.19.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:82b2c42c1b9ebc89e822e7e13bbe9d17ede0c23c187469fdd9505afd5a481314", size = 183950, upload-time = "2024-12-27T17:40:04.219Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e8/f0/5b764e066ce9aba4b70d1db8b087ea66098c7c27d59b9dd8a3532774d48f/msgspec-0.19.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:19746b50be214a54239aab822964f2ac81e38b0055cca94808359d779338c10e", size = 210647, upload-time = "2024-12-27T17:40:05.606Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9d/87/bc14f49bc95c4cb0dd0a8c56028a67c014ee7e6818ccdce74a4862af259b/msgspec-0.19.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60ef4bdb0ec8e4ad62e5a1f95230c08efb1f64f32e6e8dd2ced685bcc73858b5", size = 213563, upload-time = "2024-12-27T17:40:10.516Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/53/2f/2b1c2b056894fbaa975f68f81e3014bb447516a8b010f1bed3fb0e016ed7/msgspec-0.19.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac7f7c377c122b649f7545810c6cd1b47586e3aa3059126ce3516ac7ccc6a6a9", size = 213996, upload-time = "2024-12-27T17:40:12.244Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/5a/4cd408d90d1417e8d2ce6a22b98a6853c1b4d7cb7669153e4424d60087f6/msgspec-0.19.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a5bc1472223a643f5ffb5bf46ccdede7f9795078194f14edd69e3aab7020d327", size = 219087, upload-time = "2024-12-27T17:40:14.881Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/23/d8/f15b40611c2d5753d1abb0ca0da0c75348daf1252220e5dda2867bd81062/msgspec-0.19.0-cp313-cp313-win_amd64.whl", hash = "sha256:317050bc0f7739cb30d257ff09152ca309bf5a369854bbf1e57dffc310c1f20f", size = 187432, upload-time = "2024-12-27T17:40:16.256Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6e/cf/317224852c00248c620a9bcf4b26e2e4ab8afd752f18d2a6ef73ebd423b6/msgspec-0.21.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d4248cf0b6129b7d230eacd493c17cc2d4f3989f3bb7f633a928a85b7dcfa251", size = 196188, upload-time = "2026-04-12T21:44:07.181Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6d/81/074612945c0666078f7366f40000013de9f6ba687491d450df699bceebc9/msgspec-0.21.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5102c7e9b3acff82178449b85006d96310e690291bb1ea0142f1b24bcb8aabcb", size = 188473, upload-time = "2026-04-12T21:44:08.736Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8a/37/655101799590bcc5fddb2bd3fe0e6194e816c2d1da7c361725f5eb89a910/msgspec-0.21.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:846758412e9518252b2ac9bffd6f0e54d9ff614f5f9488df7749f81ff5c80920", size = 218871, upload-time = "2026-04-12T21:44:09.917Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b5/d1/d4cd9fe89c7d400d7a18f86ccc94daa3f0927f53558846fcb60791dce5d6/msgspec-0.21.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:21995e74b5c598c2e004110ad66ec7f1b8c20bf2bcf3b2de8fd9a3094422d3ff", size = 225025, upload-time = "2026-04-12T21:44:11.191Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/24/bf/e20549e602b9edccadeeff98760345a416f9cce846a657e8b18e3396b212/msgspec-0.21.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6129f0cca52992e898fd5344187f7c8127b63d810b2fd73e36fca73b4c6475ee", size = 222672, upload-time = "2026-04-12T21:44:12.481Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b4/68/04d7a8f0f786545cf9b8c280c57aa6befb5977af6e884b8b54191cbe44b3/msgspec-0.21.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ef3ec2296248d1f8b9231acb051b6d471dfde8f21819e86c9adaaa9f42918521", size = 227303, upload-time = "2026-04-12T21:44:13.709Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cc/4d/619866af2840875be408047bf9e70ceafbae6ab50660de7134ed1b25eb86/msgspec-0.21.1-cp312-cp312-win_amd64.whl", hash = "sha256:d4ab834a054c6f0cbeef6df9e7e1b33d5f1bc7b86dea1d2fd7cad003873e783d", size = 190017, upload-time = "2026-04-12T21:44:14.977Z" },
{ url = "https://files.pythonhosted.org/packages/5e/2e/a8f9eca8fd00e097d7a9e99ba8a4685db994494448e3d4f0b7f6e9a3c0f7/msgspec-0.21.1-cp312-cp312-win_arm64.whl", hash = "sha256:628aaa35c74950a8c59da330d7e98917e1c7188f983745782027748ee4ca573e", size = 175345, upload-time = "2026-04-12T21:44:16.431Z" },
{ url = "https://files.pythonhosted.org/packages/7e/74/f11ede02839b19ff459f88e3145df5d711626ca84da4e23520cebf819367/msgspec-0.21.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:764173717a01743f007e9f74520ed281f24672c604514f7d76c1c3a10e8edb66", size = 196176, upload-time = "2026-04-12T21:44:17.613Z" },
{ url = "https://files.pythonhosted.org/packages/bb/40/4476c1bd341418a046c4955aff632ec769315d1e3cb94e6acf86d461f9ed/msgspec-0.21.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:344c7cd0eaed1fb81d7959f99100ef71ec9b536881a376f11b9a6c4803365697", size = 188524, upload-time = "2026-04-12T21:44:18.815Z" },
{ url = "https://files.pythonhosted.org/packages/ca/d9/9e9d7d7e5061b47540d03d640fab9b3965ba7ae49c1b2154861c8f007518/msgspec-0.21.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:48943e278b3854c2f89f955ddc6f9f430d3f0784b16e47d10604ee0463cd21f5", size = 218880, upload-time = "2026-04-12T21:44:20.028Z" },
{ url = "https://files.pythonhosted.org/packages/74/66/2bb344f34abb4b57e60c7c9c761994e0417b9718ec1460bf00c296f2a7ea/msgspec-0.21.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a9aa659ebb0101b1cbc31461212b87e341d961f0ab0772aaf068a99e001ec4aa", size = 225050, upload-time = "2026-04-12T21:44:21.577Z" },
{ url = "https://files.pythonhosted.org/packages/1a/84/7c1e412f76092277bf760cef12b7979d03314d259ab5b5cafde5d0c1722d/msgspec-0.21.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7b27d1a8ead2b6f5b0c4f2d07b8be1ccfcc041c8a0e704781edebe3ae13c484", size = 222713, upload-time = "2026-04-12T21:44:22.83Z" },
{ url = "https://files.pythonhosted.org/packages/4e/27/0bba04b2b4ef05f3d068429410bc71d2cea925f1596a8f41152cccd5edb8/msgspec-0.21.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:38fe93e86b61328fe544cb7fd871fad5a27c8734bfda90f65e5dbe288ae50f61", size = 227259, upload-time = "2026-04-12T21:44:24.11Z" },
{ url = "https://files.pythonhosted.org/packages/b0/2d/09574b0eea02fed2c2c1383dbaae2c7f79dc16dcd6487a886000afb5d7c4/msgspec-0.21.1-cp313-cp313-win_amd64.whl", hash = "sha256:8bc666331c35fcce05a7cd2d6221adbe0f6058f8e750711413d22793c080ac6a", size = 189857, upload-time = "2026-04-12T21:44:25.359Z" },
{ url = "https://files.pythonhosted.org/packages/46/34/105b1576ad182879914f0c821f17ee1d13abb165cb060448f96fe2aff078/msgspec-0.21.1-cp313-cp313-win_arm64.whl", hash = "sha256:42bb1241e0750c1a4346f2aa84db26c5ffd99a4eb3a954927d9f149ff2f42898", size = 175403, upload-time = "2026-04-12T21:44:26.608Z" },
]
[[package]]
@@ -678,7 +680,7 @@ requires-dist = [
{ name = "bidict", specifier = ">=0.23.1" },
{ name = "cffi", specifier = ">=1.17.1" },
{ name = "colorlog", specifier = ">=6.8.2,<7" },
- { name = "msgspec", specifier = ">=0.19.0" },
+ { name = "msgspec", specifier = ">=0.21.0" },
{ name = "multiaddr", specifier = ">=0.2.0" },
{ name = "pdbp", specifier = ">=1.8.2,<2" },
{ name = "platformdirs", specifier = ">=4.4.0" },