Share events to Python without copying via arrow crate #228
Conversation
@haixuanTao I updated the PR with the latest changes. For sending data, there are now two different methods.
Hi Philipp, thanks for making this possible! I don't really want to maintain both methods. Or is it only to not break backward compatibility? That part I can manage.
I think we can postpone this, especially if it requires adding another API endpoint. If we have zero-copy on read, the performance will already be really good.
So, I think that we don't have to support all data types. But one that can be interesting is a HashMap data type. This way we can encapsulate our metadata type as Arrow. We can even use something like serde_arrow and derive our struct without having to do all the ser/de implementation at the API level. One thing that I personally think is really important is to be able to forward the data shape, i.e. the number of columns and rows (see the sketch below). I don't know how hard those implementations are; we can postpone them if they're too time-consuming and we want to ship the Arrow API first. We can postpone all other data types for now.
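As a rough illustration (not the dora API; the `pack_with_shape` helper, field names, and packing scheme are assumptions), shape metadata could ride along with the payload in a pyarrow record batch:

```python
import numpy as np
import pyarrow as pa


def pack_with_shape(array_2d: np.ndarray) -> pa.RecordBatch:
    """Hypothetical helper: flatten a 2D array and attach its shape as schema metadata."""
    flat = pa.array(array_2d.ravel(), type=pa.float32())
    schema = pa.schema(
        [pa.field("data", pa.float32())],
        metadata={"rows": str(array_2d.shape[0]), "cols": str(array_2d.shape[1])},
    )
    return pa.RecordBatch.from_arrays([flat], schema=schema)


batch = pack_with_shape(np.zeros((480, 640), dtype=np.float32))
# The receiver can reconstruct the original shape from the schema metadata.
rows = int(batch.schema.metadata[b"rows"])
cols = int(batch.schema.metadata[b"cols"])
```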
I think that we can postpone this until we have more clarity on the Arrow feature set we feel comfortable with. This would avoid having to maintain three APIs while proving the concept.
I'm having issues running the Python operator example. It seems that the data is not being received by the receiving node, even though it's sent by the sending node. I have tried to use …
Thanks for testing! I'm not sure what's going wrong yet, but I see these issues too. To make debugging easier, I started to split off unrelated changes into separate PRs, starting with #246.
Don't bind the lifetime of the event to the `next` call anymore. This makes it possible to use the original event in Python, which does not support borrowed data or lifetimes. The drawback is that we no longer have the guarantee that an event is freed before the next call to `recv`.
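For example, on the Python side an event can now outlive the next receive call; a minimal sketch, assuming the bindings expose a `Node` class with a `next()` method as in the dora examples:

```python
from dora import Node  # assumption: the Python bindings export a Node class

node = Node()
previous = None

while True:
    event = node.next()  # sketch; the exact receive call may differ
    if event is None:
        break
    # The previous event stays valid here even though `next` was called again;
    # it is only freed once Python's garbage collector drops the reference.
    if previous is not None:
        print("still holding the previous event of type", previous["type"])
    previous = event
```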
…sical array Some applications might not need the data in arrow format. By creating a custom object, we can provide the data both as a classical `PyBytes` and as a zero-copy arrow array.
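A usage-level sketch of the two access paths; whether the wrapper object supports `bytes(...)` and `pa.array(...)` directly is an assumption for illustration:

```python
import pyarrow as pa


def handle_event(event: dict) -> None:
    value = event["value"]   # custom wrapper object around the shared buffer

    raw = bytes(value)       # classical path: copies the data into Python bytes
    arrow = pa.array(value)  # arrow path: intended to be a zero-copy UInt8 view

    print(len(raw), len(arrow))
```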
…ibility Sending bytes using arrow is possible through the new `send_output_arrow` method.
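A minimal sketch of the arrow sending path, assuming `send_output_arrow` takes the same `(output_id, data)` pair as the existing `send_output` closure:

```python
from typing import Callable

import numpy as np
import pyarrow as pa


def publish(send_output_arrow: Callable[[str, pa.Array], None]) -> None:
    # Wrap raw bytes as a UInt8 Arrow array (zero-copy view over the numpy buffer).
    payload = np.frombuffer(b"\x00\x01\x02\x03", dtype=np.uint8)
    send_output_arrow("frame", pa.array(payload))
```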
There might still be some pending drop tokens after the receiving end of the event stream was closed. So we don't want to break from the receiver thread directly. Instead we keep it running until the control channel signals that it expects no more drop tokens by closing the `finished_drop_tokens` channel. This happens either when all required drop tokens were received, or because of a timeout.
The Python garbage collection will drop them non-deterministically in the background. Also, the dropping cannot happen while the GIL is held by our wait code.
We only support `UInt8` arrays right now.
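Since only `UInt8` is supported, other dtypes currently have to be reinterpreted as raw bytes around the transfer; a numpy/pyarrow sketch (not part of the dora API):

```python
import numpy as np
import pyarrow as pa

floats = np.arange(6, dtype=np.float32)

# Sender side: view the float32 buffer as uint8 (no element conversion, no copy).
as_u8 = pa.array(floats.view(np.uint8))

# Receiver side: reinterpret the raw uint8 data buffer as float32 again.
restored = np.frombuffer(as_u8.buffers()[1], dtype=np.float32)[: len(floats)]
assert np.array_equal(floats, restored)
```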
This is mostly about convenience when working with arrow data types, as the data is still being copied once.
Thanks for testing! I just added support for sending arrow arrays from operators in 235f6da.
So, there is a weird error that prevents the user from reading `dora_event["value"]` two times:

```python
def on_event(
    self,
    dora_event: dict,
    send_output: Callable[[str, bytes], None],
) -> DoraStatus:
    if dora_event["type"] == "INPUT":
        length = dora_event["value"]
        length = dora_event["value"]  # <--- this will fail
    return DoraStatus.CONTINUE
```

The error is going to be the following: dora/apis/python/operator/src/lib.rs, line 31 in 235f6da.
We can either fix it or give a heads-up.
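Until then, the heads-up would essentially be: index `dora_event["value"]` once and reuse the reference, mirroring the snippet above (sketch only; the `DoraStatus` import from the `dora` package is assumed):

```python
from typing import Callable

from dora import DoraStatus  # assumption: DoraStatus is exported by the bindings


class Operator:
    def on_event(
        self,
        dora_event: dict,
        send_output: Callable[[str, bytes], None],
    ) -> DoraStatus:
        if dora_event["type"] == "INPUT":
            value = dora_event["value"]  # index the dict exactly once
            first_use = value            # reuse the same Python reference
            second_use = value           # instead of reading the dict again
        return DoraStatus.CONTINUE
```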
Thanks for testing! I pushed c1544f8 to fix this issue.
Just FYI, I have tested it again but I'm seeing slightly longer latency with the same code, replacing the node with an operator. I will investigate more later on.
edit (phil-opp): formatted as table

original data:

| Type | Message size | Latency |
| --- | --- | --- |
| dora Node | 8 | 514.0151 |
| dora Node | 64 | 518.3472199999999 |
| dora Node | 512 | 539.93719 |
| dora Node | 2048 | 507.9925899999999 |
| dora Node | 4096 | 610.57567 |
| dora Node | 16384 | 760.21873 |
| dora Node | 40960 | 2484.4595199999994 |
| dora Node | 409600 | 2773.14837 |
| dora Node | 4096000 | 1728.2458500000002 |
| dora Node | 40960000 | 6397.578890000001 |
| dora Operator | 8 | 7823.535460006497 |
| dora Operator | 64 | 1139.437851472196 |
| dora Operator | 512 | 1319.1025445623814 |
| dora Operator | 2048 | 1319.3751484989364 |
| dora Operator | 4096 | 1377.7049504851404 |
| dora Operator | 16384 | 1210.5615544644727 |
| dora Operator | 40960 | 549.8982871241685 |
| dora Operator | 409600 | 841.9508415788033 |
| dora Operator | 4096000 | 4740.129366341941 |
| dora Operator | 40960000 | 29820.334663385765 |
So, I think that it's linked to the current implementation shown in dora/binaries/runtime/src/operator/python.rs, lines 272 to 273 in 3a35c22.
The additional ownership happens here: dora/binaries/runtime/src/operator/python.rs, line 280 in 3a35c22,
because sharing data between operator and runtime is defined in dora/binaries/runtime/src/operator/mod.rs, lines 63 to 68 in 3a35c22.
I wonder if we can remove the runtime channel copy easily. This additional copy was likely already present before this pull request, so we could make it a separate PR, as it is only loosely linked to the current changes.
Thanks for investigating!
It's a bit strange that the operator latency is much smaller than the node latency for message sizes 40960 and 409600. For all other data sizes, it's the other way around. Do you have any idea why this is the case?
Yeah, this definitely has some additional costs for larger messages. I'll look into it. I'm not sure what's causing the large latency for small messages, though.
…ode API This provides an alternative way to send outputs with fewer lifetime restrictions than the closure-based interface.
Will definitely check that, as well as the small-message case!
Instead of doing an additional copy to send outputs from the operator thread to the runtime thread, this commit uses the new `allocate_data_sample` and `send_output_sample` methods introduced in d7cd370.
Should be fixed by 1948a45.
I now get close latency values for both:

| Type | Message size | Latency |
| --- | --- | --- |
| dora Node | 8 | 545.4713700000001 |
| dora Node | 64 | 533.91939 |
| dora Node | 512 | 548.10928 |
| dora Node | 2048 | 851.3406300000001 |
| dora Node | 4096 | 1078.9238400000002 |
| dora Node | 16384 | 1121.3838500000002 |
| dora Node | 40960 | 1113.2833500000002 |
| dora Node | 409600 | 1230.84563 |
| dora Node | 4096000 | 1636.1486600000003 |
| dora Node | 40960000 | 4848.450280000001 |
| dora Node | 8 | 908.44376 |
| dora Operator | 8 | 817.0425798016367 |
| dora Operator | 64 | 688.4469109383357 |
| dora Operator | 512 | 1090.3208020692437 |
| dora Operator | 2048 | 1117.994148606356 |
| dora Operator | 4096 | 1177.8792475452785 |
| dora Operator | 16384 | 1150.2645544835716 |
| dora Operator | 40960 | 1158.827108732715 |
| dora Operator | 409600 | 1305.7527920152845 |
| dora Operator | 4096000 | 1759.6348711101587 |
| dora Operator | 40960000 | 4455.981317012557 |

Thanks for the quick fix.
Great, thanks for testing! Is there anything else blocking this PR or do you think it's ready to be merged?
I'm working on the dora-drives integration.
Ok, so I finally found out my issue within dora-drives, which is unrelated to our implementation. It has been fixed now.
Implements the proposal in #224 on top of #213.
Future work: