zero-valued enq_timestamp and deq_timedelta in egress pipeline #905

Closed
sundararajan20 opened this issue Jun 5, 2020 · 6 comments · Fixed by #906

Comments

@sundararajan20

The fields enq_timestamp and deq_timedelta are 0 in the egress pipeline of fabric.p4 when it runs on stratum_bmv2 switches. This shows up when INT is running and is configured to add the ingress-timestamp and egress-timestamp metadata fields: the reported values of these INT metadata fields are therefore 0. However, the fields ingress_global_timestamp and egress_global_timestamp are set correctly.

One way to reproduce this is to add the following log_msg() calls at the beginning of the FabricEgress control block in fabric.p4:

control FabricEgress (inout parsed_headers_t hdr,
                      inout fabric_metadata_t fabric_metadata,
                      inout standard_metadata_t standard_metadata) {

    apply {
        _PRE_EGRESS
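        // Queueing metadata populated by the bmv2 traffic manager; these are the fields reported as 0.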
        log_msg("standard_metadata.enq_timestamp = {}", {standard_metadata.enq_timestamp});
        log_msg("standard_metadata.deq_timedelta = {}", {standard_metadata.deq_timedelta});
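        // Global timestamps; these are reported correctly.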
        log_msg("standard_metadata.ingress_global_timestamp = {}", {standard_metadata.ingress_global_timestamp});
        log_msg("standard_metadata.egress_global_timestamp = {}", {standard_metadata.egress_global_timestamp});
        ...
    }
}

This results in the following log messages in the stratum_bmv2 log:

[03:11:49.720] [bmv2] [T] [thread 88] [0.0] [cxt 0] fabric.p4(102) Primitive         log_msg("standard_metadata.enq_timestamp = {}", {standard_metadata.enq_timestamp});
[03:11:49.720] [bmv2] [I] [thread 88] standard_metadata.enq_timestamp = 0
[03:11:49.720] [bmv2] [T] [thread 88] [0.0] [cxt 0] fabric.p4(103) Primitive         log_msg("standard_metadata.deq_timedelta = {}", {standard_metadata.deq_timedelta});
[03:11:49.721] [bmv2] [I] [thread 88] standard_metadata.deq_timedelta = 0
[03:11:49.721] [bmv2] [T] [thread 88] [0.0] [cxt 0] fabric.p4(104) Primitive         log_msg("standard_metadata.ingress_global_timestamp = {}", {standard_metadata.ingress_global_timestamp});
[03:11:49.721] [bmv2] [I] [thread 88] standard_metadata.ingress_global_timestamp = 9579215
[03:11:49.721] [bmv2] [T] [thread 88] [0.0] [cxt 0] fabric.p4(105) Primitive         log_msg("standard_metadata.egress_global_timestamp = {}", {standard_metadata.egress_global_timestamp});
[03:11:49.722] [bmv2] [I] [thread 88] standard_metadata.egress_global_timestamp = 9787594
@antoninbas
Member

I tried to reproduce this and couldn't:

[21:33:06.679] [bmv2] [T] [thread 19194] [0.0] [cxt 0] fabric.p4(102) Primitive         log_msg("standard_metadata.enq_timestamp = {}", {standard_metadata.enq_timestamp});
[21:33:06.679] [bmv2] [I] [thread 19194] standard_metadata.enq_timestamp = 31441158
[21:33:06.679] [bmv2] [T] [thread 19194] [0.0] [cxt 0] fabric.p4(103) Primitive         log_msg("standard_metadata.deq_timedelta = {}", {standard_metadata.deq_timedelta});
[21:33:06.679] [bmv2] [I] [thread 19194] standard_metadata.deq_timedelta = 63
[21:33:06.679] [bmv2] [T] [thread 19194] [0.0] [cxt 0] fabric.p4(104) Primitive         log_msg("standard_metadata.ingress_global_timestamp = {}", {standard_metadata.ingress_global_timestamp});
[21:33:06.679] [bmv2] [I] [thread 19194] standard_metadata.ingress_global_timestamp = 31440622
[21:33:06.679] [bmv2] [T] [thread 19194] [0.0] [cxt 0] fabric.p4(105) Primitive         log_msg("standard_metadata.egress_global_timestamp = {}", {standard_metadata.egress_global_timestamp});
[21:33:06.679] [bmv2] [I] [thread 19194] standard_metadata.egress_global_timestamp = 31441219

But there could be other things at play. Could you share the bmv2 JSON file produced by the compiler? That would be helpful.

@sundararajan20
Author

Please find the zipped json file here: bmv2.json.zip.

@antoninbas
Copy link
Member

Sorry, I forgot to ask: could you include the p4info file as well?

@sundararajan20
Author

Sure, please find it here:
p4info.txt.zip

@antoninbas
Member

Thanks for the info. I was able to reproduce the issue. I should be able to fix it in the next couple of days, but I don't know how long it will take for the fix to be integrated into the stratum_bmv2 build.

antoninbas added a commit that referenced this issue Jun 5, 2020
In some cases, the do_swap() call in simple_switch never returned 0
(because the swap was performed independently of the target), which
meant that the with_queueing_metadata flag was not updated.

This created an issue in the following specific scenario:
 * simple_switch is started with a JSON config that does not include
   queueing metadata fields
 * a JSON config swap is performed (using the runtime_CLI or P4Runtime,
   in the case of simple_switch_grpc), and the new config includes all
   queueing metadata fields
 * queueing metadata fields still show as 0 for all subsequent traffic

To fix this, we get rid of the assumption that targets are in charge of
calling do_swap() themselves, which must date back to when Packet
instantiation was not guarded by a lock (thus requiring the
collaboration of targets to ensure no new Packets were created while the
swap was being performed).

From now on, targets are notified using the swap_notify_() virtual method,
which targets can override.

Fixes #905
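
For context, here is a minimal sketch of what such a target-side hook can look like. It assumes the bm::Switch base class and its field_exists() helper; apart from swap_notify_(), which the commit message introduces, the class and member names below are illustrative rather than the exact simple_switch code.

#include <bm/bm_sim/switch.h>

// Illustrative target that refreshes config-derived state after a JSON swap.
// The pure-virtual packet I/O methods of bm::Switch are omitted for brevity.
class ExampleSwitch : public bm::Switch {
 public:
  ExampleSwitch()
      : bm::Switch(true /* enable_swap */) { }

 protected:
  // Called by the bmv2 core once a config swap has completed, so the
  // target no longer needs to call do_swap() itself at a safe point.
  void swap_notify_() override {
    // Re-evaluate against the newly active JSON: if the swapped-in
    // program declares the queueing metadata fields, start populating them.
    with_queueing_metadata =
        field_exists("queueing_metadata", "enq_timestamp") &&
        field_exists("queueing_metadata", "enq_qdepth") &&
        field_exists("queueing_metadata", "deq_timedelta") &&
        field_exists("queueing_metadata", "deq_qdepth");
  }

 private:
  bool with_queueing_metadata{false};
};

With a hook like this, the core notifies the target after every successful swap, which is what keeps with_queueing_metadata in sync in the scenario listed in the commit message.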
@sundararajan20
Author

Great, thanks for looking into this!

antoninbas added a commit that referenced this issue Jun 5, 2020
ccascone added a commit to ccascone/stratum-1 that referenced this issue Jun 5, 2020
pudelkoM pushed a commit to stratum/stratum that referenced this issue Jun 5, 2020
pudelkoM added a commit to stratum/stratum that referenced this issue Jun 16, 2020
Move port maps to hal/config

Revert "First cleanup steps"

This reverts commit 1b8e184.

Conditionally mount config dir inside docker

Package config files in stratum_bf Debian package (#262)

PR #259 moved them.

Initialize the CPU port when Stratum started (#256)

Co-authored-by: Maximilian Pudelko <[email protected]>

Bump BMv2 commit to fix zero-valued timestamps (#264)

The issue is described here:
p4lang/behavioral-model#905

Update cpplint (#263)

Overwrite existing port map symlinks

These can exist from previous runs or different installs.
We should remove them to ensure consistent startup behavior.

Do not start stratum_bf if insmod failed

There is no point in continuing Stratum startup.
pudelkoM added a commit to stratum/stratum that referenced this issue Jun 16, 2020
bocon13 pushed a commit to stratum/stratum that referenced this issue Jun 18, 2020
* Unify docker config mount

* Restructure README

* Handle non-ONLP platforms

* Update stratum/hal/bin/barefoot/README.md

Co-authored-by: Carmelo Cascone <[email protected]>

* Update stratum/hal/bin/barefoot/README.md

Co-authored-by: Carmelo Cascone <[email protected]>

* Remove container on exit

* Fix link to example port map

* Address comments

* Address comments

* Rename stratum-entrypoint.sh to start-stratum.sh

* Minor README cleanup

* Fix typo

Co-authored-by: Carmelo Cascone <[email protected]>