proposal: testing/synctest: new package for testing concurrent code #67434
I really like how simple this API is.
How does time work when goroutines aren't idle? Does it stand still, or does it advance at the usual rate? If it stands still, it seems like that could break software that assumes time will advance during computation (though maybe that's rare in practice). If it advances at the usual rate, it seems like that reintroduces a source of flakiness. E.g., in your example, the 1 second sleep will advance time by 1 second, but then on a slow system the checking thread may still not execute for a long time.

What are the bounds of the fake time implementation? Presumably if you're making direct system calls that interact with times or durations, we're not going to do anything about that. Are we going to make any attempt at faking time in the file system?
What if a goroutine is blocked on a channel that goes outside the group? This came to mind in the context of whether this could be used to coordinate a multi-process client/server test, though I think it would also come up if there's any sort of interaction with a background worker goroutine or pool.
What happens if multiple goroutines in a group call Wait? I think the options are to panic or to consider all of them idle, in which case they would all wake up when every other goroutine in the group is idle. What happens if you have nested groups, say group A contains group B, and a goroutine in B is blocked in Wait, and then a goroutine in A calls Wait? I think your options are to panic (though that feels wrong), wake up both if all of the goroutines in group A are idle, or wake up just B if all of the goroutines in B are idle (but this blocks waking up A until nothing is calling Wait in group B).
Time stands still, except when all goroutines in a group are idle. (Same as the playground behaves, I believe.) This would break software that assumes time will advance. You'd need to use something else to test that case.
The bounds of the fake time implementation are the time package. Faking time in the filesystem seems complicated and highly specialized, so I don't think we should try. Code which cares about file timestamps will need to use a testable filesystem abstraction.
As proposed, this would count as an idle goroutine. If you fail to isolate the system under test this will probably cause problems, so don't do that.
As proposed, none of them ever wake up and your test times out, or possibly panics if we can detect that all goroutines are blocked in that case. Having them all wake at the same time would also be reasonable.
Oh, I didn't think of that. Nested groups are too complicated; let's not support them.
This is a very interesting proposal! I feel worried, though, that the idle-wait synchronization it introduces could end up being used outside of tests, where it doesn't belong. Assuming that's a valid concern (if it isn't then I'll retract this entire comment!), I could imagine mitigating it in two different ways:

(I apologize in advance if I misunderstood any part of the proposal or if I am missing something existing that's already similarly convenient to what's proposed here.)
I think using idle-wait synchronization outside of tests is always going to be a mistake. It's fragile and fiddly, and you're better served by explicit synchronization. (This prompts the question: Isn't this fragile and fiddly inside tests as well? It is, but using a fake clock removes much of the sources of fragility, and tests often have requirements that make the fiddliness a more worthwhile tradeoff. In the expiring cache example, for example, non-test code will never need to guarantee that a cache entry expires precisely at the nanosecond defined.) So while perhaps we could offer a standalone synchronization primitive outside of a test-only package, I don't think using one would ever be a good idea.
Interesting proposal. I like that it allows for waiting for a group of goroutines, as opposed to all goroutines in my proposal (#65336), though I do have some concerns:
One of the goals of this proposal is to minimize the amount of unnatural code required to make a system testable. Mock time implementations require replacing calls to idiomatic time package functions with a testable interface. Putting fake time in the standard library would let us just write the idiomatic code without compromising testability.
I wanted to evaluate practical usage of the proposed API. I wrote a version of Run and Wait based on parsing the output of runtime.Stack. Wait calls runtime.Gosched in a loop until all goroutines in the current group are idle. I also wrote a fake time implementation. Combined, these form a reasonable facsimile of the proposed synctest package, with some limitations: The code under test needs to be instrumented to call the fake time functions, and to call a marking function after creating new goroutines. Also, you need to call a synctest.Sleep function in tests to advance the fake clock.

I then added this instrumentation to net/http. The synctest package does not work with real network connections, so I added an in-memory net.Conn implementation to the net/http tests. I also added an additional helper to net/http's tests, which simplifies some of the experimentation below:

```go
var errStillRunning = errors.New("async op still running")
// asyncResult is the result of an asynchronous operation.
type asyncResult[T any] struct {}
// runAsync runs f in a new goroutine,
// and returns an asyncResult which is populated with the result of f when it finishes.
// runAsync calls synctest.Wait after running f.
func runAsync[T any](f func() (T, error)) *asyncResult[T]
// done reports whether the asynchronous operation has finished.
func (r *asyncResult[T]) done() bool
// result returns the result of the asynchronous operation.
// It returns errStillRunning if the operation is still running.
func (r *asyncResult[T]) result() (T, error)
```

One of the longest-running tests in the net/http package is TestServerShutdownStateNew (https://go.googlesource.com/go/+/refs/tags/go1.22.3/src/net/http/serve_test.go#5611). This test creates a server, opens a connection to it, and calls Server.Shutdown. It asserts that the server, which is expected to wait 5 seconds for the idle connection to close, shuts down in no less than 2.5 seconds and no more than 7.5 seconds. This test generally takes about 5-6 seconds to run in both HTTP/1 and HTTP/2 modes.

The portion of this test which performs the shutdown is:

```go
shutdownRes := make(chan error, 1)
go func() {
shutdownRes <- ts.Config.Shutdown(context.Background())
}()
readRes := make(chan error, 1)
go func() {
_, err := c.Read([]byte{0})
readRes <- err
}()
// TODO(#59037): This timeout is hard-coded in closeIdleConnections.
// It is undocumented, and some users may find it surprising.
// Either document it, or switch to a less surprising behavior.
const expectTimeout = 5 * time.Second
t0 := time.Now()
select {
case got := <-shutdownRes:
d := time.Since(t0)
if got != nil {
t.Fatalf("shutdown error after %v: %v", d, err)
}
if d < expectTimeout/2 {
t.Errorf("shutdown too soon after %v", d)
}
case <-time.After(expectTimeout * 3 / 2):
t.Fatalf("timeout waiting for shutdown")
}
// Wait for c.Read to unblock; should be already done at this point,
// or within a few milliseconds.
if err := <-readRes; err == nil {
t.Error("expected error from Read")
}
```

I wrapped the test in a synctest.Run call and changed it to use the in-memory connection. I then rewrote this section of the test:

```go
shutdownRes := runAsync(func() (struct{}, error) {
return struct{}{}, ts.Config.Shutdown(context.Background())
})
readRes := runAsync(func() (int, error) {
return c.Read([]byte{0})
})
// TODO(#59037): This timeout is hard-coded in closeIdleConnections.
// It is undocumented, and some users may find it surprising.
// Either document it, or switch to a less surprising behavior.
const expectTimeout = 5 * time.Second
synctest.Sleep(expectTimeout - 1)
if shutdownRes.done() {
t.Fatal("shutdown too soon")
}
synctest.Sleep(2 * time.Second)
if _, err := shutdownRes.result(); err != nil {
t.Fatalf("Shutdown() = %v, want complete", err)
}
if n, err := readRes.result(); err == nil || err == errStillRunning {
t.Fatalf("Read() = %v, %v; want error", n, err)
}
```

The test exercises the same behavior it did before, but it now runs instantaneously. (0.01 seconds on my laptop.)

I made an interesting discovery after converting the test: The server does not actually shut down in 5 seconds. In the initial version of this test, I checked for shutdown exactly 5 seconds after calling Shutdown. The test failed, reporting that the Shutdown call had not completed. Examining the Shutdown function revealed that the server polls for closed connections during shutdown, with a maximum poll interval of 500ms, and therefore shutdown can be delayed slightly past the point where connections have shut down.

I changed the test to check for shutdown after 6 seconds. But once again, the test failed. Further investigation revealed this code (https://go.googlesource.com/go/+/refs/tags/go1.22.3/src/net/http/server.go#3041):

```go
st, unixSec := c.getState()
// Issue 22682: treat StateNew connections as if
// they're idle if we haven't read the first request's
// header in over 5 seconds.
if st == StateNew && unixSec < time.Now().Unix()-5 {
st = StateIdle
}
```

The comment states that new connections are considered idle for 5 seconds, but thanks to the low granularity of Unix timestamps this check can consider a connection idle for as little as 4 or as much as 6 seconds. Combined with the 500ms poll interval (and ignoring any added scheduler delay), Shutdown may take up to 6.5 seconds to complete, not 5. Using a fake clock rather than a real one not only speeds up this test dramatically, but it also allows us to more precisely test the behavior of the system under test.

Another slow test is TestTransportExpect100Continue (https://go.googlesource.com/go/+/refs/tags/go1.22.3/src/net/http/transport_test.go#1188). This test sends an HTTP request containing an "Expect: 100-continue" header, which indicates that the client is waiting for the server to indicate that it wants the request body before it sends it. In one variation, the server does not send a response; after a 2 second timeout, the client gives up waiting and sends the request. This test takes 2 seconds to execute, thanks to this timeout. In addition, the test does not validate the timing of the client sending the request body; in particular, tests pass regardless of when the client actually sends it.

The portion of the test which sends the request is:

```go
resp, err := c.Do(req)
```

I changed this to:

```go
rt := runAsync(func() (*Response, error) {
return c.Do(req)
})
if v.timeout {
synctest.Sleep(expectContinueTimeout-1)
if rt.done() {
t.Fatalf("RoundTrip finished too soon")
}
synctest.Sleep(1)
}
resp, err := rt.result()
if err != nil {
t.Fatal(err)
}
```

This test now executes instantaneously. It also verifies that the client does or does not wait for the ExpectContinueTimeout as expected.

I made one discovery while converting this test. The synctest.Run function blocks until all goroutines in the group have exited. (In the proposed synctest package, Run will panic if all goroutines become blocked (deadlock), but I have not implemented that feature in the test version of the package.) The test was hanging in Run, due to leaking a goroutine. I tracked this down to a missing net.Conn.Close call, which was leaving an HTTP client reading indefinitely from an idle and abandoned server connection. In this case, Run's behavior caused me some confusion, but ultimately led to the discovery of a real (if fairly minor) bug in the test. (I'd probably have experienced less confusion had I not initially assumed this was a bug in the implementation of Run.)

At one point during this exercise, I accidentally called testing.T.Run from within a synctest.Run group. This results in, at the very best, quite confusing behavior. I think we would want to make it possible to detect when running within a group, and have testing.T.Run panic in this case.

My experimental implementation of the synctest package includes a synctest.Sleep function by necessity: It was much easier to implement with an explicit call to advance the fake clock. However, I found in writing these tests that I often want to sleep and then wait for any timers to finish executing before continuing. I think, therefore, that we should have one additional convenience function:

```go
package synctest
// Sleep pauses the current goroutine for the duration d,
// and then blocks until every goroutine in the current group is idle.
// It is identical to calling time.Sleep(d) followed by Wait.
//
// The caller of Sleep must be in a goroutine created by Run,
// or a goroutine transitively started by Run.
// If it is not, Sleep panics.
func Sleep(d time.Duration) {
time.Sleep(d)
Wait()
}
```

The net/http package was not designed to support testing with a fake clock. This has served as an obstacle to improving the state of the package's tests, many of which are slow, flaky, or both. Converting net/http to be testable with my experimental version of synctest required a small number of minor changes. A runtime-supported synctest would have required no changes at all to net/http itself.

Converting net/http tests to use synctest required adding an in-memory net.Conn. (I didn't attempt to use net.Pipe, because its fully-synchronous behavior tends to cause problems in tests.) Aside from this, the changes required were very small.

My experiment is in https://go.dev/cl/587657.
This proposal has been added to the active column of the proposals project.
Commenting here due to @rsc's request: Relative to my proposal #65336, I have the following concerns:
Regarding overriding the time package in tests:

The time package doesn't provide an abstraction we can substitute in tests. In contrast, we can test code which uses the network or the filesystem by providing a fake implementation of the relevant interface. Time is fundamentally different in that there is no way to use real time in a test without making the test flaky and slow. Time is also different from an external resource like the network in that there is no standard interface to fake.

Since we can't use real time in tests, we can insert a testable wrapper around the clock. In addition, if we define a standard testable wrapper around the clock, we are essentially declaring that all public packages which deal with time should provide a way to plumb in a clock. (Some packages do this already, of course; crypto/tls.Config.Time is an example in the standard library.)

That's an option, of course. But it would be a very large change to the Go ecosystem as a whole.
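For concreteness, a rough sketch of the clock-plumbing pattern being discussed; the Cache type and its now field are illustrative, not taken from any real package:

```go
package cache

import "time"

// Cache is an expiring cache. The now field exists only so that tests can
// substitute a fake clock; production code leaves it nil and gets time.Now.
type Cache struct {
	ttl time.Duration
	now func() time.Time // nil means use time.Now
}

// timeNow is the indirection every time-dependent method must go through.
func (c *Cache) timeNow() time.Time {
	if c.now != nil {
		return c.now()
	}
	return time.Now()
}
```

Every package that wants to be testable this way needs some equivalent of this indirection, which is the ecosystem-wide change being described.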
The pprof.SetGoroutineLabels API disagrees.
It doesn't try to hide it; it's more that it tries to restrict people from relying on goroutine numbers.
If I understood the proposal correctly, it will wait for any goroutine (recursively) that was started from within Run.
Yes, Wait applies to any goroutine started, directly or transitively, by the function passed to Run.
Given that there's more precedent for goroutine identity than I had previously thought, and seeing how much the experiment simplified real tests, I'm less concerned than I was. However, I'm still a little ambivalent about goroutine groups affecting the behavior of time and channels. That being said, I agree that plumbing a time/clock interface through existing code is indeed tedious, and having the runtime provide fake time would avoid that.
Thanks for doing the experiment. I find the results pretty compelling.
I don't quite understand this function. Given the fake time implementation, if you sleep even a nanosecond past timer expiry, aren't you already guaranteed that those timers will have run because the fake time won't advance to your sleep deadline until everything is blocked again?
Partly I was wondering about nested groups because I've been scheming other things that the concept of a goroutine group could be used for. Though it's true that, even if we have groups for other purposes, it may make sense to say that synctest groups cannot be nested, even if in general groups can be nested.
You're right that sleeping past the deadline of a timer is sufficient. It's fairly natural to sleep to the exact instant of a timer, however. If a cache entry expires in some amount of time, it's easy to sleep for that exact amount of time, possibly using the same constant that the cache timeout was initialized with, rather than adding a nanosecond.

Adding nanoseconds also adds a small but real amount of confusion to a test in various small ways: The time of logged events drifts off the integer second, rate calculations don't come out as cleanly, and so on. Plus, if you forget to add the necessary adjustment or otherwise accidentally sleep directly onto the instant of a timer's expiry, you get a race condition. Cleaner, I think, for the test code to always resynchronize after poking the system under test. This doesn't have to be a function in the synctest package, of course; a test can define the same trivial helper itself.
I'm very intrigued! I've just about convinced myself that there's a useful general purpose synchronization API hiding in here, but I'm not sure what it is or what it's useful for.
For what it's worth, I think it's a good thing that virtual time is included in this, because it makes sure that this package isn't used in production settings. It makes it only suitable for tests (and very suitable).
It sounds like the API is still just Run and Wait.
Damien suggested also adding Sleep, which is time.Sleep followed by Wait.
The difference between time.Sleep and synctest.Sleep seems subtle enough that it seems like you should have to spell out the Wait at the call sites where you need it. The only time you really need Wait is if you know someone else is waking up at that very moment. But then if they've both done the Sleep+Wait form then you still have a problem. You really only want some of the call sites (maybe just one) to use the Sleep+Wait form. I suppose that the production code will use time.Sleep since it's not importing synctest, so maybe it's clear that the test harness is the only one that will call Sleep+Wait. On the other hand, fixing a test failure by changing s/time.Sleep/synctest.Sleep/ will be a strange-looking bug fix. Better to have to add synctest.Wait instead. If we really need this, it could be synctest.SleepAndWait but that's what statements are for. Probably too subtle and should just limit the proposal to Run and Wait.
Some additional suggestions for the description of the operations that count as idle:
Additionally, for "mutex operation", let's list out the exact operations considered, for implementation/testing completeness.
The API looks simple and that is excellent. What I am worried about is unexpected failure modes, leading to undetected regressions, which might need tight support in the testing package to detect. Imagine you unit test your code but are unable to mock out a dependency, maybe due to lack of experience or the poor design of existing code you have to work with. Now suppose that dependency suddenly starts making a syscall (e.g. to lazily tune the library via a sync.Once with a timeout, instead of doing so at init time). Without support in the testing package you will never detect that; your tests will simply start timing out after an innocent minor dependency update.
Orthogonally to the previous comment, may I suggest limiting this package to the standard library at first, to gather more experience with the approach? That would also allow sketching out integration with the testing package, in addition to finding more pitfalls.
Can you expand more on what you mean by undetected regressions? If the code under test (either directly, or through a dependency) unexpectedly calls a blocking syscall, Wait will not treat the goroutine making it as idle, so the test will hang or time out rather than silently pass.
What kind of support are you thinking of?
What does this do: a call to synctest.Run whose function does nothing but call synctest.Wait?
Does it succeed or panic? It's not clear to me from the API docs.
This is obviously a degenerate case, but I think it also applies if a test wanted to get the fake time features when testing otherwise non-concurrent code.
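Presumably the elided snippet is something like the following minimal bubble, whose root goroutine immediately calls Wait with no other goroutines running (my reconstruction, not the commenter's original code):

```go
synctest.Run(func() {
	synctest.Wait()
})
```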
In this case, the goroutine calling Wait is the only goroutine in the bubble, so there is nothing else to wait for and Wait returns immediately.
I've been thinking about this, and I think unbubbling channels at the end of the bubble isn't the right choice. The problem is that it's reasonable for a test to want to shut down background goroutines in a cleanup function. For example, a test may start a server listening on a fake network socket (with a background goroutine blocked in net.Listener.Accept) and stop the server in a cleanup function. If the cleanup function runs after the bubble exits, then it runs too late: a bubble never exits cleanly while any bubbled goroutines are still executing. Cleanup functions registered in a bubble should execute in the bubble. This is independent of the question of whether bubbling channels is a good idea or not--even if we don't associate channels with bubbles, we still want to be able to shut down a test completely before Run exits and its bubble ends.
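A sketch of the scenario described above (illustrative names, not code from the proposal): the background worker must be shut down inside the bubble, here with a defer; the open question is whether a t.Cleanup registered at the same point should behave the same way.

```go
func TestWithBackgroundWorker(t *testing.T) {
	synctest.Run(func() {
		stop := make(chan struct{})
		done := make(chan struct{})
		go func() { // stands in for a server goroutine blocked in Accept
			defer close(done)
			for {
				select {
				case <-stop:
					return
				case <-time.After(time.Second):
					// periodic background work
				}
			}
		}()
		defer func() {
			// This must run before Run can return, because Run waits for
			// every goroutine in the bubble to exit.
			close(stop)
			<-done
		}()
		// ... body of the test ...
	})
}
```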
Putting on hold for experience with synctest under a GOEXPERIMENT (#69687). Discussion can of course continue, but this way we'll hold off on looking at this each week until there's more experience with it.
Change https://go.dev/cl/629735 mentions this issue:
Add an internal (for now) implementation of testing/synctest. The synctest.Run function executes a tree of goroutines in an isolated environment using a fake clock. The synctest.Wait function allows a test to wait for all other goroutines within the test to reach a blocking point. For #67434 For #69687 Change-Id: Icb39e54c54cece96517e58ef9cfb18bf68506cfc Reviewed-on: https://go-review.googlesource.com/c/go/+/591997 Reviewed-by: Michael Pratt <[email protected]> LUCI-TryBot-Result: Go LUCI <[email protected]>
Change https://go.dev/cl/629856 mentions this issue:
Should this issue be closed now that #69687 is closed as implemented?
For #67434 Fixes #70452 Change-Id: Ie655a9e55837aa68b6bfb0bb69b6c8caaf3bbea5 Reviewed-on: https://go-review.googlesource.com/c/go/+/629856 Reviewed-by: Russ Cox <[email protected]> LUCI-TryBot-Result: Go LUCI <[email protected]> Auto-Submit: Damien Neil <[email protected]> Reviewed-by: Michael Pratt <[email protected]>
#70512 is an interesting case which could conceivably occur in user code: syscall/js.FuncOf takes a Go function and returns a value which can be passed to JavaScript for execution. This is used by the js version of syscall to invoke callback-based JS APIs. This causes a problem when the caller of FuncOf is in a synctest bubble. In #70512, the caller waits for an asynchronous JS API to finish by creating a channel and waiting for the FuncOf function to send a value on it. This caused a cross-bubble channel send, and a panic.

(The panic in this case is helpful, and I think this is a point in favor of associating channels with bubbles: The bubbled goroutine waiting on the channel is not "durably blocked"--it's essentially blocked on a syscall, and will wake when the call completes.)

I'm fixing this in https://go.dev/cl/631055 by making syscall/js.FuncOf aware of synctest. If a bubbled goroutine creates a function with FuncOf, its bubble will not become idle until the function is released with Func.Release, and the Go callback function will be executed in the bubble of the creator. (Fortunately syscall/js.Func already requires manual cleanup with an existing Release method, so we can easily track the lifetime of these functions.) For the common case, this makes everything just work and is transparent to the user. If users use syscall/js.FuncOf directly, however, it's a complication that they may need to be aware of.
Interesting case! I think it makes sense to consider an asynchronous callback to be in the same bubble. In some sense, it is conceptually like the caller creates the goroutine for the callback to run on. On the other hand, I think a js.FuncOf function can also be called synchronously, e.g. Go -> JS -> Go, all synchronous calls. In this case the callback runs on the same goroutine as the synchronous call. Usually it is the same goroutine that creates the FuncOf, so the same bubble makes sense. But it could also be a different goroutine, potentially in a different bubble?
Thanks for this, @neild! As part of writing up something to promote the experiment I noticed that tickers require a Stop for synctest.Run to return. For example, with this test:

```go
func Test(t *testing.T) {
synctest.Run(func() {
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
var hits atomic.Int32
go func() {
tick := time.NewTicker(time.Millisecond)
// no defer tick.Stop()
for {
select {
case <-ctx.Done():
return
case <-tick.C:
hits.Add(1)
}
}
}()
time.Sleep(3 * time.Millisecond)
cancel()
got := int(hits.Load())
if want := 3; got != want {
t.Fatalf("got %v, want %v", got, want)
}
})
}
```

It emits the message from Fatalf and then hangs. Once I added the deferred tick.Stop, the test completed as expected.
Thanks for testing the package out! I'm convinced now that advancing the fake clock after the root goroutine returns is confusing and a mistake. I ran into a similar confusion in #67434 (comment), where a background goroutine in a poll loop unexpectedly kept synctest.Run executing forever. I think the right behavior should be: When the function called by synctest.Run returns, time stops advancing in the bubble. Run waits for all other goroutines in the bubble to exit before returning, but it does not advance the fake clock. If all remaining goroutines are durably blocked, Run panics.
Change https://go.dev/cl/635375 mentions this issue:
For golang/go#67434 Change-Id: I6d6f0eb4d498ec65e05bedb40b68b778f7ad591f Reviewed-on: https://go-review.googlesource.com/c/website/+/635375 Reviewed-by: Michael Pratt <[email protected]> LUCI-TryBot-Result: Go LUCI <[email protected]> Auto-Submit: Damien Neil <[email protected]>
Could the same underlying implementation be used to replace the faketime in the playground? |
I've tested out testing/synctest for some of my pet projects, and I've found a somewhat ugly, but functional, way for testing packages that use the net package. I've used Monkey for monkeypatching, a channel-based mock TCP implementation, and the following code in TestMain:

```go
func TestMain(m *testing.M) {
// Patch net.Listen to use mock.Listen
monkey.Patch(net.Listen, mock.Listen)
// Patch Dialer.DialContext to use mock.Dialer.DialContext
monkey.PatchInstanceMethod(
reflect.TypeOf(&net.Dialer{}),
"DialContext",
func(d *net.Dialer, ctx context.Context, network, address string) (net.Conn, error) {
md := mock.Dialer{}
return md.DialContext(ctx, network, address)
},
)
// Run the tests with the patches in place.
os.Exit(m.Run())
}
```

Running the tests that contain network calls with this patching in place then works under synctest.
Possibly, but there are enough differences in behavior that it might not be a simplification. I don't have any plans to try to unify the implementations at this time. If someone wants to give it a shot, it's probably best to first wait and see what the outcome of the synctest experiment is (possibly this all gets reverted!).
From working on https://github.com/jellevandenhooff/gosim, which simulates time like testing/synctest but also tries to do much more, I have some thoughts and questions:
This is an interesting idea, but probably out of scope for the synctest proposal. Currently, the runtime does the opposite when running under the race detector: Goroutine scheduling is made less deterministic, to aid in detecting accidental dependencies on the current scheduler behavior.
I don't know. I think that's a different feature than synctest is attempting to provide.

Inside Google, we have a test library to aid in executing test functions in a subprocess. It's used something like this:

```go
func Test(t *testing.T) {
cmd := exectest.Command(t, "unrecovered panic", func(t *testing.T) {
os.Exit(2)
})
_, err := cmd.CombinedOutput()
if err == nil {
t.Fatalf("expected unsuccessful exit, got none")
}
}
```

The exectest.Command helper returns a command which re-executes the test binary, running only the provided function in the child process.

This is useful for testing functions that terminate the current process (like os.Exit). However, it's a fairly sharp-edged package--any state set up in the parent process is not available in the subprocess, so the function under test must be largely self-contained.
Time is, I think, somewhat unique in terms of testing. The time package does not provide any testable abstractions; it exports functions like time.Now, time.Sleep, and time.NewTimer that operate on a single global clock. It's also not possible to use real time in tests, or at least not well: Tests which sleep for some amount of real time are always slow and/or flaky.

In contrast, the net package provides a number of useful abstractions: You can write code which operates on a net.Conn or net.Listener and substitute an in-memory implementation in tests. Another difference is that there's less variation in what users may want from a fake time implementation.

I do think that if the synctest experiment is successful, it will increase the need for good fake implementations of net.Conn and similar interfaces.
Clock drift seems highly specialized and out of scope for what synctest is trying to provide. The precision of time under synctest is definitely artificial: A timer set to fire at a certain time will always fire at exactly that time, and time does not pass except when goroutines block. I don't see a way to avoid this; we could introduce clock drift, but at the expense of making it harder to write deterministic tests.
While profiling allocations using the new testing.B.Loop inside a synctest bubble, I noticed that the reported ns/op was a large negative number. Switching back to a plain b.N loop gave a plausible figure. I would not assume the ns/op to be accurate since the code is not really sleeping anymore, but the large negative duration is a little surprising.
With a normal benchmark, B.Loop measures real elapsed time; inside a bubble, time.Now returns the bubble's fake time, which starts at a fixed instant in 2000, so the computed duration can come out as a large negative number.
@prattmic Thank you for the insights. I had a hunch it was something along those lines. I'm wondering if there is a way to detect the use of testing.B.Loop inside a synctest bubble and report it.
We could also have B.Loop use the real clock, even when in a bubble. Not saying that's the right choice, but it's an option. If we don't have B.Loop use the real clock, it should probably panic when invoked in a bubble.
Current proposal status

The testing/synctest package is available as an experiment in Go 1.24, behind GOEXPERIMENT=synctest (#69687). The experimental package API is as follows:

```go
// Run executes f in a new goroutine.
//
// The new goroutine and any goroutines transitively started by it form
// an isolated "bubble".
// Run waits for all goroutines in the bubble to exit before returning.
//
// Goroutines in the bubble use a synthetic time implementation.
// The initial time is midnight UTC 2000-01-01.
//
// Time advances when every goroutine in the bubble is blocked.
// For example, a call to time.Sleep will block until all other
// goroutines are blocked and return after the bubble's clock has
// advanced. See [Wait] for the specific definition of blocked.
//
// If every goroutine is blocked and there are no timers scheduled,
// Run panics.
//
// Channels, time.Timers, and time.Tickers created within the bubble
// are associated with it. Operating on a bubbled channel, timer, or ticker
// from outside the bubble panics.
func Run(f func())
// Wait blocks until every goroutine within the current bubble,
// other than the current goroutine, is durably blocked.
// It panics if called from a non-bubbled goroutine,
// or if two goroutines in the same bubble call Wait at the same time.
//
// A goroutine is durably blocked if it can only be unblocked by another
// goroutine in its bubble. The following operations durably block
// a goroutine:
// - a send or receive on a channel from within the bubble
// - a select statement where every case is a channel within the bubble
// - sync.Cond.Wait
// - time.Sleep
//
// A goroutine executing a system call or waiting for an external event
// such as a network operation is not durably blocked.
// For example, a goroutine blocked reading from a network connection
// is not durably blocked even if no data is currently available on the
// connection, because it may be unblocked by data written from outside
// the bubble or may be in the process of receiving data from a kernel
// network buffer.
//
// A goroutine is not durably blocked when blocked on a send or receive
// on a channel that was not created within its bubble, because it may
// be unblocked by a channel receive or send from outside its bubble.
func Wait()
```

As this package is experimental, it is not subject to the Go 1 compatibility promise. Depending on experience and feedback, we may promote testing/synctest to a fully-supported package, possibly with incompatible changes from the current version; continue the experiment in future versions, again possibly with amendments; or drop the experiment entirely and remove the package.

Your feedback is essential! If you try out testing/synctest, please report your experiences, both positive and negative, in this issue.

Known issues

Time advances forever after main test goroutine exits

This call to synctest.Run never returns:

```go
synctest.Run(func() {
time.NewTicker(1 * time.Second)
})
```

The problem is that the ticker is never stopped, so there is always another tick scheduled and the fake clock keeps advancing after the main goroutine has returned. A more realistic version of the same problem:

```go
synctest.Run(func() {
go func() {
for {
time.Sleep(1 * time.Second)
// do something
}
}()
})
```

A potential fix for this issue is to say that the fake clock stops advancing once the goroutine started by Run has returned.

Confusing error when
Current proposal status: #67434 (comment)
This is a proposal for a new package to aid in testing concurrent code.
This package has two main features: the ability to wait for all goroutines in a group to become idle, and a fake time implementation for that group.
As an example, let us say we are testing an expiring concurrent cache:
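The example code itself isn't preserved in this extract; the following is a minimal sketch of the kind of cache being described, assuming a mutex-guarded map with expiry driven by time.AfterFunc (the API is illustrative, not the proposal's exact code):

```go
// Cache is a concurrent cache whose entries expire after a fixed TTL.
type Cache[K comparable, V any] struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[K]entry[V]
}

type entry[V any] struct {
	val V
	exp time.Time
}

func NewCache[K comparable, V any](ttl time.Duration) *Cache[K, V] {
	return &Cache[K, V]{ttl: ttl, m: make(map[K]entry[V])}
}

// Set stores a value and arranges for it to expire after the TTL.
func (c *Cache[K, V]) Set(k K, v V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = entry[V]{val: v, exp: time.Now().Add(c.ttl)}
	time.AfterFunc(c.ttl, func() { c.expire(k) }) // background expiry
}

func (c *Cache[K, V]) expire(k K) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.m[k]; ok && !e.exp.After(time.Now()) {
		delete(c.m, k)
	}
}

// Get reports the value cached for k, if present and not yet expired.
func (c *Cache[K, V]) Get(k K) (V, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.m[k]
	if !ok || !e.exp.After(time.Now()) {
		var zero V
		return zero, false
	}
	return e.val, true
}
```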
A naive test for this cache might look something like this:
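A sketch of such a naive test, assuming the Cache above with a 3-second TTL (a reconstruction, not the original snippet):

```go
func TestCacheEntryExpires(t *testing.T) {
	c := NewCache[string, int](3 * time.Second)
	c.Set("k", 1)

	// Check the entry one second before its deadline: it should still be there.
	time.Sleep(2 * time.Second)
	if got, ok := c.Get("k"); !ok || got != 1 {
		t.Errorf("Get(k) = %v, %v; want 1, true", got, ok)
	}

	// Check one second after the deadline: it should be gone.
	time.Sleep(2 * time.Second)
	if _, ok := c.Get("k"); ok {
		t.Error("Get(k) succeeded after expiry; want miss")
	}
}
```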
This test has a couple problems. It's slow, taking four seconds to execute. And it's flaky, because it assumes the cache entry will not have expired one second before its deadline and will have expired one second after. While computers are fast, it is not uncommon for an overloaded CI system to pause execution of a program for longer than a second.
We can make the test less flaky by making it slower, or we can make the test faster at the expense of making it flakier, but we can't make it fast and reliable using this approach.
We can design our Cache type to be more testable. We can inject a fake clock to give us control over time in tests. When advancing the fake clock, we will need some mechanism to ensure that any timers that fire have executed before progressing the test. These changes come at the expense of additional code complexity: We can no longer use time.Timer, but must use a testable wrapper. Background goroutines need additional synchronization points.
The synctest package simplifies all of this. Using synctest, we can write:
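A sketch of the synctest version, matching the description below: the same test body wrapped in synctest.Run, with a synctest.Wait after each sleep (again a reconstruction, not the proposal's exact code):

```go
func TestCacheEntryExpires(t *testing.T) {
	synctest.Run(func() {
		c := NewCache[string, int](3 * time.Second)
		c.Set("k", 1)

		// Check the entry one second before its deadline: it should still be there.
		time.Sleep(2 * time.Second)
		synctest.Wait()
		if got, ok := c.Get("k"); !ok || got != 1 {
			t.Errorf("Get(k) = %v, %v; want 1, true", got, ok)
		}

		// Check one second after the deadline: it should be gone.
		time.Sleep(2 * time.Second)
		synctest.Wait()
		if _, ok := c.Get("k"); ok {
			t.Error("Get(k) succeeded after expiry; want miss")
		}
	})
}
```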
This is identical to the naive test above, wrapped in synctest.Run and with the addition of two calls to synctest.Wait. However, the test now completes almost instantly, because time within the bubble is fake and only advances when every goroutine is idle; and it is no longer flaky, because each Wait ensures that background work such as the expiry timer has finished before the test checks the cache.
A limitation of the synctest.Wait function is that it does not recognize goroutines blocked on network or other I/O operations as idle. While the scheduler can identify a goroutine blocked on I/O, it cannot distinguish between a goroutine that is genuinely blocked and one which is about to receive data from a kernel network buffer. For example, if a test creates a loopback TCP connection, starts a goroutine reading from one side of the connection, and then writes to the other, the read goroutine may remain in I/O wait for a brief time before the kernel indicates that the connection has become readable. If synctest.Wait considered a goroutine in I/O wait to be idle, this would cause nondeterminism in cases such as this, so goroutines blocked on I/O are not treated as idle.
Tests which use synctest with network connections or other external data sources should use a fake implementation with deterministic behavior. For net.Conn, net.Pipe can create a suitable in-memory connection.
This proposal is based in part on experience with tests in the golang.org/x/net/http2 package. Tests of an HTTP client or server often involve multiple interacting goroutines and timers. For example, a client request may involve goroutines writing to the server, reading from the server, and reading from the request body; as well as timers covering various stages of the request process. The combination of fake clocks and an operation which waits for all goroutines in the test to stabilize has proven effective.