118312: roachtest: remove interesting whitespace characters in random SQL test logs r=mgartner a=mgartner

In #102038 we started writing statements to the query log in SQL
comments. That change removed newline characters from each statement to
ensure that the statement would sit on one line, entirely within the
comment. However, we still see test logs where the comments are broken
across multiple lines, as in #118273, which makes the file syntactically
invalid. This makes reproducing the test failure more difficult because
the comments have to be fixed manually before the log file can be
parsed.

I was able to determine that the line breaks come from other
"interesting" whitespace characters. The `xxd` output from the log file
in #118273 shows a carriage return character (`0d`) in the middle of a
column name:

    00000090: 2041 5320 2263 6f0d 6c33 3830 2246 524f   AS "co.l380"FRO

This commit strips all interesting whitespace characters from the
statement to prevent the comment from being broken across multiple
lines.
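
As a rough standalone sketch of the approach (not the roachtest code
itself; the real change is in the query_comparison_util.go diff below),
stripping control characters with strings.Map looks like this, using a
hypothetical statement echoing the xxd output above:

    package main

    import (
    	"fmt"
    	"strings"
    	"unicode"
    )

    // stripControl removes every control character (newlines, carriage
    // returns, tabs, etc.) so that a statement renders as a single line.
    func stripControl(s string) string {
    	return strings.Map(func(r rune) rune {
    		if unicode.IsControl(r) {
    			return -1 // a negative value drops the rune from the result
    		}
    		return r
    	}, s)
    }

    func main() {
    	// A carriage return (0x0d) embedded in a column name.
    	stmt := "SELECT 1 AS \"co\rl380\" FROM t"
    	fmt.Println(stripControl(stmt)) // SELECT 1 AS "col380" FROM t
    }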

Epic: None

Release note: None


118368: roachprod: fix default backup schedule creation on start r=herkolategan a=renatolabs

For a while now, `roachprod` has created a default backup schedule on cluster creation when `--schedule-backups` is passed. This is also the default in clusters created to run roachtests.

When we fixed an issue with starting external-process tenants in 3715eb5, however, we inadvertently changed the order of two operations performed by roachprod on cluster start: setting the default cluster settings and creating the default backup schedule.

As a consequence, the command used to create the backup schedule fails because, at that point, we haven't configured a license key yet.

To make matters worse, there was a bug in the error handling of `createFixedBackupSchedule` that prevented errors from being reported to the user; these errors were being swallowed and went unnoticed for a few months.
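
The swallowing is easy to reproduce in isolation: `errors.Wrapf` (from `github.com/cockroachdb/errors`, like `pkg/errors` before it) returns nil when asked to wrap a nil error, so wrapping only the process-level `err` while the command's failure lives in `res.Err` reports nothing. A minimal sketch, with a hypothetical `result` type standing in for roachprod's per-node command result:

    package main

    import (
    	"fmt"

    	"github.com/cockroachdb/errors"
    )

    // result stands in for roachprod's per-node command result: the
    // process-level error can be nil while the SQL command still failed.
    type result struct {
    	Err         error
    	CombinedOut string
    }

    func main() {
    	res := &result{Err: errors.New("command failed")}
    	var err error // nil: the remote process itself ran fine

    	// Buggy pattern: wrapping only err yields nil, so res.Err is lost.
    	fmt.Println(errors.Wrapf(err, "output: %s", res.CombinedOut)) // <nil>

    	// Fixed pattern: combine both errors before wrapping.
    	fmt.Println(errors.Wrapf(
    		errors.CombineErrors(err, res.Err), "output: %s", res.CombinedOut))
    }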

In this commit, we fix the error checking in that function and officially remove the code that partially supported creating backups or admin users in tenants. Currently, roachprod runs that part of the setup only for the system tenant. In the future, we might revisit this and also create a backup schedule and admin users for application tenants.

Epic: none

Release note: None

118476: changefeedccl: use correct channel size in parallelio r=jayshrivastava a=jayshrivastava

Previously, the channels used for sending requests and receiving results were too small: a caller could block on sending a request even after acquiring quota. This change makes the channels at least as large as the quota so that this blocking cannot occur.
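
The invariant is easy to see with plain channels (hypothetical names; the real change is in the parallel_io.go diff below): if the buffer is smaller than the quota, a producer that has already acquired a quota token can still block on send, while a buffer of exactly the quota size makes the post-acquisition send non-blocking even when no worker is receiving:

    package main

    import "fmt"

    func main() {
    	const quota = 4

    	// Buggy sizing: capacity tied to the worker count (< quota).
    	tooSmall := make(chan int, 2)
    	// Fixed sizing: capacity matches the quota.
    	justRight := make(chan int, quota)

    	for i := 0; i < quota; i++ { // each iteration holds one quota token
    		select {
    		case tooSmall <- i:
    		default:
    			fmt.Printf("request %d would block despite holding quota\n", i)
    		}
    		justRight <- i // never blocks: at most quota sends outstanding
    	}
    }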

Closes: #118463
Closes: #118462
Closes: #118461
Closes: #118460
Closes: #118459
Closes: #118458
Epic: none

118482: kvserver: increase shutdown propagation time in range merge test r=andrewbaptist a=kvoli

`TestStoreRangeMergeDuringShutDown` could occasionally flake when the shutdown hadn't propagated before the lease was applied.

Increase the post-shutdown sleep from 10ms to 20ms.

Fixes: #118348
Release note: None

118483: roachpb: increase test make priority trials r=andrewbaptist a=kvoli

`TestMakePriority` could (very) rarely flake due to slight differences between the sampled and the underlying distribution. Increase the number of trials by 33%, from 750k to 1M, to reduce the likelihood of this occurring (the sampling error of an empirical frequency shrinks roughly as 1/√n).

Fixes: #118399
Release note: None

118498: gcjob_test: deflake TestSchemaChangeGCJob r=rafiss a=rafiss

This updates a test assertion so that the test does not fail if the GC TTL wait has already completed and deletion has begun.

fixes #117485
fixes #118467
Release note: None

Co-authored-by: Marcus Gartner <[email protected]>
Co-authored-by: Renato Costa <[email protected]>
Co-authored-by: Jayant Shrivastava <[email protected]>
Co-authored-by: Austen McClernon <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
6 people committed Jan 30, 2024
7 parents b6d1474 + 51318d5 + cf1166e + b5799ba + 1d21fe7 + dda6cfa + f2afcc1 commit 38dd16a
Showing 6 changed files with 46 additions and 88 deletions.
16 changes: 11 additions & 5 deletions pkg/ccl/changefeedccl/parallel_io.go

@@ -15,6 +15,7 @@ import (
 
 	"github.com/cockroachdb/cockroach/pkg/settings"
 	"github.com/cockroachdb/cockroach/pkg/settings/cluster"
+	"github.com/cockroachdb/cockroach/pkg/util"
 	"github.com/cockroachdb/cockroach/pkg/util/ctxgroup"
 	"github.com/cockroachdb/cockroach/pkg/util/intsets"
 	"github.com/cockroachdb/cockroach/pkg/util/quotapool"
@@ -84,15 +85,18 @@ func NewParallelIO(
 	metrics metricsRecorder,
 	settings *cluster.Settings,
 ) *ParallelIO {
+	quota := uint64(requestQuota.Get(&settings.SV))
 	wg := ctxgroup.WithContext(ctx)
 	io := &ParallelIO{
 		retryOpts: retryOpts,
 		wg:        wg,
 		metrics:   metrics,
 		ioHandler: handler,
-		quota:     quotapool.NewIntPool("changefeed-parallel-io", uint64(requestQuota.Get(&settings.SV))),
-		requestCh: make(chan AdmittedIORequest, numWorkers),
-		resultCh:  make(chan IOResult, numWorkers),
+		quota:     quotapool.NewIntPool("changefeed-parallel-io", quota),
+		// NB: The size of these channels should not be less than the quota. This prevents the producer from
+		// blocking on sending requests which have been admitted.
+		requestCh: make(chan AdmittedIORequest, quota),
+		resultCh:  make(chan IOResult, quota),
 		doneCh:    make(chan struct{}),
 	}
 
@@ -161,8 +165,10 @@ var requestQuota = settings.RegisterIntSetting(
 	"changefeed.parallel_io.request_quota",
 	"the number of requests which can be admitted into the parallelio"+
 		" system before blocking the producer",
-	128,
-	settings.PositiveInt,
+	int64(util.ConstantWithMetamorphicTestChoice(
+		"changefeed.parallel_io.request_quota",
+		128, 16, 32, 64, 256).(int)),
+	settings.IntInRange(1, 256),
 	settings.WithVisibility(settings.Reserved),
 )
 
17 changes: 12 additions & 5 deletions pkg/cmd/roachtest/tests/query_comparison_util.go

@@ -22,6 +22,7 @@ import (
 	"sort"
 	"strings"
 	"time"
+	"unicode"
 
 	"github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster"
 	"github.com/cockroachdb/cockroach/pkg/cmd/roachtest/option"
@@ -447,11 +448,17 @@ func (h *queryComparisonHelper) runQuery(stmt string) ([][]string, error) {
 	// such a scenario, since the stmt didn't execute successfully, it won't get
 	// logged by the caller).
 	h.logStmt(fmt.Sprintf("-- %s: %s", timeutil.Now(),
-		// Remove all newline symbols to log this stmt as a single line. This
-		// way this auxiliary logging takes up less space (if the stmt executes
-		// successfully, it'll still get logged with the nice formatting).
-		strings.ReplaceAll(stmt, "\n", "")),
-	)
+		// Remove all control characters, including newline symbols, to log this
+		// stmt as a single line. This way this auxiliary logging takes up less
+		// space (if the stmt executes successfully, it'll still get logged with
+		// the nice formatting).
+		strings.Map(func(r rune) rune {
+			if unicode.IsControl(r) {
+				return -1
+			}
+			return r
+		}, stmt),
+	))
 
 	runQueryImpl := func(stmt string) ([][]string, error) {
 		rows, err := h.conn.Query(stmt)
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/client_merge_test.go

@@ -4133,7 +4133,7 @@ func TestStoreRangeMergeDuringShutdown(t *testing.T) {
 		// Sleep to give the shutdown time to propagate. The test appeared to work
 		// without this sleep, but best to be somewhat robust to different
 		// goroutine schedules.
-		time.Sleep(10 * time.Millisecond)
+		time.Sleep(20 * time.Millisecond)
 	} else {
 		state.Unlock()
 	}
2 changes: 1 addition & 1 deletion pkg/roachpb/data_test.go

@@ -929,7 +929,7 @@ func TestMakePriority(t *testing.T) {
 	}
 
 	// Generate values for all priorities.
-	const trials = 750000
+	const trials = 1000000
 	values := make([][trials]enginepb.TxnPriority, len(userPs))
 	for i, userPri := range userPs {
 		for tr := 0; tr < trials; tr++ {
84 changes: 10 additions & 74 deletions pkg/roachprod/install/cockroach.go

@@ -417,20 +417,18 @@ func (c *SyncedCluster) Start(ctx context.Context, l *logger.Logger, startOpts S
 		storageCluster = startOpts.KVCluster
 	}
 	if startOpts.Target == StartDefault {
-		if err := storageCluster.waitForDefaultTargetCluster(ctx, l, startOpts); err != nil {
-			return errors.Wrap(err, "failed to wait for default target cluster")
+		if err = storageCluster.setClusterSettings(ctx, l, startOpts.GetInitTarget(), startOpts.VirtualClusterName); err != nil {
+			return err
 		}
-		// Only after a successful cluster initialization should we attempt to schedule backups.
+
+		storageCluster.createAdminUserForSecureCluster(ctx, l, startOpts)
+
 		if startOpts.ScheduleBackups && shouldInit && config.CockroachDevLicense != "" {
-			if err := c.createFixedBackupSchedule(ctx, l, startOpts.ScheduleBackupArgs); err != nil {
+			if err := storageCluster.createFixedBackupSchedule(ctx, l, startOpts.ScheduleBackupArgs); err != nil {
 				return err
 			}
 		}
-		c.createAdminUserForSecureCluster(ctx, l, startOpts)
-		if err = storageCluster.setClusterSettings(ctx, l, startOpts.GetInitTarget(), startOpts.VirtualClusterName); err != nil {
-			return err
-		}
 	}
 
 	return nil
@@ -979,71 +977,6 @@ func (c *SyncedCluster) initializeCluster(
 	return res, err
 }
 
-// waitForDefaultTargetCluster checks for the existence of a
-// config-profile flag that leads to the use of an application tenant
-// as 'default target cluster'; if that is the case, we wait for all
-// nodes to be aware of the cluster setting before proceding. Without
-// this logic, follow-up tasks in the process of creating the cluster
-// could run before the cluster setting is propagated, and they would
-// apply to the system tenant instead.
-func (c *SyncedCluster) waitForDefaultTargetCluster(
-	ctx context.Context, l *logger.Logger, startOpts StartOpts,
-) error {
-	var hasCustomTargetCluster bool
-	for _, arg := range startOpts.ExtraArgs {
-		// If there is a config profile and that is set to either a '+app'
-		// profile or 'replication-source', we know that the default
-		// target cluster setting will be set to the application tenant.
-		if strings.Contains(arg, "config-profile") &&
-			(strings.Contains(arg, "+app") || strings.Contains(arg, "replication-source")) {
-			hasCustomTargetCluster = true
-			break
-		}
-	}
-
-	if !hasCustomTargetCluster {
-		return nil
-	}
-
-	l.Printf("waiting for default target cluster")
-	retryOpts := retry.Options{MaxRetries: 20}
-	return retryOpts.Do(ctx, func(ctx context.Context) error {
-		// TODO(renato): use server.controller.default_target_cluster once
-		// 23.1 is no longer supported.
-		const stmt = "SHOW CLUSTER SETTING server.controller.default_tenant"
-		res, err := c.ExecSQL(ctx, l, Nodes{startOpts.GetInitTarget()}, SystemInterfaceName, 0, []string{"-e", stmt})
-		if err != nil {
-			return errors.Wrap(err, "error reading cluster setting")
-		}
-
-		if len(res) > 0 {
-			if res[0].Err != nil {
-				return errors.Wrapf(res[0].Err, "node %d", res[0].Node)
-			}
-
-			if strings.Contains(res[0].CombinedOut, "system") {
-				return errors.Newf("target cluster on n%d is still system", res[0].Node)
-			}
-		}
-
-		// Once we know the cluster setting points to the default target
-		// cluster, we attempt to run a dummy SQL statement until that
-		// succeeds (i.e., until the target cluster is able to handle
-		// requests.)
-		const pingStmt = "SELECT 1;"
-		res, err = c.ExecSQL(ctx, l, Nodes{startOpts.GetInitTarget()}, "", 0, []string{"-e", pingStmt})
-		if err != nil {
-			return errors.Wrap(err, "error connecting to default target cluster")
-		}
-
-		if res[0] != nil && res[0].Err != nil {
-			err = errors.CombineErrors(err, res[0].Err)
-		}
-
-		return err
-	})
-}
-
 // createAdminUserForSecureCluster creates a `roach` user with admin
 // privileges. The password used matches the virtual cluster name
 // ('system' for the storage cluster). If it cannot be created, this
@@ -1083,6 +1016,9 @@ func (c *SyncedCluster) createAdminUserForSecureCluster(
 	if err := retryOpts.Do(ctx, func(ctx context.Context) error {
 		// We use the first node in the virtual cluster to create the user.
 		firstNode := c.TargetNodes()[0]
+		if startOpts.VirtualClusterName == "" {
+			startOpts.VirtualClusterName = SystemInterfaceName
+		}
 		results, err := c.ExecSQL(
 			ctx, l, Nodes{firstNode}, startOpts.VirtualClusterName, startOpts.SQLInstance, []string{
 				"-e", stmts,
@@ -1412,7 +1348,7 @@ func (c *SyncedCluster) createFixedBackupSchedule(
 		if res != nil {
 			out = res.CombinedOut
 		}
-		return errors.Wrapf(err, "~ %s\n%s", fullCmd, out)
+		return errors.Wrapf(errors.CombineErrors(err, res.Err), "~ %s\n%s", fullCmd, out)
 	}
 
 	if out := strings.TrimSpace(res.CombinedOut); out != "" {
13 changes: 11 additions & 2 deletions pkg/sql/gcjob_test/gc_job_test.go

@@ -234,9 +234,18 @@ func doTestSchemaChangeGCJob(t *testing.T, dropItem DropItem, ttlTime TTLTime) {
 	// Check that the job started.
 	jobIDStr := strconv.Itoa(int(job.ID()))
 	testutils.SucceedsSoon(t, func() error {
-		return jobutils.VerifyRunningSystemJob(
+		if err := jobutils.VerifyRunningSystemJob(
 			t, sqlDB, 0, jobspb.TypeSchemaChangeGC, sql.RunningStatusWaitingGC, lookupJR,
-		)
+		); err != nil {
+			// Since the intervals are set very low, the GC TTL job may have already
+			// started. If so, the status will be "deleting data" since "waiting for
+			// GC TTL" will have completed already.
+			if testutils.IsError(err, "expected running status waiting for GC TTL, got deleting data") {
+				return nil
+			}
+			return err
+		}
+		return nil
 	})
 
 	if ttlTime != FUTURE {
