feat: unit type support #75

Merged: 2 commits, Dec 6, 2023
**README.md** (22 additions & 22 deletions)
@@ -98,28 +98,28 @@ Check out [go-dcp](https://github.com/Trendyol/go-dcp#configuration)

### Kafka Specific Configuration

In this table the only substantive change is to `kafka.producerBatchBytes`: its default, previously listed as the raw byte count `10485760`, becomes the unit string `10mb`. The remaining rows differ only in column padding. The updated table:
| Variable | Type | Required | Default | Description |
|-------------------------------------|-------------------|----------|---------|-------------|
| `kafka.collectionTopicMapping` | map[string]string | yes | | Defines which topic the events of each Couchbase collection are sent to. :warning: **If topic information is entered in the mapper, it OVERWRITES this config.** |
| `kafka.brokers` | []string | yes | | Broker IP and port information. |
| `kafka.producerBatchSize` | integer | no | 2000 | Maximum message count per batch; exceeding it triggers a flush. |
| `kafka.producerBatchBytes` | 64-bit integer or unit string | no | 10mb | Maximum batch size in bytes; exceeding it triggers a flush. Accepts a raw byte count or a unit string such as `10mb`. |
| `kafka.producerBatchTimeout` | time.Duration | no | 1 nanosecond | Time limit on how often incomplete message batches are flushed. |
| `kafka.producerMaxAttempts` | int | no | math.MaxInt | Limit on how many attempts are made to deliver a message. |
| `kafka.producerBatchTickerDuration` | time.Duration | no | 10s | Interval at which the batch is flushed automatically, for messages that have been waiting in the batch too long. |
| `kafka.readTimeout` | time.Duration | no | 30s | segmentio/kafka-go timeout for read operations. |
| `kafka.writeTimeout` | time.Duration | no | 30s | segmentio/kafka-go timeout for write operations. |
| `kafka.compression` | integer | no | 0 | Compression can be used if messages are large; CPU usage may be affected. 0=None, 1=Gzip, 2=Snappy, 3=Lz4, 4=Zstd. |
| `kafka.requiredAcks` | integer | no | 1 | segmentio/kafka-go number of acknowledgements from partition replicas required before a produce request receives a response. 0=fire-and-forget (do not wait for acknowledgements), 1=wait for the leader to acknowledge the writes, -1=wait for the full ISR to acknowledge the writes. |
| `kafka.secureConnection` | bool | no | false | Enable secure Kafka. |
| `kafka.rootCAPath` | string | no | *not set | Define the root CA path. |
| `kafka.interCAPath` | string | no | *not set | Define the intermediate CA path. |
| `kafka.scramUsername` | string | no | *not set | Define the SCRAM username. |
| `kafka.scramPassword` | string | no | *not set | Define the SCRAM password. |
| `kafka.metadataTTL` | time.Duration | no | 60s | TTL for the metadata cached by segmentio/kafka-go; increase it to reduce network requests. For more detail, see the [docs](https://pkg.go.dev/github.com/segmentio/kafka-go#Transport.MetadataTTL). |
| `kafka.metadataTopics` | []string | no | | Topic names for the metadata cached by segmentio/kafka-go; list the topics the connector may produce to. In large Kafka clusters this reduces memory usage. For more detail, see the [docs](https://pkg.go.dev/github.com/segmentio/kafka-go#Transport.MetadataTopics). |
| `kafka.clientID` | string | no | | Unique identifier that the transport communicates to the brokers when it sends requests. For more detail, see the [docs](https://pkg.go.dev/github.com/segmentio/kafka-go#Transport.ClientID). |
| `kafka.allowAutoTopicCreation` | bool | no | false | Create the topic if it is missing. For more detail, see the [docs](https://pkg.go.dev/github.com/segmentio/kafka-go#Writer.AllowAutoTopicCreation). |
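
Because the config field backing `kafka.producerBatchBytes` is now typed `any` (see the `config/config.go` diff below), the same YAML key can carry either an integer or a unit string. Below is a minimal, hypothetical sketch of that behavior; it assumes a `gopkg.in/yaml.v3`-style unmarshal, since the YAML library go-dcp actually uses is not shown in this diff:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kafkaConfig mirrors only the field this PR changes.
type kafkaConfig struct {
	ProducerBatchBytes any `yaml:"producerBatchBytes"`
}

func main() {
	docs := []string{
		"producerBatchBytes: 10485760", // raw byte count, as before this PR
		"producerBatchBytes: 10mb",     // unit string, enabled by this PR
	}
	for _, doc := range docs {
		var c kafkaConfig
		if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
			panic(err)
		}
		// Prints the dynamic type: int for the first document, string for the second.
		fmt.Printf("%T: %v\n", c.ProducerBatchBytes, c.ProducerBatchBytes)
	}
}
```

The first document yields an `int`, the second a `string`; resolving that union into a byte count is the job of the helper introduced in `config/config.go` below.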

### Kafka Metadata Configuration (use this if you want to store checkpoint data in Kafka)

**config/config.go** (5 additions & 3 deletions)
```diff
@@ -4,10 +4,13 @@ import (
 	"math"
 	"time"
 
+	"github.com/Trendyol/go-dcp/helpers"
+
 	"github.com/Trendyol/go-dcp/config"
 )
 
 type Kafka struct {
+	ProducerBatchBytes     any               `yaml:"producerBatchBytes"`
 	CollectionTopicMapping map[string]string `yaml:"collectionTopicMapping"`
 	InterCAPath            string            `yaml:"interCAPath"`
 	ScramUsername          string            `yaml:"scramUsername"`
@@ -16,9 +19,8 @@ type Kafka struct {
 	ClientID               string            `yaml:"clientID"`
 	Brokers                []string          `yaml:"brokers"`
 	MetadataTopics         []string          `yaml:"metadataTopics"`
-	ProducerBatchBytes     int64             `yaml:"producerBatchBytes"`
-	ProducerBatchTimeout   time.Duration     `yaml:"producerBatchTimeout"`
 	ProducerMaxAttempts    int               `yaml:"producerMaxAttempts"`
+	ProducerBatchTimeout   time.Duration     `yaml:"producerBatchTimeout"`
 	ReadTimeout            time.Duration     `yaml:"readTimeout"`
 	WriteTimeout           time.Duration     `yaml:"writeTimeout"`
 	RequiredAcks           int               `yaml:"requiredAcks"`
@@ -60,7 +62,7 @@ func (c *Connector) ApplyDefaults() {
 	}
 
 	if c.Kafka.ProducerBatchBytes == 0 {
-		c.Kafka.ProducerBatchBytes = 10485760
+		c.Kafka.ProducerBatchBytes = helpers.ResolveUnionIntOrStringValue("10mb")
 	}
 
 	if c.Kafka.RequiredAcks == 0 {
```
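
`helpers.ResolveUnionIntOrStringValue` comes from go-dcp and is not part of this diff, but `10mb` must resolve to the old default of 10485760 bytes for the change to be behavior-preserving. Here is a hypothetical sketch of what such a union resolver plausibly does; the real helper may accept different units and handle errors differently:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// resolveBatchBytes is a hypothetical stand-in for go-dcp's
// helpers.ResolveUnionIntOrStringValue: it accepts either an integer
// or a string with an optional size suffix and returns a byte count.
func resolveBatchBytes(v any) (int64, error) {
	switch t := v.(type) {
	case int:
		return int64(t), nil
	case int64:
		return t, nil
	case string:
		s := strings.ToLower(strings.TrimSpace(t))
		// Check longer suffixes first so "kb" is not matched as plain "b".
		for _, u := range []struct {
			suffix string
			factor int64
		}{{"gb", 1 << 30}, {"mb", 1 << 20}, {"kb", 1 << 10}, {"b", 1}} {
			if strings.HasSuffix(s, u.suffix) {
				n, err := strconv.ParseInt(strings.TrimSuffix(s, u.suffix), 10, 64)
				if err != nil {
					return 0, err
				}
				return n * u.factor, nil
			}
		}
		// No unit suffix: treat the string as a plain byte count.
		return strconv.ParseInt(s, 10, 64)
	default:
		return 0, fmt.Errorf("unsupported producerBatchBytes type %T", v)
	}
}

func main() {
	for _, v := range []any{10485760, "10mb"} {
		n, err := resolveBatchBytes(v)
		if err != nil {
			panic(err)
		}
		fmt.Println(n) // both print 10485760
	}
}
```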
**kafka/producer/producer.go** (3 additions & 1 deletion)
```diff
@@ -3,6 +3,8 @@ package producer
 import (
 	"time"
 
+	"github.com/Trendyol/go-dcp/helpers"
+
 	"github.com/Trendyol/go-dcp-kafka/config"
 	gKafka "github.com/Trendyol/go-dcp-kafka/kafka"
 	"github.com/Trendyol/go-dcp/models"
@@ -29,7 +31,7 @@ func NewProducer(kafkaClient gKafka.Client,
 		config.Kafka.ProducerBatchTickerDuration,
 		writer,
 		config.Kafka.ProducerBatchSize,
-		config.Kafka.ProducerBatchBytes,
+		int64(helpers.ResolveUnionIntOrStringValue(config.Kafka.ProducerBatchBytes)),
 		dcpCheckpointCommit,
 	),
 }, nil
```
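
Resolving the union value at the point of use, wrapped in `int64(...)` where the internal batch producer expects a byte count, keeps the change backward compatible: configs that already set `producerBatchBytes` to a plain integer behave exactly as before, while unit strings such as `10mb` resolve to the equivalent byte count.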