gazctl Command

gazctl

Usage:
  gazctl [OPTIONS] <command>

gazctl is a tool for interacting with Gazette brokers and consumer applications.

See --help pages of each sub-command for documentation and usage examples.
Optionally configure gazctl with a 'gazctl.ini' file in the current working directory,
or with '~/.config/gazette/gazctl.ini'. Use the 'print-config' sub-command to inspect
the tool's current configuration.


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

Available commands:
  attach-uuids  Generate and attach UUIDs to text input records
  journals      Interact with broker journals
  print-config  Print combined configuration and exit
  shards        Interact with consumer shards

gazctl attach-uuids

Usage:
  gazctl [OPTIONS] attach-uuids [attach-uuids-OPTIONS] [Paths...]

For each line of each argument input file, generate an RFC 4122 v1-compatible
UUID and, using the --template, combine it with the input line into output
written to stdout. If no input file arguments are given, stdin is read instead.

Exactly-once processing semantics require that messages carry a v1 UUID which
is authored by Gazette. The UUID encodes a unique producer ID, monotonic Clock,
and transaction flags.

attach-uuids facilitates pre-processing text files or unix pipelines in
preparation for appending to a journal, by associating each input with a
corresponding UUID. UUIDs are flagged as committed, meaning they will be
processed immediately by readers. attach-uuids may be used directly in a
pipeline of streamed records.

When processing files in preparation for append to Gazette, it's best practice
to attach UUIDs into new temporary file(s), and then append the temporary files
to journals. This ensures messages are processed only once even if one or both
of the attach-uuids or append steps fail partway through and are restarted.

However, avoid appending many small files in this way, as each invocation of
attach-uuids generates a new random producer ID, and each producer ID requires
that consumers track a very small amount of state (eg, its Clock). Instead,
first combine many small files into few large ones before attaching UUIDs.
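The practice above might be sketched as follows; file and journal names are
illustrative, and the append step assumes a running broker:

```shell
# Combine many small files into one large input, so the whole batch
# shares a single producer ID.
cat part-*.csv > combined.csv

# Attach UUIDs into a temporary file first...
gazctl attach-uuids combined.csv > combined.uuids.tmp

# ...and only then append the prepared file to a journal. If either step
# fails partway, it can be restarted without double-processing records.
gazctl journals append -l name=my/journal -i combined.uuids.tmp
```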

Prefix CSV rows with a UUID (using the default --template):
>  gazctl attach-uuids inputOne.csv inputTwo.csv inputN.csv

Prefix CSV rows, but skip an initial header row of each input:
>  gazctl attach-uuids --skip-header inputOne.csv inputTwo.csv

Postfix CSV rows with a UUID (use $'..' to correctly handle newline escape):
>  gazctl attach-uuids input.csv --template=$'{{.Line}},{{.UUID}}\n'

Wrap JSON inputs with a UUID:
> gazctl attach-uuids input.json --template=$'{"uuid": "{{.UUID}}","record":{{.Line}}}\n'

Optionally compose with "jq" to un-nest the JSON objects:
> gazctl attach-uuids input.json --template=$'{"uuid": "{{.UUID}}","record":{{.Line}}}\n' \
>	| jq -c '{uuid: .uuid} + .record'


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[attach-uuids command options]
          --template=                Go text/template for output (default: "{{.UUID}},{{.Line}}\n")
          --max-length=              Maximum allowed byte-length of an input line (default: 4194304)
          --skip-header              Omit the first line of each input file

gazctl journals append

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] append [append-OPTIONS]

Append content to one or more journals.

A label --selector is required, and determines the set of journals which are appended.
See "journals list --help" for details and examples of using journal selectors.

If --framing 'none', then --mapping must be 'random' and the input is atomically
appended to a random journal of the selector. Note --selector name=my/journal/name
can be used to append to a specific journal.

If --framing 'lines' then records are read from the input line-by-line. Each is
mapped to a journal and appended on an atomic-per-record basis. The relative
ordering of records in a specific mapped journal is preserved. --framing 'fixed'
functions like 'lines', except that records are read from input delimited by
a leading fixed-framing header. Note that record delimiters (newlines or
fixed-framing headers) are retained and included when appending into mapped
journals.

If --mapping 'random', each record is independently mapped to a random journal.
If --mapping 'modulo' or 'rendezvous', then each input record is expected to be
preceded by a partition-key written with the same --framing (eg, if --framing
'lines' then 'A-Partition-Key\nA-Record\n'). The partition key is used to map
the record which follows to a target journal under the applicable mapping scheme
(eg, modulo arithmetic or rendezvous / "highest random weight" hashing). To
use binary partition keys with --framing 'lines', encode each partition key
using base64 and specify --base64.

If --mapping 'direct', then each input record is preceded by a journal name,
which must be a current journal of the --selector, and to which the record is
appended.

Use --log.level=debug to inspect individual mapping decisions.

Examples:

# Write the content of ./fizzbuzz to my/journal:
gazctl journals append -l name=my/journal -i ./fizzbuzz

# Write two records to partitions of my-label mapped by Key1 and Key2, respectively:
gazctl journals append -l my-label --framing 'lines' --mapping 'modulo' --base64 << EOF
S2V5MQ==
{"Msg": "record 1"}
S2V5Mg==
{"Msg": "record 2"}
EOF

# Serve all writers to my-fifo as a long-lived daemon. Note that posix FIFOs do
# not EOF while at least one process holds an open write descriptor. But, do
# take care to have just one pipe writer at a time:
mkfifo my-fifo
cat /dev/stdout > my-fifo &	# Hold my-fifo open so gazctl doesn't read EOF.
gazctl journals append -l my-label --framing 'lines' --mapping 'rendezvous' --input my-fifo


Application Options:
      --zone=                                         Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]                   Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color]                  Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                                          Show this help message

[journals command options]

    Broker:
          --broker.address=                           Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=                        Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=                         Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[append command options]
      -l, --selector=                                 Label selector of journals to append to
      -i, --input=                                    Input file path. Use '-' for stdin (default: -)
      -f, --framing=[none|lines|fixed]                Framing of records in input, if any (default: none)
      -m, --mapping=[random|modulo|rendezvous|direct] Mapping function of records to journals (default: random)
          --base64                                    Partition keys under 'lines' framing are interpreted as base64

gazctl journals apply

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] apply [apply-OPTIONS]

Apply a collection of JournalSpec creations, updates, or deletions.

JournalSpecs should be provided as a YAML journal hierarchy, the format
produced by "gazctl journals list". This YAML hierarchy format is sugar for
succinctly representing a collection of JournalSpecs, which typically exhibit
common prefixes and configuration. gazctl will flatten the YAML hierarchy
into the implicated collection of JournalSpec changes, and send each to the
brokers for application.

Brokers verify that the etcd "revision" field of each JournalSpec is correct,
and will fail the entire apply operation if any have since been updated. A
common operational pattern is to list, edit, and re-apply a collection of
JournalSpecs; this check ensures concurrent modifications are caught.
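A sketch of that list, edit, and re-apply pattern (the selector and file paths
are illustrative):

```shell
# Capture the current JournalSpecs, including their Etcd revisions.
gazctl journals list -l prefix=my/prefix/ --format yaml > specs.yaml

# Edit specs.yaml, then validate with a dry-run before applying.
gazctl journals apply --specs specs.yaml --dry-run

# Apply for real; this fails if any spec was concurrently modified.
gazctl journals apply --specs specs.yaml
```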

You may explicitly inform the broker to apply your JournalSpecs regardless of
the current state of specifications in Etcd by passing in a revision value of
-1. This is commonly done when operators keep JournalSpecs in version control
as their source of truth.

JournalSpecs may be created by setting "revision" to zero or omitting it altogether.

JournalSpecs may be deleted by setting field "delete" to true on individual
journals or parents thereof in the hierarchy. Note that deleted parent prefixes
will cascade only to JournalSpecs *explicitly listed* as children of the prefix
in the YAML, and not to other JournalSpecs which may exist with the prefix but
are not enumerated.

In the event that this command generates more changes than are possible in a
single Etcd transaction given the current server configuration (default 128),
gazctl supports a flag which will send changes in batches of at most
--max-txn-size. However, this means the entire apply is no longer issued as
a single Etcd transaction and it should therefore be used with caution.
If possible, prefer to use label selectors to limit the number of changes.

Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[journals command options]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[apply command options]
          --specs=                   Input specifications path to apply. Use '-' for stdin (default: -)
          --dry-run                  Perform a dry-run of the apply
          --max-txn-size=            maximum number of specs to be processed within an apply transaction. If 0, the default, all changes are issued in a single transaction (default: 0)

gazctl journals edit

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] edit [edit-OPTIONS]

Edit and apply journal specifications.

The edit command allows you to directly edit journal specifications matching
the supplied LabelSelector. It will open the editor defined by your GAZ_EDITOR or
EDITOR environment variables or fall back to 'vi'. Editing from Windows is
currently not supported.

Upon exiting the editor, if the file has been changed, it will be validated and
applied. If the file is invalid or fails to apply, the editor is re-opened.
Exiting the editor with no changes or saving an empty file are interpreted as
the user aborting the edit attempt.

Use --selector to supply a LabelSelector which constrains the set of returned
journal specifications. See "journals list --help" for details and examples.

Edit specifications of journals having an exact name:
>    gazctl journals edit --selector "name in (foo/bar, baz/bing)"

Use an alternative editor:
>    GAZ_EDITOR=nano gazctl journals edit --selector "prefix = my/prefix/"

In the event that this command generates more changes than are possible in a
single Etcd transaction given the current server configuration (default 128),
gazctl supports a flag which will send changes in batches of at most
--max-txn-size. However, this means the entire apply is no longer issued as
a single Etcd transaction and it should therefore be used with caution.
If possible, prefer to use label selectors to limit the number of changes.

Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[journals command options]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[edit command options]
      -l, --selector=                Label Selector query to filter on
          --max-txn-size=            maximum number of specs to be processed within an apply transaction. If 0, the default, all changes are issued in a single transaction (default: 0)

gazctl journals fragments

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] fragments [fragments-OPTIONS]

List fragments of selected journals.

A label --selector is required, and determines the set of journals for which
fragments are listed. See "journals list --help" for details and examples of
using journal selectors.

Use --from and/or --to to retrieve fragments persisted within the given time
range. Note that each fragment is evaluated based on its modification
timestamp as supplied by its backing fragment store. Usually this will be the
time at which the fragment was uploaded to the store, but may not be if
another process has modified or touched the fragment (Gazette itself will never
modify a fragment once written). --from and --to are given in Unix seconds since
the epoch. Use the 'date' tool to convert humanized timestamps to epoch values.
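For example, with GNU date (BSD/macOS date uses different flags; the
timestamps below are illustrative):

```shell
# Convert humanized timestamps into Unix epoch seconds for --from / --to.
FROM=$(date -d '2021-06-01 03:00' '+%s')
TO=$(date -d '2021-06-01 04:05' '+%s')

# The queried span covers 65 minutes (3900 seconds).
echo "--from $FROM --to $TO"
```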

If --url-ttl is specified, the broker will generate and return a signed GET
URL having the given TTL, suitable for directly reading the fragment from the
backing store.

Results can be output in a variety of --format options:
json: Prints Fragments encoded as JSON, one per line.
proto: Prints Fragments and response headers in protobuf text format.
table: Prints as a humanized table.

Combining --from, --to, and --url-ttl enables this command to generate inputs for
regularly-run batch processing pipelines. For example, a cron job running at ten
past the hour would fetch fragments persisted between the beginning and end of
the last hour with an accompanying signed URL. That fragment list becomes input
to an hourly batch pipeline run, which can directly read journal data from URLs
without consulting brokers (or even being aware of them).

See also the 'flush_interval' JournalSpec field, which can be used to bound the
maximum delay of a record being written to a journal, vs that same record being
persisted with its fragment to a backing store. Note that a fragment's time is
an upper-bound on the append time of all contained records, and a fragment
persisted at 4:01pm may contain records from 3:59pm. A useful pattern is to
extend the queried range slightly (eg from 3:00-4:05pm), and then filter on
record timestamps to the precise desired range (of 3:00-4:00pm).

Examples:

# List fragments of a journal in a formatted table:
gazctl journals fragments -l name=my/journal

# List fragments created in the last hour in prototext format, including a signed URL.
gazctl journals fragments -l name=my/journal --url-ttl 1m --from $(date -d "1 hour ago" '+%s') --format proto

# List fragments of journals matching my-label which were persisted between 3:00AM
# and 4:05AM today with accompanying signed URL, output as JSON.
gazctl journals fragments -l my-label --format json --url-ttl 1h --from $(date -d 3AM '+%s') --to $(date -d 4:05AM '+%s')
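The JSON output composes with jq to feed such a batch run. The field names
below are assumptions for illustration only; inspect the actual --format json
output of your deployment first:

```shell
# Sample listing (hypothetical fields, not real gazctl output): one
# Fragment per line, with a signed URL for direct store reads.
cat > fragments.json <<'EOF'
{"name": "my/journal", "begin": 0, "end": 1024, "signed_url": "https://store.example/frag-1"}
{"name": "my/journal", "begin": 1024, "end": 2048, "signed_url": "https://store.example/frag-2"}
EOF

# Extract one URL per line as input for the hourly batch pipeline.
jq -r '.signed_url' fragments.json > urls.txt
```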


Application Options:
      --zone=                         Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]   Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color]  Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                          Show this help message

[journals command options]

    Broker:
          --broker.address=           Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=        Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=         Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[fragments command options]
      -l, --selector=                 Label Selector query to filter on
      -o, --format=[table|json|proto] Output format (default: table)
          --from=                     Restrict to fragments created at or after this time, in unix seconds since epoch
          --to=                       Restrict to fragments created before this time, in unix seconds since epoch
          --url-ttl=                  Provide a signed GET URL with the given TTL

gazctl journals list

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] list [list-OPTIONS]

List journal specifications and status.

Use --selector to supply a LabelSelector which constrains the set of returned
journals. Journal selectors support additional meta-labels "name" and "prefix".

Match JournalSpecs having an exact name:
>    --selector "name in (foo/bar, baz/bing)"

Match JournalSpecs having a name prefix (must end in '/'):
>    --selector "prefix = my/prefix/"

Results can be output in a variety of --format options:
yaml:  Prints a YAML journal hierarchy, compatible with "journals apply".
json:  Prints JournalSpecs encoded as JSON, one per line.
proto: Prints JournalSpecs encoded in protobuf text format.
table: Prints as a table (see other flags for column choices).

When output as a journal hierarchy, gazctl will "hoist" the returned collection
of JournalSpecs into a hierarchy of journals having common prefixes and,
typically, common configuration. This hierarchy is simply sugar for, and is
exactly equivalent to, the original JournalSpecs.


Application Options:
      --zone=                              Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]        Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color]       Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                               Show this help message

[journals command options]

    Broker:
          --broker.address=                Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=             Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=              Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[list command options]
      -l, --selector=                      Label Selector query to filter on
      -o, --format=[table|yaml|json|proto] Output format (default: table)
      -L, --label-columns=                 Labels to present as columns, eg -L label-one -L label-two
      -p, --primary                        Show primary column
      -r, --replicas                       Show replicas column
          --rf                             Show replication factor column
          --stores                         Show fragment store column

gazctl journals prune

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] prune [prune-OPTIONS]

Deletes fragments of matching journals, across all of their configured
fragment stores, which are older than the configured retention.

There is a caveat when pruning journals. For a given journal, there could be
multiple fragments covering the same offset. These fragments contain identical
data at a given offset, but the brokers track only the largest fragment, i.e.
the fragment covering the largest span of offsets. As a result, the prune
command will delete only this tracked fragment, leaving the smaller fragments
untouched. As a workaround, operators can wait for the fragment listing to
refresh and prune the journals again.

Use --selector to supply a LabelSelector to select journals to prune.
See "journals list --help" for details and examples.


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[journals command options]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[prune command options]
      -l, --selector=                Label Selector query to filter on
          --dry-run                  Perform a dry-run of the apply

gazctl journals read

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] read [read-OPTIONS]

Read the contents of one or more journals.

A label --selector is required, and determines the set of journals which are read.
See "journals list --help" for details and examples of using journal selectors.

Matched journals are read concurrently, and their content is multiplexed into
the output file (or stdout). Content is copied to the output in whole-fragment
chunks, and so long as journal appends reflect whole message boundaries, this
command will also respect those boundaries in the merged output.

The --selector is evaluated both at startup and also periodically during
execution. As new journals are matched by the selector, and old ones stop
matching, corresponding read operations are started and stopped.

Journals are read until the write-head is reached (OFFSET_NOT_YET_AVAILABLE),
or gazctl is signaled (Ctrl-C or SIGTERM). If --block is specified, reads will
block upon reaching the write-head and thereafter stream content as it commits.

By default reads of journals begin at byte offset 0. If --offsets is specified,
it must exist and be a JSON mapping of journal name to read offset, and is used
to supply the initial read offsets for selected journals. --offsets-out in turn
is a path to which final journal offsets are written on exit (either due to
Ctrl-C or because all available content has been read). If --offsets and
--offsets-out are the same path, the existing offsets will be retained and
moved to a ".previous" suffix.

If --tail is specified and a journal is not present in --offsets, then its read
begins at its current write-head. This option generally only makes sense with
--block, but can also be used to initialize --offsets-out.
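For instance, an --offsets file is a single JSON object; the journal names and
offsets here are illustrative:

```shell
# Resume my/journal at byte 8192; read other/journal from the beginning.
cat > offsets.json <<'EOF'
{
  "my/journal": 8192,
  "other/journal": 0
}
EOF
```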

When running in high-volume production settings, be sure to set a non-zero
--broker.cache.size to significantly reduce broker load. Aside from controlling
the cache size itself, a non-zero value will:
* Disable broker-side proxying of requests, such that gazctl directly routes and
dispatches to applicable brokers, and
* Turn off broker proxy reads of fragment files in backing stores. Instead,
gazctl will read directly from stores via signed URLs that brokers provide.

When client-side reads of fragments stored to a 'file://' backing store are
desired, use the --file-root option to specify the directory of the store (eg,
this might be the local mount-point of a NAS array also used by brokers).

Examples:

# Read all available journal content:
gazctl journals read -l name=my/journal

# Streaming read from tail of current (and future) journals matching my-label:
gazctl journals read -l my-label --block --tail

# Read new content from matched journals since the last invocation. Dispatch to
# brokers in our same availability zone where available, and directly read
# persisted fragments from their respective stores:
echo "{}" > offsets.json # Must already exist.
gazctl journals read -l my-label -o output --offsets offsets.json --offsets-out offsets.json --broker.cache.size=256 --zone=us-east-1


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[journals command options]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[read command options]
      -l, --selector=                Label selector of journals to read
      -b, --block                    Do not exit on journal EOF; wait for new data until signaled
          --tail                     Start reading from the journal write-head (rather than offset 0)
      -o, --output=                  Output file path. Use '-' for stdout (default: -)
          --offsets=                 Path from which initial journal offsets are read at startup
          --offsets-out=             Path to which final journal offsets are written at exit
          --file-root=               Filesystem path which roots file:// fragment store

gazctl journals reset-head

Usage:
  gazctl [OPTIONS] journals [journals-OPTIONS] reset-head [reset-head-OPTIONS]

Reset the append offset of journals.

Gazette appends are transactional: all brokers must agree on the exact offsets
at which an append operation will be written into a journal. The offset is an
explicit participant in the broker's transaction protocol. New participants are
"caught up" on the current offset by participating in broker transactions, and
brokers will delay releasing responsibility for a journal until all peers have
participated in a synchronizing transaction. This makes Gazette tolerant to up
to R-1 independent broker process failures, where R is the replication factor
of the journal.

However, disasters and human errors do happen, and if R or more independent
failures occur, Gazette employs a fail-safe to minimize the potential for a
journal offset to be written more than once: brokers require that the remote
fragment index not include a fragment offset larger than the append offset known
to replicating broker peers, and will refuse the append if this constraint is
violated.

Eg, if N >= R failures occur, then the set of broker peers of a journal will not
have participated in an append transaction; their append offset will be zero,
which is less than the maximum offset contained in the fragment store. The
brokers will refuse all appends to preclude double-writing of an offset.

This condition must be explicitly cleared by the Gazette operator using the
reset-head command. The operator should delay running reset-head until absolutely
confident that all journal fragments have been persisted to cloud storage (eg,
because all previous broker processes have exited).

Then, the effect of reset-head is to jump the append offset forward to the
maximum indexed offset, allowing new append operations to proceed.

reset-head is safe to run against journals which are in a fully consistent state,
though it is likely to fail harmlessly if the journal is being actively written.


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[journals command options]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[reset-head command options]
      -l, --selector=                Label Selector query to filter on

gazctl print-config

Usage:
  gazctl [OPTIONS] print-config

print-config parses the combined configuration from gazctl.ini, flags,
and environment variables, and then writes the configuration to stdout in INI format.


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

gazctl shards apply

Usage:
  gazctl [OPTIONS] shards [shards-OPTIONS] apply [apply-OPTIONS]

Apply a collection of ShardSpec creations, updates, or deletions.

ShardSpecs should be provided as a YAML list, the same format produced by
"gazctl shards list". Consumers verify that the Etcd "revision" field of each
ShardSpec is correct, and will fail the entire apply operation if any have since
been updated. A common operational pattern is to list, edit, and re-apply a
collection of ShardSpecs; this check ensures concurrent modifications are caught.

You may explicitly inform the consumer application to apply your ShardSpecs
regardless of the current state of specifications in Etcd by passing in a
revision value of -1. This is commonly done when operators keep ShardSpecs in
version control as their source of truth.

ShardSpecs may be created by setting "revision" to zero or omitting it altogether.

ShardSpecs may be deleted by setting their field "delete" to true.

In the event that this command generates more changes than are possible in a
single Etcd transaction given the current server configuration (default 128),
gazctl supports a flag which will send changes in batches of at most
--max-txn-size. However, this means the entire apply is no longer issued as
a single Etcd transaction and it should therefore be used with caution.
If possible, prefer to use label selectors to limit the number of changes.

Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[shards command options]

    Consumer:
          --consumer.address=        Service address endpoint (default: http://localhost:8080) [$CONSUMER_ADDRESS]
          --consumer.cache.size=     Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$CONSUMER_CACHE_SIZE]
          --consumer.cache.ttl=      Time-to-live of route cache entries. (default: 1m) [$CONSUMER_CACHE_TTL]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[apply command options]
          --specs=                   Input specifications path to apply. Use '-' for stdin (default: -)
          --dry-run                  Perform a dry-run of the apply
          --max-txn-size=            Maximum number of specs to be processed within an apply transaction. If 0, the default, all changes are issued in a single transaction (default: 0)

gazctl shards edit

Usage:
  gazctl [OPTIONS] shards [shards-OPTIONS] edit [edit-OPTIONS]

Edit and apply shard specifications.

The edit command allows you to directly edit shard specifications matching
the supplied LabelSelector. It will open the editor defined by your GAZ_EDITOR or
EDITOR environment variables, falling back to 'vi'. Editing from Windows is
currently not supported.

Upon exiting the editor, if the file has been changed, it will be validated and
applied. If the file is invalid or fails to apply, the editor is re-opened.
Exiting the editor with no changes or saving an empty file are interpreted as
the user aborting the edit attempt.

Use --selector to supply a LabelSelector which constrains the set of returned
shard specifications. See "shards list --help" for details and examples.

Edit specifications of shards having an exact ID:
>    gazctl shards edit --selector "id in (foo, bar)"

Use an alternative editor:
>    GAZ_EDITOR=nano gazctl shards edit --selector "id = baz"

In the event that this command generates more changes than are possible in a
single Etcd transaction given the current server configuration (default 128),
gazctl supports a flag which will send changes in batches of at most
--max-txn-size. However, this means the entire apply is no longer issued as
a single Etcd transaction and it should therefore be used with caution.
If possible, prefer to use label selectors to limit the number of changes.
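
Batch a large edit into transactions of at most 64 specs (assuming a
hypothetical user-defined "app" label):

>    gazctl shards edit --selector "app = payments" --max-txn-size 64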

Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[shards command options]

    Consumer:
          --consumer.address=        Service address endpoint (default: http://localhost:8080) [$CONSUMER_ADDRESS]
          --consumer.cache.size=     Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$CONSUMER_CACHE_SIZE]
          --consumer.cache.ttl=      Time-to-live of route cache entries. (default: 1m) [$CONSUMER_CACHE_TTL]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[edit command options]
      -l, --selector=                Label Selector query to filter on
          --max-txn-size=            Maximum number of specs to be processed within an apply transaction. If 0, the default, all changes are issued in a single transaction (default: 0)

gazctl shards list

Usage:
  gazctl [OPTIONS] shards [shards-OPTIONS] list [list-OPTIONS]

List shard specifications and status.

Use --selector to supply a LabelSelector which constrains the set of returned
shards. Shard selectors support an additional meta-label "id".

Match ShardSpecs having a specific ID:
>    --selector "id in (shard-12, shard-34)"

Results can be output in a variety of --format options:
yaml:  Prints shards in YAML form, compatible with "shards apply"
json:  Prints ShardSpecs encoded as JSON
proto: Prints ShardSpecs encoded in protobuf text format
table: Prints as a table (see other flags for column choices)

It's recommended that --lag be used with a relatively focused --selector,
as fetching consumption lag for a large number of shards may take a while.
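
For example, inspect the primary assignments and consumption lag of two
specific shards (hypothetical IDs):

>    gazctl shards list --selector "id in (shard-12, shard-34)" --primary --lag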


Application Options:
      --zone=                              Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]        Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color]       Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                               Show this help message

[shards command options]

    Consumer:
          --consumer.address=              Service address endpoint (default: http://localhost:8080) [$CONSUMER_ADDRESS]
          --consumer.cache.size=           Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$CONSUMER_CACHE_SIZE]
          --consumer.cache.ttl=            Time-to-live of route cache entries. (default: 1m) [$CONSUMER_CACHE_TTL]

    Broker:
          --broker.address=                Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=             Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=              Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[list command options]
      -l, --selector=                      Label Selector query to filter on
      -o, --format=[table|yaml|json|proto] Output format (default: table)
      -L, --label-columns=                 Labels to present as columns, eg -L label-one -L label-two
      -p, --primary                        Show primary column
      -r, --replicas                       Show replicas column
          --rf                             Show replication factor column
          --lag                            Show the amount of unread data for each shard

gazctl shards prune

Usage:
  gazctl [OPTIONS] shards [shards-OPTIONS] prune [prune-OPTIONS]

Recovery logs capture every write which has ever occurred in a Shard DB.
This includes all prior writes of client keys & values, and also RocksDB
compactions, which can significantly inflate the total volume of writes
relative to the data currently represented in a RocksDB.

The prune command examines the provided hints to identify Fragments of the log
which have no intersection with any live files of the DB, and which can thus
be safely deleted.
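
A typical invocation previews the prune with --dry-run before running it for
real (hypothetical shard ID):

>    gazctl shards prune --selector "id = shard-12" --dry-run
>    gazctl shards prune --selector "id = shard-12"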


Application Options:
      --zone=                        Availability zone within which this process is running (default: local) [$ZONE]

Logging:
      --log.level=[info|debug|warn]  Logging level (default: info) [$LOG_LEVEL]
      --log.format=[json|text|color] Logging output format (default: text) [$LOG_FORMAT]

Help Options:
  -h, --help                         Show this help message

[shards command options]

    Consumer:
          --consumer.address=        Service address endpoint (default: http://localhost:8080) [$CONSUMER_ADDRESS]
          --consumer.cache.size=     Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$CONSUMER_CACHE_SIZE]
          --consumer.cache.ttl=      Time-to-live of route cache entries. (default: 1m) [$CONSUMER_CACHE_TTL]

    Broker:
          --broker.address=          Service address endpoint (default: http://localhost:8080) [$BROKER_ADDRESS]
          --broker.cache.size=       Size of client route cache. If <= zero, no cache is used (server always proxies) (default: 0) [$BROKER_CACHE_SIZE]
          --broker.cache.ttl=        Time-to-live of route cache entries. (default: 1m) [$BROKER_CACHE_TTL]

[prune command options]
      -l, --selector=                Label Selector query to filter on
          --dry-run                  Perform a dry-run of the prune