
Topic configuration property table for php-rdkafka (translation kept for reference; deprecated properties are not translated)

Topic configuration properties

Property C/P Range Default Importance Description
request.required.acks P -1 .. 1000 -1 high This field indicates the number of acknowledgements the leader broker must receive from ISR brokers before responding to the request: 0 = the broker does not send any response/ack to the client, -1 or all = the broker will block until the message is committed by all in-sync replicas (ISRs). If there are fewer than min.insync.replicas (broker configuration) replicas in the ISR set, the produce request will fail. (See the producer sketch after this table.)
Type: integer
acks P -1 .. 1000 -1 high Alias for request.required.acks: this field indicates the number of acknowledgements the leader broker must receive from ISR brokers before responding to the request: 0 = the broker does not send any response/ack to the client, -1 or all = the broker will block until the message is committed by all in-sync replicas (ISRs). If there are fewer than min.insync.replicas (broker configuration) replicas in the ISR set, the produce request will fail.
Type: integer
request.timeout.ms P 1 .. 900000 30000 medium The ack timeout of the producer request in milliseconds. This value is only enforced by the broker and relies on request.required.acks being != 0.
Type: integer
message.timeout.ms P 0 .. 2147483647 300000 high Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time librdkafka may use to deliver a message (including retries). A delivery error occurs when either the retry count or the message timeout is exceeded. The message timeout is automatically adjusted to transaction.timeout.ms if transactional.id is configured.
Type: integer
delivery.timeout.ms P 0 .. 2147483647 300000 high Alias for message.timeout.ms
Type: integer
queuing.strategy P fifo, lifo fifo low EXPERIMENTAL: subject to change or removal. DEPRECATED Producer queuing strategy. FIFO preserves produce ordering, while LIFO prioritizes new messages.
Type: enum value
produce.offset.report P true, false false low DEPRECATED No longer used.
Type: boolean
partitioner P consistent_random high Partitioner: random - random distribution, consistent - CRC32 hash of key (Empty and NULL keys are mapped to a single partition), consistent_random - CRC32 hash of key (Empty and NULL keys are randomly partitioned), murmur2 - Java Producer compatible murmur2 hash of key (NULL keys are mapped to a single partition), murmur2_random - Java Producer compatible murmur2 hash of key (NULL keys are randomly partitioned; this is functionally equivalent to the default partitioner in the Java Producer), fnv1a - FNV-1a hash of key (NULL keys are mapped to a single partition), fnv1a_random - FNV-1a hash of key (NULL keys are randomly partitioned).
Type: string
partitioner_cb P low Custom partitioner callback (set with rd_kafka_topic_conf_set_partitioner_cb())
Type: see dedicated API
msg_order_cmp P low EXPERIMENTAL: subject to change or removal. DEPRECATED Message queue ordering comparator (set with rd_kafka_topic_conf_set_msg_order_cmp()). Also see queuing.strategy.
Type: see dedicated API
opaque * low Application opaque (set with rd_kafka_topic_conf_set_opaque())
Type: see dedicated API
compression.codec P none, gzip, snappy, lz4, zstd, inherit inherit high Compression codec to use for compressing message sets. inherit = inherit the global compression.codec configuration.
Type: enum value
compression.type P none, gzip, snappy, lz4, zstd none medium Alias for compression.codec: compression codec to use for compressing message sets. This is the default value for all topics and may be overridden by the topic configuration property compression.codec.
Type: enum value
compression.level P -1 .. 12 -1 medium Compression level parameter for the algorithm selected by the configuration property compression.codec. Higher values result in better compression at the cost of more CPU usage. The usable range is algorithm-dependent: [0-9] for gzip, [0-12] for lz4, only 0 for snappy; -1 = codec-dependent default compression level.
Type: integer
auto.commit.enable C true, false true low DEPRECATED [LEGACY PROPERTY: This property is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global enable.auto.commit property must be used instead]. If true, periodically commit offset of the last message handed to the application. This committed offset will be used when the process restarts to pick up where it left off. If false, the application will have to call rd_kafka_offset_store() to store an offset (optional). Offsets will be written to broker or local file according to offset.store.method.
Type: boolean
enable.auto.commit C true, false true low DEPRECATED Alias for auto.commit.enable: [LEGACY PROPERTY: This property is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global enable.auto.commit property must be used instead]. If true, periodically commit offset of the last message handed to the application. This committed offset will be used when the process restarts to pick up where it left off. If false, the application will have to call rd_kafka_offset_store() to store an offset (optional). Offsets will be written to broker or local file according to offset.store.method.
Type: boolean
auto.commit.interval.ms C 10 .. 86400000 60000 high [LEGACY PROPERTY: This setting is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global auto.commit.interval.ms property must be used instead]. The frequency in milliseconds that the consumer offsets are committed (written) to offset storage.
Type: integer
auto.offset.reset C smallest, earliest, beginning, largest, latest, end, error largest high Action to take when there is no initial offset in the offset store or the desired offset is out of range: 'smallest', 'earliest' - automatically reset the offset to the smallest offset, 'largest', 'latest' - automatically reset the offset to the largest offset, 'error' - trigger an error (ERR__AUTO_OFFSET_RESET) which is retrieved by consuming messages and checking 'message->err'. (See the consumer sketch after this table.)
Type: enum value
offset.store.path C . low DEPRECATED Path to local file for storing offsets. If the path is a directory a filename will be automatically generated in that directory based on the topic and partition. File-based offset storage will be removed in a future version.
Type: string
offset.store.sync.interval.ms C -1 .. 86400000 -1 low DEPRECATED fsync() interval for the offset file, in milliseconds. Use -1 to disable syncing, and 0 for immediate sync after each write. File-based offset storage will be removed in a future version.
Type: integer
offset.store.method C file, broker broker low DEPRECATED Offset commit store method: 'file' - DEPRECATED: local file store (offset.store.path, et.al), 'broker' - broker commit store (requires "group.id" to be configured and Apache Kafka 0.8.2 or later on the broker.).
Type: enum value
consume.callback.max.messages C 0 .. 1000000 0 low Maximum number of messages to dispatch in one rd_kafka_consume_callback*() call (0 = unlimited)
Type: integer
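
For reference, here is a minimal php-rdkafka producer sketch (not part of the original table) showing how the producer-side properties above (request.required.acks, message.timeout.ms, compression.codec, partitioner) can be set through RdKafka\TopicConf. The broker address localhost:9092, the topic name 'test', the message key/payload, and the lz4 / murmur2_random choices are illustrative assumptions, not recommendations.

```php
<?php
// Global (broker-level) configuration.
$conf = new RdKafka\Conf();
$conf->set('metadata.broker.list', 'localhost:9092'); // placeholder broker address

$producer = new RdKafka\Producer($conf);

// Topic-level configuration using properties from the table above.
$topicConf = new RdKafka\TopicConf();
$topicConf->set('request.required.acks', '-1');   // wait until all ISRs commit the message
$topicConf->set('message.timeout.ms', '300000');  // local delivery timeout (default value)
$topicConf->set('compression.codec', 'lz4');      // illustrative codec choice
$topicConf->set('partitioner', 'murmur2_random'); // Java-producer-compatible partitioning

$topic = $producer->newTopic('test', $topicConf); // 'test' is a placeholder topic name

// Produce one message; RD_KAFKA_PARTITION_UA lets the configured partitioner pick the partition.
$topic->produce(RD_KAFKA_PARTITION_UA, 0, 'hello', 'my-key');
$producer->poll(0);

// Wait up to 10s for outstanding deliveries (flush() requires a reasonably recent php-rdkafka/librdkafka).
$producer->flush(10000);
```

php-rdkafka also exposes RdKafka\TopicConf::setPartitioner() with the RD_KAFKA_MSG_PARTITIONER_* constants as an alternative to the partitioner string property.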
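A corresponding consumer-side sketch, also illustrative: as the table notes, the topic-level auto.commit.enable / enable.auto.commit only apply to the legacy simple consumer, so with the high-level RdKafka\KafkaConsumer the global enable.auto.commit and auto.commit.interval.ms are used instead, while auto.offset.reset still applies. Broker address, group id, and topic name are placeholders.

```php
<?php
$conf = new RdKafka\Conf();
$conf->set('metadata.broker.list', 'localhost:9092'); // placeholder broker address
$conf->set('group.id', 'example-group');              // placeholder consumer group

// For the high-level KafkaConsumer the *global* auto-commit properties are used,
// not the topic-level auto.commit.enable from the table above.
$conf->set('enable.auto.commit', 'true');
$conf->set('auto.commit.interval.ms', '5000');

// Topic-level property; with recent librdkafka it can be set on the global Conf
// and falls through to the default topic configuration.
$conf->set('auto.offset.reset', 'earliest');

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['test']); // placeholder topic name

while (true) {
    $message = $consumer->consume(10000);
    switch ($message->err) {
        case RD_KAFKA_RESP_ERR_NO_ERROR:
            printf("%s\n", $message->payload);
            break;
        case RD_KAFKA_RESP_ERR__PARTITION_EOF:
        case RD_KAFKA_RESP_ERR__TIMED_OUT:
            break; // nothing new yet; keep polling
        default:
            throw new \Exception($message->errstr(), $message->err);
    }
}
```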
