This tutorial assumes you are starting fresh and have no existing Kafka data.
Kafka Streams is a client library for building mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed, and much more.
This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist of the [`WordCountDemo`](https://github.com/apache/kafka/blob/4.3/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java) example code.
```java
// Serializers/deserializers (serde) for String and Long types
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();

// Construct a `KStream` from the input topic "streams-plaintext-input", where message values
// represent lines of text (for the sake of this example, we ignore whatever may be stored
// in the message keys)
```
It implements the WordCount algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples you might have seen before that operate on bounded data, the WordCount demo application behaves slightly differently because it is designed to operate on an **infinite, unbounded stream** of data. Similar to the bounded variant, it is a stateful algorithm that tracks and updates the counts of words. However, since it must assume potentially unbounded input data, it will periodically output its current state and results while continuing to process more data because it cannot know when it has processed "all" the input data.
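That periodic-update behavior can be sketched in plain Java, without any Kafka API (the class and method names here are illustrative, not part of the demo): keep a running count per word and emit an updated (word, count) record for every word processed, instead of a single final result.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UnboundedWordCount {
    // Running word counts; this state lives across lines because the input never "ends".
    private final Map<String, Long> counts = new HashMap<>();

    // Process one line of an (in principle endless) stream and return the
    // updated (word, count) records it produces, mirroring how the demo
    // continuously emits its current state rather than one final answer.
    public List<String> processLine(String line) {
        List<String> updates = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;          // skip artifacts of splitting
            updates.add(word + "\t" + counts.merge(word, 1L, Long::sum));
        }
        return updates;
    }

    public static void main(String[] args) {
        UnboundedWordCount wc = new UnboundedWordCount();
        System.out.println(wc.processLine("hello kafka streams"));
        System.out.println(wc.processLine("join kafka summit"));
        // the second line re-emits "kafka" with its incremented count
    }
}
```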
As the first step, we will start Kafka (unless you already have it started).
### Step 1: Download the code
[Download](https://www.apache.org/dyn/closer.cgi?path=/kafka/4.3.0/kafka_2.13-4.3.0.tgz "Kafka downloads") the 4.3.0 release and un-tar it. Note that there are multiple downloadable Scala versions and we choose to use the recommended version (2.13) here:
### Step 3: Prepare input topic and start Kafka producer
Next, we create the input topic named **streams-plaintext-input** and the output topic named **streams-wordcount-output**:
```bash
$ bin/kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --replication-factor 1 \
    --partitions 1 \
    --topic streams-plaintext-input
Created topic "streams-plaintext-input".
```
Note: we create the output topic with compaction enabled because the output stream is a changelog stream (cf. explanation of application output below).
```bash
$ bin/kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --replication-factor 1 \
    --partitions 1 \
    --topic streams-wordcount-output \
    --config cleanup.policy=compact
Created topic "streams-wordcount-output".
```
The created topic can be described with the same **kafka-topics** tool:
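For example, a describe call could look like this (a sketch using the standard `kafka-topics.sh` flags; it assumes the broker started earlier is still running, and the output is omitted here):

```bash
$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe \
    --topic streams-wordcount-output
```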
The demo application will read from the input topic **streams-plaintext-input**, perform the computations of the WordCount algorithm on each of the read messages, and continuously write its current results to the output topic **streams-wordcount-output**. Hence there won't be any STDOUT output except log entries, as the results are written back into Kafka.
Optionally, use `group.protocol=streams` to enable the new rebalancing protocol introduced in KIP-1071. This shifts task assignment logic from the client to the broker, reducing rebalance times and eliminating stop-the-world rebalances.
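If you want to try this, the setting goes into the Streams application's configuration file (shown here in a hypothetical `streams.properties`; the file name is an assumption, the property comes from the text above, and the broker must support KIP-1071):

```properties
# enable broker-side task assignment (KIP-1071); requires a compatible broker
group.protocol=streams
```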
Now let's write a message with the console producer into the input topic **streams-plaintext-input** by entering a single line of text and then hitting <RETURN>. This will send a new message to the input topic, where the message key is null and the message value is the string-encoded text line that you just entered (in practice, input data for applications will typically be streaming continuously into Kafka, rather than being manually entered as we do in this quickstart):
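The console producer can be started as follows (standard `kafka-console-producer.sh` flags; assumes the broker started earlier is running):

```bash
$ bin/kafka-console-producer.sh --bootstrap-server localhost:9092 \
    --topic streams-plaintext-input
```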
This message will be processed by the WordCount application, and the following output data will be written to the **streams-wordcount-output** topic and printed by the console consumer:
Here, the first column is the Kafka message key in `java.lang.String` format and represents a word that is being counted, and the second column is the message value in `java.lang.Long` format, representing the word's latest count.
Now let's continue writing one more message with the console producer into the input topic **streams-plaintext-input**. Enter the text line "hello kafka streams" and hit <RETURN>. Your terminal should look as follows:
Here the last printed lines **kafka 2** and **streams 2** indicate updates to the keys **kafka** and **streams** whose counts have been incremented from **1** to **2**. Whenever you write further input messages to the input topic, you will observe new messages being added to the **streams-wordcount-output** topic, representing the most recent word counts as computed by the WordCount application. Let's enter one final input text line "join kafka summit" and hit <RETURN> in the console producer to the input topic **streams-plaintext-input** before we wrap up this quickstart:
As one can see, the output of the WordCount application is actually a continuous stream of updates, where each output record (i.e. each line in the original output above) is an updated count for a single word (the record key, such as "kafka"). For multiple records with the same key, each later record is an update of the previous one.
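These update semantics can be sketched in plain Java (illustrative names, not the Kafka Streams API): replaying the changelog stream into a map keeps only the latest count per key, which is exactly how a compacted output topic resolves to a table of current word counts.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChangelogReplay {
    // Replaying a changelog: later records overwrite earlier ones per key,
    // so the map ends up holding the latest count for each word.
    public static Map<String, Long> materialize(List<Map.Entry<String, Long>> changelog) {
        Map<String, Long> table = new LinkedHashMap<>();
        for (Map.Entry<String, Long> record : changelog) {
            table.put(record.getKey(), record.getValue()); // update in place per key
        }
        return table;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Long>> changelog = List.of(
                Map.entry("hello", 1L), Map.entry("kafka", 1L),
                Map.entry("streams", 1L), Map.entry("kafka", 2L),
                Map.entry("streams", 2L));
        // the later "kafka 2" and "streams 2" records supersede the earlier counts
        System.out.println(materialize(changelog));
    }
}
```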