
Kafka UI: An Intuitive Interface for Apache Kafka Cluster Management

Team Members: Mingxin Hou, Yiren Xu, Bingyao Liu

Part I: Introduction & Set Up

1. Intro and Overview

1.1 Intro to UI for Apache Kafka

Kafka UI is an open-source web application designed to provide a user-friendly interface for the management and monitoring of Apache Kafka clusters. It enables users to navigate through topics, view cluster status, inspect consumer groups, configure settings, and more, without the need to interact directly with the underlying details of the Kafka cluster.[1]

The documentation and user manual for this project can be found here. It provides guidance on configuring and using Kafka UI.

1.2 Technical Overview

  • Frontend: The Kafka UI project employs TypeScript and the React framework, adopting a single-page application (SPA) model for the UI. The use of TypeScript emphasizes type safety and error detection during development.

  • Backend: The backend programming language is Java, used with the Spring Boot framework to manage server-side logic, database interactions, and communication with Kafka. It conforms to a microservices architecture, supporting independent deployment and scaling of services.

  • Project Management and Build Tools: The project utilizes Maven as its project management and build tool, demonstrating strict control over dependency management and build processes, ensuring consistency in builds and efficiency in project maintenance.

1.3 Code Overview

According to the statistics generated by the cloc tool on Linux, the Kafka UI project encompasses a multitude of programming languages, detailed as follows:

  • TypeScript is the primary programming language of the project, consisting of 6,481 files and 325,842 lines of code, indicating the significance of the front-end component.
  • JSON is employed for configuration files and data interchange, with 1,810 files comprising 248,555 lines of code.
  • Markdown is utilized for documentation, represented by 1,330 files, signifying well-maintained project documentation.
  • Java is used for backend logic, with 416 files and 33,570 lines of code, serving as the project's backend programming language.
  • Languages such as XML, Python, YAML, HTML, CSS, and others are applied for configuration, scripting, and web page design.

The project also leverages a variety of scripting and configuration languages, including Bourne Shell, Maven, Sass, ANTLR Grammar, etc., demonstrating its multifaceted build and configuration management.

In total, the project possesses 27,946 files and 4,112,984 lines of code, displaying a vast and complex codebase that necessitates appropriate testing and maintenance strategies to ensure its robustness.

---------------------------------------------------------------------------------------
Language                             files          blank        comment           code
---------------------------------------------------------------------------------------
TypeScript                            6481          42985         203840         325842
JSON                                  1810            107              0         248555
Markdown                              1330          61984              0         142141
Java                                   416           5195            693          33570
XML                                     94             23             50          25261
Python                                  49           4370           7926          23036
YAML                                   198           1615            243          19008
HTML                                    14           4659          11214          18380
CSS                                     64            281             54           8626
Bourne Shell                           128            487            207           2919
Maven                                    5             44             10           1303
Sass                                     5            129              0            684
ANTLR Grammar                            1             86              3            532
Windows Module Definition                5             83              0            451
Lisp                                     2             42             38            258
C#                                       1             55              9            186
DOS Batch                                3             36              0            156
SVG                                      6              1              2            131
make                                     6             44             31            130
PHP                                      1             13             19            124
Protocol Buffers                         6             19              1             82
Bourne Again Shell                       3             13              1             51
TOML                                     1              5              0             36
Dockerfile                               3             12              6             24
SQL                                      1              2              0             22
C++                                      2             12             19             20
Nix                                      1              1              0             19
CoffeeScript                             1              1              0              0
---------------------------------------------------------------------------------------
SUM:                                 27946         396566         820149        4112984
---------------------------------------------------------------------------------------

2. Setup & Build

2.1 Fork & Clone to the local machine

The first thing we did was fork the project into our own repository. The original repository is at https://github.com/provectus/kafka-ui, as shown below.

Fig 1-2-1 Fork button

We used the button indicated by the red arrow to fork the project, then ran the command below to clone the project to the local disk with Git.

git clone https://github.com/mingxin0607/kafka-ui.git

2.2 Build & Run

To build and run this project, we mainly followed the instructions in the original project's documentation. There are many details to pay attention to in order to build this project successfully, so we recorded our own build experience below. If you run into trouble while building the project, we recommend searching the chat history of the community Discord. We are also glad to help.

Please note that there are issues with building this project on Windows; we suggest using WSL on Windows. Some unit tests also fail on macOS, which is a problem with the original project as well, so we generally recommend building this project on Linux.

2.2.1 Build & Run on macOS (Docker)

Initially, we operated on a macOS platform and utilized Docker to build and run the project. To prepare our development environment, we verified that the following tools were installed:

  • Java 17 (for Maven and the Spring Boot backend; some tests may fail with Java 21)
  • Node.js and npm (for frontend application)
  • Docker (for container building and running)
  • Maven (for building the project including the Kafka UI backend).

Step 1: Backend--Build Docker Image:

  • Open terminal, navigate to the project root.
  • Build the project including kafka-ui-api backend:
./mvnw clean install -Pprod

If skipping the tests:

./mvnw clean install -Dmaven.test.skip=true -Pprod

To build only the kafka-ui-api:

./mvnw -f kafka-ui-api/pom.xml clean install -Pprod -DskipUIBuild=true

Step 2: Frontend:

  • Navigate to frontend directory (e.g., kafka-ui-react-app).
  • Install Dependencies:
npm install
  • Build Frontend Application:
npm run build

Step 3: Running the Project:

  • Run with Docker Compose:
docker-compose -f ./documentation/compose/kafka-ui.yaml up -d

Step 4: Access Kafka UI:

  • Once the containers are up, open http://localhost:8080 in a browser to access the Kafka UI.

2.2.2 Build & Run on Ubuntu (Without Docker)

We used WSL on Windows and built the project without Docker. Please note that Docker still needs to be running for the tests. Use sudo before each command if the current WSL user is not the root user.

Method 1: Quick Project Execution (Without Building a JAR):

  • To run Kafka UI quickly without manually building a JAR file, execute a pre-built JAR file with this command (releases can be downloaded from the original project):
java -Dspring.config.additional-location=<path-to-application-local.yml> --add-opens java.rmi/javax.rmi.ssl=ALL-UNNAMED -jar <path-to-kafka-ui-jar>

Replace <path-to-application-local.yml> and <path-to-kafka-ui-jar> with the actual file paths. Configure Kafka cluster details in application-local.yml.
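For reference, a minimal application-local.yml pointing Kafka UI at a single local cluster might look like the sketch below (the cluster name and bootstrap address are placeholders; see the project documentation for the full set of options):

kafka:
  clusters:
    - name: local                      # display name shown in the UI
      bootstrapServers: localhost:9092 # address of the Kafka broker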

Method 2: Manual Build and Execution (Without Docker):

  • Comment out the docker-maven-plugin in kafka-ui-api/pom.xml.
  • Build the JAR file with Maven:
mvn clean install
  • Locate the built JAR (kafka-ui-api-0.0.1-SNAPSHOT.jar) in kafka-ui-api/target.
  • If the prod profile is not active, run the command below
mvn -Pprod clean install -DskipTests=true
  • Run the JAR file using this command.
java -Dspring.config.additional-location=<path-to-application-local.yml> --add-opens java.rmi/javax.rmi.ssl=ALL-UNNAMED -jar <path-to-kafka-ui-jar>

2.3 Build Issues

macOS

  1. After installing JDK 17, remember to switch the default Java version to JDK 17:

    • Use the following command to list all installed Java versions and their locations:

      /usr/libexec/java_home -V
      
    • Use the following command to set the default Java version to JDK 17 (it resolves your JDK 17 installation path automatically):

      export JAVA_HOME=$(/usr/libexec/java_home -v 17)
      
    • Update the PATH environment variable to ensure that the bin directory of JDK 17 comes before others:

      export PATH=$JAVA_HOME/bin:$PATH
      
    • Verify that the Java version has been switched to JDK 17:

      java -version
      
  2. Check the versions of Node.js, npm, and Docker:

    node -v
    npm -v
    docker --version
    
  3. Your macOS version needs to be compatible with the Docker release you install. If you have macOS Catalina (10.15.7), you can refer to the following post to download a compatible Docker version:

    Install Docker on macOS Catalina

  4. When using npm install to build the frontend application, you may need to:

    • Delete the existing node_modules directory and package-lock.json file, then reinstall dependencies:

      rm -rf node_modules package-lock.json
      
      npm install
      
    • Modify some dependency versions in package.json. We changed the following dependencies:

      "vite": "^5.0.0"

      "@types/node": "^20.0.0"

    • After resolving conflicts, execute:

      npm install
      
  5. Confirm if Docker is running:

    docker ps
    

    If port 8080 doesn't immediately serve the webpage, it might still be loading. Wait a bit longer, and the page should appear.

Windows Subsystem for Linux

  1. If there is a problem with "com.provectus.kafka.ui.service.KafkaConfigSanitizerTest", please check whether the Docker engine is running.

  2. Use sudo before every command if you are not running WSL as the root user.

Part II: Existing Tests

2.1 Existing Testing Frameworks

The project primarily uses JUnit 5 (<junit.version>5.9.1</junit.version>) as its core testing framework, complemented by additional tools that enhance its testing capabilities:

  • Mockito (<mockito.version>5.3.1</mockito.version>) is employed for mocking dependencies in unit tests, keeping them simple and reliable.
  • Testcontainers (<testcontainers.version>1.17.5</testcontainers.version>) provides a reusable testing environment that closely simulates production settings, enhancing the accuracy of integration tests.
  • Maven Surefire Plugin (<maven-surefire-plugin.version>3.1.2</maven-surefire-plugin.version>) automates the testing workflow, including test discovery and execution, and generates detailed reports.

Together, these testing frameworks and tools boost the efficiency and quality of testing in the Java project and help the development team maintain high code-quality standards through simplified unit testing, accurate integration testing, and streamlined test management.
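As an illustration of how these tools fit together, the sketch below shows a generic JUnit 5 unit test that mocks a dependency with Mockito. The service and repository here are hypothetical and are not taken from the Kafka UI codebase; they only demonstrate the testing style the project relies on.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class GreetingServiceTest {

  // Hypothetical dependency that would normally talk to an external system.
  interface NameRepository {
    String findName(int id);
  }

  // Hypothetical class under test.
  static class GreetingService {
    private final NameRepository repo;
    GreetingService(NameRepository repo) { this.repo = repo; }
    String greet(int id) { return "Hello, " + repo.findName(id); }
  }

  @Test
  void greetUsesTheMockedRepository() {
    NameRepository repo = mock(NameRepository.class);
    when(repo.findName(1)).thenReturn("Kafka");

    assertEquals("Hello, Kafka", new GreetingService(repo).greet(1));
  }
}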

2.2 Existing Testing Practices

The test files are mainly stored under the following path:

kafka-ui/kafka-ui-api/src/test

The following description covers the main testing categories found in the project's test files.

Integration Testing

  • AbstractIntegrationTest: These tests ensure the integration between the frontend React application and the Kafka backend. They validate that API calls for fetching Kafka topics, consumer groups, or managing message production and consumption are executed flawlessly. Additionally, they confirm the successful integration with other services like schema registries and Kafka Connect.
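These integration tests benefit from the Testcontainers setup described in section 2.1. The sketch below is a generic illustration rather than the project's actual base class; it shows how Testcontainers can start a disposable Kafka broker in Docker for a test to run against (the image tag is only an example):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

class DisposableKafkaSketchTest {

  @Test
  void startsAThrowawayKafkaBroker() {
    // Testcontainers pulls the image and starts a broker just for this test.
    try (KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.3.3"))) {
      kafka.start();
      // The bootstrap address would then be handed to the code under test.
      assertNotNull(kafka.getBootstrapServers());
    }
  }
}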

Service Testing

  • BrokerServiceTest, TopicsServicePaginationTest: These tests examine the business logic for interacting with Kafka brokers and topics. They confirm the application's capabilities to list, create, delete, and paginate through topics effectively. Furthermore, they guarantee accurate management of consumer groups and their offsets, alongside correct implementation of custom logic for Kafka's administrative functions.

Serde Testing

  • Int32SerdeTest, AvroEmbeddedSerdeTest: Serde tests ensure that messages are properly serialized for production into Kafka topics and deserialized upon consumption. They cover testing for various data formats, including Avro, Protobuf, integers, and strings, to ensure seamless compatibility and accuracy across Kafka clients and services.

Controller Testing

  • ApplicationConfigControllerTest: These tests rigorously validate the HTTP interfaces provided by the backend, ensuring that REST API endpoints respond precisely as intended. They confirm that requests to these endpoints yield the correct status codes, headers, and body content, and accurately process request parameters for operations like fetching or updating configuration details.
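To give a flavour of this style of test, the sketch below checks a hypothetical REST endpoint with Spring's WebTestClient. The controller here is an illustrative stand-in, not the project's actual ApplicationConfigController:

import org.junit.jupiter.api.Test;
import org.springframework.test.web.reactive.server.WebTestClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

class PingControllerTest {

  // Hypothetical endpoint used only for this illustration.
  @RestController
  static class PingController {
    @GetMapping("/api/ping")
    String ping() {
      return "pong";
    }
  }

  @Test
  void pingReturnsOkWithExpectedBody() {
    WebTestClient client = WebTestClient.bindToController(new PingController()).build();

    client.get().uri("/api/ping")
        .exchange()
        .expectStatus().isOk()
        .expectBody(String.class).isEqualTo("pong");
  }
}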

Utility Class Testing

  • DynamicConfigOperationsTest: These tests verify the functionality of utility classes that provide essential support, such as dynamic configuration management, polling mechanisms, and failover strategies. They ensure these utilities operate consistently across various scenarios, underpinning the application's operational reliability.

2.3 Running Environment

To ensure smooth execution of the tests, please make sure the following prerequisites are met:

  • Java 17 or newer must be installed.
  • Git must be installed.
  • Docker is optional but recommended for those who prefer running tests within containers.

If opting to use Docker, the backend will be containerized based on a Docker image. To set up the backend environment, execute the following shell command:

./mvnw clean install -Dmaven.test.skip=true -Pprod

This command builds the project and skips the test phase; it is primarily used for production builds. For testing purposes, especially when you want to execute all the test cases, use the following Maven command:

mvn clean test

This command cleans the target directory, compiles the source code, and runs all tests in the src/test/java directory against the compiled code.
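If only a single test class needs to run, for example while debugging one failure, Maven Surefire's test filter can be used; the class name below is just an example:

mvn test -Dtest=Int32SerdeTest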

2.4 Managing and Analyzing Test Reports

When faced with numerous report files, we can employ text search tools to locate files containing specific keywords, such as "ERROR". This approach facilitates quick identification of problematic tests.

For Linux and macOS, the grep command can be utilized:

grep -i "ERROR" /path/to/kafka-ui/kafka-ui-api/target/surefire-reports/*.txt

Viewing and Addressing Failed Tests:

  • Review Detailed Test Reports: For each failed test, the Surefire plugin generates a detailed report file within the target/surefire-reports directory for the corresponding test class. It's important to locate the .txt report files associated with the failed tests identified, such as com.provectus.kafka.ui.KafkaConsumerGroupTests.txt and com.provectus.kafka.ui.emitter.MessageFiltersTest$GroovyScriptFilter.txt. These report files contain the names of the failed test methods, reasons for the failures, and detailed stack traces. This information is crucial for pinpointing the source of the problem and facilitating effective troubleshooting.

Part III: Functional Testing and Partition Testing

3.1 Functional Testing

Functional testing is a type of software testing that validates the software system against its functional requirements and specifications. The purpose of functional tests is to test each function of the software application by providing appropriate input and verifying the output against the functional requirements.

Functional testing mainly involves black-box testing and is not concerned with the source code of the application. It checks the user interface, APIs, database, security, client/server communication, and other functionality of the application under test. The testing can be done either manually or using automation.

Purpose of functional testing

  • Ensuring Correct Functionality

Systematic functional testing is essential to ensure that the software functions correctly according to its specifications and requirements. This helps in delivering a product that meets user expectations.

  • Comprehensive Coverage

Systematic functional testing aims to provide comprehensive coverage of the entire application. This includes testing individual units, ensuring proper integration, and validating the system as a whole.

  • Building User Confidence

Thorough functional testing builds confidence among users and stakeholders, assuring them that the software not only works but works reliably under various conditions.

3.2 Partition Testing

Partition testing is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed. An advantage of this approach is a reduction in the time required for testing, due to the smaller number of test cases.

Purpose of partition testing

  • Efficient Testing

Partition testing allows for efficient testing by dividing the input space into equivalence classes. This reduces the number of test cases needed while ensuring that each class is adequately represented.

  • Identifying Critical Input Scenarios

Not all possible inputs need to be tested individually. Partition testing helps identify critical input scenarios, allowing testers to focus on representative values that are likely to reveal defects.

  • Identifying Defects Associated with Input Classes

Defects often cluster around specific input classes. Partition testing aids in identifying and addressing issues associated with different input partitions, contributing to improved software reliability.

The feature chosen for partition testing

We chose the class Int32Serde, implemented at "kafka-ui-api/src/main/java/com/provectus/kafka/ui/serdes/builtin/Int32Serde.java", which handles the serialization and deserialization of 32-bit integers. It implements the interface BuiltInSerde at "kafka-ui-api/src/main/java/com/provectus/kafka/ui/serdes/BuiltInSerde.java", which in turn extends the interface Serde at "kafka-ui-serde-api/src/main/java/com/provectus/kafka/ui/serde/api/Serde.java".

New partition tests

The input space of this feature is 32-bit integers represented as strings. We partitioned the input space into four parts based on the nature of the integers: zero, positive integers [1, 2147483647], negative integers [-2147483648, -1], and invalid input (all integers outside the range [-2147483648, 2147483647] and all strings that do not represent an integer).

We chose "1234" and "2147483647" to represent the positive partition, "-2147483648" to represent the negative partition, and "null" and "2147483648" to represent invalid input. We included both boundary values and other representative values to cover all partitions and to make sure the feature is stable at the boundaries. For valid input, the value should be unchanged after serialization and deserialization; for invalid input, an exception should be thrown.

The new test cases and their documentation are included in "kafka-ui-api/src/test/java/com/provectus/kafka/ui/serdes/builtin/Int32SerdeTest.java". To run all JUnit tests, use the command "mvn clean test".
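The sketch below illustrates how these partitions map onto parameterized JUnit 5 test cases. It is a simplified, self-contained illustration rather than the project's actual Int32SerdeTest: the roundTrip helper is hypothetical and stands in for serializing a string with Int32Serde and deserializing the resulting bytes back.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.nio.ByteBuffer;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class Int32SerdePartitionSketch {

  // Valid partitions: zero, positive (including the upper boundary),
  // and negative (including the lower boundary).
  @ParameterizedTest
  @ValueSource(strings = {"0", "1234", "2147483647", "-2147483648"})
  void validInputSurvivesRoundTrip(String input) {
    assertEquals(input, roundTrip(input));
  }

  // Invalid partition: a non-numeric string and a value outside the 32-bit range.
  @ParameterizedTest
  @ValueSource(strings = {"null", "2147483648"})
  void invalidInputThrows(String input) {
    assertThrows(NumberFormatException.class, () -> roundTrip(input));
  }

  // Hypothetical helper: in the real test this would go through Int32Serde's
  // serializer and deserializer for a topic instead of using ByteBuffer directly.
  private String roundTrip(String value) {
    byte[] bytes = ByteBuffer.allocate(4).putInt(Integer.parseInt(value)).array();
    return String.valueOf(ByteBuffer.wrap(bytes).getInt());
  }
}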