Commit 9a118de

chore: rename to raystack (#217)

1 parent: e4e5977

383 files changed: +2223 -2220 lines

.github/ISSUE_TEMPLATE/config.yml (-8)

This file was deleted.

.github/workflows/package.yml (+2 -2)

```diff
@@ -45,5 +45,5 @@ jobs:
          context: .
          push: true
          tags: |
-           odpf/firehose:latest
-           odpf/firehose:${{ steps.get_version.outputs.version-without-v }}
+           raystack/firehose:latest
+           raystack/firehose:${{ steps.get_version.outputs.version-without-v }}
```

Dockerfile (+1 -1)

```diff
@@ -10,4 +10,4 @@ COPY --from=GRADLE_BUILD ./jolokia-jvm-agent.jar /opt/firehose
 COPY --from=GRADLE_BUILD ./src/main/resources/log4j.xml /opt/firehose/etc/log4j.xml
 COPY --from=GRADLE_BUILD ./src/main/resources/logback.xml /opt/firehose/etc/logback.xml
 WORKDIR /opt/firehose
-CMD ["java", "-cp", "bin/*:/work-dir/*", "io.odpf.firehose.launch.Main", "-server", "-Dlogback.configurationFile=etc/firehose/logback.xml", "-Xloggc:/var/log/firehose"]
+CMD ["java", "-cp", "bin/*:/work-dir/*", "org.raystack.firehose.launch.Main", "-server", "-Dlogback.configurationFile=etc/firehose/logback.xml", "-Xloggc:/var/log/firehose"]
```
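The same `io.odpf` to `org.raystack` rewrite recurs across all 383 files in this commit. A minimal sketch of the two rewrite rules that would produce these diffs (hypothetical; not the actual tooling the maintainers used):

```python
import re

# Hypothetical rewrite rules behind the bulk rename in this commit:
# Java packages io.odpf -> org.raystack, and GitHub/Docker Hub paths
# odpf/ -> raystack/. Not the maintainers' actual rename script.
RULES = [
    (re.compile(r"\bio\.odpf\b"), "org.raystack"),
    (re.compile(r"\bodpf/"), "raystack/"),
]

def rename(text: str) -> str:
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

line = 'CMD ["java", "-cp", "bin/*:/work-dir/*", "io.odpf.firehose.launch.Main"]'
print(rename(line))
# CMD ["java", "-cp", "bin/*:/work-dir/*", "org.raystack.firehose.launch.Main"]
print(rename("docker pull odpf/firehose:latest"))
# docker pull raystack/firehose:latest
```

Applying both rules in order is safe here because the first rule removes every `io.odpf` occurrence before the path rule runs, so the two substitutions never overlap.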

README.md (+11 -11)

````diff
@@ -1,9 +1,9 @@
 # Firehose

-![build workflow](https://github.com/odpf/firehose/actions/workflows/build.yml/badge.svg)
-![package workflow](https://github.com/odpf/firehose/actions/workflows/package.yml/badge.svg)
+![build workflow](https://github.com/raystack/firehose/actions/workflows/build.yml/badge.svg)
+![package workflow](https://github.com/raystack/firehose/actions/workflows/package.yml/badge.svg)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?logo=apache)](LICENSE)
-[![Version](https://img.shields.io/github/v/release/odpf/firehose?logo=semantic-release)](Version)
+[![Version](https://img.shields.io/github/v/release/raystack/firehose?logo=semantic-release)](Version)

 Firehose is a cloud native service for delivering real-time streaming data to destinations such as service endpoints (HTTP or GRPC) & managed databases (Postgres, InfluxDB, Redis, Elasticsearch, Prometheus and MongoDB). With Firehose, you don't need to write applications or manage resources. It can be scaled up to match the throughput of your data. If your data is present in Kafka, Firehose delivers it to the destination(SINK) that you specified.

@@ -25,7 +25,7 @@ Discover why users choose Firehose as their main Kafka Consumer
 - Elasticsearch
 - Redis
 - Bigquery
-- BigTable
+- BigTable
 - Blob Storage/Object Storage :
 - Google Cloud Storage

@@ -47,28 +47,28 @@ Explore the following resources to get started with Firehose:

 ## Run with Docker

-Use the docker hub to download firehose [docker image](https://hub.docker.com/r/odpf/firehose/). You need to have docker installed in your system.
+Use the docker hub to download firehose [docker image](https://hub.docker.com/r/raystack/firehose/). You need to have docker installed in your system.

 ```
 # Download docker image from docker hub
-$ docker pull odpf/firehose
+$ docker pull raystack/firehose

 # Run the following docker command for a simple log sink.
-$ docker run -e SOURCE_KAFKA_BROKERS=127.0.0.1:6667 -e SOURCE_KAFKA_CONSUMER_GROUP_ID=kafka-consumer-group-id -e SOURCE_KAFKA_TOPIC=sample-topic -e SINK_TYPE=log -e SOURCE_KAFKA_CONSUMER_CONFIG_AUTO_OFFSET_RESET=latest -e INPUT_SCHEMA_PROTO_CLASS=com.github.firehose.sampleLogProto.SampleLogMessage -e SCHEMA_REGISTRY_STENCIL_ENABLE=true -e SCHEMA_REGISTRY_STENCIL_URLS=http://localhost:9000/artifactory/proto-descriptors/latest odpf/firehose:latest
+$ docker run -e SOURCE_KAFKA_BROKERS=127.0.0.1:6667 -e SOURCE_KAFKA_CONSUMER_GROUP_ID=kafka-consumer-group-id -e SOURCE_KAFKA_TOPIC=sample-topic -e SINK_TYPE=log -e SOURCE_KAFKA_CONSUMER_CONFIG_AUTO_OFFSET_RESET=latest -e INPUT_SCHEMA_PROTO_CLASS=com.github.firehose.sampleLogProto.SampleLogMessage -e SCHEMA_REGISTRY_STENCIL_ENABLE=true -e SCHEMA_REGISTRY_STENCIL_URLS=http://localhost:9000/artifactory/proto-descriptors/latest raystack/firehose:latest
 ```

 **Note:** Make sure your protos (.jar file) are located in `work-dir`, this is required for Filter functionality to work.

 ## Run with Kubernetes

-- Create a firehose deployment using the helm chart available [here](https://github.com/odpf/charts/tree/main/stable/firehose)
+- Create a firehose deployment using the helm chart available [here](https://github.com/raystack/charts/tree/main/stable/firehose)
 - Deployment also includes telegraf container which pushes stats metrics

 ## Running locally

 ```sh
 # Clone the repo
-$ git clone https://github.com/odpf/firehose.git
+$ git clone https://github.com/raystack/firehose.git

 # Build the jar
 $ ./gradlew clean build
@@ -101,11 +101,11 @@ Development of Firehose happens in the open on GitHub, and we are grateful to th

 Read our [contributing guide](docs/docs/contribute/contribution.md) to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to Firehose.

-To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/odpf/firehose/labels/good%20first%20issue) that contain bugs which have a relatively limited scope. This is a great place to get started.
+To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/raystack/firehose/labels/good%20first%20issue) that contain bugs which have a relatively limited scope. This is a great place to get started.

 ## Credits

-This project exists thanks to all the [contributors](https://github.com/odpf/firehose/graphs/contributors).
+This project exists thanks to all the [contributors](https://github.com/raystack/firehose/graphs/contributors).

 ## License
````
build.gradle (+12 -12)

```diff
@@ -32,15 +32,16 @@ lombok {
     sha256 = ""
 }

-group 'io.odpf'
-version '0.7.4'
+group 'org.raystack'
+version '0.8.0'

 def projName = "firehose"

 sourceCompatibility = JavaVersion.VERSION_1_8
 targetCompatibility = JavaVersion.VERSION_1_8

 repositories {
+    mavenLocal()
     mavenCentral()
     jcenter()
     maven {
@@ -54,7 +55,7 @@ private Properties loadEnv() {
     properties
 }

-def mainClassName = "io.odpf.firehose.launch.Main"
+def mainClassName = "org.raystack.firehose.launch.Main"

 dependencies {
     implementation group: 'com.google.protobuf', name: 'protobuf-java', version: '3.1.0'
@@ -71,7 +72,7 @@ dependencies {
     implementation group: 'org.apache.commons', name: 'commons-jexl', version: '2.1'
     implementation group: 'org.apache.commons', name: 'commons-lang3', version: '3.5'
     implementation group: 'com.google.code.gson', name: 'gson', version: '2.7'
-    implementation group: 'io.odpf', name: 'stencil', version: '0.2.1' exclude group: 'org.slf4j'
+    implementation group: 'org.raystack', name: 'stencil', version: '0.4.0' exclude group: 'org.slf4j'
     implementation group: 'software.amazon.awssdk', name: 's3', version: '2.17.129'
     implementation group: 'org.influxdb', name: 'influxdb-java', version: '2.5'
     implementation group: 'com.jayway.jsonpath', name: 'json-path', version: '2.4.0'
@@ -85,7 +86,7 @@ dependencies {
         exclude group: "log4j", module: "log4j"
     }
     implementation 'io.confluent:monitoring-interceptors:3.0.0'
-    implementation "io.grpc:grpc-all:1.38.0"
+    implementation 'io.grpc:grpc-all:1.53.0'
     implementation group: 'org.jfrog.buildinfo', name: 'build-info-extractor', version: '2.6.3'
     implementation group: 'com.google.gradle', name: 'osdetector-gradle-plugin', version: '1.2.1'
     implementation group: 'org.apache.ivy', name: 'ivy', version: '2.2.0'
@@ -98,10 +99,9 @@ dependencies {
     implementation 'com.gojek.parquet:parquet-hadoop:1.11.9'
     implementation group: 'com.github.os72', name: 'protobuf-dynamic', version: '1.0.1'
     implementation platform('com.google.cloud:libraries-bom:20.5.0')
-    implementation 'com.google.cloud:google-cloud-storage:1.114.0'
-    implementation 'com.google.cloud:google-cloud-bigquery:1.115.0'
-    implementation 'org.apache.logging.log4j:log4j-core:2.17.1'
-    implementation group: 'io.odpf', name: 'depot', version: '0.3.8'
+    implementation 'com.google.cloud:google-cloud-storage:2.20.1'
+    implementation 'org.apache.logging.log4j:log4j-core:2.20.0'
+    implementation group: 'org.raystack', name: 'depot', version: '0.4.0'
     implementation group: 'com.networknt', name: 'json-schema-validator', version: '1.0.59' exclude group: 'org.slf4j'

     testImplementation group: 'junit', name: 'junit', version: '4.11'
@@ -146,7 +146,7 @@ test {
         events "passed", "skipped", "failed"
     }
     useJUnit {
-        excludeCategories 'io.odpf.firehose.test.categories.IntegrationTest'
+        excludeCategories 'org.raystack.firehose.test.categories.IntegrationTest'
     }
     doLast {
         delete "$projectDir/src/test/resources/__files"
@@ -158,7 +158,7 @@ clean {
 }
 jar {
     manifest {
-        attributes 'Main-Class': 'io.odpf.firehose.launch.Main'
+        attributes 'Main-Class': 'org.raystack.firehose.launch.Main'
         duplicatesStrategy = 'exclude'
         zip64 = true
     }
@@ -181,7 +181,7 @@ publishing {
     repositories {
         maven {
             name = "GitHubPackages"
-            url = "https://maven.pkg.github.com/odpf/firehose"
+            url = "https://maven.pkg.github.com/raystack/firehose"
             credentials {
                 username = System.getenv("GITHUB_ACTOR")
                 password = System.getenv("GITHUB_TOKEN")
```

docs/docs/advance/filters.md (+14 -15)

```diff
@@ -6,43 +6,42 @@ Following variables need to be set to enable JSON/JEXL filters.

 Defines whether to use `JSON` Schema-based filters or `JEXL`-based filters or `NO_OP` \(i.e. no filtering\)

-* Example value: `JSON`
-* Type: `optional`
-* Default value`: NO_OP`
+- Example value: `JSON`
+- Type: `optional`
+- Default value`: NO_OP`

 ## `FILTER_JSON_ESB_MESSAGE_TYPE`

 Defines the format type of the input ESB messages, i.e. JSON/Protobuf. This field is required only for JSON filters.

-* Example value: `JSON`
-* Type: `optional`
+- Example value: `JSON`
+- Type: `optional`

 ## `FILTER_SCHEMA_PROTO_CLASS`

 The fully qualified name of the proto schema so that the key/message in Kafka could be parsed.

-* Example value: `com.gojek.esb.driverlocation.DriverLocationLogKey`
-* Type: `optional`
+- Example value: `com.raystack.esb.driverlocation.DriverLocationLogKey`
+- Type: `optional`

 ## `FILTER_DATA_SOURCE`

 `key`/`message`/`none`depending on where to apply filter

-* Example value: `key`
-* Type: `optional`
-* Default value`: none`
+- Example value: `key`
+- Type: `optional`
+- Default value`: none`

 ## `FILTER_JEXL_EXPRESSION`

 JEXL filter expression

-* Example value: `driverLocationLogKey.getVehicleType()=="BIKE"`
-* Type: `optional`
+- Example value: `driverLocationLogKey.getVehicleType()=="BIKE"`
+- Type: `optional`

 ## `FILTER_JSON_SCHEMA`

 JSON Schema string containing the filter rules to be applied.

-* Example value: `{"properties":{"order_number":{"const":"1253"}}}`
-* Type: `optional`
-
+- Example value: `{"properties":{"order_number":{"const":"1253"}}}`
+- Type: `optional`
```
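The `FILTER_JSON_SCHEMA` example in the diff above matches messages whose `order_number` equals `"1253"`. A hand-rolled Python sketch of that `const` matching rule, to make the semantics concrete (this is not Firehose's actual filter engine, which uses a full JSON Schema validator and also decides whether matching messages are kept or dropped):

```python
import json

# Minimal check for the subset of JSON Schema used in the example:
# {"properties": {"order_number": {"const": "1253"}}}
# A message matches only if every property with a "const" rule is equal to it.
def matches_schema(schema: dict, message: str) -> bool:
    data = json.loads(message)
    for field, rule in schema.get("properties", {}).items():
        if "const" in rule and data.get(field) != rule["const"]:
            return False
    return True

schema = {"properties": {"order_number": {"const": "1253"}}}
print(matches_schema(schema, '{"order_number": "1253", "driver_id": "42"}'))  # True
print(matches_schema(schema, '{"order_number": "9999"}'))                     # False
```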

docs/docs/concepts/architecture.md (+5 -5)

```diff
@@ -40,7 +40,7 @@ _**Sink**_
 - All the existing sink types follow the same contract/lifecycle defined in `AbstractSink.java`. It consists of two stages:
   - Prepare: Transformation over-filtered messages’ list to prepare the sink-specific insert/update client requests.
   - Execute: Requests created in the Prepare stage are executed at this step and a list of failed messages is returned \(if any\) for retry.
-- Underlying implementation of AbstractSink can use implementation present in [depot](https://github.com/odpf/depot).
+- Underlying implementation of AbstractSink can use implementation present in [depot](https://github.com/raystack/depot).
 - If the batch has any failures, Firehose will retry to push the failed messages to the sink

 _**SinkPool**_
@@ -71,7 +71,9 @@ The final state of message can be any one of the followings after it is consumed
 One can monitor via plotting the metrics related to messages.

 ### Schema Handling
+
 - Incase when `INPUT_SCHEMA_DATA_TYPE is set to protobuf`
+
   - Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. Data streams on Kafka topics are bound to a protobuf schema.
   - Firehose deserializes the data consumed from the topics using the Protobuf descriptors generated out of the artifacts. The artifactory is an HTTP interface that Firehose uses to deserialize.
   - The schema handling ie., find the mapped schema for the topic, downloading the descriptors, and dynamically being notified of/updating with the latest schema is abstracted through the Stencil library.
@@ -81,10 +83,8 @@ One can monitor via plotting the metrics related to messages.
   Schema Caching, dynamic schema updates, etc. are features of the stencil client library.

 - Incase when `INPUT_SCHEMA_DATA_TYPE is set to json`
-  - Currently this config is only supported in Bigquery sink,
-  - For json, in bigquery sink the schema is dynamically inferred from incoming data, in future we plan to provide json schema support via stencil.
-
-
+  - Currently this config is only supported in Bigquery sink,
+  - For json, in bigquery sink the schema is dynamically inferred from incoming data, in future we plan to provide json schema support via stencil.

 ## Firehose Integration
```

docs/docs/concepts/monitoring.md (+3 -3)

````diff
@@ -71,11 +71,11 @@ Lastly, set up Telegraf to send metrics to InfluxDB, following the corresponding

 #### Firehose deployed on Kubernetes _\*\*_

-1. Follow[ this guide](https://github.com/odpf/charts/tree/main/stable/firehose#readme) for deploying Firehose on a Kubernetes cluster using a Helm chart.
-2. Configure the following parameters in the default [values.yaml](https://github.com/odpf/charts/blob/main/stable/firehose/values.yaml) file and run -
+1. Follow[ this guide](https://github.com/raystack/charts/tree/main/stable/firehose#readme) for deploying Firehose on a Kubernetes cluster using a Helm chart.
+2. Configure the following parameters in the default [values.yaml](https://github.com/raystack/charts/blob/main/stable/firehose/values.yaml) file and run -

 ```text
-$ helm install my-release -f values.yaml odpf/firehose
+$ helm install my-release -f values.yaml raystack/firehose
 ```

 | Key | Type | Default | Description |
````

docs/docs/concepts/overview.md (+6 -7)

```diff
@@ -10,7 +10,7 @@ scale. This section explains the overall architecture of Firehose and describes
 ## [Monitoring Firehose with exposed metrics](monitoring.md)

 Always know what’s going on with your deployment with
-built-in [monitoring](https://github.com/odpf/firehose/blob/main/docs/assets/firehose-grafana-dashboard.json) of
+built-in [monitoring](https://github.com/raystack/firehose/blob/main/docs/assets/firehose-grafana-dashboard.json) of
 throughput, response times, errors and more. This section contains guides, best practices and advises related to
 managing Firehose in production.

@@ -27,18 +27,17 @@ Firehose provides various templating features

 Decorators are used for chained processing of messages.

-* SinkWithFailHandler
-* SinkWithRetry
-* SinkWithDlq
-* SinkFinal
+- SinkWithFailHandler
+- SinkWithRetry
+- SinkWithDlq
+- SinkFinal

 ## [FirehoseConsumer](consumer.md)
+
 A firehose consumer read messages from kafka, pushes those messages to sink and commits offsets back to kafka based on certain strategies.

 ## [Offsets](offsets.md)

 Offset manager is a data structure used to manage offsets asynchronously. An offset should only be committed when a
 message is processed fully. Offset manager maintains a state of all the offsets of all topic-partitions, that can be
 committed. It can also be used by sinks to manage its own offsets.
-
-
```
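The offset-manager idea described in the overview diff above (commit an offset only once it and every earlier offset are fully processed) can be sketched in a few lines. This is a hypothetical Python model for illustration, not Firehose's actual `OffsetManager`:

```python
# Hypothetical sketch of the offset-manager concept: track in-flight offsets
# per topic-partition, and expose a committable offset only when it and all
# earlier tracked offsets have been fully processed.
class OffsetManager:
    def __init__(self):
        self.tracked = {}   # (topic, partition) -> set of tracked offsets
        self.done = {}      # (topic, partition) -> set of processed offsets

    def add(self, topic, partition, offset):
        self.tracked.setdefault((topic, partition), set()).add(offset)

    def mark_processed(self, topic, partition, offset):
        self.done.setdefault((topic, partition), set()).add(offset)

    def committable(self, topic, partition):
        """Highest offset such that it and all earlier offsets are processed."""
        key = (topic, partition)
        committable = None
        for offset in sorted(self.tracked.get(key, ())):
            if offset not in self.done.get(key, ()):
                break  # a gap: nothing beyond this point may be committed
            committable = offset
        return committable

mgr = OffsetManager()
for off in (0, 1, 2):
    mgr.add("sample-topic", 0, off)
mgr.mark_processed("sample-topic", 0, 0)
mgr.mark_processed("sample-topic", 0, 2)   # offset 1 still in flight
print(mgr.committable("sample-topic", 0))  # 0
mgr.mark_processed("sample-topic", 0, 1)
print(mgr.committable("sample-topic", 0))  # 2
```

The gap check is the key design point: committing offset 2 while offset 1 is still in flight would silently drop message 1 if the consumer restarted.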

docs/docs/contribute/contribution.md (+5 -5)

```diff
@@ -3,8 +3,8 @@
 The following is a set of guidelines for contributing to Firehose. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. Here are some important resources:

 - The [Concepts](../guides/create_firehose.md) section will explain to you about Firehose architecture,
-- Our [roadmap](https://github.com/odpf/firehose/blob/main/docs/roadmap.md) is the 10k foot view of where we're going, and
-- Github [issues](https://github.com/odpf/firehose/issues) track the ongoing and reported issues.
+- Our [roadmap](https://github.com/raystack/firehose/blob/main/docs/roadmap.md) is the 10k foot view of where we're going, and
+- Github [issues](https://github.com/raystack/firehose/issues) track the ongoing and reported issues.

 Development of Firehose happens in the open on GitHub, and we are grateful to the community for contributing bug fixes and improvements. Read below to learn how you can take part in improving Firehose.

@@ -23,14 +23,14 @@ The following parts are open for contribution:
 - Provide suggestions to make the user experience better
 - Provide suggestions to Improve the documentation

-To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/odpf/firehose/labels/good%20first%20issue) that contain bugs that have a relatively limited scope. This is a great place to get started.
+To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/raystack/firehose/labels/good%20first%20issue) that contain bugs that have a relatively limited scope. This is a great place to get started.

 ## How can I contribute?

 We use RFCs and GitHub issues to communicate ideas.

 - You can report a bug or suggest a feature enhancement or can just ask questions. Reach out on Github discussions for this purpose.
-- You are also welcome to add a new common sink in [depot](https://github.com/odpf/depot), improve monitoring and logging and improve code quality.
+- You are also welcome to add a new common sink in [depot](https://github.com/raystack/depot), improve monitoring and logging and improve code quality.
 - You can help with documenting new features or improve existing documentation.
 - You can also review and accept other contributions if you are a maintainer.

@@ -53,4 +53,4 @@ Please follow these practices for your change to get merged fast and smoothly:
 - If you are introducing a completely new feature or making any major changes to an existing one, we recommend starting with an RFC and get consensus on the basic design first.
 - Make sure your local build is running with all the tests and checkstyle passing.
 - If your change is related to user-facing protocols/configurations, you need to make the corresponding change in the documentation as well.
-- Docs live in the code repo under [`docs`](https://github.com/odpf/firehose/tree/7d0df99962507e6ad2147837c4536f36d52d5a48/docs/docs/README.md) so that changes to that can be done in the same PR as changes to the code.
+- Docs live in the code repo under [`docs`](https://github.com/raystack/firehose/tree/main/docs/docs/README.md) so that changes to that can be done in the same PR as changes to the code.
```
