
Commit d9f014f

Revert "feat: redis sink using depot (#193)" (#204)

This reverts commit 01de086.

1 parent: 4237ead


48 files changed: +2,760 -35 lines

build.gradle (+1 -1)

```diff
@@ -101,7 +101,7 @@ dependencies {
     implementation 'com.google.cloud:google-cloud-storage:1.114.0'
     implementation 'com.google.cloud:google-cloud-bigquery:1.115.0'
     implementation 'org.apache.logging.log4j:log4j-core:2.17.1'
-    implementation group: 'io.odpf', name: 'depot', version: '0.3.3'
+    implementation group: 'io.odpf', name: 'depot', version: '0.2.1'
     implementation group: 'com.networknt', name: 'json-schema-validator', version: '1.0.59' exclude group: 'org.slf4j'

     testImplementation group: 'junit', name: 'junit', version: '4.11'
```

docs/docs/sinks/redis-sink.md (+72 -13)
```diff
@@ -1,21 +1,80 @@
-# Redis Sink
+# Redis
 
-Redis Sink is implemented in Firehose using the Redis sink connector implementation in ODPF Depot. You can check out the ODPF Depot GitHub repository [here](https://github.com/odpf/depot).
+Redis sink in Firehose (`SINK_TYPE`=`redis`) requires the following variables to be set along with the generic ones.
 
-### Data Types
-Redis sink can be created in 3 different modes based on the value of [`SINK_REDIS_DATA_TYPE`](https://github.com/odpf/depot/blob/main/docs/reference/configuration/redis.md#sink_redis_data_type): HashSet, KeyValue or List.
-- `Hashset`: For each message, an entry of the format `key : field : value` is generated and pushed to Redis. Field and value are generated on the basis of the config [`SINK_REDIS_HASHSET_FIELD_TO_COLUMN_MAPPING`](https://github.com/odpf/depot/blob/main/docs/reference/configuration/redis.md#sink_redis_hashset_field_to_column_mapping).
-- `List`: For each message, an entry of the format `key : value` is generated and pushed to Redis. The value is fetched from the proto field name provided in the config [`SINK_REDIS_LIST_DATA_FIELD_NAME`](https://github.com/odpf/depot/blob/main/docs/reference/configuration/redis.md#sink_redis_list_data_field_name).
-- `KeyValue`: For each message, an entry of the format `key : value` is generated and pushed to Redis. The value is fetched from the proto field name provided in the config [`SINK_REDIS_KEY_VALUE_DATA_FIELD_NAME`](https://github.com/odpf/depot/blob/main/docs/reference/configuration/redis.md#sink_redis_key_value_data_field_name).
+### `SINK_REDIS_URLS`
 
-The `key` is picked up from a field in the message itself.
+Redis instance hostname/IP address followed by its port.
 
-Limitation: the Depot Redis sink only supports KeyValue, HashSet and List entries as of now.
+- Example value: `localhost:6379,localhost:6380`
+- Type: `required`
 
-### Configuration
+### `SINK_REDIS_DATA_TYPE`
 
-For a Redis sink in Firehose we first need to set `SINK_TYPE`=`redis`. There are some generic configs, common across different sink types, which also need to be set; these are mentioned in [generic.md](../advance/generic.md). Redis-sink-specific configs are documented in the ODPF Depot repository [here](https://github.com/odpf/depot/blob/main/docs/reference/configuration/redis.md).
+Selects whether to push your data as a HashSet or as a List.
 
+- Example value: `Hashset`
+- Type: `required`
+- Default value: `List`
 
-### Deployment Types
-Redis sink, as of now, supports two different deployment types, `Standalone` and `Cluster`. This can be configured in the Depot environment variable `SINK_REDIS_DEPLOYMENT_TYPE`.
+### `SINK_REDIS_KEY_TEMPLATE`
+
+The string that will act as the key for each Redis entry. The key can be a constant, or it can extract a value from each message.
+
+- Example value: `Service\_%%s,1`
+
+  This takes the value with index 1 from the proto and creates the Redis key as per the template.
+
+- Type: `required`
+
+### `INPUT_SCHEMA_PROTO_TO_COLUMN_MAPPING`
+
+Decides what data will be stored in the HashSet for each message.
+
+- Example value: `{"6":"customer_id", "2":"order_num"}`
+- Type: `required (for HashSet)`
+
+### `SINK_REDIS_LIST_DATA_PROTO_INDEX`
+
+Decides what data will be stored in the List for each message.
+
+- Example value: `6`
+
+  This gets the value of the field with index 6 in your proto and pushes it to the Redis List under the corresponding key template.
+
+- Type: `required (for List)`
+
+### `SINK_REDIS_KEY_VALUE_DATA_PROTO_INDEX`
+
+Decides what data will be stored in the value part of the key-value pair.
+
+- Example value: `6`
+
+  This gets the value of the field with index 6 in your proto and pushes it to Redis as the value under the corresponding key template.
+
+- Type: `required (for KeyValue)`
+
+### `SINK_REDIS_TTL_TYPE`
+
+Choice of Redis TTL type. It can be:
+
+- `DURATION`: seconds after which the key will expire and be removed from Redis
+- `EXACT_TIME`: precise Unix timestamp after which the key will expire
+
+- Example value: `DURATION`
+- Type: `optional`
+- Default value: `DISABLE`
+
+### `SINK_REDIS_TTL_VALUE`
+
+Redis TTL value: a Unix timestamp for the `EXACT_TIME` TTL type, seconds for the `DURATION` TTL type.
+
+- Example value: `100000`
+- Type: `optional`
+- Default value: `0`
+
+### `SINK_REDIS_DEPLOYMENT_TYPE`
+
+The Redis deployment you are using. At present, `Standalone` and `Cluster` types are supported.
+
+- Example value: `Standalone`
+- Type: `required`
+- Default value: `Standalone`
```
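The restored `SINK_REDIS_KEY_TEMPLATE` behaviour (a format string plus a comma-separated proto field index, as in `Service_%s,1`) can be sketched in plain Java. The helper below is hypothetical; it only illustrates the documented template semantics, not Firehose's actual parser.

```java
import java.util.Map;

public class KeyTemplateSketch {
    // Applies a template of the form "<format>,<protoIndex>[,<protoIndex>...]":
    // each listed proto field index is looked up and substituted into the format.
    static String buildKey(String template, Map<Integer, String> protoFields) {
        String[] parts = template.split(",");
        Object[] args = new Object[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            args[i - 1] = protoFields.get(Integer.parseInt(parts[i].trim()));
        }
        return String.format(parts[0], args);
    }

    public static void main(String[] args) {
        // Field index 1 of a hypothetical message holds "booking".
        Map<Integer, String> fields = Map.of(1, "booking", 6, "customer-42");
        System.out.println(buildKey("Service_%s,1", fields)); // Service_booking
    }
}
```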
RedisSinkConfig.java (new file, +43)

```java
package io.odpf.firehose.config;

import io.odpf.firehose.config.converter.RedisSinkDataTypeConverter;
import io.odpf.firehose.config.converter.RedisSinkTtlTypeConverter;
import io.odpf.firehose.config.converter.RedisSinkDeploymentTypeConverter;
import io.odpf.firehose.config.enums.RedisSinkDataType;
import io.odpf.firehose.config.enums.RedisSinkTtlType;
import io.odpf.firehose.config.enums.RedisSinkDeploymentType;

public interface RedisSinkConfig extends AppConfig {
    @Key("SINK_REDIS_URLS")
    String getSinkRedisUrls();

    @Key("SINK_REDIS_KEY_TEMPLATE")
    String getSinkRedisKeyTemplate();

    @Key("SINK_REDIS_DATA_TYPE")
    @DefaultValue("HASHSET")
    @ConverterClass(RedisSinkDataTypeConverter.class)
    RedisSinkDataType getSinkRedisDataType();

    @Key("SINK_REDIS_LIST_DATA_PROTO_INDEX")
    String getSinkRedisListDataProtoIndex();

    @Key("SINK_REDIS_KEY_VALUE_DATA_PROTO_INDEX")
    String getSinkRedisKeyValuetDataProtoIndex();

    @Key("SINK_REDIS_TTL_TYPE")
    @DefaultValue("DISABLE")
    @ConverterClass(RedisSinkTtlTypeConverter.class)
    RedisSinkTtlType getSinkRedisTtlType();

    @Key("SINK_REDIS_TTL_VALUE")
    @DefaultValue("0")
    long getSinkRedisTtlValue();

    @Key("SINK_REDIS_DEPLOYMENT_TYPE")
    @DefaultValue("Standalone")
    @ConverterClass(RedisSinkDeploymentTypeConverter.class)
    RedisSinkDeploymentType getSinkRedisDeploymentType();
}
```
RedisSinkDataTypeConverter.java (new file, +13)

```java
package io.odpf.firehose.config.converter;

import io.odpf.firehose.config.enums.RedisSinkDataType;
import org.aeonbits.owner.Converter;

import java.lang.reflect.Method;

public class RedisSinkDataTypeConverter implements Converter<RedisSinkDataType> {
    @Override
    public RedisSinkDataType convert(Method method, String input) {
        return RedisSinkDataType.valueOf(input.toUpperCase());
    }
}
```
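All three converters share the same one-line pattern: uppercase the raw config string, then delegate to `Enum.valueOf`. A self-contained sketch of that behaviour, using a stand-in copy of the enum (the real converters also need the owner `Converter` interface on the classpath):

```java
public class EnumConverterSketch {
    // Stand-in for io.odpf.firehose.config.enums.RedisSinkDataType.
    enum RedisSinkDataType { LIST, HASHSET, KEYVALUE }

    // Mirrors the converter bodies: case-insensitive lookup, throws
    // IllegalArgumentException for values outside the enum.
    static RedisSinkDataType convert(String input) {
        return RedisSinkDataType.valueOf(input.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(convert("Hashset")); // HASHSET
    }
}
```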
RedisSinkDeploymentTypeConverter.java (new file, +13)

```java
package io.odpf.firehose.config.converter;

import io.odpf.firehose.config.enums.RedisSinkDeploymentType;
import org.aeonbits.owner.Converter;

import java.lang.reflect.Method;

public class RedisSinkDeploymentTypeConverter implements Converter<RedisSinkDeploymentType> {
    @Override
    public RedisSinkDeploymentType convert(Method method, String input) {
        return RedisSinkDeploymentType.valueOf(input.toUpperCase());
    }
}
```
RedisSinkTtlTypeConverter.java (new file, +13)

```java
package io.odpf.firehose.config.converter;

import io.odpf.firehose.config.enums.RedisSinkTtlType;
import org.aeonbits.owner.Converter;

import java.lang.reflect.Method;

public class RedisSinkTtlTypeConverter implements Converter<RedisSinkTtlType> {
    @Override
    public RedisSinkTtlType convert(Method method, String input) {
        return RedisSinkTtlType.valueOf(input.toUpperCase());
    }
}
```
RedisSinkDataType.java (new file, +7)

```java
package io.odpf.firehose.config.enums;

public enum RedisSinkDataType {
    LIST,
    HASHSET,
    KEYVALUE,
}
```
RedisSinkDeploymentType.java (new file, +6)

```java
package io.odpf.firehose.config.enums;

public enum RedisSinkDeploymentType {
    STANDALONE,
    CLUSTER
}
```
RedisSinkTtlType.java (new file, +7)

```java
package io.odpf.firehose.config.enums;

public enum RedisSinkTtlType {
    EXACT_TIME,
    DURATION,
    DISABLE
}
```
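The TTL enum pairs with `SINK_REDIS_TTL_VALUE` as documented above: seconds from now for `DURATION`, an absolute Unix timestamp for `EXACT_TIME`, no expiry for `DISABLE`. The helper below is an illustrative sketch of those semantics, not actual Firehose code:

```java
import java.util.OptionalLong;

public class TtlSketch {
    // Stand-in for io.odpf.firehose.config.enums.RedisSinkTtlType.
    enum RedisSinkTtlType { EXACT_TIME, DURATION, DISABLE }

    // Returns the absolute Unix timestamp (seconds) at which the key should
    // expire, or empty when TTL is disabled.
    static OptionalLong expiryEpochSeconds(RedisSinkTtlType type, long ttlValue, long nowEpochSeconds) {
        switch (type) {
            case DURATION:
                return OptionalLong.of(nowEpochSeconds + ttlValue);
            case EXACT_TIME:
                return OptionalLong.of(ttlValue);
            default:
                return OptionalLong.empty();
        }
    }

    public static void main(String[] args) {
        long now = 1_700_000_000L;
        System.out.println(expiryEpochSeconds(RedisSinkTtlType.DURATION, 100, now)); // OptionalLong[1700000100]
    }
}
```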

src/main/java/io/odpf/firehose/sink/SinkFactory.java (+3 -11)
```diff
@@ -3,12 +3,9 @@
 import io.odpf.depot.bigquery.BigQuerySink;
 import io.odpf.depot.bigquery.BigQuerySinkFactory;
 import io.odpf.depot.config.BigQuerySinkConfig;
-import io.odpf.depot.config.RedisSinkConfig;
 import io.odpf.depot.log.LogSink;
 import io.odpf.depot.log.LogSinkFactory;
 import io.odpf.depot.metrics.StatsDReporter;
-import io.odpf.depot.redis.RedisSink;
-import io.odpf.depot.redis.RedisSinkFactory;
 import io.odpf.firehose.config.KafkaConsumerConfig;
 import io.odpf.firehose.config.enums.SinkType;
 import io.odpf.firehose.consumer.kafka.OffsetManager;
@@ -23,6 +20,7 @@
 import io.odpf.firehose.sink.jdbc.JdbcSinkFactory;
 import io.odpf.firehose.sink.mongodb.MongoSinkFactory;
 import io.odpf.firehose.sink.prometheus.PromSinkFactory;
+import io.odpf.firehose.sink.redis.RedisSinkFactory;
 import io.odpf.stencil.client.StencilClient;
 import org.aeonbits.owner.ConfigFactory;
 
@@ -36,7 +34,6 @@ public class SinkFactory {
     private final OffsetManager offsetManager;
     private BigQuerySinkFactory bigQuerySinkFactory;
     private LogSinkFactory logSinkFactory;
-    private RedisSinkFactory redisSinkFactory;
     private final Map<String, String> config;
 
     public SinkFactory(KafkaConsumerConfig kafkaConsumerConfig,
@@ -60,6 +57,7 @@ public void init() {
             case HTTP:
             case INFLUXDB:
             case ELASTICSEARCH:
+            case REDIS:
             case GRPC:
             case PROMETHEUS:
             case BLOB:
@@ -69,12 +67,6 @@ public void init() {
                 logSinkFactory = new LogSinkFactory(config, statsDReporter);
                 logSinkFactory.init();
                 return;
-            case REDIS:
-                redisSinkFactory = new RedisSinkFactory(
-                        ConfigFactory.create(RedisSinkConfig.class, config),
-                        statsDReporter);
-                redisSinkFactory.init();
-                return;
             case BIGQUERY:
                 BigquerySinkUtils.addMetadataColumns(config);
                 bigQuerySinkFactory = new BigQuerySinkFactory(
@@ -103,7 +95,7 @@ public Sink getSink() {
             case ELASTICSEARCH:
                 return EsSinkFactory.create(config, statsDReporter, stencilClient);
             case REDIS:
-                return new GenericOdpfSink(new FirehoseInstrumentation(statsDReporter, RedisSink.class), sinkType.name(), redisSinkFactory.create());
+                return RedisSinkFactory.create(config, statsDReporter, stencilClient);
             case GRPC:
                 return GrpcSinkFactory.create(config, statsDReporter, stencilClient);
             case PROMETHEUS:
```
RedisSink.java (new file, +57)

```java
package io.odpf.firehose.sink.redis;

import io.odpf.firehose.message.Message;
import io.odpf.firehose.metrics.FirehoseInstrumentation;
import io.odpf.firehose.sink.AbstractSink;
import io.odpf.firehose.sink.redis.client.RedisClient;
import io.odpf.firehose.sink.redis.exception.NoResponseException;

import java.util.List;

/**
 * RedisSink allows messages consumed from Kafka to be persisted to Redis.
 * The related configurations for RedisSink can be found here: {@see io.odpf.firehose.config.RedisSinkConfig}
 */
public class RedisSink extends AbstractSink {

    private RedisClient redisClient;

    /**
     * Instantiates a new Redis sink.
     *
     * @param firehoseInstrumentation the instrumentation
     * @param sinkType                the sink type
     * @param redisClient             the redis client
     */
    public RedisSink(FirehoseInstrumentation firehoseInstrumentation, String sinkType, RedisClient redisClient) {
        super(firehoseInstrumentation, sinkType);
        this.redisClient = redisClient;
    }

    /**
     * Process messages before sending to Redis.
     *
     * @param messages the messages
     */
    @Override
    protected void prepare(List<Message> messages) {
        redisClient.prepare(messages);
    }

    /**
     * Send data to Redis.
     *
     * @return the list of failed messages
     * @throws NoResponseException the no response exception
     */
    @Override
    protected List<Message> execute() throws NoResponseException {
        return redisClient.execute();
    }

    @Override
    public void close() {
        getFirehoseInstrumentation().logInfo("Redis connection closing");
        redisClient.close();
    }
}
```
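`RedisSink` fills in the `prepare`/`execute`/`close` hooks of `AbstractSink`, a template-method pattern in which the base class drives the batch lifecycle and the subclass supplies the Redis-specific steps. A minimal stand-in sketch of that pattern (class and method names here are illustrative, not the Firehose API):

```java
import java.util.ArrayList;
import java.util.List;

public class SinkLifecycleSketch {
    abstract static class AbstractSink {
        // Template method: prepare the batch, then execute and return failures.
        final List<String> pushMessages(List<String> messages) {
            prepare(messages);
            return execute();
        }
        protected abstract void prepare(List<String> messages);
        protected abstract List<String> execute();
    }

    static class RecordingSink extends AbstractSink {
        final List<String> buffer = new ArrayList<>();
        protected void prepare(List<String> messages) { buffer.addAll(messages); }
        protected List<String> execute() { return List.of(); } // no failures
    }

    public static void main(String[] args) {
        RecordingSink sink = new RecordingSink();
        List<String> failed = sink.pushMessages(List.of("m1", "m2"));
        System.out.println(failed.isEmpty() + " " + sink.buffer.size()); // true 2
    }
}
```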
RedisSinkFactory.java (new file, +51)

```java
package io.odpf.firehose.sink.redis;

import io.odpf.depot.metrics.StatsDReporter;
import io.odpf.firehose.config.RedisSinkConfig;
import io.odpf.firehose.metrics.FirehoseInstrumentation;
import io.odpf.firehose.sink.AbstractSink;
import io.odpf.firehose.sink.redis.client.RedisClient;
import io.odpf.firehose.sink.redis.client.RedisClientFactory;
import io.odpf.stencil.client.StencilClient;
import org.aeonbits.owner.ConfigFactory;

import java.util.Map;

/**
 * Factory class to create the RedisSink.
 * <p>
 * Firehose reflectively instantiates this factory using the supplied
 * configuration and invokes {@see #create(Map < String, String > configuration, StatsDReporter statsDReporter, StencilClient client)}
 * to obtain the RedisSink implementation.
 */
public class RedisSinkFactory {

    /**
     * Creates the Redis sink.
     *
     * @param configuration  the configuration
     * @param statsDReporter the StatsD reporter
     * @param stencilClient  the stencil client
     * @return the abstract sink
     */
    public static AbstractSink create(Map<String, String> configuration, StatsDReporter statsDReporter, StencilClient stencilClient) {
        RedisSinkConfig redisSinkConfig = ConfigFactory.create(RedisSinkConfig.class, configuration);
        FirehoseInstrumentation firehoseInstrumentation = new FirehoseInstrumentation(statsDReporter, RedisSinkFactory.class);
        String redisConfig = String.format("\n\tredis.urls = %s\n\tredis.key.template = %s\n\tredis.sink.type = %s"
                        + "\n\tredis.list.data.proto.index = %s\n\tredis.ttl.type = %s\n\tredis.ttl.value = %d",
                redisSinkConfig.getSinkRedisUrls(),
                redisSinkConfig.getSinkRedisKeyTemplate(),
                redisSinkConfig.getSinkRedisDataType().toString(),
                redisSinkConfig.getSinkRedisListDataProtoIndex(),
                redisSinkConfig.getSinkRedisTtlType().toString(),
                redisSinkConfig.getSinkRedisTtlValue());
        firehoseInstrumentation.logDebug(redisConfig);
        firehoseInstrumentation.logInfo("Redis server type = {}", redisSinkConfig.getSinkRedisDeploymentType());

        RedisClientFactory redisClientFactory = new RedisClientFactory(statsDReporter, redisSinkConfig, stencilClient);
        RedisClient client = redisClientFactory.getClient();
        firehoseInstrumentation.logInfo("Connection to redis established successfully");
        return new RedisSink(new FirehoseInstrumentation(statsDReporter, RedisSink.class), "redis", client);
    }
}
```
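`ConfigFactory.create` binds the `Map<String, String>` onto the `RedisSinkConfig` interface via the owner library; conceptually, each annotated getter is a map lookup that falls back to its `@DefaultValue`. A stdlib-only stand-in for that lookup:

```java
import java.util.Map;

public class ConfigBindingSketch {
    // Mimics owner-style binding: read a key from the config map,
    // falling back to the declared default when the key is absent.
    static String get(Map<String, String> config, String key, String defaultValue) {
        return config.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of("SINK_REDIS_URLS", "localhost:6379");
        System.out.println(get(config, "SINK_REDIS_URLS", ""));            // localhost:6379
        System.out.println(get(config, "SINK_REDIS_TTL_TYPE", "DISABLE")); // DISABLE
    }
}
```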
