[metrics] Undo overenthusiastic parts of renaming data -> dimensions -> attributes
Jeremy Boynes committed Jun 12, 2018
1 parent 11418e6 commit 007a1f4
Showing 4 changed files with 20 additions and 20 deletions.
@@ -85,7 +85,7 @@
public abstract class MetricRecorder<R extends MetricRecorder.RecorderContext> {

/**
- * Recorder-specific cpntext for taking measurements. Combines the user-supplied
+ * Recorder-specific context for taking measurements. Combines the user-supplied
* attributes with any additional metadata recorder implementations may require.
*/
public static class RecorderContext {
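For orientation only (nothing in this commit): a rough, standalone sketch of the pattern the RecorderContext comment describes, user-supplied attributes bundled with recorder-specific metadata. The names are hypothetical and do not mirror the real RecorderContext API, whose members are hidden by the collapsed region.

    import java.util.Map;

    // Standalone sketch only: a context object that pairs the user-supplied
    // attributes with extra metadata a recorder implementation might need
    // (here, the time the context was opened). Hypothetical names throughout.
    final class RecorderContextSketch {
        final Map<String, String> attributes;   // user-supplied attributes
        final long openedAtMillis;              // recorder-specific metadata

        RecorderContextSketch(Map<String, String> attributes) {
            this.attributes = attributes;
            this.openedAtMillis = System.currentTimeMillis();
        }
    }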
@@ -29,10 +29,10 @@

/**
* Logic for mapping a metric event and associated context to a set of
- * attributes in CloudWatch.
+ * dimensions in CloudWatch.
*
- * CloudWatch uses attributes as a filtering mechanism for viewing metric
- * events, and requires the client to specify the attributes associated
+ * CloudWatch uses dimensions as a filtering mechanism for viewing metric
+ * events, and requires the client to specify the dimensions associated
* with an event when recording it in CloudWatch. An instance of this class
* contains the desired dimension mappings for an application, to ensure that
* metric events are filtered as desired.
@@ -47,20 +47,20 @@
* does a lookup based on the stored information. Ideally every event recorded
* would send with it all available context information, and a separate system
* (perhaps CloudWatch, perhaps something sitting in front) would allow those
- * consuming the attributes to filter as needed.
+ * consuming the data to filter as needed.
*
* A DimensionMapper should be configured at application startup/configuration,
* like:
* <pre>
* {@code
DimensionMapper.Builder builder = new DimensionMapper.Builder();
- // Some attributes may be required for every metric event
- // If every metric can/should share the same attributes, configuring global
- // attributes may be all that is required.
+ // Some dimensions may be required for every metric event
+ // If every metric can/should share the same dimensions, configuring global
+ // dimensions may be all that is required.
builder.addGlobalDimension(ContextData.ID);
- // Specific metrics may have their own distinct set of attributes
+ // Specific metrics may have their own distinct set of dimensions
builder.addMetric(StandardMetric.TIME,
Arrays.asList(StandardContext.OPERATION, SpecificContext.SOME_DIMENSION));
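The javadoc example above is cut off by the collapsed region; for reference (not part of this commit) it might be completed along these lines. The trailing build() call is an assumption based on the builder pattern shown, and ContextData, StandardMetric, StandardContext, and SpecificContext are simply the identifiers already used in that example.

    // Sketch of configuring a DimensionMapper at application startup,
    // following the builder calls shown in the javadoc above.
    DimensionMapper.Builder builder = new DimensionMapper.Builder();

    // Dimensions attached to every metric event
    builder.addGlobalDimension(ContextData.ID);

    // Dimensions specific to TIME events
    builder.addMetric(StandardMetric.TIME,
            Arrays.asList(StandardContext.OPERATION, SpecificContext.SOME_DIMENSION));

    // build() is assumed here; the excerpt ends before the builder is finished
    DimensionMapper mapper = builder.build();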
@@ -34,17 +34,17 @@
* {@link MetricDatum} appropriate to send to CloudWatch.
*
* Allows adding metrics for a period of time, then flushing all the aggregated
- * attributes, then accumulating more.
+ * data, then accumulating more.
* <p>
* Package-private as this is intended purely as an implementation detail of
* the CloudWatchRecorder. The implementation is specific to how it is used
* by the recorder, and does not currently support "general purpose"
- * aggregation of metric attributes/MetricDatums.
+ * aggregation of metric data/MetricDatums.
* <p>
* Metrics are aggregated by namespace, metricName, attributes, and unit.
* Matching metrics events will be added together to form a single
* {@link StatisticSet}.
- * No aggregation across disparate attributes is supported.
+ * No aggregation across disparate dimensions is supported.
* <p>
* Dimensions to use for a metric are determined by a {@link DimensionMapper},
* with values pulled from the current metric context.
@@ -66,7 +66,7 @@ public MetricDataAggregator(final DimensionMapper dimensionMapper)
/**
* Add a metric event to be aggregated.
* Events with the same name, unit, and attributes will have their values
- * aggregated into {@link StatisticSet}s, with the aggregated attributes
+ * aggregated into {@link StatisticSet}s, with the aggregated data
* available via {@link #flush}.
*
* @param context Metric context to use for dimension information
@@ -97,15 +97,15 @@ public void add(
/**
* Flush all the current aggregated MetricDatum and return as a list.
* This is safe to call concurrently with {@link #add}.
- * All attributes added prior to a flush call will be included in the returned aggregate.
- * Any attributes added after the flush call returns will be included in a subsequent flush.
+ * All data added prior to a flush call will be included in the returned aggregate.
+ * Any data added after the flush call returns will be included in a subsequent flush.
* Data added while a flush call is processing may be included in the current flush
* or a subsequent flush, but will not be included twice.
*
- * The timestamp on the aggregated attributes will be the time it was flushed,
+ * The timestamp on the aggregated data will be the time it was flushed,
* not the time of any of the original metric events.
*
- * @return list of all attributes aggregated since the last flush
+ * @return list of all data aggregated since the last flush
*/
public List<MetricDatum> flush() {
if (statisticsMap.size() == 0) {
@@ -116,7 +116,7 @@ public List<MetricDatum> flush() {
// at this time in the statisticsMap.
// Note that this iterates over the key set of the underlying map, and
// removes keys from the map at the same time. It is possible keys may
- // be added during this iteration, or attributes for keys modified between
+ // be added during this iteration, or data for keys modified between
// a key being chosen for iteration and being removed from the map.
// This is ok. Any new keys will be picked up on subsequent flushes.
//TODO: use two maps and swap between, to ensure 'perfect' segmentation?
@@ -151,7 +151,7 @@ private static StatisticSet sum( StatisticSet v1, StatisticSet v2 ) {
* the same:
* namespace
* metric name
- * attributes
+ * dimensions
* unit
*/
private static final class DatumKey {
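The aggregation this file describes keys each event by namespace, metric name, dimensions, and unit, then merges events that share a key into a single StatisticSet. The body of the sum method is not visible in this excerpt, so the sketch below (not part of the commit) shows only the standard way such statistics compose, using the AWS SDK v1 model class named in the hunk header above.

    import com.amazonaws.services.cloudwatch.model.StatisticSet;

    // Sketch: merge two StatisticSets for events that share the same
    // (namespace, metric name, dimensions, unit) key. Sample counts and sums
    // add, while minimum and maximum take the extremes of the pair.
    final class StatisticSetMergeSketch {
        static StatisticSet merge(StatisticSet a, StatisticSet b) {
            return new StatisticSet()
                    .withSampleCount(a.getSampleCount() + b.getSampleCount())
                    .withSum(a.getSum() + b.getSum())
                    .withMinimum(Math.min(a.getMinimum(), b.getMinimum()))
                    .withMaximum(Math.max(a.getMaximum(), b.getMaximum()));
        }
    }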
@@ -33,7 +33,7 @@
/**
* Simple MetricRecorder that sends metrics events to a file.
*
- * The attributes will be written to a file with the current time slice appended to
+ * The data will be written to a file with the current time slice appended to
* the end of the name, and periodically rolled over to a new file.
*
* This implementation is bare-bones, and does not serialize to any
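As an illustration of the rollover naming described above, a file name with the current time slice appended could be derived as in the sketch below (again, not part of the commit). The one-hour slice length and the suffix format are assumptions made for the example; the recorder's actual format is not shown in this excerpt.

    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;
    import java.time.temporal.ChronoUnit;

    // Sketch: append the start of the current time slice (here, the current
    // UTC hour) to a base name, so each slice rolls over to a new file.
    // Example: sliceFileName("service-metrics.log", Instant.now()) might
    // yield "service-metrics.log.2018-06-12-14".
    final class SliceFileNameSketch {
        static String sliceFileName(String baseName, Instant now) {
            Instant sliceStart = now.truncatedTo(ChronoUnit.HOURS);
            String suffix = DateTimeFormatter.ofPattern("yyyy-MM-dd-HH")
                    .withZone(ZoneOffset.UTC)
                    .format(sliceStart);
            return baseName + "." + suffix;
        }
    }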
