hdrhistogram/lib.rs
1//! HdrSample is a port of Gil Tene's HdrHistogram to native Rust. It provides recording and
2//! analyzing of sampled data value counts across a large, configurable value range with
3//! configurable precision within the range. The resulting "HDR" histogram allows for fast and
4//! accurate analysis of the extreme ranges of data with non-normal distributions, like latency.
5//!
6//! # HdrHistogram
7//!
8//! What follows is a description from [the HdrHistogram
9//! website](https://hdrhistogram.github.io/HdrHistogram/). Users are encouraged to read the
10//! documentation from the original [Java
11//! implementation](https://github.com/HdrHistogram/HdrHistogram), as most of the concepts
12//! translate directly to the Rust port.
13//!
14//! HdrHistogram supports the recording and analyzing of sampled data value counts across a
15//! configurable integer value range with configurable value precision within the range. Value
16//! precision is expressed as the number of significant digits in the value recording, and provides
17//! control over value quantization behavior across the value range and the subsequent value
18//! resolution at any given level.
19//!
20//! For example, a Histogram could be configured to track the counts of observed integer values
21//! between 0 and 3,600,000,000 while maintaining a value precision of 3 significant digits across
22//! that range. Value quantization within the range will thus be no larger than 1/1,000th (or 0.1%)
23//! of any value. This example Histogram could be used to track and analyze the counts of observed
24//! response times ranging between 1 microsecond and 1 hour in magnitude, while maintaining a value
25//! resolution of 1 microsecond up to 1 millisecond, a resolution of 1 millisecond (or better) up
26//! to one second, and a resolution of 1 second (or better) up to 1,000 seconds. At it's maximum
27//! tracked value (1 hour), it would still maintain a resolution of 3.6 seconds (or better).
28//!
29//! HDR Histogram is designed for recording histograms of value measurements in latency and
30//! performance sensitive applications. Measurements show value recording times as low as 3-6
31//! nanoseconds on modern (circa 2014) Intel CPUs. The HDR Histogram maintains a fixed cost in both
32//! space and time. A Histogram's memory footprint is constant, with no allocation operations
33//! involved in recording data values or in iterating through them. The memory footprint is fixed
34//! regardless of the number of data value samples recorded, and depends solely on the dynamic
35//! range and precision chosen. The amount of work involved in recording a sample is constant, and
36//! directly computes storage index locations such that no iteration or searching is ever involved
37//! in recording data values.
38//!
39//! If you are looking for FFI bindings to
40//! [`HdrHistogram_c`](https://github.com/HdrHistogram/HdrHistogram_c), you want the
41//! [`hdrhistogram_c`](https://crates.io/crates/hdrhistogram_c) crate instead.
42//!
43//! # Interacting with the library
44//!
45//! HdrSample's API follows that of the original HdrHistogram Java implementation, with some
46//! modifications to make its use more idiomatic in Rust. The description in this section has been
47//! adapted from that given by the [Python port](https://github.com/HdrHistogram/HdrHistogram_py),
48//! as it gives a nicer first-time introduction to the use of HdrHistogram than the Java docs do.
49//!
50//! HdrSample is generally used in one of two modes: recording samples, or querying for analytics.
51//! In distributed deployments, the recording may be performed remotely (and possibly in multiple
52//! locations), to then be aggregated later in a central location for analysis.
53//!
54//! ## Recording samples
55//!
56//! A histogram instance is created using the `::new` methods on the `Histogram` struct. These come
57//! in three variants: `new`, `new_with_max`, and `new_with_bounds`. The first of these only sets
58//! the required precision of the sampled data, but leaves the value range open such that any value
59//! may be recorded. A `Histogram` created this way (or one where auto-resize has been explicitly
60//! enabled) will automatically resize itself if a value that is too large to fit in the current
61//! dataset is encountered. `new_with_max` sets an upper bound on the values to be recorded, and
62//! disables auto-resizing, thus preventing any re-allocation during recording. If the application
63//! attempts to record a larger value than this maximum bound, the `record` call will return an
64//! error. Finally, `new_with_bounds` restricts the lowest representable value of the dataset,
65//! such that a smaller range needs to be covered (thus reducing the overall allocation size).
66//!
67//! For example the example below shows how to create a `Histogram` that can count values in the
68//! `[1..3600000]` range with 1% precision, which could be used to track latencies in the range `[1
69//! msec..1 hour]`).
70//!
71//! ```
72//! use hdrhistogram::Histogram;
73//! let mut hist = Histogram::<u64>::new_with_bounds(1, 60 * 60 * 1000, 2).unwrap();
74//!
75//! // samples can be recorded using .record, which will error if the value is too small or large
76//! hist.record(54321).expect("value 54321 should be in range");
77//!
78//! // for ergonomics, samples can also be recorded with +=
79//! // this call will panic if the value is out of range!
80//! hist += 54321;
81//!
82//! // if the code that generates the values is subject to Coordinated Omission,
83//! // the self-correcting record method should be used instead.
84//! // for example, if the expected sampling interval is 10 msec:
85//! hist.record_correct(54321, 10).expect("value 54321 should be in range");
86//! ```
87//!
88//! Note the `u64` type. This type can be changed to reduce the storage overhead for all the
89//! histogram bins, at the cost of a risk of saturating if a large number of samples end up in the
90//! same bin.
91//!
92//! ## Querying samples
93//!
94//! At any time, the histogram can be queried to return interesting statistical measurements, such
95//! as the total number of recorded samples, or the value at a given quantile:
96//!
97//! ```
98//! use hdrhistogram::Histogram;
99//! let hist = Histogram::<u64>::new(2).unwrap();
100//! // ...
101//! println!("# of samples: {}", hist.len());
102//! println!("99.9'th percentile: {}", hist.value_at_quantile(0.999));
103//! ```
104//!
105//! Several useful iterators are also provided for quickly getting an overview of the dataset. The
106//! simplest one is `iter_recorded()`, which yields one item for every non-empty sample bin. All
107//! the HdrHistogram iterators are supported in HdrSample, so look for the `*Iterator` classes in
108//! the [Java documentation](https://hdrhistogram.github.io/HdrHistogram/JavaDoc/).
109//!
110//! ```
111//! use hdrhistogram::Histogram;
112//! let hist = Histogram::<u64>::new(2).unwrap();
113//! // ...
114//! for v in hist.iter_recorded() {
115//! println!("{}'th percentile of data is {} with {} samples",
116//! v.percentile(), v.value_iterated_to(), v.count_at_value());
117//! }
118//! ```
119//!
120//! ## Panics and error handling
121//!
122//! As long as you're using safe, non-panicking functions (see below), this library should never
123//! panic. Any panics you encounter are a bug; please file them in the issue tracker.
124//!
125//! A few functions have their functionality exposed via `AddAssign` and `SubAssign`
126//! implementations. These alternate forms are equivalent to simply calling `unwrap()` on the
127//! normal functions, so the normal rules of `unwrap()` apply: view with suspicion when used in
128//! production code, etc.
129//!
130//! | Returns Result | Panics on error | Functionality |
131//! | ------------------------------ | ------------------ | ------------------------------- |
132//! | `h.record(v)` | `h += v` | Increment count for value `v` |
133//! | `h.add(h2)` | `h += h2` | Add `h2`'s counts to `h` |
134//! | `h.subtract(h2)` | `h -= h2` | Subtract `h2`'s counts from `h` |
135//!
136//! Other than the panicking forms of the above functions, everything will return `Result` or
137//! `Option` if it can fail.
138//!
139//! ## `usize` limitations
140//!
141//! Depending on the configured number of significant digits and maximum value, a histogram's
142//! internal storage may have hundreds of thousands of cells. Systems with a 16-bit `usize` cannot
143//! represent pointer offsets that large, so relevant operations (creation, deserialization, etc)
144//! will fail with a suitable error (e.g. `CreationError::UsizeTypeTooSmall`). If you are using such
145//! a system and hitting these errors, reducing the number of significant digits will greatly reduce
146//! memory consumption (and therefore the need for large `usize` values). Lowering the max value may
147//! also help as long as resizing is disabled.
148//!
149//! 32- and above systems will not have any such issues, as all possible histograms fit within a
150//! 32-bit index.
151//!
152//! ## Floating point accuracy
153//!
154//! Some calculations inherently involve floating point values, like `value_at_quantile`, and are
155//! therefore subject to the precision limits of IEEE754 floating point calculations. The user-
156//! visible consequence of this is that in certain corner cases, you might end up with a bucket (and
157//! therefore value) that is higher or lower than it would be if the calculation had been done
158//! with arbitrary-precision arithmetic. However, double-precision IEEE754 (i.e. `f64`) is very
159//! good at its job, so these cases should be rare. Also, we haven't seen a case that was off by
160//! more than one bucket.
161//!
162//! To minimize FP precision losses, we favor working with quantiles rather than percentiles. A
163//! quantile represents a portion of a set with a number in `[0, 1]`. A percentile is the same
164//! concept, except it uses the range `[0, 100]`. Working just with quantiles means we can skip an
165//! FP operation in a few places, and therefore avoid opportunities for precision loss to creep in.
166//!
167//! # Limitations and Caveats
168//!
169//! As with all the other HdrHistogram ports, the latest features and bug fixes from the upstream
170//! HdrHistogram implementations may not be available in this port. A number of features have also
171//! not (yet) been implemented:
172//!
173//! - Concurrency support (`AtomicHistogram`, `ConcurrentHistogram`, …).
174//! - `DoubleHistogram`.
175//! - The `Recorder` feature of HdrHistogram.
176//! - Value shifting ("normalization").
177//! - Textual output methods. These seem almost orthogonal to HdrSample, though it might be
178//! convenient if we implemented some relevant traits (CSV, JSON, and possibly simple
179//! `fmt::Display`).
180//!
181//! Most of these should be fairly straightforward to add, as the code aligns pretty well with the
182//! original Java/C# code. If you do decide to implement one and send a PR, please make sure you
183//! also port the [test
184//! cases](https://github.com/HdrHistogram/HdrHistogram/tree/master/src/test/java/org/HdrHistogram),
185//! and try to make sure you implement appropriate traits to make the use of the feature as
186//! ergonomic as possible.
187
188#![deny(
189 missing_docs,
190 trivial_casts,
191 trivial_numeric_casts,
192 unused_extern_crates,
193 unused_import_braces,
194 unused_results,
195 variant_size_differences
196)]
197// Enable feature(test) is enabled so that we can have benchmarks of private code
198#![cfg_attr(all(test, feature = "bench_private"), feature(test))]
199
200#[cfg(all(test, feature = "bench_private"))]
201extern crate test;
202
203#[cfg(feature = "serialization")]
204#[macro_use]
205extern crate nom;
206
207use num_traits::ToPrimitive;
208use std::borrow::Borrow;
209use std::cmp;
210use std::ops::{Add, AddAssign, Sub, SubAssign};
211
212use iterators::HistogramIterator;
213
214/// Min value of a new histogram.
215/// Equivalent to `u64::max_value()`, but const functions aren't allowed (yet).
216/// See <https://github.com/rust-lang/rust/issues/24111>
217const ORIGINAL_MIN: u64 = (-1_i64 >> 63) as u64;
218/// Max value of a new histogram.
219const ORIGINAL_MAX: u64 = 0;
220
221/// `Histogram` is the core data structure in HdrSample. It records values, and performs analytics.
222///
223/// At its heart, it keeps the count for recorded samples in "buckets" of values. The resolution
224/// and distribution of these buckets is tuned based on the desired highest trackable value, as
225/// well as the user-specified number of significant decimal digits to preserve. The values for the
226/// buckets are kept in a way that resembles floats and doubles: there is a mantissa and an
227/// exponent, and each bucket represents a different exponent. The "sub-buckets" within a bucket
228/// represent different values for the mantissa.
229///
230/// To a first approximation, the sub-buckets of the first
231/// bucket would hold the values `0`, `1`, `2`, `3`, …, the sub-buckets of the second bucket would
232/// hold `0`, `2`, `4`, `6`, …, the third would hold `0`, `4`, `8`, and so on. However, the low
233/// half of each bucket (except bucket 0) is unnecessary, since those values are already covered by
234/// the sub-buckets of all the preceeding buckets. Thus, `Histogram` keeps the top half of every
235/// such bucket.
236///
237/// For the purposes of explanation, consider a `Histogram` with 2048 sub-buckets for every bucket,
238/// and a lowest discernible value of 1:
239///
240/// <pre>
241/// The 0th bucket covers 0...2047 in multiples of 1, using all 2048 sub-buckets
242/// The 1st bucket covers 2048..4097 in multiples of 2, using only the top 1024 sub-buckets
243/// The 2nd bucket covers 4096..8191 in multiple of 4, using only the top 1024 sub-buckets
244/// ...
245/// </pre>
246///
247/// Bucket 0 is "special" here. It is the only one that has 2048 entries. All the rest have
248/// 1024 entries (because their bottom half overlaps with and is already covered by the all of
249/// the previous buckets put together). In other words, the `k`'th bucket could represent `0 *
250/// 2^k` to `2048 * 2^k` in 2048 buckets with `2^k` precision, but the midpoint of `1024 * 2^k
251/// = 2048 * 2^(k-1)`, which is the k-1'th bucket's end. So, we would use the previous bucket
252/// for those lower values as it has better precision.
253///
254#[derive(Debug, Clone)]
255pub struct Histogram<T: Counter> {
256 auto_resize: bool,
257
258 // >= 2 * lowest_discernible_value
259 highest_trackable_value: u64,
260 // >= 1
261 lowest_discernible_value: u64,
262 // in [0, 5]
263 significant_value_digits: u8,
264
265 // in [1, 64]
266 bucket_count: u8,
267 // 2^(sub_bucket_half_count_magnitude + 1) = [2, 2^18]
268 sub_bucket_count: u32,
269 // sub_bucket_count / 2 = [1, 2^17]
270 sub_bucket_half_count: u32,
271 // In [0, 17]
272 sub_bucket_half_count_magnitude: u8,
273 // The bottom sub bucket's bits set, shifted by unit magnitude.
274 // The highest bit will be (one-indexed) sub bucket count magnitude + unit_magnitude.
275 sub_bucket_mask: u64,
276
277 // Number of leading zeros that would be used by the largest value in bucket 0.
278 // in [1, 63]
279 leading_zero_count_base: u8,
280
281 // Largest exponent of 2 that's smaller than the lowest discernible value. In [0, 62].
282 unit_magnitude: u8,
283 // low unit_magnitude bits set
284 unit_magnitude_mask: u64,
285
286 max_value: u64,
287 min_non_zero_value: u64,
288
289 total_count: u64,
290 counts: Vec<T>,
291}
292
293/// Module containing the implementations of all `Histogram` iterators.
294pub mod iterators;
295
296impl<T: Counter> Histogram<T> {
297 // ********************************************************************************************
298 // Histogram administrative read-outs
299 // ********************************************************************************************
300
301 /// Get the current number of distinct values that can be represented in the histogram.
302 pub fn distinct_values(&self) -> usize {
303 self.counts.len()
304 }
305
306 /// Get the lowest discernible value for the histogram in its current configuration.
307 pub fn low(&self) -> u64 {
308 self.lowest_discernible_value
309 }
310
311 /// Get the highest trackable value for the histogram in its current configuration.
312 pub fn high(&self) -> u64 {
313 self.highest_trackable_value
314 }
315
316 /// Get the number of significant value digits kept by this histogram.
317 pub fn sigfig(&self) -> u8 {
318 self.significant_value_digits
319 }
320
321 /// Get the total number of samples recorded.
322 #[deprecated(since = "6.0.0", note = "use `len` instead")]
323 pub fn count(&self) -> u64 {
324 self.total_count
325 }
326
327 /// Get the total number of samples recorded.
328 pub fn len(&self) -> u64 {
329 self.total_count
330 }
331
332 /// Returns true if this histogram has no recorded values.
333 pub fn is_empty(&self) -> bool {
334 self.total_count == 0
335 }
336
337 /// Get the number of buckets used by the histogram to cover the highest trackable value.
338 ///
339 /// This method differs from `.len()` in that it does not count the sub buckets within each
340 /// bucket.
341 ///
342 /// This method is probably only useful for testing purposes.
343 pub fn buckets(&self) -> u8 {
344 self.bucket_count
345 }
346
347 /// Returns true if this histogram is currently able to auto-resize as new samples are recorded.
348 pub fn is_auto_resize(&self) -> bool {
349 self.auto_resize
350 }
351
352 // ********************************************************************************************
353 // Methods for looking up the count for a given value/index
354 // ********************************************************************************************
355
356 /// Find the bucket the given value should be placed in.
357 /// Returns `None` if the corresponding index cannot be represented in `usize`.
358 fn index_for(&self, value: u64) -> Option<usize> {
359 let bucket_index = self.bucket_for(value);
360 let sub_bucket_index = self.sub_bucket_for(value, bucket_index);
361
362 debug_assert!(sub_bucket_index < self.sub_bucket_count);
363 debug_assert!(bucket_index == 0 || (sub_bucket_index >= self.sub_bucket_half_count));
364
365 // Calculate the index for the first entry that will be used in the bucket (halfway through
366 // sub_bucket_count). For bucket_index 0, all sub_bucket_count entries may be used, but
367 // bucket_base_index is still set in the middle.
368 let bucket_base_index =
369 (i32::from(bucket_index) + 1) << self.sub_bucket_half_count_magnitude;
370
371 // Calculate the offset in the bucket. This subtraction will result in a positive value in
372 // all buckets except the 0th bucket (since a value in that bucket may be less than half
373 // the bucket's 0 to sub_bucket_count range). However, this works out since we give bucket 0
374 // twice as much space.
375 let offset_in_bucket = sub_bucket_index as i32 - self.sub_bucket_half_count as i32;
376
377 let index = bucket_base_index + offset_in_bucket;
378 // This is always non-negative because offset_in_bucket is only negative (and only then by
379 // sub_bucket_half_count at most) for bucket 0, and bucket_base_index will be halfway into
380 // bucket 0's sub buckets in that case.
381 debug_assert!(index >= 0);
382 index.to_usize()
383 }
384
385 /// Find the bucket the given value should be placed in.
386 /// If the value is bigger than what this histogram can express, the last valid bucket index
387 /// is returned instead.
388 fn index_for_or_last(&self, value: u64) -> usize {
389 self.index_for(value)
390 .map_or(self.last_index(), |i| cmp::min(i, self.last_index()))
391 }
392
393 /// Get a mutable reference to the count bucket for the given value, if it is in range.
394 fn mut_at(&mut self, value: u64) -> Option<&mut T> {
395 self.index_for(value)
396 .and_then(move |i| self.counts.get_mut(i))
397 }
398
399 /// Get the index of the last histogram bin.
400 fn last_index(&self) -> usize {
401 self.distinct_values()
402 .checked_sub(1)
403 .expect("Empty counts array?")
404 }
405
406 // ********************************************************************************************
407 // Histograms should be cloneable.
408 // ********************************************************************************************
409
410 /// Get a copy of this histogram, corrected for coordinated omission.
411 ///
412 /// To compensate for the loss of sampled values when a recorded value is larger than the
413 /// expected interval between value samples, the new histogram will include an auto-generated
414 /// additional series of decreasingly-smaller (down to the `interval`) value records for each
415 /// count found in the current histogram that is larger than the `interval`.
416 ///
417 /// Note: This is a post-correction method, as opposed to the at-recording correction method
418 /// provided by `record_correct`. The two methods are mutually exclusive, and only one of the
419 /// two should be be used on a given data set to correct for the same coordinated omission
420 /// issue.
421 ///
422 /// See notes in the description of the Histogram calls for an illustration of why this
423 /// corrective behavior is important.
424 ///
425 /// If `interval` is larger than 0, add auto-generated value records as appropriate if value is
426 /// larger than `interval`.
427 pub fn clone_correct(&self, interval: u64) -> Histogram<T> {
428 let mut h = Histogram::new_from(self);
429 for v in self.iter_recorded() {
430 h.record_n_correct(v.value_iterated_to(), v.count_at_value(), interval)
431 .expect("Same dimensions; all values should be representable");
432 }
433 h
434 }
435
436 /// Overwrite this histogram with the given histogram. All data and statistics in this
437 /// histogram will be overwritten.
438 pub fn set_to<B: Borrow<Histogram<T>>>(&mut self, source: B) -> Result<(), AdditionError> {
439 self.reset();
440 self.add(source.borrow())
441 }
442
443 /// Overwrite this histogram with the given histogram while correcting for coordinated
444 /// omission. All data and statistics in this histogram will be overwritten. See
445 /// `clone_correct` for more detailed explanation about how correction is applied
446 pub fn set_to_corrected<B: Borrow<Histogram<T>>>(
447 &mut self,
448 source: B,
449 interval: u64,
450 ) -> Result<(), RecordError> {
451 self.reset();
452 self.add_correct(source, interval)
453 }
454
455 // ********************************************************************************************
456 // Add and subtract methods for, well, adding or subtracting two histograms
457 // ********************************************************************************************
458
459 /// Add the contents of another histogram to this one.
460 ///
461 /// Returns an error if values in the other histogram cannot be stored; see `AdditionError`.
462 pub fn add<B: Borrow<Histogram<T>>>(&mut self, source: B) -> Result<(), AdditionError> {
463 let source = source.borrow();
464
465 // If source is empty there's nothing to add
466 if source.is_empty() {
467 return Ok(());
468 }
469
470 // make sure we can take the values in source
471 let top = self.highest_equivalent(self.value_for(self.last_index()));
472 if top < source.max() {
473 if !self.auto_resize {
474 return Err(AdditionError::OtherAddendValueExceedsRange);
475 }
476 // We're growing the histogram, so new high > old high and is therefore >= 2x low.
477 self.resize(source.max())
478 .map_err(|_| AdditionError::ResizeFailedUsizeTypeTooSmall)?;
479 }
480
481 let matching_buckets = self.bucket_count == source.bucket_count
482 && self.sub_bucket_count == source.sub_bucket_count
483 && self.unit_magnitude == source.unit_magnitude;
484 if matching_buckets && self.is_empty() {
485 // Counts arrays are of the same length and meaning.
486 // If self is empty (all counters are zeroes) we can copy the source histogram with a memory copy.
487 self.counts[..].copy_from_slice(&source.counts[..]);
488 self.total_count = source.total_count;
489 self.min_non_zero_value = source.min_non_zero_value;
490 self.max_value = source.max_value;
491 } else if matching_buckets {
492 // Counts arrays are of the same length and meaning,
493 // so we can just iterate and add directly:
494 let mut observed_other_total_count: u64 = 0;
495 for i in 0..source.distinct_values() {
496 let other_count = source
497 .count_at_index(i)
498 .expect("iterating inside source length");
499 if other_count != T::zero() {
500 // indexing is safe: same configuration as `source`, and the index was valid for
501 // `source`.
502 self.counts[i] = self.counts[i].saturating_add(other_count);
503 observed_other_total_count =
504 observed_other_total_count.saturating_add(other_count.as_u64());
505 }
506 }
507
508 self.total_count = self.total_count.saturating_add(observed_other_total_count);
509 let mx = source.max();
510 if mx > self.max() {
511 self.update_max(mx);
512 }
513 let mn = source.min_nz();
514 if mn < self.min_nz() {
515 self.update_min(mn);
516 }
517 } else {
518 // Arrays are not a direct match (or the other could change on the fly in some valid
519 // way), so we can't just stream through and add them. Instead, go through the array
520 // and add each non-zero value found at it's proper value:
521
522 // Do max value first, to avoid max value updates on each iteration:
523 let other_max_index = source
524 .index_for(source.max())
525 .expect("Index for max value must exist");
526 let other_count = source
527 .count_at_index(other_max_index)
528 .expect("max's index must exist");
529 self.record_n(source.value_for(other_max_index), other_count)
530 .expect("Record must succeed; already resized for max value");
531
532 // Record the remaining values, up to but not including the max value:
533 for i in 0..other_max_index {
534 let other_count = source
535 .count_at_index(i)
536 .expect("index before max must exist");
537 if other_count != T::zero() {
538 self.record_n(source.value_for(i), other_count)
539 .expect("Record must succeed; already recorded max value");
540 }
541 }
542 }
543
544 // TODO:
545 // if source.start_time < self.start_time {
546 // self.start_time = source.start_time;
547 // }
548 // if source.end_time > self.end_time {
549 // self.end_time = source.end_time;
550 // }
551 Ok(())
552 }
553
554 /// Add the contents of another histogram to this one, while correcting for coordinated
555 /// omission.
556 ///
557 /// To compensate for the loss of sampled values when a recorded value is larger than the
558 /// expected interval between value samples, the values added will include an auto-generated
559 /// additional series of decreasingly-smaller (down to the given `interval`) value records for
560 /// each count found in the current histogram that is larger than `interval`.
561 ///
562 /// Note: This is a post-recording correction method, as opposed to the at-recording correction
563 /// method provided by `record_correct`. The two methods are mutually exclusive, and only one
564 /// of the two should be be used on a given data set to correct for the same coordinated
565 /// omission issue.
566 ///
567 /// See notes in the description of the `Histogram` calls for an illustration of why this
568 /// corrective behavior is important.
569 ///
570 /// See `RecordError` for error conditions.
571 pub fn add_correct<B: Borrow<Histogram<T>>>(
572 &mut self,
573 source: B,
574 interval: u64,
575 ) -> Result<(), RecordError> {
576 let source = source.borrow();
577
578 for v in source.iter_recorded() {
579 self.record_n_correct(v.value_iterated_to(), v.count_at_value(), interval)?;
580 }
581 Ok(())
582 }
583
584 /// Subtract the contents of another histogram from this one.
585 ///
586 /// See `SubtractionError` for error conditions.
587 pub fn subtract<B: Borrow<Histogram<T>>>(
588 &mut self,
589 subtrahend: B,
590 ) -> Result<(), SubtractionError> {
591 let subtrahend = subtrahend.borrow();
592
593 // If the source is empty there's nothing to subtract
594 if subtrahend.is_empty() {
595 return Ok(());
596 }
597
598 // make sure we can take the values in source
599 let top = self.highest_equivalent(self.value_for(self.last_index()));
600 if top < self.highest_equivalent(subtrahend.max()) {
601 return Err(SubtractionError::SubtrahendValueExceedsMinuendRange);
602 }
603
604 let old_min_highest_equiv = self.highest_equivalent(self.min());
605 let old_max_lowest_equiv = self.lowest_equivalent(self.max());
606
607 // If total_count is at the max value, it may have saturated, so we must restat
608 let mut needs_restat = self.total_count == u64::max_value();
609
610 for i in 0..subtrahend.distinct_values() {
611 let other_count = subtrahend
612 .count_at_index(i)
613 .expect("index inside subtrahend len must exist");
614 if other_count != T::zero() {
615 let other_value = subtrahend.value_for(i);
616 {
617 let mut_count = self.mut_at(other_value);
618
619 if let Some(c) = mut_count {
620 // TODO Perhaps we should saturating sub here? Or expose some form of
621 // pluggability so users could choose to error or saturate? Both seem
622 // useful. It's also sort of inconsistent with overflow, which now
623 // saturates.
624 *c = (*c)
625 .checked_sub(&other_count)
626 .ok_or(SubtractionError::SubtrahendCountExceedsMinuendCount)?;
627 } else {
628 panic!("Tried to subtract value outside of range: {}", other_value);
629 }
630 }
631
632 // we might have just set the min / max to have zero count.
633 if other_value <= old_min_highest_equiv || other_value >= old_max_lowest_equiv {
634 needs_restat = true;
635 }
636
637 if !needs_restat {
638 // if we're not already going to recalculate everything, subtract from
639 // total_count
640 self.total_count = self
641 .total_count
642 .checked_sub(other_count.as_u64())
643 .expect("total count underflow on subtraction");
644 }
645 }
646 }
647
648 if needs_restat {
649 let l = self.distinct_values();
650 self.restat(l);
651 }
652
653 Ok(())
654 }
655
656 // ********************************************************************************************
657 // Setters and resetters.
658 // ********************************************************************************************
659
660 /// Clear the contents of this histogram while preserving its statistics and configuration.
661 pub fn clear(&mut self) {
662 for c in &mut self.counts {
663 *c = T::zero();
664 }
665 self.total_count = 0;
666 }
667
668 /// Reset the contents and statistics of this histogram, preserving only its configuration.
669 pub fn reset(&mut self) {
670 self.clear();
671
672 self.reset_max(ORIGINAL_MAX);
673 self.reset_min(ORIGINAL_MIN);
674 // self.normalizing_index_offset = 0;
675 // self.start_time = time::Instant::now();
676 // self.end_time = time::Instant::now();
677 // self.tag = String::new();
678 }
679
680 /// Control whether or not the histogram can auto-resize and auto-adjust it's highest trackable
681 /// value as high-valued samples are recorded.
682 pub fn auto(&mut self, enabled: bool) {
683 self.auto_resize = enabled;
684 }
685
686 // ********************************************************************************************
687 // Construction.
688 // ********************************************************************************************
689
690 /// Construct an auto-resizing `Histogram` with a lowest discernible value of 1 and an
691 /// auto-adjusting highest trackable value. Can auto-resize up to track values up to
692 /// `(i64::max_value() / 2)`.
693 ///
694 /// See [`new_with_bounds`] for info on `sigfig`.
695 ///
696 /// [`new_with_bounds`]: #method.new_with_bounds
697 pub fn new(sigfig: u8) -> Result<Histogram<T>, CreationError> {
698 let mut h = Self::new_with_bounds(1, 2, sigfig);
699 if let Ok(ref mut h) = h {
700 h.auto_resize = true;
701 }
702 h
703 }
704
705 /// Construct a `Histogram` given a known maximum value to be tracked, and a number of
706 /// significant decimal digits. The histogram will be constructed to implicitly track
707 /// (distinguish from 0) values as low as 1. Auto-resizing will be disabled.
708 ///
709 /// See [`new_with_bounds`] for info on `high` and `sigfig`.
710 ///
711 /// [`new_with_bounds`]: #method.new_with_bounds
712 pub fn new_with_max(high: u64, sigfig: u8) -> Result<Histogram<T>, CreationError> {
713 Self::new_with_bounds(1, high, sigfig)
714 }
715
716 /// Construct a `Histogram` with known upper and lower bounds for recorded sample values.
717 ///
718 /// `low` is the lowest value that can be discerned (distinguished from 0) by the histogram,
719 /// and must be a positive integer that is >= 1. It may be internally rounded down to nearest
720 /// power of 2. Providing a lowest discernible value (`low`) is useful is situations where the
721 /// units used for the histogram's values are much smaller that the minimal accuracy required.
722 /// E.g. when tracking time values stated in nanosecond units, where the minimal accuracy
723 /// required is a microsecond, the proper value for `low` would be 1000. If you're not sure,
724 /// use 1.
725 ///
726 /// `high` is the highest value to be tracked by the histogram, and must be a
727 /// positive integer that is `>= (2 * low)`. If you're not sure, use `u64::max_value()`.
728 ///
729 /// `sigfig` Specifies the number of significant figures to maintain. This is the number of
730 /// significant decimal digits to which the histogram will maintain value resolution and
731 /// separation. Must be in the range [0, 5]. If you're not sure, use 3. As `sigfig` increases,
732 /// memory usage grows exponentially, so choose carefully if there will be many histograms in
733 /// memory at once or if storage is otherwise a concern.
734 ///
735 /// Returns an error if the provided parameters are invalid; see `CreationError`.
736 pub fn new_with_bounds(low: u64, high: u64, sigfig: u8) -> Result<Histogram<T>, CreationError> {
737 // Verify argument validity
738 if low < 1 {
739 return Err(CreationError::LowIsZero);
740 }
741 if low > u64::max_value() / 2 {
742 // avoid overflow in 2 * low
743 return Err(CreationError::LowExceedsMax);
744 }
745 if high < 2 * low {
746 return Err(CreationError::HighLessThanTwiceLow);
747 }
748 if sigfig > 5 {
749 return Err(CreationError::SigFigExceedsMax);
750 }
751
752 // Given a 3 decimal point accuracy, the expectation is obviously for "+/- 1 unit at 1000".
753 // It also means that it's "ok to be +/- 2 units at 2000". The "tricky" thing is that it is
754 // NOT ok to be +/- 2 units at 1999. Only starting at 2000. So internally, we need to
755 // maintain single unit resolution to 2x 10^decimal_points.
756
757 // largest value with single unit resolution, in [2, 200_000].
758 let largest = 2 * 10_u32.pow(u32::from(sigfig));
759
760 let unit_magnitude = (low as f64).log2().floor() as u8;
761 let unit_magnitude_mask = (1 << unit_magnitude) - 1;
762
763 // We need to maintain power-of-two sub_bucket_count (for clean direct indexing) that is
764 // large enough to provide unit resolution to at least
765 // largest_value_with_single_unit_resolution. So figure out
766 // largest_value_with_single_unit_resolution's nearest power-of-two (rounded up), and use
767 // that.
768 // In [1, 18]. 2^18 > 2 * 10^5 (the largest possible
769 // largest_value_with_single_unit_resolution)
770 let sub_bucket_count_magnitude = (f64::from(largest)).log2().ceil() as u8;
771 let sub_bucket_half_count_magnitude = sub_bucket_count_magnitude - 1;
772 let sub_bucket_count = 1_u32 << u32::from(sub_bucket_count_magnitude);
773
774 if unit_magnitude + sub_bucket_count_magnitude > 63 {
775 // sub_bucket_count entries can't be represented, with unit_magnitude applied, in a
776 // u64. Technically it still sort of works if their sum is 64: you can represent all
777 // but the last number in the shifted sub_bucket_count. However, the utility of such a
778 // histogram vs ones whose magnitude here fits in 63 bits is debatable, and it makes
779 // it harder to work through the logic. Sums larger than 64 are totally broken as
780 // leading_zero_count_base would go negative.
781 return Err(CreationError::CannotRepresentSigFigBeyondLow);
782 };
783
784 let sub_bucket_half_count = sub_bucket_count / 2;
785 // sub_bucket_count is always at least 2, so subtraction won't underflow
786 let sub_bucket_mask = (u64::from(sub_bucket_count) - 1) << unit_magnitude;
787
788 let mut h = Histogram {
789 auto_resize: false,
790
791 highest_trackable_value: high,
792 lowest_discernible_value: low,
793 significant_value_digits: sigfig,
794
795 // set by resize() below
796 bucket_count: 0,
797 sub_bucket_count,
798
799 // Establish leading_zero_count_base, used in bucket_index_of() fast path:
800 // subtract the bits that would be used by the largest value in bucket 0.
801 leading_zero_count_base: 64 - unit_magnitude - sub_bucket_count_magnitude,
802 sub_bucket_half_count_magnitude,
803
804 unit_magnitude,
805 sub_bucket_half_count,
806
807 sub_bucket_mask,
808
809 unit_magnitude_mask,
810 max_value: ORIGINAL_MAX,
811 min_non_zero_value: ORIGINAL_MIN,
812
813 total_count: 0,
814 // set by alloc() below
815 counts: Vec::new(),
816 };
817
818 // Already checked that high >= 2*low
819 h.resize(high)
820 .map_err(|_| CreationError::UsizeTypeTooSmall)?;
821 Ok(h)
822 }
823
824 /// Construct a `Histogram` with the same range settings as a given source histogram,
825 /// duplicating the source's start/end timestamps (but NOT its contents).
826 pub fn new_from<F: Counter>(source: &Histogram<F>) -> Histogram<T> {
827 let mut h = Self::new_with_bounds(
828 source.lowest_discernible_value,
829 source.highest_trackable_value,
830 source.significant_value_digits,
831 )
832 .expect("Using another histogram's parameters failed");
833
834 // h.start_time = source.start_time;
835 // h.end_time = source.end_time;
836 h.auto_resize = source.auto_resize;
837 h.counts.resize(source.distinct_values(), T::zero());
838 h
839 }
840
841 // ********************************************************************************************
842 // Recording samples.
843 // ********************************************************************************************
844
845 /// Record `value` in the histogram.
846 ///
847 /// Returns an error if `value` exceeds the highest trackable value and auto-resize is
848 /// disabled.
849 pub fn record(&mut self, value: u64) -> Result<(), RecordError> {
850 self.record_n(value, T::one())
851 }
852
853 /// Record `value` in the histogram, clamped to the range of the histogram.
854 ///
855 /// This method cannot fail, as any values that are too small or too large to be tracked will
856 /// automatically be clamed to be in range. Be aware that this *will* hide extreme outliers
857 /// from the resulting histogram without warning. Since the values are clamped, the histogram
858 /// will also not be resized to accomodate the value, even if auto-resize is enabled.
859 pub fn saturating_record(&mut self, value: u64) {
860 self.saturating_record_n(value, T::one())
861 }
862
863 /// Record multiple samples for a value in the histogram, adding to the value's current count.
864 ///
865 /// `count` is the number of occurrences of this value to record.
866 ///
867 /// Returns an error if `value` cannot be recorded; see `RecordError`.
868 pub fn record_n(&mut self, value: u64, count: T) -> Result<(), RecordError> {
869 self.record_n_inner(value, count, false)
870 }
871
872 /// Record multiple samples for a value in the histogram, each one clamped to the histogram's
873 /// range.
874 ///
875 /// `count` is the number of occurrences of this value to record.
876 ///
877 /// This method cannot fail, as values that are too small or too large to be recorded will
878 /// automatically be clamed to be in range. Be aware that this *will* hide extreme outliers
879 /// from the resulting histogram without warning. Since the values are clamped, the histogram
880 /// will also not be resized to accomodate the value, even if auto-resize is enabled.
881 pub fn saturating_record_n(&mut self, value: u64, count: T) {
882 self.record_n_inner(value, count, true).unwrap()
883 }
884
885 fn record_n_inner(&mut self, mut value: u64, count: T, clamp: bool) -> Result<(), RecordError> {
886 let recorded_without_resize = if let Some(c) = self.mut_at(value) {
887 *c = (*c).saturating_add(count);
888 true
889 } else {
890 false
891 };
892
893 if !recorded_without_resize {
894 if clamp {
895 value = if value > self.highest_trackable_value {
896 self.highest_trackable_value
897 } else {
898 // must be smaller than the lowest_discernible_value, since self.mut_at(value)
899 // failed, and it's not too large (per above).
900 self.lowest_discernible_value
901 };
902
903 let c = self
904 .mut_at(value)
905 .expect("unwrap must succeed since low and high are always representable");
906 *c = c.saturating_add(count);
907 } else if !self.auto_resize {
908 return Err(RecordError::ValueOutOfRangeResizeDisabled);
909 } else {
910 // We're growing the histogram, so new high > old high and is therefore >= 2x low.
911 self.resize(value)
912 .map_err(|_| RecordError::ResizeFailedUsizeTypeTooSmall)?;
913 self.highest_trackable_value =
914 self.highest_equivalent(self.value_for(self.last_index()));
915
916 {
917 let c = self.mut_at(value).expect("value should fit after resize");
918 // after resize, should be no possibility of overflow because this is a new slot
919 *c = (*c)
920 .checked_add(&count)
921 .expect("count overflow after resize");
922 }
923 }
924 }
925
926 self.update_min_max(value);
927 self.total_count = self.total_count.saturating_add(count.as_u64());
928 Ok(())
929 }
930
931 /// Record a value in the histogram while correcting for coordinated omission.
932 ///
933 /// See `record_n_correct` for further documentation.
934 pub fn record_correct(&mut self, value: u64, interval: u64) -> Result<(), RecordError> {
935 self.record_n_correct(value, T::one(), interval)
936 }
937
938 /// Record multiple values in the histogram while correcting for coordinated omission.
939 ///
940 /// To compensate for the loss of sampled values when a recorded value is larger than the
941 /// expected interval between value samples, this method will auto-generate and record an
942 /// additional series of decreasingly-smaller (down to `interval`) value records.
943 ///
944 /// Note: This is a at-recording correction method, as opposed to the post-recording correction
945 /// method provided by `correct_clone`. The two methods are mutually exclusive, and only one of
946 /// the two should be be used on a given data set to correct for the same coordinated omission
947 /// issue.
948 ///
949 /// Returns an error if `value` exceeds the highest trackable value and auto-resize is
950 /// disabled.
951 pub fn record_n_correct(
952 &mut self,
953 value: u64,
954 count: T,
955 interval: u64,
956 ) -> Result<(), RecordError> {
957 self.record_n(value, count)?;
958 if interval == 0 {
959 return Ok(());
960 }
961
962 if value > interval {
963 // only enter loop when calculations will stay non-negative
964 let mut missing_value = value - interval;
965 while missing_value >= interval {
966 self.record_n_inner(missing_value, count, false)?;
967 missing_value -= interval;
968 }
969 }
970
971 Ok(())
972 }
973
974 // ********************************************************************************************
975 // Iterators
976 // ********************************************************************************************
977
978 /// Iterate through histogram values by quantile levels.
979 ///
980 /// The iteration mechanic for this iterator may appear somewhat confusing, but it yields
981 /// fairly pleasing output. The iterator starts with a *quantile step size* of
982 /// `1/halving_period`. For every iteration, it yields a value whose quantile is that much
983 /// greater than the previously emitted quantile (i.e., initially 0, 0.1, 0.2, etc.). Once
984 /// `halving_period` values have been emitted, the quantile step size is halved, and the
985 /// iteration continues.
986 ///
987 /// `ticks_per_half_distance` must be at least 1.
988 ///
989 /// The iterator yields an `iterators::IterationValue` struct.
990 ///
991 /// One subtlety of this iterator is that you can reach a value whose cumulative count yields
992 /// a quantile of 1.0 far sooner than the quantile iteration would reach 1.0. Consider a
993 /// histogram with count 1 at value 1, and count 1000000 at value 1000. At any quantile
994 /// iteration above `1/1000001 = 0.000000999`, iteration will have necessarily proceeded to
995 /// the index for value 1000, which has all the remaining counts, and therefore quantile (for
996 /// the value) of 1.0. This is why `IterationValue` has both `quantile()` and
997 /// `quantile_iterated_to()`. Additionally, to avoid a bunch of unhelpful iterations once
998 /// iteration has reached the last value with non-zero count, quantile iteration will skip
999 /// straight to 1.0 as well.
1000 ///
1001 /// ```
1002 /// use hdrhistogram::Histogram;
1003 /// use hdrhistogram::iterators::IterationValue;
1004 /// let mut hist = Histogram::<u64>::new_with_max(10000, 4).unwrap();
1005 /// for i in 0..10000 {
1006 /// hist += i;
1007 /// }
1008 ///
1009 /// let mut perc = hist.iter_quantiles(1);
1010 ///
1011 /// println!("{:?}", hist.iter_quantiles(1).collect::<Vec<_>>());
1012 ///
1013 /// assert_eq!(
1014 /// perc.next(),
1015 /// Some(IterationValue::new(hist.value_at_quantile(0.0001), 0.0001, 0.0, 1, 1))
1016 /// );
1017 /// // step size = 50
1018 /// assert_eq!(
1019 /// perc.next(),
1020 /// Some(IterationValue::new(hist.value_at_quantile(0.5), 0.5, 0.5, 1, 5000 - 1))
1021 /// );
1022 /// // step size = 25
1023 /// assert_eq!(
1024 /// perc.next(),
1025 /// Some(IterationValue::new(hist.value_at_quantile(0.75), 0.75, 0.75, 1, 2500))
1026 /// );
1027 /// // step size = 12.5
1028 /// assert_eq!(
1029 /// perc.next(),
1030 /// Some(IterationValue::new(hist.value_at_quantile(0.875), 0.875, 0.875, 1, 1250))
1031 /// );
1032 /// // step size = 6.25
1033 /// assert_eq!(
1034 /// perc.next(),
1035 /// Some(IterationValue::new(hist.value_at_quantile(0.9375), 0.9375, 0.9375, 1, 625))
1036 /// );
1037 /// // step size = 3.125
1038 /// assert_eq!(
1039 /// perc.next(),
1040 /// Some(IterationValue::new(hist.value_at_quantile(0.9688), 0.9688, 0.96875, 1, 313))
1041 /// );
1042 /// // etc...
1043 /// ```
1044 pub fn iter_quantiles(
1045 &self,
1046 ticks_per_half_distance: u32,
1047 ) -> HistogramIterator<T, iterators::quantile::Iter<T>> {
1048 // TODO upper bound on ticks per half distance? 2^31 ticks is not useful
1049 iterators::quantile::Iter::new(self, ticks_per_half_distance)
1050 }
1051
1052 /// Iterates through histogram values using linear value steps. The iteration is performed in
1053 /// steps of size `step`, each one yielding the count for all values in the preceeding value
1054 /// range of size `step`. The iterator terminates when all recorded histogram values are
1055 /// exhausted.
1056 ///
1057 /// The iterator yields an `iterators::IterationValue` struct.
1058 ///
1059 /// ```
1060 /// use hdrhistogram::Histogram;
1061 /// use hdrhistogram::iterators::IterationValue;
1062 /// let mut hist = Histogram::<u64>::new_with_max(1000, 3).unwrap();
1063 /// hist += 100;
1064 /// hist += 500;
1065 /// hist += 800;
1066 /// hist += 850;
1067 ///
1068 /// let mut perc = hist.iter_linear(100);
1069 /// assert_eq!(
1070 /// perc.next(),
1071 /// Some(IterationValue::new(99, hist.quantile_below(99), hist.quantile_below(99), 0, 0))
1072 /// );
1073 /// assert_eq!(
1074 /// perc.next(),
1075 /// Some(IterationValue::new(199, hist.quantile_below(199), hist.quantile_below(199), 0, 1))
1076 /// );
1077 /// assert_eq!(
1078 /// perc.next(),
1079 /// Some(IterationValue::new(299, hist.quantile_below(299), hist.quantile_below(299), 0, 0))
1080 /// );
1081 /// assert_eq!(
1082 /// perc.next(),
1083 /// Some(IterationValue::new(399, hist.quantile_below(399), hist.quantile_below(399), 0, 0))
1084 /// );
1085 /// assert_eq!(
1086 /// perc.next(),
1087 /// Some(IterationValue::new(499, hist.quantile_below(499), hist.quantile_below(499), 0, 0))
1088 /// );
1089 /// assert_eq!(
1090 /// perc.next(),
1091 /// Some(IterationValue::new(599, hist.quantile_below(599), hist.quantile_below(599), 0, 1))
1092 /// );
1093 /// assert_eq!(
1094 /// perc.next(),
1095 /// Some(IterationValue::new(699, hist.quantile_below(699), hist.quantile_below(699), 0, 0))
1096 /// );
1097 /// assert_eq!(
1098 /// perc.next(),
1099 /// Some(IterationValue::new(799, hist.quantile_below(799), hist.quantile_below(799), 0, 0))
1100 /// );
1101 /// assert_eq!(
1102 /// perc.next(),
1103 /// Some(IterationValue::new(899, hist.quantile_below(899), hist.quantile_below(899), 0, 2))
1104 /// );
1105 /// assert_eq!(perc.next(), None);
1106 /// ```
1107 pub fn iter_linear(&self, step: u64) -> HistogramIterator<T, iterators::linear::Iter<T>> {
1108 iterators::linear::Iter::new(self, step)
1109 }
1110
1111 /// Iterates through histogram values at logarithmically increasing levels. The iteration is
1112 /// performed in steps that start at `start` and increase exponentially according to `exp`. The
1113 /// iterator terminates when all recorded histogram values are exhausted.
1114 ///
1115 /// The iterator yields an `iterators::IterationValue` struct.
1116 ///
1117 /// ```
1118 /// use hdrhistogram::Histogram;
1119 /// use hdrhistogram::iterators::IterationValue;
1120 /// let mut hist = Histogram::<u64>::new_with_max(1000, 3).unwrap();
1121 /// hist += 100;
1122 /// hist += 500;
1123 /// hist += 800;
1124 /// hist += 850;
1125 ///
1126 /// let mut perc = hist.iter_log(1, 10.0);
1127 /// assert_eq!(
1128 /// perc.next(),
1129 /// Some(IterationValue::new(0, hist.quantile_below(0), hist.quantile_below(0), 0, 0))
1130 /// );
1131 /// assert_eq!(
1132 /// perc.next(),
1133 /// Some(IterationValue::new(9, hist.quantile_below(9), hist.quantile_below(9), 0, 0))
1134 /// );
1135 /// assert_eq!(
1136 /// perc.next(),
1137 /// Some(IterationValue::new(99, hist.quantile_below(99), hist.quantile_below(99), 0, 0))
1138 /// );
1139 /// assert_eq!(
1140 /// perc.next(),
1141 /// Some(IterationValue::new(999, hist.quantile_below(999), hist.quantile_below(999), 0, 4))
1142 /// );
1143 /// assert_eq!(perc.next(), None);
1144 /// ```
1145 pub fn iter_log(&self, start: u64, exp: f64) -> HistogramIterator<T, iterators::log::Iter<T>> {
1146 iterators::log::Iter::new(self, start, exp)
1147 }
1148
1149 /// Iterates through all recorded histogram values using the finest granularity steps supported
1150 /// by the underlying representation. The iteration steps through all non-zero recorded value
1151 /// counts, and terminates when all recorded histogram values are exhausted.
1152 ///
1153 /// The iterator yields an `iterators::IterationValue` struct.
1154 ///
1155 /// ```
1156 /// use hdrhistogram::Histogram;
1157 /// use hdrhistogram::iterators::IterationValue;
1158 /// let mut hist = Histogram::<u64>::new_with_max(1000, 3).unwrap();
1159 /// hist += 100;
1160 /// hist += 500;
1161 /// hist += 800;
1162 /// hist += 850;
1163 ///
1164 /// let mut perc = hist.iter_recorded();
1165 /// assert_eq!(
1166 /// perc.next(),
1167 /// Some(IterationValue::new(100, hist.quantile_below(100), hist.quantile_below(100), 1, 1))
1168 /// );
1169 /// assert_eq!(
1170 /// perc.next(),
1171 /// Some(IterationValue::new(500, hist.quantile_below(500), hist.quantile_below(500), 1, 1))
1172 /// );
1173 /// assert_eq!(
1174 /// perc.next(),
1175 /// Some(IterationValue::new(800, hist.quantile_below(800), hist.quantile_below(800), 1, 1))
1176 /// );
1177 /// assert_eq!(
1178 /// perc.next(),
1179 /// Some(IterationValue::new(850, hist.quantile_below(850), hist.quantile_below(850), 1, 1))
1180 /// );
1181 /// assert_eq!(perc.next(), None);
1182 /// ```
1183 pub fn iter_recorded(&self) -> HistogramIterator<T, iterators::recorded::Iter> {
1184 iterators::recorded::Iter::new(self)
1185 }
1186
1187 /// Iterates through all histogram values using the finest granularity steps supported by the
1188 /// underlying representation. The iteration steps through all possible unit value levels,
1189 /// regardless of whether or not there were recorded values for that value level, and
1190 /// terminates when all recorded histogram values are exhausted.
1191 ///
1192 /// The iterator yields an `iterators::IterationValue` struct.
1193 ///
1194 /// ```
1195 /// use hdrhistogram::Histogram;
1196 /// use hdrhistogram::iterators::IterationValue;
1197 /// let mut hist = Histogram::<u64>::new_with_max(10, 1).unwrap();
1198 /// hist += 1;
1199 /// hist += 5;
1200 /// hist += 8;
1201 ///
1202 /// let mut perc = hist.iter_all();
1203 /// assert_eq!(perc.next(), Some(IterationValue::new(0, 0.0, 0.0, 0, 0)));
1204 /// assert_eq!(
1205 /// perc.next(),
1206 /// Some(IterationValue::new(1, hist.quantile_below(1), hist.quantile_below(1), 1, 1))
1207 /// );
1208 /// assert_eq!(
1209 /// perc.next(),
1210 /// Some(IterationValue::new(2, hist.quantile_below(2), hist.quantile_below(2), 0, 0))
1211 /// );
1212 /// assert_eq!(
1213 /// perc.next(),
1214 /// Some(IterationValue::new(3, hist.quantile_below(3), hist.quantile_below(3), 0, 0))
1215 /// );
1216 /// assert_eq!(
1217 /// perc.next(),
1218 /// Some(IterationValue::new(4, hist.quantile_below(4), hist.quantile_below(4), 0, 0))
1219 /// );
1220 /// assert_eq!(
1221 /// perc.next(),
1222 /// Some(IterationValue::new(5, hist.quantile_below(5), hist.quantile_below(5), 1, 1))
1223 /// );
1224 /// assert_eq!(
1225 /// perc.next(),
1226 /// Some(IterationValue::new(6, hist.quantile_below(6), hist.quantile_below(6), 0, 0))
1227 /// );
1228 /// assert_eq!(
1229 /// perc.next(),
1230 /// Some(IterationValue::new(7, hist.quantile_below(7), hist.quantile_below(7), 0, 0))
1231 /// );
1232 /// assert_eq!(
1233 /// perc.next(),
1234 /// Some(IterationValue::new(8, hist.quantile_below(8), hist.quantile_below(8), 1, 1))
1235 /// );
1236 /// assert_eq!(
1237 /// perc.next(),
1238 /// Some(IterationValue::new(9, hist.quantile_below(9), hist.quantile_below(9), 0, 0))
1239 /// );
1240 /// assert_eq!(perc.next(), Some(IterationValue::new(10, 1.0, 1.0, 0, 0)));
1241 /// ```
1242 pub fn iter_all(&self) -> HistogramIterator<T, iterators::all::Iter> {
1243 iterators::all::Iter::new(self)
1244 }
1245
1246 // ********************************************************************************************
1247 // Data statistics
1248 // ********************************************************************************************
1249
1250 /// Get the lowest recorded value level in the histogram.
1251 /// If the histogram has no recorded values, the value returned will be 0.
1252 pub fn min(&self) -> u64 {
1253 if self.total_count == 0
1254 || self
1255 .count_at_index(0)
1256 .expect("counts array must be non-empty")
1257 != T::zero()
1258 {
1259 0
1260 } else {
1261 self.min_nz()
1262 }
1263 }
1264
1265 /// Get the highest recorded value level in the histogram.
1266 /// If the histogram has no recorded values, the value returned is undefined.
1267 pub fn max(&self) -> u64 {
1268 if self.max_value == ORIGINAL_MAX {
1269 ORIGINAL_MAX
1270 } else {
1271 self.highest_equivalent(self.max_value)
1272 }
1273 }
1274
1275 /// Get the lowest recorded non-zero value level in the histogram.
1276 /// If the histogram has no recorded values, the value returned is `u64::max_value()`.
1277 pub fn min_nz(&self) -> u64 {
1278 if self.min_non_zero_value == ORIGINAL_MIN {
1279 ORIGINAL_MIN
1280 } else {
1281 self.lowest_equivalent(self.min_non_zero_value)
1282 }
1283 }
1284
1285 /// Determine if two values are equivalent with the histogram's resolution. Equivalent here
1286 /// means that value samples recorded for any two equivalent values are counted in a common
1287 /// total count.
1288 pub fn equivalent(&self, value1: u64, value2: u64) -> bool {
1289 self.lowest_equivalent(value1) == self.lowest_equivalent(value2)
1290 }
1291
1292 /// Get the computed mean value of all recorded values in the histogram.
1293 pub fn mean(&self) -> f64 {
1294 if self.total_count == 0 {
1295 return 0.0;
1296 }
1297
1298 self.iter_recorded().fold(0.0_f64, |total, v| {
1299 // TODO overflow?
1300 total
1301 + self.median_equivalent(v.value_iterated_to()) as f64 * v.count_at_value().as_f64()
1302 / self.total_count as f64
1303 })
1304 }
1305
1306 /// Get the computed standard deviation of all recorded values in the histogram
1307 pub fn stdev(&self) -> f64 {
1308 if self.total_count == 0 {
1309 return 0.0;
1310 }
1311
1312 let mean = self.mean();
1313 let geom_dev_tot = self.iter_recorded().fold(0.0_f64, |gdt, v| {
1314 let dev = self.median_equivalent(v.value_iterated_to()) as f64 - mean;
1315 gdt + (dev * dev) * v.count_since_last_iteration() as f64
1316 });
1317
1318 (geom_dev_tot / self.total_count as f64).sqrt()
1319 }
1320
1321 /// Get the value at a given percentile.
1322 ///
1323 /// This is simply `value_at_quantile` multiplied by 100.0. For best floating-point precision,
1324 /// use `value_at_quantile` directly.
1325 pub fn value_at_percentile(&self, percentile: f64) -> u64 {
1326 self.value_at_quantile(percentile / 100.0)
1327 }
1328
1329 /// Get the value at a given quantile.
1330 ///
1331 /// When the given quantile is > 0.0, the value returned is the value that the given
1332 /// percentage of the overall recorded value entries in the histogram are either smaller than
1333 /// or equivalent to. When the given quantile is 0.0, the value returned is the value that
1334 /// all value entries in the histogram are either larger than or equivalent to.
1335 ///
1336 /// Two values are considered "equivalent" if `self.equivalent` would return true.
1337 ///
1338 /// If the total count of the histogram has exceeded `u64::max_value()`, this will return
1339 /// inaccurate results.
1340 pub fn value_at_quantile(&self, quantile: f64) -> u64 {
1341 // Cap at 1.0
1342 let quantile = if quantile > 1.0 { 1.0 } else { quantile };
1343
1344 let fractional_count = quantile * self.total_count as f64;
1345 // If we're part-way into the next highest int, we should use that as the count
1346 let mut count_at_quantile = fractional_count.ceil() as u64;
1347
1348 // Make sure we at least reach the first recorded entry
1349 if count_at_quantile == 0 {
1350 count_at_quantile = 1;
1351 }
1352
1353 let mut total_to_current_index: u64 = 0;
1354 for i in 0..self.counts.len() {
1355 // Direct indexing is safe; indexes must reside in counts array.
1356 // TODO overflow
1357 total_to_current_index += self.counts[i].as_u64();
1358 if total_to_current_index >= count_at_quantile {
1359 let value_at_index = self.value_for(i);
1360 return if quantile == 0.0 {
1361 self.lowest_equivalent(value_at_index)
1362 } else {
1363 self.highest_equivalent(value_at_index)
1364 };
1365 }
1366 }
1367
1368 0
1369 }
1370
1371 /// Get the percentile of samples at and below a given value.
1372 ///
1373 /// This is simply `quantile_below* multiplied by 100.0. For best floating-point precision, use
1374 /// `quantile_below` directly.
1375 pub fn percentile_below(&self, value: u64) -> f64 {
1376 self.quantile_below(value) * 100.0
1377 }
1378
1379 /// Get the quantile of samples at or below a given value.
1380 ///
1381 /// The value returned is the quantile of values recorded in the histogram that are
1382 /// smaller than or equivalent to the given value.
1383 ///
1384 /// Two values are considered "equivalent" if `self.equivalent` would return true.
1385 ///
1386 /// If the value is larger than the maximum representable value, it will be clamped to the
1387 /// max representable value.
1388 ///
1389 /// If the total count of the histogram has reached `u64::max_value()`, this will return
1390 /// inaccurate results.
1391 pub fn quantile_below(&self, value: u64) -> f64 {
1392 if self.total_count == 0 {
1393 return 1.0;
1394 }
1395
1396 let target_index = self.index_for_or_last(value);
1397 // TODO use RangeInclusive when it's stable to avoid checked_add
1398 let end = target_index.checked_add(1).expect("usize overflow");
1399 // count the smaller half
1400 let (slice, lower) = if target_index < self.counts.len() / 2 {
1401 (&self.counts[0..end], true)
1402 } else {
1403 (&self.counts[end..], false)
1404 };
1405 let iter = slice.iter().map(Counter::as_u64);
1406
1407 let total_to_current_index = if self.total_count < u64::MAX {
1408 // if the total didn't saturate then any partial count shouldn't either.
1409 // iter::sum optimizes better than the saturating_add fallback below
1410 iter.sum::<u64>()
1411 } else {
1412 iter.fold(0u64, u64::saturating_add)
1413 };
1414 let total_to_current_index = if lower {
1415 total_to_current_index
1416 } else {
1417 self.total_count - total_to_current_index
1418 };
1419 total_to_current_index.as_f64() / self.total_count as f64
1420 }
1421
1422 /// Get the count of recorded values within a range of value levels (inclusive to within the
1423 /// histogram's resolution).
1424 ///
1425 /// `low` gives the lower value bound on the range for which to provide the recorded count.
1426 /// Will be rounded down with `lowest_equivalent`. Similarly, `high` gives the higher value
1427 /// bound on the range, and will be rounded up with `highest_equivalent`. The function returns
1428 /// the total count of values recorded in the histogram within the value range that is `>=
1429 /// lowest_equivalent(low)` and `<= highest_equivalent(high)`.
1430 ///
1431 /// If either value is larger than the maximum representable value, it will be clamped to the
1432 /// max representable value.
1433 ///
1434 /// The count will saturate at u64::max_value().
1435 pub fn count_between(&self, low: u64, high: u64) -> u64 {
1436 let low_index = self.index_for_or_last(low);
1437 let high_index = self.index_for_or_last(high);
1438 // TODO use RangeInclusive when it's stable to avoid checked_add
1439 (low_index..high_index.checked_add(1).expect("usize overflow"))
1440 .map(|i| self.count_at_index(i).expect("index is <= last_index()"))
1441 .fold(0_u64, |t, v| t.saturating_add(v.as_u64()))
1442 }
1443
1444 /// Get the count of recorded values at a specific value (to within the histogram resolution at
1445 /// the value level).
1446 ///
1447 /// The count is computed across values recorded in the histogram that are within the value
1448 /// range that is `>= lowest_equivalent(value)` and `<= highest_equivalent(value)`.
1449 ///
1450 /// If the value is larger than the maximum representable value, it will be clamped to the
1451 /// max representable value.
1452 pub fn count_at(&self, value: u64) -> T {
1453 self.count_at_index(self.index_for_or_last(value))
1454 .expect("index is <= last_index()")
1455 }
1456
1457 // ********************************************************************************************
1458 // Public helpers
1459 // ********************************************************************************************
1460
1461 /// Get the lowest value that is equivalent to the given value within the histogram's
1462 /// resolution. Equivalent here means that value samples recorded for any two equivalent values
1463 /// are counted in a common total count.
1464 pub fn lowest_equivalent(&self, value: u64) -> u64 {
1465 let bucket_index = self.bucket_for(value);
1466 let sub_bucket_index = self.sub_bucket_for(value, bucket_index);
1467 self.value_from_loc(bucket_index, sub_bucket_index)
1468 }
1469
1470 /// Get the highest value that is equivalent to the given value within the histogram's
1471 /// resolution. Equivalent here means that value samples recorded for any two equivalent values
1472 /// are counted in a common total count.
1473 ///
1474 /// Note that the return value is capped at `u64::max_value()`.
1475 pub fn highest_equivalent(&self, value: u64) -> u64 {
1476 if value == u64::max_value() {
1477 u64::max_value()
1478 } else {
1479 self.next_non_equivalent(value) - 1
1480 }
1481 }
1482
1483 /// Get a value that lies in the middle (rounded up) of the range of values equivalent the
1484 /// given value. Equivalent here means that value samples recorded for any two equivalent
1485 /// values are counted in a common total count.
1486 ///
1487 /// Note that the return value is capped at `u64::max_value()`.
1488 pub fn median_equivalent(&self, value: u64) -> u64 {
1489 // adding half of the range to the bottom of the range shouldn't overflow
1490 self.lowest_equivalent(value)
1491 .checked_add(self.equivalent_range(value) >> 1)
1492 .expect("median equivalent should not overflow")
1493 }
1494
1495 /// Get the next value that is *not* equivalent to the given value within the histogram's
1496 /// resolution. Equivalent means that value samples recorded for any two equivalent values are
1497 /// counted in a common total count.
1498 ///
1499 /// Note that the return value is capped at `u64::max_value()`.
1500 pub fn next_non_equivalent(&self, value: u64) -> u64 {
1501 self.lowest_equivalent(value)
1502 .saturating_add(self.equivalent_range(value))
1503 }
1504
1505 /// Get the size (in value units) of the range of values that are equivalent to the given value
1506 /// within the histogram's resolution. Equivalent here means that value samples recorded for
1507 /// any two equivalent values are counted in a common total count.
1508 pub fn equivalent_range(&self, value: u64) -> u64 {
1509 let bucket_index = self.bucket_for(value);
1510 1_u64 << (self.unit_magnitude + bucket_index)
1511 }
1512
1513 /// Turn this histogram into a [`SyncHistogram`].
1514 #[cfg(feature = "sync")]
1515 pub fn into_sync(self) -> SyncHistogram<T> {
1516 SyncHistogram::from(self)
1517 }
1518
1519 // ********************************************************************************************
1520 // Internal helpers
1521 // ********************************************************************************************
1522
1523 /// Computes the matching histogram value for the given histogram bin.
1524 ///
1525 /// `index` must be no larger than `u32::max_value()`; no possible histogram uses that much
1526 /// storage anyway. So, any index that comes from a valid histogram location will be safe.
1527 ///
1528 /// If the index is for a position beyond what this histogram is configured for, the correct
1529 /// corresponding value will be returned, but of course it won't have a corresponding count.
1530 ///
1531 /// If the index maps to a value beyond `u64::max_value()`, the result will be garbage.
1532 fn value_for(&self, index: usize) -> u64 {
1533 // Dividing by sub bucket half count will yield 1 in top half of first bucket, 2 in
1534 // in the top half (i.e., the only half that's used) of the 2nd bucket, etc, so subtract 1
1535 // to get 0-indexed bucket indexes. This will be -1 for the bottom half of the first bucket.
1536 let mut bucket_index = (index >> self.sub_bucket_half_count_magnitude) as isize - 1;
1537
1538 // Calculate the remainder of dividing by sub_bucket_half_count, shifted into the top half
1539 // of the corresponding bucket. This will (temporarily) map indexes in the lower half of
1540 // first bucket into the top half.
1541
1542 // The subtraction won't underflow because half count is always at least 1.
1543 // TODO precalculate sub_bucket_half_count mask if benchmarks show improvement
1544 let mut sub_bucket_index = ((index.to_u32().expect("index must fit in u32"))
1545 & (self.sub_bucket_half_count - 1))
1546 + self.sub_bucket_half_count;
1547 if bucket_index < 0 {
1548 // lower half of first bucket case; move sub bucket index back
1549 sub_bucket_index -= self.sub_bucket_half_count;
1550 bucket_index = 0;
1551 }
1552 self.value_from_loc(bucket_index as u8, sub_bucket_index)
1553 }
1554
1555 /// Returns count at index, or None if out of bounds
1556 fn count_at_index(&self, index: usize) -> Option<T> {
1557 self.counts.get(index).cloned()
1558 }
1559
1560 /// Returns an error if the index doesn't exist.
1561 #[cfg(feature = "serialization")]
1562 fn set_count_at_index(&mut self, index: usize, count: T) -> Result<(), ()> {
1563 let r = self.counts.get_mut(index).ok_or(())?;
1564 *r = count;
1565 Ok(())
1566 }
1567
1568 /// Compute the lowest (and therefore highest precision) bucket index whose sub-buckets can
1569 /// represent the value.
1570 #[inline]
1571 fn bucket_for(&self, value: u64) -> u8 {
1572 // Calculates the number of powers of two by which the value is greater than the biggest
1573 // value that fits in bucket 0. This is the bucket index since each successive bucket can
1574 // hold a value 2x greater. The mask maps small values to bucket 0.
1575 // Will not underflow because sub_bucket_mask caps the leading zeros to no more than
1576 // leading_zero_count_base.
1577 self.leading_zero_count_base - (value | self.sub_bucket_mask).leading_zeros() as u8
1578 }
1579
1580 /// Compute the position inside a bucket at which the given value should be recorded, indexed
1581 /// from position 0 in the bucket (in the first half, which is not used past the first bucket).
1582 /// For bucket_index > 0, the result will be in the top half of the bucket.
1583 #[inline]
1584 fn sub_bucket_for(&self, value: u64, bucket_index: u8) -> u32 {
1585 // Since bucket_index is simply how many powers of 2 greater value is than what will fit in
1586 // bucket 0 (that is, what will fit in [0, sub_bucket_count)), we shift off that many
1587 // powers of two, and end up with a number in [0, sub_bucket_count).
1588 // For bucket_index 0, this is just value. For bucket index k > 0, we know value won't fit
1589 // in bucket (k - 1) by definition, so this calculation won't end up in the lower half of
1590 // [0, sub_bucket_count) because that would mean it would also fit in bucket (k - 1).
1591 // As unit magnitude grows, the maximum possible bucket index should shrink because it is
1592 // based off of sub_bucket_mask, so this shouldn't lead to an overlarge shift.
1593 (value >> (bucket_index + self.unit_magnitude)) as u32
1594 }
1595
1596 /// Compute the value corresponding to the provided bucket and sub bucket indices.
1597 /// The indices given must map to an actual u64; providing contrived indices that would map to
1598 /// a value larger than u64::max_value() will yield garbage.
1599 #[inline]
1600 fn value_from_loc(&self, bucket_index: u8, sub_bucket_index: u32) -> u64 {
1601 // Sum won't overflow; bucket_index and unit_magnitude are both <= 64.
1602 // However, the resulting shift may overflow given bogus input, e.g. if unit magnitude is
1603 // large and the input sub_bucket_index is for an entry in the counts index that shouldn't
1604 // be used (because this calculation will overflow).
1605 u64::from(sub_bucket_index) << (bucket_index + self.unit_magnitude)
1606 }
1607
1608 /// Find the number of buckets needed such that `value` is representable.
1609 fn buckets_to_cover(&self, value: u64) -> u8 {
1610 // Shift won't overflow because sub_bucket_magnitude + unit_magnitude <= 63.
1611 // the k'th bucket can express from 0 * 2^k to sub_bucket_count * 2^k in units of 2^k
1612 let mut smallest_untrackable_value =
1613 u64::from(self.sub_bucket_count) << self.unit_magnitude;
1614
1615 // always have at least 1 bucket
1616 let mut buckets_needed = 1;
1617 while smallest_untrackable_value <= value {
1618 if smallest_untrackable_value > u64::max_value() / 2 {
1619 // next shift will overflow, meaning that bucket could represent values up to ones
1620 // greater than i64::max_value, so it's the last bucket
1621 return buckets_needed + 1;
1622 }
1623 smallest_untrackable_value <<= 1;
1624 buckets_needed += 1;
1625 }
1626 buckets_needed
1627 }
1628
1629 /// Compute the actual number of bins to use for the given bucket count (that is, including the
1630 /// sub-buckets within each top-level bucket).
1631 ///
1632 /// If we have `N` such that `sub_bucket_count * 2^N > high`, we need storage for `N+1` buckets,
1633 /// each with enough slots to hold the top half of the `sub_bucket_count` (the lower half is
1634 /// covered by previous buckets), and the +1 being used for the lower half of the 0'th bucket.
1635 /// Or, equivalently, we need 1 more bucket to capture the max value if we consider the
1636 /// sub-bucket length to be halved.
1637 fn num_bins(&self, number_of_buckets: u8) -> u32 {
1638 (u32::from(number_of_buckets) + 1) * (self.sub_bucket_half_count)
1639 }
1640
1641 /// Resize the underlying counts array such that it can cover the given `high` value.
1642 ///
1643 /// `high` must be at least 2x the lowest discernible value.
1644 ///
1645 /// Returns an error if the new size cannot be represented as a `usize`.
1646 fn resize(&mut self, high: u64) -> Result<(), UsizeTypeTooSmall> {
1647 // will not overflow because lowest_discernible_value must be at least as small as
1648 // u64::max_value() / 2 to have passed initial validation
1649 assert!(
1650 high >= 2 * self.lowest_discernible_value,
1651 "highest trackable value must be >= (2 * lowest discernible value)"
1652 );
1653
1654 // establish counts array length:
1655 let buckets_needed = self.buckets_to_cover(high);
1656 let len = self
1657 .num_bins(buckets_needed)
1658 .to_usize()
1659 .ok_or(UsizeTypeTooSmall)?;
1660
1661 // establish exponent range needed to support the trackable value with no overflow:
1662 self.bucket_count = buckets_needed;
1663
1664 // establish the new highest trackable value:
1665 self.highest_trackable_value = high;
1666
1667 // expand counts to also hold the new counts
1668 self.counts.resize(len, T::zero());
1669 Ok(())
1670 }
1671
1672 /// Set internally tracked max_value to new value if new value is greater than current one.
1673 fn update_max(&mut self, value: u64) {
1674 let internal_value = value | self.unit_magnitude_mask; // Max unit-equivalent value
1675 if internal_value > self.max_value {
1676 self.max_value = internal_value;
1677 }
1678 }
1679
1680 /// Set internally tracked min_non_zero_value to new value if new value is smaller than current
1681 /// one.
1682 fn update_min(&mut self, value: u64) {
1683 if value <= self.unit_magnitude_mask {
1684 return; // Unit-equivalent to 0.
1685 }
1686
1687 let internal_value = value & !self.unit_magnitude_mask; // Min unit-equivalent value
1688 if internal_value < self.min_non_zero_value {
1689 self.min_non_zero_value = internal_value;
1690 }
1691 }
1692
1693 fn update_min_max(&mut self, value: u64) {
1694 if value > self.max_value {
1695 self.update_max(value);
1696 }
1697 if value < self.min_non_zero_value && value != 0 {
1698 self.update_min(value);
1699 }
1700 }
1701
1702 fn reset_max(&mut self, max: u64) {
1703 self.max_value = max | self.unit_magnitude_mask; // Max unit-equivalent value
1704 }
1705
1706 fn reset_min(&mut self, min: u64) {
1707 let internal_value = min & !self.unit_magnitude_mask; // Min unit-equivalent value
1708 self.min_non_zero_value = if min == u64::max_value() {
1709 min
1710 } else {
1711 internal_value
1712 };
1713 }
1714
1715 /// Recalculate min, max, total_count.
1716 fn restat(&mut self, length_to_scan: usize) {
1717 self.reset_max(ORIGINAL_MAX);
1718 self.reset_min(ORIGINAL_MIN);
1719
1720 let mut restat_state = RestatState::new();
1721
1722 assert!(length_to_scan <= self.counts.len());
1723 for i in 0..length_to_scan {
1724 // Direct indexing safe because of assert above
1725 let count = self.counts[i];
1726 if count != T::zero() {
1727 restat_state.on_nonzero_count(i, count);
1728 }
1729 }
1730
1731 restat_state.update_histogram(self);
1732 }
1733}
1734
1735/// Stores the state to calculate the max, min, and total count for a histogram by iterating across
1736/// the counts.
1737struct RestatState<T: Counter> {
1738 max_index: Option<usize>,
1739 min_index: Option<usize>,
1740 total_count: u64,
1741 phantom: std::marker::PhantomData<T>,
1742}
1743
1744impl<T: Counter> RestatState<T> {
1745 fn new() -> RestatState<T> {
1746 RestatState {
1747 max_index: None,
1748 min_index: None,
1749 total_count: 0,
1750 phantom: std::marker::PhantomData,
1751 }
1752 }
1753
1754 /// Should be called on every non-zero count found
1755 #[inline]
1756 fn on_nonzero_count(&mut self, index: usize, count: T) {
1757 self.total_count = self.total_count.saturating_add(count.as_u64());
1758
1759 self.max_index = Some(index);
1760
1761 if self.min_index.is_none() && index != 0 {
1762 self.min_index = Some(index);
1763 }
1764 }
1765
1766 /// Write updated min, max, total_count into histogram.
1767 /// Called once all counts have been iterated across.
1768 fn update_histogram(self, h: &mut Histogram<T>) {
1769 if let Some(max_i) = self.max_index {
1770 let max = h.highest_equivalent(h.value_for(max_i));
1771 h.update_max(max);
1772 }
1773 if let Some(min_i) = self.min_index {
1774 let min = h.value_for(min_i);
1775 h.update_min(min);
1776 }
1777
1778 h.total_count = self.total_count;
1779 }
1780}
1781
1782// ********************************************************************************************
1783// Trait implementations
1784// ********************************************************************************************
1785
1786// make it more ergonomic to add and subtract histograms
1787impl<'a, T: Counter> AddAssign<&'a Histogram<T>> for Histogram<T> {
1788 fn add_assign(&mut self, source: &'a Histogram<T>) {
1789 self.add(source).unwrap();
1790 }
1791}
1792
1793impl<T: Counter> AddAssign<Histogram<T>> for Histogram<T> {
1794 fn add_assign(&mut self, source: Histogram<T>) {
1795 self.add(&source).unwrap();
1796 }
1797}
1798
1799impl<T: Counter> Add<Histogram<T>> for Histogram<T> {
1800 type Output = Histogram<T>;
1801 fn add(mut self, rhs: Histogram<T>) -> Self::Output {
1802 self += rhs;
1803 self
1804 }
1805}
1806
1807impl<'a, T: Counter> Add<&'a Histogram<T>> for Histogram<T> {
1808 type Output = Histogram<T>;
1809 fn add(mut self, rhs: &'a Histogram<T>) -> Self::Output {
1810 self += rhs;
1811 self
1812 }
1813}
1814
1815use std::iter;
1816impl<T: Counter> iter::Sum for Histogram<T> {
1817 fn sum<I>(mut iter: I) -> Self
1818 where
1819 I: Iterator<Item = Self>,
1820 {
1821 match iter.next() {
1822 Some(mut first) => {
1823 for h in iter {
1824 first += h;
1825 }
1826 first
1827 }
1828 None => Histogram::new(3).expect("histograms with sigfig=3 should always work"),
1829 }
1830 }
1831}
1832
1833impl<'a, T: Counter> SubAssign<&'a Histogram<T>> for Histogram<T> {
1834 fn sub_assign(&mut self, other: &'a Histogram<T>) {
1835 self.subtract(other).unwrap();
1836 }
1837}
1838
1839impl<T: Counter> SubAssign<Histogram<T>> for Histogram<T> {
1840 fn sub_assign(&mut self, source: Histogram<T>) {
1841 self.subtract(&source).unwrap();
1842 }
1843}
1844
1845impl<T: Counter> Sub<Histogram<T>> for Histogram<T> {
1846 type Output = Histogram<T>;
1847 fn sub(mut self, rhs: Histogram<T>) -> Self::Output {
1848 self -= rhs;
1849 self
1850 }
1851}
1852
1853impl<'a, T: Counter> Sub<&'a Histogram<T>> for Histogram<T> {
1854 type Output = Histogram<T>;
1855 fn sub(mut self, rhs: &'a Histogram<T>) -> Self::Output {
1856 self -= rhs;
1857 self
1858 }
1859}
1860
1861// make it more ergonomic to record samples
1862impl<T: Counter> AddAssign<u64> for Histogram<T> {
1863 fn add_assign(&mut self, value: u64) {
1864 self.record(value).unwrap();
1865 }
1866}
1867
1868// allow comparing histograms
1869impl<T: Counter, F: Counter> PartialEq<Histogram<F>> for Histogram<T>
1870where
1871 T: PartialEq<F>,
1872{
1873 fn eq(&self, other: &Histogram<F>) -> bool {
1874 if self.lowest_discernible_value != other.lowest_discernible_value
1875 || self.significant_value_digits != other.significant_value_digits
1876 {
1877 return false;
1878 }
1879 if self.total_count != other.total_count {
1880 return false;
1881 }
1882 if self.max() != other.max() {
1883 return false;
1884 }
1885 if self.min_nz() != other.min_nz() {
1886 return false;
1887 }
1888
1889 (0..self.counts.len()).all(|i| {
1890 self.counts[i]
1891 == match other.count_at_index(i) {
1892 Some(c) => c,
1893 None => return false,
1894 }
1895 })
1896 }
1897}
1898
1899// /**
1900// * Indicate whether or not the histogram is capable of supporting auto-resize functionality.
1901// * Note that this is an indication that enabling auto-resize by calling set_auto_resize() is
1902// * allowed, and NOT that the histogram will actually auto-resize. Use is_auto_resize() to
1903// * determine if the histogram is in auto-resize mode.
1904// * @return auto_resize setting
1905// */
1906// public boolean supports_auto_resize() { return true; }
1907
1908// TODO: shift
1909// TODO: hash
1910
1911#[path = "tests/tests.rs"]
1912#[cfg(test)]
1913mod tests;
1914
1915mod core;
1916pub mod errors;
1917#[cfg(feature = "serialization")]
1918pub mod serialization;
1919pub use self::core::counter::*;
1920pub use errors::*;
1921#[cfg(feature = "sync")]
1922pub mod sync;
1923#[cfg(feature = "sync")]
1924pub use sync::SyncHistogram;