=head1 NAME

rrdtool create - Set up a new Round Robin Database
=head1 SYNOPSIS

B<rrdtool> B<create> I<filename>
S<[B<--start>|B<-b> I<start time>]>
S<[B<--step>|B<-s> I<step>]>
S<[B<DS:>I<ds-name>B<:>I<DST>B<:>I<dst arguments>]>
S<[B<RRA:>I<CF>B<:>I<cf arguments>]>

=head1 DESCRIPTION

The create function of RRDtool lets you set up new Round Robin
Database (B<RRD>) files. The file is created at its final, full size
and filled with I<*UNKNOWN*> data.

=over 8

=item I<filename>

The name of the B<RRD> you want to create. B<RRD> files should end
with the extension F<.rrd>. However, B<RRDtool> will accept any
filename.

=item B<--start>|B<-b> I<start time> (default: now - 10s)

Specifies the time in seconds since 1970-01-01 UTC when the first
value should be added to the B<RRD>. B<RRDtool> will not accept any
data timed before or at the time specified.

See also the AT-STYLE TIME SPECIFICATION section in the I<rrdfetch>
documentation for more ways to specify time.

=item B<--step>|B<-s> I<step> (default: 300 seconds)

Specifies the base interval in seconds with which data will be fed
into the B<RRD>.

=item B<DS:>I<ds-name>B<:>I<DST>B<:>I<dst arguments>

A single B<RRD> can accept input from several data sources (B<DS>),
e.g. incoming and outgoing traffic on a specific communication line.
With the B<DS> configuration option you must define some basic
properties of each data source you want to use to feed the B<RRD>.

I<ds-name> is the name you will use to reference this particular data
source from an B<RRD>. A I<ds-name> must be 1 to 19 characters long
in the characters [a-zA-Z0-9_].

I<DST> defines the Data Source Type. The remaining arguments of a
data source entry depend upon the data source type. For GAUGE,
COUNTER, DERIVE, and ABSOLUTE the format for a data source entry is:

B<DS:>I<ds-name>B<:>I<GAUGE | COUNTER | DERIVE | ABSOLUTE>B<:>I<heartbeat>B<:>I<min>B<:>I<max>

For COMPUTE data sources, the format is:

B<DS:>I<ds-name>B<:>I<COMPUTE>B<:>I<rpn-expression>

To decide on a data source type, review the definitions that follow.
Consult the section on "HOW TO MEASURE" for further insight.

=over 4

=item B<GAUGE>

is for things like temperatures, the number of people in a room or
the value of a RedHat share.

=item B<COUNTER>

is for continuously incrementing counters like the InOctets counter
in a router. The B<COUNTER> data source assumes that the counter
never decreases, except when a counter overflows. The update function
takes the overflow into account. The counter is stored as a
per-second rate. When the counter overflows, RRDtool checks if the
overflow happened at the 32bit or 64bit border and acts accordingly
by adding an appropriate value to the result.

=item B<DERIVE>

will store the derivative of the line going from the last to the
current value of the data source. This can be useful for gauges, for
example, to measure the rate of people entering or leaving a room.
Internally, derive works exactly like COUNTER but without overflow
checks. So if your counter does not reset at 32 or 64 bit you might
want to use DERIVE and combine it with a MIN value of 0.

=over

=item NOTE on COUNTER vs DERIVE

by Don Baarda E<lt>don.baarda@baesystems.comE<gt>

If you cannot tolerate ever mistaking the occasional counter reset
for a legitimate counter wrap, and would prefer "Unknowns" for all
legitimate counter wraps and resets, always use DERIVE with min=0.
Otherwise, using COUNTER with a suitable max will return correct
values for all legitimate counter wraps, mark some counter resets as
"Unknown", but can mistake some counter resets for a legitimate
counter wrap.

For a 5 minute step and 32-bit counter, the probability of mistaking
a counter reset for a legitimate wrap is arguably about 0.8% per
1Mbps of maximum bandwidth. Note that this equates to 80% for 100Mbps
interfaces, so for high bandwidth interfaces and a 32bit counter,
DERIVE with min=0 is probably preferable.
If you are using a 64bit counter, just about any max setting will
eliminate the possibility of mistaking a reset for a counter wrap.

=back

=item B<ABSOLUTE>

is for counters which get reset upon reading. This is used for fast
counters which tend to overflow. So instead of reading them normally
you reset them after every read to make sure you have a maximal time
available before the next overflow. Another usage is for things you
count, like the number of messages since the last update.

=item B<COMPUTE>

is for storing the result of a formula applied to other data sources
in the B<RRD>. This data source is not supplied a value on update,
but rather its Primary Data Points (PDPs) are computed from the PDPs
of the data sources according to the rpn-expression that defines the
formula. Consolidation functions are then applied normally to the
PDPs of the COMPUTE data source (that is, the rpn-expression is only
applied to generate PDPs). In database software, these are referred
to as "virtual" or "computed" columns.

=back

I<heartbeat> defines the maximum number of seconds that may pass
between two updates of this data source before the value of the data
source is assumed to be I<*UNKNOWN*>.

I<min> and I<max> are optional entries defining the expected range of
the data supplied by this data source. If I<min> and/or I<max> are
defined, any value outside the defined range will be regarded as
I<*UNKNOWN*>. If you do not know or care about min and max, set them
to U for unknown. Note that min and max always refer to the processed
values of the DS. For a traffic-B<COUNTER> type DS this would be the
max and min data-rate expected from the device.

I<rpn-expression> defines the formula used to compute the PDPs of a
COMPUTE data source from other data sources in the same B<RRD>. It is
similar to defining a B<CDEF> argument for the graph command. Please
refer to that manual page for a list and description of RPN
operations supported. For COMPUTE data sources, the following RPN
operations are not supported: PREV, TIME, and LTIME. In addition, in
defining the RPN expression, the COMPUTE data source may only refer
to the names of data sources listed previously in the create command.
This is similar to the restriction that B<CDEF>s must refer only to
B<DEF>s and B<CDEF>s previously defined in the same graph command.

=item B<RRA:>I<CF>B<:>I<cf arguments>

The purpose of an B<RRD> is to store data in the round robin archives
(B<RRA>). An archive consists of a number of data values or
statistics for each of the defined data sources (B<DS>) and is
defined with an B<RRA> line. When data is entered into an B<RRD>, it
is first fit into time slots of the length defined with the B<-s>
option, becoming a I<primary data point>.

The data is also processed with the consolidation function (I<CF>) of
the archive. There are several consolidation functions that
consolidate primary data points via an aggregate function:
B<AVERAGE>, B<MIN>, B<MAX>, B<LAST>. The format of an B<RRA> line for
these consolidation functions is:

B<RRA:>I<AVERAGE | MIN | MAX | LAST>B<:>I<xff>B<:>I<steps>B<:>I<rows>

I<xff>, the xfiles factor, defines what part of a consolidation
interval may be made up from I<*UNKNOWN*> data while the consolidated
value is still regarded as known.

I<steps> defines how many of these I<primary data points> are used to
build a I<consolidated data point> which then goes into the archive.

I<rows> defines how many generations of data values are kept in an
B<RRA>. A sketch combining B<DS> and B<RRA> definitions follows this
list.

=back
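The following sketch ties the B<DS> and B<RRA> syntax together. It is
illustrative only: the filename F<example.rrd>, the data source
names, and the heartbeat and row figures are assumptions, not values
prescribed by this manual.

 # Illustrative sketch combining the DS and RRA syntax above.
 # inoctets: COUNTER; 32bit/64bit wraps are corrected automatically.
 # requests: DERIVE with min=0, so a daemon restart (counter reset)
 #           yields *UNKNOWN* instead of being misread as a wrap
 #           (see the NOTE on COUNTER vs DERIVE above).
 rrdtool create example.rrd --step 300 \
         DS:inoctets:COUNTER:600:0:U \
         DS:requests:DERIVE:600:0:U \
         RRA:AVERAGE:0.5:1:1200 \
         RRA:MAX:0.5:12:2400

With a 300 second step, the first archive keeps 1200 single-step
averages (100 hours) and the second keeps 2400 twelve-step maxima
(100 days).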
=head1 Aberrant Behavior Detection with Holt-Winters Forecasting

by Jake Brutlag E<lt>jakeb@corp.webtv.netE<gt>

In addition to the aggregate functions, there are a set of
specialized functions that enable B<RRDtool> to provide data
smoothing (via the Holt-Winters forecasting algorithm), confidence
bands, and the flagging of aberrant behavior in the data source time
series:

=over 4

=item B<RRA:>I<HWPREDICT>B<:>I<rows>B<:>I<alpha>B<:>I<beta>B<:>I<seasonal period>B<:>I<rra num>

=item B<RRA:>I<SEASONAL>B<:>I<seasonal period>B<:>I<gamma>B<:>I<rra num>

=item B<RRA:>I<DEVSEASONAL>B<:>I<seasonal period>B<:>I<gamma>B<:>I<rra num>

=item B<RRA:>I<DEVPREDICT>B<:>I<rows>B<:>I<rra num>

=item B<RRA:>I<FAILURES>B<:>I<rows>B<:>I<threshold>B<:>I<window length>B<:>I<rra num>

=back

These B<RRAs> differ from the true consolidation functions in several
ways. First, each of the B<RRAs> is updated once for every primary
data point. Second, these B<RRAs> are interdependent. To generate
real-time confidence bounds, a matched set of HWPREDICT, SEASONAL,
DEVSEASONAL, and DEVPREDICT must exist. Generating smoothed values of
the primary data points requires both a HWPREDICT B<RRA> and a
SEASONAL B<RRA>. Aberrant behavior detection requires FAILURES,
HWPREDICT, DEVSEASONAL, and SEASONAL.

The actual predicted, or smoothed, values are stored in the HWPREDICT
B<RRA>. The predicted deviations are stored in DEVPREDICT (think of a
standard deviation which can be scaled to yield a confidence band).
The FAILURES B<RRA> stores binary indicators. A 1 marks the indexed
observation as a failure; that is, the number of confidence bounds
violations in the preceding window of observations met or exceeded a
specified threshold. An example of using these B<RRAs> to graph
confidence bounds and failures appears in L<rrdgraph>.

The SEASONAL and DEVSEASONAL B<RRAs> store the seasonal coefficients
for the Holt-Winters forecasting algorithm and the seasonal
deviations, respectively. There is one entry per observation time
point in the seasonal cycle. For example, if primary data points are
generated every five minutes and the seasonal cycle is 1 day, both
SEASONAL and DEVSEASONAL will have 288 rows.

To simplify creation for the novice user, in addition to supporting
explicit creation of the HWPREDICT, SEASONAL, DEVPREDICT, DEVSEASONAL,
and FAILURES B<RRAs>, the B<RRDtool> create command supports implicit
creation of the other four when HWPREDICT is specified alone and the
final argument I<rra num> is omitted.
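As a minimal sketch of such an implicit creation (the filename, data
source, and parameter values are assumptions chosen for
illustration), specifying HWPREDICT alone sets up all five
specialized B<RRAs>:

 # HWPREDICT is given without the trailing rra num argument, so
 # SEASONAL, DEVSEASONAL, DEVPREDICT, and FAILURES are created
 # implicitly with default parameters.
 rrdtool create implicit.rrd --step 300 \
         DS:x:GAUGE:600:U:U \
         RRA:AVERAGE:0.5:1:2016 \
         RRA:HWPREDICT:1440:0.1:0.0035:288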
I<rows> specifies the length of the B<RRA> prior to wrap around.
Remember that there is a one-to-one correspondence between primary
data points and entries in these RRAs. For the HWPREDICT CF, I<rows>
should be larger than the I<seasonal period>. If the DEVPREDICT
B<RRA> is implicitly created, the default number of rows is the same
as the HWPREDICT I<rows> argument. If the FAILURES B<RRA> is
implicitly created, I<rows> will be set to the I<seasonal period>
argument of the HWPREDICT B<RRA>. Of course, the B<RRDtool> I<resize>
command is available if these defaults are not sufficient and the
creator wishes to avoid explicit creation of the other specialized
function B<RRAs>.

I<seasonal period> specifies the number of primary data points in a
seasonal cycle. If SEASONAL and DEVSEASONAL are implicitly created,
this argument for those B<RRAs> is set automatically to the value
specified by HWPREDICT. If they are explicitly created, the creator
should verify that all three I<seasonal period> arguments agree.

I<alpha> is the adaption parameter of the intercept (or baseline)
coefficient in the Holt-Winters forecasting algorithm. See
L<rrdtool> for a description of this algorithm. I<alpha> must lie
between 0 and 1. A value closer to 1 means that more recent
observations carry greater weight in predicting the baseline
component of the forecast. A value closer to 0 means that past
history carries greater weight in predicting the baseline component.

I<beta> is the adaption parameter of the slope (or linear trend)
coefficient in the Holt-Winters forecasting algorithm. I<beta> must
lie between 0 and 1 and plays the same role as I<alpha> with respect
to the predicted linear trend.

I<gamma> is the adaption parameter of the seasonal coefficients in
the Holt-Winters forecasting algorithm (HWPREDICT) or the adaption
parameter in the exponential smoothing update of the seasonal
deviations. It must lie between 0 and 1. If the SEASONAL and
DEVSEASONAL B<RRAs> are created implicitly, they will both have the
same value for I<gamma>: the value specified for the HWPREDICT
I<alpha> argument. Note that because there is one seasonal
coefficient (or deviation) for each time point during the seasonal
cycle, the adaption rate is much slower than for the baseline. Each
seasonal coefficient is only updated (or adapts) when the observed
value occurs at the offset in the seasonal cycle corresponding to
that coefficient.

If SEASONAL and DEVSEASONAL B<RRAs> are created explicitly, I<gamma>
need not be the same for both. Note that I<gamma> can also be changed
via the B<RRDtool> I<tune> command.

I<rra num> provides the links between related B<RRAs>. If HWPREDICT
is specified alone and the other B<RRAs> are created implicitly, then
there is no need to worry about this argument. If B<RRAs> are created
explicitly, then pay careful attention to this argument. For each
B<RRA> which includes this argument, there is a dependency between
that B<RRA> and another B<RRA>. The I<rra num> argument is the
1-based index in the order of B<RRA> creation (that is, the order
they appear in the I<create> command). The dependent B<RRA> for each
B<RRA> requiring the I<rra num> argument is listed here (a worked
sketch follows below):

=over 4

=item *

HWPREDICT I<rra num> is the index of the SEASONAL B<RRA>.

=item *

SEASONAL I<rra num> is the index of the HWPREDICT B<RRA>.

=item *

DEVPREDICT I<rra num> is the index of the DEVSEASONAL B<RRA>.

=item *

DEVSEASONAL I<rra num> is the index of the HWPREDICT B<RRA>.

=item *

FAILURES I<rra num> is the index of the DEVSEASONAL B<RRA>.

=back

I<threshold> is the minimum number of violations (observed values
outside the confidence bounds) within a window that constitutes a
failure. If the FAILURES B<RRA> is implicitly created, the default
value is 7.

I<window length> is the number of time points in the window. Specify
an integer greater than or equal to the threshold and less than or
equal to 28. The time interval this window represents depends on the
interval between primary data points. If the FAILURES B<RRA> is
implicitly created, the default value is 9.
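To make the I<rra num> bookkeeping concrete, here is a sketch of an
explicit creation with the 1-based indices annotated. The filename,
data source, and parameter values are assumptions; only the index
cross-references follow the rules listed above.

 # Creation order:  1: AVERAGE
 #                  2: HWPREDICT   (rra num 3 -> SEASONAL)
 #                  3: SEASONAL    (rra num 2 -> HWPREDICT)
 #                  4: DEVPREDICT  (rra num 5 -> DEVSEASONAL)
 #                  5: DEVSEASONAL (rra num 2 -> HWPREDICT)
 #                  6: FAILURES    (rra num 5 -> DEVSEASONAL)
 rrdtool create annotated.rrd --step 300 \
         DS:x:GAUGE:600:U:U \
         RRA:AVERAGE:0.5:1:2016 \
         RRA:HWPREDICT:1440:0.1:0.0035:288:3 \
         RRA:SEASONAL:288:0.1:2 \
         RRA:DEVPREDICT:1440:5 \
         RRA:DEVSEASONAL:288:0.1:2 \
         RRA:FAILURES:288:7:9:5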
=head1 The HEARTBEAT and the STEP

Here is an explanation by Don Baarda on the inner workings of
RRDtool. It may help you to sort out why all this *UNKNOWN* data is
popping up in your databases:

RRD gets fed samples at arbitrary times. From these it builds Primary
Data Points (PDPs) at exact times every "step" interval. The PDPs are
then accumulated into RRAs.

The "heartbeat" defines the maximum acceptable interval between
samples. If the interval between samples is less than "heartbeat",
then an average rate is calculated and applied for that interval. If
the interval between samples is longer than "heartbeat", then that
entire interval is considered "unknown". Note that there are other
things that can make a sample interval "unknown", such as the rate
exceeding limits, or even an "unknown" input sample.

The known rates during a PDP's "step" interval are used to calculate
an average rate for that PDP. Also, if the total "unknown" time
during the "step" interval exceeds the "heartbeat", the entire PDP is
marked as "unknown". This means that a mixture of known and "unknown"
sample time in a single PDP "step" may or may not add up to enough
"unknown" time to exceed "heartbeat" and hence mark the whole PDP
"unknown". So "heartbeat" is not only the maximum acceptable interval
between samples, but also the maximum acceptable amount of "unknown"
time per PDP (obviously this is only significant if you have
"heartbeat" less than "step").

The "heartbeat" can be short (unusual) or long (typical) relative to
the "step" interval between PDPs. A short "heartbeat" means you
require multiple samples per PDP, and if you don't get them, the PDP
is marked unknown. A long "heartbeat" can span multiple "steps",
which means it is acceptable to have multiple PDPs calculated from a
single sample. An extreme example of this might be a "step" of 5
minutes and a "heartbeat" of one day, in which case a single sample
every day will result in all the PDPs for that entire day period
being set to the same average rate.

I<-- Don Baarda E<lt>don.baarda@baesystems.comE<gt>>

=head1 HOW TO MEASURE

Here are a few hints on how to measure:

=over

=item Temperature

Normally you have some type of meter you can read to get the
temperature. The temperature is not really connected with a time. The
only connection is that the temperature reading happened at a certain
time. You can use the B<GAUGE> data source type for this. RRDtool
will then record your reading together with the time.

=item Mail Messages

Assume you have a method to count the number of messages transported
by your mailserver in a certain amount of time, giving you data like
'5 messages in the last 65 seconds'. If you look at the count of 5 as
an B<ABSOLUTE> data type you can simply update the RRD with the
number 5 and the end time of your monitoring period. RRDtool will
then record the number of messages per second. If at some later stage
you want to know the number of messages transported in a day, you can
get the average messages per second from RRDtool for the day in
question and multiply this number with the number of seconds in a
day. Because all math is run with Doubles, the precision should be
acceptable. A command sketch of this appears after this list.

=item It's always a Rate

RRDtool stores rates in amount/second for COUNTER, DERIVE and
ABSOLUTE data. When you plot the data, you will get amount/second on
the y axis, which you might be tempted to convert to an absolute
volume by multiplying by the delta-time between the points. RRDtool
plots continuous data, and as such is not appropriate for plotting
absolute volumes, for example "total bytes" sent and received by a
router. What you probably want is to plot rates that you can scale
to, for example, bytes/hour, or to plot volumes with another tool
that draws bar-plots, where the delta-time is clear on the plot for
each point (such that when you read the graph you see, for example,
GB on the y axis, days on the x axis and one bar for each day).

=back
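To sketch the mail example above in command form (the filename
F<mail.rrd> and the 900 second heartbeat are assumptions for
illustration):

 # ABSOLUTE source: the counter is reset at every reading, so each
 # update supplies 'messages since the last update'. A gap longer
 # than the 900 second heartbeat becomes *UNKNOWN*.
 rrdtool create mail.rrd --step 300 \
         DS:msgs:ABSOLUTE:900:0:U \
         RRA:AVERAGE:0.5:1:2016

 # '5 messages since the last update', stored as messages/second.
 rrdtool update mail.rrd N:5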
=head1 EXAMPLE

 rrdtool create temperature.rrd --step 300 \
         DS:temp:GAUGE:600:-273:5000 \
         RRA:AVERAGE:0.5:1:1200 \
         RRA:MIN:0.5:12:2400 \
         RRA:MAX:0.5:12:2400 \
         RRA:AVERAGE:0.5:12:2400

This sets up an B<RRD> called F<temperature.rrd> which accepts one
temperature value every 300 seconds. If no new data is supplied for
more than 600 seconds, the temperature becomes I<*UNKNOWN*>. The
minimum acceptable value is -273 and the maximum is 5000.

A few archive areas are also defined. The first stores the
temperatures supplied for 100 hours (1200 * 300 seconds = 100 hours).
The second RRA stores the minimum temperature recorded over every
hour (12 * 300 seconds = 1 hour), for 100 days (2400 hours). The
third and the fourth RRAs do the same for the maximum and average
temperature, respectively.

=head1 EXAMPLE 2

 rrdtool create monitor.rrd --step 300 \
         DS:ifOutOctets:COUNTER:1800:0:4294967295 \
         RRA:AVERAGE:0.5:1:2016 \
         RRA:HWPREDICT:1440:0.1:0.0035:288

This example is a monitor of a router interface. The first B<RRA>
tracks the traffic flow in octets; the second B<RRA> generates the
specialized function B<RRAs> for aberrant behavior detection. Note
that the I<rra num> argument of HWPREDICT is missing, so the other
B<RRAs> will be implicitly created with default parameter values. In
this example, the forecasting algorithm baseline adapts quickly; in
fact, the most recent one hour of observations (each at 5 minute
intervals) accounts for 75% of the baseline prediction. The linear
trend forecast adapts much more slowly. Observations made during the
last day (at 288 observations per day) account for only 65% of the
predicted linear trend. Note: these computations rely on an
exponential smoothing formula described in a forthcoming LISA 2000
paper.

The seasonal cycle is one day (288 data points at 300 second
intervals), and the seasonal adaption parameter will be set to 0.1.
The RRD file will store 5 days (1440 data points) of forecasts and
deviation predictions before wrap around. The file will store 1 day
(a seasonal cycle) of 0-1 indicators in the FAILURES B<RRA>.

The same RRD file and B<RRAs> are created with the following command,
which explicitly creates all specialized function B<RRAs>:

 rrdtool create monitor.rrd --step 300 \
         DS:ifOutOctets:COUNTER:1800:0:4294967295 \
         RRA:AVERAGE:0.5:1:2016 \
         RRA:HWPREDICT:1440:0.1:0.0035:288:3 \
         RRA:SEASONAL:288:0.1:2 \
         RRA:DEVPREDICT:1440:5 \
         RRA:DEVSEASONAL:288:0.1:2 \
         RRA:FAILURES:288:7:9:5

Of course, explicit creation need not replicate implicit creation; a
number of arguments could be changed.

=head1 EXAMPLE 3

 rrdtool create proxy.rrd --step 300 \
         DS:TotalRequests:DERIVE:1800:0:U \
         DS:AccumDuration:DERIVE:1800:0:U \
         DS:AvgReqDur:COMPUTE:AccumDuration,TotalRequests,0,EQ,1,TotalRequests,IF,/ \
         RRA:AVERAGE:0.5:1:2016

This example monitors the average request duration during each 300
second interval for requests processed by a web proxy. In this case,
the proxy exposes two counters, the number of requests processed
since boot and the total cumulative duration of all processed
requests. Clearly these counters both have some rollover point, but
using the DERIVE data source also handles the reset that occurs when
the web proxy is stopped and restarted.

In the B<RRD>, the first data source stores the requests per second
rate during the interval. The second data source stores the total
duration of all requests processed during the interval divided by
300. The COMPUTE data source divides each PDP of the AccumDuration by
the corresponding PDP of TotalRequests and stores the average request
duration. The remainder of the RPN expression handles the divide by
zero case.

=head1 AUTHOR

Tobias Oetiker E<lt>oetiker@ee.ethz.chE<gt>