
In consequence, he concludes that parameter estimation of such ROCOF models reduces to estimating the cumulative hazard function of the underlying lifetime distribution. For a recent discussion of the subject, see Wang and Lu8 and references therein. To this end, statistical modelling based on data analysis is a highly valuable tool that engineers can employ to optimize the performance of the assets under their supervision. An inadequate parametric approach leads to erroneous model selection for the failure time of the system, which can in turn lead to wrong decisions about the system maintenance procedure, involving serious economic consequences, among others.


In conclusion, statistical analyses based on false parametric premises about the system lifetime lead to erroneous model selection which, in the best of cases, turns out to be useless for reliability analysts. Knowledge about the existence of peaks in the intensity function curve can help the analyst to anticipate the occurrence of failures and thus to programme a preventive maintenance policy more efficiently. On the other hand, when the analyst needs to construct an explicit parametric model for the system lifetime (for forecasting purposes, for example), a previous analysis of trend changes in the failure rate is very useful for an adequate model selection.

Therefore, trend changes of the failure rate have been an important subject of study for engineers; see, for example, other studies,9-12 just to cite some recent ones. With all the above considerations in mind, the main purpose of this paper is to provide an effective graphical tool to explore the underlying characteristics of the ROCOF, in order to detect constant or monotonic patterns, or possible trend changes.

Our practical motivation is the study presented in Phillips,13 where the author develops bootstrap methods for constructing confidence regions for the ROCOF of a repairable system; in particular, he analyses the failure times of a photocopier. That paper demonstrates the advantages of kernel smoothing for detecting local features of the ROCOF, such as changes of trend. Kernel estimators depend on smoothing bandwidth parameters that need to be chosen in practice.

Moreover, different choices of the bandwidth can lead to very different outcomes in some cases, which can be tricky to interpret (see section 3). This approach has the problem that some features in the data can only be revealed by looking at a wide range of bandwidth values.
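As a toy illustration of this bandwidth sensitivity (a sketch with simulated data and our own function names, not code from the paper), a plain Gaussian-kernel estimate of the ROCOF can be computed at two bandwidths:

```python
import numpy as np

def kernel_rocof(event_times, grid, h):
    """Gaussian-kernel estimate of the rate of occurrence of failures:
    lambda_hat(t) = (1/h) * sum_i K((t - T_i)/h)."""
    u = (np.asarray(grid)[:, None] - np.asarray(event_times)[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    return k.sum(axis=1) / h

# Simulated event times, denser early on (illustrative data only)
rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0.0, 100.0, 60) ** 1.5 / 10.0)
grid = np.linspace(0.0, 100.0, 201)

wiggly = kernel_rocof(events, grid, h=1.0)   # small bandwidth: many wiggles
smooth = kernel_rocof(events, grid, h=20.0)  # large bandwidth: one smooth trend
```

Plotting `wiggly` and `smooth` against `grid` reproduces the phenomenon described here: the small-bandwidth curve suggests many local trend changes, the large one a single smooth trend.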

With this idea, Chaudhuri and Marron14 introduced the inferential graphical tool SiZer to detect significant local features in density and regression functions. The characteristics that are really there, that is, those that are not an artefact of sampling variability, are revealed through the construction of confidence intervals for the first derivative of the function.

This paper provides an extension of SiZer for the rate function of a counting process under Aalen's multiplicative intensity model. The first step in developing SiZer is to define a proper kernel estimator for the ROCOF and its first derivative, as the features of the function are identified by the sign of the first derivative.
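A minimal sketch of this first step, assuming a plain Gaussian kernel rather than the local linear estimator developed later in the paper: the ROCOF estimate and its derivative are obtained by summing the kernel and its derivative over the observed failure times.

```python
import numpy as np

def rocof_and_derivative(event_times, grid, h):
    """Gaussian-kernel estimates of the ROCOF and its first derivative.
    The derivative estimate uses the derivative of the kernel:
    d/dt K_h(t - T_i) = -((t - T_i)/h) * K_h(t - T_i) / h."""
    u = (np.asarray(grid)[:, None] - np.asarray(event_times)[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    rate = phi.sum(axis=1) / h          # estimated intensity
    d_rate = (-u * phi).sum(axis=1) / h**2  # estimated derivative
    return rate, d_rate
```

The sign of `d_rate` is what SiZer assesses: positive where the rate is rising, negative where it is falling.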


Section 2 describes local linear kernel estimators for the failure rate function and its first derivative, which arise from the intuitive least squares principle suggested by Nielsen and Tanggaard 21 for hazard estimation. Closely related local linear estimators have been proposed by Chen et al 22 for counting process intensity functions but considering martingale estimating equations. Section 3 illustrates the effect of the bandwidth on the estimated ROCOF considering the classic coal mining disaster data.


The second step is to construct confidence intervals for the derivative of the intensity rate, to allow inference about trend changes in the failure rate. Section 4 describes two alternative methods to construct confidence intervals, one based on the asymptotic normal distribution and the other based on a consistent bootstrap method suggested by Cowling et al. Four applications to real data are described in section 5. The empirical performance of the proposal is evaluated through an extensive simulation study in section 6.

Finally, some conclusions are drawn in section 7. In the general case, the intensity is a stochastic function. For simplicity, we introduce notation for the aggregated processes. The solution to problem (2) provides estimates for the hazard function and its first derivative, given in (3) and (4), respectively. The pointwise asymptotic properties of the local linear estimator of the intensity rate, given in (3), are obtained in Nielsen and Tanggaard.21 To derive the pointwise asymptotic properties of the estimator of the derivative, we proceed as in Nielsen and Tanggaard,21 that is, we split the error into a deterministic term B(t) converging in probability plus a variable term V(t) converging to a normal distribution.
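As a rough numerical illustration of the least squares principle behind estimators of the form (3) and (4) (a sketch under our own assumptions, using binned event counts; not the authors' exact estimator), one can fit a kernel-weighted straight line at each evaluation point: the intercept estimates the rate and the slope its derivative.

```python
import numpy as np

def local_linear_rate(event_times, grid, h, n_bins=50):
    """At each t, solve the kernel-weighted least squares problem
    min_{a,b} sum_j K_h(x_j - t) * (y_j - a - b*(x_j - t))**2,
    where y_j is the empirical rate (count / bin width) in bin j.
    The intercept a estimates the rate at t; the slope b its derivative."""
    counts, edges = np.histogram(event_times, bins=n_bins)
    x = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints
    y = counts / np.diff(edges)          # empirical rate per bin
    rate, d_rate = [], []
    for t in grid:
        u = (x - t) / h
        w = np.exp(-0.5 * u**2)          # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - t])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        rate.append(beta[0])
        d_rate.append(beta[1])
    return np.array(rate), np.array(d_rate)
```

For a homogeneous process the fitted intercept should hover around the true constant rate and the slope around zero.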

Theorem 1.


Assume that the following conditions (A1) and onward hold. Then we have that (9) and (10) hold. Failures occurring in disjoint intervals are assumed to be independent, and there are no simultaneous failures in the system. In reliability engineering applications, this means that a single system is followed until a failure occurs; the system is then immediately repaired and restored to operation.
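The minimal-repair setting just described (successive failure times of one system forming a NHPP) can be illustrated with a small simulation. The sketch below generates NHPP event times by thinning, in the style of Lewis and Shedler; the intensity function and the bound `lam_max` are illustrative assumptions, not part of the paper.

```python
import numpy as np

def simulate_nhpp(intensity, t_max, lam_max, rng):
    """Simulate a non-homogeneous Poisson process on [0, t_max] by thinning.
    Candidates come from a homogeneous process with rate lam_max (an upper
    bound on the intensity); each is kept with probability
    intensity(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)
```

With a constant intensity the simulation reduces to a homogeneous Poisson process, which gives a quick sanity check of the sampler.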

After the repair, the system's state is exactly the same as it was before the failure. We now consider the local linear estimator of the rate of a NHPP when only the successive counts of failures in disjoint consecutive time intervals are available. As a result, we derive discretized versions of the estimators of the ROCOF and its first derivative presented in (3) and (4). Alternative expressions for the variance of the estimates can be easily computed from expressions (12) and (13), instead of using a discretized version of the variance obtained for the continuous case. Specifically, the variance can be deduced directly from expression (13). The kernel estimators defined in the previous section involve an unknown bandwidth parameter that determines the level of smoothing to be considered.

To illustrate the effect of the bandwidth on the estimated failure intensity, we consider the classic coal mining disaster data. Details about these data are provided in section 5.


In such an industrial accident setting, the aim is to explore the overall trend in the ROCOF (the rate of occurrence of explosions), expecting to find a decreasing trend but also looking for possible local trend changes. The impression one gets looking at the estimates is quite different: while the smallest bandwidth produces many wiggles in the ROCOF, suggesting many local trend changes, the biggest bandwidth produces a smooth curve with a clear decreasing trend and no changes.

The bandwidth effect we have visualized with this example is well known in kernel smoothing. Large bandwidths produce oversmoothed estimates, at the risk of smoothing away important features of the data; too small bandwidths produce undersmoothed estimates with too many wiggles. However, if the goal goes beyond the pointwise estimation of the underlying curve and one wants to assess the significance of its features, a common way to proceed consists of deriving inferential conclusions based on a single bandwidth choice.

The authors discuss how to correct for this bias but without providing any practical guidance; in fact, in the coal mines example no bias correction was performed. The inferential approach we adopt in this paper differs from the traditional approach, where a single bandwidth is chosen and the inferential conclusions are drawn from a kernel estimator with that bandwidth. We focus simultaneously on a wide range of values for the smoothing parameter, instead of trying to estimate the optimal amount of smoothing from the data.

This is an effective approach for exploratory purposes, since different levels of smoothing may reveal different useful information in the data. In this sense, the smoothing levels are comparable to variations in the levels of resolution, or scales, in a visual system. Under this perspective, the traditional approach of choosing a single bandwidth according to an optimality criterion becomes uninformative and even misleading.

Looking at the problem at a unique scale could mean ignoring significant characteristics if this bandwidth is too big, or overestimating the number of trend changes of the target function if the chosen bandwidth is too small. Chaudhuri and Marron14 implemented scale space inference for density and regression functions through the graphical tool SiZer. Since its introduction,14 SiZer has become a powerful tool for conducting exploratory data analysis in many statistical frameworks. This is achieved by an effective combination of statistical inference in scale space and visualization.

In this arrangement, the portions of the display are colour coded as follows: blue if zero lies below the lower confidence limit (significantly increasing), red if zero lies above the upper confidence limit (significantly decreasing), purple if zero is within the confidence limits (not significantly increasing or decreasing), and grey indicating regions where the data are too sparse to make statements about significance (the effective sample size, defined below, is less than 5).
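The colour coding rule can be written down directly; the following sketch (our own helper, with assumed names) classifies one pixel of the map from the confidence limits for the derivative and the effective sample size.

```python
def sizer_colour(lower, upper, ess, min_ess=5):
    """Colour code one (t, h) pixel of a SiZer map, given the confidence
    interval [lower, upper] for the derivative of the rate and the
    effective sample size ess at that pixel."""
    if ess < min_ess:
        return "grey"    # too few effective observations
    if lower > 0:
        return "blue"    # interval entirely above zero: significantly increasing
    if upper < 0:
        return "red"     # interval entirely below zero: significantly decreasing
    return "purple"      # zero inside the interval: no significant trend
```

Applying this rule over a grid of locations and bandwidths produces the two-dimensional SiZer map described in the text.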

As an illustration, and before entering into more detail, we show in Figure 2 the SiZer map that corresponds to the coal mining disaster data considered in the previous section and fully described in section 5. The family plot in the top panel shows the estimated rate of occurrence of accidents (ROCOF) over time (localization space) with different bandwidths (scales). In this case, the data set is small.

In this case, we have constructed the SiZer map with bootstrap confidence intervals derived from bootstrap samples; the output for the other types of intervals is quite similar, though. The SiZer map shows a clear decreasing trend and no changes. This is the same conclusion obtained by Barnard27 and Cox and Lewis, who concluded that the accident hazard rate exhibits a general decreasing trend with no trend changes over time.

We consider the local linear estimator given in (3) to define the family of smoothers, and compute confidence intervals for the first derivative from expression (16), using the local linear estimator of the derivative given in (4) and the estimate of its variance. We devote the rest of this section to defining two different approaches, one based on the limiting normality property and the other based on bootstrapping. Simultaneous inference for SiZer is solved by defining m independent confidence interval problems, where m reflects the number of independent blocks.
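A simplified sketch of the bootstrap approach for a pointwise interval at a single point t (loosely following the resampling idea of Cowling et al, but with a plain Gaussian kernel instead of the local linear estimator; all names and parameters are ours):

```python
import numpy as np

def bootstrap_derivative_ci(event_times, t, h, n_boot=500, alpha=0.05, seed=0):
    """Pointwise bootstrap confidence interval for the ROCOF derivative at t.
    Each bootstrap sample draws N* ~ Poisson(n) event times with replacement
    from the observed ones, and the kernel derivative estimate is recomputed."""
    rng = np.random.default_rng(seed)
    events = np.asarray(event_times)
    n = len(events)

    def d_hat(ev):
        u = (t - ev) / h
        return np.sum(-u * np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)) / h**2

    stats = []
    for _ in range(n_boot):
        m = rng.poisson(n)
        stats.append(d_hat(rng.choice(events, size=m, replace=True)))
    lo, hi = np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi
```

For a roughly constant-rate process the interval should be narrow and centred near zero, matching the "no significant trend" (purple) verdict.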

For the kernel estimator of the intensity function, the ESS can be computed as in (18), and the number of independent blocks for each h is estimated as in (19), using the average value on the set. As an alternative to the normal confidence intervals proposed above, the bootstrap method can be used to construct pointwise and simultaneous confidence intervals.
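A common way to compute the ESS with a Gaussian kernel, and to derive the number of independent blocks from its average, is sketched below; this mirrors the usual SiZer construction and is an assumption about the precise form of (18) and (19), not a transcription of them.

```python
import numpy as np

def effective_sample_size(event_times, grid, h):
    """ESS(t, h) = sum_i K_h(t - T_i) / K_h(0): the number of observations
    effectively contributing to the kernel estimate at t."""
    u = (np.asarray(grid)[:, None] - np.asarray(event_times)[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1)  # K_h(0) cancels in the ratio

def n_independent_blocks(event_times, grid, h):
    """Rough number of independent confidence interval problems at scale h:
    the sample size divided by the average ESS over the grid."""
    ess = effective_sample_size(event_times, grid, h)
    return len(event_times) / ess.mean()
```

As h grows the ESS grows and the number of independent blocks shrinks, which is why wider bandwidths need less multiplicity correction.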

The four types of intervals defined above have been considered and compared in the empirical analyses reported in the next sections; for illustration, some of them are included in the examples of section 5. We begin our study by checking whether the Poisson assumption is admissible for the underlying process that governs the arrivals in all the examples presented in this section. If the arrival process on the original scale is a NHPP, then the transformed interarrival times are independent and identically distributed according to an exponential law with rate 1.
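This check is the classical time-rescaling argument; a sketch, assuming the cumulative intensity function is available (in practice it would be estimated):

```python
import numpy as np

def rescaled_gaps(event_times, cum_intensity):
    """Time-rescaling check of the NHPP assumption.
    If T_1 < T_2 < ... follow a NHPP with cumulative intensity Lambda, the
    gaps Lambda(T_i) - Lambda(T_{i-1}) are i.i.d. exponential(1); departures
    from that law (e.g. on an exponential QQ-plot) signal a violation."""
    lam = cum_intensity(np.asarray(event_times))
    return np.diff(np.concatenate([[0.0], lam]))
```

For a homogeneous process with rate 3, for instance, `cum_intensity = lambda t: 3.0 * t` rescales the gaps to mean one.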

Here, we describe an application in reliability engineering. The authors conclude in their analysis that the failure times of each item (machine) can be adequately represented by a NHPP. Given that machines operate independently of each other over time, we can consider the overall failure process, that is, all failures occurring over all items, also as a NHPP.


This fact is confirmed by the visual check we present in the top panel of Figure 3. We have then performed the SiZer analysis for these data and show the result in Figure 4. The question is then whether the characteristics shown in the family plot are indeed significant. Finally, we can see that this trend change can be visualized at most of the considered bandwidths. The SiZer map shown in Figure 4 has been constructed using simultaneous bootstrap intervals. The results are quite close for the other types of intervals; however, the bootstrap method seems appropriate for these data, since the sample size is small.

In this example, we consider data concerning mining accidents (explosions caused by coalbed gas or coal dust) that caused the deaths of 10 or more miners, recorded over many years.

The data are taken from Jarrett32 and consist of the times of occurrence of accidents (in days) during a period from March 15, , to March 22, (source: package boot in R). The interest of this example arises from the need to identify possible patterns in the occurrence of accidents, for a better comprehension of the nature of such events. We have then performed the SiZer analysis for this data set, and the results are shown in Figure 2 (see section 4).

This characteristic in the ROCOF curve can be clearly visualized at all the scales (smoothing parameters) considered in the plot. In this case, we have considered again simultaneous bootstrap intervals. In the next example, the data have been taken from Guida and Pulcini33 and consist of 55 failure times (in kilometres) of the powertrain system of an urban route bus, recorded between the early months of and the end of December of . Each powertrain system was subject to minimal repair at failure, which is corroborated by our graphical check of the Poisson assumption (see bottom panel in Figure 3).

The bus was circulating on urban roads of the city of Naples. In this example, we are interested in exploring failure patterns of the powertrain system of these buses. The SiZer analysis for this data set is shown in Figure 5. These characteristics of the ROCOF are displayed at all resolution levels, that is, for all the bandwidths considered in the plot. This feature is not reflected by the parametric approach developed in the paper by Guida and Pulcini. The SiZer map shown in Figure 5 has been constructed using pointwise normal intervals; the results for the other types of intervals are quite close, though.

Unlike the previous examples, in this case we do not have information about the exact times of occurrence; instead, we have monthly counts of the event of interest. Our main concern in this example is to explore cyclic behaviour of the intensity of the underlying NHPP. A total of storms were observed during this time interval. Our SiZer analysis supports their conclusions, as we describe below.

About this book: In teaching an elementary course in stochastic processes it was noticed that many seemingly deep results in point processes are readily accessible by the device of representing them in terms of random gap lengths between points.

Contents (all chapters by Thompson, W. A.):

  • Point processes
  • Homogeneous Poisson processes
  • Application of point processes to a theory of safety assessment
  • Renewal processes
  • Poisson processes
  • Superimposed processes
  • Markov point processes


  • The order statistics process
  • Competing risk theory