Oliveira Martins, João Paulo

Search Results

Now showing 1 - 10 of 11
  • Testing the Maximum by the Mean in Quantitative Group Tests
    Publication . Martins, João Paulo; Santos, Rui; Sousa, Ricardo
    Group testing, introduced by Dorfman in 1943, increases the efficiency of screening individuals for low-prevalence diseases. A wider use of this kind of methodology is restricted by the loss of sensitivity inherent to the mixture of samples. Moreover, as this methodology attains greater cost reduction in the cases of lower prevalence (and, consequently, a higher optimal batch size), the phenomenon of rarefaction is crucial to understanding that sensitivity reduction. Suppose, with no loss of generality, that an experimental individual test consists of determining whether the amount of substance exceeds some prefixed threshold l. For a pooled sample of size n, the amount of substance of interest is represented by (Y1, … , Yn), with mean Ȳn and maximum Mn. The goal is to know whether any of the individual samples exceeds the threshold l, that is, Mn > l. It is shown that the dependence between Ȳn and Mn has a crucial role in deciding the use of group testing, since a higher dependence corresponds to more information about Mn given by the observed value of Ȳn.
  • Searching for the corner seismic moment in worldwide data
    Publication . Felgueiras, Miguel; Santos, Rui; Oliveira Martins, João Paulo
    In this paper the existence of a corner frequency value for the seismic moment distribution is investigated by analysing worldwide data. Pareto-based distributions, usually considered the most suitable for this type of data, are fitted to the most recent data available in a global earthquake catalog. Despite the undeniable finite nature of the seismic moment data, we conclude that no corner frequency can be established from the available data set. © 2015 AIP Publishing LLC.
  • Known Mean, Unknown Maxima? Testing the Maximum Knowing Only the Mean
    Publication . Santos, Rui; Oliveira Martins, João Paulo; Felgueiras, Miguel
    In the quantitative group testing problem, the use of the group mean to identify whether the group maximum exceeds a prefixed threshold (infected group) is analyzed, using n independent and identically distributed individuals. Under these conditions, it is shown that the information in the mean is sufficient to classify each group as infected or healthy with a low probability of misclassification when the underlying distribution is a unilateral heavy-tailed distribution.
  • Estimation Through Array-Based Group Tests
    Publication . Oliveira Martins, João Paulo; Felgueiras, Miguel; Santos, Rui
    Pooling individual samples for batch testing is a common procedure for reducing costs. The recent use of multidimensional array algorithms, driven by the emergence of robotic pooling, is an innovative way of pooling. We show that two-dimensional array-based group tests can provide accurate estimates of the prevalence rate even in situations where the traditional estimators, applied to one-dimensional arrays, are not valid. Hence, a computational script was developed to determine which prevalence rate estimate minimizes the sum of squared deviations between the observed and expected numbers of rows and columns whose pooled sample had a positive test result. © 2017, National Statistical Institute. All rights reserved.
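    The least-squares idea described above can be sketched as a simple grid search; this is my own illustration under assumed notation (an r × c array, with observed counts of positive rows and columns), not the authors' script.

```python
# Hypothetical sketch of a least-squares prevalence estimator from a
# two-dimensional array; the r x c notation and grid search are my assumption.

def estimate_prevalence(rows_pos: int, cols_pos: int, r: int, c: int,
                        steps: int = 10000) -> float:
    """Grid-search the p in (0, 1) whose expected numbers of positive rows
    and columns best match the observed counts (least squares)."""
    best_p, best_loss = 0.0, float("inf")
    for i in range(1, steps):
        p = i / steps
        exp_rows = r * (1.0 - (1.0 - p) ** c)  # each row pools c individuals
        exp_cols = c * (1.0 - (1.0 - p) ** r)  # each column pools r individuals
        loss = (rows_pos - exp_rows) ** 2 + (cols_pos - exp_cols) ** 2
        if loss < best_loss:
            best_p, best_loss = p, loss
    return best_p

# e.g. 3 positive rows and 4 positive columns observed in a 10 x 10 array
p_hat = estimate_prevalence(3, 4, 10, 10)
```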
  • Three-dimensional array-based group testing algorithms with one-stage
    Publication . Oliveira Martins, João Paulo; Felgueiras, Miguel; Santos, Rui
    The use of three-dimensional array-based testing algorithms is, in some situations, more efficient and accurate than other, more commonly used, protocols for testing pooled samples. We evaluate the advantages of using these complex pooling schemes with only one stage in the problem of estimating the prevalence rate of some disease. Through simulation, we show that there does not seem to be any advantage in using three- or even higher-dimensional arrays for this type of problem.
  • Alternative heavy tailed models in seismology
    Publication . Felgueiras, Miguel; Martins, João; Santos, Rui
    Great earthquakes are commonly considered to be those with moment magnitude (Mw) greater than or equal to 8.0. Since these earthquakes can destroy entire communities located near the epicentre, the search for physical laws that explain the energy they release is an important issue. There is a connection between the radiated energy of an earthquake, its magnitude and its seismic moment (M0). Hence, when fitting a heavy or an extremely heavy-tailed distribution to a seismic moment dataset, we are in fact adjusting a mathematical model that explains the amount of energy released by these great seisms. Therefore, the main goal of this work is to study the most appropriate Pareto-based models (the family most used in this field) for explaining the seismic moment of great earthquakes. With this purpose in mind, we selected two different catalogs that accommodate recent events and are considered more accurate than the catalogs used in previous works. We conclude that the traditional Pareto distribution remains a good choice for this kind of data, but the Log-Pareto leads to higher p-values and the Location-Scale Pareto fits the largest events better.
  • The reference method influence on the sensitivity of the Clostridium difficile enzyme immunoassays: A meta analysis
    Publication . Martins, João Paulo; Felgueiras, Miguel; Santos, Rui
    The use of enzyme immunoassays to screen for toxins A and B produced by Clostridium difficile is a common procedure in algorithms designed for its detection. Moreover, the absence of a single test capable of providing reliable results at low cost motivates considerable discussion about which algorithm is best. Thus, several studies have evaluated the performance of these enzyme immunoassays. However, all fail to provide sufficient explanations for the different behaviours observed across studies that evaluate the same index test against a common reference method. Our main goal was to find out which factors affect the sensitivity of these assays, since the specificity is very close to 1. In this research, we verified that sensitivity increases with the prevalence rate and with the proportion of reported cases of onset diarrhea. Therefore, its use is advisable for high prevalence rates (e.g. in an epidemic setting). As far as reference methods are concerned, nucleic acid amplification tests can be used as a reference method, with a performance similar to the well-accepted toxigenic culture. The method chosen for toxigenicity screening in a toxigenic culture also seems to affect the evaluated performance of the tests and should be studied further.
  • Estimation of prevalence in rare disease using pooled samples
    Publication . Martins, J. P.; Santos, R.; Felgueiras, M.
    The use of pooled samples for screening infected individuals is a known procedure for reducing costs. In an estimation problem, the aim is only to determine how many individuals are infected, rather than who is infected (the classification problem). In that setting, our goal was to compare the performance of one- and two-dimensional arrays. The best performance was established according to one of the following criteria: minimizing the number of individuals, or the number of tests, required to attain a given estimate accuracy. It is observed that when we want to minimize the number of individuals used, the two-dimensional procedures have a slight advantage over the one-dimensional procedures. However, when the major concern is cost, the one-dimensional procedures clearly outperform the two-dimensional ones.
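    For context, the classical one-dimensional pooled-testing estimator that such comparisons build on can be sketched as follows: with m pools of size n and x positive pools, the usual maximum-likelihood estimate is p̂ = 1 − (1 − x/m)^(1/n). This is my illustration, not the paper's code.

```python
# Minimal sketch (my illustration) of the classical one-dimensional
# pooled-testing MLE: with m pools of size n and x positive pools,
# p_hat = 1 - (1 - x/m) ** (1/n).

def pooled_mle(x_positive: int, m_pools: int, pool_size: int) -> float:
    """Prevalence estimate from the fraction of negative pools."""
    negative_fraction = 1.0 - x_positive / m_pools
    return 1.0 - negative_fraction ** (1.0 / pool_size)

# 12 positive pools among 100 pools of 5 individuals each
print(pooled_mle(12, 100, 5))
```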
  • Pareto Models for the Energy Released in Earthquakes
    Publication . Felgueiras, Miguel; Santos, Rui; Martins, João Paulo
    In this paper we explore Pareto-based distributions to deal with the energy released by major seisms. This is a relevant problem because great earthquakes can cause heavy losses, both human and material. The standard Pareto distribution, despite usually fitting the data on the energy released by seisms well, reveals some lack of fit when dealing with the energy released by the greatest earthquakes. Besides the more traditional Pareto and Log-Pareto, we also consider the Extended Slash Pareto (ESP) and the Location-Scale Pareto Mixture (LSPM) distributions in this work. For the less studied ESP and LSPM distributions, we present the parameter estimators and perform a simulation study to evaluate their performance under different scenarios. The four distributions are then applied to two datasets (catalogs) containing information on seism magnitude, which has a direct connection to the energy released by earthquakes (the seismic moment). The catalogs used are considered suitably accurate and up to date, and have been used in recent works. In conclusion, the Pareto distribution is still appropriate for this kind of data, but other distributions emerge as better models. The Log-Pareto distribution led to higher fitting p-values than the Pareto distribution, and the LSPM also emerges as a strong competitor. The LSPM fits the greatest observations better and therefore gives a more accurate prediction of the energy released by the greatest earthquakes.
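    As a hedged illustration of the baseline model discussed above, the standard Pareto tail index can be fitted by maximum likelihood; the data here are synthetic, and the threshold x_min is an assumption, not a value from the catalogs.

```python
import math
import random

# Illustrative only: ML fit of a standard Pareto tail index (the baseline
# model discussed above); the data are synthetic, not seismic moments.

def pareto_mle_alpha(data, x_min):
    """alpha_hat = k / sum(log(x_i / x_min)) over the k points >= x_min."""
    tail = [x for x in data if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic Pareto(x_min=1, alpha=2) sample via inverse-transform sampling:
# if U ~ Uniform(0,1), then U**(-1/alpha) is Pareto(1, alpha).
random.seed(0)
sample = [random.random() ** -0.5 for _ in range(10000)]
alpha_hat = pareto_mle_alpha(sample, 1.0)  # should be close to alpha = 2
```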
  • Gaussian Scale Mixtures
    Publication . Felgueiras, Miguel Martins; Martins, João Paulo; Santos, Rui Filipe; European Society of Computational Methods in Sciences and Engineering (ESCMSE)
    In this paper we present a parsimonious approximation of a Gaussian mixture when its components share a common mean value, i.e. a scale mixture. We show that this type of mixture can be approximated by a shifted and scaled Student's t-distribution, and use the result to develop a hypothesis test for the equality of the components' mean value. A simulation study to check the quality of the approximation is also provided, together with an application to logarithmic daily returns.
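    The connection stated in the abstract rests on the classical fact that a Student's t variate is itself a Gaussian scale mixture: Z / sqrt(V/ν) with Z ~ N(0, 1) and V ~ χ²(ν). A minimal simulation sketch (parameters are illustrative, not taken from the paper):

```python
import math
import random

# Sketch of the classical representation behind the abstract: a Student's t
# variate with NU degrees of freedom equals Z / sqrt(V / NU), a Gaussian
# scale mixture with Z ~ N(0, 1) and V ~ chi-squared(NU).

random.seed(1)
NU = 5

def t_via_scale_mixture() -> float:
    z = random.gauss(0.0, 1.0)
    v = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(NU))  # chi2(NU) draw
    return z / math.sqrt(v / NU)

draws = [t_via_scale_mixture() for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
# For NU > 2 the t variance is NU / (NU - 2); here 5/3.
```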