Browsing by Author "Rodrigues, Nuno M. M."
Now showing 1 - 10 of 15
- Adaptive bridge model for compressed domain point cloud classification
  Publication. Seleem, Abdelrahman; Guarda, André F. R.; Rodrigues, Nuno M. M.; Pereira, Fernando
  The recent adoption of deep learning-based models for the processing and coding of multimedia signals has brought noticeable gains in performance, establishing deep learning-based solutions as the uncontested state of the art both for computer vision tasks, targeting machine consumption, and, more recently, for coding applications, targeting human visualization. Traditionally, applications requiring both coding and computer vision processing must first decode the bitstream and then apply the computer vision methods to the decompressed multimedia signals. However, the adoption of deep learning-based solutions enables compressed domain computer vision processing, with gains in performance and computational complexity over the decompressed domain approach. For point clouds (PCs), these gains have been demonstrated in the single available compressed domain computer vision processing solution, named Compressed Domain PC Classifier, which processes JPEG Pleno PC coding (PCC) compressed streams using a PC classifier largely compatible with the state-of-the-art spatial domain PointGrid classifier. However, the available Compressed Domain PC Classifier has strong limitations, as it imposes a single, specific input size associated with specific JPEG Pleno PCC configurations; this limits the compression performance, since these configurations are not ideal for all PCs due to their different characteristics, notably density. To overcome these limitations, this paper proposes the first Adaptive Compressed Domain PC Classifier, which includes a novel adaptive bridge model that allows JPEG Pleno PCC encoded bitstreams produced with different coding configurations to be processed, now maximizing the compression efficiency.
  Experimental results show that the novel Adaptive Compressed Domain PC Classifier allows JPEG PCC to achieve better compression performance by not imposing a single, specific coding configuration on all PCs, regardless of their different characteristics. Moreover, the added adaptability achieves slightly better PC classification performance than the previous Compressed Domain PC Classifier, and largely better PC classification performance (with fewer weights) than the PointGrid PC classifier working in the decompressed domain.
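The core idea of the bridge model above — adapting latents of varying size, produced by different coding configurations, to the fixed input size a classifier expects — can be illustrated with a much-simplified sketch. The learned adaptive bridge is replaced here by plain average pooling; the function name and sizes are purely illustrative.

```python
import numpy as np

# Toy stand-in for the adaptive bridge: map a 1-D latent of any length
# to a fixed-size vector by averaging over roughly equal bins.
# (The actual bridge model is a trained network; this is only an analogy.)
def bridge(latent, target=4):
    n = latent.shape[0]
    edges = np.linspace(0, n, target + 1).astype(int)   # bin boundaries
    return np.array([latent[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

for size in (8, 12, 16):                 # latents from different configurations
    out = bridge(np.arange(size, dtype=float))
    assert out.shape == (4,)             # the classifier always sees size 4
```

Whatever the coding configuration produced, the classifier downstream receives a constant-size input, which is the property the adaptive bridge provides.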
- Adaptive Deep Learning-Based Point Cloud Geometry Coding
  Publication. Guarda, André F. R.; Rodrigues, Nuno M. M.; Pereira, Fernando
  Point clouds are a very rich 3D visual representation model, which has become increasingly appealing for multimedia applications with immersion, interaction and realism requirements. Due to different acquisition and creation conditions as well as target applications, point clouds' characteristics may be very diverse, notably in their density. While geographical information systems or autonomous driving applications may use rather sparse point clouds, cultural heritage or virtual reality applications typically use denser point clouds to more accurately represent objects and people. Naturally, to offer immersion and realism, point clouds need a rather large number of points, thus calling for the development of efficient coding solutions. The use of deep learning models for coding purposes has recently gained relevance, with the latest developments in image coding achieving state-of-the-art performance, making the adoption of this technology for point cloud coding a natural step. This paper presents a novel deep learning-based solution for point cloud geometry coding which is able to efficiently adapt to the content's characteristics. The proposed coding solution divides the point cloud into 3D blocks and selects the most suitable available deep learning coding model to code each block, thus maximizing the compression performance. In comparison to the state-of-the-art MPEG G-PCC Trisoup standard, the proposed coding solution offers average quality gains of up to 4.9 dB and 5.7 dB for PSNR D1 and PSNR D2, respectively.
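Per-block model selection of the kind described above is typically driven by a Lagrangian rate-distortion cost, J = D + λR. The sketch below illustrates that selection loop with hypothetical stand-in "models" (the abstract does not specify the selection criterion in detail; the names and numbers here are assumptions).

```python
# Illustrative per-block model selection via rate-distortion cost.
# models: dict mapping a model name to a function block -> (rate_bits, distortion);
# the real system would run candidate deep coding models on the 3D block.
def best_model(block, models, lam):
    def cost(item):
        rate, dist = item[1](block)
        return dist + lam * rate          # Lagrangian cost J = D + lambda * R
    return min(models.items(), key=cost)[0]

# Toy stand-ins: a low-rate model suited to sparse content and a
# high-quality model suited to dense content (numbers invented).
models = {
    "sparse": lambda b: (100.0, 4.0),
    "dense":  lambda b: (300.0, 1.0),
}
assert best_model(None, models, lam=0.02) == "sparse"   # rate-constrained choice
assert best_model(None, models, lam=0.001) == "dense"   # quality-driven choice
```

Varying λ shifts the choice between the candidates, which is how one selection rule can adapt to both sparse and dense blocks.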
- Compression of medical images using MRP with bi-directional prediction and histogram packing
  Publication. Santos, João M.; Guarda, André F. R.; Cruz, Luís A. da Silva; Rodrigues, Nuno M. M.; Faria, Sérgio M. M. de
  Medical imaging technology has become essential for the improvement of medical practice. This has led to advances in the technology, namely in image sampling resolutions, pixel bit-depth and inter-slice resolution. Additionally, the widespread use of medical images, the life expectancy of patients and legal restrictions have led to increasing storage costs. Therefore, efficient compression of medical image data is in high demand, for archiving and transmission. In this work we propose to improve the compression efficiency of the Minimum Rate Predictors (MRP) lossless encoder by adding bi-directional prediction support and a histogram packing technique. The results show that the proposed method presents a higher compression efficiency than the state-of-the-art HEVC encoder. The compression efficiency is improved by 20%, on average, when compared to HEVC, and by 46.1% when compared with the original MRP algorithm.
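Histogram packing, mentioned above, exploits the fact that medical images often use only a sparse subset of their nominal sample range (e.g. a few hundred of 65536 possible 16-bit values). A minimal sketch of the idea, not the MRP implementation:

```python
import numpy as np

# Histogram packing sketch: remap the sparse set of occurring sample
# values onto a dense, contiguous index range before lossless coding,
# keeping the sorted value table so the decoder can invert it exactly.
def pack_histogram(image):
    values = np.unique(image)                 # sorted distinct sample values
    lut = {v: i for i, v in enumerate(values)}
    packed = np.vectorize(lut.get)(image)     # dense indices 0..K-1
    return packed, values                     # 'values' is the inverse table

def unpack_histogram(packed, values):
    return values[packed]                     # restore original sample values

img = np.array([[0, 512, 1024], [512, 0, 1024]])
packed, table = pack_histogram(img)           # packed uses only {0, 1, 2}
restored = unpack_histogram(packed, table)
assert np.array_equal(restored, img)          # lossless round trip
```

Shrinking the alphabet the predictor and entropy coder see is what yields the gains; the value table itself is tiny side information.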
- Constant Size Point Cloud Clustering: a Compact, Non-Overlapping Solution
  Publication. Guarda, André F. R.; Rodrigues, Nuno M. M.; Pereira, Fernando
  Point clouds have recently become a popular 3D representation model for many application domains, notably virtual and augmented reality. Since point cloud data is often very large, processing a point cloud may require that it be segmented into smaller clusters. For example, the input to deep learning-based methods like auto-encoders should be constant-size point cloud clusters, which are ideally compact and non-overlapping. However, given the unorganized nature of point clouds, defining the specific data segments to code is not always trivial. This paper proposes a point cloud clustering algorithm which targets five main goals: i) clusters with a constant number of points; ii) compact clusters, i.e., with low dispersion; iii) non-overlapping clusters, i.e., not intersecting each other; iv) ability to scale with the number of points; and v) low complexity. After appropriate initialization, the proposed algorithm transfers points between neighboring clusters as a propagation wave, filling or emptying clusters until they achieve the same size. The proposed algorithm is unique since there is no other point cloud clustering method available in the literature offering the same clustering features for large point clouds at such low complexity.
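The size-equalization step described above — boundary points migrating between neighboring clusters until every cluster holds the same number of points — can be illustrated with a 1-D toy. This is not the authors' 3-D propagation-wave algorithm, only a much-simplified analogy with assumed details:

```python
# 1-D toy of size equalization: after an initial (uneven) partition of
# sorted points, boundary points move between neighboring clusters until
# every cluster reaches the target size. Moving only boundary points of
# sorted data keeps clusters compact and non-overlapping.
def equalize(clusters, target):
    for i in range(len(clusters) - 1):
        while len(clusters[i]) > target:              # overfull: push right
            clusters[i + 1].insert(0, clusters[i].pop())
        while len(clusters[i]) < target:              # underfull: pull left
            clusters[i].append(clusters[i + 1].pop(0))
    return clusters

initial = [list(range(2)), list(range(2, 9)), list(range(9, 12))]  # sizes 2, 7, 3
balanced = equalize(initial, target=4)
assert balanced == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
```

A single left-to-right pass suffices here because excess points "wave" forward through the chain of clusters; the paper's algorithm does the analogous transfer over 3D neighborhoods.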
- Deep Learning-based Point Cloud Geometry Coding with Resolution Scalability
  Publication. Guarda, André F. R.; Rodrigues, Nuno M. M.; Pereira, Fernando
  Point clouds are a 3D visual representation format that has recently become fundamentally important for immersive and interactive multimedia applications. Considering the high number of points of practically relevant point clouds, and their increasing market demand, efficient point cloud coding has become a vital research topic. In addition, scalability is an important feature for point cloud coding, especially for real-time applications, where fast and rate-efficient access to a decoded point cloud is important; however, this issue is still rather unexplored in the literature. In this context, this paper proposes a novel deep learning-based point cloud geometry coding solution with resolution scalability via interlaced sub-sampling. As additional layers are decoded, the number of points in the reconstructed point cloud increases, as does the overall quality. Experimental results show that the proposed scalable point cloud geometry coding solution outperforms the recent MPEG Geometry-based Point Cloud Compression standard, which is much less scalable.
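The interlaced sub-sampling idea can be sketched as follows. The concrete interlacing rule here (parity of the coordinate sum) is an assumption for illustration; the point is that each extra decoded layer adds points and raises the reconstruction resolution.

```python
import numpy as np

# Resolution-scalability sketch: split a voxelized point cloud into two
# interlaced layers by coordinate-sum parity. Decoding only the base
# layer yields a sparser cloud; adding the enhancement layer restores
# the full-resolution point set. (Parity rule assumed for illustration.)
def split_layers(points):
    parity = points.sum(axis=1) % 2
    return points[parity == 0], points[parity == 1]

pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]])
base, enhancement = split_layers(pts)
coarse = base                             # layer 1 only: fewer points
full = np.vstack([base, enhancement])     # all layers: full resolution
assert len(coarse) < len(full) == len(pts)
```

Because the layers interlace spatially, the base layer already covers the object's full extent, just at lower density, which is what makes the early truncation points usable reconstructions.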
- Improving multiscale recurrent pattern image coding with least-squares prediction mode
  Publication. Graziosi, Danillo B.; Rodrigues, Nuno M. M.; Silva, Eduardo A. B. da; Faria, Sérgio M. M. de; Carvalho, Murilo B. de
  The Multidimensional Multiscale Parser-based (MMP) image coding algorithm, when combined with flexible partitioning and predictive coding techniques (MMP-FP), provides state-of-the-art performance. In this paper we investigate the use of adaptive least-squares prediction in MMP. The linear prediction coefficients implicitly embed the local texture characteristics, and are computed based on a block's causal neighborhood (composed of already reconstructed data). Thus, the intra prediction mode is adaptively adjusted to the local context and no extra overhead is needed for signaling the coefficients. We add this new context-adaptive linear prediction mode to the other MMP prediction modes, which are based on the ones used in H.264/AVC; the best mode is chosen through rate-distortion optimization. Simulation results show that least-squares prediction is able to significantly increase MMP-FP's rate-distortion performance for smooth images, leading to better results than those of state-of-the-art, transform-based methods. Moreover, with the addition of least-squares prediction, MMP-FP presents no performance loss when used to encode non-smooth images, such as text and graphics.
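The key property above — coefficients fitted from the causal neighborhood, so the decoder can recompute them without side information — can be shown in a minimal per-pixel sketch. The context (left, up, up-left) and window sizes are assumptions for illustration, not MMP's actual configuration:

```python
import numpy as np

# Context-adaptive least-squares prediction sketch: for the current pixel,
# fit linear coefficients on causal (already reconstructed) samples, then
# predict from the same context pattern. The decoder repeats the exact
# same fit, so no coefficients need to be transmitted.
def lsp_predict(img, y, x, window=8):
    rows, cols = [], []
    for dy in range(1, window):                  # rows strictly above (causal)
        for dx in range(-window, window):
            ty, tx = y - dy, x + dx
            if 1 <= ty < img.shape[0] - 1 and 1 <= tx < img.shape[1] - 1:
                rows.append([img[ty, tx - 1], img[ty - 1, tx], img[ty - 1, tx - 1]])
                cols.append(img[ty, tx])
    a, *_ = np.linalg.lstsq(np.array(rows, float), np.array(cols, float), rcond=None)
    context = np.array([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]], float)
    return context @ a                           # prediction for pixel (y, x)

img = np.add.outer(np.arange(16.0), np.arange(16.0))   # smooth gradient image
assert abs(lsp_predict(img, 10, 10) - img[10, 10]) < 1e-6
```

On the smooth gradient the fitted mode predicts exactly, which mirrors why least-squares prediction helps most on smooth images.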
- IT/IST/IPLeiria Response to the Call for Evidence on JPEG Pleno Point Cloud Coding
  Publication. Rodrigues, Nuno M. M.; Pereira, Fernando; Guarda, André
  This document proposes two scalable point cloud (PC) geometry codecs, submitted to the JPEG Call for Evidence on Point Cloud Coding (PCC).
- IT/IST/IPLeiria Response to the Call for Proposals on JPEG Pleno Point Cloud Coding
  Publication. Guarda, André F. R.; Rodrigues, Nuno M. M.; Ruivo, Manuel; Coelho, Luís; Seleem, Abdelrahman; Pereira, Fernando
  This document describes a deep learning (DL)-based point cloud (PC) geometry codec and a DL-based PC joint geometry and colour codec, submitted to the Call for Proposals on JPEG Pleno Point Cloud Coding issued in January 2022 [1]. These proposals originated from research developed at Instituto de Telecomunicações (IT), in the context of the project Deep-PCR, entitled "Deep learning-based Point Cloud Representation" (PTDC/EEI-COM/1125/2021), financed by Fundação para a Ciência e Tecnologia (FCT).
- JPEG Pleno Point Cloud Coding Verification Model Description
  Publication. Guarda, André F. R.; Rodrigues, Nuno M. M.; Ruivo, Manuel; Coelho, Luís; Seleem, Abdelrahman; Pereira, Fernando
  This document describes the JPEG Pleno Point Cloud Coding [1] Verification Model (VM), consisting of a deep learning (DL)-based joint point cloud (PC) geometry and colour codec [2].
- Light Field Image Coding Based on Hybrid Data Representation
  Publication. Monteiro, Ricardo J. S.; Rodrigues, Nuno M. M.; Faria, Sérgio M. M.; Nunes, Paulo J. L.
  This paper proposes a novel efficient light field coding approach based on a hybrid data representation. Current state-of-the-art light field coding solutions operate either on micro-images or on sub-aperture images. Consequently, the intrinsic redundancy that exists in light field images is not fully exploited, as is demonstrated. This novel hybrid data representation approach makes it possible to simultaneously exploit four types of redundancy: i) sub-aperture image intra spatial redundancy, ii) sub-aperture image inter-view redundancy, iii) intra-micro-image redundancy, and iv) inter-micro-image redundancy between neighboring micro-images. The proposed light field coding solution offers flexibility for several types of baselines by adaptively exploiting the most predominant type of redundancy on a coding-block basis. To demonstrate the efficiency of using a hybrid representation, this paper proposes a set of efficient pixel prediction methods combined with a pseudo-video sequence coding approach, based on the HEVC standard. Experimental results show consistent average bitrate savings when the proposed codec is compared to relevant state-of-the-art benchmarks. For lenslet light field content, the proposed coding algorithm outperforms the HEVC-based pseudo-video sequence coding benchmark by an average bitrate savings of 23%. It is shown for the same light field content that the proposed solution outperforms the JPEG Pleno verification models MuLE and WaSP, as these codecs are only able to achieve 11% and −14% bitrate savings over the same HEVC-based benchmark, respectively. The performance of the proposed coding approach is also validated for light fields with wider baselines, captured with high-density camera arrays, being able to outperform both the HEVC-based benchmark, as well as MuLE and WaSP.
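The two representations combined by the hybrid approach above are two arrangements of the same 4D light field data. Assuming a 4D array laid out as (lenslet row, lenslet column, view u, view v), the conversion between them is a simple axis permutation, as this sketch shows (the layout itself is an assumption for illustration):

```python
import numpy as np

# A lenslet light field can be viewed as micro-images (one per lenslet
# position) or as sub-aperture images (one per view angle); with the
# assumed (row, col, u, v) layout, the two differ only by an axis swap.
def micro_to_subaperture(lf):
    # lf shape: (rows, cols, u, v)  ->  (u, v, rows, cols)
    return lf.transpose(2, 3, 0, 1)

lf = np.random.default_rng(1).random((6, 6, 3, 3))   # toy 6x6 lenslets, 3x3 views
sa = micro_to_subaperture(lf)
assert sa.shape == (3, 3, 6, 6)
assert sa[1, 2, 4, 5] == lf[4, 5, 1, 2]              # same sample, new indexing
```

Because the same sample sits in both arrangements, a codec can pick, block by block, whichever view exposes the dominant redundancy, which is the flexibility the hybrid representation provides.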