A scale- and orientation-adaptive extension of Local Binary Patterns for texture classification
Sebastian Hegenbart, Andreas Uhl

Pattern Recognition




Author's Accepted Manuscript

A scale- and orientation-adaptive extension of local binary patterns for texture classification

Sebastian Hegenbart, Andreas Uhl

PII: S0031-3203(15)00084-9

DOI: http://dx.doi.org/10.1016/j.patcog.2015.02.024

Reference: PR5363

To appear in: Pattern Recognition

Received date: 10 September 2014

Revised date: 15 January 2015

Accepted date: 25 February 2015

Cite this article as: Sebastian Hegenbart, Andreas Uhl, A scale- and orientation-adaptive extension of local binary patterns for texture classification, Pattern Recognition, http://dx.doi.org/10.1016/j.patcog.2015.02.024


A Scale- and Orientation-Adaptive Extension of Local Binary Patterns for Texture Classification

Sebastian Hegenbart∗, Andreas Uhl

Department of Computer Sciences, University of Salzburg, 5020 Salzburg, Austria


Abstract

Local Binary Patterns (LBP) have been used in a wide range of texture classification scenarios and have proven to provide a highly discriminative feature representation. A major limitation of LBP is its sensitivity to affine transformations. In this work, we present a scale- and rotation-invariant computation of LBP. Rotation-invariance is achieved by explicit alignment of features at the extraction level, using a robust estimate of global orientation. Scale-adapted features are computed in reference to the estimated scale of an image, based on the distribution of scale-normalized Laplacian responses in a scale-space representation. Intrinsic-scale-adaption is performed to compute features independent of the intrinsic texture scale, leading to a significantly increased discriminative power for a large number of texture classes. In a final step, the rotation- and scale-invariant features are combined in a multi-resolution representation, which significantly improves classification accuracy in texture classification scenarios involving scaling and rotation.
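As background for the limitation discussed above, the plain LBP operator (8 neighbors at radius 1, without the interpolation, uniform mapping, or adaptive extensions developed in this paper) can be sketched as follows; the function names are our own, not from the paper:

```python
import numpy as np

def lbp_8_1(img):
    """Basic LBP with 8 neighbors at radius 1 (no interpolation).

    Each interior pixel is compared against its 8 neighbors; a neighbor
    greater than or equal to the center contributes a bit to the code.
    This plain operator is sensitive to rotation (the bit order changes)
    and to scale (the fixed radius samples different texture detail when
    the image is resized) -- the sensitivity the paper addresses.
    """
    img = np.asarray(img, dtype=np.float64)
    c = img[1:-1, 1:-1]  # center pixels
    # neighbor offsets, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes -- the texture feature."""
    h = np.bincount(lbp_8_1(img).ravel(), minlength=256)
    return h / h.sum()
```

The histogram of codes, rather than the code image itself, serves as the texture descriptor that is compared between images.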

Keywords: LBP, texture, classification, scale, adaptive, rotation, invariant, scale-space

∗Corresponding Author. Full Address: Department of Computer Sciences, University of Salzburg, Jakob-Haringer Strasse 2, 5020 Salzburg, Austria; Tel.: (0043) 662 8044-6305; Fax: (0043) 662 8044-172.

Email addresses: shegen@cosy.sbg.ac.at (Sebastian Hegenbart), uhl@cosy.sbg.ac.at (Andreas Uhl)

Preprint submitted to Pattern Recognition, February 27, 2015

1. Introduction

A major challenge in texture classification is dealing with varying camera scales and orientations. As a result, research on scale- and rotation-invariant feature representations has been an active topic in recent years. Feature extraction methods providing such invariant representations can be grouped into four conceptually distinct categories.

In a theoretically elegant approach, methods of the first category transform the problem of representing features in a scale- and rotation-invariant manner from the image domain to a possibly easier, but equivalently invariant, representation in a suitable transform domain. Pun et al. [1] utilize the Log-Polar transform to convert scaling and rotation into translation; scale- and rotation-invariant features are then computed using the shift-invariant Dual-Tree Complex Wavelet Transform (DT-CWT [2]). Jafari-Khouzani et al. [3] propose a rotation-invariant feature descriptor based on the combination of a Radon transform with the Wavelet transform. A general drawback of this class of methods is that scaling can only be compensated at dyadic steps. As an improvement, Lo et al. [4] use a Double-Dyadic DT-CWT combined with a Discrete Fourier Transform (DFT) to construct scale-invariant feature descriptors at sub-dyadic scales. The periodicity of the DFT is also exploited by Riaz et al. [5] to compute scale-invariant features by compensating the shifts in accumulated Gabor filter responses.

In a more pragmatic approach, methods of the second category achieve scale- and rotation-invariance either explicitly, by a re-arrangement of feature vectors, or implicitly, by selection of suitable transform sub-bands. In general, methods in this class also rely on some sort of image transformation. Lo et al. [6] (using the DT-CWT), Montoya-Zegarra et al. [7] (using the Steerable Pyramid Transform) as well as Han et al. [8] and Fung et al. [9] (both relying on Gabor filter responses) are representative approaches of this category. In parallel to the first concept, methods of this class are often limited in the accuracy and amount of compensable scaling and rotation by the nature of the employed image transformation.

The obvious, but potentially most devious, category is based on a feature representation with inherent scale- and rotation-invariance. The fractal dimension [10], as a measure of the change in texture detail across the scale dimension, is a promising candidate for such a representation. Geometrically invariant feature representations based on the temporal series of outputs of pulse-coupled neural networks (PCNN) have been used by Ma et al. [11] and Zhan et al. [12]. As a consequence of the inherent scale- and rotation-invariance, however, this type of feature is likely to have a decreased discriminative power compared to other feature representations and often requires a generative, model-based approach, such as Bag-of-Words, to be competitive.

The fourth and last category of methods utilizes estimated texture properties to adaptively compute features with the desired invariances. Xu and Chen [13] use geometrical and topological attributes of regions, identified by applying a series of flexible threshold planes. Another large set of methods is based on the response of interest point detectors, such as the Laplacian of