Surface Classification Using Acceleration Signals Recorded During Human Freehand Movement

Matti Strese, Clemens Schuwerk, Eckehard Steinbach1

Abstract— When a tool is used to tap onto an object or the tool is dragged over its surface, vibrations are induced in the tool that can be captured using acceleration sensors. Based on these signals, this paper presents an approach for tool-mediated surface texture classification that is robust against varying scan-time parameters. We examine freehand recordings of 69 textures and propose a classification system that uses perception-related features such as hardness, roughness and friction, as well as selected features adapted from speech recognition, such as modified cepstral coefficients. We focus on mitigating the effect of varying contact force and hand speed on these features as a prerequisite for a robust machine-learning-based approach to surface texture classification. Our system works without explicit scan force and velocity measurements. Experimental results show that the proposed approach allows for successful classification of surface textures under varying freehand movement conditions. The proposed features lead to a classification accuracy of 95% when combined with a Naive Bayes classifier.

I. INTRODUCTION

Stroking a rigid tool over the surface of an object or tapping onto the object surface induces accelerations in the tool. Measured with an accelerometer, the corresponding surface texture signals (or, for short, texture signals) represent the surface characteristics. They can be used to recreate the feel of real surfaces using voice coil actuators [1], [2], [3], to enhance traditional teleoperation systems [4], or to recognize surfaces using robots [5], [6], [7], [8].

In this paper we investigate surface texture classification using acceleration signals recorded during human freehand movement. When a human strokes a rigid tool over an object surface, the applied force, the scan velocity, and the angle between the tool and the surface may vary during the surface exploration and between subsequent exploration sessions. These scan-time parameters heavily influence the nature of the acquired texture signals [3]. The variability of the recorded acceleration signals complicates the texture classification process. Related works in the area of robotic texture recognition, reviewed in Section II, use well-defined exploration trajectories or controlled scan-time parameters to overcome this problem. With humans performing the surface explorations, however, such well-defined scan-time parameters are not feasible. We focus on the design of features extracted from acceleration signals that are invariant against

This work has been supported, in part, by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant agreement no. 258941.
1The authors are with the Chair of Media Technology, Technische Universität München, 80333 München, Germany. [email protected], [email protected], [email protected]

Fig. 1: Texture classification pipeline. Training and test data from the database of surface textures pass through (1) feature extraction into a database of features, followed by (2) feature matching, which produces the classification results.

these scan-time parameters. The proposed features are used in combination with standard machine learning (ML) algorithms to classify the surface. Fig. 1 illustrates the texture classification pipeline studied in this work.

The remainder of this paper is structured as follows. In Section II we review relevant work in the area of robotic texture recognition. Our texture recording setup is introduced in Section III. The proposed features are described in Section IV and evaluated in Section V. Our evaluation includes a comparison of the proposed features to features from related work, tested with three different ML algorithms.

II. RELATED WORK

Acceleration signals have been used for robotic texture recognition. In [9], a biomimetic sensor emulating the human finger, mounted on an industrial robot, is used to scan surface textures and provide acceleration data for the machine classification of materials, achieving 95% accuracy. The dominant five peaks of the spectral envelope form the feature vector. This feature definition for the matching/classification process heavily depends on the scan-time parameters: it is known that texture signals change mainly with the applied force and scan velocity [3], [10]. Therefore, the robot needs to follow well-defined trajectories, and only the signals acquired at constant velocity are used for training and classification. Hence, extensive training with various predefined trajectories is necessary. The force applied to the surface during the recordings is explicitly controlled to be constant. Note that industrial robots can repeat movement trajectories very precisely, are normally equipped with various sensors and, hence, are able to measure the scan-time parameters at runtime. Similar limitations can be observed in the work presented in [5]. Herein, a humanoid robot with a three-axis accelerometer mounted on an artificial fingernail provides acceleration data for texture classification.
The discretized spectrogram of the recorded signal is used as a feature for surface classification. Here too, extensive training with a controlled

applied force, fixed velocities and well-defined exploratory movements is used, since the feature definition again heavily depends on these scan-time parameters. In another related work [8], the applied force and scan velocity are explicitly measured and used as features. In addition, the energies in 30 spectral bins serve as features for surface classification. This approach also suffers from the additional sensor requirements and from the shortcomings of explicit training with a range of different scan parameters. For a database of 15 different textures, the system achieves a classification accuracy of 72% for a single exploration using a multi-class SVM. In [6], the fundamental frequency of the acceleration signal is used as a feature representing the texture. In addition to controlled force and velocity conditions during scanning, the surface profiles are constrained to be well-defined (periodic) sinusoidal gratings, for which the fundamental frequency is expected to change almost linearly with the scan velocity and thereby aid the surface classification. Lastly, the BioTac sensor, a sophisticated multimodal tactile sensor that resembles a human fingertip and measures contact forces, vibrations and temperature, is mounted on a linear stage to stroke the textures in [7]. A classification rate of 95.4% among 117 textures is reported. Features like roughness, fineness and traction are identified from the literature on human perception and modeled analytically. Although these features are descriptive for well-defined scan parameters, they still show a high variance with the applied force and the scan velocity [7].
In summary, the approaches reviewed above ([5], [6], [7], [8], [9]) either deal with artificially created textures, require constant scan parameters (force and velocity) during recording, or explicitly measure the scan parameters and use them to match the texture signal against a database that covers various scan force/velocity combinations. In this paper, we contribute features that are either invariant against these scan-time parameters or offer large value variability among the textures, allowing for robust texture classification during unconstrained texture exploration. Beyond freehand surface exploration, such robust features can potentially also improve the performance of robotic texture recognition systems, as the well-controlled surface exploration and the extensive training of the classifiers can be avoided. The approaches from related work depend on multiple costly sensors. By contrast, we only use an acceleration sensor for the task at hand and, if friction measurements are desired, two cheap force sensing resistors (FSRs) that can easily be operated with the same electronics as the acceleration sensor. Moreover, we include tapping features in the classification framework to evaluate the hardness of different surfaces, which has not been considered in the literature so far. Finally, we implement and compare features from related work on our database of freehand recordings and quantify the improvement achieved by the features proposed in this paper.

Fig. 2: Haptic stylus used for texture analysis.

Fig. 4: Surface texture data trace with its characteristic sections of impact impulse and movement phase, adapted from [11].

Fig. 5: Impact impulses obtained by tapping on the surface several times with increasing hand velocity towards the surface.

III. RECORDING PROCEDURE

The general recording procedure has been adopted from our previous work in [11]. We use a three-axis LIS344ALH accelerometer (STMicroelectronics) with a range of ±6 g and combine all three axes into one signal using DFT321 as in [12]. Different from [11], the haptic stylus is not attached to a Phantom Omni device. In this work, we use a free-to-wield object with the same tooltip plus two FSRs in order to evaluate lateral forces for friction measurements, as Fig. 2 shows. We extend the haptic texture database from [11] to a total of 69 textures (see Fig. 3). In order to be able to flexibly add new textures and to improve the recording modalities, we decided not to rely on the publicly available and extensive haptic database from [13], as it does not contain tapping data. Moreover, we want to be able to analyze and evaluate features on controlled recordings during the feature design process, which is not possible with the data in [13] either. Our controlled recordings consist of acceleration signals where either the velocity or the force is linearly increased using a motor-controlled rotatory plate (see [11] for details on the recording setup). The velocity is controlled by increasing the rotatory plate motor current. The force is controlled by pressing the Phantom Omni device onto the surface. Fig. 4 illustrates a typical data trace with its different movement phases: impacting on the surface, overcoming the static friction, and moving along the surface. For the classification task itself, 10 freehand data traces (length of 25 seconds) per texture are collected as described in [11]. We additionally recorded linearly increasing tapping data for our hardness feature evaluation as shown in Fig. 5 and lateral

Fig. 3: Materials included in our haptic texture database, freely accessible at http://www.lmt.ei.tum.de/texture/

force measurements for friction estimation. Both are obtained using freehand recordings. The tapping data consists of a series of impact impulses of different strength, created by increasing the hand velocity towards the surface. The sampling frequency of the recorded acceleration data is 10 kHz in this work.

IV. FEATURES AND TEXTURE CLASSIFICATION

In this section, we propose a set of perception-related features (subsections B-E) that describe the hardness, damping, roughness and friction of material surfaces. Additionally, in subsection A we introduce a modified version of the Mel Frequency Cepstral Coefficients (MFCC), which are widely used in speech recognition, and adapt them to our scenario of surface texture classification.

A. Modified Frequency Cepstral Coefficients

The field of audio signal processing offers a variety of methods that can also be used for the analysis of haptic data traces, as stated in [8]. For instance, cepstral analysis captures the spectral envelope of a given data frame. We adapt the MFCC implementation from [14] and modify it for our texture signals. Most importantly, the filter bank has to be adjusted to the acceleration sensor bandwidth. Also, audio-specific processing steps such as the alpha value and the liftering procedure are removed. Since [8] also argues that an exponential decay in the spectral amplitudes is directly related to haptic perception, we test exponentially decaying as well as constant Mel filter bank spectral amplitudes. Fig. 6 shows the processing pipeline for a 25 ms texture signal segment, which is extracted from the movement phase. This calculation is repeated and averaged over overlapping data frames (10 ms overlap) in order to obtain 13 mean cepstral coefficients.

B. Hardness and Damping Features

Decaying sinusoids are used as a model for impact impulses in [15]. We assume that hardness and damping

Fig. 6: Modified MFCC for a texture frame of 25 ms.

properties of surfaces provide further valuable dimensions for successful texture classification.

1) Hardness: We extract the hand acceleration data h = [|h_1|, ..., |h_n|]^T and impact impulse data i = [|i_1|, ..., |i_m|]^T shown in Fig. 7 from each segmented tapping signal. Additionally, we extract the indices n_max,1...3 of the three largest impact impulses in i and calculate their temporal centroid n̄. Based on this, we define our hardness feature (H) as:

H = (max(i) / n̄) · (1 / ∑_{i=1}^{n} h_i)    (1)
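As an illustration, the hardness feature of (1) can be sketched in a few lines of Python with numpy; this is our sketch, not the authors' original implementation, and it assumes the tapping signal has already been segmented into hand-acceleration and impact parts:

```python
import numpy as np

def hardness(hand_accel, impacts):
    """Sketch of the hardness feature H in Eq. (1).

    hand_accel: acceleration samples while the hand approaches the surface
    impacts:    impact impulse samples of the segmented tapping signal
    """
    h = np.abs(np.asarray(hand_accel, dtype=float))
    i = np.abs(np.asarray(impacts, dtype=float))
    # temporal centroid of the indices of the three largest impulses
    n_bar = np.mean(np.sort(np.argsort(i)[-3:]))
    # rising slope (max impulse over index centroid), normalised by the
    # approximated hand velocity (sum of absolute hand accelerations)
    return (np.max(i) / n_bar) * (1.0 / np.sum(h))
```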

The rising slope (first term in (1)) is calculated by dividing the absolute maximum in the tapping data by the index centroid of the three largest tapping impulses. The hand velocity (denominator of the second term in (1)) is approximated as the sum of all absolute data values in the hand acceleration data vector h. Fig. 8 shows an evaluation of the proposed hardness feature for tapping on three different materials with increasing impact velocity. It can be seen from Fig. 8 that the proposed hardness feature shows limited intra-texture variation for increasing impact velocity. In comparison, the inter-texture variation of H is significant as desired. 2) Damping: We identify the Spectral Centroid (SC) [7] of the impact impulse data as inversely related to the damping of a material. If I denotes the Discrete Cosine Transform (length m = 4096 in this work) of the tapping


Fig. 7: Magnitude of impact impulse data with visible sections of sensor noise, hand acceleration, impact and decay.

Fig. 9: Exemplary roughness comparison for a stone texture for linearly increasing scan force. The definition of roughness from [7] (black circles) shows higher variance compared to our roughness feature (blue crosses) in (3).

Fig. 8: Hardness feature for different materials. The impact velocity increases from left to right.

Fig. 10: Temporal roughness feature for different textures, tested on movement phase data under linearly increasing scan force.

data i, we calculate the damping-related feature (SC) as:

SC = ( ∑_{k=1}^{m/2} |I(k)|² · k ) / ( ∑_{k=1}^{m/2} |I(k)|² )    (2)
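A small Python sketch of (2); SciPy's DCT is used here, and the zero-padding/truncation of short taps to m = 4096 samples is our assumption:

```python
import numpy as np
from scipy.fft import dct

def spectral_centroid(tap, m=4096):
    """Sketch of the damping-related feature SC in Eq. (2)."""
    i = np.zeros(m)
    seg = np.asarray(tap, dtype=float)[:m]
    i[:len(seg)] = seg                      # zero-pad/truncate to length m
    I = np.abs(dct(i, type=2, norm='ortho'))
    k = np.arange(1, m // 2 + 1)            # bin indices k = 1 ... m/2
    p = I[:m // 2] ** 2
    return np.sum(p * k) / np.sum(p)        # energy-weighted mean bin index
```

Softer, more strongly damped materials concentrate their impulse energy in the lower bins and therefore yield smaller SC values.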

Softer materials tend to behave as low pass filters and exhibit stronger damping (smaller SC values) during our recordings. By contrast, stones and metals retain strong high-frequency spectral components regardless of the impact velocity and the resulting varying spectral energy of an impact impulse.

C. Roughness

The definition of microscopic texture roughness as given, for instance, in [7] is strongly perception-motivated and variant to scan-time parameter changes. In our scenario of texture classification, an invariant roughness value is needed. To achieve this, we first apply an FIR high pass filter with a cutoff frequency of 100 Hz to each data trace. We then compute a Wavelet Transformation (Coiflet3) and extract the detail levels d_1 = [|d_11|, ..., |d_1n|]^T and d_5 = [|d_51|, ..., |d_5n|]^T for further processing. We assume that rougher textures show stronger differences between these detail levels and calculate the temporal roughness (TR) as shown in (3), with d̄ denoting the arithmetic mean of vector d. Both detail levels are affected by changes in scan-time parameters; the ratio thus mitigates the influence of varying speed and force conditions.

TR = log_10(d̄)  with  d = d_1 − (d̄_1 · d_5) / d̄_5    (3)

In Fig. 9, a comparison is drawn between the definition from [7] and the proposed roughness feature from (3). Fig. 10 shows that this new roughness definition can be used for classification, because different textures generate roughness values which are approximately scan-invariant but show large differences for different materials. We also consider a spectral interpretation of texture roughness (SR), where we assume that rapid changes in the spectral domain indicate larger roughness of a texture. For the movement data x we define a data frame with a fixed size of 5000 data points as xn = [xn , . . . , xn+5000 ]T and its Discrete

Fig. 11: Macroscopic roughness feature extraction block diagram.

Cosine Transform as X_n, where we take the absolute value of each DCT coefficient. We calculate the difference spectrum D_k = X_n − X_{n+100} of each frame and advance in steps of 100 data points until the end of x is reached. This leads to K difference spectra D_k. Our Spectral Roughness feature (SR) is then defined as follows:

SR = log_10( ∑_{k=1}^{K} D_k² )    (4)
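Under our reading of (4) — the squared entries of each difference spectrum are summed to a scalar — the spectral roughness can be sketched as follows, with SciPy's DCT as a stand-in for the transform used in the paper:

```python
import numpy as np
from scipy.fft import dct

def spectral_roughness(x, frame=5000, step=100):
    """Sketch of the spectral roughness SR in Eq. (4)."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for n in range(0, len(x) - frame - step + 1, step):
        X_n = np.abs(dct(x[n:n + frame], type=2, norm='ortho'))
        X_s = np.abs(dct(x[n + step:n + step + frame], type=2, norm='ortho'))
        total += np.sum((X_n - X_s) ** 2)   # energy of the difference spectrum
    return np.log10(total + 1e-12)          # small epsilon guards log10(0)
```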

This feature exhibits a wide spread between very smooth and very rough textures and is hence considered suitable for texture classification.

D. Macroscopic Roughness

Macroscopic roughness comprises texture grating wavelengths above 1 mm [16] and thus the low frequency content of a texture acceleration signal. Unlike [8], we do not remove spectral components below 10 Hz: we observed that coarse structures like stones or meshes carry valuable information in these low frequency vibrations. We combine four ways to identify the macroscopic roughness of a texture, which are illustrated in Fig. 11.

1) Waviness: Waviness describes the deviation of the low frequency signal slopes in the time domain. First, we segment our low pass filtered (100 Hz cutoff frequency) movement data vector x into frames, calculate the mean absolute value of each frame, and store these values in a vector m. We choose 200 data points as the frame size, following the observation in [8] that the scan force and velocity variation

Fig. 12: Fineness feature for different textures during the movement phase at increasing movement velocity.

Fig. 13: Left: Wheatstone bridge circuit. Right: Measured friction force of an exemplary foam texture.

is almost negligible in this range. To mitigate the effect of increasing scan force or velocity, we also calculate a 100-data-point Simple Moving Average (SMA) s_100 of x. Our waviness feature (WV) is calculated as:

WV = log_10(σ(m − s_100))    (5)

where σ denotes the standard deviation of a vector.

2) Spikiness: Spikes in otherwise smooth acceleration time-domain signal traces are a strong indicator for bumpy surfaces or meshes on which the device occasionally gets stuck. We apply a custom spike detection algorithm to the low pass filtered signal trace x. With an SMA (5000 data points) x̄_5000 we calculate a threshold vector x_th = 2·σ(x) + x̄ + x̄_5000 and a difference vector x_Δ = x − x_th. All values smaller than zero are set to zero and we calculate our spikiness feature (SP) as:

SP = log_10(x̄_Δ)    (6)
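Both time-domain features can be sketched as below. The alignment between the frame means m and the moving average s_100 in (5), and the use of the mean of x_Δ in (6), are our assumptions where the paper leaves the vector-to-scalar conversion implicit:

```python
import numpy as np

def sma(x, w):
    """Simple moving average with window w (output has the length of x)."""
    return np.convolve(x, np.ones(w) / w, mode='same')

def waviness(x, frame=200, w=100):
    """Sketch of the waviness feature WV in Eq. (5)."""
    n = len(x) // frame * frame
    m = np.abs(x[:n]).reshape(-1, frame).mean(axis=1)  # per-frame mean |x|
    m_up = np.repeat(m, frame)                         # stretch m to len(x)
    return np.log10(np.std(m_up - sma(x[:n], w)) + 1e-12)

def spikiness(x, w=5000):
    """Sketch of the spikiness feature SP in Eq. (6)."""
    x = np.asarray(x, dtype=float)
    x_th = 2 * np.std(x) + np.mean(x) + sma(x, w)      # adaptive threshold
    x_delta = np.maximum(x - x_th, 0.0)                # negatives set to zero
    return np.log10(np.mean(x_delta) + 1e-12)
```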

3) Fineness: In [7], the Spectral Centroid is introduced as a feature for texture classification, but not evaluated on unconstrained acceleration data. Applied during the movement phase, it is related to the fineness (F) of the texture and supposed to increase at higher velocities. With our texture recording setup (which differs from the one in [7]), this observation could not be confirmed in our experiments: F stays nearly constant, whether applied to controlled or freehand data traces. In Fig. 12, we calculate the SCs as introduced in Section IV-B.2 per frame for three different materials under linearly increasing velocity conditions in our controlled recordings. Though the values vary around an average, differences in the texture fineness F exist that can be used for classification. The approach in [7] deliberately cuts off frequencies above 600 Hz, motivated by the perception threshold of the human skin. In our opinion, the frequencies above 600 Hz carry important information for texture classification. Hence, we do not apply such a filter.

4) Regularity: The amount of self-repeating patterns and regular structures in the signal trace is a characteristic texture property. As shown in [4], a comparison to voiced/unvoiced speech can be drawn, because signals may either show quasi-periodic patterns or resemble random noise. Each movement signal value from x is range-normalized with the denominator d = max(x) − min(x) and stored in x̂. We calculate the auto-correlation vector r_k of x̂ and take the derivative Δr_k = r_k − r_{k−1}. All values smaller than zero in Δr_k are set to zero, since we assume that the amount of regularity is correlated with the positive slopes of the auto-correlation function r_k of a texture signal. We finally calculate our regularity feature (RG) as:

RG = Δr̄_k    (7)
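A compact sketch of the regularity feature; the one-sided auto-correlation and the averaging of the clipped derivative follow our reading of (7):

```python
import numpy as np

def regularity(x):
    """Sketch of the regularity feature RG in Eq. (7)."""
    x = np.asarray(x, dtype=float)
    x_hat = x / (np.max(x) - np.min(x))                # range normalisation
    r = np.correlate(x_hat, x_hat, mode='full')[len(x_hat) - 1:]
    dr = np.diff(r)                                    # derivative of r_k
    dr[dr < 0] = 0.0                                   # keep positive slopes
    return np.mean(dr)
```

A quasi-periodic trace keeps large positive slopes throughout its auto-correlation and therefore scores higher than a noise-like trace.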

E. Friction

Friction is a major dimension in human texture classification, as stated in [16]. In [8], it is estimated from the force signal, measured with an expensive force sensor. As we want to avoid such a sensor, we attach two FSRs to both sides of the gripping part of the haptic stylus. Via a Wheatstone bridge, shown in Fig. 13 (left), the differential voltage is recorded by the DAQ from [11]. We measure a differential voltage vector, shown in Fig. 13 (right), and take the absolute value of each entry, v_Δ = [|v_1|, ..., |v_n|]^T, as we disregard the direction in which the user moves. We observe, for instance, that foam textures generally compel the user to apply a higher lateral force than metal plates or stone tiles in order to reach the desired exploration velocity. Hence, we use the mean differential voltage magnitude as the feature describing friction (Fr):

Fr = v̄_Δ    (8)

V. EVALUATION

We evaluate the different features on our freehand recordings using tenfold cross-validation and standard ML approaches. The accumulated number of hits (correctly classified textures) after the cross-validation represents the performance of the applied features on this database. We apply three standard ML approaches on the instance space, where all data is numerical, supervised (true texture names serve as labels) and normalized between 0 and 1. Sequential Feature Selection (Matlab Statistics Toolbox) was used to identify the relevant features. A Euclidean-distance-based approach [18] constitutes a first and intuitive way of classification in this context. Second, a Naive Bayes classifier is used as in [7]. Finally, Linear Discriminant Analysis is known for reducing the dimensionality of the feature space (see [18]) and thereby lessens the curse of dimensionality [7]. The results for the proposed features and for features from related work, when applied to our database, are shown in Table I.
After applying feature selection, our features combined with five modified cepstral coefficients work best with the Naive Bayes Classifier and achieve an overall accuracy of 95% (see last column in Table I). Modified cepstral coefficients also yield a remarkable accuracy of 88% and confirm their usefulness not only in speech recognition but also in haptic texture classification.
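The evaluation protocol maps directly onto standard tooling. As an illustration, a scikit-learn sketch (our choice of library; the paper's experiments used Matlab) of tenfold cross-validation with per-fold [0, 1] normalisation and a Naive Bayes classifier:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

def evaluate_features(X, y, folds=10):
    """Mean tenfold cross-validation accuracy of a Naive Bayes classifier.

    X: feature matrix (one row per recording), y: texture labels.
    The [0, 1] normalisation is fit inside each training fold so that no
    test-set statistics leak into training.
    """
    clf = make_pipeline(MinMaxScaler(), GaussianNB())
    return cross_val_score(clf, X, y, cv=folds).mean()
```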

TABLE I: Comparison of the classification performance for different textures. The first part shows the results obtained with our proposed features when used individually. The second part shows the results using features from related work. The third part shows the results when feature selection is applied.

| Feature Name                                         | Dim. | Naive Bayes (%) | Linear Discriminant (%) | Eucl. Dist. (%) | Reference         |
|------------------------------------------------------|------|-----------------|-------------------------|-----------------|-------------------|
| Macroscopic Roughness                                | 3    | 48              | 38                      | 32              | Section IV-D      |
| Microscopic Roughness                                | 2    | 25              | 31                      | 22              | Section IV-C      |
| Hardness                                             | 2    | 14              | 11                      | 10              | Section IV-B      |
| 5 Modified MFCC (2,5,6,8,12)                         | 5    | 67              | 60                      | 47              | Section IV-A      |
| Friction                                             | 1    | 10              | 8                       | 7               | Section IV-E      |
| Modified MFCC, Constant Mel Filter Bank              | 13   | 88              | 87                      | 74              | adapted from [14] |
| Modified MFCC, Exp. Decreasing Mel Filter Bank       | 13   | 89              | 88                      | 71              | adapted from [14] |
| 30 Energy Values (Gaussian Binning)                  | 30   | 39              | 30                      | 26              | [8]               |
| Spectrogram                                          | 125  | 70              | 62                      | 15              | [5]               |
| Spectral Centroid                                    | 1    | 15              | 8                       | 8               | [7]               |
| Standard Roughness                                   | 1    | 11              | 10                      | 10              | [7]               |
| 10 LPC Coefficients                                  | 10   | 5               | 4                       | 4               | [10]              |
| Mean, Variance, Wavelet Energy                       | 3    | 20              | 10                      | 7               | [17]              |
| 5 Relevant Spectral Peaks                            | 5    | 14              | 12                      | 6               | [9]               |
| Micro-/Macro-Roughness, Hardness, 5 MFCC             | 12   | 91              | 87                      | 70              | Section V         |
| Micro-/Macro-Roughness, Hardness, 5 MFCC, Friction   | 13   | 95              | 92                      | 83              | Section V         |

We believe that their success originates from the audible phenomenon of a characteristic sound per texture, even when velocity and force vary. Ultimately, the causes of the audible sound and of the detected acceleration are the same, namely the vibrations of the surface due to the tool-mediated interaction. None of the related-work approaches, regardless of dimensionality, reached the accuracy of either the modified cepstral coefficients or our method. Moreover, approaches using a high number of features suffer from overfitting unless enough training data is gathered [18]; methods with many features therefore require a large training database to avoid this problem. The best result is obtained by combining the content-based features with a subset of the Modified MFCC and the Naive Bayes classifier, as indicated in Table I. If the friction feature is omitted and only acceleration data is used, we achieve a classification accuracy of 91%.

VI. CONCLUSION

We describe a haptic texture database containing controlled and freehand recorded haptic signals. New perceptual features and features adapted from audio signal processing that mitigate force and velocity dependencies are presented and compared to state-of-the-art methods using three standard ML approaches. A combination of the modified cepstral coefficients and the proposed perceptual features as input to a Naive Bayes classifier yields a high accuracy of 95%. In future work, more ML approaches can be investigated. The methods proposed in this paper constitute one of many ways in which such instance spaces can be classified, and more advanced approaches may perform better. The proposed features can also be tested on the database presented in [13] to cross-validate their robustness in terms of exerted scan force and velocity.

REFERENCES

[1] K. Kuchenbecker, J. Romano, and W. McMahan, "Haptography: Capturing and Recreating the Rich Feel of Real Surfaces," in The 14th International Symposium on Robotics Research (ISRR), 2009.
[2] W. McMahan, J. M. Romano, A. M. A. Rahuman, and K. J. Kuchenbecker, "High frequency acceleration feedback significantly increases the realism of haptically rendered textured surfaces," in IEEE Haptics Symposium (HAPTICS), March 2010, pp. 141–148.

[3] J. M. Romano and K. J. Kuchenbecker, "Creating realistic virtual textures from contact acceleration data," IEEE Transactions on Haptics, vol. 5, no. 2, pp. 109–119, April 2012.
[4] R. Chaudhari, B. Cizmeci, K. J. Kuchenbecker, S. Choi, and E. Steinbach, "Low Bitrate Source-filter Model Based Compression of Vibrotactile Texture Signals in Haptic Teleoperation," in ACM Multimedia 2012, October 2012, pp. 409–418.
[5] J. Sinapov and V. Sukhoy, "Vibrotactile recognition and categorization of surfaces by a humanoid robot," IEEE Transactions on Robotics, vol. 27, no. 3, pp. 488–497, June 2011.
[6] C. M. Oddo, M. Controzzi, L. Beccai, C. Cipriani, and M. C. Carrozza, "Roughness Encoding for Discrimination of Surfaces in Artificial Active-Touch," IEEE Transactions on Robotics, vol. 27, no. 3, pp. 522–533, June 2011.
[7] J. A. Fishel and G. E. Loeb, "Bayesian exploration for intelligent identification of textures," Frontiers in Neurorobotics, vol. 6, no. 4, pp. 1–20, June 2012.
[8] J. M. Romano and K. J. Kuchenbecker, "Methods for Robotic Tool-Mediated Haptic Surface Recognition," in IEEE Haptics Symposium (HAPTICS), February 2014, pp. 49–56.
[9] N. Jamali and C. Sammut, "Majority voting: material classification by tactile sensing using surface texture," IEEE Transactions on Robotics, vol. 27, no. 3, pp. 508–521, June 2011.
[10] H. Culbertson, J. Unwin, and K. J. Kuchenbecker, "Modeling and Rendering Realistic Textures from Unconstrained Tool-Surface Interactions," IEEE Transactions on Haptics, vol. 7, no. 3, pp. 381–393, July 2014.
[11] M. Strese, J.-Y. Lee, C. Schuwerk, Q. Han, H.-G. Kim, and E. Steinbach, "A haptic texture database for tool-mediated texture recognition and classification," in Proc. of IEEE HAVE, Dallas, Texas, USA, October 2014.
[12] N. Landin, J. M. Romano, W. McMahan, and K. J. Kuchenbecker, "Dimensional reduction of high-frequency accelerations for haptic rendering," in Haptics: Generating and Perceiving Tangible Sensations. Springer, 2010, pp. 79–86.
[13] H. Culbertson, J. J. L. Delgado, and K. J. Kuchenbecker, "One hundred data-driven haptic texture models and open-source methods for rendering on 3d objects," in IEEE Haptics Symposium (HAPTICS), February 2014, pp. 319–325.
[14] D. Ellis. (2005) Reproducing the feature outputs of common programs using Matlab and melfcc.m. [Online]. Available: http://labrosa.ee.columbia.edu/matlab/rastamat/mfccs.html
[15] K. J. Kuchenbecker, J. Fiene, and G. Niemeyer, "Improving contact realism through event-based haptic feedback," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 2, pp. 219–230, 2006.
[16] S. Okamoto, H. Nagano, and Y. Yamada, "Psychophysical Dimensions of Tactile Perception of Textures," IEEE Transactions on Haptics, vol. 6, no. 1, pp. 81–93, March 2013.
[17] D. S. Chathuranga, Z. Wang, V. A. Ho, A. Mitani, and S. Hirai, "A biomimetic soft fingertip applicable to haptic feedback systems for texture identification," in Proc. of IEEE HAVE, 2013, pp. 29–33.
[18] P. Flach, Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge University Press, 2012.