Sitting Posture Analysis by Pressure Sensors

Kazuhiro Kamiya, Mineichi Kudo, Hidetoshi Nonaka and Jun Toyama
Division of Computer Science, Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, JAPAN
E-mail: {kami,mine,nonaka,jun}@main.ist.hokudai.ac.jp

Abstract
Information technologies promise personalized services that improve our living environment. Sitting is one of the most natural actions in daily life, and we focus on sitting behavior as a cue for providing such services. We used a pressure sensor sheet placed on a chair to identify sitting postures. In the experiments, we classified nine postures, including leaning forward / backward / right / left and legs crossed. We obtained classification rates of 98.9% when the sitting person was known and 93.9% when the person was not known.
1. Introduction
In recent years, there have been demands for improving the quality of our daily life by using information and network technologies. To provide suitable services only to the persons who want them, and in the right situations, we need to figure out who a person is and in what situation that person is placed. These questions should be answered by solving identification problems and by behavioral analysis. Identification using biometrics such as fingerprints, finger vein patterns and irises can attain high precision. However, such authentication systems require the user's cooperation, and in daily life it is not reasonable to expect cooperative actions many times at home or at the office. Identification in everyday life should be unnoticeable and should not cause physical or psychological disturbance. Therefore, the authors have been proposing a framework called "soft authentication" and have already presented a chair system using pressure sensors for person identification and a ceiling-sensor system using infrared sensors for tracking individuals [4, 7].
The chair system [7] used 32 connected pressure sensors. The identification rate ranged from 78% to 98% for 5 persons
and from 67% to 96% for 10 persons, depending on the situation. In particular, when the subjects sat in a relaxed position, the identification rate was over 95% even for 12 persons. Such a high rate is sufficient for identifying persons in everyday life. In those experiments, however, the pressure sensors were placed directly on the seating face of a chair, so the chair could not be changed. To improve portability, we have recently developed a sensor sheet that can be placed on any chair. The number of sensors was also increased to 64 (= 8 × 8). This new sensor sheet was used in the present study for posture identification.
Work operation analysis has been widely studied [3, 5, 6]. For example, Aoki et al. reported a way of monitoring and detecting irregular states of elderly residents living alone by using a pyroelectric detector [3]. Posture classification systems with pressure sensors have also been studied [5, 6]. In those studies, two sheets with 2,016 sensors each were placed on the seating face and the backrest. The sensor values were reduced to 15 dimensions by principal component analysis, and a classification accuracy of 79% for 10 postures was reported when the subjects were new users (that is, the training and test data came from different persons). However, this rate seems somewhat lower than expected. We therefore tried to improve posture identification using our new sensor sheet. In addition, we examined the difference between the person-known and person-unknown cases. From the results of our previous study, it can be assumed that sitting persons can be identified with almost 100% accuracy [7], and thus we can use knowledge about their weights and usual sitting postures measured beforehand.
2. Equipment
We developed a sensor sheet for measuring the pressure force on a seating face. The hardware construction is shown in Fig. 1, and the dimensions of the chair are given in Table 1. In total, 64 pressure sensors are placed on this sheet, as shown in Fig. 2, arranged on an 8 × 8 grid of cells 30 mm apart from one another. The specifications of the sensor (Flexiforce [1]) are given in Table 2, and the sensing part of a sensor is shown in Fig. 3. The sensors are very thin and bend easily, so a user can sit on the seat comfortably. Since the sensor sheet is light and thin, it can be carried anywhere and used in various everyday situations. In the experiments, we placed a low-rebound mat of 7 mm thickness on the sensor sheet. The sampling rate is 12.5 Hz, so a frame of 64 sensor values is obtained every 80 ms.

Figure 1. Hardware construction.
Figure 2. Sensor sheet on chair.
Figure 3. Pressure sensor.

Table 1. Dimensions of the chair
Height                  82 cm
Depth of seating face   41 cm
Width of seating face   42 cm
Height of seating face  47 cm

Table 2. Pressure sensor specifications
Thickness      0.208 mm
Length         203 mm
Width          14 mm
Sensing area   9.53 mm
Connector      3-pin male square pin
Maximum load   4.4 N
3. Posture Recognition
When a person sits down on the chair and changes posture, the sum of the pressure values typically behaves as shown in Fig. 4. The value increases rapidly when the person starts sitting and, after a while, becomes almost constant. We call the latter period a "stable part." When the person changes posture, the value shows a sharp increase and then a decrease. Consequently, we can divide a time series of pressure values into "stable parts," "unstable parts" and "changing parts." Even the unstable parts may provide some information about working status; for example, a large fraction of unstable parts may be evidence of overworking. However, we focus only on the stable parts in this paper.

Figure 4. Behavior of the pressure force: sum of sensor values plotted against frame number, annotated with stable, changing and unstable parts.
4. Methodology
In this section, we describe a way of classifying sitting postures. First, we have to extract the stable parts showing individual poses. To remove noise, we apply a high-frequency cut filter to a given time series. Then we extract the periods that last longer than one second and whose difference from the previous frame is less than θ (θ = 0.01 in the following experiments). Each frame gives a 64-dimensional vector. For classification, we collect such posture vectors from the time series of a trial, and from several trials of the subjects, in order to train a classifier.
Posture classification involves some basic issues. One issue is person dependency and time dependency. Different persons obviously sit in different ways, and even the same person might sit on different parts of the seating face. One way to resolve this problem is to normalize the pressure distribution by the distribution at the starting time. Another issue is weight dependency. We must examine whether a difference in weight has a large effect. The sensor values would increase linearly or non-linearly with an increase in weight. Therefore, it would be effective to divide the sensor values by their total value.
To absorb the first kind of difference, we record the posture vector when a subject sits down on the chair. We call this the normal posture (vector). To obtain the normal posture, we use the first ten frames of the first stable part and average them. After that, the normal posture is subtracted from every posture vector (of 64 pressure values); we call this position normalization. As a result, every posture is measured relative to the normal posture. For the second normalization, we divide every posture vector by the sum of its 64 values, which absorbs the weight of the subject; that is, every vector is normalized so that its values sum to one.

Table 3. Nine postures
(N)   Normal
(F)   Leaning forward
(B)   Leaning backward
(R)   Leaning right
(RC)  Right leg crossed
(RCR) Leaning right with right leg crossed
(L)   Leaning left
(LC)  Left leg crossed
(LCL) Leaning left with left leg crossed
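As an illustration of the procedure above, the following sketch extracts stable-part frames and applies the two normalizations. It is a minimal sketch under our own assumptions (NumPy/SciPy, a moving-average stand-in for the high-frequency cut filter, a particular way of measuring the frame-to-frame difference, and a particular order of the two normalizations), not the implementation used in this work.

# Minimal sketch of the preprocessing in Section 4 (our assumptions, not the
# authors' code).  frames: (T, 64) array of raw sensor values for one trial.
import numpy as np
from scipy.ndimage import uniform_filter1d

FRAME_RATE = 12.5          # frames per second (one frame every 80 ms)
THETA = 0.01               # threshold on the frame-to-frame difference
MIN_LEN = int(FRAME_RATE)  # a stable part must last more than one second

def extract_stable_frames(frames):
    """Return the frames belonging to stable parts."""
    frames = np.asarray(frames, dtype=float)
    # High-frequency cut filter; a simple moving average is used as a stand-in.
    smoothed = uniform_filter1d(frames, size=5, axis=0)
    # Frame-to-frame difference; measuring it on the sum-normalized
    # distribution is our assumption.
    dist = smoothed / np.maximum(smoothed.sum(axis=1, keepdims=True), 1e-9)
    diff = np.abs(np.diff(dist, axis=0)).sum(axis=1)
    stable = np.r_[False, diff < THETA]
    # Keep only runs of stable frames longer than one second.
    keep = np.zeros(len(frames), dtype=bool)
    start = None
    for t, flag in enumerate(stable):
        if flag and start is None:
            start = t
        if (not flag or t == len(stable) - 1) and start is not None:
            end = t + 1 if flag else t
            if end - start >= MIN_LEN:
                keep[start:end] = True
            start = None
    return smoothed[keep]

def normalize(raw_vec, raw_normal):
    """Position and weight normalization (how the two steps are combined is
    our reading of the text): both vectors are divided by their own sums,
    then the normal posture is subtracted."""
    v = raw_vec / max(raw_vec.sum(), 1e-9)
    n = raw_normal / max(raw_normal.sum(), 1e-9)
    return v - n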
5. Experiments
We classified nine sitting postures commonly seen in daily life (Table 3). In Table 3, similar postures are combined into three groups (hereafter the postures are abbreviated as N, F, B, and so on).
5.1. Experiment Conditions
The subjects were 10 male university students aged 21 to 24 years and weighing 57 kg to 90 kg (Fig. 5). Each subject was first asked to sit well back on the seat (lean deeply) and then to sit in an upright position; at this stage, the normal posture was measured. The subject was then asked to take the following sequence of postures: B → N → F → N → · · · → LCL, that is, to return to the normal posture after each posture. Each posture was maintained for 2–3 seconds. Breaks were taken between trials (one trial meaning one such sequence), and in each break the subject walked away from the chair. Each subject performed five trials, and 10 frames (about one second) were automatically extracted from the stable part of each posture. Thus, 450 (= 5 trials × 9 postures × 10 frames) frames were collected for each subject. The sensor values were normalized in each dimension so as to have variance 1, and a support vector machine (SVM) [2] with a radial basis function kernel and default parameter values was adopted as the classifier.
Figure 5. Weight histogram of the subjects (number of subjects per 5 kg bin, from 55–60 kg to 85–90 kg).
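As a rough illustration of the classifier setup described above (per-dimension scaling to unit variance and an RBF-kernel SVM with default parameters), the following sketch assumes scikit-learn; the arrays X and y are hypothetical stand-ins for the collected frames and their posture labels.

# A minimal sketch of the classifier setup in Section 5.1, assuming scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_posture_classifier():
    # Scale each of the 64 dimensions to variance 1 (whether the mean is also
    # removed is our assumption), then an RBF-kernel SVM with default
    # parameters (LIBSVM [2] was used in the paper).
    return make_pipeline(StandardScaler(), SVC(kernel="rbf"))

if __name__ == "__main__":
    # Synthetic stand-in data, only to show the intended usage.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(450, 64))           # 450 frames of 64 values each
    y = np.repeat(np.arange(9), 50)          # nine posture labels
    clf = make_posture_classifier().fit(X, y)
    print("training accuracy:", clf.score(X, y))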
Table 4. Confusion matrix in the unnormalized case (columns: true posture, 500 test frames each; rows: predicted posture)
        N    B    F   RC    R  RCR   LC    L  LCL
N     472    9    0   10   13    0   22    7    0
B       0  469   50    0    0    0    0   10    0
F       0   22  450    0   10    0    0   10    0
RC     10    0    0  440    2   10    0   20    0
R       8    0    0    0  457   11    3    0    0
RCR     0    0    0   30   18  479    0    0    0
LC     10    0    0    0    0    0  474    0   63
L       0    0    0   20    0    0    1  443   43
LCL     0    0    0    0    0    0    0   10  394
5.2. Person-unknown situation
The first situation was almost the same as in [5, 6]; that is, the training and test data came from different subjects. One of the ten subjects was used for testing and the remaining nine for training, giving 450 frames for testing and 4050 (= 450 × 9) frames for training. We repeated this division 10 times, so the classification rate was calculated by 10-fold (leave-one-subject-out) cross-validation.
Unnormalized case. The classification rate in the unnormalized case was 90.6%; the details are shown in Table 4. This rate is better than the 79% reported in [5, 6]. As expected, misclassification occurred more frequently within the same group (diagonal blocks) than between groups. On the other hand, "F" and "B" were sometimes misclassified as each other. This looks strange at first, but it turned out that both "F" and "B" correspond to a decrease in the pressure on the seat, because part of the weight is supported by the legs or by the backrest, and in both cases the normal posture has a strong influence.
Normalized case. By applying the two kinds of normalization, the classification rate increased to 93.9%. The confusion matrix is shown in Table 5. "N" was now perfectly separated from the other postures, mainly owing to the normalization by the normal posture. There are still difficult subtasks, such as separating "R" from "RCR" and "LC" from "LCL." One possible way to improve the performance is to choose an appropriate subset of features (sensors) depending on the target subtask. For example, the left-front and right-rear sensors would be sufficient for distinguishing "R" from "RCR."
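The leave-one-subject-out protocol above can be sketched as follows; clf would be a classifier such as the pipeline sketched after Fig. 5, and X, y and groups are hypothetical arrays holding all subjects' frames, posture labels and subject ids.

# A rough sketch of the person-unknown protocol in Section 5.2, assuming scikit-learn.
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def person_unknown_accuracy(clf, X, y, groups):
    # Train on 9 subjects (4050 frames) and test on the held-out subject
    # (450 frames), repeated for all 10 subjects; return the mean fold accuracy
    # (the paper's single pooled rate may be computed slightly differently).
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    return scores.mean()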
Table 5. Confusion matrix in the normalized case (columns: true posture, 500 test frames each; rows: predicted posture)
        N    B    F   RC    R  RCR   LC    L  LCL
N     500    0    0    0    0    0    0    0    0
B       0  490   40    0    0    0    8    1    0
F       0    0  458    0    0    0    0   10    0
RC      0    0    0  474    0   19    0   10    0
R       0    0    0    0  450    1    0    0    0
RCR     0    6    2   26   50  480   30    0    0
LC      0    0    0    0    0    0  462    0   60
L       0    4    0    0    0    0    0  471    0
LCL     0    0    0    0    0    0    0    8  440
Table 6. Classification results for the nine postures
Case                           Normalization           Accuracy
Person-unknown (10 persons)    Unnormalized            90.6%
                               Normalized (position)   93.0%
                               Normalized (+ weight)   93.9%
Person-known (each of 10)      Unnormalized            98.4%
                               Normalized (position)   98.8%
                               Normalized (+ weight)   98.9%
5.3. Person-known situation
Next, we examined the case in which the sitting person is known. To justify this assumption, we first re-examined the performance of the system for person identification, identifying the persons from their normal postures. The five trials of each subject were divided into one trial for testing and the other four for training. The resulting identification rate was 93.0% for 10 persons, which is comparable to that in our previous experiments [7]. We then assumed that the sitting person was known and trained the posture classifier only on that person's data. We obtained the classification rate for each subject in the same way and averaged the rates over the 10 subjects; the overall classification rate was 98.9%. The difference of 5% from the person-unknown case reflects the person dependency.
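The person-known evaluation can be sketched in the same spirit; per_subject is a hypothetical dictionary holding each subject's frames, posture labels and trial ids, and clf is a classifier like the one sketched earlier.

# A short sketch of the person-known evaluation in Section 5.3 (our assumptions,
# not the authors' code): for each subject, one of the five trials is held out
# for testing and the other four are used for training, and the resulting rates
# are averaged over the subjects.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def person_known_accuracy(clf, per_subject):
    # per_subject: dict subject_id -> (X, y, trial_ids) for that subject's
    # 450 frames (5 trials x 9 postures x 10 frames).
    rates = []
    for X, y, trial_ids in per_subject.values():
        scores = cross_val_score(clf, X, y, groups=trial_ids, cv=LeaveOneGroupOut())
        rates.append(scores.mean())
    return float(np.mean(rates))   # averaged over the 10 subjects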
6. Discussion
The results are summarized in Table 6. The basic performance of 90.6% was improved to 93.0% by position normalization and to 93.9% by the additional weight normalization. When the sitting person is known, the performance reached 98.9%. In a previous study [7], we found that the person identification rate declines after a long time interval. To examine whether this also holds for posture classification, we collected data from one subject twice, three weeks apart. We obtained a 100% classification rate when the two sessions were used in turn for training and testing. Although this is the result for only one subject, position normalization appears sufficient to solve this time-dependency problem.
7. Conclusion
We developed a new sensor sheet to improve sitting posture classification and examined its performance in two cases. In the person-unknown case, we obtained a classification accuracy of 93.9% with position and weight normalization. In the person-known case, we classified the nine postures almost perfectly (98.9%). In addition, the same system identified the subjects at a rate of 93%. Therefore, the system can identify both the person and the posture with accuracy rates of more than 90%. In this paper, we used only "stable part" frames for sitting-posture classification. To accomplish our final goal, namely analyzing the work operation of a sitting person, it will be necessary to analyze long time series of sensor data, for example to detect the user's fatigue.
References
[1] http://www.tekscan.com/.
[2] http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
[3] S. Aoki, M. Onishi, A. Kojima, and K. Fukunaga. Detection of a solitude senior's irregular states based on learning and recognizing of behavioral patterns. IEEJ Transactions on Electrical and Electronic Engineering, 125(6):259–265, 2005 (in Japanese).
[4] T. Hosokawa, M. Kudo, H. Nonaka, and J. Toyama. Soft authentication using an infrared ceiling sensor network. Pattern Analysis and Applications, accepted for publication.
[5] L. A. Slivovsky and H. Z. Tan. A real-time static posture classification system. Proceedings of the ASME Dynamic Systems and Control Division, 69-2:1049–1056, 2000.
[6] H. Z. Tan, L. A. Slivovsky, and A. Pentland. A sensing chair using pressure distribution sensors. IEEE/ASME Transactions on Mechatronics, 6(3):261–268, 2001.
[7] M. Yamada, K. Kamiya, M. Kudo, H. Nonaka, and J. Toyama. Soft authentication and behavior analysis using a chair with sensors attached: Hipprint authentication. Pattern Analysis and Applications, accepted for publication.