Further work on the VAT algorithm by [28] developed an approach for visually assessing the cluster tendency of objects represented by a relational data matrix. In [29], the authors proposed a modified technique that significantly reduces the computational complexity of the iVAT algorithm [18]. The general approach of our model is to extract a broad set of features from each email sample that can be passed to our classification engine. Figure 1 shows the architecture of the classification system. In this process, we collect the features of the incoming email contents and convert them into a vector space where each dimension corresponds to a feature of the whole corpus of email messages.

The initial transformation is often a null step in which the output text is simply the input text. The corpus of emails used in our system is taken from [10]. In our feature extraction process we use three sets of information from each email: (i) the message body, (ii) the message header and (iii) email behaviour. Each email is parsed as a text file to identify each header element and distinguish it from the body of the message.

Every substring within the subject header and the message body was delimited by white space. Each substring is considered a token, and an alphabetic word was defined as a token, delimited by white space, containing only English alphabetic characters (A-Z, a-z) or apostrophes. The tokens were evaluated to create a set of 21 hand-crafted features from each e-mail message.
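As an illustration of this tokenization step, the following Python sketch splits the subject and body on white space and derives a few features of the hand-crafted kind described above. The paper's actual 21 features are not enumerated in the text, so the feature names below are illustrative assumptions.

```python
import re

def tokenize(text):
    """Split on white space; keep alphabetic words (A-Z, a-z, apostrophes)."""
    tokens = text.split()
    words = [t for t in tokens if re.fullmatch(r"[A-Za-z']+", t)]
    return tokens, words

def extract_features(subject, body):
    """A few illustrative hand-crafted features of an email message."""
    s_tokens, s_words = tokenize(subject)
    b_tokens, b_words = tokenize(body)
    return {
        "subject_token_count": len(s_tokens),
        "body_token_count": len(b_tokens),
        "subject_alpha_ratio": len(s_words) / max(1, len(s_tokens)),
        "body_alpha_ratio": len(b_words) / max(1, len(b_tokens)),
        "uppercase_word_ratio": sum(w.isupper() for w in b_words) / max(1, len(b_words)),
    }
```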

The study investigates the suitability of these 21 features in classifying unwanted emails. The behavioural features are included to improve classification performance, in particular to reduce false positive (FP) problems. Figure 2 shows a sample dissimilarity matrix R and Figure 3 the corresponding dissimilarity image. We then select the features based on the closest distance from the centroid.

For selecting the closest distance we use the formula presented in our previous work [31]. We also remove redundant and noisy features from our data set with a data cleaning technique using the WEKA tools. The block diagram in Figure 6 illustrates the feature selection technique.
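A minimal sketch of building the dissimilarity matrix R and producing a VAT-style reordering whose image can be inspected for dark diagonal blocks. This follows the general VAT idea (a Prim-like ordering of objects) rather than the exact variant used in the paper.

```python
import numpy as np

def dissimilarity_matrix(X):
    """Pairwise Euclidean distances between feature vectors (rows of X)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def vat_order(R):
    """VAT-style reordering: a Prim-like traversal that places mutually
    close objects adjacently, so clusters appear as dark diagonal blocks."""
    n = R.shape[0]
    i = np.unravel_index(np.argmax(R), R.shape)[0]  # start at an extreme object
    order, remaining = [i], set(range(n)) - {i}
    while remaining:
        rem = list(remaining)
        sub = R[np.ix_(order, rem)]            # distances from ordered set to the rest
        j = rem[np.argmin(sub.min(axis=0))]    # closest remaining object
        order.append(j)
        remaining.remove(j)
    return R[np.ix_(order, order)]             # reordered dissimilarity matrix
```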

The values of x are typically feature vectors composed of either real or discrete values, and the y values represent the expected outputs for the given x values. Consequently, the task of a learning model involves approximating the function f(x) to produce a classifier. In the training model, we use a particular data set together with the same number of instances randomly selected from the other sets.

We then split the data into K different folds to make the training set, use the training set to train the classifier, and evaluate the classification result using the test data. The same rotating process applies to all other sets. In our classification process, we built an interface program with WEKA for data classification.
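The paper runs this rotation inside WEKA; the sketch below reproduces the same K-fold train/test rotation with scikit-learn (the classifier choice and K are illustrative).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

def kfold_accuracy(X, y, k=5):
    """Rotate through K folds: train on K-1 folds, test on the held-out one."""
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=k, shuffle=True).split(X, y):
        clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(scores))
```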

The interface program allows the system to select the data sets and the corresponding classifiers according to the testing requirements, rather than relying on the defaults of the WEKA tools. A confusion matrix contains information about the actual and predicted classifications produced by a classification system. Table 1 shows the confusion matrix for a two-class classifier.

Some common standard metrics are defined from Table 1, such as precision, recall and accuracy. A false positive (FP) occurs when a classifier incorrectly identifies an instance as positive; in this case, a false positive is a legitimate email classified as a spam message.
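In terms of the true/false positive and negative counts of Table 1, these metrics take their standard forms:

```latex
\mathrm{Precision} = \frac{TP}{TP+FP},\qquad
\mathrm{Recall} = \frac{TP}{TP+FN},\qquad
\mathrm{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}
```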


False negative (FN): an FN occurs when a classifier incorrectly identifies an instance as negative; in this case, a false negative is a spam message classified as a legitimate email. Precision (Pr), recall and accuracy follow the standard definitions above. The methodology used in the experiment is as follows:
Step 1. Extract the features from incoming emails.
Step 2. Generate the dissimilarity matrix:
a. Combine the features from all categories and make an integrated data file.
b. Create the dissimilarity matrix R of the training file based on a distance measure d.

c. Generate the corresponding dissimilarity image.
Step 3. Generate the VAT clusters:
a. Create the centroid (the leader of the cluster) and label the instances of each cluster.
Step 4. Generate the feature vector: create a global list of features after selecting the features from each VAT cluster.
Step 5. Classification and evaluation:
a. Evaluate the test data.
e. Repeat for other classifiers.

g. Repeat for other data sets.

Table 2 presents the average of the experimental results by classifier. RF shows the best performance among the classifiers; the present evaluation shows almost similar performance across them, with trivial variation. The AUC is a popular measure of the accuracy of an experiment: the larger the AUC, the better the experiment is at predicting the classification. The possible values of AUC range from 0.5 to 1.0. The CI option specifies the value of alpha to be used in all confidence intervals (CIs).

The quantity 1 − alpha is the confidence coefficient, or confidence level, of all CIs. The P-value represents the hypothesis test for each of the criterion variables. A useful experiment should have a cut-off value at which the true positive rate is high and the false positive rate is low. In fact, a near-perfect classification would have an ROC curve that is almost vertical from (0,0) to (0,1) and then horizontal to (1,1). The diagonal line serves as a reference line, since it is the ROC curve of an experiment that is useless in determining the classification.
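The paper selects its cut-off from a cost-benefit criterion; as a simpler stand-in, the sketch below computes the ROC curve and AUC with scikit-learn and selects the threshold maximizing TPR − FPR (the Youden index), which likewise favors a high true positive rate at a low false positive rate.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_cutoff(y_true, y_score):
    """Return the AUC and the score threshold where TPR - FPR is largest."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    best = np.argmax(tpr - fpr)            # Youden-style operating point
    return roc_auc_score(y_true, y_score), thresholds[best]
```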

The CB ratio is the ratio of the net cost when the condition is absent to the net cost when it is present. In this experiment we select the cut-off value according to the computed CB ratio. We have reviewed different classification algorithms and considered five classifiers based on our simulation performance. In future work we plan to extract features from dynamic information in real-time email contents and pass them to our classification engine to achieve better performance.

References
[1] Zhang, J.: In: Jin, H. (ed.) ICA3PP. LNCS. Springer, Heidelberg
[13] Islam, M.: In: Zaki, M. (ed.) PAKDD. Springer, Heidelberg
[19] Wang, Y.: In: Kruegel, C. (ed.) RAID. Springer, Heidelberg
[23] Xu, R.


[24] Applied Statistics 28
[25] Kaufman, L.: Wiley, New York
[26] Xue, Z.: In: ASID, pp.
[27] In: Proc. Neural Networks, pp.

The concepts of ECC over a prime field are described, followed by the experimental design of the smart card simulation. Finally, the results are compared against RSA algorithms. Practically all daily activities can be adapted to use smart cards as a means of submitting personal data to different types of information systems.

Many examples can be found, such as financial, passport, travel, and health care systems [2, 6]. A major concern for these smart card systems is the security of the data stored in the smart cards: access to the data by unauthorized parties may result in financial losses. Cryptographic methods are used to encrypt and decrypt the data in smart cards and to generate digital signatures for the verification of smart cards.

The cryptographic algorithms of a smart card system [2, 6], shown in Figure 1, are used for encryption and decryption, digital signatures and key exchange. Data integrity means that the data of the smart card have not been altered or destroyed in an unauthorized manner, which is typically provided by a digital signature. Authentication means that the recipient can verify that the received message has not been changed in the course of being transmitted, which is also supplied by a digital signature. Encryption in smart card systems can use the cryptographic algorithms shown in Figure 1.

With a private key (symmetric key) cryptosystem, sender and receiver use one private, or common, key. A major advantage is that private key cryptosystems are faster than public key cryptosystems. Private key cryptosystem processes are illustrated in Figure 2. In contrast to private key schemes, public key schemes require two keys: a public key and a private key. An advantage of public key cryptosystems is their higher security compared to private key cryptosystems. However, public key cryptosystems are usually significantly slower than private key cryptosystems.

Figure 3 illustrates the public key encryption scheme. Nowadays, Elliptic Curve Cryptography (ECC) public key cryptosystems are emerging as an alternative to RSA public key cryptosystems in smart card technologies, because an ECC public key cryptosystem [8, 9, 10, 18] offers the same security level as RSA with smaller keys, lower resource consumption, fast computation, and smaller key bandwidth. Digital signatures are thus one of the most interesting and original applications of modern cryptography; the first digital signature idea was proposed by Diffie and Hellman in [4].

Hash functions are used for the authentication process. A hash algorithm for a digital signature takes a variable length of data (possibly thousands or millions of bits) and produces a fixed-length output. Hash function algorithms belong to the symmetric algorithms and are widely deployed in cryptography for message authentication. The SHA hash functions generate the output by transforming the input data in 80 steps, applying a sequence of logical functions in each step. Figure 5 illustrates one of the steps. ECC can be used to provide digital signatures, key exchange, and encryption.


ECC is based on the difficulty of an underlying mathematical problem, the Discrete Logarithm Problem (DLP), and is among the strongest public-key cryptographic systems known to date [12, 13, 16]; it is emerging as an attractive public key cryptosystem. The DLP is the basis for the security of many cryptosystems, including the Elliptic Curve Cryptosystem.

Since the ECDLP appears to be significantly harder than the DLP, the strength per key bit is substantially greater in elliptic curve systems than in conventional discrete logarithm systems [16]. The core of elliptic curve arithmetic is an operation called scalar multiplication.

The smallest integer n with nP = O is called the order of the point P and is a divisor of the order of the curve. Scalar multiplication computes

Q = kP,  (1)

where P is a point on the curve. From Equation (1), Q is the public key and k is the private key; it is computationally difficult to calculate the scalar multiplier k. A finite field is called a Galois field, in honor of Evariste Galois, in some sources [16]. The order of the finite field is the number of elements in the field; a finite field of order q is denoted GF(q). Two kinds of finite fields are used, the first of which is the prime field.

Different equations account for point addition and point doubling, so the group law is quite distinct. The group law of ECC over a prime field is given in Table 1, and Figures 6 and 7 give geometric illustrations of point addition and point doubling. Scalar multiplication is also called point multiplication, because the coordinate systems are based on points. Scalar multiplication is the most significant operation; for our implementation of ECC over a prime field in smart card systems we chose the binary method, since it requires no precomputation and uses less memory than other efficient methods [16, 9].
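A self-contained sketch of the binary (double-and-add) method over a prime field. The tiny curve below is purely illustrative; production systems use standardized domain parameters.

```python
# Affine-coordinate EC arithmetic over GF(p), curve y^2 = x^3 + a*x + b (mod p).
# Toy parameters for illustration only.
p, a, b = 97, 2, 3
O = None  # point at infinity

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                    # P + (-P) = O
    if P == Q:                                      # point doubling
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                           # point addition
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Left-to-right binary (double-and-add) method: no precomputation, low memory."""
    R = O
    for bit in bin(k)[2:]:
        R = point_add(R, R)                         # double for every bit
        if bit == '1':
            R = point_add(R, P)                     # add when the bit is set
    return R

# Example: (0, 10) lies on this toy curve (10^2 = 100 = 3 mod 97),
# so scalar_mult(5, (0, 10)) yields a valid point.
```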

Scalar multiplication of a point is performed many times in computing the ECC algorithms of smart card systems. Based on this equation, given the prime modulus, the curve constants from Table 1 and two points in the coordinate system, the problem is to find a scalar k that fulfils the equation.

4 Implementation of ECC

The simulation program was developed using Java Card 2. Figure 8 illustrates the interactions in the simulation. The recommendations cover six sizes of domain parameters. Sextuples of EC domain parameters are specified in ECC, represented as octet strings converted using the conventions specified, and they should be determined by the following logarithm equations [19, 20].

With this equation, solving the logarithm problem on the associated elliptic curve would require approximately 2 raised to the security level operations. EC domain parameters over the prime field are supported at each security level; they are Koblitz curve parameters and verifiably random parameters.

However, verifiably random parameters are supplied both at export strength and at extremely high strength, unlike the Koblitz domain parameters. Koblitz domain parameters are mainly suitable as EC domain parameters over a binary field. We implement verifiably random parameters in the smart card systems.

These parameters are illustrated in Figure 8. The key lengths are compared against RSA encryption key lengths [3]. Figure 10 is a flowchart that illustrates the implementation of the encryption process. ECDSA is designed to be existentially unforgeable, even in the presence of an adversary capable of launching chosen-message attacks. We used the same key lengths in bits as in the ECC encryption algorithm.

These are compared with the RSA digital signature key lengths used with the SHA-1 hash function. The ECDSA signature scheme is similar to the general digital signature process, consisting of signature generation and signature verification: Algorithm 1 outputs the signature, and Algorithm 2 outputs Pass or Fail, accepting the signature when verification succeeds.
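For reference, the standard ECDSA generation and verification equations (with base point G of order n, private key d, public key Q = dG, message hash e = H(m), and per-signature secret k) are:

```latex
\textbf{Sign:}\quad r = (kG)_x \bmod n,\qquad s = k^{-1}(e + d\,r) \bmod n \\[4pt]
\textbf{Verify:}\quad w = s^{-1} \bmod n,\quad u_1 = e\,w \bmod n,\quad u_2 = r\,w \bmod n,\\
\text{accept iff } (u_1 G + u_2 Q)_x \equiv r \pmod{n}
```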

Figure 11 is a flow chart of ECDSA over a prime field, illustrating the digital signature process step by step; this process is based on Algorithms 1 and 2 above. These are compared against the RSA encryption and digital signature algorithms.

Table 2 shows the results of ECC encryption and RSA encryption with different key lengths, where E-time is the encryption time, D-time is the decryption time and the total time is the sum of both. The results are matched for corresponding key lengths of the different methods, as described above; for example, an ECC digital signature with a given key length is compared against an RSA digital signature of equivalent strength. Thus the performance of ECC is better for digital signature generation, but the performance of RSA is better for digital signature verification.

Initial experiments with a sample data set showed significant improvements in performance compared to RSA methods. Further investigation with a larger set of data is to be conducted to demonstrate the effectiveness of ECC over prime field cryptography and to improve the algorithms for better performance and security.

References
1. Xiaoyun, W.
2. Rankl, W.: John Wiley and Sons Ltd.
3. Vincent, O.: Electronic Commerce Research 10(1), 27-41
4. Diffie, W.: IEEE Trans. Information Theory 22(6)

5. Koblitz, N.: Mathematics of Computation 48
6. Hendry, M.: Artech House, Boston
7. Bart, P.: Computer Networks 51(9)
8. Johan, B.: Computer Networks 36(4)
9. Vanstone, S.: Information Security Technical Report 2(2), 78-87
10. Berta, I.: Periodica Polytechnica Ser. El.
11. Johnson, D.: Springer, Heidelberg
12. Hitchcock, Y.
13. Mahdavi, R.: In: Heng, S. (ed.) CANS. Springer, Heidelberg
14. Miller, V.

In: Williams, H. (ed.). Springer
15. Dabholkar, A.
16. Hankerson, D.: Springer
17. Chaves, R.
18. Avanzi, R.: Taylor and Francis
19. Brown, M.: In: Naccache, D. (ed.) CT-RSA

Md. Rafiul Hassan

Abstract. Wireless access technology has come a long way in its relatively short but remarkable lifetime, which has so far been led by WiFi technology. WiFi enjoys high penetration in the market, and most electronic gadgets, such as laptops, notepads and mobile handsets, support WiFi.

Currently most WiFi hotspots are connected to the Internet via wired connections. On the other hand, since WiMAX can provide a large coverage area and high transmission bandwidth, it is very suitable for the backbone network of WiFi. The paper depicts a converged architecture of WiMAX and WiFi, and then proposes an adaptive resource distribution model for the access points. The resource distribution model ultimately allocates the available resources among the connections. A dynamic splitting technique is also presented that divides the total transmission period into downlink and uplink transmission by taking the minimum data rate requirements of the connections into account.

This ultimately improves the utilization of the available resources and the QoS of the connections. In the past several years, a number of municipalities and local communities around the world have taken the initiative to deploy WiFi systems in outdoor settings in order to provide broadband access to cities and metro zones as well as rural and underdeveloped areas. Because of the wide popularity of WiFi technology, major vendors of handheld devices, such as Google and Apple, include WiFi-compatible radio interfaces in their most popular mobile devices, including Android phones, the iPhone, iPad and iPod touch.

Despite its wide popularity, WiFi is not free from drawbacks. Short range, limited mobility, and the inability to provide strict quality of service (QoS) for different classes of services are the biggest drawbacks of WiFi. Moreover, WiFi needs a wired backhaul network for Internet connectivity, which is cost-intensive, especially for installation in areas that lack infrastructure.

WiMAX, Worldwide Interoperability for Microwave Access, is the much-anticipated wireless technology aiming to provide business and consumer wireless broadband access on the scale of the Metropolitan Area Network. The distinctive features of WiMAX include a very high peak data rate, scalable bandwidth and data rate, adaptive modulation and coding, flexible and dynamic per-user resource allocation, and support for advanced antenna techniques. With its large coverage area and high transmission rate, WiMAX can serve as a backbone for WiFi hotspots to connect to the Internet.

With mobility support, WiMAX has become the key technology for users to access high-speed Internet on buses, express trains [2], ships, etc. To solve the above-mentioned problem, our contributions are the following: (i) we present a resource distribution scheme at the WiFi AP, and (ii) we propose a dynamic splitting technique that divides the total transmission period into downlink and uplink transmission.

Therefore, high deployment costs are associated with the backhaul network of WiFi, particularly in remote rural or suburban areas with low population densities or in new settlement areas lacking wired infrastructure. In contrast, WiMAX is a highly promising technology for the provision of high-speed wireless broadband services.

The convergence of these technologies for seamless transmission of data while ensuring end-user QoS requirements is a major challenge. Here, Berlemann et al. proposed coexistence of the two standards in a shared frequency band. The major drawback of this solution is that a significant portion of the available transmission time is wasted, which reduces spectral efficiency, as the MACs of the two standards differ. Another example of the coexistence of IEEE 802.11 and IEEE 802.16 also uses the common frame structure.

They considered a central coordinating device, the BSHC, which combines the base station (BS) of IEEE 802.16 with the hybrid coordinator of IEEE 802.11e and can operate in both systems. The BSHC has full control over the channel and over all stations. The drawback of this coexistence is that the BSHC needs complete control over the radio resources of the co-located networks. Also, the QoS requirements of different types of applications fail to be addressed.

Besides the coexistence in a shared frequency band, a hierarchical approach has also recently been proposed in the literature. In [6], Hui-Tang et al. used a two-level bandwidth allocation scheme in their architecture. However, they did not describe the scheduling, although a good scheduling algorithm is required to provide guaranteed QoS. In [7], Gakhar et al. did not provide the mechanisms required to ensure QoS, such as bandwidth allocation, scheduling and admission control.

A utility function can be used for the resource allocation problem [9, 10, 11, 12]. In [9], Kelly et al. applied utility functions to rate control in communication networks. In [10, 11], a utility function is employed to build a bridge between the physical layer and the media access control (MAC) layer. In [12], Shi et al. used a utility-based allocation as well. WiMAX defines five QoS service classes [13] to support a wide variety of applications: 1) Unsolicited Grant Service (UGS), which provides a fixed periodic bandwidth allocation for the application; there is no need to send any further requests after connection setup.

2) Real-time Polling Service (rtPS); 3) Extended Real-time Polling Service (ertPS), which is similar to UGS in that the BS allocates the maximum sustained rate during active mode, but, unlike UGS, no bandwidth is allocated during silent periods; 4) Non-real-time Polling Service (nrtPS), designed to support non-real-time VBR traffic that does not require a delay guarantee, such as the File Transfer Protocol (FTP); 5) Best Effort Service (BE), designed to support data streams that do not require a minimum service-level guarantee. Most data traffic belongs to this service class.

WiMAX can deal with various applications using these service classes. On the other hand, WiFi provides only best-effort service to the various types of applications. In the following, we first describe the network architecture that we use for convergence, followed by the adaptive scheduling algorithm to support class-wise QoS; in the later section we propose a utility function for the resource allocation algorithm in WiFi APs.

For a standalone SS, the uplink and downlink data are directly transferred through the connection with the SS. The IEEE 802.11 PCF is a centralized MAC algorithm that provides contention-free service to support delay-sensitive applications. For the purpose of providing prioritized medium access, we propose a scheduling-based mechanism for managing transmissions in PCF. We therefore concentrate on scheduling in the WiFi AP.

Since the basic round-robin (RR) scheduling in PCF does not differentiate connections, two scheduling algorithms are proposed here, one for downlink traffic and another for uplink traffic, owing to the different behavior of uplink and downlink traffic from the end-user's point of view. Using the utility function, we formulate an optimization problem. We assume that the WiFi nodes can request bandwidth as they require.

The contention-free period of PCF is divided into N time slots. Our scheduling scheme aims to distribute these N slots among the K connections so that the priorities of the connections are maintained and the minimum rates are served. Let x_{k,n} be an element of the matrix X with K rows and N columns.

A utility function can be used to evaluate the benefit of allocating a time slot to a certain connection. The purpose of using the utility function is that it ultimately allocates more time slots to the connections that need more instantaneous resources to meet their QoS requirements. With an increasing utility function, utility-based resource allocation can exploit multi-user diversity, and each user is given time slots according to its needs.

A decreasing marginal utility prevents the resource allocation algorithm from excessively favoring the top-priority users and maintains fairness of the resource allocation. Here, we choose a logarithmic function as the form of the utility function, since it is increasing and has decreasing marginal utility. The bigger the value of F is, the fairer the resource allocation is to the users and the lower the spectral efficiency becomes. This type of utility function was used in the literature [12], but there the priority of the connections was not considered.

If the delay currently experienced by a packet is near the maximum delay limit, the PI value increases for the connection related to that packet. Similarly, if the queue length of a connection reaches the maximum queue length, the PI value increases for that connection. A connection with low priority and a low minimum rate has a low PI value. Thus the term PI in the formulation is adaptive to the changing traffic load and network conditions. To formulate an optimization problem based on the utility function for time slot allocation, let the K connections use the N slots in such a way that the sum of the utility functions of all connections is maximized, subject to the conditions that each slot is allocated to one connection only and the minimum data rate of each connection is ensured.
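A plausible formulation consistent with this description (the symbols r_{k,n} for the achievable rate of connection k in slot n and R_k^min for its minimum rate are assumptions, since the paper's exact notation is not reproduced here):

```latex
\max_{x}\; \sum_{k=1}^{K} PI_k \,\log\!\Bigl(1 + \sum_{n=1}^{N} x_{k,n}\, r_{k,n}\Bigr)
\quad \text{s.t.}\quad
\sum_{k=1}^{K} x_{k,n} = 1 \;\forall n,\qquad
x_{k,n} \in \{0,1\},\qquad
\sum_{n=1}^{N} x_{k,n}\, r_{k,n} \ge R_k^{\min} \;\forall k
```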

The minimum data rate of each connection is ensured by the third constraint in the formulation above. The solution to the optimization problem involves both the continuous variable PI_k and the binary variable x_{k,n}. For K connections and N time slots, there are K^N possible ways of allocating time slots to the connections.

The process of finding an optimal solution, therefore, may not always be robust. Considering practical applications, we propose a heuristic algorithm in the next section which produces a near-optimal solution and is at the same time computationally inexpensive. We allocate slots to the connections so that the minimum rate of each connection is satisfied, starting from the highest-priority QoS class connections down to the lowest priority.

For the remaining slots, for each connection we calculate the difference in total utility before and after allocating that slot to the connection; the connection with the highest difference is allocated the slot. It is worthwhile to compare the computational complexity of our heuristic algorithm with that of the exhaustive method.
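A sketch of the two-phase heuristic under stated assumptions (the min_slots, priority and marginal-utility callback util interfaces are hypothetical, not the paper's Algorithm 1 verbatim):

```python
import numpy as np

def allocate_slots(min_slots, priority, util, N):
    """Phase 1: satisfy each connection's minimum slots, highest priority
    first. Phase 2: give every remaining slot to the connection whose
    total utility increases the most."""
    K = len(min_slots)
    alloc = np.zeros(K, dtype=int)
    free = N
    for k in np.argsort(-np.asarray(priority)):      # phase 1
        give = min(min_slots[k], free)
        alloc[k] += give
        free -= give
    for _ in range(free):                            # phase 2
        gains = [util(k, alloc[k] + 1) - util(k, alloc[k]) for k in range(K)]
        alloc[int(np.argmax(gains))] += 1
    return alloc
```

With, for instance, util = lambda k, s: PI[k] * math.log(1 + s), phase 2 mirrors the utility-difference rule described above.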


In the heuristic algorithm (detailed steps are presented in Algorithm 1), phase 1 needs O(N) computation and phase 2 needs O(KN) computation. Therefore, the overall time complexity of our heuristic algorithm is O(KN), which is far less than the time complexity O(K^N) of the exhaustive method. When the WiFi node receives the downlink data, it will send uplink data if it has any. Therefore, the AP has no control over the ratio of downlink to uplink data.

We implemented our resource distribution algorithm at the WiFi AP for both downlink and uplink data. However, in the case of Internet connections or broadcast data, downlink and uplink loads are typically asymmetric. Hence, an equal split ratio between downlink and uplink data will degrade system performance, while an unequal but fixed split ratio will still lead to poor utilization of the available resources. We propose a mechanism for splitting the CFP into downlink and uplink transmission periods, as shown in Algorithm 2. The CFP is divided according to the minimum number of slots required to maintain the minimum data rate for the downlink and uplink connections, respectively.
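A minimal sketch of the dynamic split, dividing the CFP slots in proportion to the minimum slots each direction needs (overload handling is simplified here):

```python
def split_cfp(total_slots, dl_min_slots, ul_min_slots):
    """Split the contention-free period between downlink and uplink in
    proportion to the minimum slots each direction requires."""
    need_dl, need_ul = sum(dl_min_slots), sum(ul_min_slots)
    if need_dl + need_ul == 0:
        half = total_slots // 2
        return half, total_slots - half
    dl = round(total_slots * need_dl / (need_dl + need_ul))
    return dl, total_slots - dl
```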

We used a single-channel infrastructure WLAN in our simulation, using the custom simulator atisim developed by M. Siddique in his PhD thesis [15]. Though we considered different combinations of numbers of connections and time slots, due to space limitations we show here only the results of the experiment with 30 connections and 40 PCF time slots for both downlink and uplink connections. The round-robin method assigns time slots to each connection in equal portions and in circular order, handling all connections without priority.

As is evident in the figures, higher-priority connections get higher throughput. This is because we consider the priority of the connections in our utility function, which ultimately affects the time slot allocation. Traditional PCF does not consider priority among the connections. Here we see the same scenario: the higher-priority connections are provided improved throughput and QoS. In the case of BE connections, PCF provides more queue space than our method, while our method still provides substantial space in the queue for the BE connections.

Our method considers the priority of the connections while distributing the time slots. The leftover space in the queue is also considered in the priority index. By doing so, our method serves the higher-priority classes more while still serving the lower-priority connections.

It shows that with 15 connections and 40 time slots, the minimum rate requirements of the connections were almost always satisfied by our method. Our method performs even better when there is a scarcity of resources (30 connections and 40 time slots), as also shown in Table 1 and Table 2 for downlink and uplink connections, respectively.

We present the data for both downlink and uplink connections using a fixed splitting method and our proposed dynamic splitting algorithm. For the fixed splitting method, we considered a fixed ratio of downlink to uplink transmission. The figure plots the average connection throughput (bps) against the connections with QoS classes, comparing the proposed method with the fixed split in panels (a) to (d).

This is because the fixed splitting method does not consider the nature of the traffic and always gives the same number of time slots to uplink and downlink connections. So, the network resources are more efficiently utilized by our method. These figures present the same scenario more clearly; namely, the uplink connections obtain fewer time slots in the case of a fixed splitting ratio. To provide QoS support to connections, we defined a utility function that contains a priority index, a measure that takes the QoS class, minimum data rate, queue status and packet delay into account.

The scheduling algorithm maximizes the total utility of all connections in allocating the time slots over a cycle. We also proposed a dynamic mechanism for dividing the contention-free period of PCF into downlink and uplink transmission periods. An algorithm was formulated in consideration of the minimum rate requirements of the connections when allocating time slots to downlink and uplink transmissions. The schemes we propose in this paper are supported by simulation results.

Our results show that our method significantly outperforms the traditional round-robin algorithm used in PCF in terms of meeting QoS requirements. We also showed that a fixed or equal splitting ratio leads to poor utilization of the available resources and degradation of QoS, whereas dynamic splitting performs significantly better. Future work will focus on developing an efficient admission control mechanism so that utilization is maximized and the minimum rate and latency requirements of a flow can be guaranteed without hampering other flows, considering the priorities of the new and existing flows.

References
Mangold, S.
Berlemann, L.
Hui-Tang, L.
Gakhar, K.
Niyato, D.: IEEE Network 21(3), 6-14
Kelly, F.: Journal of the Operational Research Society 49(3)
Song, G.
Shi, J.
So-In, C.
Siddique, M.

This paper presents a human daily activity classification approach based on sensory data collected from a single tri-axial accelerometer worn on a waist belt.

The classification algorithm distinguishes six different activities, namely standing, jumping, sitting-down, walking, running and falling, through three major steps: wavelet transformation, Principal Component Analysis (PCA) based dimensionality reduction, and a radial basis function (RBF) kernel Support Vector Machine (SVM) classifier.

Two trials were conducted to evaluate different aspects of the classification scheme. In the first trial, the classifier was trained and evaluated on a dataset of samples collected from seven subjects, using a k-fold cross-validation method. In the second trial, the generalization capability of the classifier was validated using the dataset collected from six new subjects. The results of trial 2 show that the system is also good at classifying the activity signals of new subjects. It can be concluded that the collective effects of single-accelerometer sensing, the chosen accelerometer placement and an efficient classifier make this wearable sensing system more realistic and more comfortable for long-term human activity monitoring and classification in ambulatory environments, and therefore more acceptable to users.

Human activity monitoring finds applications in areas such as health care, sports [3] and aged care [4]. Various sensing technologies and classification schemes have been developed to tackle this scientific issue in the past decades. For instance, Kubo et al. proposed one such scheme, and many others employed computer-vision technologies for human motion analysis. However, these technologies are impractical in the context of classifying human daily activities in an ambulatory environment.

Generally, there are several other issues that need to be addressed to promote the use of long-term activity monitoring systems in ambulatory environments. These include ease of use, comfort and safety of wearing, discretion, and the ability to perform daily activities unimpeded. Any system which impedes those functions is likely to be rejected by users.

Recently, appreciable research effort has been devoted to the automated monitoring and classification of human or animal physical activities from wearable sensor data. Accelerometers are currently among the most widely used body-worn sensors for long-term activity monitoring and classification in free-living environments, due to their small size, light weight, low price and low power consumption. These sensors are usually attached at various placements, including the chest, waist, legs, arms and back. For the sake of safety and comfort of wearing, a single accelerometer sensor is preferably attached to the waist belt.

Although this wearable sensing technology offers an optimal platform for monitoring daily activity patterns if appropriately implemented, effective feature extraction and robust classification algorithms are also required to represent the signals in a more informative and compact format and subsequently interpret them in the context of different activities. It is highly desirable, but still challenging, to design a robust human activity classification approach based on the sensory data from a single accelerometer, for the sake of the safety and comfort of the wearers.

While the discrete wavelet transform (DWT) has gained widespread acceptance as a useful feature extraction technology in signal processing, many classification techniques have been developed in the past decades, such as Bayesian decision making, the K-nearest neighbor (k-NN) algorithm, rule-based algorithms [14], the least-squares method (LSM), dynamic time warping [15], artificial neural networks (ANN) and the Hidden Markov model (HMM) [16].

The support vector machine (SVM) is an important machine learning technique, originally proposed in the early 1990s for the recognition and classification of objects, voices, and text or handwritten characters. The advantage of the SVM classifier is that it can represent feature vectors in a higher-dimensional space where they become linearly separable, even if the feature vectors in their original feature space are not [17].

In this work, we propose an approach to human daily physical activity classification based on the sensory data from a single tri-axial accelerometer. A set of feature vectors is extracted from the sensory data through the wavelet transform, followed by dimension reduction using Principal Component Analysis (PCA). The classifier was validated on acceleration data collected from thirteen subjects performing six different physical activities. The raw data from the microcontroller were sent to a laptop computer over a USB interface at a fixed communication rate. The reference directions and mounting position of the tri-axial accelerometer are shown in Fig.

The reference directions and laid position of tri-axial accelerometer are shown in Fig. Subjects Thirteen healthy volunteers 11 males and 2 females were recruited from the Institute for Technology and Research Innovation, Deakin University, Australia. The subjects varied in age from 26 and 50 years, in weight from 51kg to 75kg and in height from cm to cm. All of them were in good health, and none of them had a special deformity in their bodies and limbs.

Acceleration Data: Acceleration data were collected from the 13 volunteers performing six activities: falling, jumping, running, sitting-down, standing and walking. Seven subjects (1 to 7), whose data were used to train and cross-validate the classifier, were asked to repeat every action ten times, while the remaining six subjects (8 to 13), whose data were used to test the generalization capability of the system, were asked to perform each activity twice. The activity labels and instructions are shown in Table 1.

Table 1. Activity labels and instructions
Label  Activity      Instruction
1      falling       fall forward down
2      jumping       jump at origin
3      running       run forward
4      sitting-down  sit down on a chair for about 5 seconds
5      standing      remain standing straight
6      walking       walk forward

In this way, kinematic datasets were recorded from the 13 volunteers, containing the training datasets from subjects 1 to 7 and 72 testing datasets from subjects 8 to 13. First, the raw data were pre-processed by segmenting the signals of interest from the original kinematic data.

To make activity classification at the next stage more effective and efficient, the discrete wavelet transform (DWT) was used to present the acceleration data in a more informative and compact format by decomposing the signal into several components [20]. Then, principal component analysis (PCA) was adopted to reduce the dimensionality of the feature vector [21], with the aim of making the recognition and classification module more efficient and realistic for real-time applications. Finally, an SVM classifier was employed for activity classification.

Activity Patterns: As the figures show, there is a substantial difference between walking and running in signal amplitude, though the patterns of the signals look similar. Moreover, the signals of walking and running by different subjects are hardly discriminated either in amplitude or in frequency. Feature Extraction Based on DWT: It is necessary to prepare training samples and testing samples to establish and evaluate the classifier. In this work, the signals of interest associated with each activity were segmented into individual datasets from the original time series of acceleration data.

In this work, feature extraction was accomplished by applying the discrete wavelet transform (DWT) to present the acceleration signals corresponding to activities in a more informative and compact format. The DWT is a widely accepted technology for decomposing a signal into different frequency bands. Only the approximation signal A(t) is used for the next level of decomposition. In order to extract feature vectors, statistics over the set of wavelet coefficients were used. The extracted wavelet coefficients provide a compact representation that shows the energy distribution of the human activity signal in time and frequency. The approximation coefficients Ca were taken as the extracted feature vectors.
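A sketch of this step with the PyWavelets package; the wavelet family and decomposition level are illustrative choices, not necessarily those of the paper.

```python
import pywt  # PyWavelets

def dwt_features(signal, wavelet="db4", level=3):
    """Multi-level DWT; keep the approximation coefficients Ca as the
    compact feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    return coeffs[0]  # approximation coefficients Ca
```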

The approximate coefficients Ca were taken as extracted feature vectors. Dimension Reduction by PCA To reduce the complexity of classifier and the number of samples needed for building a classification system, principle component analysis was used to further reduce the size of the feature vectors. As such, each feature vector was normalized and projected to the most discriminative data array. In other words, the original data set is rotated to the direction of maximum variance, where correlated high-dimensional data can be presented in a low-dimensional uncorrelated feature space and the data variance is preserved as much as possible with a small number of principal components, i.

Therefore, their eigenvalues will serve as inputs into classifier at the next stage. It can automatically adjust its capacity according to the scale of a specific problem by maximizing the width of the classification margin [23]. This is achieved by picking the hyperplane so that the distance from the hyperplane to the nearest data point is maximized. More precisely, … T min W s.

The underlying principle is to produce a series of parameter values between 2^N and 2^M; the abscissa and ordinate are denoted on a logarithm-to-base-2 scale. The results obtained during testing in the two trials are shown in Tables 2 and 3, respectively. In the first trial, 5-fold cross-validation was performed to ensure that the training was unbiased.
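The log2-spaced parameter search described above can be written with scikit-learn as below; the grid bounds stand in for the unspecified N and M, and the 5-fold cross-validation matches the first trial.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Log2-spaced grid over the RBF-SVM parameters C and gamma.
param_grid = {"C": 2.0 ** np.arange(-5, 16, 2),
              "gamma": 2.0 ** np.arange(-15, 4, 2)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# search.fit(X_train, y_train); search.best_params_
```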

Specifically, in each process the dataset was divided into five equal subsets, of which four were used to train the classifier and the remaining one was used as the cross-validation sample. This process was then repeated five times (the folds), with each of the five subsets used exactly once as the validation data.

The verification performance of the classifier was assessed by the mean accuracy over the five processes. The data collected from subjects 1 to 7 were randomly partitioned into a training dataset and a cross-validation dataset of 84 samples. Table 2 shows the recognition results for the cross-validation samples.

Trial 2: To further evaluate the generalization capability of the classifier, a second trial was carried out, with results shown in Table 3. In trial 2, 72 samples from the six subjects (8 to 13), who were unknown to the classifier, were tested. Discussion: This classification approach has demonstrated that the SVM algorithm can distinguish six human activities based on their corresponding kinematic data collected from the accelerometer sensor. However, the classifier does not perform equally well on all activities.

The activities of falling, sitting-down and standing can be recognized very well, while activities such as jumping, running and walking are relatively prone to misclassification.


It can be observed that the sensory data collected from different people performing the same activities, such as running and walking, show different kinematic characteristics. Because of this, the classifier is easily confused by running and walking signals. Given the placement of the tri-axial accelerometer sensor on the waist belt (refer to Fig. 1), it is acceptable that the y-axis signals be removed in future experiments.

As a positive effect, this would consequently further improve the recognition accuracy. The classification algorithm distinguishes six different activities, namely standing, jumping, sitting-down, walking, running and falling, through three major steps: discrete wavelet transformation (DWT), Principal Component Analysis (PCA) based dimensionality reduction, and a radial basis function (RBF) kernel Support Vector Machine (SVM) classifier.

The results obtained show that the system is not only able to learn the activity signals of the subjects whose data were involved in training, but also has good generalization capability for the data from new subjects. Moreover, compared to other placements, positioning the accelerometer sensor on the waist belt offers easier implementation and causes less discomfort to the wearers in real-time applications.

It is also very beneficial to recognize human daily activities using as few sensors as possible for applications in health care, rehabilitation, aged care and sport. Therefore, the collective effects of single-accelerometer sensing, the chosen accelerometer placement and an efficient classifier make this wearable sensing system more realistic and more comfortable for long-term human activity monitoring and classification in ambulatory environments, and therefore more acceptable to users.

It can be concluded that the approach to the recognition and classification of multiple human daily activities based on the sensory data of a single tri-axial accelerometer is promising and will be further explored in the near future. Doukas, C. International Journal on Artificial Intelligence Tools 19 2 , — 2. Salarian, A. Bonomi, A. Medicine and Science in Sports and Exercise 41 9 , — 4. Schwartz, A.

Diabetes Care 31 3 , — 5. Kubo, H. Wang, L. Zhang, J. Computer Vision and Image Understanding 8 , — 8. Bang, S. Patel, S. In: Lo, B. Wu, J. Denkinger, M. BMC Geriatrics 10 Kang, D. Disability and Rehabilitation: Assistive Technology 5 4 , — Lai, C. She Altun, K.

Pattern Recognition 43 10 , — Pogorelc, B. Rabiner, L. Proceedings of the IEEE 77 2 , — Maulik, U.


18. Antonsson, E.: Journal of Biomechanics 18(1), 39-47
19. Kangas, M.: Gait and Posture 28(2)
20. Chen, Y.
21. Shahbudin, S.
22. Das, K.: NeuroImage 51(4)
23. Lau, H.: Medical and Biological Engineering and Computing 46(6)
24. Kandaswamy, A.: Computers in Biology and Medicine 34(6)
25. Journal of Applied Biomechanics 24(1), 83-87
26. Fukuchi, R.

Robocup is a popular test bed for AI programs around the world. Robosoccer is one of the two major parts of Robocup, in which AIBO entertainment robots take part in the middle-sized soccer event.

The three key challenges that robots need to face in this event are manoeuvrability, image recognition and decision making. This paper focuses on the decision making problem in Robosoccer: the goalkeeper problem. Currently, the decision making process in Robosoccer is carried out using a rule-based system, and the ball distance is calculated using the IR sensors at the nose of the robot. In this paper, we propose a reinforcement learning based approach that uses a dynamic state-action mapping with backpropagation of reward, and Q-learning along with a spline fit (QLSF), for the final choice of high-level functions in order to save the goal.

The novelty of our approach is that the agent learns while playing and can take independent decisions, which overcomes the limitations of a rule-based system with its fixed and limited predefined decision rules. The spline fit method used with the nose camera was also able to find the location and the ball distance more accurately than the IR sensors.
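A tabular Q-learning sketch of the kind described, with an epsilon-greedy policy; the state encoding and high-level action set are assumptions rather than the paper's exact design.

```python
import random
from collections import defaultdict

class GoalieQLearner:
    """Tabular Q-learning over high-level goalkeeper actions."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)  # Q(s, a), default 0
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:                     # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        td = reward + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td                  # standard Q-learning update
```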

The noise problems and the near/far sensor dilemma of the IR sensors were neutralized using the proposed spline fit method. It was found that the efficiency of our QLSF approach in goalkeeping was better than that of the rule-based approach in conjunction with the IR sensors.

1 Introduction

Robocup is an international event which is divided into two different sections, namely Robosoccer and the Rescue league [1].

Both fields are currently popular test beds for AI programs around the world, and the AIBO robot takes part in the middle-size soccer event. Usually a team of four robots plays soccer against the opponent team. The AIBO robots were used as the experimental platform in this case. The last released model, the ERS-7, has enough processing power to run relatively simple programs and is capable of multitasking in real time, as it runs on the Aperios [6] real-time operating system.

We focus on decision making skills combined with the ball distance measurement problem. In this paper we consider the goalkeeper problem for decision making in Robosoccer. The first and basic problem is goalkeeping against one attacker: the attacker shoots the ball from different positions towards the goal.

The second problem is an extension of the first: the knowledge base achieved by the goalkeeper against one attacker is used to save the goal against two attackers as well. The first attacker passes the ball to its teammate, which takes a shot towards the goal using the flying pass. Due to the noisy data obtained with the conventional IR sensors, a new measurement method was introduced; at the same time, a new ball distance method was developed using the ball finder method [19] and the nose camera. The same decision making process was later applied to the two-attacker problem to test its usefulness.
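A sketch of a spline-fit distance estimate from the nose camera, assuming a hypothetical calibration table of apparent ball diameter (pixels) versus measured distance:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical calibration data: apparent ball diameter in pixels versus
# measured distance in cm (values are illustrative only).
pixels = np.array([10, 20, 40, 60, 80, 120])
dist_cm = np.array([300, 150, 75, 50, 38, 25])

# Fit a cubic smoothing spline (x values must be increasing); s=0 interpolates.
spline = UnivariateSpline(pixels, dist_cm, k=3, s=0)

def ball_distance(diameter_px):
    """Estimate the ball distance from its observed diameter via the spline fit."""
    return float(spline(diameter_px))
```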

    The same decision making process was introduced for a two attacker problem to test its usefulness later on. In the literature [18] review it was found that RL was used to determine the low level functions such as quadruped locomotion and navigation purpose using AIBO. Enter pincode. Usually delivered in days? The purpose of the 13th International Conference on Computer and Information Science SNPD held on August , in Kyoto, Japan was to bring together researchers and scientists, businessmen and entrepreneurs, teachers and students to discuss the numerous fields of computer science, and to share ideas and information in a meaningful way.

Our conference officers selected the best 17 papers from those accepted for presentation at the conference in order to publish them in this volume. The papers were chosen based on review scores submitted by members of the program committee and underwent further rounds of rigorous review. The conference organizers selected 17 outstanding papers from SNPD, all of which you will find in this volume of Springer's Studies in Computational Intelligence.