A safe screening rule for Laplacian support vector machine


  • File type: Article
  • Language: English
  • Publisher: Elsevier
  • Publication year / Country: 2018

Details

Related fields: Computer Engineering
Related specializations: Artificial Intelligence
Journal: Engineering Applications of Artificial Intelligence
University: College of Science, China Agricultural University, China

Published in an Elsevier journal
Keywords: semi-supervised learning, support vector machine, safe screening, variational inequality

Description

1. Introduction

Since the support vector machine (SVM) was proposed by Vapnik (1998), it has been widely applied and well studied (Cortes and Vapnik, 1995; Platt and John, 1999; Steinwart and Christmann, 2008). It has many advantages. First, SVM solves a quadratic programming problem (QPP), which guarantees a unique solution. Second, SVM implements the structural risk minimization principle rather than the empirical risk minimization principle, which minimizes an upper bound on the generalization error.

The traditional SVM is a supervised method that needs the label of each instance to build the model (Hu et al., 2013). However, in many real situations, acquiring class labels is costly and difficult. Semi-supervised learning (SSL) is a category of learning tasks that also makes use of unlabeled data for training (Altınel et al., 2017). In recent decades, SSL has attracted notable attention (Chapelle et al., 2006; Li et al., 2010) and has been introduced into the SVM to improve its prediction performance. Specifically, the Laplacian support vector machine (LapSVM) for SSL was proposed by Chova et al. (2008); it introduces an additional regularization term on the geometry of both labeled and unlabeled samples (Belkin et al., 2006; Zhu, 2005, 2008). This method has been demonstrated to yield excellent performance. The model is built on the manifold assumption, which is suitable for most cases. Its solution, characterized by a convex quadratic optimization problem, is guaranteed to be globally optimal. By contrast, most other semi-supervised SVMs (Chapelle et al., 2008; Yang et al., 2014) require solving complex non-convex problems, and their solutions are only locally optimal.

Many improvements to LapSVM have been presented to enhance its computational speed and prediction accuracy. In the literature (Melacci and Belkin, 2011), two strategies were presented for solving the primal problem of LapSVM in order to reduce the training time. Qi et al. (2014) proposed a fast LapSVM (FLapSVM), in which the authors modify the model of LapSVM to avoid extra matrix operations and reduce the computation. However, these algorithms focus only on the methods of solving the optimization problem, not on the scale of the problem itself. In addition, Laplacian graph approaches have been successfully introduced into twin support vector machines (TSVMs) (Jayadeva et al., 2007; Xu et al., 2016, 2017; Xu, 2017; Qi et al., 2012; Yang and Xu, 2016). However, it remains challenging to deal with large datasets on account of the burden of computational cost and memory.
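Since the excerpt only describes the additional regularization term in words, the following LaTeX sketch restates the standard LapSVM primal problem in the manifold regularization framework of Belkin et al. (2006), which the excerpt cites; the symbols $\gamma_A$, $\gamma_I$, $l$, $u$ and the graph Laplacian $L$ are the usual notation assumed here and are not defined in the excerpt above.

% Standard LapSVM primal problem (hinge loss + RKHS norm + manifold regularizer),
% following Belkin et al. (2006). Notation is assumed, not taken from the excerpt.
\min_{f \in \mathcal{H}_K} \;
\frac{1}{l}\sum_{i=1}^{l} \max\bigl(0,\, 1 - y_i f(x_i)\bigr)
\;+\; \gamma_A \lVert f \rVert_K^2
\;+\; \frac{\gamma_I}{(l+u)^2}\, \mathbf{f}^{\top} L\, \mathbf{f}

where $\mathbf{f} = \bigl(f(x_1), \dots, f(x_{l+u})\bigr)^{\top}$, the first $l$ samples are labeled and the remaining $u$ are unlabeled, and $L$ is the graph Laplacian built from all $l+u$ samples. The first term is the hinge loss on the labeled data, the second controls complexity in the reproducing kernel Hilbert space $\mathcal{H}_K$, and the third is the geometry (manifold) regularizer over both labeled and unlabeled samples mentioned in the introduction; it remains a convex QPP, which is why the solution is globally optimal.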
If you hold the rights to this work or title, please contact us through the "Contact Us" section, and see the Terms and Conditions page for more information.

