
National Natural Science Foundation of China (s61071131)

Works: 2  Citations: 2  H-index: 1

Funding: Beijing Natural Science Foundation; National Natural Science Foundation of China

Related fields: Automation & Computer Technology; Electronics & Telecommunications

Efficient Leave-One-Out Strategy for Supervised Feature Selection (cited 2 times)
2013
Feature selection is a key task in statistical pattern recognition. Most feature selection algorithms have been proposed based on specific objective functions which are usually intuitively reasonable but can sometimes be far from the more basic objectives of feature selection. This paper describes how to select features such that the basic objectives, e.g., classification or clustering accuracies, can be optimized in a more direct way. The analysis requires that the contribution of each feature to the evaluation metric can be quantitatively described by some score function. Motivated by the conditional independence structure in probability distributions, the analysis uses a leave-one-out feature selection algorithm which provides an approximate solution. The leave-one-out algorithm improves the conventional greedy backward elimination algorithm by preserving more interactions among features in the selection process, so that the various feature selection objectives can be optimized in a unified way. Experiments on six real-world datasets with different feature evaluation metrics have shown that this algorithm outperforms popular feature selection algorithms in most situations.
Dingcheng Feng, Feng Chen, Wenli Xu
Keywords: statistical pattern recognition; probability distribution
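The leave-one-out backward elimination the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `toy_score` metric and the synthetic data are placeholder assumptions standing in for whatever classification or clustering evaluation metric is being optimized.

```python
import numpy as np

def loo_backward_elimination(X, y, score, k):
    """Keep k features of X by greedy backward elimination:
    at each step, drop the feature whose removal hurts the
    leave-one-out score the least."""
    selected = list(range(X.shape[1]))
    while len(selected) > k:
        # Score every candidate subset that leaves one feature out.
        scores = [(score(X[:, [f for f in selected if f != j]], y), j)
                  for j in selected]
        _, cheapest = max(scores)   # removing this feature costs least
        selected.remove(cheapest)
    return selected

def toy_score(Xs, y):
    # Placeholder metric: distance between the two class means.
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    return float(np.linalg.norm(m0 - m1))

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 0] += y * 3.0   # feature 0 carries class information
X[:, 3] += y * 2.0   # feature 3 carries class information
print(loo_backward_elimination(X, y, toy_score, k=2))
```

Because every candidate subset is re-scored as a whole at each step, interactions among the remaining features are preserved, which is the point the abstract makes about improving on plain greedy elimination.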
Learning robust principal components from L1-norm maximization
2012
Principal component analysis (PCA) is fundamental in many pattern recognition applications. Much research has been performed to minimize the reconstruction error in L1-norm based reconstruction error minimization (L1-PCA-REM), since conventional L2-norm based PCA (L2-PCA) is sensitive to outliers. Recently, the variance maximization formulation of PCA with the L1-norm (L1-PCA-VM) has been proposed, where new greedy and non-greedy solutions are developed. Armed with the gradient ascent perspective for optimization, we show that the L1-PCA-VM formulation is problematic in learning principal components and that only a greedy solution can achieve the intended robustness, as verified by experiments on synthetic and real-world datasets.
Ding-cheng FENG
Keywords: L1-norm; principal component analysis; pattern recognition
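For context, the greedy solution to L1-norm variance maximization for a single component is usually the fixed-point sign-flipping iteration below. This is a minimal sketch under that assumption, not the authors' code; the data, seed, and iteration cap are illustrative.

```python
import numpy as np

def l1_pca_component(X, n_iter=100, seed=0):
    """Greedily maximize sum_i |x_i . w| over unit vectors w
    by alternating sign assignment and re-projection."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)            # polarity of each projection
        s[s == 0] = 1.0               # break ties consistently
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):     # fixed point reached
            break
        w = w_new
    return w

# Synthetic data whose dominant direction is the x-axis.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) * np.array([5.0, 0.5])
w = l1_pca_component(X - X.mean(axis=0))
print(np.round(np.abs(w), 2))   # close to the x-axis direction
```

Each iteration can only increase the L1 objective, so the loop terminates at a local maximum; the greedy character of this update is what the abstract credits with achieving robustness to outliers.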