We study the approximation of functions from anisotropic Sobolev classes B(W_p^r([0,1]^d)) and Hölder-Nikolskii classes B(H_p^r([0,1]^d)) in the L_q([0,1]^d) norm with q ≤ p in the quantum model of computation. We determine the quantum query complexity of this problem up to logarithmic factors. The results show that quantum algorithms are significantly better than classical deterministic or randomized algorithms.
We study the integration of multivariate functions in the quantum model of computation. Using a new reduction approach, we obtain a lower bound on the n-th minimal query error for the anisotropic Sobolev classes B(W_p^r([0,1]^d)) (r ∈ ℤ_+^d). Combining this result with a previous one of ours, we determine the optimal bounds on the n-th minimal query error for the anisotropic Hölder-Nikolskii classes B(H_∞^r([0,1]^d)) and Sobolev classes B(W_∞^r([0,1]^d)). For both types of classes, the results show that quantum algorithms yield a significant speedup over classical deterministic and randomized algorithms. Keywords: quantum integration - anisotropic Sobolev classes - Hölder-Nikolskii classes - n-th minimal query error. Mathematics Subject Classification (2000): 41A63 - 65D15 - 65Y20. Supported by the National Natural Science Foundation of China (Grant Nos. 10501026 and 60675010).
We study the approximation of the imbedding of functions from anisotropic and generalized Sobolev classes into L_q([0,1]^d) space in the quantum model of computation. Based on quantum algorithms for approximating the finite imbedding from L_p^N to L_q^N, we develop quantum algorithms for approximating the imbedding from the anisotropic Sobolev classes B(W_p^r([0,1]^d)) to L_q([0,1]^d) space for all 1 ≤ p, q ≤ ∞ and prove their optimality. Our results show that for p < q the quantum model of computation can bring a speedup roughly up to a squaring of the rate in the classical deterministic and randomized settings.
Transfer learning has attracted growing interest in recent years. It concerns classification problems in which the labeled source-domain data and the target data follow different distributions, and aims to build an inductive classification model that can predict newly arriving target data. We first analyze the class-ratio drift problem in transductive transfer learning, and then propose a normalization method that makes the predicted class ratios approach the actual class ratios of the samples. Furthermore, we propose an inductive transfer learning algorithm based on a hybrid regularization framework, which combines manifold regularization of the target-domain distribution structure, entropy regularization of the prediction probabilities, and expectation regularization of the class ratios. This framework is applied to inductive models learned from the source domain to the target domain. Finally, experimental results on real text datasets show that the proposed inductive transfer learning model is effective and can directly predict newly arriving target data.
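The class-ratio normalization step described above can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed, simplified version of the idea: rescale each class's predicted probabilities so that the average predicted class proportion matches a given target prior, then renormalize each row. The function name and the choice of target priors are hypothetical.

```python
import numpy as np

def normalize_class_ratio(probs, target_priors):
    """Rescale predicted class probabilities so the average predicted
    class proportion matches the given target priors, then renormalize
    each row to sum to 1. Illustrative sketch only."""
    probs = np.asarray(probs, dtype=float)
    predicted_priors = probs.mean(axis=0)           # current class proportions
    scale = np.asarray(target_priors, dtype=float) / predicted_priors
    adjusted = probs * scale                        # column-wise rescaling
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Example: two classes, raw predictions biased toward class 0
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4]])
adjusted = normalize_class_ratio(probs, target_priors=[0.5, 0.5])
```

After adjustment the average predicted proportion of the under-predicted class increases toward its target prior, which is the drift correction the abstract refers to.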
Analytical study of large-scale nonlinear neural circuits is a difficult task. Here we analyze the function of neural systems by probing the fuzzy logical framework of the neural cells' dynamical equations. Although there is a close relation between the theories of fuzzy logical systems and neural systems, and many papers investigate this subject, most investigations focus on finding new functions of neural systems by hybridizing fuzzy logical and neural systems. In this paper, the fuzzy logical framework of neural cells is used to understand the nonlinear dynamic attributes of a common neural system by abstracting the fuzzy logical framework of a neural cell. Our analysis enables the educated design of network models for classes of computation. As an example, a recurrent network model of the primary visual cortex has been built and tested using this approach.
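The correspondence between a neural cell's graded response and a fuzzy truth degree can be sketched as follows. This is a generic illustration of the idea, not the authors' specific framework: a sigmoidal activation is read as a membership value in [0, 1], and downstream cells combining inputs are modeled with Zadeh's min/max connectives. The gain and threshold parameters are assumptions.

```python
import numpy as np

def membership(x, threshold=0.0, gain=4.0):
    """Read a sigmoidal activation as a fuzzy truth degree in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def fuzzy_and(a, b):
    return np.minimum(a, b)   # Zadeh t-norm (conjunction)

def fuzzy_or(a, b):
    return np.maximum(a, b)   # Zadeh s-norm (disjunction)

# Two input neurons; a downstream cell acting as a fuzzy "AND"
a = membership(0.8)
b = membership(0.3)
response = fuzzy_and(a, b)
```

In this reading, a cell that only fires strongly when both inputs are strong behaves like the conjunction, which is the kind of abstraction the abstract describes.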
HU Hong1, LI Su2, WANG YunJiu2, QI XiangLin2 & SHI ZhongZhi1 1 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China
Nonlinear m-term approximation plays an important role in machine learning, signal processing and statistical estimation. In this paper, by means of a nondecreasing dominated function, a greedy adaptive compression numerical algorithm for the best m-term approximation with regard to a tensor product wavelet-type basis is proposed. The algorithm provides the asymptotically optimal approximation for the class of periodic functions with mixed Besov smoothness in the L_q norm. Moreover, it depends only on the expansion of the function f in the tensor product wavelet-type basis, but neither on q nor on any special features of f.
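The core of greedy best m-term approximation can be sketched in a few lines. This is the standard thresholding idea, not the paper's full adaptive compression algorithm: given the coefficients of f in the basis, keep the m coefficients of largest magnitude and discard the rest. The function name is hypothetical.

```python
import numpy as np

def best_m_term(coeffs, m):
    """Greedy m-term approximation: keep the m coefficients of largest
    magnitude, zero out the rest. Standard thresholding sketch."""
    coeffs = np.asarray(coeffs, dtype=float)
    keep = np.argsort(np.abs(coeffs))[-m:]      # indices of the m largest |c_k|
    approx = np.zeros_like(coeffs)
    approx[keep] = coeffs[keep]
    return approx

c = np.array([0.1, -2.0, 0.05, 1.5, -0.3])
approx = best_m_term(c, m=2)                    # keeps -2.0 and 1.5
```

Note that, as the abstract emphasizes, this selection rule depends only on the coefficient expansion of f, not on the target norm index q.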