Abstract:
An image degradation model is established, and image restoration is analyzed using both unconstrained and constrained restoration methods. On this basis, images are restored with the Lucy-Richardson method, blind deconvolution, constrained least squares filtering, and Wiener filtering, and the results are analyzed and compared. The results show that Wiener filtering produces the sharpest restored images.
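The Wiener filter the abstract favors can be sketched in a few lines of numpy. This is an illustrative frequency-domain implementation, not the paper's code; the function name and the regularization constant K (a stand-in for the noise-to-signal power ratio) are our choices.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Wiener-filter restoration of `blurred` given the blur PSF.

    Larger K suppresses noise amplification at frequencies where the
    blur transfer function H is small; K -> 0 is naive inverse filtering.
    """
    H = np.fft.fft2(psf, s=blurred.shape)      # blur transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + K)      # Wiener filter
    return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))

# Demo: circularly blur a synthetic image with a 3x3 box PSF, restore it,
# and compare mean-squared errors before and after restoration.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
psf = np.ones((3, 3)) / 9.0
H = np.fft.fft2(psf, s=image.shape)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(image)))
restored = wiener_deconvolve(blurred, psf, K=1e-3)

mse_blurred = np.mean((blurred - image) ** 2)
mse_restored = np.mean((restored - image) ** 2)
print(mse_restored < mse_blurred)
```

The same demo could be rerun with Richardson-Lucy or constrained least squares in place of `wiener_deconvolve` to reproduce the kind of comparison the abstract describes.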
Abstract:
The Caron process of nickel extraction from lateritic ores produces a fair amount of an inert residue, the so-called "black sludge", which is disposed of in a dam according to the required environmental regulations. On the other hand, it represents an opportunity for resource recovery. This work presents the characterization of the residue, discusses some possible ways to recover the valuables, and presents experimental results of pyrometallurgical runs aimed at recovering nickel as an iron-nickel alloy. It has been shown that the residue contains large amounts of silicon, iron, aluminum, and magnesium, in the form of oxides and silicates, and small quantities of nickel, copper, and cobalt, along with several other elements. It has also been shown that gravimetric and magnetic concentration methods do not yield promising results. On the other hand, it is possible to obtain iron-nickel alloys by high-temperature carbothermic reduction of the "black sludge".
Abstract:
Welcome to the 23rd Annual ACM Symposium on Applied Computing (SAC 2008). This international event is dedicated to computer scientists, engineers, and practitioners seeking innovative ideas in various areas of computer applications. This year, the conference is hosted by the University of Fortaleza and the Federal University of Ceara, Brazil. The organizing committee is grateful for your participation in this exciting international gathering.
The ACM Special Interest Group on Applied Computing is dedicated to furthering the interests of computing professionals engaged in the design and development of new computing applications, interdisciplinary application areas, and applied research. The conference provides a forum for discussion and exchange of new ideas addressing computational algorithms and complex applications. This goal is reflected in the wide spectrum of application areas and tutorials designed to provide a variety of discussion topics during this event.
Abstract:
In this paper, we prove the following two results that expose some combinatorial limitations to list decoding Reed-Solomon codes.
Given n distinct elements α1,...,αn from a field F, and n subsets S1,...,Sn of F each of size at most l, the list decoding algorithm of Guruswami and Sudan [7] can in polynomial time output all polynomials p of degree at most k which satisfy p(αi) ∈ Si for every i, as long as l < ⌈n/k⌉. We show that the performance of this algorithm is the best possible in a strong sense; specifically, we show that when l = ⌈n/k⌉, the list of output polynomials can be super-polynomially large in n. One way to interpret our result is the following. The algorithm in [7] can, when given as input n' distinct pairs (βi, γi) ∈ F² (the βi's need not be distinct), find and output all degree-k polynomials p such that p(βi) = γi for at least t values of i, provided t > √(kn'). By our result, an improvement to the Reed-Solomon list decoder of [7] that works with slightly smaller agreement, say t > √(kn') − k/2, can only be obtained by exploiting some property of the βi's (for example, their (near) distinctness).
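A back-of-envelope check (our reading, not a derivation from the paper) shows how the two parameterizations above line up: each of the n lists contributes at most l pairs, and a polynomial consistent with every list agrees with t = n of them.

```latex
% With n' \le n l pairs and agreement t = n, the Guruswami--Sudan
% condition t > \sqrt{k n'} becomes
n > \sqrt{k\,n\,l}
\;\Longleftrightarrow\; n^2 > k\,n\,l
\;\Longleftrightarrow\; l < \frac{n}{k},
% matching the l < \lceil n/k \rceil threshold stated above.
```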
For Reed-Solomon codes of block length n and dimension k where k = n^δ for small enough δ, we exhibit an explicit received word r with a super-polynomial number of Reed-Solomon codewords that agree with it on (2 − ε)k locations, for any desired ε > 0 (we note agreement of k is trivial to achieve). Such a bound was known earlier only for a non-explicit center. We remark that finding explicit bad list decoding configurations is of significant interest; for example, the best known rate vs. distance trade-off is based on a bad list decoding configuration for algebraic-geometric codes [14], which is unfortunately not explicitly known.
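The combinatorial setting of the first result can be made concrete by brute force on a toy instance. This is purely illustrative (the field, points, lists, and degree bound below are arbitrary small choices, not the paper's construction): we enumerate every polynomial of degree at most k over a small prime field and count those consistent with all the input lists.

```python
from itertools import product

q = 5                      # field F = GF(5)
k = 1                      # degree bound
alphas = [0, 1, 2, 3]      # n = 4 distinct evaluation points
# input lists S_i of size l = 2 (toy values chosen by hand)
S = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]

def evaluate(coeffs, x):
    """Evaluate the polynomial with coefficients (c0, c1, ...) at x, mod q."""
    return sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q

# Every degree <= k polynomial consistent with all lists: p(alpha_i) in S_i.
solutions = [
    coeffs
    for coeffs in product(range(q), repeat=k + 1)
    if all(evaluate(coeffs, a) in S[i] for i, a in enumerate(alphas))
]
print(len(solutions))
```

The paper's lower bound says that at list size l = ⌈n/k⌉ this count can blow up super-polynomially in n; the toy search above only shows the counting problem itself, at sizes where exhaustive enumeration is feasible.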