Abstract: We combine Homotopy Type Theory with axiomatic cohesion, expressing the latter internally with a version of "adjoint logic" in which the discretization and codiscretization modalities are characterized using a judgmental formalism of "crisp variables". This yields type theories that we call "spatial" and "cohesive", in which the types can be viewed as having independent topological and homotopical structure. These type theories can then be used to study formally the process by which topology gives rise to homotopy theory (the "fundamental $\infty$-groupoid" or "shape"), disentangling the "identifications" of Homotopy Type Theory from the "continuous paths" of topology. In a further refinement called "real-cohesion", the shape is determined by continuous maps from the real numbers, as in classical algebraic topology. This enables us to reproduce formally some of the classical applications of homotopy theory to topology. As an example, we prove Brouwer's fixed-point theorem. | CommonCrawl |
Professor Halfbrain has spent his entire weekend cutting lots of wooden $50\times50$ checkerboards into lots of polyominoes. He looked at various pattern polyominoes with area $49$, and always tried to cut as many copies of the pattern as possible out of some wooden $50\times50$ checkerboard. The pattern polyomino could be rotated and flipped over, but it always had to be aligned with the cells of the checkerboard. For most patterns, the professor was able to cut quite a number of copies while wasting only a small fraction of the checkerboard area.
Professor Halfbrain has also proved two extremely deep theorems on such checkerboard cuttings.
Professor Halfbrain's first theorem: For every possible polyomino pattern of area $49$, it is possible to cut at least one copy of the pattern out of a wooden $50\times50$ checkerboard.
Professor Halfbrain's second theorem: There exist polyomino patterns of area $49$, for which it is not possible to cut 52 copies of the pattern out of a wooden $50\times50$ checkerboard.
This puzzle asks you to improve the two theorems of professor Halfbrain and to make them even deeper. Find an integer $x$, so that "one copy" in the first theorem may be replaced by "$x$ copies", and so that "52 copies" in the second theorem may be replaced by "$x+1$ copies" (again yielding true statements, of course).
If the minimal bounding box of a 49-cell polyomino is $x\times y$, then $x+y\leq 50$.
Proof: by induction. A 1-cell polyomino fits in a $1\times 1$ box, so its semiperimeter (i.e., $x+y$) is 2. Every $(n+1)$-cell polyomino is formed by adding a cell to an $n$-cell polyomino, and that additional cell can increase the $x$ size of the bounding box or increase the $y$ size of the bounding box, but not both, so if every $n$-cell polyomino fits in a box of semiperimeter $n+1$, then every $(n+1)$-cell polyomino fits in a box of semiperimeter $(n+1)+1$.
Four $x\times y$ rectangles with $x+y\leq50$ always fit into a $50\times50$ square.
Proof: by construction. Use a 'pinwheel' arrangement where the two $x\times y$ boxes in the NE and SW corners are placed with $x$ horizontal and $y$ vertical, and the two in the other corners are placed with $y$ horizontal and $x$ vertical.
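This pinwheel placement is easy to verify exhaustively. The following sketch (my own check, not part of the original argument) places the four rectangles as described and confirms that no cell is covered twice, for every admissible $x\times y$:

```python
# Check: four x-by-y rectangles with x + y <= 50 fit in a 50x50 square
# using the pinwheel arrangement described above.
def fits(x, y, n=50):
    grid = [[0] * n for _ in range(n)]
    # (col, row, width, height) for the SW, NE, NW and SE rectangles
    rects = [(0, 0, x, y), (n - x, n - y, x, y),
             (0, n - x, y, x), (n - y, 0, y, x)]
    for c0, r0, w, h in rects:
        for r in range(r0, r0 + h):
            for c in range(c0, c0 + w):
                grid[r][c] += 1
    return all(cell <= 1 for row in grid for cell in row)

assert all(fits(x, y) for x in range(1, 50) for y in range(1, 51 - x))
```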
Five copies of a 'Plus' polyomino (with two length-$25$ arms intersecting in the middle) can't be placed in a $50\times50$ square.
Proof: First, a few subfacts. For convenience, I'm going to define the $50\times 50$ square as running from $\langle-25,-25\rangle$ to $\langle25,25\rangle$.
a) The center of any Plus must be inside the inner square from $\langle-13,-13\rangle$ to $\langle13,13\rangle$ (or else it'll go outside the bounds of the $50\times50$ board).
b) The center of one Plus can't be placed anywhere in the bounding box of another (or else the two will overlap on their arms).
c) If the center of a Plus is placed anywhere in a $13\times 13$ box, its bounding box covers that entire space.
Now consider the four quadrants from $\langle0,0\rangle$ out to $\langle\pm13,\pm13\rangle$. By b) and c), each of these quadrants can only have one Plus's center placed within it, but between them these quadrants cover the entire 'legal' space (per a)) for placing Plus polyominoes. Since we can fit only one per quadrant, we can fit at most four.
You can make a polyomino consisting of a $13\times14$ rectangle with an $11\times12$ hole in the middle and one corner tile removed (giving area $49$). This can only fit 9 times into the $50\times50$ board. I believe this is the worst possible fit, so I would substitute 'nine' and 'ten' for 'one' and '52' in the first and second theorems respectively.
| CommonCrawl |
Thin airfoil theory gives $C = C_o + 2\pi\alpha$, where $C_o$ is the lift coefficient at $\alpha = 0$. However, I couldn't find any equation to calculate $C_o$, which must be some function of the airfoil shape. In other words, how do you extend thin airfoil theory to cambered airfoils without having to use experimental data?
This is my own attempt: I made this model of the airfoil's lift coefficient at zero angle of attack for a project I am working on. It's derived from a Joukowsky transform. It seems to work, but how is $C_o$ actually calculated?
Because it is derived from the Joukowsky transform of the inviscid potential flow around a cylinder, it's more accurate at high Reynolds numbers.
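For reference, classical thin airfoil theory does give a closed-form answer (this is the standard textbook result, not something stated in the original post): for a camber line $z_c(x)$ on a chord of length $c$, with the substitution $x = \tfrac{c}{2}(1-\cos\theta_0)$,

$$\alpha_{L=0} = -\frac{1}{\pi}\int_0^{\pi} \frac{dz_c}{dx}\,(\cos\theta_0 - 1)\,d\theta_0, \qquad C_o = -2\pi\,\alpha_{L=0},$$

so $C_o$ follows from the camber-line slope alone, with no experimental data.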
| CommonCrawl |
Abstract: This paper deals with sparse feature selection and grouping for classification and regression. The classification or regression problems under consideration consist in minimizing a convex empirical risk function subject to an $\ell^1$ constraint, a pairwise $\ell^\infty$ constraint, or a pairwise $\ell^1$ constraint. Existing work, such as the Lasso formulation, has focused mainly on Lagrangian penalty approximations, which often require ad hoc or computationally expensive procedures to determine the penalization parameter. We depart from this approach and address the constrained problem directly via a splitting method. The structure of the method is that of the classical gradient-projection algorithm, which alternates a gradient step on the objective and a projection step onto the lower level set modeling the constraint. The novelty of our approach is that the projection step is implemented via an outer approximation scheme in which the constraint set is approximated by a sequence of simple convex sets consisting of the intersection of two half-spaces. Convergence of the iterates generated by the algorithm is established for a general smooth convex minimization problem with inequality constraints. Experiments on both synthetic and biological data show that our method outperforms penalty methods. | CommonCrawl |
We introduce the concepts of IVF m-semiopen sets, IVF m-preopen sets, IVF m-semicontinuous mappings and IVF m-precontinuous mappings on interval-valued fuzzy minimal spaces. We investigate characterizations of IVF m-semicontinuous mappings and IVF m-precontinuous mappings and study properties of IVF m-semiopen sets and IVF m-preopen sets.
| CommonCrawl |
tea.mathoverflow.net - Why has this answer been deleted?
and a set of generators can be obtained by taking minimal such monomials (i.e. not divisible by smaller such monomials). And relations between these generators are of the form (monomial in $w_i$) = (another monomial in $w_i$). That's a pretty easy presentation by any standard.
P.S. This works over $\mathbb C$ or any ring containing $1/n$ and $\mu$."
You are an unregistered user.
You are trying to delete an accepted answer.
You are trying to delete a question with an upvoted answer.
This is my experience from SE 2.0, but I believe that this feature is shared with MO and its obsolete software.
There is an old story here and I think this answer was deleted as part of it. You might be able to figure things out if you check meta threads around the end of April 2010. I don't think anybody would mind if it got undeleted.
Ah, I see! Voting to undelete then, and if possible to scan through the other deleted posts from the same author.
@FGD: I would mind greatly if the answer were undeleted, purely on the grounds that I would want to respect the wishes of the author.
@justcurious: if you post something here, it is not yours anymore. If the community feels that your contribution is important and useful, then you have to accept that it becomes visible to everyone. At the moment it is visible to quite a few users anyway. | CommonCrawl |
The Levenberg-Marquardt method is used to do a curve fitting to measured data. It minimizes the expression $$ S(\beta) = \sum_i^n ( y_i - f (x_i, \beta) )^2 $$ The measured data is collected in the $y_i$ and is measured at the points $x_i$. The function $f$ depends on the parameters $\beta$, which are varied to minimize $S(\beta)$, the error. For more details check out the Wikipedia article.
Here is a simple example for a least square problem.
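The original example is not reproduced here, so as an illustrative stand-in (making no assumptions about the .NET routine's API), here is the same kind of fit using SciPy's Levenberg-Marquardt implementation:

```python
import numpy as np
from scipy.optimize import least_squares

# Measured data y_i at points x_i (synthetic, for illustration only)
x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * np.random.default_rng(0).normal(size=x.size)

# Residuals y_i - f(x_i, beta) for the model f(x, beta) = a * exp(-b * x)
def residuals(beta):
    a, b = beta
    return y - a * np.exp(-b * x)

# method="lm" selects the Levenberg-Marquardt algorithm (MINPACK wrapper)
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(fit.x)  # parameters minimizing S(beta), close to (2.5, 1.3)
```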
The leastsq_levm routine is a convenient and simple method for least square fits in .NET (C# and Visual Basic). You can find more details about nonlinear least squares and some examples on our website. The class reference for leastsq_lvm can be found here. | CommonCrawl |
Dehghan Niri, T., Hosseini, M., Heydari, M. (2019). An efficient improvement of the Newton method for solving nonconvex optimization problems. Computational Methods for Differential Equations, 7(1), 69-85.
The Newton method is one of the most famous numerical methods among the line search methods for minimizing functions.
It is well known that the search direction and step length play important roles in this class of methods to solve optimization problems. In this investigation, a new modification of the Newton method to solve unconstrained optimization problems is presented. The significant merit of the proposed method is that the step length $\alpha_k$ at each iteration is equal to 1. Additionally, the convergence analysis for this iterative algorithm is established under suitable conditions.
Some illustrative examples are provided to show the validity and applicability of the presented method and a comparison is made with several other existing methods. | CommonCrawl |
where $\alpha_k$ is the log hazard ratio for the $k$th cause for a one-unit increase in the biomarker, at time $t$. Each competing event can be modelled however we like, with different baseline hazard functions, and with different covariate effects. Of course, we can also have different or additional association structures for each event.
To illustrate, I'm going to show you how to simulate data from a joint longitudinal and competing risks model, with two competing risks, representing death from cancer and death from other causes, and then show how to fit the true model using merlin.
Let's assume a simple random intercept and random linear trend for the biomarker trajectory, i.e. a model of the form $m_i(t) = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\,t$, with subject-specific random effects $b_{0i}$ and $b_{1i}$.
We start by setting a seed, as always, and generate a sample of 300 patients and create an id variable.
Next we generate our subject-specific random effects, for the intercept and slope. We need a single draw from each distribution, per patient. I'm assuming the random effects are independent, but that's easily adapted using drawnorm. I also generate a treatment group indicator, assigning to each arm with 50% probability.
Now we simulate cause-specific event times, using survsim. Simulating joint longitudinal-survival data is not the easiest of tasks mathematically, but thanks to survsim it's pretty simple. We simply specify our cause-specific hazard function, and survsim does the hard work (numerical integration nested within root-finding), details in Crowther and Lambert (2013).
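To make the underlying technique concrete, here is a generic sketch of simulating an event time by numerically inverting the cumulative hazard (an illustration of the general approach, not survsim's actual implementation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def draw_event_time(hazard, t_max=100.0):
    """Simulate T with S(t) = exp(-H(t)) by solving H(T) = -log(U)."""
    target = -np.log(rng.uniform())
    H = lambda t: quad(hazard, 0.0, t)[0]   # cumulative hazard, numerically
    if H(t_max) < target:                   # subject survives past the horizon
        return t_max
    return brentq(lambda t: H(t) - target, 1e-12, t_max)  # root-finding

# e.g. a Weibull cause-specific hazard scaled by a covariate effect
t1 = draw_event_time(lambda t: 0.1 * 1.2 * t ** 0.2 * np.exp(0.5))
print(t1)
```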
This command simulates our event times and stores them in t1, with the censoring indicator in d1, given administrative censoring at 5 years.
With that, we're all done with the competing risk outcomes. Now onto our biomarker.
Let's assume a setting where our biomarker is recorded at baseline, and then annually, so each patient can have up to five measurements. The easiest thing to do is to expand our dataset, by creating replicants of each row, and then drop any observations which occur after each patient's event/censoring time.
For the survival outcomes we only need one row of data per patient, so we replace the repeated event times and event indicators with missings.
The resulting model specification is, I would say, rather elegantly simple for such a complex model…you may disagree. I'm using the EV[y] element type to link the current value of the biomarker to each cause-specific hazard model, making sure to specify the timevar() options, so time is correctly handled. This is also why we must use the fp() element type to specify functions of time (or rcs() if you prefer splines). We can use stime as our outcome in both survival models but simply use the cause-specific event indicators so we get the correct contributions to the likelihood.
Now we have our fitted model, generally of most interest within a competing risks analysis is both the cause-specific hazard ratios, which we get from our results table, and estimates of the cause-specific cumulative incidence functions.
We use merlin's predict engine to help us with such things. When predicting something which depends on time, it's often much easier to specify a timevar(), which contains timepoints at which to predict at, which subsequently makes plotting much easier.
Let's now predict each cause-specific cumulative incidence function, in particular, I'll predict marginal CIFs, so we can interpret them as population-average predictions. I'll also use the at() option so we can investigate the effect of treatment.
For plotting purposes, we often create stacked plots of CIFs, and one way to do this is to add them appropriately together to draw the area under the curve, and then overlay one of the CIFs.
We could directly quantify the impact of treatment on each CIF by predicting the difference. To get confidence intervals, we use Stata's powerful predictnl command, which takes a bit of time, but gives us something really useful.
So the joint model I've talked about is an introductory example in the competing risks setting. Given merlin's capabilities, it's rather simple to extend to other association structures, such as the rate of change (dEV[y]) or the integral (iEV[y]) of the trajectory function, and to use different distributions for each cause-specific hazard model…the list of extensions goes on. | CommonCrawl |
Let $ABCD$ be a general quadrilateral.
$\alpha$ and $\gamma$ are opposite angles.
Let the areas of $\triangle DAB$ and $\triangle BCD$ be $\mathcal A_1$ and $\mathcal A_2$ respectively.
Brahmagupta's Formula is a specific version of Bretschneider's Formula for a cyclic quadrilateral.
This entry was named for Carl Anton Bretschneider.
He published a proof in 1842. | CommonCrawl |
A seating order of the people inside the church has been given before Mirko enters. Mirko is, of course, late for the morning Mass and will sit in an empty space so that he shakes hands with as many people as he can. If there are no empty seats left, Mirko will simply give up on the idea and go to the evening Mass instead. We can assume that nobody enters the church after Mirko.
Calculate the total number of handshakes given during the morning Mass.
The first line of input contains positive integers $R$ and $S$ ($1 \leq R, S \leq 50$) as stated in the text. Each of the following $R$ lines contains $S$ characters. These $R \times S$ characters represent the seating order. The character "." (dot) represents an empty place and the character "o" (lowercase letter o) represents a person.
The first and only line of output should contain the required number of handshakes.
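A solution sketch (illustrative; it assumes, as is standard for this task, that handshakes are exchanged between people in horizontally, vertically and diagonally adjacent seats):

```python
import sys

# The 8 surrounding seats (assumed handshake adjacency)
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def seated_neighbours(grid, R, S, r, c):
    return sum(1 for dr, dc in DIRS
               if 0 <= r + dr < R and 0 <= c + dc < S
               and grid[r + dr][c + dc] == 'o')

def solve(grid, R, S):
    # Handshakes among the people already seated: each adjacent pair once.
    total = sum(seated_neighbours(grid, R, S, r, c)
                for r in range(R) for c in range(S)
                if grid[r][c] == 'o') // 2
    # Mirko takes the empty seat with the most seated neighbours, if any.
    gains = [seated_neighbours(grid, R, S, r, c)
             for r in range(R) for c in range(S) if grid[r][c] == '.']
    return total + (max(gains) if gains else 0)

data = sys.stdin.read().split()
R, S = int(data[0]), int(data[1])
print(solve(data[2:2 + R], R, S))
```
| CommonCrawl |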
If $\Phi,\Psi:\mathbb R^n \to\mathbb R^n $ are diffeomorphisms, and $f:\mathbb R\to[0,1]$ is smooth and surjective, is the map $x\mapsto f(\|x\|)\Phi(x)+(1-f(\|x\|))\Psi(x)$ a diffeomorphism?
I promptly posted a counterexample. Within a few minutes, the OP and I engaged in what I have come to identify as a cat and mouse game: they comment on my answer with a slight modification to their question, I reply with a counterexample, they modify, I counter, ad infinitum. When I reached my limit, I replied with my latest counterexample, and recommended the OP edit their question to fit the specific question they actually wanted to ask.
How can one spot a question that will diverge into this sort of cat and mouse game?
How can one recommend the user edit their question in order to keep this from happening?
An incorrect copy of a question from a textbook/notes from a given class.
The XY problem, as Asaf mentions in the comments.
A question which springs independently as curiosity of the user.
Note that they are not mutually exclusive. I will address your questions for each of the cases.
I think the most easily manageable is number 1. It usually is very easy to spot a typo, and so we only need to ask something like: "are you sure that is what you mean, and not (...)?" If the user is not responsive, then disengage. If the question makes sense with the typo, leave it be (it may become a difficult question/trivial question etc.). If the question doesn't make sense, apply "standard procedure": vote to close for whatever reason you see fit (most likely it will be unclear what is being asked, or lacking context). Furthermore, a typo will probably not lead to a lot of edits (there may be cases where many typos do lead to a lot of edits, but these are rare; I don't recall any instance of this happening).
Numbers 2 and 3 can be wild beasts. As soon as a "cat and mouse" question begins to surface (i.e., the user has edited/commented a few times, changing it) and it does not look like a typo, I think the best course of action is to try and verify whether you are dealing with the XY problem. Something as simple as commenting "Are you falling into the XY problem? If so, please try to make explicit the question that motivated you, so we can show better ways to get there." If the user acknowledges this, problem solved. If not, then you probably are in case 3, which is the worst in my opinion.
Number 3 has the most potential to be stressful. In this case, the user probably does not have a good grasp of what he is asking and will forget "trivial" cases (or treat cases which are not trivial as "trivial" because they are not the kind of thing they expect). Here, I don't think there is a clear-cut procedure. Try to talk with the user, and emphasize that changing a question in such a way that potential answers could be invalidated, even if there are none yet, is bad. If he is not satisfied with the question because of the answers it admits, then he should make another, making sure that the question is the right one this time. If the talk is productive, good. But as soon as you realize the talk is not being (or will not be) productive, disengage. If you feel the user is being remarkably rude or disruptive, I think flagging for moderator attention is appropriate. If an edit war begins to happen (people rolling back the question to a previous state but OP not agreeing), flagging also seems appropriate.
Another problem with number 3 is that it may be the case that it is "clear" (also a problem to determine when something is "clear" or not) that OP meant something else, and thus the edit is expected, but people rushed to answer the question for whatever reason. However, this situation will probably be a "one-edit" kind of thing, not several ones. I'm only mentioning this problem for completeness, since I think it is relevant and is one instance where OP has less of the blame.
Difficult to "spot the question", easier to "spot the user". The question will have many obvious errors and assumptions that are obviously going to lead to a big rewrite, but if it doesn't it can be difficult to determine from the question if the user (or anyone else) is going to swoop in and invalidate your answer.
When the change is sufficient, beyond a reasonable refinement, you need to assess who's writing the question: the asker or you. The person asking is supposed to be the asker; if you have to rewrite the question yourself, you might as well do exactly that, or choose not to engage.
Sometimes it's necessary for a bit of back and forth, but you need to decide if it's productive.
Simply editing the question is permitted. If I'm unsure of the best way to pose the question I specifically note that "Improvements to the question are welcomed". If it's a good question that's going to help a number of people in the future it's worth putting some effort into it. If it's "unclear what is being asked" there's a flag for that.
Go in knowing what your commitment is going to be and don't go far beyond it. If I only have a few minutes available I don't tackle a question that will need a long answer.
Just ask, or edit it yourself (regardless of rep, it can be reviewed by peers). Great Q&A is what this site wants. It is sometimes tough to know the correct way to phrase a question when you don't know the answer, and even harder to answer when the asker doesn't know the question.
Give them the benefit of the doubt, commit sufficiently, don't get discouraged or stuck in a loop that slowly leads nowhere. Work for the greater good, if the asker (or answerer) becomes enough of a nuisance they'll accumulate some downvotes to guide them on their way.
Some people don't so much want to play "cat and mouse", instead they feel entitled to play "50! questions"; but if they told you that upfront you might not be interested. They need you to debug their thought processes.
| CommonCrawl |
This paper proves an optimal strategy for Ebert's hat game with three players and more than two hat colours. In general, for $n$ players and $k$ hat colours, we construct a strategy that is asymptotically optimal as $k\rightarrow \infty$. Computer calculation for particular values of $n$ and $k$ suggests that, as long as $n$ is linear in $k$, the strategy is asymptotically optimal. We conclude by comparing our strategy with the strategy of Lenstra and Seroussi and with the bound of Alon, and suggest our strategy is better when $2k \geq n \geq 7$. | CommonCrawl |
Abstract: The object of the study is the class of holomorphic functions in a multidimensional toric space. We distinguish the subclass of the functions equivalent to entire functions in the following sense: $g$ belongs to the subclass whenever there exists a monomial holomorphic mapping $\mathscr F$ such that $f=g\circ\mathscr F$ is an entire function. We give a full description of the functions of the subclass under consideration in terms of the geometric properties of the supports of the series they expand in. For the appearing extensions of the class of entire functions of several variables, we develop an approach to constructing a growth theory for these classes. Applying the method, we find a multidimensional analog to the expansion of a holomorphic function in a Laurent series.
Keywords: entire function of several variables, extension, growth characteristic, multiple Laurent series, support, strictly convex cone. | CommonCrawl |
$f(z)=z^7 + 6z^3 + 7$.
$f(z)$ is analytic at all points except $z = \infty$. Therefore, it is analytic within and on the boundary of the first quadrant.
Thus, the total change in the argument is $4\pi$, and the number of zeros in the first quadrant is 2.
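This count can be sanity-checked numerically (my own verification, not part of the original answer) by integrating $f'/f$ around the boundary of a large quarter-disc:

```python
import numpy as np

f = np.poly1d([1, 0, 0, 0, 6, 0, 0, 7])   # z^7 + 6z^3 + 7
fp = f.deriv()

R, n = 1e3, 200_000
# Quarter-disc contour: [0, R], the arc R*exp(it) for t in [0, pi/2], then [iR, 0]
z = np.concatenate([
    np.linspace(0, R, n),
    R * np.exp(1j * np.linspace(0, np.pi / 2, n)),
    np.linspace(1j * R, 0, n),
])

w = fp(z) / f(z)
integral = np.sum((w[1:] + w[:-1]) / 2 * np.diff(z))  # trapezoidal rule
print(round((integral / (2j * np.pi)).real))  # should print 2, matching the above
```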
Please see the newly attached scanned picture. For $f(iy)$ with $y$ running from $R$ down to $0$: as long as $y$ is positive, $f(iy)$ always lies in the fourth quadrant. When $R$ tends to infinity, $\operatorname{Re}(f(iR)) = 7$ and $\operatorname{Im}(f(iR))$ tends to negative infinity. When $y = 0$, $f(iy) = 7$. So $f(iy)$ approximately rotates counterclockwise from the negative imaginary axis to the positive real axis.
For $x$ from $0$ to $R$: indeed, $f(x)$ stays real, but the argument of a negative real number is not $2n\pi$; it is $(2n+1)\pi$. You need to check whether the sign of $f(x)$ changes here. | CommonCrawl |
The property of two sets $A$ and $B$ in a topological space $X$ requiring the existence of a continuous real-valued function $f$ on $X$ such that the closures of the sets $f(A)$ and $f(B)$ (relative to the usual topology on the real line $\mathbf R$) do not intersect. For example, a space is completely regular if every closed set is functionally separable from each one-point set that does not intersect it. A space is normal if every two closed non-intersecting subsets of it are functionally separable. If every two (distinct) one-point sets in a space are functionally separable, then the space is called functionally Hausdorff. The content of these definitions is unchanged if, instead of continuous real-valued functions, one takes continuous mappings into the plane, into an interval or into the Hilbert cube.
| CommonCrawl |
Why don't I get the same value of percentage ionic character of a particular molecule from different equations?
Where $x_a$ and $x_b$ stand for the electronegativities of atoms $a$ and $b$ in the bond $a-b$.
Now if I put $x_a-x_b = 2$, the 1st, 2nd and 3rd equations respectively give the values $-0.648\%$, $47.5\%$ and $46\%$. What is the reason behind these different values of percentage ionic character for the same molecule? Where am I wrong?
| CommonCrawl |
What has one side and no edges? This isn't an impossible riddle but has an answer, viz. the Klein bottle.
Have you ever taken any two numbers, added the second to the first, written this down, added the result to the second number, written the result down, and so on?
Soon after writing my previous letter to you, I noticed the error I had made in connection with numbers 7 and 9.
The game chosen for this issue of Parabola is played by two people on a $10 \times 10$ chessboard.
Most candidates did not understand the question so ruled themselves out of consideration.
J221 Find a 2-digit number $AB$ such that $(AB)^2 - (BA)^2$ is a perfect square.
J211 The numbers 31,767 and 34,924, when divided by a certain 3 digit divisor, leave the same remainder, also a 3-digit number. Find the remainder. | CommonCrawl |
Title: Simplicially graded algebraic topology.
Abstract: In algebraic topology there are many examples of local to global theorems where a global consequence is drawn from imposing strong local conditions. When working with simplicial complexes it is often possible to fragment a global geometric obstruction into a local obstruction over each simplex with a problem being soluble if and only if all the local obstructions vanish. An important example of this is the total surgery obstruction of Ranicki.
In this talk I will discuss the approach of fragmenting over a simplicial complex, give some examples of problems that can be solved with this approach and explain some potential applications to the algebraic theory of surgery.
Title: Automorphisms of fusion systems of finite groups of Lie type.
Abstract: Fix a prime $p$. Given a finite group $G$ and a Sylow $p$-subgroup $S$ of $G$, there are natural homomorphisms between $\mathrm{Out}(G)$, the group of homotopy classes of self-equivalences of the $p$-complete classifying space of $G$, and the group of fusion-preserving automorphisms of $S$. We compare these groups in the case of finite groups of Lie type. This is joint work with J. M. Møller and B. Oliver.
Title: Decomposition spaces for homotopy algebraic combinatorics.
Abstract: Joint work with J Kock (UAB), A Tonks (Leicester).
In this talk I will explain the use of (infinity-)groupoids instead of sets in enumerative combinatorics, and homotopy equivalence instead of bijective correspondence between equinumerous objects. Groupoid slices, or equivalently presheaves (of groupoids) on a groupoid $G$, take the place of numerical functions $\pi_0(G)\to\mathbb Q$. In particular, the process of taking the incidence coalgebra of a poset or monoid can be lifted: starting with a simplicial groupoid satisfying certain coassociativity conditions (in our terminology, a `decomposition space') one can define an incidence coalgebra in the category of groupoids. Examples include generalisations of Hall (co)algebras, the Butcher–Connes–Kreimer coalgebra of trees, and a new notion of directed restriction species, generalising the restriction species of Schmitt.
Title: Homological Stability for Families of Coxeter Groups.
Abstract: A sequence of groups and homomorphisms satisfies homological stability if, in each homological degree, the induced sequence of maps in homology eventually consists of isomorphisms. Homological stability holds for symmetric groups, general linear groups, mapping class groups, and many many more. In almost all cases we do not know the homology groups themselves. In this talk I will explain a stability result for certain families of Coxeter groups. Examples include the A_n, BC_n and D_n families (all finite), the superideal simplex reflection groups (all hyperbolic), and many others.
Title: Coarse geometry of discrete subgroups of isometries of symmetric spaces.
Abstract: Among the discrete subgroups of isometries of hyperbolic space, convex cocompact groups play an important role, in particular in Thurston's geometrization. Those subgroups are characterized in many ways. In this talk I discuss joint work with M Kapovich and B Leeb about generalizing this notion to higher rank symmetric spaces of non compact type.
Title: Loops, L-spaces, and left-orderability.
Abstract: I'll give an overview of a project-in-progress with Jonathan Hanselman. We develop a calculus for studying the (bordered) Heegaard Floer homology of a particular class of orientable three-manifolds with torus boundary. This allows us to give a complete understanding of when gluing two such manifolds (to form a closed three-manifold) has simplest-possible Heegaard Floer homology — that is, when the resulting closed manifold is an L-space. As an application, we prove the following: For a graph manifold $Y$ which decomposes along a torus into two Seifert fibred pieces, $Y$ is an L-space if and only if $\pi_1(Y)$ is not a left-orderable group. This equivalence between L-spaces and non-left-orderable fundamental groups is conjectured to hold in general. | CommonCrawl |
Notation: why are categories enriched over $\mathcal V$?
In most references about enriched categories, $(\mathcal V, \otimes)$ is supposed to be a monoidal category and then $\mathcal V$-enriched categories are defined.
Why is the letter $\mathcal V$ used for a monoidal category?
| CommonCrawl |
$q$-Discrete Painlevé equations for recurrence coefficients of modified $q$-Freud orthogonal polynomials
Abstract: We present an asymmetric $q$-Painlevé equation. We will derive this using $q$-orthogonal polynomials with respect to generalized Freud weights: their recurrence coefficients will obey this $q$-Painlevé equation up to a simple transformation. We will show a stable method of computing a special solution which gives the recurrence coefficients. We establish a connection with $\alpha$-$q$-$P_V$. | CommonCrawl |
Thus $(132)$ is an even permutation because it is a product of an even number of transpositions, namely $2$.
So how can a permutation be both odd and even?
Another question: if a permutation which is a cycle has odd length, then it is an even permutation. How can I tell whether a permutation is odd/even using the definition of inversion?
For example $(132)$ has length $3$ but is an odd permutation since it has $1$ inversion, whereas $(123)$ is an even permutation since it has zero inversions, both having length $3$.
Can someone kindly help me out?
No permutation is both odd and even. $(123)$ is an even permutation. It is the cycle that sends $1\mapsto 2\mapsto 3\mapsto 1$. It is not the identity permutation. This cycle notation may be a bit confusing in this way if we also use two line notation, in that we also write the two line notation with parentheses and it means something completely different. I usually see one line notation without parentheses, so $123$ is the identity permutation, but $(123)$ is a cycle with an even number of inversions.
Note that you can write $(123) = (312) = (231)$. So your method of detecting inversions is not correct. An inversion in the cycle does not correspond to an inversion in the permutation.
In general a cycle of length $2k$ is an odd permutation, and a cycle of length $2k+1$ is even. This is a pretty simple rule. If you write a permutation as a product of disjoint cycles, the parity is additive as one would expect, as is true for any product of permutations. An easy way to remember this is as follows: $$(123) = (12)(23)$$ $$(2341) = (23)(34)(41) = (23)(34)(14)$$ You can in general split a cycle into a product of transpositions this way, and the number of transpositions, while not the number of inversions, has the same parity as such.
By the definition of a cycle, it is not terribly difficult to prove this multiplication rule. You should give it a shot. Even more fun, we have $$(125347) = (125)(5347) = (125)(534)(47)$$ etc. This splitting rule is a rule I find very useful.
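If you want to check parities mechanically, here is a small sketch of the cycle-splitting rule above (illustrative only):

```python
def parity(perm):
    """Sign of a permutation of {0, ..., n-1} given as a list: i -> perm[i].

    Uses the rule above: a cycle of length L splits into L - 1
    transpositions, so it contributes (-1)**(L - 1) to the sign.
    """
    seen, sign = set(), 1
    for start in range(len(perm)):
        length, j = 0, start
        while j not in seen:        # walk one cycle of the permutation
            seen.add(j)
            j = perm[j]
            length += 1
        if length:
            sign *= (-1) ** (length - 1)
    return sign

print(parity([1, 2, 0]))  # (123) on {0,1,2}: +1, even (two transpositions)
print(parity([1, 0, 2]))  # a single transposition: -1, odd
```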
As a warning, if you multiply permutations in the opposite order, as in not according to function composition, the pretty splitting rule disappears. Then you'd have $$(123) = (23)(12)$$ This to me is evidence that multiplication in that order is unnatural, but it may have some advantages that I'm not aware of. I find this is sufficient reason not to use it for my purposes, but of course if it is what you use in your course or textbook, it is what it is. You can still use my splitting rule, but you have to reverse the order. You can recover the naturality of the splitting rule by interpreting cycles in the opposite order, but as far as I know this is not done.
| CommonCrawl |
For questions on random walks, a mathematical formalization of a path that consists of a succession of random steps.
Does the "prime ant" ever backtrack?
Brownian Motion in Confined space, any results?
Is there a connection between the 3D random walk constant and the partition function?
Potential uses for viewing discrete wavelets constructed by filter banks as hierarchical random walks.
What is the area covered by a Random walk in a 2D grid?
Standard deviation of a quantum walk?
Where does directed random walk hit the boundary?
For a simple random walk $S_n$ and for a stopping time $\tau$, what is the intuitive interpretation of $P(\tau < \infty) = 1$?
Completeness of random walks in multiple dimensions?
Will simple random walk on $n$-cycle converges to Brownian motion on $S^1$?
Pushing the Iterated Logarithm Rule to the limits.
Why are random walks in dimensions 3 or higher transient?
How can I generate a random walk on the unitary group $U(n)$? | CommonCrawl |
What is the formula for calculating annual interest expense (IntExp) which is used in the equations above?
Select one of the following answers. Note that D is the value of debt which is constant through time, and ##r_D## is the cost of debt.
A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by?
Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to.
(a) The manufacturing firm's before-tax WACC.
(b) The manufacturing firm's after-tax WACC.
(c) A services firm's before-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm.
(d) A services firm's after-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm.
(e) The services firm's levered cost of equity.
Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula?
(a) CapEx is added in the Net Income (NI) equation so it needs subtracting in the CFFA equation.
(b) CapEx is a financing cash flow that needs to be ignored. Therefore it should be subtracted.
(c) CapEx is not a cash flow, it's a non-cash expense made up by accountants that needs to be subtracted.
(d) CapEx is subtracted to account for the net cash spent on capital assets.
(e) CapEx is subtracted because it's too hard to predict, therefore we exclude it.
Find Trademark Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Find UniBar Corp's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Find Piano Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Find World Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Note: all figures above and below are given in millions of dollars ($m).
A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged.
(a) The company is increasing its debt-to-assets and debt-to-equity ratios. These are types of 'leverage' or 'gearing' ratios.
(b) The company will pay less tax to the government due to the benefit of interest tax shields.
(c) The company's net income, also known as earnings or net profit after tax, will fall.
(d) The company's expected levered firm free cash flow (FFCF or CFFA) will be higher due to tax shields.
(e) The company's expected levered equity free cash flow (EFCF) will not change.
Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant?
(a) An increase in revenue (##Rev##).
(b) A decrease in revenue (##Rev##).
(c) An increase in rent expense (part of fixed costs, ##FC##).
(d) An increase in interest expense (##IntExp##).
(e) An increase in dividends.
The firm is financed by listed common stock and vanilla annual fixed coupon bonds, which are both traded in a liquid market.
The bonds' yield is equal to the coupon rate, so the bonds are issued at par. The yield curve is flat and yields are not expected to change. When bonds mature they will be rolled over by issuing the same number of new bonds with the same expected yield and coupon rate, and so on forever.
Tax rates on the dividends and capital gains received by investors are equal, and capital gains tax is paid every year, even on unrealised gains regardless of when the asset is sold.
There is no re-investment of the firm's cash back into the business. All of the firm's excess cash flow is paid out as dividends so real growth is zero.
The firm operates in a mature industry with zero real growth.
All cash flows and rates in the below equations are real (not nominal) and are expected to be stable forever. Therefore the perpetuity equation with no growth is suitable for valuation.
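As a concrete illustration of that valuation approach (a generic sketch with made-up figures, not the data for any particular question here):

```python
# Value a levered firm as a no-growth perpetuity of firm free cash flow.
# Discounting FFCF excluding interest tax shields at the after-tax WACC
# gives the levered value including the tax shields (illustrative numbers).
ffcf = 1_000_000        # constant real FFCF per year, first cash flow at t=1
wacc_after_tax = 0.08   # after-tax WACC, effective annual rate

firm_value = ffcf / wacc_after_tax   # V = CF / r for a level perpetuity
print(firm_value)                    # 12,500,000
```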
A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below.
Which point corresponds to the best time to calculate the terminal value?
(d) Any of the points.
(e) None of the points.
An old company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below.
(a) An increase in revenue (Rev).
(b) An increase in rent expense (part of fixed costs, FC).
(c) An increase in depreciation expense (Depr).
(d) A decrease in net working capital (ΔNWC).
Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Achieve firm free cash flow (FFCF or CFFA) of $1m.
Complete a $1.3m share buy-back.
Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above.
All amounts are received and paid at the end of the year so you can ignore the time value of money.
The firm has sufficient retained profits to pay the dividend and complete the buy back.
The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year.
How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued?
(e) No new shares need to be issued, the firm will be sufficiently financed.
Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant?
(b) An increase in rent expense (a type of recurring fixed cost, FC).
(d) An increase in inventories (a current asset).
(e) A decrease in interest expense (IntExp).
Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Make $5m in sales, $1.9m in net income and $2m in equity free cash flow (EFCF).
The firm has sufficient retained profits to legally pay the dividend and complete the buy back.
a positive cash flow of $1.1 million in one year (t=1).
The project has a total required return of 10% pa due to its moderate level of undiversifiable risk.
Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project.
He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m ##(=1m \times 10\%)## which occurs in one year (t=1).
He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year.
Your friend has listed a few different ways to find the NPV which are written down below.
Which of the above calculations give the correct NPV? Select the most correct answer.
(e) II and V only.
There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not.
Which of the below FFCF formulas include the interest tax shield in the cash flow?
The formulas for net income (NI also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent.
(a) 1, 3, 5, 7, 9.
(b) 2, 4, 6, 8, 10.
(c) 1, 4, 6, 8, 10.
(d) 2, 3, 5, 7, 9.
(e) 1, 3, 5, 8, 10.
Does this annual FFCF include or exclude the annual interest tax shield?
One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT).
The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends. Current liabilities are negligible so they can be ignored.
The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year.
Thousands are represented by 'k' (kilo).
All rates and cash flows are nominal. The inflation rate is 2% pa.
The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.
What is the net present value (NPV) of the project?
Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield?
One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT).
(a) Diversifiable standard deviation of asset returns.
(b) Systematic standard deviation of asset returns.
(c) Proportion of debt and equity used to fund the assets.
(d) Cash flows from assets.
Due to the project, current assets will increase by $6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected.
The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
Millions are represented by 'm'.
All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates.
The project is undertaken by a firm, not an individual.
Due to the project, current assets will increase by $5m now (t=0) and fall by $5m at the end (t=1). Current liabilities will not be affected.
The debt-to-assets ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-assets ratio.
All rates and cash flows are real. The inflation rate is 2% pa.
Read the following financial statements and calculate the firm's free cash flow over the 2014 financial year.
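The statements themselves are not reproduced here, but one common formulation of the calculation being asked for (stated for reference, and assuming the interest tax shield is excluded from the cash flow) is ##FFCF = NI + Depr - CapEx - \Delta NWC + IntExp \times (1-t_c)##; adding back the full ##IntExp## instead gives the version that includes the interest tax shield.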
Find the cash flow from assets (CFFA) of the following project.
Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of $1m at the end of the year.
Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed.
Note 3: The mining equipment will have a book value of $1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold.
Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one.
Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch $0.9 million when it is sold at t=2.
Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another $0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities.
Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m).
Note 1: Due to the project, the firm will have to purchase $40m of inventory initially (at t=0). Half of this inventory will be sold at t=1 and the other half at t=2.
Note 2: The equipment will have a book value of $2m at the end of the project for tax purposes. However, the equipment is expected to fetch $1m when it is sold. Assume that the full capital loss is tax-deductible and taxed at the full corporate tax rate.
Note 3: The project will be fully funded by equity which investors will expect to pay dividends totaling $10m at the end of each year.
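A recurring step in the project questions above is converting a salvage price into an after-tax cash flow when the sale price differs from the book value. A minimal sketch, assuming (as in Note 2 above) that gains over book value are taxed and capital losses are fully deductible at the corporate rate; the 30% rate is a hypothetical illustration:

```python
def after_tax_salvage(sale_price, book_value, tc):
    # Tax is paid on the gain over book value; a sale below book
    # value produces a deductible loss and hence a tax credit.
    return sale_price - tc * (sale_price - book_value)

# Note 2 above ($m): book value 2, expected sale price 1, tc assumed 30%:
print(after_tax_salvage(1.0, 2.0, 0.30))  # 1.3 (the loss shields $0.3m of tax)
```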
To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position.
(a) Net income, depreciation and interest expense.
(b) Depreciation and capital expenditure.
(c) Current assets, current liabilities and cost of goods sold (COGS).
(d) Current assets, current liabilities and capital expenditure.
(e) Current assets, current liabilities and depreciation expense.
To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the income statement needed? Note that the income statement is sometimes also called the profit and loss, P&L, or statement of financial performance.
Use the below information to value a levered company with constant annual perpetual cash flows from assets. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flows from assets, including and excluding interest tax shields, are constant (but not equal to each other).
What is the value of the levered firm including interest tax shields?
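A minimal sketch of the two equivalent perpetuity routes implied by the setup above. The convention assumed here (standard, but not spelled out in the excerpt) is that cash flows excluding interest tax shields are discounted at the after-tax WACC, while cash flows including the shields are discounted at the before-tax WACC; the numbers are hypothetical:

```python
def v_levered_excl_its(ffcf_excl_its, wacc_after_tax):
    # Perpetuity of cash flows excluding interest tax shields; the
    # shields' benefit is embedded in the lower, after-tax WACC.
    return ffcf_excl_its / wacc_after_tax

def v_levered_incl_its(cffa_incl_its, wacc_before_tax):
    # Equivalent route: cash flows including the shields, discounted
    # at the before-tax WACC.
    return cffa_incl_its / wacc_before_tax

# Hypothetical inputs; both routes must value the same firm identically.
print(v_levered_excl_its(55.5, 0.0555))  # 1000.0
print(v_levered_incl_its(64.6, 0.0646))  # 1000.0
```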
Use the below information to value a levered company with annual perpetual cash flows from assets that grow. The next cash flow will be generated in one year from now, so a growing perpetuity can be used to value this firm. Note that 'k' means kilo or 1,000. So the $30k is $30,000.
(a) The WACC before tax is 6.46% pa.
(b) The WACC after tax is 5.55% pa.
(c) The current value of the firm's levered assets including tax shields is $603.839k.
(d) The current value of debt is $600k.
(e) The benefit from interest tax shields in the first year is $7.2k. | CommonCrawl |
Note that my question is: how is it possible that I get $t = \infty$? It does not make sense to me (physically speaking). Where did I go wrong?
Please let me know if there is any information you require.
How are you getting PasteBoard to work (1st image)? It is not working for me!
Why didn't you get PasteBoard to work for the 2nd image?
You have obtained a quartic equation in $z$. This has 4 solutions, not 1. Find some more solutions! Then check which one describes the situation you are trying to find.
In fact there is one other real solution and 2 imaginary solutions. Probably you should solve numerically (this is a physics question, not maths). E.g. make a first guess $z$ then iterate $z'=\frac12 (z^4+1)$. Or apply the Newton-Raphson method.
I do not know why PasteBoard works with computers connected to the uni server. I did not use it again because the picture came out rotated sideways every time.
You wrote about iteration and the Newton-Raphson method. I am not acquainted with those; could you elaborate?
The quartic equation gives you a value of $z$ which is less than 1, so its logarithm is -ve. The corresponding value of time in your formula is +ve.
Oops... I have to think more before asking.
You have not gone wrong. You simply have not gone far enough.
You have obtained a quartic equation $$z^4-2z+1=0$$ This has 4 solutions, not only the obvious one $z=1$. The solution $z=1$ requires $t=+\infty$ so this is not the solution you require. You need to find some more solutions. Then check which one describes the situation you are trying to find.
In fact there is one other real solution and 2 imaginary solutions. Probably you should solve numerically (this is a physics question, not maths).
The iterates of $z'=\frac12 (z^4+1)$ converge to 0.5437, though not quickly.
Newton-Raphson, $z' = z - \frac{z^4-2z+1}{4z^3-2}$, converges much more rapidly.
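A minimal numeric sketch of the two schemes discussed above, both homing in on the real root near 0.5437:

```python
def fixed_point(z=0.5, steps=60):
    # Rearranged form z = (z**4 + 1) / 2; converges linearly, since
    # the map's derivative 2*z**3 is about 0.32 near the root.
    for _ in range(steps):
        z = 0.5 * (z**4 + 1)
    return z

def newton(z=0.5, steps=6):
    # Newton-Raphson on f(z) = z**4 - 2*z + 1; quadratic convergence.
    for _ in range(steps):
        z -= (z**4 - 2*z + 1) / (4*z**3 - 2)
    return z

print(fixed_point())  # ~0.543689, after many slow steps
print(newton())       # ~0.543689, after only a handful of steps
```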
You are given an undirected graph $G$ with $n$ nodes and $m$ edges. The set of vertices is $V$ and the set of edges is $E$.
Let the Complement of $G$ be $G'$. The Complement of a graph is a graph with all of the same nodes, but if there's no edge between nodes $a$ and $b$ in $G$, then there is an edge between $a$ and $b$ in $G'$, and if there is an edge between $a$ and $b$ in $G$, then there is no edge between $a$ and $b$ in $G'$.
A Clique is a subset of nodes that have an edge between every pair. A subset of nodes $S$ is called a Double Clique if $S$ forms a clique in $G$, and $V-S$ forms a clique in $G'$. Note that an empty set of nodes is considered a clique.
Given a graph, count the number of double cliques in the graph modulo $10^9+7$.
Each input will consist of a single test case. Note that your program may be run multiple times on different inputs. Each test case will begin with a line with two integers $n$ and $m$ ($1 \le n,m \le 2 \times 10^5$), where $n$ is the number of nodes and $m$ is the number of edges in the graph. The nodes are numbered $1..n$. Each of the next $m$ lines will contain two integers $a$ and $b$ ($1 \le a <b \le n$), representing an edge between nodes $a$ and $b$. The edges are guaranteed to be unique.
Output a single integer, which is the number of Double Cliques in the graph modulo $10^9+7$. | CommonCrawl |
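An efficient solution has to exploit structure (a valid $S$ splits the graph into a clique and an independent set), but as a sanity check, here is a brute-force reference that enumerates all subsets. It is exponential and only usable for tiny $n$, nowhere near the stated limits:

```python
from itertools import combinations

MOD = 10**9 + 7

def count_double_cliques_bruteforce(n, edges):
    adj = [[False] * n for _ in range(n)]
    for a, b in edges:
        adj[a - 1][b - 1] = adj[b - 1][a - 1] = True

    def ok(nodes, want_edge):
        # Clique in G iff every pair is joined; clique in G' iff none is.
        return all(adj[u][v] == want_edge for u, v in combinations(nodes, 2))

    count = 0
    for mask in range(1 << n):
        s = [v for v in range(n) if mask >> v & 1]
        rest = [v for v in range(n) if not mask >> v & 1]
        if ok(s, True) and ok(rest, False):
            count += 1
    return count % MOD

print(count_double_cliques_bruteforce(2, [(1, 2)]))  # 3: {1}, {2}, {1,2}
```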
After a wonderful evening in the restaurant the time to go home came. Leha as a true gentleman suggested Noora to give her a lift. Certainly the girl agreed with pleasure. Suddenly one problem appeared: Leha cannot find his car on a huge parking near the restaurant. So he decided to turn to the watchman for help.
Leha wants to ask the watchman $q$ requests, which can help him to find his car. Every request is represented as five integers $x_1, y_1, x_2, y_2, k$. The watchman has to consider all cells $(x, y)$ of the matrix, such that $x_1 \leq x \leq x_2$ and $y_1 \leq y \leq y_2$, and if the number of the car in cell $(x, y)$ does not exceed $k$, increase the answer to the request by the number of the car in cell $(x, y)$. For each request Leha asks the watchman to tell him the resulting sum. Due to the fact that the sum can turn out to be quite large, the hacker asks to calculate it modulo $10^9 + 7$.
However the requests seem to be impracticable for the watchman. Help the watchman to answer all Leha's requests.
The first line contains one integer $q$ ($1 \leq q \leq 10^4$) — the number of Leha's requests.
Each of the next $q$ lines contains five integers $x_1, y_1, x_2, y_2, k$ ($1 \leq x_1 \leq x_2 \leq 10^9, 1 \leq y_1 \leq y_2 \leq 10^9, 1 \leq k \leq 2 \cdot 10^9$) — parameters of Leha's requests.
Print exactly $q$ lines — in the first line print the answer to the first request, in the second — the answer to the second request and so on. | CommonCrawl |
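The definition of the car-number matrix is elided in this excerpt. In the well-known version of this problem the number in cell $(x, y)$ is the minimum positive integer absent from the cells $(i, y)$, $i < x$, and $(x, j)$, $j < y$, which works out to ((x-1) XOR (y-1)) + 1; both the definition and that identity are assumptions here. Under that assumption, a brute-force reference for small coordinates (the stated limits would require a digit DP over the bits instead):

```python
MOD = 10**9 + 7

def answer_query_bruteforce(x1, y1, x2, y2, k):
    # Assumed matrix rule: car number at (x, y) is ((x-1) XOR (y-1)) + 1.
    total = 0
    for x in range(x1, x2 + 1):
        for y in range(y1, y2 + 1):
            car = ((x - 1) ^ (y - 1)) + 1
            if car <= k:
                total += car
    return total % MOD

print(answer_query_bruteforce(1, 1, 1, 1, 1))  # cell (1,1) holds 1 -> sum is 1
```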
Mancala is a family of board games played around the world, sometimes called sowing games, or count-and-capture games, which describes the game play. One simple variant is a solitaire game called Tchoukaillon which was described by Véronique Gautheron. Tchoukaillon is played on a board with an arbitrary number of bins numbered $1, 2, \ldots $, containing $b[1], b[2], \ldots $ counters respectively and an extra empty bin called the Roumba on the left.
A single play consists of choosing a bin, $n$, for which $b[n] = n$ (indicated by the darker circles in the diagram) and distributing the counters one per bin to the bins to the left including the Roumba (giving the next diagram below in the figure above). If there is no bin where $b[n] = n$, then the board is a losing board.
If there is a sequence of plays which takes the initial board distribution to one in which every counter is in the Roumba, the initial distribution is called a winnable board. In the example above, $0,1,3,\ldots $ is a winnable board (the "$\ldots $" indicates all the bins to the right of bin $3$ contain $0$). For each total number of counters, there is a unique distribution of the counters to bins to make a winnable board for that total count (so $0,1,3,\ldots $ is the only winnable board with $4$ counters).
Write a program which finds the winnable board for a total count input.
The first line of input contains a single integer $P$, ($1 \le P \le 200$), which is the number of data sets that follow. Each data set should be processed identically and independently.
Each data set consists of a single line of input. It contains the data set number, $K$, followed by a single space, followed by the total count $N$ ($1 \le N \le 2000$) of the winnable board to be found.
For each data set there will be multiple lines of output. The first line of output contains the data set number, $K$, followed by a single space, followed by the index of the last bin, $B$, with a non-zero count. Input will be chosen so that $B$ will be no more than $80$. The first line of output for each dataset is followed by the bin counts $b[1], b[2], \ldots , b[B]$, 10 per line separated by single spaces.
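A minimal sketch of one standard way to build the winnable board: run plays in reverse. The inverse of a play at bin $j$ takes one counter from the Roumba and one from each of bins $1..j-1$, and places $j$ counters into bin $j$ (which must be empty); always reversing at the smallest empty bin adds exactly one counter per step and produces the unique winnable board:

```python
def winnable_board(n_counters):
    b = []  # b[i] holds the count of bin i+1
    for _ in range(n_counters):
        j = b.index(0) if 0 in b else len(b)  # smallest empty bin (0-indexed)
        if j == len(b):
            b.append(0)
        for i in range(j):   # take one counter from every bin to the left
            b[i] -= 1
        b[j] = j + 1         # bin j+1 receives j+1 counters
    return b

print(winnable_board(4))  # [0, 1, 3] -- the winnable board 0,1,3 from the text
```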
I just returned from PROMYS last weekend, and in this post, I hope to summarize the math that I learned there, describe the program itself, and offer a few observations and suggestions to anyone who plans to attend the PROMYS program, go to any other math programs, or become a mathematician in general.
If you do plan to go to PROMYS, it is best that you DO NOT read the specifics about the math that is done there. I will make sure to include a WARNING before I start to talk about the math.
First, a word about something that is not math.
"But," you object, "This is a math blog!"
Just keep reading. This is something all mathematicians should know. I'll try to keep it short.
Let me remind all readers (should I happen to have any) of my blog that although mathematics is one of the things I cherish most in life, it is not worth sacrificing one's health or quality of life in other areas for. Despite the fact that making progress in learning deep mathematics is difficult and requires a lot of time and effort, one should not pursue it with such enthusiasm that it upsets the balance of one's life. If you are a student in school, for example, it is important to do well in all subjects, and not just to do very well in math. If you are a college student or an adult, remember that there are other things in life than math.
And, most importantly, do not let pursuing mathematics detract from your physical health. Being a mathematician doesn't mean sitting in a chair all day - you can be in shape and be a mathematician at the same time. Being a mathematician doesn't mean paying little attention to what you eat - if anything, being able to work out the details of a difficult proof means that one should be able to distinguish between a healthy and an unhealthy diet. Finally, perhaps the thing that was most prevalent at PROMYS, was sleep loss - being a mathematician doesn't mean you have to stay up until midnight or later every day doing math. Sleep loss, if anything, will make math even more difficult, since a sleep-deprived brain doesn't function as well as a well-rested one.
Remember that mathematical rigor can also be applied to one's own life. One should analyze one's choices just as thoroughly as one analyzes a mathematical system - they are, in fact, the choices that count the most.
That's all I have to say about that! Back to the math.
Now I feel I should say something about PROMYS as a whole. I really enjoyed being at the program for six weeks - it was the most fun I've had in a long time, but it was also exhausting. At the program, there is little time to do anything other than the math assigned. That being said, the math is extremely cool. The true value of the program comes through in the problem sets, which, at first glance, seem rather basic, but are in fact extremely well-designed; they somehow present the students with problems in a way that nudges them towards the desired results and discoveries without giving too much information ("spoilers"). Being submerged in a mathematically-inclined community for six weeks is also very refreshing for any mathematician who lives somewhere without a robust mathematical community. The diversity of the students there (they come from all over the world) is also eye-opening, and exposes one to people from all sorts of different backgrounds.
WARNING: Below are spoilers about the mathematical topics covered at PROMYS. If you plan to go, DO NOT read any further.
Finally, a word about the math that we did at PROMYS. The program began with a look at the axiomatization of basic number theory, or really just arithmetic; we worked out how to describe the integers as formally and rigorously as possible, and for the first two to three weeks, we proved things about the integers that seemed trivial enough to take for granted before PROMYS. Of particular importance was Euclid's Division Algorithm, which states that any integers $a,b\in\mathbb Z$ satisfy the relationship $a=bq+r$ for some $q,r\in\mathbb Z$ with $0\le r\lt |b|$, so long as $b\ne 0$. Another important theorem was Bézout's identity, which states that if $a,b\in\mathbb Z$ are coprime integers, then the equation $ax+by=1$ has solutions $x,y\in\mathbb Z$. These all seem like trivialities, but they are, in fact, nontrivial to prove, if one is working only with the axioms of the integers.
And, of course, there are even more than that, but I cannot list all of them. When studying these mathematical structures, the importance of rigorously proving the properties of the integers became clear, since some of them do not share the properties that we often take for granted in the integers. For example, one of the most beloved properties of the integers is that any integer can be factored uniquely into primes; however, unique prime factorization, surprisingly, does not hold in some rings of the form $\mathbb Z [\sqrt n]$. I'll leave it to you to find out which.
The most surprising and useful thing that I learned at PROMYS is the value of variety in studying mathematical systems; often, a mathematical theorem can be extremely difficult to prove, but is reduced almost to triviality if one has some foundational knowledge about the right mathematical structures. This means that knowledge in a variety of areas is extremely important. Though all of the major theorems that we proved in PROMYS (such as quadratic reciprocity and the four-square theorem) are theorems in Number Theory, they were proven using results from Number Theory in almost all of the mathematical systems listed above, as well as some theorems from geometry and algebra.
Before attending PROMYS, I was stubbornly entrenched in the pursuit of knowledge only regarding calculus and analysis, but this program has shown me the magic that occurs in Number Theory and more elementary fields. While most of my past blog entries concerned calculus, it is likely that very few of my future blog posts will do so. There is so much to explore without ever surpassing the level of high-school geometry, and I will probably be fixated on that for a long time. | CommonCrawl |
1226221 Is A Fascinating Number | Math Misery?
Also note that $$111211 \times 112111 = 12467976421$$ also works: the product is a palindrome, and the factorization itself reads the same in reverse, since $112111$ is $111211$ with its digits reversed.
What other palindromic semi-primes are there with distinct prime factors and yield a palindromic factorization like that of \(1226221\) and \(12467976421\)?
It's doubtful that there is anything interesting from a number theoretic standpoint, but these are nice curios.
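A brute-force sketch of that search: look for primes $p$ whose digit reversal $q$ is a different prime with $p \times q$ palindromic, the pattern behind $1021 \times 1201 = 1226221$ (easy to verify directly) and $111211 \times 112111 = 12467976421$:

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def reversed_factor_curios(limit):
    hits = []
    for p in range(2, limit):
        if not is_prime(p):
            continue
        q = int(str(p)[::-1])
        if q <= p or not is_prime(q):  # q > p skips duplicates and p == q
            continue
        n = p * q
        if str(n) == str(n)[::-1]:     # palindromic semiprime
            hits.append((p, q, n))
    return hits

# Should include (1021, 1201, 1226221) and (111211, 112111, 12467976421):
print(reversed_factor_curios(120000))
```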
This entry was posted in Thoughts and tagged number theory, palindrome on January 3, 2017 by Manan Shah. | CommonCrawl |
where $P^\bullet $ is bounded above and consists of projective objects, and $\alpha $ is a quasi-isomorphism. Any two morphisms $\beta _1, \beta _2$ making the diagram commute up to homotopy are homotopic.
What sets the AdS radius of the Vasiliev dual to the O(N) vector model?
4D N=4 Super Yang Mills in the planar limit has an infinite dimensional symmetry known as Yangian symmetry. Dualities respect symmetries, so what does this symmetry correspond to in the $AdS_5\times S^5$ string theory dual? If I remember correctly, in the boundary theory the infinite dimensional symmetry arises because the superconformal and the dual superconformal transformations do not commute but instead form the Yangian algebra. What is the analog in the dual? Do these extra symmetries that emerge in the planar limit have a nice geometric interpretation in the bulk? | CommonCrawl |
In the first half, we will review Chabauty's and Skolem's methods. We will then explain how these can be generalized to the non-abelian Chabauty's method of Minhyong Kim. If time allows, I will also mention polylogarithms.
I will discuss recent joint work, with Cristian Gavrus and Daniel Tataru, in which we consider wave maps on a (1+2)-dimensional nonsmooth background. Our main result asserts that in this variable-coefficient context, the wave maps system is wellposed at almost-critical regularity.
We will review the theory of $A_\infty $-algebras, their minimal models and differential graded realizations. We discuss the example of the Ext algebra which arises in Koszul duality.
Join us for a thought leadership conversation in conjunction with the 25th anniversary of the Impact Fund between four legal trailblazers in the vanguard of the movement for equal pay for women. This free event is presented in association with BerkeleyLaw and the Thelton E. Henderson Center. | CommonCrawl |
We prove nonlinear stability of line soliton solutions of the KP-II equation with respect to transverse perturbations that are exponentially localized as $x\to\infty$. We find that the amplitude of the line soliton converges to that of the line soliton at initial time whereas jumps of the local phase shift of the crest propagate at a finite speed toward $y=\pm\infty$. The local amplitude and the phase shift of the crest of the line solitons are described by a system of 1D wave equations with diffraction terms.
Where each vector $\mathbf B_i$ is an unravelled version of one basis function from a truncated 3D discrete cosine transform and $m_x, m_y, m_z$ are the order of transform in the $x-, y-, z-$ directions, respectively.
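A minimal numpy sketch of how such a dictionary can be built, assuming orthonormal DCT-II basis functions (the excerpt's exact normalization is not shown). Each column is the unravelled tensor product of 1-D DCT functions of orders up to $m_x, m_y, m_z$:

```python
import numpy as np

def dct2_basis_1d(n, k):
    # Orthonormal DCT-II basis function of order k sampled on n points.
    x = np.arange(n)
    b = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    return b * (np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n))

def truncated_dct3_dictionary(nx, ny, nz, mx, my, mz):
    cols = []
    for kx in range(mx):
        for ky in range(my):
            for kz in range(mz):
                B = np.einsum('i,j,k->ijk',
                              dct2_basis_1d(nx, kx),
                              dct2_basis_1d(ny, ky),
                              dct2_basis_1d(nz, kz))
                cols.append(B.ravel())  # unravel the 3D basis function
    return np.stack(cols, axis=1)       # shape (nx*ny*nz, mx*my*mz)

D = truncated_dct3_dictionary(8, 8, 8, 3, 3, 3)
print(D.shape)                           # (512, 27)
print(np.allclose(D.T @ D, np.eye(27)))  # orthonormal columns -> True
```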
In this paper we investigate stability and interaction measures for interconnected systems that have been produced by decomposing a large-scale linear system into a set of lower order subsystems connected in feedback. We begin by analyzing the requirements for asymptotic stability through generalized dissipation inequalities and storage functions. Using this insight we then describe various metrics based on a system's energy dissipation to determine how strongly the subsystems interact with each other. From these metrics a decomposition algorithm is described.
TRUST Center, University of California, Berkeley.
In this work we have analyzed the effects of correlated failures of power lines on the total system load shed. The total system load shed is determined by solving the optimal load shedding problem, which is the system operator's best response to a system failure. We have introduced a Monte Carlo based simulation framework for estimating the statistics of the system load shed as a function of stochastic network parameters, and provide explicit guarantees on the sampling accuracy. This framework has been applied to a 470 bus model of the Nordic power system and a correlated Bernoulli failure model. It has been found that increased correlations between Bernoulli failures of power lines can dramatically increase the expected value as well as the variance of the system load shed.
This paper analyzes distributed control protocols for first- and second-order networked dynamical systems. We propose a class of nonlinear consensus controllers where the input of each agent can be written as a product of a nonlinear gain, and a sum of nonlinear interaction functions. By using integral Lyapunov functions, we prove the stability of the proposed control protocols, and explicitly characterize the equilibrium set. We also propose a distributed proportional-integral (PI) controller for networked dynamical systems. The PI controllers successfully attenuate constant disturbances in the network. We prove that agents with single-integrator dynamics are stable for any integral gain, and give an explicit tight upper bound on the integral gain for when the system is stable for agents with double-integrator dynamics. Throughout the paper we highlight some possible applications of the proposed controllers by realistic simulations of autonomous satellites, power systems and building temperature control.
High-voltage direct current (HVDC) is a commonly used technology for long-distance power transmission, due to its low resistive losses and low costs. In this paper, a novel distributed controller for multi-terminal HVDC (MTDC) systems is proposed. Under certain conditions on the controller gains, it is shown to stabilize the MTDC system. The controller is shown to always keep the voltages close to the nominal voltage, while assuring that the injected power is shared fairly among the converters. The theoretical results are validated by simulations, where the effect of communication time-delays is also studied.
KTH, School of Industrial Engineering and Management (ITM). KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
We consider frequency control of synchronous generator networks and study transient performance under both primary and secondary frequency control. We model random step changes in power loads and evaluate performance in terms of expected deviations from a synchronous frequency over the synchronization transient, which can be thought of as a lack of frequency coherence. We compare a standard droop control strategy to two secondary proportional integral (PI) controllers: centralized averaging PI control (CAPI) and distributed averaging PI control (DAPI). We show that the performance of a power system with DAPI control is always superior to that of a CAPI controlled system, which in turn has the same transient performance as standard droop control. Furthermore, for a large class of network graphs, performance scales unfavorably with network size with CAPI and droop control, which is not the case with DAPI control. We discuss optimal tuning of the DAPI controller and describe how internodal alignment of the integral states affects performance. Our results are demonstrated through simulations of the Nordic power grid.
KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH Royal Inst Technol, Sch Elect Engn, SE-10044 Stockholm, Sweden; KTH Royal Inst Technol, ACCESS Linnaeus Ctr, SE-10044 Stockholm, Sweden.
In this paper, we compare the transient performance of a multi-terminal high-voltage DC (MTDC) grid equipped with a slack bus for voltage control to that of two distributed control schemes: a standard droop controller and a distributed averaging proportional-integral (DAPI) controller. We evaluate performance in terms of an $H_2$ metric that quantifies expected deviations from nominal voltages, and show that the transient performance of a droop or DAPI controlled MTDC grid is always superior to that of an MTDC grid with a slack bus. In particular, by studying systems built up over lattice networks, we show that the $H_2$ norm of a slack bus controlled system may scale unboundedly with network size, while the norm remains uniformly bounded with droop or DAPI control. We simulate the control strategies on radial MTDC networks to demonstrate that the transient performance for the slack bus controlled system deteriorates significantly as the network grows, which is not the case with the distributed control strategies.
Wireless Sensor Networks and Control Systems are an essential part of the Smart Grid. We consider the problem of performing control over large complex networked systems with packet drops. More specifically, we are interested in improving the performance of the regulation of control loops when the communication is made over low-cost wireless networks. In control over wireless networks it is common to use Contention-Free (CF) schemes where no losses occur, at the price of low scalability and complicated scheduling policies. In this work we propose a hybrid MAC and control architecture, where a small number of control loops with high demand of attention are scheduled in a CF scheme and well regulated loops are scheduled in a lossy, asynchronous and highly scalable Contention-Access (CA) scheme. We model and analyze the performance of such a system with Markov Jump Linear System (MJLS) tools and compare it with other architecture types. Performance is evaluated using a quadratic cost function of the state.
The experimental implementation and validation of a localization system based on a heterogeneous sensor network is described. The sensor network consists of ultrasound ranging sensors and web cameras. They are used to localize a mobile robot under sensor communication constraints. Applying a recently proposed sensor fusion algorithm that explicitly takes communication delay and cost into account, it is shown that one can accurately trade off the estimation performance by using low-quality ultrasound sensors with low processing time and low communication cost versus the use of the high-quality cameras with longer processing time and higher communication cost. It is shown that a periodic schedule of the sensors is suitable in many cases. The experimental setup is discussed in detail and experimental results are presented.
The model reduction problem for networks of interconnected dynamical systems is studied in this paper. In particular, networks of identical passive subsystems, which are coupled according to a tree topology, are considered. For such networked systems, reduction is performed by clustering subsystems that show similar behavior and subsequently aggregating their states, leading to a reduced-order networked system that allows for an insightful physical interpretation. The clusters are chosen on the basis of the analysis of controllability and observability properties of associated edge systems, representing the importance of the couplings and providing a measure of the similarity of the behavior of neighboring subsystems. This reduction procedure is shown to preserve synchronization properties (i.e., the convergence of the subsystem trajectories to each other) and allows for the a priori computation of a bound on the reduction error with respect to external inputs and outputs. The method is illustrated by means of an example of a thermal model of a building.
In this paper, a model reduction procedure for a network of interconnected identical passive subsystems is presented. Here, rather than performing model reduction on the subsystems, adjacent subsystems are clustered, leading to a reduced-order networked system that allows for a convenient physical interpretation. The identification of the subsystems to be clustered is performed through controllability and observability analysis of an associated edge system and it is shown that the property of synchronization (i.e., the convergence of trajectories of the subsystems to each other) is preserved during reduction. The results are illustrated by means of an example.
In this paper, controllability properties of networks of diffusively coupled linear systems are considered through the controllability Gramian. For a class of passive linear systems, it is shown that the controllability Gramian can be decomposed into two parts. The first part is related to the dynamics of the individual systems whereas the second part is dependent only on the interconnection topology, allowing for a clear interpretation and efficient computation of controllability properties for a class of networked systems. Moreover, a relation between symmetries in the interconnection topology and controllability is given. The results are illustrated by an example.
In this paper, we present a toolbox for structured model reduction developed for MATLAB. In addition to structured model reduction methods using balanced realizations of the subsystems, we introduce a numerical algorithm for structured model reduction using a subgradient optimization algorithm. We briefly present the syntax for the toolbox and its features. Finally, we demonstrate the applicability of various model reduction methods in the toolbox on a structured mass-spring mechanical system.
We derive a modular fluid-flow network congestion control model based on a law of fundamental nature in networks: the conservation of information. Network elements such as queues, users, and transmission channels and network performance indicators like sending/acknowledgment rates and delays are mathematically modeled by applying this law locally. Our contributions are twofold. First, we introduce a modular metamodel that is sufficiently generic to represent any network topology. The proposed model is composed of building blocks that implement mechanisms ignored by the existing ones, which can be recovered from exact reduction or approximation of this new model. Second, we provide a novel classification of previously proposed models in the literature and show that they are often not capable of capturing the transient behavior of the network precisely. Numerical results obtained from packet-level simulations demonstrate the accuracy of the proposed model.
A method to compute the L-2 gain is developed for the class of linear periodic continuous-time systems that admit a finite-dimensional state-space realisation. A bisection search for the smallest upper bound on the gain is employed, where at each step an equivalent discrete-time problem is considered via the well known technique of time-domain lifting. The equivalent problem involves testing a bound on the gain of a linear shift-invariant discrete-time system, with the same state dimension as the periodic continuous-time system. It is shown that a state-space realisation of the discrete-time system can be constructed from point solutions to a linear differential equation and two differential Riccati equations, all subject to only single-point boundary conditions. These are well behaved over the corresponding one period intervals of integration, and as such, the required point solutions can be computed via standard methods for ordinary differential equations. A numerical example is presented and comparisons made with alternative techniques.
We consider a class of nonlinear systems for which an observer-based output-feedback controller is updated at discrete time instances. However, the received update or patch can be compromised by the attacker to drive the system to instability. In this paper, we provide a checkable condition to ensure that the received patch has not been tampered with to cause instability in the control system. Moreover, we guarantee that the application of the tamper-free patch ensures global asymptotic stability of the control system by choosing the update time instances appropriately. The secure patch update protocol is illustrated on an example involving the output-feedback synchronization of two neuron population models, where the observer gains are updated at discrete time instances.
Complete and rigorous foundations for basic thermodynamic laws from the statistical description of microscopic systems have been a long-standing goal for mathematicians and physicists alike since Boltzmann. In this paper, we show how Willems's dissipativity theory provides a convenient framework to study a physical system at both the microscopic and macroscopic levels, and suggests a natural storage function, different from the usual free energy, with which to derive the theorem of equipartition of energy for linear systems. In this setup, we introduce a simple and general definition of temperature, defined also out of equilibrium, which allows us to test the limits of validity of Fourier's law describing the transfer of heat from hot systems to cold systems. In particular, under time-scale separation conditions, we derive the Maxwell-Cattaneo law, allowing for instantaneous flow of energy from cold to hot systems, which should be considered instead of Fourier's law for a proper description of energy exchanges between interconnected linear systems.
In this paper, we advocate the use of open dynamical systems, i.e. systems sharing input and output variables with their environment, and the dissipativity theory initiated by Jan Willems as models of thermodynamical systems, at the microscopic and macroscopic level alike. We take linear systems as a study case, where we show how to derive a global Lyapunov function to analyse networks of interconnected systems. We define a suitable notion of dynamic non-equilibrium temperature that allows us to derive a discrete Fourier law ruling the exchange of heat between lumped, discrete-space systems, enriched with the Maxwell-Cattaneo correction. We complete these results by a brief recall of the steps that allow complete derivation of the dissipation and fluctuation in macroscopic systems (i.e. at the level of probability distributions) from lossless and deterministic systems. This article is part of the themed issue 'Horizons of cybernetical physics'.
In this paper, we identify a class of time-varying port-Hamiltonian systems that is suitable for studying problems at the intersection of statistical mechanics and control of physical systems. Those port-Hamiltonian systems are able to modify their internal structure as well as their interconnection with the environment over time. The framework allows us to prove the First and Second Laws of thermodynamics, but also lets us apply results from optimal and stochastic control theory to physical systems. In particular, we show how to use linear control theory to optimally extract work from a single heat source over a finite time interval in the manner of Maxwell's demon. Furthermore, the optimal controller is a time-varying port-Hamiltonian system, which can be physically implemented as a variable linear capacitor and transformer. We also use the theory to design a heat engine operating between two heat sources in finite-time Carnot-like cycles of maximum power, and we compare those two heat engines.
Caltech, Control and Dynamical Systems.
We rigorously derive the main results of thermodynamics, including Carnot's theorem, in the framework of time-varying linear systems.
State estimators in power systems are currently used to, for example, detect faulty equipment and to route power flows. It is believed that state estimators will also play an increasingly important role in future smart power grids, as a tool to optimally and more dynamically route power flows. Therefore security of the estimator becomes an important issue. The estimators are currently located in control centers, and large numbers of measurements are sent over unencrypted communication channels to the centers. We here study stealthy false-data attacks against these estimators. We define a security measure tailored to quantify how hard attacks are to perform, and describe an efficient algorithm to compute it. Since there are so many measurement devices in these systems, it is not reasonable to assume that all devices can be made encrypted overnight in the future. Therefore we propose two algorithms to place encrypted devices in the system such as to maximize their utility in terms of increased system security. We illustrate the effectiveness of our algorithms on two IEEE benchmark power networks under two attack and protection cost models.
Achieving all-encompassing component-level security in power system IT infrastructures is difficult, owing to its cost and potential performance implications.
Introduction. Supervisory control and data acquisition (SCADA) systems are widely used to monitor and control large-scale transmission power grids. Monitoring traditionally involves the measurement of voltage magnitudes and power flows; these data are collected by meters located in substations. In order to deliver the measured data from the substations to the control centre, the measurement data measured by meters in the same substation are multiplexed by a remote terminal unit (RTU) [1, 2]. Because electric power transmission systems extend over large geographical areas, typically entire countries, wide-area networks (WANs) are used to deliver the multiplexed measurement data from the substations to the control centre. For large-scale transmission grids it is often not feasible to measure all power flows and voltages of interest. Furthermore, the measurements are often noisy. Therefore the measurement data are usually fed into a model-based state estimator (SE) at the control centre, which is used to estimate the complete physical state (complex bus voltages) of the power grid. The SE is used to identify faulty equipment and corrupted measurement data through the so-called bad-data detection (BDD) system. Apart from BDD, the state estimate is used by the human operators and by the energy-management systems (EMS) found in modern SCADA systems, such as optimal power flow analysis and contingency analysis (CA). Future power grids will be even more dependent on accurate state estimators to fulfil their task of optimally and dynamically routing power flows, because clean renewable power generation tends to be less predictable than nonrenewable power generation.
A game-theoretic model for analysing the effects of privacy on strategic communication between agents is devised. In the model, a sender wishes to provide an accurate measurement of the state to a receiver while also keeping its private information (which is correlated with the state) hidden from a malicious agent that may eavesdrop on its communications with the receiver. A family of nontrivial equilibria, in which the communicated messages carry information, is constructed and its properties are studied.
We propose a method for designing loop-shaping controllers using Bode's ideal transfer function. Bode's ideal transfer function is introduced using fractional calculus. The ideal loop transfer function is approximated using the first-generation CRONE approximation, and then implemented by means of $H_\infty$-optimization followed by closed-loop controller order reduction of the resulting controller. The design method is confirmed to be powerful and robust by simulation on a flexible transmission system.
CSIRO's Data61, Canberra, ACT, Australia; Univ Melbourne, Dept Elect & Elect Engn, Melbourne, Vic, Australia.
Univ Melbourne, Melbourne Informat Decis & Autonomous Syst Lab, Parkville, Vic 3010, Australia; Univ Melbourne, Dept Elect & Elect Engn, Parkville, Vic 3010, Australia.
In this paper, batteries are used to preserve the privacy of households with smart meters. It is commonly understood that data from smart meters can be used by adversaries to infringe on the privacy of the households, e.g., figuring out the individual appliances that are being used or the level of the occupancy of the house. The Cramer-Rao bound is used to relate the variance of the estimation error of any unbiased estimator of the household consumption from the aggregate consumption (i.e., the household plus the battery) to the Fisher information. Subsequently, optimal policies for charging and utilizing batteries are devised to minimize the Fisher information (in the scalar case and the trace of the Fisher information matrix in the multi-variable case) as a proxy for maximizing the variance of the estimation error of the electricity consumption by adversaries (irrespective of their estimation policies). The policies are chosen to respect the physical constraints of the battery regarding capacity, initial charge, and rate constraints. The results are demonstrated on real power measurement data with non-intrusive load monitoring algorithms.
The problem of preserving the privacy of individual entries of a database with constrained additive noise is considered. An adversary can submit linear queries to an agent possessing the entire database. The agent returns a response to the query that is corrupted by an additive random noise whose support is contained in a constraint set. The Cramer-Rao bound is used to lower-bound the variance of the estimation error of the database, for any unbiased estimator the adversary may use, by the trace of the inverse of the Fisher information matrix. A measure of privacy using the Fisher information matrix is developed. The probability density that minimizes the Fisher information (as a proxy for maximizing the measure of privacy) is computed.
We present a complexity reduction algorithm for a family of parameter-dependent linear systems when the system parameters belong to a compact semi-algebraic set. This algorithm potentially describes the underlying dynamical system with fewer parameters or state variables. To do so, it minimizes the distance (i.e., $H_\infty$-norm of the difference) between the original system and its reduced version. We present a sub-optimal solution to this problem using sum-of-squares optimization methods. We present the results for both continuous-time and discrete-time systems. Lastly, we illustrate the applicability of our proposed algorithm on numerical examples.
We present a suboptimal control design algorithm for a family of continuous-time parameter-dependent linear systems that are composed of interconnected subsystems. We are interested in designing the controller for each subsystem such that it only utilizes partial state measurements (characterized by a directed graph called the control graph) and limited model parameter information (characterized by the design graph). The algorithm is based on successive local minimizations and maximizations (using the subgradients) of the H∞-norm of the closed-loop transfer function with respect to the controller gains and the system parameters. We use a vehicle platooning example to illustrate the applicability of the results.
Distributed approaches to secondary frequency control have become a way to address the need for more flexible control schemes in power networks with increasingly distributed generation. The distributed averaging proportional-integral (DAPI) controller presents one such approach. In this paper, we analyze the transient performance of this controller, and specifically address the question of its performance under noisy frequency measurements. Performance is analyzed in terms of an H2 norm metric that quantifies power losses incurred in the synchronization transient. While previous studies have shown that the DAPI controller performs well, in particular in sparse networks and compared to a centralized averaging PI (CAPI) controller, our results prove that additive measurement noise may have a significant negative impact on its performance and scalability. This impact is shown to decrease with an increased inter-nodal alignment of the controllers' integral states, either through increased gains or increased connectivity. For very large and sparse networks, however, the requirement for inter-nodal alignment is so large that a CAPI approach may be preferable. Overall, our results show that distributed secondary frequency control through DAPI is possible and may perform well also under noisy measurements, but requires careful tuning.
This paper presents the work on resilient and secure power transmission and distribution developed within the VIKING (vital infrastructure, networks, information and control system management) project. VIKING receives funding from the European Community's Seventh Framework Program. We will present the consortium, the motivation behind this research, the main objective of the project together with the current status.
The resilience of Supervisory Control and Data Acquisition (SCADA) systems for electric power networks for certain cyber-attacks is considered. We analyze the vulnerability of the measurement system to false data attack on communicated measurements. The vulnerability analysis problem is shown to be NP-hard, meaning that unless P = NP there is no polynomial time algorithm to analyze the vulnerability of the system. Nevertheless, we identify situations, such as the full measurement case, where the analysis problem can be solved efficiently. In such cases, we show indeed that the problem can be cast as a generalization of the minimum cut problem involving nodes with possibly nonzero costs. We further show that it can be reformulated as a standard minimum cut problem (without node costs) on a modified graph of proportional size. An important consequence of this result is that our approach provides the first exact efficient algorithm for the vulnerability analysis problem under the full measurement assumption. Furthermore, our approach also provides an efficient heuristic algorithm for the general NP-hard problem. Our results are illustrated by numerical studies on benchmark systems including the IEEE 118-bus system.
In this paper, we characterize and analyze the set of strategic stealthy false-data injection attacks on discrete-time linear systems. In particular, the threat scenarios tackled in the paper consider adversaries that aim at deteriorating the system's performance by maximizing the corresponding quadratic cost function, while remaining stealthy with respect to anomaly detectors. As opposed to other work in the literature, the effect of the adversary's actions on the anomaly detector's output is not constrained to be zero at all times. Moreover, scenarios where the adversary has uncertain model knowledge are also addressed. The set of strategic attack policies is formulated as a non-convex constrained optimization problem, leading to a sensitivity metric denoted as the output-to-output ℓ2-gain. Using the framework of dissipative systems, the output-to-output gain is computed through an equivalent convex optimization problem. Additionally, we derive necessary and sufficient conditions for the output-to-output gain to be unbounded, with and without model uncertainties, which are tightly related to the invariant zeros of the system.
Recent research efforts are considering the problem of performing control of dynamical systems over wireless sensor and actuator networks. However, existing results lack an experimental evaluation in real platforms. In this demonstration an inverted pendulum system is controlled over an IEEE 802.15.4 wireless sensor and actuator network. This platform can evaluate several sensor networks and control algorithms and is currently used as an educational tool at KTH Royal Institute of Technology, Sweden.
In a thermodynamic process with measurement and feedback, the second law of thermodynamics is no longer valid. In its place, various second-law-like inequalities have been advanced that each incorporate a distinct additional term accounting for the information gathered through measurement. We quantitatively compare a number of these information measures using an analytically tractable model for the feedback cooling of a Brownian particle. We find that the information measures form a hierarchy that reveals a web of interconnections. To untangle their relationships, we address the origins of the information, arguing that each information measure represents the minimum thermodynamic cost to acquire that information through a separate, distinct measurement procedure. | CommonCrawl |
Abstract: Side-channel attacks have proven many hardware implementations of cryptographic algorithms to be vulnerable. A recently proposed masking method, based on secret sharing and multi-party computation methods, introduces a set of sufficient requirements for implementations to be provably resistant against first-order DPA with minimal assumptions on the hardware. The original paper doesn't describe how to construct the Boolean functions that are to be used in the implementation. In this paper, we derive the functions for all invertible $3 \times 3$, $4 \times 4$ S-boxes and the $6 \times 4$ DES S-boxes. Our methods and observations can also be used to accelerate the search for sharings of larger (e.g. $8 \times 8$) S-boxes. Finally, we investigate the cost of such protection.
Publication Info: This is an extended version of the paper "Threshold Implementations of all 3x3 and 4x4 S-boxes", which will appear at CHES 2012. | CommonCrawl |
We work in the Church-style simply typed lambda calculus. All terms shall be considered in long normal form. Any term of type $A_1\rightarrow A_2\ldots\rightarrow A_n \rightarrow 0$ is of the form $\lambda y_1 y_2 \ldots y_n . y_i u_1 u_2 \ldots u_m$. In this case, we shall say that $i$ is the index of this term.
My question is, does every term of a type with multiple components (that is, $n$ is greater than $1$ in the example above) behave as a function with respect to indices? Namely, given two terms of the same index of type $A_1$ above, would both return terms of type $A_2\ldots\rightarrow A_n \rightarrow 0$ with the same index?
Here's a counterexample to your conjecture.
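(The definitions of $M$, $N$ and $N'$ are elided in this excerpt; the reconstruction below, Church numerals acting on Church booleans, is an assumption on my part, but it is consistent with the reductions that follow. Take $A_1 = ((0\to 0\to 0)\to 0\to 0\to 0)\to(0\to 0\to 0)\to 0\to 0\to 0$ and let $$N = \lambda f\,a.\; f\,(f\,a), \qquad N' = \lambda f\,a.\; f\,a,$$ both of type $A_1$ and both of index $1$, together with $$M = \lambda n.\; n\,(\lambda b\,x\,y.\; b\,y\,x)\,(\lambda x\,y.\; x),$$ which applies the boolean flip $n$ times to the first projection.)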
Now, $M\;N$ reduces to $\lambda\,x\,y.\;x$ (index 1) and $M\;N'$ reduces to $\lambda\,x\,y.\;y$ (index 2).
We use the theory of sutured TQFT to classify contact elements in the sutured Floer homology, with Z coefficients, of certain sutured manifolds of the form $(\Sigma \times S^1, F \times S^1)$ where $\Sigma$ is an annulus or punctured torus. Using this classification, we give a new proof that the contact invariant in sutured Floer homology with Z coefficients of a contact structure with Giroux torsion vanishes. We also give a new proof of Massot's theorem that the contact invariant vanishes for a contact structure on $(\Sigma \times S^1, F \times S^1)$ described by an isolating dividing set. | CommonCrawl |
In the era of computation and data-driven research, traditional methods of disseminating research are no longer fit-for-purpose. New approaches for disseminating data, methods and results are required to maximize knowledge discovery. The "long tail" of small, unstructured datasets is well catered for by a number of general-purpose repositories, but there has been less support for "big data". Outlined here are our experiences in attempting to tackle the gaps in publishing large-scale, computationally intensive research. GigaScience is an open-access, open-data journal aiming to revolutionize large-scale biological data dissemination, organization and re-use. Through use of the data handling infrastructure of the genomics centre BGI, GigaScience links standard manuscript publication with an integrated database (GigaDB) that hosts all associated data, and provides additional data analysis tools and computing resources. Furthermore, the supporting workflows and methods are also integrated to make published articles more transparent and open. GigaDB has released many new and previously unpublished datasets and data types, including as urgently needed data to tackle infectious disease outbreaks, cancer and the growing food crisis. Other "executable" research objects, such as workflows, virtual machines and software from several GigaScience articles have been archived and shared in reproducible, transparent and usable formats. With data citation producing evidence of, and credit for, its use in the wider research community, GigaScience demonstrates a move towards more executable publications. Here data analyses can be reproduced and built upon by users without coding backgrounds or heavy computational infrastructure in a more democratized manner.
In a world where zettabytes of electronic information are now produced globally each year, quick and easy access to this information is becoming increasingly important in realizing its potential for society and human development. For scientific data in particular, removing silos and opening access to enable new data-driven approaches increase transparency and self-correction, allow for more collaborative and rapid progress, and enable the development of new questions—revealing previously hidden patterns and connections across datasets.
On top of a citation advantage, public access to data has had other measurable benefits to individual fields, such as rice research. Further, pressing issues led by the stresses of a growing global population, such as climate change, rapid loss of biodiversity, and public health costs, require urgent and rapid action. Unfortunately, shrinking research budgets in much of the world mean that the access and use of research data that is already being collected need to be maximized as much as possible. There is growing awareness and uptake of open access publishing, with some estimates that up to half the papers currently being published are now free-to-read. Browsing the narrative is only the first step, but key to maximizing the utility of publicly funded research is the ability to access the supporting data and build upon the contents—an area that needs more development.
Funding agencies, such as the US National Science Foundation, have started to mandate data management and sharing plans for all funded projects, and the NIH is likely to go a step further—investing through their "Big Data to Knowledge" program in the development of a biomedical and healthCAre Data Discovery and Indexing Ecosystem, or "bioCADDIE" (http://biocaddie.ucsd.edu/). The data discovery index enabled through bioCADDIE aims to achieve for data what PubMed has achieved for the literature and, like PubMed and PubMed Central, should provide infrastructure and momentum towards mandatory data archiving. In Europe, the Directorate General for Research and Innovation and OpenAIRE (http://www.openaire.eu/) require grant recipients to make their publications Open Access. They also have an Open Research Data Pilot for selected H2020 projects that mandates a Data Management Plan and deposition of data funded by the grant in a research data repository, such as Zenodo or the upcoming European Open Science Cloud for Research infrastructure. The UK Research Councils (RCUK) are also drafting a Concordat on Open Research Data (http://www.rcuk.ac.uk/research/opendata/) that promotes making research data open and usable, but does not act as a mandate. The lack of a mandate at the RCUK level is indicative of the general approach of broad, abstract statements in support of open access and data, but with little hard-line necessity. This, and the lack of cross-border agreement, greatly limits the uptake of such policies in international research.
The other key stakeholders are journals, and while they have supposedly been tightening their policies (culminating in cross-publisher schemes, such as the Joint Data Archiving Policy in Ecology Journals), there is still a very long way to go in terms of compliance, although there are encouraging signs that some open access publishers are starting to address this issue. While a recent survey found 44 out of the top 50 highest impact journals have made policy statements about data sharing, data are available for only a minority of articles, and in some cases access to raw data can be as low as 10%. With increasing dependence on computational methods, code availability policies are unfortunately even more poorly adhered to than data release policies.
These deficiencies have led to a 'reproducibility gap', where most studies cannot be reproduced based on the limited information available in the published paper. Systematically testing the reproducibility of papers across various research fields, Ioannidis and others determined that a large proportion of published research findings are false or exaggerated, and an estimated 85% of research resources are wasted because of this. For example, an evaluation of a set of microarray studies revealed that only 1 out of 9 could be reproduced in principle, and similar results have been seen in preclinical cancer research, where scientific findings could be confirmed in only 11% of cases. These sorts of figures have been reported in a number of studies, and an analysis of past preclinical research studies indicates that the cumulative prevalence of irreproducible preclinical research exceeds 50%; in other words, the US$28B/year spent on preclinical research is not reproducible. With increasing rates of retraction of published work (particularly correlating with supposedly 'higher impact' journals), it is important to stem these deficiencies, not only to prevent the waste and even incorrect answers to health and environmental issues that could have deadly consequences, but also to prevent undermining public confidence in science.
Current formats for disseminating scientific information—the static scientific journal article—have fundamentally not changed for centuries and need to be updated for the current, more data-driven and digital age, especially since journal articles remain the primary method for judging work achievements and career success. As stated by Buckheit and Donoho, scholarly articles are merely advertisements of scholarship; the actual scholarly artefacts are the data and computational methods. Further, with particularly interesting and important datasets and tools, curious users outside traditional academic environments can also be engaged through citizen science. This approach has already demonstrated that novel insights can be gained by widening the user base to include people with different perspectives and fewer preconceptions (e.g. Galaxy Zoo). Making scientific and medical information open access has educative potential, and also enables informed decision making by patients, doctors, policy makers and electorates that may not otherwise have access to this information. It also provides material for data miners to extract new knowledge, and enables open data advocates and developers to build new infrastructure, apps and tools.
One of the key ways to tackle this reproducibility gap and—importantly—to accelerate scientific advances is to make data and code freely and easily available. However, doing this in the real world is far more problematic than it sounds. The current mechanism of simply relying on authors to provide data and materials on request or from their own websites has been clearly shown not to work. The Reproducibility Initiative: Cancer Biology Consortium carried out a study to quantify many of these issues by trying to repeat experiments from 50 highly cited studies published in 2010–2012. Obtaining the data for each paper took, on average, two months, and in 4 out of 50 cases the authors had yet to cooperate after a year of chasing. Further, based on an assessment of papers in ACM conferences and journals, obtaining the code, which is essential for replication and reuse, took two months on average for nearly 44 % of papers, and a survey of 200 economics publications found that, of the 64 % of authors who responded, 56 % would not share supplementary materials.
As an alternative to relying only on authors' time and goodwill, funding agencies can play a role in pushing data and code sharing. However, despite the good intentions of a few forward-thinking organizations, such as the NIH and Wellcome Trust, most funders around the world do not enforce this before publication. Journals, therefore, default to being one of the few stakeholders who can make this happen. But, with their current focus on narrative delivery rather than code and data access, this has also been problematic.
Carrot-and-stick approaches, however, are not enough. The hurdles for action need to be lowered so as to make data FAIR (findable, accessible, interoperable and re-usable). Data management and curation is an expensive and complicated process that most researchers are simply unable to deal with. Further, data storage does not come cheap: a number of studies on disciplinary research data centres in the UK and Australia, funded by their Research Councils, found the running costs of these data centres to be roughly 1.5 % of the total research expenditure.
While Open Access textual content is being worked into policies and mandates from research funders, looking beyond static archived PDFs, a Research Object (RO)-oriented approach to all the products of the research cycle is needed. ROs are semantically rich Linked Data aggregations of resources that provide a layer of structure on top of this textual information, bundling together essential information relating to experiments and investigations. This includes not only the data used and the methods employed to produce and analyse those data, but also links to, and attribution of, the people involved in the investigation.
There are a growing number of databases that allow scientists to share their findings in more open and accessible ways, as well as a new generation of data journals trying to leverage them. Some areas of biology and chemistry (particularly those working with nucleic acid or X-ray crystal structure data) have been well catered for over many decades by an ecosystem of domain-specific databases, such as those of the International Nucleotide Sequence Database Consortium (INSDC; GenBank, DDBJ and ENA), and the Worldwide Protein Data Bank. There is also now a selection of broad-spectrum databases including Zenodo, Dryad, figshare and the DataVerse repositories. These broad-spectrum databases have the benefit of not having data-type restrictions, and researchers can deposit data from the entire set of experiments of a study in a single place. Although these resources cater well for the 'long tail' of data producers working with tabular data in the megabyte to gigabyte size range, researchers in more data-intensive areas producing large-scale imaging, high-throughput sequencing and mass spectrometry data may not be as well served, due to these repositories' file-size limitations and charges.
Still, beyond storage, there needs to be more incentive to make this still-difficult activity worthwhile: data and method producers need to be credited for doing so. Effectively, wider and more granular methods of credit, such as micro- or even nano-publication, need to be accepted, and new platforms and infrastructure are required to enable this information to be disseminated and shared as easily and openly as possible.
To establish such a mechanism for providing this type of credit, along with data storage and access, code availability, and open use of all of these components, BGI, the world's largest producer of genomics data, and BioMed Central, the world's first commercial open access publisher, built a novel partnership and launched the open access, open data, open code journal GigaScience. This aims to provide these elements for biological and biomedical researchers in the era of "big data". BGI, with extensive computational resources, has a long history of making its research outputs available to the global research community, while BioMed Central has an established open access publishing platform. Bringing these together allowed the creation of a journal that integrates data publishing via the journal's database, GigaDB, a method/workflow/analysis sharing portal based on the Galaxy workflow system ('GigaGalaxy'), and standard narrative publishing. Through this, GigaScience aims to finally provide infrastructure and incentives to credit and enable more reproducible research. Here we outline our experiences and approaches in an attempt to enable more fit-for-purpose publishing of large-scale, data-heavy and compute-intensive biological and biomedical research.
Integrated with the online, open access journal GigaScience is a database, GigaDB, deployed on infrastructure provided by BGI Hong Kong, which hosts the data and software tools associated with articles published in GigaScience. In a similar manner to Zenodo using the CERN Data Centre, the ability to leverage the tens of petabytes of storage as well as the computational and bioinformatics infrastructure already in place at BGI makes the start-up and ongoing overhead of this data publishing approach more cost effective. On top of the integrated data platform, another feature differentiating GigaScience from most other data journals is the in-house and on-hand team of curators, data scientists and workflow management experts available to assist and broker the data and method curation, review and dissemination process.
Working with the British Library and the DataCite consortium (http://www.datacite.org), each dataset in GigaDB is assigned a Digital Object Identifier (DOI) that can be used as a standard citation in the reference section when these data are used in articles by the authors and other researchers. Using the DOI system already familiar in publishing, the process of data citation, in which the data themselves are cited and referenced in journal articles as persistently identifiable bibliographic entities, is a way to acknowledge data output. The Digital Curation Centre (DCC) best-practice guidelines for formatting and citation are followed, and as much metadata as possible is provided to DataCite to maximize discoverability in their repository and in the Thomson-Reuters Data Citation Index (http://thomsonreuters.com/data-citation-index/).
GigaDB further removes unenforceable legal barriers by releasing all data under the most open Creative Commons CC0 waiver, which is recommended for datasets as it prevents the stacking of attribution requirements in large collections of data. Taking such a broad array of data types, we have also tried to aid data interoperability and integration by taking submissions in the ISA-TAB metadata format. To increase usability further, we are also working on providing programmatic access to the data through an application programming interface (API) for GigaDB. Figure 1 illustrates the submission and peer-review workflow of GigaScience, GigaDB and GigaGalaxy. With curation of data and checking of software availability by the in-house team, information about the study and samples is collected, collated and checked for completeness. This information forms the basis of the DOI, and upon passing review, the data files are transferred from the submitter to the GigaDB servers. Care is taken to ensure files are in appropriate formats and correctly linked to the relevant metadata. Finally, a DOI is minted and the data are released through the GigaDB website.
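As a purely hypothetical sketch of what such programmatic access might look like once available (the endpoint URL and response field names below are assumptions, not a documented interface), a client could query dataset metadata by DOI:

```r
library(httr)
library(jsonlite)

# Hypothetical endpoint and field names; the GigaDB API is still in
# development at the time of writing, so this is illustrative only.
res <- GET("http://gigadb.org/api/dataset", query = list(doi = "10.5524/100001"))
if (status_code(res) == 200) {
  meta <- fromJSON(content(res, as = "text", encoding = "UTF-8"))
  print(meta$title)  # assumed field name in the JSON response
}
```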
At the time of writing, GigaDB has issued 220 DOIs to datasets, the largest (by volume) from "omics" fields, including sequence-based genomics, transcriptomics, epigenomics and metagenomics, as well as mass spectrometry-based technologies such as proteomics and metabolomics. A growing number of datasets are from imaging technologies such as MRI, CT and mass spectrometry imaging, as well as other techniques such as electrophysiology and systems biology. Smaller in size, but not in novelty, are workflows, virtual machines and software platforms. Roughly 30 TB of data are currently available to download from the GigaDB servers, with the largest datasets being agricultural collections (379 cattle and 3,000 rice strains), as well as human cancer genomes in the 10–15 TB range. Much of the raw sequencing data has subsequently been moved to the subject-specific INSDC databases, but GigaDB hosted them during the initial curation and copying processes, and continues to host intermediate and analysed data for reproducibility purposes. Subscribing to Aspera (http://asperasoft.com) to speed up data transfers demonstrated up to a 30-fold increase in transfer speeds over FTP when users downloaded and installed the free Web browser plug-in. Most datasets are in the 1–10 GB range, but many are pushing 100 GB, making them impractical to host in other broad-spectrum databases. On top of datasets of relevance to human health (cancer, diabetes, hepatitis B and other human pathogens), many plant genomes of agricultural importance (sorghum, millet, potato, chickpea, cotton, flax, cucumber and wheat) are also available and released without many of the restrictions and material transfer agreements that the agricultural sector often imposes.
GigaScience is keen to promote the use of alternative measures for research assessment beyond the impact factor. Google Analytics and DataCite resolution statistics show that our datasets receive over five times the UK DataCite average number of accesses, with some reaching over 1,000 resolutions a year, and much higher levels of page views and FTP accesses.
One of the main aims of data citation is to incentivise and credit early release of data, i.e. prior to the publication of the analyses of the data (which can sometimes take years). To promote this activity, GigaDB also includes a subset of datasets from BGI and their collaborators that are not associated with GigaScience articles, many of which were released pre-publication. For example, the Emperor and Adelie penguin and polar bear genomes were released nearly 3 years before they were formally published in research articles. This has enabled previously impossible early use of large resources by the scientific community. The polar bear genome, for example, has accumulated a number of citations in population genetics and evolutionary biology studies that have benefited from its availability. The polar bear genome analysis was eventually published in May 2014 (nearly three years after its data were publicly released), and is a very encouraging example for authors concerned about "scooping" and negative consequences of early data publication and release. In addition, it highlights the benefits of data citation: despite being used and cited in at least 5 analysis papers, this did not prevent a prestigious publication and a feature on the cover of Cell. This is particularly notable as, at that time, Cell Press was the only major biology publisher to state, in a survey carried out by F1000, that they would see the publication of data with a DOI as potential prior publication.
Most important of all are datasets that assist the fight against disease and conservation efforts, which also have educative potential and can inspire new, more open ways of doing research. What follows are a few examples of the most influential and demonstrative datasets we have released, and an outline of some of the downstream consequences that their publication has set in motion.
An excellent example of the utility of pre-publication release of data was our first dataset and DOI minted—the genome of the deadly E. coli O104:H4 outbreak in Germany that killed over 50 people, infected thousands more, and caused mass panic in Europe in the summer of 2011. Upon receiving DNA from the University Medical Centre Hamburg-Eppendorf, our colleagues at BGI were the first to sequence and release the genome of the pathogen responsible, using "Ion Torrent" rapid bench-top sequencing technology. Due to the unusual severity of the outbreak, it was clear that the usual scientific procedure of producing data, analysing it slowly and then releasing it to the public after a potentially long peer-review procedure was inappropriate. The first genomic data were released into the public domain within hours of completion of the first round of sequencing, before they had even finished uploading to the NCBI sequencing repository, and the immediate announcement of their availability on Twitter promoted their use. The microbial genomics community around the world immediately took up the challenge to study the organism collaboratively (a process dubbed by some bloggers the first "Tweenome").
Using these data, within the first 24 h of release, researchers from Birmingham University had released their own genome assembly in the same manner, and a group in Spain released analyses from a new annotation platform and set up a collaborative GitHub repository to provide a home for these analyses and data. Within days of the initial release, a potential ancestral strain had been identified by a blogger in the US, helping clear Spanish farmers of blame. Importantly for the treatment side, the many antibiotic resistance genes and pathogenic features were much more quickly and clearly understood. Additionally, releasing the data under a CC0 waiver allowed truly open-source analysis, and the UK Health Protection Agency and other contributors to the GitHub group followed suit in releasing their work in this way. Within two weeks, two dozen reports were filed in the repository, with contributions spanning North America, Europe, Australia, Hong Kong and Egypt, including files from bloggers without formal biology training.
While the authors gained much goodwill and positive coverage from this (despite some inevitable disagreement over credit and what exactly was achieved), they still wanted and, under current research funding assessment systems, required the more traditional form of scientific credit: publication in a prestigious journal. At the time of releasing these data, the consequences of doing so in a citeable form before the publication of associated manuscripts were unclear, especially with varying journal editorial policies regarding pre-publication dissemination of results. Particularly relevant here is a commonly acknowledged editorial guideline of the New England Journal of Medicine outlining limitations on prepublication release of information, known as the "Ingelfinger rule". This policy has made many researchers wary of publicizing preliminary data, as it states that a manuscript may not be considered for publication if its substance has been reported elsewhere, which can include release in the press or other non-scientific outlets. In this digital era, however, this policy is looking increasingly out of touch with the growing use of social media, blogging and pre-print servers, and with new funder mandates and policies regarding open access to data. It is, therefore, unclear how this restriction can be reconciled with various communities' codes of practice regarding pre-publication data deposition in public databases.
Therefore, from a publishing perspective, the release of these data in a citeable format was a useful test case of how new and faster methods of communication and data dissemination can complement and work alongside more traditional systems of scientific communication and credit. Fortunately, the open-source analysis was eventually published in the New England Journal of Medicine, ironically the journal responsible for the Ingelfinger rule. It was positive to see that maximizing the use of the data by putting it into the public domain did not override the scientific etiquette and convention that allow those producing the data to be attributed and credited.
While these data and this approach aided the development of rapid diagnostic methods and targeted bactericidal agents to kill the pathogen, the project's potentially biggest legacy may be as an example of open science, data citation and the use of CC0 data. Releasing the data under a CC0 waiver allowed truly open-source analysis, and a team at the sequencing company Pacific Biosciences quickly followed this style of sharing by releasing their data openly and speedily without wasting time on legal wrangling. The lessons from this have subsequently been used to influence UK and EU science policy, with the Royal Society in the UK using the E. coli crowdsourcing as an example of "the power of intelligently open data", and highlighting it on the cover of their influential "Science as an Open Enterprise" report.
The first Data Note we published was a first-pass genome assembly covering 76 % of the genome of the rare Puerto Rican Parrot, crowdfunded with money raised from private donations, art and fashion shows in Puerto Rico. The social and traditional media attention that this paper attracted enabled the authors to receive even more sponsorship, which has funded an improved, higher-coverage version of the genome, as well as comparative genomics studies. The bird genomics community has embraced rapid data publishing, and following the Puerto Rican Parrot example, a number of our first datasets released were bird genomes. These datasets eventually became part of the Avian Phylogenomics Project, utilizing the genomics of modern birds to unravel how they emerged and evolved after the mass extinction that wiped out their dinosaur ancestors 66 million years previously. The decision by this community to release these data, in some cases up to 3 years prior to the main consortium publications—in 2011 for the Adelie and Emperor Penguins [91, 92], the Pigeon (eventually published in 2013), and the Darwin Finch (released in 2012, and as yet unpublished)—was a positive step demonstrating the benefits of early data release. Along with the extra-early release of these first species, the remaining 42 avian genomes for this interdisciplinary, international project were released seven or eight months before the project was published. Releasing one bird genome per day on Twitter and Facebook doubled the traffic to these datasets on GigaDB over the month, and generated many retweets and positive comments from other avian researchers. In addition to releasing the data, GigaScience also published two Data Notes alongside the over thirty other consortium papers in Science and various BMC journals. The two Data Notes presented a more narrative style of data publication, in which the authors described the details of data production and access for all of the comparative genomics and phylogenomics data from the bird species that support these studies. On top of the 4 TB of raw data in the SRA repository, we hosted 150 GB of data from all of the assemblies in GigaDB, and many other datasets that do not have subject-specific repositories, such as the optical maps for the Budgie and Ostrich [25, 95], including the thousands of files used in the phylogenomic work. Two DOIs in GigaDB [36, 92] collect together all of the individual bird genome DOIs, and also provide a link to a compressed single archive file for those who wish to retrieve the complete set.
Another area where fast and open dissemination of large-scale data can promote scientific advance is new technology that undergoes rapid change and cannot wait to be disseminated via standard publishing venues. An example is the release of data from the Oxford Nanopore MinION™ portable single-molecule sequencer in 2014 on GigaDB. Being a new and rapidly improving technology with regular chemistry changes and updates, there was much demand for access to test data, but few platforms or repositories were ready and able to handle the volumes and un-standardized formats of data it produced.
The first publication presenting data from the platform was quite negative about its quality but, due to the difficulties in sharing the data, provided little supporting evidence. Other groups claimed to have had more success, and there was much demand to share these early data to resolve the arguments. While groups were able to share individual reads via figshare, the raw datasets were 10–100× larger than the size restrictions set by this platform. Working with authors at Birmingham University, we helped them release the first reference bacterial genome dataset sequenced on the MinION™ in GigaDB on September 10th 2014; after peer review, we published the Data Note article describing it just over five weeks later. Being 125 GB in size, this was a challenging amount of data to transfer around the world, and our curators worked with the EBI to enable their pipelines to take the raw data, but this only became possible several weeks after the data were released.
Being the first MinION™-generated genome in the public domain, it was immediately acquired and used as test data for tools and teaching materials. Further, demonstrating the ability to rapidly review and disseminate this type of data, GigaScience also published the first MinION™ clinical amplicon sequencing paper and data in March 2015. The clinical applications of this tool continue to grow, with the Birmingham group recently demonstrating the utility of MinION™ sequencing through involvement in the Ebola crisis in the field in West Africa. The "real-time" nature of this type of technology demonstrates that publishing needs to become more real-time to keep up.
The rate of species extinction is lending increasing urgency to the description of new species, yet in this supposedly networked era the process of cataloguing the rich tapestry of life has changed little since the time of Linnaeus. Fortunately, this process is being dragged into the twenty-first century: the procedure for describing animal species finally entered the electronic era in 2012 with the acceptance of electronic taxonomy publication and registration with ZooBank, the official registry of the ICZN. Concerned by growing disappearance rates, some scientists have encouraged moving to a so-called 'turbo taxonomy' approach, in which rapid species description is needed to support conservation.
A demonstrative collaboration between GigaScience and Pensoft Publishers has pushed the boundaries of digital-era species description further still, presenting an innovative approach to describing new species by creating a new kind of 'specimen', the 'cybertype'. This consists of detailed three-dimensional (3D) computer images of a specimen that can be downloaded anywhere in the world and a swathe of data types to suit modern biology, including its gene catalogue (transcriptome), DNA barcodes, and video of the live animal, in addition to the traditional morphological description. This approach has been illustrated by the description of a newly discovered cave centipede species from a remote karst region of Croatia—the 'cyber centipede' Eupolybothrus cavernicolus, with all of the data hosted, curated and integrated using ISA-TAB metadata in the GigaDB database.
This digital representation of an exemplar type specimen shows the potential for new forms of collections that can be openly accessed and used without the physical constraints of loaning specimens or visiting museum collections. It also means digital specimens can be viewed alive and in three dimensions. While this new species' subterranean lifestyle may protect it from some of the threats on the surface, this new type of species description also provides an example of how much previously uncharacterized information, including animal behaviour, internal structure, physiology and genetic makeup, can potentially be preserved for future generations.
While museum specimens can degrade, this "cybertype" specimen has the potential to be a digital message in a bottle for future generations that may not have access to the species. This publication encouraged further submissions and publications from this community, such as 141 magnetic resonance imaging scans of 98 extant sea urchin species and three high-resolution microCT scans of brooding brittle stars, as well as a coordinated publication with the journal PLOS ONE, publishing the nearly 40 GB of microCT data supporting a paper describing, in high resolution and 3D, the morphological features commonly used in earthworm taxonomy. Despite some of the folders being close to 10 GB in size, the data reviewers were able to retrieve each of them in as little as half an hour using our high-speed Aspera internet connection.
The growth in "big data" has led to scientists doing more computation, but the nature of the work has exposed limitations in our ability to evaluate published findings. One barrier is the lack of an integrated infrastructure for distributing reproducible research to others. To tackle this, in addition to data, we also host in our repository, GigaDB, the materials and methods used in the data analyses reported in papers published in GigaScience. For Technical Notes describing software and pipelines, all associated code is released under OSI (Open Source Initiative)-compliant licenses to allow the software to be freely used, modified and shared.
On top of archiving snapshots of code and scripts on our GigaDB servers, and to allow more dynamic source code management, we also maintain a journal GitHub page for tools that are not already in a code repository (see many examples at http://github.com/gigascience). In addition, we have developed a data reproducibility platform based on the popular Galaxy workflow system to host histories and workflows and to communicate computational analyses in an interactive manner. GigaGalaxy is a project prototyping the use of Galaxy to enable computational experiments to be documented and published with all computational outputs directly connected, allowing readers to inspect intermediate data and analysis steps, as well as reproduce some or all of the experiment, and modify and re-use methods. Specific analyses of data from selected papers are re-implemented as Galaxy workflows in GigaGalaxy using the procedure shown in Fig. 1, in which all of this technical reproducibility work is done in-house. Making data analyses available using the popular Galaxy platform democratises the use of many complicated computational tools: users need neither knowledge of computer programming nor the implementation details of any single tool, and can run much of the analysis on our computational resources. It also enables more visual and easy-to-understand representations of methods, an example being the test cases from our publication demonstrating a suite of Galaxy tools to study genome diversity. We provide further documentation in GigaGalaxy on data analyses, as well as diagrams generated by cytoscape.js to visualize how input datasets, workflows and histories relate to each example analysis. For the sake of additional citability and reproducibility, the Galaxy XML files are also hosted in GigaDB. Moreover, there are implemented workflows from other papers, including a case study in reproducibility from our SOAPdenovo2 paper that managed to exactly recreate all of the benchmarking results listed in the paper.
With open-source software environments such as R and Python continuing to grow in popularity, a number of reporting tools, such as Knitr and Jupyter, are being integrated into them. These tools enhance reproducible research and automated report generation by supporting execution of code embedded within various document formats. One example was a paper publishing a huge cache of electrophysiology data resources important for studying visual development. On top of the 1 GB of supporting data and code being available from GigaDB, the authors also produced the paper in a dynamic manner, creating it using R and the Knitr library. Following the reproducible research paradigm, this allows readers to see and use the code that generated each figure and table and know exactly how the results were calculated, adding confidence in the research output and allowing others to easily build upon previous work.
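As a minimal sketch of this style of dynamic document (with illustrative stand-in data, not the published dataset), an R script written for knitr's "spin" format interleaves prose comments with executable code, so every figure is regenerated on each render:

```r
#' ## Firing-rate summary (illustrative stand-in data)
set.seed(42)
spikes <- rpois(100, lambda = 5)   # simulated spike counts per time bin

#' The histogram below is produced by the code itself at render time,
#' so readers can see exactly how the figure was generated.
hist(spikes, main = "Firing-rate distribution", xlab = "Spikes per bin")

# Render the report with: knitr::spin("report.R")
```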
Another approach to disseminating more accessible and dynamic research outputs is the use of virtual machines, giving reviewers and future users the ability to reproduce the experiments described in a paper without the need to install complex, version-sensitive and inter-dependent prerequisite software components. With a number of submitters following this approach, we have reviewed and published several virtual machines, one example being a paper publishing novel MRI tools and data. Packaging the test data alongside the tools, scripts and software required to run the experiments, this is available to download from GigaDB as a "virtual hard disk" that allows researchers to directly run the experiments themselves and to add their own annotations to the dataset. A related but more lightweight approach is to use containers, such as Docker, applying virtualisation techniques to encapsulate analysis workflows and make them independent of the host they are executed on. GigaScience recently published its first example of a containerized analysis workflow that can be executed virtually anywhere, using a Docker container of metagenomics data and pipelines.
Submitting code, workflows and these more dynamic research objects can be complicated, but being based at a research institute, with curators, data scientists and workflow management experts in-house, enables us to help authors curate these resources if they are unable to. Leveraging the functionality that many of these open-source platforms (e.g. GitHub and DockerHub) provide for cloning their contents as an archive makes it a relatively trivial task for us to take these snapshots. Referees are asked to, and in most cases do, carry out thorough data reviews, validation and reproducibility checks; if they are unable to do this sufficiently rigorously, our in-house team often steps in as an additional reproducibility referee. These additional overheads are covered through combined Open Access Article and Data Publishing charges, as well as support from external funders, such as China National Genebank (a non-profit institute supported by the government of China). Being able to take advantage of the tens of petabytes of storage and computational infrastructure already in place at BGI and China National Genebank keeps the overheads low enough to provide these value-added services.
In the three and a half years since the formal launch of the journal, we have published an extremely diverse range of data types, and our publishing pipelines and practices have evolved significantly in this time. Looking back at the successes and difficulties over this period, there are a number of lessons worth sharing. For a journal focussing on "big data", the challenge of data volume is the most obvious one. While the "long tail" of small, unstructured datasets is easy to handle, our chosen niche, focussing on the small proportion of data producers and fields generating the bulk of global research data volumes, has been challenging. While we demonstrated it is possible to publish a 10 TB+ dataset such as the 3,000 rice genomes, it subsequently took DNAnexus a month to download these data from our FTP servers. In the year after publication, the processed data and analyses carried out on the DNAnexus platform took the total data volume to 120 TB. This has been made publicly available in the cloud as an Amazon Web Services (AWS) Public Dataset, but if we were to host it on our GigaDB server it would take one user at least one year to download (https://aws.amazon.com/public-data-sets/3000-rice-genome/), given the speed at which DNAnexus received the data from us. Similarly, a number of the terascale datasets we have presented had to be shipped to us on hard disks, a method that would be increasingly impractical if we were to send the disks on to downstream users in the same manner. Even datasets in the 100 GB range have been challenging to obtain from less-connected corners of the world: a microCT imaging dataset from South Africa took one month to be copied to our servers due to bandwidth problems and regular power cuts at the submitting university, requiring the process to be restarted a number of times. Popular datasets require greater bandwidth, and the E. coli nanopore dataset mentioned above had to be mirrored in AWS S3 for the first month to cope with the huge short-term demand.
On top of data volumes, reproducibility has been the other major challenge and learning experience. For a case study in reproducibility we used what we hoped was one of our most scrutinized papers, that of the bioinformatics tool SOAPdenovo2. We subjected the publication to a number of data models, including ISA-TAB, Research Object and Nanopublications, and despite managing to exactly recreate all of the results listed in the paper, this identified a small number of inaccuracies in the interpretation and discussion. Because these deficiencies were uncovered, the authors produced a correction article to officially communicate the amendment to their initial report. The open, detailed and transparent approach to reviewing data and methods used by GigaScience has also uncovered methodological problems in companion papers published in other journals. For example, in reviewing a Data Note presenting metabolomics and lipidomics data, discrepancies in the previously reported methods came to light. This led to an Erratum being published in the Journal of Proteome Research explaining that care should be taken when interpreting some of the results. Availability and scrutiny of code is just as important as that of data, and our MinION™-sequenced reference bacterial genome Data Note had to be corrected after an error in a script was reported, with a fix supplied by an anonymous online contributor on GitHub. While correcting and keeping the scientific record accurate, these approaches are expensive in time and money, and cost–benefit decisions need to be made on how much effort should be expended to maintain this level of reproducibility. Notably, among the examples discussed in this paper, it took about half a man-month of resources to reproduce the results reported in our SOAPdenovo2 paper, and around $1,000 of AWS credits to replicate the results of our "Dockerised" metagenomics data and pipeline paper. These costs need to be balanced against the US$28 B/year wasted on irreproducible preclinical research alone, and reproducibility will be much cheaper and more effective if practices like version control and workflow management are carried out by the authors at the time the experiments are performed, rather than retrospectively by the journal. This will also make the review process much easier and take less time from the external reviewers and in-house team at the journal. Alongside investments in distributed computing infrastructure, there should also be investment in distributing training in reproducible research practices. Schemes such as Software and Data Carpentry are essential to take much of the load off the peer review and publication process in ensuring the accuracy and reproducibility of the literature, and the GigaScience team has participated in a number of workshops, hackathons and "bring your own data parties" for these very reasons.
The papers and datasets presented provide examples of how novel mechanisms of dissemination can aid and speed up important areas of research, such as disease outbreaks, biodiversity and conservation. While we try to host all of the supporting materials and methods used in GigaScience papers in our GigaDB database, it is often the case that this is still not enough information to understand how the results of a scientific study were produced. A more comprehensive solution is required for users to reproduce and reuse the computational procedures described in scientific publications. This deeper and more hands-on scrutiny in the publication and review process is likely to identify more methodological inconsistencies in presented work, and, as we have seen from some of our own published work, there may, initially at least, be an increase in Errata and Corrections as a result. Rather than seeing this in a negative light as airing the "dirty laundry", journals and authors should see it as an essential part of the scientific process, and be proud to be part of the self-correcting nature of the research cycle.
Current publishing practices, which have barely changed since the seventeenth century, are particularly poorly evolved to handle this; so, beyond data storage, we are looking to produce more dynamic and executable research objects. To this end, we have been developing a data reproducibility platform based on the popular Galaxy workflow system, and the first examples of this have already been published. While there is an archival version of the code hosted in GigaDB, and a dynamic version linked through code repositories such as our GitHub page, being able to visualize and execute parts of the pipelines and workflows offers a totally different perspective, allowing reviewers and future users to 'kick the wheels' and 'get under the hood' of computational methods and analyses. We are also starting to work with virtual machines and Docker containers to ensure the software presented always behaves consistently. This will democratize science further, allowing users without coding experience or access to high-performance computing infrastructure to access and utilize the increasingly complicated and data-driven research that they have funded. As research data volumes continue to grow near-exponentially, as demonstrated anecdotally here by our publishing of growing numbers of datasets at the giga- and tera-scale, it is becoming increasingly technically challenging to publish and disseminate large-scale data to potential users. Alternative strategies are required, and, taking the lead from industry-standard big data processing approaches used in cloud computing, we and others need to move from being "data publishers" to "compute publishers". The proof-of-concept examples presented here in publishing virtual machines and Docker containers already demonstrate the feasibility of this, and the next stage is to make this scalable and standardized. The approaches of groups such as the Bioboxes community to create standardized interfaces that make scientific software interchangeable show a potential path towards doing this, and GigaScience is focussing its efforts over the coming years to be at the forefront of these efforts.
The authors would like to thank BGI and China National Genebank for supporting GigaScience with seed funding and data, and the BBSRC and NERC for collaboration funding. | CommonCrawl |
Abstract. We consider the kinetic Ising models (Glauber dynamics) corresponding to the infinite volume Ising model in dimension 2 with nearest neighbor ferromagnetic interaction and under a positive external magnetic field $h$. Minimal conditions on the flip rates are assumed, so that all the common choices are being considered. We study the relaxation towards equilibrium when the system is at an arbitrary subcritical temperature $T$ and the evolution is started from a distribution which is stochastically lower than the ($-$)-phase. We show that as $h\to 0$ the relaxation time blows up as $\exp(\lambda_c(T)/h)$, with $\lambda_c(T) = w(T)^2/(12 T m^*(T))$. Here $m^*(T)$ is the spontaneous magnetization and $w(T)$ is the integrated surface tension of the Wulff body of unit volume. Moreover, for $0 < \lambda < \lambda_c$, the state of the process at time $\exp(\lambda/h)$ is shown to be close, when $h$ is small, to the ($-$)-phase. The difference between this state and the ($-$)-phase can be described in terms of an asymptotic expansion in powers of the external field. This expansion can be interpreted as describing a set of $\mathcal{C}^\infty$ continuations in $h$ of the family of Gibbs distributions with the negative magnetic fields into the region of positive fields.
In the previous article on cointegration in R we simulated two non-stationary time series that formed a cointegrated pair under a specific linear combination. We made use of the statistical Augmented Dickey-Fuller, Phillips-Perron and Phillips-Ouliaris tests for the presence of unit roots and cointegration.
A problem with the ADF test is that it does not provide us with the necessary $\beta$ regression parameter - the hedge ratio - for forming the linear combination of the two time series. In this article we are going to consider the Cointegrated Augmented Dickey-Fuller (CADF) procedure, which attempts to solve this problem. We've looked at CADF before using Python, but in this article we are going to implement our CADF function using R.
While CADF will help us identify the $\beta$ regression coefficient for our two series it will not tell us which of the two series is the dependent or independent variable for the regression. That is, the "response" value $Y$ from the "feature" $X$, in statistical machine learning parlance. We will show how to avoid this problem by calculating the test statistic in the ADF test and using it to determine which of the two regressions will correctly produce a stationary series.
The main motivation for the CADF test is to determine an optimal hedging ratio to use between the two assets in a mean reversion trade, which was a problem that we identified in the previous article. In essence, it helps us determine how much of each asset to long and short when carrying out a pairs trade.
The CADF is a relatively simple procedure. We take a sample of historical data for two assets and then perform a linear regression between them, which produces $\alpha$ and $\beta$ regression coefficients, representing the intercept and slope, respectively. The slope term helps us identify how much of each asset to trade relative to the other, as the toy sketch below shows.
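Here is a toy illustration (all numbers invented) of how the slope from such a regression translates into relative position sizes:

```r
# Toy illustration only: beta is an invented value standing in for the
# slope from a linear regression of asset Y on asset X.
beta <- 1.4
units_y <- 100                 # long 100 units of the dependent asset Y
units_x <- beta * units_y      # short 140 units of the independent asset X
# The combined position tracks the spread y - beta * x, which is the
# series we hope is stationary and mean-reverting.
cat("Long", units_y, "units of Y, short", units_x, "units of X\n")
```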
Once the slope coefficient - the hedge ratio - has been obtained we can then perform an ADF test (as in the previous article) on the linear regression residuals in order to determine evidence of stationarity and hence cointegration.
We will use R to carry out the CADF procedure, making use of the tseries and quantmod libraries for the ADF test and historical data acquisition, respectively.
We will begin by constructing a synthetic data set, with known cointegrating properties, to see if the CADF procedure can recover the stationarity and hedging ratio. We will then apply the same analysis to some real historical future data, as a precursor to implementing some mean reversion trading strategies.
We are now going to demonstrate the CADF approach on simulated data. We will use the same simulated time series from the previous article.
Recall that we artificially created two non-stationary time series that formed a stationary residual series under a specific linear combination.
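A minimal sketch of such a simulation is given below. The exact coefficients are assumptions chosen for illustration; the essential point is that x and y share the same underlying random walk z, so a suitable linear combination of them is stationary:

```r
set.seed(123)
z <- cumsum(rnorm(1000))     # shared underlying random walk
x <- 0.3 * z + rnorm(1000)   # first non-stationary series
y <- 0.6 * z + rnorm(1000)   # second non-stationary series
plot(x, type = "l", col = "blue", ylab = "x (blue) and y (red)")
lines(y, col = "red")
```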
We can use the R linear model lm function to carry out a linear regression between the two series. This will provide us with an estimate for the regression coefficients and thus the optimal hedge ratio between the two series.
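A sketch of this step, using the simulated series above:

```r
comb <- lm(y ~ x)
summary(comb)              # intercept (alpha) and slope (beta) estimates
beta <- coef(comb)["x"]    # the hedge ratio
resids <- residuals(comb)  # candidate stationary spread, y - alpha - beta * x
```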
The Dickey-Fuller test statistic is very low, providing us with a low p-value. We can likely reject the null hypothesis of the presence of a unit root and conclude that we have a stationary series and hence a cointegrated pair. This is clearly not surprising given that we simulated the data to have these properties in the first place.
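For reference, the statistic discussed above would be produced by a call along these lines (the lag order k = 1 is an assumption; tseries chooses a length-based default if it is omitted):

```r
library(tseries)
adf.test(resids, alternative = "stationary", k = 1)
```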
We are now going to apply the CADF procedure to multiple sets of historical financial data.
There are many ways of forming a cointegrating set of assets. A common source is to use ETFs that track similar characteristics. A good example is an ETF representing a basket of gold mining firms paired with an ETF that tracks the spot price of gold. Similarly for crude oil or any other commodity.
An alternative is to form tighter cointegrating pairs by considering separate share classes on the same stock, as with the Royal Dutch Shell example below. Another is the famous Berkshire Hathaway holding company, run by Warren Buffet and Charlie Munger, which also has A shares and B shares. However, in this instance we need to be careful because we must ask ourselves whether we would likely be able to form a profitable mean reversion trading strategy on such a pair, given how tight the cointegration is likely to be.
A famous example in the quant community of the CADF test applied to equities data is given by Ernie Chan. He forms a cointegrating pair from two ETFs, with ticker symbols EWA and EWC, representing a set of Australian and Canadian equities baskets, respectively. The logic is that both of these countries are heavily commodities based and so will likely have a similar underlying stochastic trend.
Ernie makes use of MATLAB for his work, but this is an article about R. Hence I thought it would be instructive to utilise the same starting and ending dates for his historical analysis in order to see how the results compare.
For completeness I will replicate the plots from Ernie's work, in order that you can see the same code in the R environment. Firstly, let's plot the adjusted ETF prices themselves.
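A sketch of the data acquisition and plot, assuming Yahoo Finance as the quantmod data source; the start and end dates below are assumed to match Ernie's sample period:

```r
library(quantmod)
# Download the two ETFs (dates assumed for illustration)
getSymbols("EWA", from = "2006-04-26", to = "2012-04-09")
getSymbols("EWC", from = "2006-04-26", to = "2012-04-09")
ewaAdj <- unclass(EWA$EWA.Adjusted)
ewcAdj <- unclass(EWC$EWC.Adjusted)
plot(ewaAdj, type = "l", col = "blue", ylim = c(5, 35),
     ylab = "ETF backward-adjusted price")
lines(ewcAdj, col = "red")
```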
Once again we have evidence to reject the null hypothesis of the presence of a unit root, leading to evidence for a stationary series (and cointegrated pair) at the 5% level.
The ADF test statistic for EWC as the independent variable is smaller (more negative) than that for EWA as the independent variable and hence we will choose this as our linear combination for any future trading implementations.
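A sketch of that comparison, running the regression in both directions and extracting the ADF test statistic from each set of residuals:

```r
fit_ewa_dep <- lm(ewaAdj ~ ewcAdj)   # EWC as the independent variable
fit_ewc_dep <- lm(ewcAdj ~ ewaAdj)   # EWA as the independent variable
adf.test(residuals(fit_ewa_dep), k = 1)$statistic
adf.test(residuals(fit_ewc_dep), k = 1)$statistic
# Choose the direction whose residuals give the more negative statistic.
```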
A common method of obtaining a strong cointegrated relationship is to take two publicly traded share classes of the same underlying equity. One such pair is given by the London-listed Royal Dutch Shell oil major, with its two share classes RDS-A and RDS-B.
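The same procedure applies directly; note that hyphenated Yahoo tickers must be wrapped in backticks in R, and the date range below is an assumption for illustration:

```r
getSymbols("RDS-A", from = "2006-01-01", to = "2015-12-31")
getSymbols("RDS-B", from = "2006-01-01", to = "2015-12-31")
rdsaAdj <- unclass(`RDS-A`$`RDS-A.Adjusted`)
rdsbAdj <- unclass(`RDS-B`$`RDS-B.Adjusted`)
# Then regress each on the other and ADF-test the residuals, as above.
```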
Since the first linear combination has the smallest Dickey-Fuller statistic, we conclude that this is the optimal linear regression. In any subsequent trading strategy we would utilise these regression coefficients for our relative long-short positioning.
We have utilised the CADF to obtain the optimal hedge ratio for two cointegrated time series. In subsequent articles we will consider the Johansen test, which will allow us to form cointegrating time series for more than two assets, providing a much larger trading universe from which to pick strategies.
In addition we will consider the fact that the hedge ratio itself is not stationary and as such will utilise techniques to update our hedge ratio as new information arrives. We can utilise the Bayesian approach of the Kalman Filter for this.
Once we have examined these tests we will apply them to a set of trading strategies built in QSTrader and see how they perform with realistic transaction costs. | CommonCrawl |
Adv. Appl. Math. Mech., 11 (2019), pp. i-iii.
Adv. Appl. Math. Mech., 11 (2019), pp. 559-570.
The secondary instability in high-speed boundary layers over flat plates was investigated numerically. The simulations suggest that the main streaky structures in the 2D hypersonic boundary-layer transition process result from the secondary instability of the primary 2D Mack mode. The secondary instability analysis identified a new family of solutions, called the fundamental family, which becomes the least stable secondary instability when the amplitude of the primary mode reaches a threshold value. This helps us understand fundamental breakdown in hypersonic boundary layers.
Adv. Appl. Math. Mech., 11 (2019), pp. 571-582.
This paper presents the fundamentals to understand when designing numerical schemes for hyperbolic problems with discontinuities as parts of their solutions. These fundamentals include consistency with hyperbolic balance laws in integral form rather than PDE form, spatial-temporal coupling, thermodynamic consistency for computing compressible fluid flows, convergence arguments, multidimensionality, etc. Some numerical results are shown to display the performance.
Adv. Appl. Math. Mech., 11 (2019), pp. 583-597.
Richtmyer-Meshkov instability (RMI) in a spherical geometry is studied via direct numerical simulation using a high-order three-dimensional in-house solver. Specifically, a sixth-order compact difference scheme coupled with a localized artificial diffusivity method is adopted in order to capture discontinuities with high accuracy. A pure converging shock propagating in a sphere is simulated and the result agrees well with Guderley's theory. For RMI in a spherical geometry, the development of the mixing width and its growth rate at different stages are examined and the underlying mechanism is briefly analyzed. Particularly addressed is the effect of Mach number on the growth rate of perturbations and the turbulent mixing process.
Adv. Appl. Math. Mech., 11 (2019), pp. 598-607.
The paper presents a study of the influence of domain size and LBM collision models on fully developed turbulent channel flows. Results using the spectral method show that a smaller domain size increases the velocity fluctuations in the streamwise direction. MRT-LBM with different collision models gives reliable results, at least for low-order flow statistics, compared with those from the spectral method and a finite-difference method.
Adv. Appl. Math. Mech., 11 (2019), pp. 608-618.
In this paper, a thick disk with aspect ratio greater than $1/10$ is directly simulated by the adaptive mesh refinement-immersed boundary-lattice Boltzmann flux solver (AMR-IB-LBFS). The AMR-IB-LBFS model is a combination of the adaptive mesh refinement (AMR) technique and the immersed boundary-lattice Boltzmann flux solver (IB-LBFS). Disks with four different aspect ratios are numerically simulated, and the numerical results are in good agreement with theoretical results. In addition, the trajectories and falling postures of these disks with different aspect ratios are compared.
Adv. Appl. Math. Mech., 11 (2019), pp. 619-629.
As the analogue of helicity, the concept of stream helicity proposed in 2007 provides a quantitative measurement of the entanglement of streamlines for an incompressible flow field, or of magnetic field lines for a magnetic field. It is found that the helical-wave decomposition can serve as a convenient mathematical-physics tool for the calculation of helicity and stream helicity. However, for a multiply connected domain such as a torus, in the context of TOKAMAKs, the analysis presents some peculiarities due to the multi-valued nature of some of the potentials involved. Here we give a full derivation of stream helicity for such a case, together with some detailed analysis. In particular, it is found that a maximum of absolute helicity under prescribed energy can be attained by a pure potential field.
Adv. Appl. Math. Mech., 11 (2019), pp. 630-639.
The linear stability of a supersonic mixing layer with the mean velocity profile approximated by the hyperbolic tangent is studied. The convective Mach number of the mixing layer is 2 and the temperature ratio is 1. A new classification of disturbance modes of the mixing layer is presented to unify the existing results in the literature and the new results in this paper. The mixing layer has supersonic and subsonic modes according to the disturbance phase speed. The supersonic modes can then be classified into mixing modes and acoustic modes. The mixing modes can be further classified into subsonic-supersonic and supersonic-subsonic modes according to the relative speed between the disturbance phase speed and the free-stream speed of the mixing layer; these correspond to the fast mode and the slow mode, respectively. The phase speeds of the acoustic modes are supersonic with respect to the free streams on both sides of the mixing layer, and some unstable acoustic modes are found. The acoustic modes have radiation characteristics on both sides of the mixing layer and result in a series of expansion and compression fans.
Adv. Appl. Math. Mech., 11 (2019), pp. 640-652.
In this paper, Lagrangian tracking of a specific material surface and the Lagrangian-averaged vorticity deviation (LAVD) are applied to experimental data sets of two kinds of wall-bounded flows to detect coherent structures: one a laminar boundary layer with a wall-mounted hemisphere, the other a turbulent boundary layer. Lagrangian coherent structures detected in the hemisphere-perturbed laminar boundary layer show some similarity with Eulerian-detected hairpin vortices. However, the LAVD-based vortices and the evolution of the material surface observed in the turbulent boundary layer differ from the patterns in the wake of the hairpin-shedding hemisphere. The wavelike deformed material surfaces appear to support the importance of three-dimensional wave structures in the near-wall turbulence production process. The Lagrangian methods provide another perspective for understanding coherent structures in wall-bounded flows.
Adv. Appl. Math. Mech., 11 (2019), pp. 653-663.
In this paper, direct numerical simulation (DNS) of a spatially evolving flat-plate hypersonic turbulent boundary layer with free-stream Mach number $8$ is used to study the effects of wall temperature on compressibility and Reynolds stress contributions. The DNS comprises two cases with different wall temperatures, $T_w/T_\infty=10.03$ and $1.9$. The results show that the lower wall temperature enhances both the compressibility and the ejection-and-sweep motions near the wall.
Adv. Appl. Math. Mech., 11 (2019), pp. 664-674.
The effects of combustion on turbulent statistics in a supersonic turbulent jet are investigated using direct numerical simulation (DNS). To study the combustion effects, two DNS cases (a reacting case and a non-reacting case) are conducted and their turbulent statistics are compared. The jet medium is $85\%$ hydrogen at temperature $305K$ and the co-flow is hot air at temperature $1150K$. The Reynolds number based on the jet diameter is $22000$ and the jet Mach number is $1.2$. The DNS results show that the reacting case has both a larger decay rate constant of the centerline streamwise mean velocity and a larger spreading rate of the half-velocity jet width than the non-reacting case. It also shows that the reacting case achieves self-similarity more slowly. For the reacting jet, the vortex structures are smoother in the near field and the vortex filament structures are larger in the far field. Combustion causes the turbulent zone to be shifted downstream and increases the probability of strong velocity fluctuations.
Adv. Appl. Math. Mech., 11 (2019), pp. 675-685.
Double-step interrogation schemes with decreasing window size are investigated. Particle image pairs are created by Monte Carlo methods based on a simulated hypersonic transitional boundary-layer flow. Two types of initial-step interpolation, with and without the no-slip condition at the wall (so-called CSIN and CSI), are respectively investigated. It can be concluded that the no-slip condition is very important to the multi-grid interrogation scheme: CSIN agrees well with the preset results but CSI does not, and the error level of CSI near the wall is nearly 5 times that of CSIN.
Adv. Appl. Math. Mech., 11 (2019), pp. 686-699.
Secondary instability of a streaky boundary layer under spanwise-localized free-stream vortical disturbances (FSVD) is investigated using BiGlobal instability analysis. The instability analysis is performed at different streamwise locations and at different instants over a whole time period, and the results in the space-time $(x-t)$ plane are taken into consideration, such that unstable modes are found in four unstable zones. According to the comparison of mode growth accumulation, we find that the contribution of secondary instability to bypass transition is much more important than that of T-S waves and that the strong central streak plays the key role in secondary instability. The sinuous-type outer mode at the central low-speed streak is the dominant unstable mode, which agrees with experimental observations. The inner mode at the central high-speed streak also has a high growth rate, but its growth cannot keep accumulating, so it plays an unimportant role in secondary instability. All unstable modes found in this paper appear at Fjortoft inflection points of the basic flow in the gradient direction, implying that the modes have the same physical nature, i.e., they are caused by the inflectional instability of the shear flow.
Adv. Appl. Math. Mech., 11 (2019), pp. 700-710.
Helicity plays an important role in the nonlinear dynamics of compressible helical turbulence. We carry out direct numerical simulations (DNS) of compressible helical turbulence with different mean helicity at a grid resolution of $512^3$ and try to explore the physical mechanism of the kinetic energy cascade under the effect of helicity. The coarse-graining filtering method is employed for scale decomposition to obtain the kinetic energy flux. We reach some novel conclusions in contrast to pre-existing knowledge of incompressible helical turbulence. Firstly, helicity also hinders the compressible kinetic energy cascade and reduces viscous dissipation. Secondly, helicity weakens some features exclusive to compressible turbulence, such as the coupling between the compressible and solenoidal modes of the fluctuating velocity, the transformation between kinetic energy and internal energy, and shocklets, which appear only in compressible turbulence and act as a constraint.
Adv. Appl. Math. Mech., 11 (2019), pp. 711-722.
Fully developed rotating turbulent channel flow (RTCF) has been numerically investigated using large-eddy simulation (LES). The subgrid-scale (SGS) eddy-viscosity model is based on an SGS helicity dissipation balance and the spectral relative helicity. An a posteriori test has been performed on an RTCF with rotation in the spanwise direction. The friction Reynolds number $Re_\tau=u_\tau\delta/\nu$, based on the wall shear velocity $u_\tau$, the channel half-width $\delta$ and the kinematic viscosity $\nu$, is 180. Two rotation numbers $Ro_\tau=2\Omega\delta/u_\tau$, equal to 22 and 80, have been computed with appropriate grid resolutions. The results from the dynamic Smagorinsky model (DSM) and direct numerical simulation (DNS) are used as references. The results demonstrate that the eddy-viscosity model can accurately predict both the velocity profile and the turbulent intensity.
Adv. Appl. Math. Mech., 11 (2019), pp. 723-736.
The energy spectrum and energy transfer in compressible homogeneous turbulent shear flow are numerically investigated via high-accuracy direct numerical simulation. The Helmholtz decomposition method is employed to decompose the velocity field into a solenoidal component and a compressive one. It is found that the spectra of the different velocity modes are strongly anisotropic over all the resolved scales, and that the specific properties of small-scale anisotropy are significantly influenced by the flow compressibility. The anisotropy of the energy transfer process comes from an additional kinetic energy production term which acts as a dominant source at relatively large scales in the streamwise direction. After the redistribution of turbulent kinetic energy caused by the pressure-dilatation correlation in different directions, the energy fluxes due to advection are also expected to be anisotropic. The streamwise energy flux is predominantly large and transfers kinetic energy down to smaller scales, whereas the cross-streamwise and spanwise energy fluxes are less significant and exhibit an inverse energy transfer at relatively large scales. | CommonCrawl
A quadratic function has the form $f(x) = ax^2 + bx + c$, where $a \neq 0$ and $b$, $c$ are given real numbers.
Every function has its own characteristic graph, and the quadratic function is no exception. The graph of every quadratic function is a parabola. A parabola is the set of points in a plane that are equally distant from a given line, called the directrix, and a given point, the focus, that is not on that line.
Many aspects affect the behavior of this graph, so we'll start with the simplest.
Notice that only the leading coefficient is different from zero, and that it is equal to exactly one. Since we still don't know exactly what this graph looks like, we'll start by plotting a number of points and see where this leads us.
Now we have to find a curve that connects these points perfectly. If we continued the process of finding points of this graph, we'd start to get a clear picture of what it should look like.
Let's observe this graph and see what we can conclude.
This graph lies only in the upper half-plane (its values are non-negative), and it is symmetric with respect to the $y$-axis.
Consider $f(x) = ax^2$. The next thing we'll look at is how the function behaves when we increase or decrease the leading coefficient.
So what exactly happened? Let's try to draw this function together with the function $f(x) = x^2$ and compare them.
As you can see, the function $f(x) = 2x^2$ is much narrower than the function $f(x) = x^2$. And this is exactly the result we should expect: if you take a large number as your leading coefficient, the function grows rapidly, which makes the graph look narrower.
The vertex of a parabola with only a leading coefficient, $f(x) = ax^2$, will always be at the point $(0, 0)$.
On the other hand, if your leading coefficient is a number less than one (but still a positive number), your graph will look wider, because the function values grow much more slowly.
With a negative leading coefficient, all these graphs turn around: all their function values become negative numbers. Note: every time you see a negative leading coefficient, your parabola will be turned upside down.
Now let's see what happens when we include the linear coefficient.
$f(x) = x^2 - 2x$. Let's, again, take some set of points and calculate the function's values at them.
From here we can't accurately draw our parabola, because we don't know much about it yet. In the first case we showed, we knew that the vertex of the parabola would be in the center and that its graph would be symmetric with respect to the $y$-axis. Now we know that none of that applies to this graph.
The first useful point is the vertex: its $x$-coordinate is given by $x = -\frac{b}{2a}$. When you know the $x$-coordinate, you can simply substitute it into the function and get your $y$-coordinate.
The second thing that is useful to know when drawing the graph of a quadratic function is its zeros. Zeros are the points at which the graph intersects the $x$-axis. You get those points by solving the quadratic equation $f(x) = 0$.
For this function we'd solve $x^2 - 2x = 0$ and get $x_1 = 0$, $x_2 = 2$. From here it is quite simple to draw this graph.
Of course, this procedure is exactly the same when the coefficients change their signs.
This means that our vertex is at the point $V = (1, 1)$.
We got imaginary solutions. This will always mean that our graph does not intersect the $x$-axis. Since you already know your vertex, you know the line across which your graph is symmetric. All you need is a few more calculated points and you can draw your graph. To be precise, you can also calculate where your graph will intersect the $y$-axis: at the point where $x = 0$, in our case the point $(0, 3)$.
You can draw all of these graphs with minimal calculation if you learn how these graphs translate.
For example, if you want to draw the function $f(x) = x^2 - 4x + 4$, you can turn it into $f(x) = (x - 2)^2$, which is nothing but the graph of the function $f(x) = x^2$ translated by two to the right.
When you have, for example, the graph $f(x) = x^2 + 2$, it will look just like the graph of the function $f(x) = x^2$, only translated up by two along the $y$-axis.
Every graph you draw is some kind of a combination of these two, which means that you can, by completing the square, draw all of these graphs without really calculating anything.
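For instance, assuming the function discussed below is $f(x) = 2x^2 - 4x + 5$ (an inferred example, since the original formula is not shown here), completing the square gives:

$$f(x) = 2x^2 - 4x + 5 = 2(x^2 - 2x + 1) + 3 = 2(x - 1)^2 + 3.$$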
This graph will look like the graph of the function $f(x) = 2x^2$ translated by 1 to the right along the $x$-axis and moved up by 3 along the $y$-axis. | CommonCrawl
There are $n$ concert tickets available, each with a certain price. Then, $m$ customers arrive, one after another.
Each customer announces the maximum price they can pay for a ticket, after which they get the ticket whose price is the highest possible price not exceeding that maximum. Once bought, that ticket cannot be purchased again.
The first input line contains integers $n$ and $m$: the number of tickets and the number of customers.
The next line contains $n$ integers $h_1,h_2,\ldots,h_n$: the price of each ticket.
The last line contains $m$ integers $t_1,t_2,\ldots,t_m$: the maximum price for each customer.
Print, for each customer, the price that they will pay for their ticket.
If a customer cannot get any ticket, print $-1$. | CommonCrawl |
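A minimal sketch of one standard approach to the problem above (my own solution, not part of the statement; it assumes the third-party `sortedcontainers` package as a stand-in for a multiset):

```python
import sys
from sortedcontainers import SortedList

def solve(prices, offers):
    tickets = SortedList(prices)
    answers = []
    for t in offers:
        i = tickets.bisect_right(t)      # index of first ticket priced above t
        if i == 0:
            answers.append(-1)           # every ticket is too expensive
        else:
            answers.append(tickets.pop(i - 1))  # buy closest price <= t
    return answers

def main():
    data = sys.stdin.buffer.read().split()
    n, m = int(data[0]), int(data[1])
    prices = list(map(int, data[2:2 + n]))
    offers = list(map(int, data[2 + n:2 + n + m]))
    print('\n'.join(map(str, solve(prices, offers))))

if __name__ == '__main__':
    main()
```

Taking the most expensive affordable ticket for each customer is safe here: any cheaper assignment could be exchanged for this one without making later customers worse off.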
How many digits can you create with only one seven segment display?
Remove the square in the top-left corner of a $2015 \times2015$ chessboard. Can the remaining mutilated chessboard be tiled with $1\times4$ and $4\times1$ rectangles?
Is it possible to arrange the integers $1,2,3,\ldots,169$ in a $13\times13$ square, so that in every $2\times2$ square the sum of the four numbers is divisible by $170$?
How many ways are there to stack boxes? | CommonCrawl |
One of the Fundamental Theorems in Functional Analysis is the Open Mapping Theorem.
Theorem. Let $X,Y$ be Banach spaces and $T \in L(X,Y)$. Then $T$ is surjective if and only if it is open.
I'd love to see some motivation on this theorem (not a proof!). Why should it be true? How was it discovered?
Perhaps the problem is that one (at least in the beginning of mathematical studies) not often thinks about open maps - but I find it fascinating that people saw a connection between surjectivity and open maps.
After you've worked through some examples of this type, the open mapping theorem begins to look rather plausible.
Think of the finite-dimensional case. If a linear map is open, then it maps the open unit ball to an open ellipsoid centred at 0. The image cannot have lower dimension, or else it is "flat" and not open. By expanding the unit ball, the ellipsoid increases with it (by linearity of $T$) and eventually covers the whole space.
The point of the theorem is that this is still valid in infinite dimensions. The strategy of most proofs is to show that the unit open ball maps to an open set that contains a small ball centered at 0. So by expanding them, the whole space is covered.
I like very much the version in Rudin's book which, in the case of Banach spaces, says that almost open continuous linear operators $T:X\to Y$ are surjective and open, where almost open means that the closure of $T(B_X)$ ($B_X$ being the unit ball of $X$) contains a ball $\alpha B_Y$ for some $\alpha$.
This condition can be spelled out as follows: There is a constant $C>0$ such that for each $\varepsilon>0$ and each $y\in Y$ the equation $y=T(x)$ has an approximate solution with norm control, i.e., there is $x\in X$ with $\|y-T(x)\|\le \varepsilon$ and $\|x\|\le C\|y\|$. The conclusion of the theorem is then that you always have exact solutions with norm control.
This viewpoint of the open mapping theorem (I don't know any book really promoting Banach's theorem in that way) encapsulates concrete approximation-and-correction procedures (as, e.g., in the classical Mittag-Leffler theorem on prescribing the poles of meromorphic functions) in an abstract principle.
Why should the open mapping theorem be expected?
Is the open mapping theorem about linear maps in Banach spaces dependent on the axiom of choice? | CommonCrawl |
The puzzle below, known as The Problem of Points, is often said to have inspired probability theory.
Fermat and Pascal play a game by repeatedly flipping coins - if a coin lands heads, Fermat gains a point, and if it lands tails, Pascal gains a point. It is agreed that first to $10$ points wins a prize of $100$ francs. Suppose the game abruptly stops when the score is $8$-$7$ to Fermat. How should the prize be fairly divided between the two players?
This sounds quite close in spirit to the puzzle "Who's Doing the Dishes?".
True - I think this one is a little easier though.
I'm assuming fair division means split the prize based on win probabilities.
An odd principle, considering if it's taken to the start of the game no one makes any money!
I agree with Tiago that if you can solve the dishes challenge, you can solve this one.
Hey James - I think your interpretation of fair is most appropriate.
If Pascal's score remains $7$, then there is one possibility of Fermat winning, i.e., he wins both of the next two tosses, with probability $0.5\times0.5=0.25$.
If Pascal's score is $8$ when Fermat wins, there are $2$ possibilities: either Fermat loses and then wins twice (i.e., LWW), or wins, loses and then wins again (i.e., WLW). Together, these have probability $0.125\times2=0.25$.
If Pascal's score is $9$ when Fermat wins the game, there are $3$ possibilities: LLWW, LWLW and WLLW, each with probability $0.0625$, for a total of $0.1875$.
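A quick sanity check of these numbers (a sketch of mine, not part of the original answer): with Fermat needing $r$ more points and Pascal needing $s$, imagine playing all $r+s-1$ remaining flips regardless; Fermat wins exactly when at least $r$ of them are heads.

```python
from math import comb

def fermat_win_prob(r: int, s: int) -> float:
    """Probability that the player needing r points beats the one needing s."""
    n = r + s - 1                       # flips that settle the game for sure
    return sum(comb(n, k) for k in range(r, n + 1)) / 2**n

print(fermat_win_prob(2, 3))            # 0.6875 = 0.25 + 0.25 + 0.1875
```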
The reward should then be divided according to these probabilities: Fermat wins with probability $0.25+0.25+0.1875 = 11/16$, so he should receive $68.75$ francs and Pascal the remaining $31.25$. | CommonCrawl
$(1)$ If at least one of $M',M''$ is contained in $\mathcal D$, then $M$ is contained in $\mathcal D$.
$(2)$ If $M$ is contained in $\mathcal D$, then at least one of $M', M''$ is contained in $\mathcal D$.
I wonder if such a subcategory $\mathcal D$ has a name in the literature of abelian categories?
Browse other questions tagged category-theory terminology abelian-categories or ask your own question.
Abelian subcategory generated by a full subcategory.
Subcategory of category of Module satisfies SSA?
Is the category of sheaves of objects from an abelian category abelian?
Do short exact sequences form an abelian category? | CommonCrawl |
Abstract : Curvilinear structure restoration in image processing procedures is a difficult task, which can be compounded when these structures are thin, i.e. when their smallest dimension is close to the resolution of the sensor. Many recent restoration methods involve considering a local gradient-based regularization term as prior, assuming gradient sparsity. An isotropic gradient operator is typically not suitable for thin curvilinear structures, since gradients are not sparse for these. In this article, we propose a mixed gradient operator that combines a standard gradient in the isotropic image regions, and a directional gradient in the regions where specific orientations are likely. In particular, such information can be provided by curvilinear structure detectors (e.g. RORPO or Frangi filters). Our proposed mixed gradient operator, that can be viewed as a companion tool of such detectors, is proposed in a discrete framework and its formulation / computation holds in any dimension; in other words, it is valid in $\mathbb Z^n$, $n \geq 1$. We show how this mixed gradient can be used to construct image priors that take edge orientation as well as intensity into account, and then involved in various image processing tasks while preserving curvilinear structures. Experiments carried out on 2D, 3D, real and synthetic images illustrate the relevance of the proposed gradient and its use in variational frameworks for both denoising and segmentation tasks. | CommonCrawl |
There are $n$ sticks with some lengths. Your task is to modify the sticks so that each stick has the same length.
You can either lengthen or shorten each stick. Both operations cost $x$, where $x$ is the difference between the new and original length.
What is the minimum total cost?
The first input line contains an integer $n$: the number of sticks.
Then there are $n$ integers: $p_1,p_2,\ldots,p_n$: the lengths of the sticks.
Print one integer: the minimum total cost. | CommonCrawl |
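A minimal sketch of the classic solution to the problem above (my own, not part of the statement): the optimal common length is any median of the lengths, since the sum of absolute deviations is minimized at a median.

```python
import sys

def min_total_cost(lengths):
    lengths.sort()
    median = lengths[len(lengths) // 2]   # any median works
    return sum(abs(p - median) for p in lengths)

def main():
    data = list(map(int, sys.stdin.buffer.read().split()))
    n, sticks = data[0], data[1:]
    print(min_total_cost(sticks[:n]))

if __name__ == '__main__':
    main()
```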
Aha! It is floating-point arithmetic that has bitten our feet. As you likely know, naively checking whether floating-point numbers are equal, i.e., comparing them as a == b, is a very bad idea because such numbers are stored with finite precision. Also, arithmetic operations involving floating-point numbers are carried out with finite precision as well. Indeed, given two nonzero floating-point numbers a and b, there is no guarantee that a == (a/b)*b. In general, when you hash two different floating-point numbers, you will, with very high probability, obtain two different hash values even if these numbers are very close to each other. If that happens, the numbers will, again with high probability, be placed in different buckets of a hash table.
In the example above, we were lucky to have (b/a)*a be equal to b, but not so lucky with (a/b)*b and a: these two are unfortunately not equal to each other and are therefore treated as distinct numbers by the dictionary (i.e., by its underlying hash table).
At this point, it should be clear that hashing floating-point numbers is as dangerous as directly checking for their equality using the == operator. Even tiny differences in their values may throw them into different buckets of an associated hash table.
Notice that now b and (b/a)*a are no longer considered to be two distinct numbers by the hash function, since they are truncated before being passed to it, and the same holds for a and (a/b)*b.
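A minimal demonstration of both the problem and the truncation trick (the concrete numbers and the helper are mine; the post's original code is not shown here):

```python
a = 0.1 + 0.2                # 0.30000000000000004 in IEEE-754 doubles
b = 0.3

print(a == b)                        # False
print(hash(a) == hash(b))            # False: different hash values
print({b: "found"}.get(a))           # None: the dict lookup misses

# Truncate keys onto a grid of width 0.01 before hashing, as described
# in the text, so all numbers in the same interval share a key.
def truncate(x):
    return int(x * 100) / 100

print(truncate(a) == truncate(b))                   # True
print({truncate(b): "found"}.get(truncate(a)))      # 'found'
```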
However, as I said above, the problem is only alleviated by the trick just presented. Indeed, the original problem (numbers extremely close to each other being considered distinct by the hash function) has not been entirely addressed: we divided the interval $[0,1)$ into intervals of width $0.01$ over which all numbers are considered equal, but on the shared boundaries of each interval, i.e., for numbers in the form $0.01N$ for $N = 1, 2, \ldots, 99$, the problem still persists since minor deviations around these numbers will fall into different intervals. As a concrete example, for a very small $\delta \gt 0$, $0.50 - \delta$ will fall within $[0.49, 0.50)$ while $0.50 + \delta$ will fall within $[0.50, 0.51)$; therefore, these two numbers will be treated differently by the hash function even though they are very close to each other.
To summarize: creating hash tables using floating-point numbers as keys is a tricky task as hashing them is similar to comparing them for equality using the == operator, and the result may cause your application to behave in unexpected ways. My recommendation: avoid doing this, if you can. | CommonCrawl |
Efficient inference of deep learning models is challenging and of great value to both the academic and industrial communities. In this paper, we focus on exploiting sparsity in input data to improve the performance of deep learning models. We propose an end-to-end optimization pipeline to generate programs for inference with sparse input. The optimization pipeline contains both domain-specific and general optimization techniques and is capable of generating efficient code without relying on off-the-shelf libraries. Evaluations show that we achieve significant speedups over state-of-the-art frameworks and libraries on a real-world application, e.g., $9.8\times$ over TensorFlow and $3.6\times$ over Intel MKL on a detection task in autonomous driving. | CommonCrawl
Our goal is compression of massive-scale grid-structured data, such as the multi-terabyte output of a high-fidelity computational simulation. For such data sets, we have developed a new software package called TuckerMPI, a parallel C++/MPI software package for compressing distributed data. The approach is based on treating the data as a tensor, i.e., a multidimensional array, and computing its truncated Tucker decomposition, a higher-order analogue to the truncated singular value decomposition of a matrix. The result is a low-rank approximation of the original tensor-structured data. Compression efficiency is achieved by detecting latent global structure within the data, which we contrast to most compression methods that are focused on local structure. In this work, we describe TuckerMPI, our implementation of the truncated Tucker decomposition, including details of the data distribution and in-memory layouts, the parallel and serial implementations of the key kernels, and analysis of the storage, communication, and computational costs. We test the software on 4.5 terabyte and 6.7 terabyte data sets distributed across 100s of nodes (1000s of MPI processes), achieving compression rates between 100 and 200,000$\times$, which equates to 99-99.999% compression (depending on the desired accuracy), in substantially less time than it would take to even read the same dataset from a parallel filesystem. Moreover, we show that our method also allows for reconstruction of partial or down-sampled data on a single node, without a parallel computer, so long as the reconstructed portion is small enough to fit in a single machine's memory, e.g., when reconstructing/visualizing a single down-sampled time step or computing summary statistics.
G. Ballard, A. Klinvex, T. G. Kolda. TuckerMPI: A Parallel C++/MPI Software Package for Large-scale Data Compression via the Tucker Tensor Decomposition. arXiv:1901.06043, 2019. | CommonCrawl |
But I do not understand how we can multiply a signal $x(t)$ with the delta function $\delta(t)$, as $\delta(t)$ is infinite at $t=0$. So this multiplication would amount to multiplying a real-valued function with $\infty$, which is obviously not what is meant here. I think.
So would it make the formula formally correct if we would swap the $\delta(t)$ with the indicator function?
because we simply define $x[n] \triangleq x(nT_s)$.
The difference between the Dirac delta and the indicator function is that one integrates to an area of 1 and the other integrates to an area of 0.
The sifting property $x(t)\,\delta(t-nT) = x(nT)\,\delta(t-nT)$ is true as long as $x(t)$ is continuous at $t=nT$.
Consequently, multiplying a signal with a Dirac impulse train results in a weighted impulse train, where the weights are the signal values at the sample instants. So what happens is that from a continuous signal $x(t)$ you only retain the sample values $x(nT)$, but you still have an expression that can be considered a continuous-time signal (in the sense that it can be integrated or convolved with another function).
For instance, convolving the sampled signal with a filter impulse response $h(t)$ gives $y(t)=\sum_{n}x(nT)\,h(t-nT)$, which is now an ordinary function that can be evaluated for any $t$.
In addition to the other answers, I would also like to point out that the "ideal" impulse sampler is usually considered to be followed by some filter of some kind, such as a zero-order hold. The infinite-amplitude spikes are "averaged out" by convolving with the filter impulse response.
Also, even without any filtering after the impulse sampling, the spectrum of the sampled signal creates a mathematical model for the aliased spectrum that is mathematically equivalent (up to a constant) with the periodic spectrum computed using the discrete-time Fourier transform (DTFT).
Can an Delta Dirac function (not response) ever be sampled? | CommonCrawl |
An improved understanding of how vortices develop and propagate under pulsatile flow can shed important light on the mixing and transport processes, including the transition to the turbulent regime, occurring in such systems. For example, the characterization of pulsatile flows in obstructed artery models serves to encourage research into flow-induced phenomena associated with changes in morphology, blood viscosity, wall elasticity and flow rate. In this work, an axisymmetric rigid model was used to study the behaviour of the flow pattern with varying constriction degree ($d_0$), Reynolds number ($Re$) and Womersley number ($\alpha$). Velocity fields were acquired experimentally using Digital Particle Image Velocimetry and generated numerically. For the acquisition of data, $Re$ was varied from 953 to 2500, $d_0$ was 1.0 cm and 1.6 cm, and $\alpha$ was fixed at 33.26 in the experiments and varied from 15 to 50 in the numerical simulations. Results for the Reynolds numbers considered showed that the flow pattern consisted of two main structures: a central jet around the tube axis and a recirculation zone adjacent to the inner wall of the tube, where vortices are shed. Using the vorticity fields, the trajectories of the vortices were tracked and their displacement over their lifetime calculated. The analysis led to a scaling-law equation for the maximum vortex displacement as a function of a dimensionless variable dependent on the system parameters $Re$ and $\alpha$.
The linear amplification mechanisms leading to streamwise-constant large-scale structures in laminar and turbulent channel flows are considered. A key feature of the analysis is that the Orr--Sommerfeld and Squire operators are each considered separately. Physically this corresponds to considering two separate processes: (i) the response of wall-normal velocity fluctuations to external forcing; and (ii) the response of streamwise velocity fluctuations to wall-normal velocity fluctuations. In this way we exploit the fact that, for streamwise-constant fluctuations, the dynamics governing the wall-normal velocity are independent of the mean velocity profile (and therefore the mean shear). The analysis is performed for both plane Couette flow and plane Poiseuille flow; and for each we consider linear amplification mechanisms about both the laminar and turbulent mean velocity profiles. The analysis reveals two things. First, that the most amplified structures (with a spanwise spacing of approximately $4h$, where $h$ is the channel half height) are to a great extent encoded in the Orr-Sommerfeld operator alone, thus helping to explain their prevalence. Second---and consistent with numerical and experimental observations---that Couette flow is significantly more efficient than Poiseuille flow in leveraging wall-normal velocity fluctuations to produce large-scale streamwise streaks.
Parameter extension simulation (PES) as a mathematical method for simulating turbulent flows has been proposed in the study. It is defined as a calculation of the turbulent flow for the desired parameter values with the help of a reference solution. A typical PES calculation is composed of three consecutive steps: Set up the asymptotic relationship between the desired solution and the reference solution; Calculate the reference solution and the necessary asymptotic coefficients; Extend the reference solution to the desired parameter values. A controlled eddy simulation (CES) method has been developed to calculate the reference solution and the asymptotic coefficients. The CES method is a special type of large eddy simulation (LES) method in which a weight coefficient and an artificial force distribution are used to model part of the turbulent motions. The artificial force distribution is modeled based on the eddy viscosity assumption. The reference weight coefficient and the asymptotic coefficients can be determined through a weight coefficient convergence study. The proposed PES/CES method has been used to simulate four types of turbulent flows. They are decaying homogeneous and isotropic turbulence, smooth wall channel flows, rough wall channel flows, and compressor blade cascade flows. The numerical results show that the 0-order PES solution (or the reference CES solution) has a similar accuracy as a traditional LES solution, while its computational cost is much lower. A higher order PES method has an even higher model accuracy.
Dielectric particles suspended in a weakly conducting fluid are known to spontaneously start rotating under the action of a sufficiently strong uniform DC electric field due to the Quincke rotation instability. This rotation can be converted into translation when the particles are placed near a surface providing useful model systems for active matter. Using a combination of numerical simulations and theoretical models, we demonstrate that it is possible to convert this spontaneous Quincke rotation into spontaneous translation in a plane perpendicular to the electric field in the absence of surfaces by relying on geometrical asymmetry instead.
The paper presents a hybrid bubble hologram processing approach for measuring the size and 3D distribution of bubbles over a wide range of size and shape. The proposed method consists of five major steps, including image enhancement, digital reconstruction, small bubble segmentation, large bubble/cluster segmentation, and post-processing. Two different segmentation approaches are proposed to extract the size and the location of bubbles in different size ranges from the 3D reconstructed optical field. Specifically, a small bubble is segmented based on the presence of the prominent intensity minimum in its longitudinal intensity profile, and its depth is determined by the location of the minimum. In contrast, a large bubble/cluster is segmented using a modified watershed segmentation algorithm and its depth is measured through a wavelet-based focus metric. Our processing approach also determines the inclination angle of a large bubble with respect to the hologram recording plane based on the depth variation along its edge on the plane. The accuracy of our processing approach on the measurements of bubble size, location and inclination is assessed using the synthetic bubble holograms and a 3D printed physical target. The holographic measurement technique is further implemented to capture the fluctuation of instantaneous gas leakage rate from a ventilated supercavity generated in a water tunnel experiment. Overall, our paper introduces a low cost, compact and high-resolution bubble measurement technique that can be used for characterizing low void fraction bubbly flow in a broad range of applications. | CommonCrawl |
In a seminal paper of Cordero-Erausquin-McCann-Schmuckenschläger, the authors extend, using optimal transport techniques, classical functional and geometric interpolation inequalities from the Euclidean to the Riemannian setting. In particular, these results imply a "geodesic" version of the celebrated Brunn-Minkowski inequality.
Sub-Riemannian manifolds can be described as limits of Riemannian ones with Ricci curvature going to $-\infty$, and the generalization of the above results is not possible using the classical theory of Riemannian curvature bounds.
In this talk, we discuss how, under generic assumptions, these structures support interpolation inequalities. As a byproduct, we characterize the sub-Riemannian cut locus as the set of points where the squared sub-Riemannian distance fails to be semiconvex. The techniques are based on sub-Riemannian optimal transport and Jacobi fields. | CommonCrawl |
Disclaimer: I know relatively little about this application. Corrections welcome.
In this post we see how SymPy can simplify common numeric calculations, particularly in Bayesian inference problems.
Imagine you are a scientist studying some counting process (like radioactive decay or the number of page requests on a web server). You describe this process with a Poisson random variable and try to learn the rate parameter of this distribution by observing some random samples.
If you have no preconceptions about the rate then this problem is easy. You just divide total counts by total time and you're done.
A more complex problem arises when external theory provides prior information about your rate parameter (for example, physics might impose rules on the rate of radioactive decay). Let's model this problem in SymPy. For the sake of concreteness, let's arbitrarily assume that $\lambda$, the rate parameter, follows a Beta distribution with parameters a and b.
In the lab we observe many samples $x_i$ taken from count. From these we wish to find the most likely value of rate. The probability of any single value of rate given our data can be rewritten with Bayes' rule.
To find the maximizer of $p(\lambda \vert x_i)$ we set the derivative equal to zero. We simplify the computation by taking the log. Because log is monotonic this does not change the solution.
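A minimal sketch of this setup using sympy.stats (the post's original code is not shown, so the variable names and the single-observation simplification are mine):

```python
from sympy import Symbol, symbols, log, diff, simplify
from sympy.stats import Beta, Poisson, density

a, b = symbols('a b', positive=True)
lam = Symbol('lam', positive=True)                 # the rate parameter
k = Symbol('k', integer=True, nonnegative=True)    # one observed count

prior = density(Beta('p', a, b))(lam)              # Beta prior pdf at lam
likelihood = density(Poisson('x', lam))(k)         # Poisson pmf at k

# Unnormalized log-posterior; for samples x_1..x_n the likelihood term
# becomes a sum of log p(x_i | lam).
log_posterior = log(prior) + log(likelihood)

# The maximizer is a root of the derivative (the score function):
score = simplify(diff(log_posterior, lam))
print(score)
```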
SymPy reduces this Bayesian inference problem to finding roots of the above equation. I suspect that many prevalent numeric problems could be similarly accelerated through a symbolic preprocessing step.
Looking at the equation above it's clear that this problem can be simplified further. However I like the existing solution because it does not depend on the user possessing any mathematical expertise beyond the ability to describe their mathematical model (the derivatives, log, etc… are generally applicable to this problem). In what other automated ways can SymPy further process computations like this? What are other ways that aren't in SymPy but could be developed in the future?
I suspect that the problem given here is analytically solvable. To the extent possible SymPy should try to solve these problems. However for the vast number of problems without analytic solutions I suspect there is still a great deal we can do, either by reducing the problem as above or through the mathematically informed selection of numeric algorithms.
Various root finding algorithms are appropriate in different cases. Wikipedia suggests Householder's Method, a generalization on Newton's method for scalar systems with known derivatives. Perhaps in cases where SymPy is unable to solve the problem analytically it could select the correct numeric algorithm. Is this a reasonable use case for SymPy? | CommonCrawl |
A hash table is a data structure that maps keys to values. A hashing function is used to compute keys that are inserted into a table from which values can later be retrieved. As the name suggests, a distributed hash table (DHT) is a hash table that is distributed across many linked nodes, which cooperate to form a single cohesive hash table service. Nodes are linked in what is called an overlay network. An overlay network is simply a communication network built on top of another network. The Internet is an example, as it began as an overlay network on the public switched telephone network.
$\langle key, value \rangle$ pairs are stored on a subset of the network, usually according to some notion of "closeness" to the key.
A DHT network design allows the network to tolerate nodes coming and going without failure, and allows the network size to increase indefinitely.
The need for DHTs arose from early file-sharing networks such as Gnutella, Napster, FreeNet and BitTorrent, which were able to make use of distributed resources across the Internet to provide a single cohesive service.
Gnutella searches were inefficient, because queries would result in messages flooding the network.
Napster used a central index server, which was a single point of failure and left it vulnerable to attacks.
FreeNet used a key-based routing. However, it was less structured than a DHT and did not guarantee that data could be found.
Lookup efficiency ($O(\log(n))$) is similar to that of a centralized index, while having the benefits of a decentralized network.
Each participant has some unique network identifier.
They perform peer lookup, data storage and retrieval services.
There is some implicit or explicit joining procedure.
Communication need only occur between neighbors that are decided on by some algorithm.
In this report we'll go over some of the aspects common to all DHTs and dive deeper into a popular DHT implementation, called Kademlia.
Peer discovery is the process of locating nodes in a distributed network for data communication. This is facilitated by every node maintaining a list of peers and sharing that list with other nodes on the network. A new participant would seek to find their peers on the network by first contacting a set of predefined bootstrap nodes. These nodes are normal network participants who happen to part of some dynamic or static list. It is the job of every node on the network to facilitate peer discovery.
As peers come and go, these lists are repeatedly updated to ensure network integrity.
A DHT network efficiently distributes responsibility for the replicated storage and retrieval of routing information and data. This distribution allows nodes to join and leave with minimal or no disruption. The network can have a massive number of nodes (in the case of BitTorrent millions of nodes) without each node having to know about every other participant in the network.
In this way, DHTs are inherently more resilient against hostile attackers than a typical centralized system.
Arbitrary data may be stored and replicated by a subset of nodes for later retrieval. Data is hashed using a consistent hashing function (such as SHA256) to produce a key for the data. That data is propagated and eventually stored on the node or nodes whose node IDs are "closer" to the key for that data for some distance function.
Partitioned data storage has limited usefulness to a typical blockchain, as each full node is required to keep a copy of all transactions and blocks for verification.
The following graph is replicated and simplified from the reference. Degree is the number of neighbors with which a node must maintain contact.
The popularity of Kademlia over other DHTs is likely due to its relative simplicity and performance. The rest of this section dives deeper into Kademlia.
The number of messages necessary for nodes to learn about each other is minimized.
Nodes have enough information to route traffic through low-latency paths.
Parallel and asynchronous queries are made to avoid timeout delays from failed nodes.
The node existence algorithm resists certain basic distributed denial-of-service (DDoS) attacks.
The bit length of the Node ID should be sufficiently large to make collisions extremely unlikely when using a uniformly distributed random number generator.
A node wishing to join the network for the first time has no contacts in its $k$-buckets. In order for the node to establish itself on the network, it must contact one, or more than one, bootstrap node. These nodes are not special in any way other than being listed in some predefined list. They simply serve as a first point of contact for the requesting node to become known to more of the network and to find their closest peers.
There are a number of ways that bootstrap nodes can be obtained, including adding addresses to a configuration and using DNS seeds.
A joining node generates a random ID.
It contacts a few nodes it knows about.
It sends a FIND_NODE lookup request of its newly generated node ID.
The contacted nodes return the closest nodes they know about. The newly discovered nodes are added to the joining node's routing table.
The joining node then contacts some of the new nodes it knows about. The process then continues iteratively until the joining node is unable to locate any closer nodes.
This works because XOR satisfies the defining properties of a distance metric: $d(x,x)=0$, it is symmetric, and it obeys the triangle inequality.
The XOR metric implicitly captures a notion of distance in the preceding tree structure.
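As a hypothetical illustration (the helper names are mine, not from the Kademlia paper), the metric and the bucket a peer falls into can be computed as:

```python
def xor_distance(a, b):
    return a ^ b

def bucket_index(own_id, peer_id):
    """Index of the k-bucket a peer belongs to: the position of the
    highest bit in which the two IDs differ."""
    d = xor_distance(own_id, peer_id)
    assert d != 0, "a node does not store itself"
    return d.bit_length() - 1

# Peers sharing a long common prefix with us land in low-index buckets.
print(bucket_index(0b1011, 0b1010))  # 0 -> IDs differ only in the last bit
print(bucket_index(0b1011, 0b0011))  # 3 -> IDs differ in the highest bit
```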
Kademlia is a relatively simple protocol consisting of only four remote procedure call (RPC) messages that facilitate two independent concerns: peer discovery and data storage/retrieval.
PING/PONG - used to determine liveness of a peer.
FIND_NODE - returns at most $k$ nodes, which are closer to a given query value.
STORE - request to store a $\langle key, value \rangle$ pair.
FIND_VALUE - behaves the same as FIND_NODE by returning the $k$ closest nodes. If a node has the requested $\langle key, value \rangle$ pair, it will instead return the stored value.
Notably, there is no JOIN message. This is because there is no explicit join in Kademlia. Each peer has a chance of being added to a routing table of another node whenever an RPC message is sent/received between them. In this way, the node becomes known to the network.
The lookup procedure allows nodes to locate other nodes, given a node ID. The procedure begins with the initiator concurrently querying the $\alpha$ (concurrency parameter) closest nodes to the target node ID that it knows about. Each queried node returns the $k$ closest nodes it knows about. The querying node then proceeds in rounds, querying closer and closer nodes until it has found the node. In the process, both the querying node and the intermediate nodes have learnt about each other.
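A rough sketch of that loop (my own Python; `find_node` stands in for the FIND_NODE RPC, and node IDs are plain integers):

```python
ALPHA, K = 3, 20   # Kademlia's typical concurrency and bucket size

def iterative_lookup(target, seeds, find_node):
    """Iteratively query closer and closer peers until no progress is made."""
    shortlist = sorted(seeds, key=lambda p: p ^ target)[:K]
    queried = set()
    while True:
        batch = [p for p in shortlist if p not in queried][:ALPHA]
        if not batch:                      # no unqueried closer nodes remain
            return shortlist
        for peer in batch:
            queried.add(peer)
            for found in find_node(peer, target):   # up to K peers returned
                if found not in shortlist:
                    shortlist.append(found)
        shortlist = sorted(shortlist, key=lambda p: p ^ target)[:K]
```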
To store a $\langle key, value \rangle$ pair, the publisher locates the $k$ closest nodes to the key and sends each of them a STORE RPC, after which the data can be retrieved by participants in the network. Storing the pair on several nodes helps increase the availability of the data. Depending on the implementation, the data may eventually expire (say, after 24 hours).
Therefore, the original publisher may be required to republish the data before that period expires.
The retrieval procedure follows the same logic as storage, except a FIND_VALUE RPC is issued and the data received.
Each node organizes contacts into a list called a routing table. A routing table is a binary tree where the leaves are buckets that contain a maximum of $k$ nodes, aptly named $k$-buckets. These are nodes with some common node ID prefix, which is captured by the XOR metric.
(Figure omitted: an example routing-table tree, with nodes $D$ and $E$ in their own bucket.)
Initially, a node's routing table is not populated with $k$-buckets, but may contain a single node in a single $k$-bucket. As more nodes become known, they are added to the $k$-bucket until it is full. At this point, the node splits the bucket in two: one for nodes that share the same prefix as itself and one for all the others.
Peers within $k$-buckets are sorted from least to most recently seen.
Once a node receives a request or reply from a peer, it checks to see if the peer is contained in the appropriate $k$-bucket. Depending on whether or not the peer already exists, the entry is either moved or appended to the tail of the list (most recently seen). If a particular bucket is already size $k$, the node tries to PING the first peer in the list (least recently seen). If the peer does not respond, it is evicted and the new peer is appended to the bucket, otherwise the new peer is discarded. In this way, the algorithm is biased towards peers that are long-lived and highly available.
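A minimal sketch of this update rule (my own; `ping` stands in for the PING RPC and is assumed to return True when the peer answers):

```python
K = 20  # bucket capacity (Kademlia's k)

def update_bucket(bucket, peer, ping):
    """bucket is ordered least- to most-recently seen (head to tail)."""
    if peer in bucket:
        bucket.remove(peer)
        bucket.append(peer)          # move to tail: most recently seen
    elif len(bucket) < K:
        bucket.append(peer)          # room left: just append
    else:
        oldest = bucket[0]
        if ping(oldest):             # least-recently-seen peer still alive:
            bucket.remove(oldest)    # keep it, refresh its position,
            bucket.append(oldest)    # and discard the new peer
        else:
            bucket.pop(0)            # evict the dead peer
            bucket.append(peer)      # and admit the new one
```

This bias towards long-lived peers also underpins the DDoS resistance mentioned earlier: an attacker flooding the network with new nodes cannot displace established, responsive contacts.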
Since there is no verification of a node's ID, an attacker can select their ID to occupy a particular keyspace in the network. Once an attacker has inserted themselves in this way, they may censor or manipulate content in that keyspace, or eclipse nodes.
An attacker takes advantage of the fact that in practice, there are relatively few nodes in most parts of a 160-bit keyspace. An attacker injects themselves closer to the target than other peers and eventually could achieve a dominating position. This can be done cheaply if the network rules allow many peers to come from the same IP address.
An Eclipse attack is an attack that allows adversarial nodes to isolate the victim from the rest of its peers and filter its view of the rest of the network. If the attacker is able to occupy all peer connections, the victim is eclipsed.
The cost of executing an eclipse attack is highly dependent on the architecture of the network and can range from a small number of machines (e.g. with hundreds of node instances on a single machine) to requiring a full-fledged botnet. Reference shows that an eclipse attack on Ethereum's Kademlia-based DHT can be executed using as few as two nodes.
Identities must be obtained independently from some random oracle.
Nodes maintain contact with nodes outside of their current network placement.
Sybil attacks are an attempt by colluding nodes to gain disproportionate control of a network, and are often used as a vector for other attacks. Many, if not all, DHTs have been designed under the assumption that only a low fraction of nodes are malicious. A Sybil attack attempts to break this assumption by increasing the number of malicious nodes.
Associating a cost with adding new identifiers to the network.
Reliably joining real-world identifiers (IP address, MAC address, etc.) to the node identifier, and rejecting a threshold of duplicates.
Having a trusted central authority that issues identities.
Using social information and trust relationships.
An adversary wants to populate a particular keyspace interval $I$ with bad nodes in order to prevent a particular file from being shared. Let's suppose that we have a network with node IDs chosen completely at random through some random oracle. An adversary starts by executing join/leave operations until it has nodes in that keyspace. After that, it proceeds in rounds, keeping the nodes that are in $I$ and rejoining the nodes that aren't, until control is gained over the interval.
It should be noted that if there is a large enough cost for rejoining the network, there is a disincentive for this attack. In the absence of this disincentive, the cuckoo rule is proposed as a defence.
Consider a network that is partitioned into groups or intervals, in which nodes are positioned uniformly at random. Adversaries may proceed in rounds, continuously rejoining nodes from the least faulty group until control is gained over one or more groups, as described under the adaptive join-leave attack.
The cuckoo rule is a join rule that moves (cuckoos) nodes in the same group as the joining node to random locations outside of the group. It is shown that this can prevent adaptive join-leave attacks with high probability, i.e. a probability $1 - 1/N$, where $N$ is the size of the network.
$R_k(x)$ is a unique $k$-region containing $x$.
Balancing Condition - the interval $I$ contains at least $O(log(n))$ nodes.
Majority Condition - honest nodes are in the majority in $I$.
If a new node $v$ wants to join the system, pick a random $x \in [0, 1)$. Place $v$ into $x$ and move all nodes in $R_k(x)$ to points in $[0, 1)$ chosen uniformly and independently at random (without replacing any further nodes) .
Under suitable bounds on the adversarial fraction $\epsilon$, this rule is sufficient to prevent adaptive join-leave attacks with high probability.
Sen and Freedman modelled and analysed the cuckoo rule and found that, in practice, it tolerates very few adversarial nodes.
(Figure omitted: group sizes needed to tolerate different $\epsilon$ for 100,000 rounds.)
Notably, they show that the number of rounds to failure (i.e., until more than one-third of the nodes in a given group are adversarial) decreases dramatically with an increasing but small global fraction of adversarial nodes. An amendment rule is proposed, which allows smaller group sizes while maintaining Byzantine correctness. The reference warrants more investigation, but that is out of the scope of this report.
DHTs are a proven solution to distributed storage and discovery. Kademlia, in particular, has been successfully implemented and sustained in file-sharing and blockchain networks with participants in the millions. As with every network, it is not without its flaws, and careful network design is required to mitigate attacks.
Novel research exists, which proposes schemes for protecting networks against control from adversaries. This research becomes especially important when control of a network may mean monetary losses, loss of privacy or denial of service.
Wikipedia: "Distributed Hash Table" [online]. Available: https://en.wikipedia.org/wiki/Distributed_hash_table. Date accessed: 2019-03-08.
"Kademlia: A Peer-to-Peer Information System Based on the XOR Metric" [online]. Available: https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf. Date accessed: 2019-03-08.
Ethereum Wiki [online]. Available: https://github.com/ethereum/wiki/wiki/Kademlia-Peer-Selection#lookup. Date accessed: 2019-03-12.
Wikipedia: "Tapestry (DHT)" [online]. Available: https://www.wikiwand.com/en/Tapestry_(DHT). Date accessed: 2019-03-12.
"Towards a Scalable and Robust DHT" [online]. Available: http://www.cs.jhu.edu/~baruch/RESEARCH/Research_areas/Peer-to-Peer/2006_SPAA/virtual5.pdf. Date accessed: 2019-03-12.
"Low-resource Eclipse Attacks on Ethereum's Peer-to-Peer Network" [online]. Available: https://www.cs.bu.edu/~goldbe/projects/eclipseEth.pdf. Date accessed: 2019-03-15.
"Commensal Cuckoo: Secure Group Partitioning for Large-scale Services" [online]. Available: http://sns.cs.princeton.edu/docs/ccuckoo-ladis11.pdf. Date accessed: 2019-03-15.
"Overlay and P2P Networks" [online]. Available: https://www.cs.helsinki.fi/webfm_send/1339. Date accessed: 2019-04-04.
"Poisoning the Kad Network" [online]. Available: https://www.net.t-labs.tu-berlin.de/~stefan/icdcn10.pdf. Date accessed: 2019-04-04. | CommonCrawl
Mathematician, R user and beginner in Machine Learning.
Does there exist a matrix $A\neq I$ such that $A^n=1$ for any $n$?
How to calculate probability and expected value?
If a random variable is uniformly bounded, i.e., $|X_j| \leq C$ for all $j = 1,2,\ldots$, does the fourth moment exist? | CommonCrawl
In this paper, we study the asymptotic behavior of BV functions in complete metric measure spaces equipped with a doubling measure supporting a $1$-Poincar\'e inequality. We show that at almost every point $x$ outside the Cantor and jump parts of a BV function, the asymptotic limit of the function is a Lipschitz continuous function of least gradient on a tangent space to the metric space based at $x$. We also show that, at co-dimension $1$ Hausdorff measure almost every measure-theoretic boundary point of a set $E$ of finite perimeter, there is an asymptotic limit set $(E)_\infty$ corresponding to the asymptotic expansion of $E$ and that every such asymptotic limit $(E)_\infty$ is a quasiminimal set of finite perimeter. We also show that the perimeter measure of $(E)_\infty$ is Ahlfors co-dimension $1$ regular. | CommonCrawl |
Abstract: In this paper I sketch two new variations of the method of decomposition of unipotents in the microweight representations $(\mathrm E_6,\varpi_1)$ and $(\mathrm E_7,\varpi_7)$. To put them in context, I first very briefly recall the two previous stages of the method, an $\mathrm A_5$-proof for $\mathrm E_6$ and an $\mathrm A_7$-proof for $\mathrm E_7$, first developed some 25 years ago by Alexei Stepanov, Eugene Plotkin and myself (a definitive exposition was given in my paper "A third look at weight diagrams"), and an $\mathrm A_2$-proof for $\mathrm E_6$ and $\mathrm E_7$ developed by Mikhail Gavrilovich and myself in early 2000. The first new twist outlined in this paper is an observation that the $\mathrm A_2$-proof actually effectuates reduction to small parabolics, of corank 3 in $\mathrm E_6$ and of corank 5 in $\mathrm E_7$. This allows one to revamp proofs and sharpen existing bounds in many applications. The second new variation is a $\mathrm D_5$-proof for $\mathrm E_6$, based on stabilisation of columns with one zero. [I also devised a similar $\mathrm D_6$-proof for $\mathrm E_7$, based on stabilisation of columns with two adjacent zeroes, but it is too abstruse to be included in a casual exposition.] Also, I list several further variations. Actual detailed calculations will appear in my paper "A closer look at weight diagrams of types $(\mathrm E_6,\varpi_1)$ and $(\mathrm E_7,\varpi_7)$".
Key words and phrases: Chevalley groups, elementary subgroups, exceptional groups, microweight representation, decomposition of unipotents, parabolic subgroups, highest weight orbit. | CommonCrawl |
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1476-1515, 2017.
A number of statistical estimation problems can be addressed by semidefinite programs (SDP). While SDPs are solvable in polynomial time using interior point methods, in practice generic SDP solvers do not scale well to high-dimensional problems. In order to cope with this problem, Burer and Monteiro proposed a non-convex rank-constrained formulation, which has good performance in practice but is still poorly understood theoretically. In this paper we study the rank-constrained version of SDPs arising in MaxCut and in $\mathbb Z_2$ and $\rm SO(d)$ synchronization problems. We establish a Grothendieck-type inequality that proves that all the local maxima and dangerous saddle points are within a small multiplicative gap from the global maximum. We use this structural information to prove that SDPs can be solved within a known accuracy, by applying the Riemannian trust-region method to this non-convex problem, while constraining the rank to be of order one. For the MaxCut problem, our inequality implies that any local maximizer of the rank-constrained SDP provides a $(1 - 1/(k-1)) \times 0.878$ approximation of the MaxCut, when the rank is fixed to $k$. We then apply our results to data matrices generated according to the Gaussian $\mathbb Z_2$ synchronization problem, and the two-groups stochastic block model with large bounded degree. We prove that the error achieved by local maximizers undergoes a phase transition at the same threshold as for information-theoretically optimal methods.
| CommonCrawl
What is the name of a coinductive type defined with a total order relation?
In type theory, is there a name for a coinductive type simply defined with a successor operator and an equivalence relation?
And what would be the name of such a type if it were defined with a total order relation as well?
Also, as far as the successor operator is concerned, is there a recommended single-symbol notation for it that would provide a more concise alternative to the usual $succ(\alpha)$? For example, something like $\alpha^\triangleright$.
What is it called when !(a < b) and !(b < a) implies a = b?
What is the difference between $x:A$ and $x \Xi A$?
Are $<$ and $\leqq$ acceptable symbols to use for a strict weak ordering and its associated total preorder?
Relation between equivalence relation and cartesian product of two sets.
Formal definition of "equivalence" between two formalizations of a theory? | CommonCrawl |
Edward Farhi's paper on the Quantum Approximate Optimization Algorithm introduces a way for gate-model quantum computers to solve combinatorial optimization problems. However, D-Wave-style quantum annealers have focused on combinatorial optimization problems for some time now. What is gained by using QAOA on a gate-model quantum computer instead of using a quantum annealer?
One of the advantages, as stated in the paper you linked, is that with QAOA you can increase the precision arbitrarily, whereas QA will only find the solution with probability 1 as $T \to \infty$, which is impractical. In addition, if $T$ is too long, you're likely not to find the solution, as the success probability is not monotonic in $T$. I believe an example of this can be found in a fair-sampling paper by Matsuda et al. Figure 4 shows that for large $\tau$, using quantum annealing on a 5-qubit system, you are likely to find only 2 of the 3 possible states.
[arXiv:0808.0365v3] Ground-state statistics from annealing algorithms: Quantum vs classical approaches - Matsuda et al.
| CommonCrawl
You want to place a rectangular building in the forest so that no trees must be cut down. For each building size, your task is to calculate the number of ways you can do this.
Print $n$ lines that each contain $m$ integers.
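One way to attack this is with a 2D prefix sum over the grid: a placement of size $h \times w$ is valid exactly when its rectangle contains zero trees. Below is a straightforward $O(n^2 m^2)$ Python sketch; the input format (a list of strings with '*' marking a tree and '.' an empty cell) is an assumption, and large grids would need a more optimised counting step.

```python
def count_placements(forest):
    n, m = len(forest), len(forest[0])
    # pre[i][j] = number of trees in the top-left i x j sub-grid
    pre = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            pre[i + 1][j + 1] = (pre[i][j + 1] + pre[i + 1][j]
                                 - pre[i][j] + (forest[i][j] == '*'))
    ans = [[0] * m for _ in range(n)]   # ans[h-1][w-1] = ways for h x w
    for h in range(1, n + 1):
        for w in range(1, m + 1):
            for i in range(h, n + 1):
                for j in range(w, m + 1):
                    trees = (pre[i][j] - pre[i - h][j]
                             - pre[i][j - w] + pre[i - h][j - w])
                    if trees == 0:
                        ans[h - 1][w - 1] += 1
    return ans
```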
Explanation: For example, there are $5$ possible places for a building of size $2 \times 4$. | CommonCrawl |
Reconfigurable processors with fine-grained runtime-reconfigurable fabrics are used to speed up applications from different domains. Such a reconfigurable fabric allows loading of application-specific accelerators, where multiple accelerators can be combined using a coarse-grained runtime-reconfigurable $\mu$Program to speed up complex computationally intensive kernels. To allow a large degree of adaptivity in the reconfigurable fabric, as required by, e.g., multitasking systems, the $\mu$Program for a kernel should not be generated at compile time, as it would constrain the adaptivity of the system. To enable flexible and efficient use of the reconfigurable fabric, we propose the necessary algorithms for runtime: 1) accelerator placement (i.e., deciding where on the fabric an accelerator should be reconfigured at runtime); 2) $\mu$Program generation; and 3) $\mu$Program caching. Accelerator synthesis and implementation are done at compile time to reduce runtime overhead in generating accelerators. We evaluate the proposed algorithms using different application scenarios and demonstrate the proposed concepts on a field-programmable gate array-based prototype of a reconfigurable processor. In comparison with state-of-the-art reconfigurable processors that generate $\mu$Programs at compile time, we obtain an average speedup of $1.29\times$ (up to $1.84\times$). | CommonCrawl
Here's a variation of Discrete Peaceful Encampments: Player 3 has entered the game! (which itself is a variation of Peaceful Encampments).
You have 3 white queens, 3 black queens, 3 red queens, and 3 green queens. Place all these pieces onto a normal 8x8 chessboard in such a way that no queen threatens a queen of a different color.
Place 5 queens of each of four different colors onto a 10x10 checkerboard so that no queen threatens a queen of a different color.
Place 7 queens of each of four different colors onto a 12x12 checkerboard so that no queen threatens a queen of a different color.
At what point does it become possible to place more than $N-5$ queens of each of four different colors peacefully onto an $N\times N$ checkerboard?
it follows that we can build a symmetric solution where each army occupies a right-isosceles-triangle of side $6$.
so rotating a quarter or half-turn, other armies are just like A.
There exists no solution for $4$ armies of $9$ queens each on a $13\times13$ board, so Jonathan's $10$ on $14$ is the first with armies of $N-4$ queens.
These are the only solutions for armies of $6$ queens on an $11\times11$ board. For the $12\times12$ and larger boards, the search was modified to find only rotationally symmetrical solutions.
These solutions can be easily expanded to ever larger boards, with armies of $18$ queens on a $19\times19$ board and armies of $20$ queens on a $20\times20$ board. The claim that these are optimal is based on the fact that there were no centrally located queens in any solution on $10\times10$ or $11\times11$ boards. This suggests that your general solution for $4$ armies is itself optimal.
12 is the lower bound. Whether 12 or 13 works is yet to be seen.
| CommonCrawl
An experimental investigation on the near-field electromagnetic loss of thin copper layers has been presented using microfabricated microwave transceivers for applications to multi-chip microsystems. Copper layers in the thickness range of 0.2$\mu$m–200$\mu$m have been electroplated on Pyrex glass substrates. Microwave transceivers have been fabricated using the 3.5mm$\times$3.5mm nickel microloop antennas, electroformed on the silicon substrates. Electromagnetic radiation loss of the copper layers placed between the microloop transceivers has been measured as 10dB–40dB for the wave frequency range of 100MHz–1GHz. The 0.2$\mu$m-thick copper layer provides a shield loss of 20dB at frequencies higher than 300MHz, while showing a predominant decrease of shield loss to 10dB at lower frequencies. No substantial increase of the shield effectiveness has been found for copper shield layers thicker than 2$\mu$m. | CommonCrawl
Just a quick question to help me see if my reasoning is right. The speed of light is constant in all frames of reference. Does this mean that an observer travelling at the speed of light, taking time dilation into account, observes the passage of time of every other observer to be the same rate relative to theirs, since every observer would be travelling at the speed of light toward them?
You're going to get the usual objections that we can't answer except in the limit as one goes faster and faster; obviously for massive entities like us actually attaining the speed of light is off-limits.
Everything that they can see "crowds into" the point right in front of them, except for the single point immediately behind them.
The lattice also appears to be length-contracted; at higher rapidities $\alpha$ the speed these things are coming towards you is approximately a constant $c \tanh \alpha \approx c$ but the distance between them goes to zero like $\ell / \cosh \alpha.$ Therefore you appear to be passing more and more and more of them per second of your time.
These two effects of the stars wanting to tilt "forward" and the things you're passing flying backwards past you seem to meet up at a definite distance behind you: your uniform acceleration effectively creates an event horizon at a fixed distance behind you; things which pass you appear to fall towards this and redshift into stasis rather than fully disappear. However this wall of death is actually an effect of your acceleration and if you were to stop accelerating it would fall back further and further behind you, as the tilt "forward" stopped increasing.
Clocks which appear on the lattice appear to be getting time-dilated more and more, going slower and slower.
There is, technically speaking, no "limit where one accelerates all the way to the speed of light." The problem is that everyone measures light travel at the speed $c$ in their own local coordinates, so no matter how fast you start going, you still have an infinite distance to go! I like to refer to this as a "real-life Zeno paradox".
But we can try to stretch our imagination, to try to figure out what would be happening if you took these trends as far as they may go: for example, all of the stars crowding into the one point of the sky suggests a one-dimensional existence, but all of the inter-object distances shrinking also suggests that this line is only a handful of real honest-to-goodness points. So one might imagine that one needs to think of a sort of zero-dimensional three-point existence, there is the point where the photon "is" between emission and absorption, the entire future-pointing light cone of events appears as one point in front of it, and the emission event and its past-pointing light cone of events appears as one point behind it. There is no "time" per se as the photon hops from the first of these points to the middle to the end point; there are just the two transitions where it winks into existence and winks out, and as far as the photon is concerned they just happen one after the other.
| CommonCrawl
Abstract: YSb crystals are grown and the transport properties under magnetic field are measured. The resistivity exhibits metallic behavior under zero magnetic field and the low temperature resistivity shows a clear upturn once a moderate magnetic field is applied. The upturn is greatly enhanced by increasing magnetic field, finally resulting in a metal-to-insulator-like transition. With temperature further decreased, a resistivity plateau emerges after the insulator-like regime. At low temperature (2.5 K) and high field (14 T), the transverse magnetoresistance (MR) is quite large ($3.47 \times 10^4\%$). In addition, Shubnikov–de Haas (SdH) oscillation has also been observed in YSb. Periodic behavior of the oscillation amplitude reveals the related information about the Fermi surface, and two major oscillation frequencies can be obtained from the FFT spectra of the oscillations. The trivial Berry phase extracted from SdH oscillation, the band structure revealed by angle-resolved photoemission spectroscopy (ARPES), and first-principles calculations demonstrate that YSb is a topologically trivial material. | CommonCrawl
In this tutorial, the basic steps of working with DynamO are introduced by performing a simulation of a hard sphere fluid.
If you do not see the above output, please double check that you encountered no errors while installing/building DynamO. If you have problems and you have built DynamO yourself, you can return to the previous tutorial and recheck the output of the make command.
Create an initial "configuration" file, which is the starting point of the simulation.
Use the initial configuration file as the start of a simulation. Equilibrate or "relax" this configuration by running it through time in the simulation. The output of this stage is hopefully then an "equilibrated" configuration file.
Use the equilibrated configuration as the start of a simulation to collect data. You will also generate a final configuration file which can be used as the starting point for other simulations.
This tutorial will give you a general understanding of these steps.
A video of the hard sphere system which is generated and simulated in this tutorial.
In this tutorial you will simulate a hard-sphere fluid. The hard sphere is a simple molecular model used to capture the fundamental effects of "excluded-volume" interactions (see the reference entry for more details). A video of the initial configuration and equilibration performed in this tutorial is presented below. At the start of the video, the hard spheres are shown in their initial configuration. The particles are placed on a lattice and assigned random velocities. The particles are then coloured by their ID number and the simulation is "run".
Here we use periodic boundary conditions as they allow us to simulate a small amount of fluid as though it is part of an infinite/bulk system (see the reference entry for more details).
As the simulation proceeds, the initial lattice structure rapidly disappears. However, it is obvious from the clear colour banding that the particles have not actually moved very far from their initial positions. The system still has a configurational "memory" of its initial state. If we're trying to measure the ensemble average of properties, like the diffusion coefficient, we will need to ensure that the initial state has no effect on the results collected.
To equilibrate the system and move it away from this initial state, the simulation is then set to run at full speed for a few thousand collisions and then slowed down again to take a look at the results. We can see that the simulation has equilibrated well and the coloured particles are well mixed. This system should now be ready to sample "equilibrium"/ensemble-average data from.
There are only three commands to cover in this tutorial, so we'll cover them briefly and then go into each command in detail.
We'll look at each command individually in the following sections.
The first step in the brief example was to create the initial configuration file, called config.start.xml, using dynamod.
In this section, we will briefly learn about the configuration files of DynamO, which are the main input and output of DynamO, and how to generate configuration files using dynamod.
...the starting point for a simulation.
...for saving any snapshots of the system while it is being simulated.
...for saving the final state of the simulation. This can also then be used to continue the simulation later!
Every single parameter of the system is set in a configuration file, including the particle positions, interactions, boundary conditions and solver details. Many other simulation packages usually place some of this information in several different files, but DynamO only uses one file. This means there is a lot of information in this one file and it can be quite difficult to generate from scratch. So let's take a look at how we can generate a basic example configuration file to start us off.
This section is a list of the built-in example configurations that dynamod can produce. We ask dynamod to generate any one of the configurations listed there using the --pack-mode option (or -m for short).
-C [ --NCells ] arg (=7) Set the default number of lattice unit-cells in each direction.
-x [ --xcell ] arg Number of unit-cells in the x dimension.
-y [ --ycell ] arg Number of unit-cells in the y dimension.
-z [ --zcell ] arg Number of unit-cells in the z dimension.
--rectangular-box Set the simulation box to be rectangular so that the x,y,z cells also specify the simulation aspect ratio.
-d [ --density ] arg (=0.5) System density.
It will actually output the same result as running the following command.
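Written out in full (each flag is explained in the breakdown later in this section), the command is:

```
dynamod -m 0 -C 7 --i1 0 -d 0.5 -r 1 -o config.start.xml
```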
This is the exact command that was discussed in the brief overview, and it's the default set-up of the hard-sphere example. There are lots of options you can use to change the density, number of particles and their placement, and these are discussed in the next section.
The default FCC hard sphere system generated by dynamod.
Most of the options for this --pack-mode (and many other pack modes) control the initial placement of the particles.
When you create an initial configuration, you must be careful to place the particles so that there are no overlaps which would lead to invalid dynamics. If we placed two hard sphere particles so that they were overlapping, the system would be in an invalid state as "hard" particles cannot interpenetrate each other. On the other hand, we want to be able to "pack" the particles as close together as possible so that we can generate high density configurations easily. Obviously we cannot just randomly drop particles as this will quickly lead to overlaps, even in low density systems.
What we're looking for is a regular structure, or lattice, which maximises the distance between the different positions, or "sites", for a fixed size of system. This structure would ensure that we minimise the chance of any particles overlapping right at the start of the simulation. Such structures occur frequently in nature and they're called crystal lattices. You can take a look at Wikipedia's article on the closest way to pack spheres for more information on this topic.
For mono-sized spheres, there are three popular cubic crystal structures which are used by simulators to initially position particles. There is Face-Centred Cubic (FCC), Body-Centered Cubic (BCC), and the Simple (or Primitive) Cubic (SC). DynamO can use any of these three to initially place your particles, and this is selected using the first integer argument (--i1 X, where X=0 for FCC, X=1 for BCC and X=2 for SC).
The FCC crystal is often favoured for producing the initial particle positions as it is the naturally-forming crystal structure of single-sized hard spheres. Thus, it gives the closest packing you can physically achieve for mono-sized hard spheres without generating overlaps. It also provides a good starting point for other particle shapes and types too, so you'll often see rods, polymers, and other shapes initially arranged in an FCC lattice.
So, back to dynamod. When you pass -C7 --i1 0 to dynamod you are asking it to produce a $7\times7\times7$ (-C 7) FCC (--i1 0) lattice and place a single particle on each lattice site.
As the FCC lattice has 4 unique sites per unit cell, this will result in $N=4\times7^3=1372$ particles being generated. The size of the particles is then scaled to match the density passed using the --density option, or -d for short (by default we have -d 0.5).
-m 0 : Generate a hard sphere system (mode 0).
-C 7 : Create a $7\times7\times7$ lattice.
--i1 0 : The unit cell of the lattice is Face-Centred Cubic (FCC) (this has 4 "sites" per unit cell, so we'll have $4\times7^3=1372$ particles in total).
-d 0.5 : Scale the lattice so that the system will have a reduced density of 0.5.
-r 1 : Rescale the particle velocities so the system has an initial temperature of 1 (it will remain at 1 during the simulation as hard spheres are athermal).
-o config.start.xml : And write the result into a configuration file called config.start.xml.
Which indicates the command was successful. You can take a look at the contents of the configuration file, but the configuration file will be explained in more detail in the next tutorial.
The most complex part of this tutorial is now over. All that remains is to take this initial starting configuration file and actually run a simulation.
In this system, a simulation for 190 units of time is roughly proportional to $10^6$ events, so both commands should result in roughly the same simulation duration. It's just more natural to specify the duration in events (collisions) as the simulator processes these at a constant rate.
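As a sketch, the event-count form might look like the following; the flag names here are an assumption from memory, so run dynarun --help to confirm them before relying on this:

```
dynarun config.start.xml -c 1000000 -o config.end.xml
```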
Some collected data is also written to output.xml.bz2, but this data will contain influences of the initial crystalline configuration so it should be discarded. The purpose of this first run is to allow the system to have enough time to "relax" from this crystalline configuration and "forget" about this initial configuration. Ideally, our results should be the same regardless of where we started (a property of systems which are ergodic).
The final configuration is written out to the config.end.xml file, but this is not the only source of data outputted by the dynarun command. In the following section we discuss the contents of the output.xml.bz2 file.
The simulation will load and the visualiser windows will appear. The simulation will be paused at the start and you can un-pause it using the visualiser controls. If you close the visualiser windows, the simulation will automatically unpause and will carry on running.
dynarun has the ability to collect a wide range of properties for molecular and granular systems. These include complex properties such as transport coefficients (viscosity, thermal conductivity, and mutual/thermal diffusion) along with more traditional properties such as radial distribution functions, power loss, pressure tensors and much more (see the output plugin reference for more details). Some analysis of the more complex properties will be covered in the following tutorials, but there are some basic properties which we'll cover in this tutorial.
Any data collected on the simulation by dynarun is outputted to a compressed XML file called output.xml.bz2 (you can change the output file name using the --out-data-file option). Both the configuration files and the output files are written in XML, as it is a format that is easy for both humans and computers to read. We'll cover how to look at this data by hand now, but this data format is also easy for a computer to read (see Appendix A: Parsing Output and Config Files).
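As a quick illustration, here is a minimal Python sketch of machine-reading the compressed output. The "Temperature" tag name is an assumption — check your own output.xml for the exact element and attribute names produced by your enabled output plugins.

```python
import bz2
import xml.etree.ElementTree as ET

# Read the compressed XML output directly, without unpacking it first.
with bz2.open("output.xml.bz2") as f:
    root = ET.parse(f).getroot()

# Look up a tag by name; exact names depend on the enabled output plugins.
temperature = root.find(".//Temperature")
print(temperature.attrib if temperature is not None else "tag not found")
```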
Uncompressing the file (a one-liner, shown below) turns output.xml.bz2 into output.xml, and you will be able to open it using your favourite text editor. Even internet browsers can open XML files and an example output.xml file from the default hard sphere simulation is available at the link below.
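The decompression step uses the standard bzip2 tools (add -k if you want to keep the compressed copy as well):

```
bunzip2 output.xml.bz2
```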
Here you can see that the temperature is almost exactly 1. Hard spheres have no configurational internal energy, so once you set their temperature in an NVE simulation it will not fluctuate.
But as mentioned before these values are zero as the hard sphere fluid has an ideal heat capacity and internal energy. At the bottom of the file are correlation data for the thermal conductivity (ThermalConductivity tag) and other transport properties, but these will be covered in later tutorials. If you want more information on these tags or the available output plugins, please take a look at the output plugin reference documentation using the button below.
We've covered how to create an initial configuration using dynamod and how to "run" this configuration for a fixed number of events using dynarun. Finally, we started to take a look at some of the data that dynarun collects automatically.
This is just the tip of the iceberg as far as what is possible. In the next tutorial, we will take a look at more complex systems and how to edit the configuration files by hand to generate them. | CommonCrawl |
Thinkorswim has this thing where they compute an implied volatility for a stock. I have chatted with the TOS people but they aren't terribly helpful. Regardless, they did send me two images of what they consider to be the formula. I'm not exactly sure if this is a good formula or something they just made up to make me go away. So I have attached the two screenshots.
Based on my poking around I would guess it's the weekly implied averages of the options one month out?
Update: I'm starting a bounty, cause I need to see a real life example from the US stock exchange, such as CMG, NFLX or whatever. Although I believe the answer to be correct, I need a real life example to understand it.
What they gave you is Newton's formula.
If you have a function $f(x)$ then you can find the value $x_0$ such that $f(x_0) = 0$ by this method. It uses the derivative $f'$ which in your case is the vega.
$$f(x_0) = 0 \Leftrightarrow BS(x_0) = M$$ for some volatility $x_0$.
The equation with the $\epsilon$ means that you stop if two consecutive values are close enough.
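Here is a minimal Python sketch of this iteration for a Black–Scholes call, written from the standard formulas rather than from the screenshots; parameter names are illustrative.

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def vega(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

def implied_vol(M, S, K, T, r, x0=0.2, eps=1e-8, max_iter=100):
    # Newton's iteration on f(x) = BS(x) - M; f'(x) is the vega.
    x = x0
    for _ in range(max_iter):
        step = (bs_call(S, K, T, r, x) - M) / vega(S, K, T, r, x)
        x -= step
        if abs(step) < eps:   # two consecutive values close enough
            break
    return x
```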
Richard's answer is the correct answer to a slightly different question. I think what you're asking for is the weighted average option implied volatility for a stock.
The implied volatility of a stock is analogous to the CBOE's VIX Index for the S&P 500 Index (other securities have IV indices as well). The VIX uses a known methodology for imputing the implied volatility of a weighted strip of options in order to interpolate the one-month implied volatility of the index. A detailed description of the VIX' calculation is available on the CBOE website. Also, see the previous post for a detailed explanation on the evolution of the VIX.
The first step is always to determine the implied volatilities a cross-section of option contracts. Richard provides one such method. I do not want to detract from it.
The next task is to aggregate the individual option contracts in a meaningful way. There are likely significant differences in how the "sausage is made" amongst various brokers and data-providers.
I assume that most use methods and heuristics to come up with something analogous to the VIX. On the simplest level, the implied volatility index for any given stock is an average of the individual options' implied volatilities, weighted by open interest and maturity.
I cannot speak to ThinkOrSwim's exact approach, but I would be willing to bet it mirrors CBOE's pre-2014 approach. Also, I do recall that TradeStation's stock implied volatility algorithm is available in its native programming language—EasyLanguage. From what I recall, TradeStation calculates a stock's implied volatility as a weighted average of out-of-the-money puts and calls going forward on both the first and second expiration months.
On a side note, just as you can't actually trade an index, you cannot trade IV directly, but rather have to take a position in a tracking instrument or create a synthetic position.
Also, I am copying code from VBA which uses Newton's algorithm to find the implied volatility of a call option given the underlying price, exercise price, time, interest, target (usually market) price of a call, and dividend yield.
| CommonCrawl
As mentioned in the previous [blog], we introduced the big data framework into teaching in 2014, and we specifically chose to use Apache Spark in 2015. Besides calculating basic statistics, we also asked the students to do something more skillful -- matrix multiplications. The detailed assignment description can be found [here].
Solving large linear equation systems is very common in both engineering and scientific fields, and knowing how to manipulate matrices is a basic skill for engineers and scientists. These two examples, though seemingly simple, are quite representative of scientific computing. So how do we actually solve them?
Let's start with the first question, $X = A \times A^T \times A$. There are two ways to calculate $X$, namely $X = (A \times A^T) \times A$ and $X = A \times (A^T \times A)$. Mathematically, they are equivalent: same result and same computational complexity. However, from an engineering and systems perspective, there is a big difference! Why?
You must get the point after seeing this figure, right? If we call the intermediate result matrix $B$, we can see that $(A \times A^T)$ leads to a huge $B$ which is $10^6 \times 10^6$ in the first case, whereas $(A^T \times A)$ only leads to a very small $B$ which is $10^3 \times 10^3$. Now, you tell me, "which matrix is easier to distribute over the network?". There is going to be several orders of magnitude difference in the network traffic. If the matrix $A$ is very, very tall (i.e., contains a huge number of rows), it is not even possible to finish the calculation of $(A \times A^T)$. So, the lesson we learnt here is "Order does matter!"
We know getting a smaller $B$ is more beneficial, but how shall we carry on to compute $(A^T \times A)$? We know Spark cuts $A$ into many chunks and stores them separately on different machines. Let's call these chunks of $A$: $A_1, A_2, A_3, A_4, \ldots, A_m$. Note these chunks have the same number of columns (i.e., $10^3$) but may have different numbers of rows; in other words, their chunk sizes may differ.
The scheme is then: compute $B = A^T \times A = \sum_i A_i^T \times A_i$ by summing the small per-chunk products; broadcast the small $B$ to every worker; compute $X_i = A_i \times B$ locally on each chunk; and finally assemble all the $X_i$. More precisely, concatenate them. Then you have $X$. A sketch of this scheme follows.
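A minimal PySpark sketch; the RDD name chunks and the SparkContext variable sc are assumptions, and collect() at the end is only for illustration, since the real $X$ is a tall matrix.

```python
import numpy as np

# chunks: RDD of numpy arrays A_1..A_m, each with 10^3 columns.
# Step 1: B = A^T A = sum_i A_i^T A_i -- a small 10^3 x 10^3 matrix.
B = chunks.map(lambda Ai: Ai.T @ Ai).reduce(lambda x, y: x + y)

# Step 2: broadcast the small B so every worker has a local copy.
B_bc = sc.broadcast(B)

# Step 3: X_i = A_i B, computed locally chunk by chunk.
X_chunks = chunks.map(lambda Ai: Ai @ B_bc.value)

# Step 4: concatenate the X_i in chunk order to obtain X.
X = np.vstack(X_chunks.collect())
```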
I was actually a bit hesitant about whether to give the second question or not. I thought it might be too easy for the students. However, the truth is that most of the students actually did try to compute the complete matrix $A \times A^T$, which is so big that it cannot even fit into normal memory, and then extract the diagonal elements.
When the students brought such solutions to me and complained about the memory issue, I just asked them to do one thing -- "Can you tell me the definition of the diagonal elements and describe how you calculate them?"
They usually started with -- "Well, the first row of $A$ (dot) multiply the first column of $A^T$ ...".
Then I continued to ask -- "Then what is the first column of $A^T$?"
Suddenly, the students got the point -- "Haha, it is still the first row of the $A$!!!"
So, you do not need to calculate the whole matrix in order to get the diagonal elements. You just need to process the data set line by line; each row will give you one diagonal element.
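In code this is a one-liner per row (a sketch with numpy; rows stands for whatever iterator yields the rows of $A$):

```python
import numpy as np

# diag(A A^T)[i] is the dot product of row i with itself.
diag = [np.dot(row, row) for row in rows]
```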
As with the previous [median example], this set of questions was also very successful. The students quite liked the problem design even though very few of them actually got all the solutions right. One common issue was that the students tried very hard to have precise control over how the matrix/data set is cut in Spark. Controlling how the data is divided is not impossible, but it is simply unnecessary for solving the problem -- provided, of course, that you have sufficient knowledge of linear algebra and matrix operations. | CommonCrawl
Abstract: In the work we consider a topological module $\mathcal P$ of entire functions, which is the isomorphic image under the Fourier–Laplace transform of the Schwartz space $\mathcal E'$ of distributions compactly supported in a finite or infinite interval $(a;b)\subset\mathbb R$. We study some properties of closed submodules in the module $\mathcal P$ related to the local description problem. We also study issues of duality between closed submodules in $\mathcal P$ and subspaces in the space $\mathcal E=C^\infty(a;b)$ invariant w.r.t. differentiation.
Keywords: entire functions, Fourier–Laplace transform, local description of submodules, invariant subspaces, spectral synthesis, finitely generated submodules. | CommonCrawl |
Abstract: The study of continuous phase transitions triggered by spontaneous symmetry breaking has brought revolutionary ideas to physics. Recently, through the discovery of symmetry protected topological phases, it is realized that continuous quantum phase transition can also occur between states with the same symmetry but different topology. Here we study a specific class of such phase transitions in 1+1 dimensions -- the phase transition between bosonic topological phases protected by $Z_n\times Z_n$. We find in all cases the critical point possesses two gap opening relevant operators: one leads to a Landau-forbidden symmetry breaking phase transition and the other to the topological phase transition. We also obtained a constraint on the central charge for general phase transitions between symmetry protected bosonic topological phases in 1+1D. | CommonCrawl |
How can I construct a map $M_2(\mathbb Q)^3 \to M_2(\mathbb Q), (A,B,C) \mapsto (AB-BA)C$, where $M_2(\mathbb Q)$ denotes the space of 2x2 matrices?
This means that f is a symbolic function, whose arguments are symbolic variables. Without further assumptions, Sage will treat the value of f as an expression involving members of the symbolic ring, which is roughly equivalent to treating them as complex numbers. This implies that multiplication is commutative, hence your result.
Frederic's solution will indeed work, because Sage will not attempt to simplify your code for you, and won't simplify x*y-y*x to 0.
You can define a function as shown in the comments.
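A minimal sketch of such a definition in Sage, working in the matrix space $M_2(\mathbb Q)$ so that multiplication is genuinely non-commutative:

```python
# Sage session
M = MatrixSpace(QQ, 2)           # the space of 2x2 rational matrices

def f(A, B, C):
    return (A*B - B*A) * C       # the commutator [A,B] times C

A, B, C = M.random_element(), M.random_element(), M.random_element()
print(f(A, B, C))                # generally nonzero, unlike the symbolic version
```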
| CommonCrawl
I know that the maximal analytic continuation of a holomorphic function is an example of Riemann surfaces but don't know what it is used for.
What can we do with this surface?
| CommonCrawl
After 23, it appears that all possible numbers can be expressed as a combination of 4's and 9's.
As you can see, my method doesn't allow me to know that 23 is the maximum. It only suggests 23 as an answer.
Is there a better more decisive method to solve this problem?
GCD(4,9) = 1, which means that there exist $x,y \in \mathbb Z$ such that $4x + 9y = 1$. In fact we have $x = -2$ and $y=1$. Thus, if we can make a number $n$ using 2 or more 4s, we can also make $n+1$ (replace two 4s with one 9).
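A quick brute-force check confirms the pattern; by the Chicken McNugget theorem, the largest number not representable with the coprime pair 4 and 9 is $4 \cdot 9 - 4 - 9 = 23$.

```python
def representable(n):
    # n is representable iff n - 9y is a non-negative multiple of 4
    return any((n - 9*y) % 4 == 0 for y in range(n // 9 + 1))

print([n for n in range(1, 40) if not representable(n)])
# -> [1, 2, 3, 5, 6, 7, 10, 11, 14, 15, 19, 23]
```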
In the light of this, does the following tell you anything? | CommonCrawl |
Last weekend I implemented a logistic classifier for use in a side-project I've been working on for the last few months (stay tuned on that). For anyone reading along (probably no one), you may recall I was quite enamoured with the beauty with which C# allowed the mathematics to be expressed in code.
Unnecessary heap allocations and copying of data are avoided. I would guess this is the primary optimisation and accounts for about 10x improvement in performance. LINQ is the culprit here and the functional niceties it provides have been replaced by iterative loops.
The cost function is never calculated (only its gradient w.r.t. $\theta$). I would guess this accounts for an additional 2x improvement. I was using the cost function to adaptively decrease the alpha step size and also to exit the training loop early when a good enough solution was found. I still wanted to decrease alpha as the current solution nears a local minimum, as I've found this results in faster training times. However, this is now done by a fixed proportion each cycle (specified using the halfLives parameter). Also, the training now always executes a fixed number of iterations.
Array indexing is avoided in favour of unsafe code blocks and pointer arithmetic. I would guess this probably only accounts for a small improvement, as I suspect standard loop constructs are heavily optimised by the C# compiler.
Finally, I've added a new feature. Some of the classification tasks I have are pretty unbalanced (relative class frequency is 10:1 in the worst case), so I wanted to play with allowing some of the training data to have more impact on the optimisation than others. There is a new parameter weightClass0 which specifies the weight of class 0 training data relative to class 1.
Again, just wanted to put out a quick writeup of this - time to get back to using it in my super-exciting side project.
In this post we'll walk through implementing a logistic classifier in C#. A couple of features of the language allow for an implementation that is particularly nice - LINQ and expression bodied function members. Users of other languages take note!
For those who are just looking for an off-the-shelf library, the Apache-licensed source code is available on github and there is a package ('MathML') on nuget. I went to the effort of implementing this because the main ML libraries for the .NET framework have yet to be ported over to dotnet core. It's turned out to be worth the effort however - the exercise has proved to be a great way to obtain a deeper understanding of the behavior of the classifier.
$h(x, \theta)$ - the 'hypothesis function' - is interpreted as the probability that the features $x$ correspond to an input of class '1' (and $1-h(x, \theta)$ is interpreted as the probability the features correspond to an input of class '0').
If you have a categorical feature with k levels, you would typically split this into (k-1) binary 0/1 features.
Typically the first 'feature' will simply be a constant value of 1 to give a constant term $\theta_0$ in the hypothesis function.
In some scenarios, you may want to introduce features that are simply some function (e.g. $x^2$) of other features to allow the classifier to differentiate more complex regions in the feature space.
Of course the most difficult part is finding the vector $\theta$ such that the model is a good fit to our training set and hopefully generalizes well to new data.
An intuitive justification of this is given here (note: unfortunately, I didn't find a good formal justification of this online… but there are plenty of good books available). For a collection of feature vectors / classes, the total cost $J(\theta)$ is simply the sum of the individual cost values.
Our goal is to find the model parameters $\theta$ such that the total cost associated with our training set is minimum. Gradient decent can be used to achieve this.
(again, refer to a good book..). To find the optimal $\theta$, we can start with a naive guess and iteratively jump by a small amount proportional to the magnitude of the gradient in the direction of the gradient.
I've used a slightly atypical strategy for choosing the proportionality constant $\alpha$: start out with a relatively high value and when an iteration step results in a worse cost value (implying the algorithm has overstepped the local minimum), halve the current $\alpha$ and keep iterating. Repeat up to a maximum number of iterations.
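A language-agnostic sketch of this strategy in Python (not the C# implementation from the post; variable names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient(theta, X, y):
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

def train(X, y, alpha=1.0, max_iter=10000):
    theta = np.zeros(X.shape[1])
    prev = cost(theta, X, y)
    for _ in range(max_iter):
        theta -= alpha * gradient(theta, X, y)
        c = cost(theta, X, y)
        if c > prev:          # overstepped the local minimum
            alpha *= 0.5      # halve the step and keep iterating
        prev = c
    return theta
```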
where $\lambda$ controls the degree of regularization. Note that the final sum excludes the constant model parameter $\theta_0$.
for all components of $\theta$ except the constant term $\theta_0$, which should be calculated using the non regularized gradient function.
The implementation is a straightforward extension of the non-regularized version.
Ok, that's it! I'm off to actually use this thing for something real. | CommonCrawl |
Abstract: A $k$-star $S_k(v)$ in a plane graph $G$ consists of a central vertex $v$ and $k$ of its neighbor vertices. The height $h(S_k(v))$ and weight $w(S_k(v))$ of $S_k(v)$ are the maximum degree and degree-sum of its vertices, respectively. The height $h_k(G)$ and weight $w_k(G)$ of $G$ are the maximum height and weight of its $k$-stars.
Lebesgue (1940) proved that every 3-polytope of girth $g$ at least 5 has a 2-star (a path of three vertices) with $h_2=3$ and $w_2=9$. Madaras (2004) refined this by showing that there is a 3-star with $h_3=4$ and $w_3=13$, which is tight. In 2015, we gave another tight description of 3-stars for girth $g=5$ in terms of degree of their vertices and showed that there are only these two tight descriptions of 3-stars.
In 2013, we gave a tight description of $3^-$-stars in arbitrary plane graphs with minimum degree $\delta$ at least 3 and $g\ge3$, which extends or strengthens several previously known results by Balogh, Jendrol', Harant, Kochol, Madaras, Van den Heuvel, Yu and others and disproves a conjecture by Harant and Jendrol' posed in 2007.
There exist many tight results on the height, weight and structure of $2^-$-stars when $\delta=2$. In 2016, Hudák, Maceková, Madaras, and Široczki considered the class of plane graphs with $\delta=2$ in which no two vertices of degree 2 are adjacent. They proved that $h_3=w_3=\infty$ if $g\le6$, $h_3=5$ if $g=7$, $h_3=3$ if $g\ge8$, $w_3=10$ if $g=8$ and $w_3=3$ if $g\ge9$. For $g=7$, Hudák et al. proved $11\le w_3\le20$.
The purpose of our paper is to prove that every plane graph with $\delta=2$, $g=7$ and no adjacent vertices of degree 2 has $w_3=12$.
Keywords: plane graph, structure properties, tight description, weight, 3-star, girth.
The first author was supported by the Russian Foundation for Basic Research (grants 18-01-00353 and 16-01-00499). The second author's work was performed as a part of government work "Leading researchers on an ongoing basis" (1.7217.2017/6.7). | CommonCrawl |
Need help with this proof using Ferrers' graph or otherwise.
(i) Take the diagram of an arbitrary partition of $r+k$ into $k$ parts, then add $0$ to the smallest part, add $1$ to the second smallest part, add $2$ to the third, $\ldots$ add $(k-1)$ to the largest part and what have you got?
(ii) Take the diagram of an arbitrary partition of $r+k$ into $k$ parts, then subtract $1$ from each part and what have you got? Now reflect it so rows become columns and columns become rows and what have you got?
Number of Partitions of n into 4 parts equals the number of partitions of 3n into 4 parts of size at most n-1. | CommonCrawl |
Abstract This paper addresses the multi-item capacitated lot sizing problem. We consider a family of $N$ items which are produced in or obtained from the same production facility. Demands are deterministic for each item and each period within a given horizon of $T$ periods.
First, we consider the case that a single setup cost is incurred in each period when an order for any item is placed in that period. We develop an exact branch-and-bound method which can be efficiently used for problems of moderate size. For large problems we propose a partitioning heuristic. The partitioning heuristic partitions the complete horizon of $T$ periods into smaller intervals, and specifies an associated dynamic lot sizing model for each of these. The intervals are of a size which permits the use of the exact branch-and-bound method.
The partitioning heuristic can, in the single-item case, be implemented with complexity $O(T^2 \log \log T)$ and, for the general multi-item model, in $O(N^2 T^2 \log T \log C^*)$ time, where $C^*$ represents the largest among all periods' capacities. We show that our heuristic is $\epsilon$-optimal as $T\rightarrow\infty$, provided that some of the model's parameters are uniformly bounded from above and from below.
Subsequently, we further generalize the model to include additional item-dependent setup costs and provide extensive numerical studies to evaluate the performance under various data constellations. | CommonCrawl
This book introduces graduate students and researchers in mathematics and the sciences to the multifaceted subject of the equations of hyperbolic type, which are used, in particular, to describe propagation of waves at finite speed. Among the topics carefully presented in the book are nonlinear geometric optics, the asymptotic analysis of short wavelength solutions, and nonlinear interaction of such waves. Studied in detail are the damping of waves, resonance, dispersive decay, and solutions to the compressible Euler equations with dense oscillations created by resonant interactions. Many fundamental results are presented for the first time in a textbook format. In addition to dense oscillations, these include the treatment of precise speed of propagation and the existence and stability questions for the three wave interaction equations. One of the strengths of this book is its careful motivation of ideas and proofs, showing how they evolve from related, simpler cases. This makes the book quite useful to both researchers and graduate students interested in hyperbolic partial differential equations. Numerous exercises encourage active participation of the reader. The author is a professor of mathematics at the University of Michigan. A recognized expert in partial differential equations, he has made important contributions to the transformation of three areas of hyperbolic partial differential equations: nonlinear microlocal analysis, the control of waves, and nonlinear geometric optics.
This is a textbook for an introductory graduate course on partial differential equations. Han focuses on linear equations of first and second order. An important feature of his treatment is that the majority of the techniques are applicable more generally. In particular, Han emphasizes a priori estimates throughout the text, even for those equations that can be solved explicitly. Such estimates are indispensable tools for proving the existence and uniqueness of solutions to PDEs, being especially important for nonlinear equations. The estimates are also crucial to establishing properties of the solutions, such as the continuous dependence on parameters. Han's book is suitable for students interested in the mathematical theory of partial differential equations, either as an overview of the subject or as an introduction leading to further study.
This modern take on partial differential equations does not require knowledge beyond vector calculus and linear algebra. The author focuses on the most important classical partial differential equations, including conservation equations and their characteristics, the wave equation, the heat equation, function spaces, and Fourier series, drawing on tools from analysis only as they arise. Within each section the author creates a narrative that answers the five questions: What is the scientific problem we are trying to understand? How do we model that with PDE? What techniques can we use to analyze the PDE? How do those techniques apply to this equation? What information or insight did we obtain by developing and analyzing the PDE? The text stresses the interplay between modeling and mathematical analysis, providing a thorough source of problems and an inspiration for the development of methods.
This volume contains research and expository articles based on talks presented at the 2nd Symposium on Analysis and PDEs, held at Purdue University. The Symposium focused on topics related to the theory and applications of nonlinear partial differential equations that are at the forefront of current international research. Papers in this volume provide a comprehensive account of many of the recent developments in the field. The topics featured in this volume include: kinetic formulations of nonlinear PDEs; recent unique continuation results and their applications; concentrations and constrained Hamilton-Jacobi equations; nonlinear Schrodinger equations; quasiminimal sets for Hausdorff measures; Schrodinger flows into Kahler manifolds; and parabolic obstacle problems with applications to finance. The clear and concise presentation in many articles makes this volume suitable for both researchers and graduate students.
This volume presents the state of the art in several directions of research conducted by renowned mathematicians who participated in the research program on Nonlinear Partial Differential Equations at the Centre for Advanced Study at the Norwegian Academy of Science and Letters, Oslo, Norway, during the academic year 2008-09. The main theme of the volume is nonlinear partial differential equations that model a wide variety of wave phenomena. Topics discussed include systems of conservation laws, compressible Navier-Stokes equations, Navier-Stokes-Korteweg type systems in models for phase transitions, nonlinear evolution equations, degenerate/mixed type equations in fluid mechanics and differential geometry, nonlinear dispersive wave equations (Korteweg-de Vries, Camassa-Holm type, etc.), and Poisson interface problems and level set formulations.
This text offers students in mathematics, engineering, and the applied sciences a solid foundation for advanced studies in mathematics. Features coverage of integral equations and basic scattering theory. Includes exercises, many with answers. 1988 edition.
What distinguishes differential geometry in the last half of the twentieth century from its earlier history is the use of nonlinear partial differential equations in the study of curved manifolds, submanifolds, mapping problems, and function theory on manifolds, among other topics. The differential equations appear as tools and as objects of study, with analytic and geometric advances fueling each other in the current explosion of progress in this area of geometry in the last twenty years. This book contains lecture notes of minicourses at the Regional Geometry Institute at Park City, Utah, in July 1992. Presented here are surveys of breaking developments in a number of areas of nonlinear partial differential equations in differential geometry. The authors of the articles are not only excellent expositors, but are also leaders in this field of research. All of the articles provide in-depth treatment of the topics and require few prerequisites and less background than current research articles.
Suitable for advanced undergraduate and beginning graduate students taking a course on mathematical physics, this title presents some of the most important topics and methods of mathematical physics. It contains mathematical derivations and solutions - reinforcing the material through repetition of both the equations and the techniques.
An Elementary Course in Partial Differential Equations is a concise, 1-term introduction to partial differential equations for the upper-level undergraduate/graduate course in Mathematics, Engineering and Science. Divided into two accessible parts, the first half of the text presents first-order differential equations while the later half is devoted to the study of second-order partial differential equations. Numerous applications and exercises throughout allow students to test themselves on key material discussed.
This book is a reader-friendly, relatively short introduction to the modern theory of linear partial differential equations. An effort has been made to present complete proofs in an accessible and self-contained form. The first three chapters are on elementary distribution theory and Sobolev spaces with many examples and applications to equations with constant coefficients. The following chapters study the Cauchy problem for parabolic and hyperbolic equations, boundary value problems for elliptic equations, heat trace asymptotics, and scattering theory. The book also covers microlocal analysis, including the theory of pseudodifferential and Fourier integral operators, and the propagation of singularities for operators of real principal type. Among the more advanced topics are the global theory of Fourier integral operators and the geometric optics construction in the large, the Atiyah-Singer index theorem in $\mathbb R^n$, and the oblique derivative problem. | CommonCrawl |
This module implements several algorithms for extracting phenological information from time profiles. These time profiles should represent vegetation status, as given for instance by NDVI, LAI, etc.
where $A+B$ is the maximum value and $B$ is the minimum.
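A sketch of one common parameterisation of such a double logistic follows; the exact form used by the module is an assumption, with $x_0$ and $x_2$ taken as the two inflection dates and $x_1$, $x_3$ controlling the steepness of the transitions.

```python
import numpy as np

def double_logistic(t, A, B, x0, x1, x2, x3):
    # Rises from B to A+B around t = x0, falls back to B around t = x2.
    return A * (1.0 / (1.0 + np.exp((x0 - t) / x1))
                - 1.0 / (1.0 + np.exp((x2 - t) / x3))) + B
```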
By definition, this is $x_0$.
We define $t_1$ as the date for which the previous straight line reaches the maximum value.
By definition this is $g'(x_2)$.
Only one application is provided at this time. It allows fitting double logistics to each pixel of an image time series. The output contains 2 double logistics, one for the main phenological cycle and another one for a secondary cycle. This secondary cycle may not be present in the input data. This should not have any impact on the estimation of the main cycle.
The application can generate a 12 band image, where each band is one of the parameters of the 2 double logistics in the order $A$, $B$, $x_0$, $x_1$, $x_2$, $x_3$.
The application can also instead generate an output image with the same number of bands as the input image, but replacing the value of each date for each pixel by the value taken by the logistic fitting.
Finally, the application can output an image where each band is one of the phenological metrics for the 2 cycles. The order of the metrics is $g'(x_0)$, $t_0$, $t_1$, $t_2$, $t_3$, $g'(x_2)$. | CommonCrawl |
Abstract: We detect topological semigroups that are topological paragroups, i.e., are isomorphic to a Rees product of a topological group over topological spaces with a continuous sandwich function. We prove that a simple topological semigroup $S$ is a topological paragroup if one of the following conditions is satisfied: (1) $S$ is completely simple and the maximal subgroups of $S$ are topological groups, (2) $S$ contains an idempotent and the square $S\times S$ is countably compact or pseudocompact, (3) $S$ is sequentially compact or each power of $S$ is countably compact. The last item generalizes an old Wallace's result saying that each simple compact topological semigroup is a topological paragroup. | CommonCrawl |
In the country of Dalgonia, there is only one type of fake coins and only one type of genuine coins.
All genuine coins have the same weight.
All fake coins have the same weight.
(but it is not known which of these two weights is larger).
Furthermore, the genuine coins have the following outrageous property: whenever somebody can logically deduce that a certain coin is genuine, this coin evaporates into thin air (and disappears forever from Dalgonia).
Cosmo puts $N\ge3$ coins on the table and tells Fredo: "Exactly $N-1$ of these coins are genuine and exactly one of them is fake." Then Cosmo leaves the room. On the table, there is a balance with two pans (but there are no weights).
Question: For which values of $N\ge3$ is Fredo able to identify the fake coin and to determine whether it is heavier or lighter than the genuine coins of Dalgonia?
Any useful weighing will be of an even number of coins, with the same number on each side of the balance. If the result is balanced, all the coins must be genuine and so will immediately disappear. Therefore if you start with an odd number of coins, there will always be the possibility that all weighings are balanced, eliminating an even number of genuine coins each time, until only the fake coin is left. At this point the fake coin is identified, but its weight cannot be determined.
If you start with an even number of coins, put half on each side of the balance; the result must be unbalanced. Now take the heavier side and weigh individual coins against each other. Either a weighing will be unbalanced, in which case the heavier coin is the fake, or you will eliminate coins until one or none are left. If none are left, the fake is light and you can repeat the process for the lighter side.
If one coin is left, call it $H$ and take three coins $A,B,C$ from the lighter side. Weigh $HA$ vs $BC$.
If it balances, all are genuine and disappear; now the fake coin must be light, and you can weigh the other coins individually until you find it or it's the only remaining coin.
If $HA$ is lighter, $A$ must be fake and light. If $HA$ is heavier, then either $H$ is fake and heavy or one of $B$, $C$ is fake and light; the remaining coins are proven genuine and disappear, and weighing $B$ against $C$ settles it -- if they are unbalanced, the lighter one is the fake, and if they balance, $H$ is the fake and heavy.
Thus you can always find and determine the relative weight of the fake coin.
It works for any $N \ge 4$.
If $N=3$ you cannot guarantee success. You must weigh only 2 coins. If the coin not measured is fake, then you will be successful since the scale will balance and those will disappear, leaving you with the fake. But if the fake is measured, then the coin not measured will disappear, leaving you with 2 coins and no reference to know which is fake.
For $N \ge 4$, the solution is as follows.
If the scales balance, all the measured coins disappear and the coin left out is fake. This can only occur when $N$ was odd.
Otherwise, one side will drop. The coin left out (if $N$ was odd) will disappear since it is known that the fake coin was measured. You are left with one set of coins which may contain a fake heavy coin, and one which may contain a fake light coin. Each group has $M \ge 2$ coins in it.
Take the heavy group and weigh two coins against each other until you run out of coins or find the heavy fake coin (you can be more efficient by weighing groups of coins, but this question doesn't ask us to minimize uses of the scale). If you run out of coins to weigh, and have none left over, then simply repeat with the light pile to find the fake light coin. If instead, you had one coin left, then you know that it might be fake and heavy, or genuine and the fake is in the light pile. This can only occur if $M \ge 3$ and odd.
$HA$ is lighter. Thus, $H$ was not a fake heavy coin and $A$ is the fake light coin.
$HA$ is the same as $BC$. Again, $H$ was not a fake heavy coin and all the measured coins disappear. So, simply weigh the remaining coins in the light group 1 vs 1 until you find the fake or are left with one left over (which must be fake).
$HA$ is heavier. This means that either $H$ is heavy, or $B$ or $C$ are light. All other coins ($A$ and the rest of the coins in the light pile) disappear since they were genuine. So, weigh $B$ vs $C$ to see if either one was light (and fake). If not, then it was $H$.
It is possible for any value equal to or greater than 3, assuming that you have the ability to chop the coins into pieces with infinite precision. You do not need to have coins which have an even weight distribution.
For even numbers, you divide into two groups and weigh. Set aside the lighter half as L, the heavier side as H. Now split H into two parts and weigh. If they are equal, then H will disappear and you know that the fake is lighter. If H1 and H2 are unequal, then L will disappear and you know that the fake is heavier. Continue splitting and weighing until you have two coins or one coin left.
If your second weighing showed that the fake was heavier, then the fake is the heavier of the remaining coin(s). If your second weighing showed that the fake was lighter, then the fake is the lighter of the remaining coin(s).
For odd numbers, it is possible, using deduction and limits.
Coins can be split into pieces without losing material.
Coins can be weighed relative to themselves without proving anything. If I split a coin into pieces, I can continue adjusting the split until I have evenly divided the coin, and none of this causes the coin to disappear.
So, take all the coins and then weigh 2v1 until you have weighed every single combination of coins. If at any point 1 is heavier than 2, then all other coins will disappear and you know which coin is fake and that it's heavier.
Okay, now what if you don't find the fake? Split every coin into two equal parts (by weight), keeping track of which coins parts belong to which coin. Now weigh two halves of one coin against three halves of other coins until you have tested all combinations. If at any point a coin is heavier than its opposition, then the fake will be revealed and known to be heavier.
So what we have now is a ratio of $(N-1)/N$, where $N$ is the number of pieces we've split each coin into. Take the limit as $N$ approaches infinity and we have 1, which represents equality of weight, which is not possible. So, for any given threshold of measurement, we can either find the fake and know it's heavier, or find nothing and prove that the fake is lighter.
The key here is that any standard of accuracy is reachable, if we have a great deal of patience. In fact, if we know the standard of accuracy in advance, we can simply chop the coins into sufficiently-small pieces and do one round of weighings.
Knowing that fake is lighter (if it were heavier we would have found it), we can solve for an odd number of coins. Weigh the coins 1v1 until you have a single coin left standing. That coin is the fake, and you know it's lighter. For higher values, you can be more efficient by weighing larger piles against each other, but the end result is that you can narrow it down to 1 coin, and then you win.
N=3, N-1=2. There are 2 real coins and 1 fake coin. The 2 real coins will balance against each other and the other one is fake. This process will hold true for all values of N where N is greater than or equal to 3.
Mark all the coins with $c_1, c_2,\ldots,c_N$.
Pick one coin, $c_k$. Weigh all the other coins on the balance, putting half on either side (for the remaining $N-1$ coins to split evenly, $N$ must be odd).
If the balance is equal, $c_k$ is fake. | CommonCrawl |
Suppose that we are given a certain degree sequence. Is it then possible to construct a graph that has this degree sequence? In general, the answer is no. Such degree sequences that we can construct graphs for are defined below.
Definition: The sequence $(d_1, d_2, d_3, ..., d_n)$ is said to be realizable (or graphic) if and only if there exists a graph $G = (V(G), E(G))$ such that $|\: V(G) \:| = n$ and, for each $x_j \in V(G)$, $\deg(x_j) = d_j$.
For example, consider the sequence $(9, 2, 2, 1)$ for a simple graph. Is this realizable? The answer is no: there are only $4$ vertices, yet the highest degree is $9$, which would require a vertex adjacent to $9$ other vertices. That is impossible in a simple graph on $4$ vertices.
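Beyond such ad hoc arguments, realizability of a degree sequence can be tested mechanically, for instance with the Havel-Hakimi algorithm; a minimal Python sketch:

```python
def is_graphic(seq):
    """Havel-Hakimi test: True iff some simple graph has this degree
    sequence.  Repeatedly connect the highest-degree vertex to the
    next-highest ones and recurse on the reduced sequence."""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):              # needs more neighbours than exist
            return False
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:            # would need a negative degree
                return False
        seq.sort(reverse=True)
    return True

print(is_graphic([9, 2, 2, 1]))  # False, matching the argument above
print(is_graphic([2, 2, 2]))     # True: a triangle realizes it
```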
When defining your model interfaces, it is wise to stick to certain conventions that ensure that your model can be easily used by other people, and coupled with other models, possibly in other simulation tools than the one you use. Such conventions include what variable types and units to use, especially for input and output values but also for parameters, as well as variable names and several other things. Model coupling is hard enough in itself, even without the added complications and extra work associated with ensuring that the output variables of one model are compatible with the input variables of the model you want to connect it with. Coupling conventions could even have an effect on the accuracy and stability of the simulations.
In this article, we provide some recommendations and guidelines for coupling of variables and, more generally, design of model interfaces. Other articles on this site go into more detail about specific domains and subsystems commonly involved in maritime simulations.
There are a few basic guidelines that should always be followed, regardless of the model and the purpose for which it is created. Some of these look suspiciously similar to general software development guidelines, which should come as no surprise, considering that models are just highly specialised pieces of software.
Use FMI and package your models as FMUs. This way, your users are not forced to use the same simulation tools as you, but are free to use their preferred software and to freely combine your models with others.
Use standard units of measurement. For the majority of cases, it is strongly recommended to use SI units, without prefixes such as "kilo-", "micro-" and so on. Do not worry about whether the values the user will see on the screen are very large or very small; it should be the responsibility of the simulation software to scale/convert them to "user-friendly" units and values for display and editing purposes. (FMI also lets you specify the "display units" for your variables, which may be different from the units of the values that are passed between subsystems.) That said, there are certain domains and highly specialised applications where the principle of least astonishment dictates that other units be used, and then the next rule becomes very important.
Document your interfaces. Provide a description of what each variable is, how it should be used, the units in which it is measured, whether there are limitations on its value, what its default/initial value is, and whether it has any non-obvious dependencies on other variables. FMI lets you put this information directly inside the FMU's model description file, in a structured form that allows the information to be interpreted and used by simulation software too, so it's not just for the benefit of human users.
Use clear, understandable variable names. Avoid cryptic abbreviations, and do not assume that your users have a deep knowledge of the scientific context in which the model was made. There is generally little need to worry about the length of variable names; your users won't be writing them by hand most of the time anyway. They will be reading them, however, so make that as easy as possible. Example: If a variable represents angular velocity, calling it angular_velocity (or AngularVelocity or whichever convention you prefer) is usually vastly preferable to calling it angvel or omega or somesuch.
For connections that represent some kind of energy transfer from one system to another, such as the mechanical energy transferred by a rotating driveshaft or the electrical energy transferred by a power cable, we recommend the use of power bonds.
A power bond between two subsystems A and B consists of two variable couplings, where one represents an effort and the other represents a flow. For an electrical connection, for example, the effort is the electromotive force (i.e., voltage) while the flow is the current. The two couplings are oppositely directed: if A has effort as an output variable, then it has flow as an input variable, while B must have them the other way around.
Normally, the product of an effort and a flow has units of power (watt) and is a direct measure of the power that is transferred between the two subsystems. This makes it very easy to keep track of the energy flow through the system, to see where energy is produced and dissipated, and to locate violations of energy conservation. (If nothing else, this can be useful for debugging model code.) This feature is employed to good effect in the ECCO method.
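As a small illustration, here is a Python sketch of a mechanical power bond and the bookkeeping it enables (the class and names are ours for illustration, not part of the FMI API):

```python
class ShaftBond:
    """Mechanical power bond: effort = torque [N*m] (output of A),
    flow = angular velocity [rad/s] (output of B)."""
    def __init__(self, torque=0.0, angular_velocity=0.0):
        self.torque = torque
        self.angular_velocity = angular_velocity

    @property
    def power(self):
        # effort * flow, in watts; the sign gives the direction of flow
        return self.torque * self.angular_velocity

bond = ShaftBond(torque=150.0, angular_velocity=120.0)
print(f"power transferred A -> B: {bond.power:.0f} W")  # 18000 W
```

Summing such products over all bonds in a simulation gives exactly the energy audit described above.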
Some physical connections may be represented by multiple power bonds. The most straightforward example is probably a force acting on a body in three dimensions, which can be represented with one mechanical power bond for each spatial dimension.
A force-position coupling between subsystems A and B works well enough if A only needs to know B's position. However, if A needs B's velocity, it has to differentiate the position, and numerical differentiation is notoriously inaccurate and hard to get right. Numerical integration, on the other hand, is less susceptible to errors, so it makes sense to output the highest-order derivative that is likely to be needed in other subsystems, which is normally the velocity. Of course, the best course of action for the author of model B is to supply both position and velocity as outputs, so it can be coupled in either way.
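A toy numerical experiment illustrates the point (all numbers here are invented for the illustration): an alternating measurement error of size eps on the position turns into an error of roughly 2*eps/dt on the differenced velocity, which grows as the step size shrinks.

```python
import math

dt, eps = 0.01, 1e-4
t = [i * dt for i in range(1000)]
# position signal with a tiny alternating error on each sample
pos = [math.sin(x) + eps * (-1) ** i for i, x in enumerate(t)]
# naive finite-difference velocity
vel = [(pos[i + 1] - pos[i]) / dt for i in range(len(pos) - 1)]
err = max(abs(v - math.cos(x)) for v, x in zip(vel, t))
print(f"max velocity error: {err:.3f}")  # ~2*eps/dt = 0.02, plus O(dt)
```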
The above also holds for rotational systems, which can be realised as torque–angular velocity or as torque–angle couplings. Here, of course, we have the added complication that the angle is periodic, so that an angle $\theta$ is equivalent to $\theta + 2 \pi n$ for any integer $n$.
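One common way of handling the periodicity on the receiving side (a sketch, not mandated by any standard) is to wrap incoming angles into a single period:

```python
import math

def wrap_angle(theta):
    """Map an angle to the interval [-pi, pi)."""
    return theta - 2 * math.pi * math.floor((theta + math.pi) / (2 * math.pi))

print(wrap_angle(3 * math.pi))   # -> -3.141592... (i.e. -pi)
```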
For DC or single-phase AC connections, a power bond is realised with voltage as the effort variable and current as the flow variable. This can be trivially extended to 3-phase AC connections, which can simply be modelled as three single-phase connections: that is, as 3 effort variables and 3 flow variables corresponding to the voltages and currents in the 3 phases, respectively.
The representation above uses what is called a stationary reference frame. For 3-phase systems, a common alternative is a rotating reference frame, as this simplifies some models. The transformation from the stationary frame to the rotating frame is called the direct-quadrature-zero transformation, or dq0 for short. Assuming a balanced system, this transformation reduces the three AC signals to two DC signals. However, it requires that the phase angle be sent along with the two voltages.
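For reference, a Python sketch of one common (amplitude-invariant) convention for the dq0 transform follows; conventions for signs and scaling differ between texts, so treat this as illustrative:

```python
import math

def dq0(a, b, c, theta):
    """Park/dq0 transform of three phase quantities, amplitude-invariant
    form; theta is the rotating frame's phase angle in radians."""
    k = 2.0 / 3.0
    d = k * (a * math.cos(theta)
             + b * math.cos(theta - 2 * math.pi / 3)
             + c * math.cos(theta + 2 * math.pi / 3))
    q = -k * (a * math.sin(theta)
              + b * math.sin(theta - 2 * math.pi / 3)
              + c * math.sin(theta + 2 * math.pi / 3))
    z = (a + b + c) / 3.0
    return d, q, z
```

For a balanced three-phase signal and theta equal to the signal's phase angle, d and q come out constant, which is the reduction to two DC signals mentioned above.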
Separate articles give domain-specific coupling guidelines, covering among others:
- Hydrodynamic systems, such as ships' hulls, propellers, rudders, thrusters and so on.
- Power systems, including engines, generators and other power conversion and distribution components.
1. H. M. Paynter, 1961. Analysis and design of engineering systems: Class notes for M.I.T. course 2.751. M.I.T. Press, Boston.
2. P. C. Breedveld, 1984. Physical systems theory in terms of bond graphs. Twente University.
3. Dean C. Karnopp; Donald L. Margolis; Ronald C. Rosenberg, 2012. System dynamics: Modeling, simulation and control of mechatronic systems. 5th edition. John Wiley & Sons, Inc., Hoboken, New Jersey. ISBN 978-0-470-88908-4.
4. William H. Press; Saul A. Teukolsky; William T. Vetterling; Brian P. Flannery, 2002. Numerical recipes in C++: The art of scientific computing. 2nd edition. Cambridge University Press. ISBN 0-521-75033-4.
5. Dq0 Transform. Open Electrical.
6. R. H. Park, 1929. Two Reaction Theory of Synchronous Machines. AIEE Transactions, 48, pp. 716-727.
Is there a field extension of infinite degree with no intermediate field?
While studying prime fields, I learned that prime fields have no proper subfields.
At this point, I have some questions.
Suppose that $F$ is a field and $E$ is an extension of $F$. Suppose further that there is no intermediate field between $F$ and $E$.
1) Must both $F$ and $E$ be prime fields?
2) Is $E$ a simple extension of $F$?
3) Or can $[E : F]$ be infinite?
Any advice would be appreciated. Thank you!
1) No: for example, you can take any extension of degree $2$ (e.g. $\Bbb C / \Bbb R$). These need not be prime fields.
2) Yes. Take any element $\alpha \in E \setminus F$. Then $$F \subsetneq F(\alpha) \subseteq E,$$ and by our hypothesis this means that $F(\alpha) = E$.
3) No: if $\alpha$ is transcendental, then $F(\alpha^2)$ is a proper intermediate field between $F$ and $E = F(\alpha)$, contradicting the hypothesis. Hence $\alpha$ is algebraic over $F$, and $[E : F]$ is finite.
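To spell out why $F(\alpha^2)$ is strictly intermediate in the transcendental case (a standard degree computation): $$F \subsetneq F(\alpha^2) \subsetneq F(\alpha) = E, \qquad [F(\alpha) : F(\alpha^2)] = 2,$$ since $\alpha$ is a root of $X^2 - \alpha^2$ over $F(\alpha^2)$, and $\alpha \notin F(\alpha^2)$ because every element of $F(\alpha^2)$ is a rational function in even powers of $\alpha$.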
We prove that an $\omega$-categorical core structure primitively positively interprets all finite structures with parameters if and only if some stabilizer of its polymorphism clone has a homomorphism to the clone of projections, and that this happens if and only if its polymorphism clone does not contain operations $\alpha$, $\beta$, $s$ satisfying the identity $\alpha s(x,y,x,z,y,z) \approx \beta s(y,x,z,x,z,y)$.
This establishes an algebraic criterion equivalent to the conjectured borderline between P and NP-complete CSPs over reducts of finitely bounded homogeneous structures, and accomplishes one of the steps of a proposed strategy for reducing the infinite domain CSP dichotomy conjecture to the finite case.
Our theorem is also of independent mathematical interest, characterizing a topological property of any $\omega$-categorical core structure (the existence of a continuous homomorphism of a stabilizer of its polymorphism clone to the projections) in purely algebraic terms (the failure of an identity as above). | CommonCrawl |
I've tried to see this "intuitively", but I'm not able to see it as clearly as my professor does. I'd appreciate any help with this.
For an intuitive understanding just think of the complex plane.
For $a,b\in\mathbb R$, we have $|a+bi|^2 = a^2 + b^2 \ge b^2$, so $|a+bi| \ge |b|$.
I feel this is written a little more clearly than the other answers.
Let $\mathcal B_c$ denote the real-valued functions continuous on the extended real line and vanishing at $-\infty $. Let $\mathcal B_r$ denote the functions that are left continuous, have a right limit at each point and vanish at $-\infty $. Define $\mathcal A^n_c$ to be the space of tempered distributions that are the $n$th distributional derivative of a unique function in $\mathcal B_c$. Similarly with $\mathcal A^n_r$ from $\mathcal B_r$. A type of integral is defined on distributions in $\mathcal A^n_c$ and $\mathcal A^n_r$. The multipliers are iterated integrals of functions of bounded variation. For each $n\in \mathbb N$, the spaces $\mathcal A^n_c$ and $\mathcal A^n_r$ are Banach spaces, Banach lattices and Banach algebras isometrically isomorphic to $\mathcal B_c$ and $\mathcal B_r$, respectively. Under the ordering in this lattice, if a distribution is integrable then its absolute value is integrable. The dual space is isometrically isomorphic to the functions of bounded variation. The space $\mathcal A_c^1$ is the completion of the $L^1$ functions in the Alexiewicz norm. The space $\mathcal A_r^1$ contains all finite signed Borel measures. Many of the usual properties of integrals hold: Hölder inequality, second mean value theorem, continuity in norm, linear change of variables, a convergence theorem. | CommonCrawl |
What is the next term in the sequence 20,5,3,6,7,12, ... ?
I think they are asked daily. They are generally closed, but can't we do anything to reduce the number of questions asked in this manner? Furthermore, I think it would be good to point out that such questions can be asked on Puzzling.SE.
Adding a note to the sequences-and-series tag excerpt that questions that ask for the next term in the sequence are off-topic and can be asked on Puzzling.SE.
Problem: People who have a mathematical question about finding the next term in an arithmetic, geometric or harmonic progression might think that the question is off-topic.
Perhaps better than the first one: Note in the excerpt that it is for sequences and series related to calculus. Although I'm not sure that is the only thing the tag should be used for.
Adding a migration path to Puzzling.SE, so that question can be migrated there more easily.
Problem: Puzzling.SE is currently in beta, so I'm not sure if it is even allowed.
Please discuss whether we should do anything about this.
The vast majority of these questions are indeed off-topic. Their closing should continue.
However, doing this with appropriate explanation (e.g. pointing to Puzzling.SE) can become tedious. On the other hand, a migration path is not going to happen as long as Puzzling is in beta, given earlier discussion about this (as linked to by Martin Sleziak). Moreover, as some commenters pointed out, Puzzling.SE isn't too happy about these questions either, so probably the migration path would incite well-meant but actually unwanted behaviour.
I also think that introducing paragraphs about off-topicness of certain questions in tag excerpts will have a very marginal effect, because people don't usually read the full tag wiki, and the tag excerpt itself is generally too valuable to waste on what something is not.
But then, it is obvious that we should try and do something about this. Perhaps a comment template can be crafted for this type of question, that can be used by those who routinely find these questions and cannot muster the motivation to accompany every close vote with a detailed explanation.
As a slight variation on this answer, I propose a canonical question explaining why such questions are ambiguous, which can be linked to from such questions to help the OP understand the issue. This canonical question should be linked here to be discoverable. I wrote this candidate Q/A.
As others have pointed out, in many specific cases it is possible to provide a useful answer (eg if one extrapolation is widely agreed to be most natural, and most likely to be the OP's intention), so marking as duplicate is not always appropriate.
Declare that the full sequence is $$19,25,45,87,159,0,0,0,0,0,\ldots$$ and you can write down a simple formula involving Iverson brackets or Kronecker deltas.
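For instance, with Iverson brackets one such formula is $$a_n = 19\,[n=1] + 25\,[n=2] + 45\,[n=3] + 87\,[n=4] + 159\,[n=5],$$ which indeed continues with zeros from $n = 6$ on.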
The full sequence is 19, 25, 45, 87, 159, 269, 425, 635, 907, 1249, 1669, 2175, 2775, 3477, 4289, 5219.
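More generally, polynomial interpolation will justify any proposed continuation whatsoever; a Python sketch using the sequence from the question and an arbitrary next term:

```python
from fractions import Fraction

def with_next_term(prefix, anything):
    """Lagrange-interpolate a polynomial through (1, prefix[0]), ...,
    (n, prefix[-1]) and (n+1, anything): any 'next term' is consistent
    with *some* closed-form rule."""
    pts = list(enumerate(prefix + [anything], start=1))
    def p(x):
        total = Fraction(0)
        for xi, yi in pts:
            term = Fraction(yi)
            for xj, _ in pts:
                if xj != xi:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return [int(p(k)) for k, _ in pts]

print(with_next_term([20, 5, 3, 6, 7, 12], anything=42))
# -> [20, 5, 3, 6, 7, 12, 42]
```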
I think a comment linking to a well-written explanation of the issue would have been more effective.
Suppose it really is a mathematical question, not a puzzle. If they haven't done their own work (such as checking OEIS), then put it on hold.
I love watches, and I had an idea for a weird kind of watch movement (all of the stuff that moves the hands). It is made up of a central wheel, with one of the hands connected to it (in this case, the hour hand). This hand goes through a pivot, and then displays the time. I attached a video of a 3D mock-up here, because it is kind of hard to explain. My question is: are there any functions that would graph the movement of the end of the hand? I don't want to make the real prototype just yet.
I will take the origin to be the place the hand slides through, $y$ vertical positive up, $x$ horizontal positive right. Let the hand have length $L$ and the circle radius $R$. It appears $L$ is a little greater than $2R$, so it sticks out of the pivot even when the left end is at the farthest left point.
Denote with $l$ the length of the hand and with $R$ the radius of the circle.
As an exercise you can eliminate angle $\alpha$ and obtain an implicit relation between coordinates of point $M$, but there is not much that you can do with it. It is better to work with parametric equations. Select $l,R$ and calculate coordinates for a range of $\alpha$ angles.
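Putting the answer's setup into code: a minimal Python sketch of the tip's parametric path, where the wheel-centre position C and the numeric values are assumptions chosen to match the description ($L$ a little more than $2R$, pivot at the origin):

```python
import math

R, L = 1.0, 2.2        # wheel radius and hand length (L > 2R, as noted)
C = (0.0, -1.05)       # assumed wheel centre, pivot at the origin

def tip(alpha):
    """Position of the free end of the hand for wheel angle alpha.
    The hand is attached at P on the wheel and slides through the
    pivot at the origin, so the tip lies on the ray from P through
    the origin at distance L from P: M = P * (1 - L/|P|)."""
    px = C[0] + R * math.cos(alpha)
    py = C[1] + R * math.sin(alpha)
    dist = math.hypot(px, py)   # attachment-to-pivot distance (< L here)
    s = 1.0 - L / dist
    return px * s, py * s

path = [tip(2 * math.pi * k / 360) for k in range(360)]
```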
SoLid is a short baseline neutrino oscillation experiment, which is searching for sterile neutrinos at the SCK$\cdot$CEN BR2 reactor in Belgium. It uses a novel technology, combining PVT cubes of 5$\times$5$\times$5 cm$^3$ and $^6$LiF:ZnS sheets of $\sim$ 250 $\mu$m thickness. To detect anti-neutrino interactions, signals are read out by a network of wavelength shifting fibers and MPPCs. This fine granularity (12800 cubes) provides powerful tools to distinguish signal from background, but presents a challenge in ensuring homogeneous detector response and calibrating light yield and neutron detection efficiency. This poster describes the results of the quality assurance process with CALIPSO system, which was deployed to perform a first calibration of the 50 detector planes to identify and correct any deficient components during the detector construction. | CommonCrawl |
Jiang, J.Q. and Lloyd, B. (2002) Progress in the development and use of ferrate(VI) salt as an oxidant and coagulant for water and wastewater treatment. Water Research 36(6), 1397-1408.
This paper reviews the progress in preparing and using ferrate(VI) salt as an oxidant and coagulant for water and wastewater treatment. The literature revealed that due to its unique properties (viz. strong oxidizing potential and simultaneous generation of ferric coagulating species), ferrate(VI) salt can disinfect microorganisms, partially degrade and/or oxidise the organic and inorganic impurities, and remove suspended/colloidal particulate materials in a single dosing and mixing unit process. However, these findings have not yet led to the full-scale application of ferrate(VI) in the water industry, owing to difficulties associated with the relatively low yield of ferrate(VI), the instability of the chemical depending on its method of preparation, and the lack of adequate studies demonstrating its capabilities and advantages over existing water and wastewater treatment methods. Fundamental study is thus required to explore new preparation methods focusing on increasing the production yield and the product's stability while avoiding the use of hypochlorite or chlorine as the oxidant. Also, the application of ferrate(VI) in drinking water treatment has not been studied systematically, and future work in this field is recommended.
Sharma, V.K., Kazama, F., Jiangyong, H. and Ray, A.K. (2005) Ferrates (iron(VI) and iron(V)): environmentally friendly oxidants and disinfectants. Journal of Water and Health 3(1), 45-58.
Covers a wide range of water treatment topics. Iron(VI) and iron(V), known as ferrates, are powerful oxidants and their reactions with pollutants are typically fast with the formation of non-toxic by-products. Oxidations performed by Fe(VI) and Fe(V) show pH dependence; faster rates are observed at lower pH. Fe(VI) shows excellent disinfectant properties and can inactivate a wide variety of microorganisms at low Fe(VI) doses. Fe(VI) also possesses efficient coagulation properties, and enhanced coagulation can also be achieved using Fe(VI) as a preoxidant. The reactivity of Fe(V) with pollutants is approximately 3-5 orders of magnitude faster than that of Fe(VI). Fe(V) can thus be used to oxidize pollutants and inactivate microorganisms that have resistance to Fe(VI). The final product of Fe(VI) and Fe(V) reduction is Fe(III), a non-toxic compound. Moreover, treatments by Fe(VI) do not give any mutagenic/carcinogenic by-products, which makes ferrates environmentally friendly ions. This paper reviews the potential role of iron(VI) and iron(V) as oxidants and disinfectants in water and wastewater treatment processes. Examples are given to demonstrate the multifunctional properties of ferrates to purify water and wastewater.
Ghernaout, D., Ghernaout, B. and Naceur, M.W. (2011) Embodying the chemical water treatment in the green chemistry-A review. Desalination 271(1-3), 1-10.
Green chemistry (GC) is the key to sustainable development as it will lead to new solutions to existing problems. Moreover, it will present opportunities for new processes and products, and at its heart is scientific and technological innovation. This paper aims to contribute to a better understanding of the new challenges that chemistry is facing. A particular emphasis is accorded to the need for the development of environmentally friendly technologies for water treatment where GC can be satisfied. Indeed, following the establishment of the 12 Principles of GC, there has been a steady growth in our understanding of what GC means. Furthermore, there are great perspectives relating to the greening of chemical water treatment, especially in terms of ferrate(VI) addition, as oxidant/disinfectant/coagulant at the same time, and microchannel reactors, which would be considered as promising devices for water treatment due to their proved advantages.
Sharma, V.K. (2007) Disinfection performance of Fe(VI) in water and wastewater: a review. Water Science and Technology 55(1-2), 225-232.
Summary of disinfection work (mostly Sharma's) through 2006. Ferrate(VI) ($\mathrm{FeO_4^{2-}}$, Fe(VI)) has excellent disinfectant properties and can inactivate a wide variety of microorganisms at low Fe(VI) dosages. The final product of Fe(VI) is Fe(III), a non-toxic compound. The treatment by Fe(VI) does not give any chlorination by-products, which makes Fe(VI) an environmentally-friendly ion. The results demonstrate that Fe(VI) can inactivate Escherichia coli (E. coli) at lower dosages or shorter contact time than hypochlorite. Fe(VI) can also kill many chlorine-resistant organisms, such as aerobic spore-formers and sulphite-reducing clostridia, and would be highly effective in treating emerging toxins in the aquatic environment. Fe(VI) can thus be used as an effective alternative disinfectant for the treatment of water and wastewater. Moreover, Fe(VI) is now becoming economically available in commercial quantities and can be used as a treatment chemical to meet the water demand of this century. This paper reviews the potential role of Fe(VI) as disinfectant in water and wastewater treatment processes.
Sharma, V.K. (2010) Oxidation of nitrogen-containing pollutants by novel ferrate(VI) technology: A review. Journal of Environmental Science and Health Part a-Toxic/Hazardous Substances & Environmental Engineering 45(6), 645-667.
Very detailed review of reactions with DON compounds and some PPCPs. Nitrogen-containing pollutants have been found in surface waters and industrial wastewaters due to their presence in pesticides, dyes, proteins, and humic substances. Treatment of these compounds by conventional oxidants produces disinfection by-products (DBP). Ferrate(VI) ($\mathrm{FeO_4^{2-}}$, Fe(VI)) is a strong oxidizing agent and produces a non-toxic by-product, Fe(III), which acts as a coagulant. Ferrate(VI) is also an efficient disinfectant and can inactivate chlorine-resistant microorganisms. A novel ferrate(VI) technology can thus treat a wide range of pollutants and microorganisms in water and wastewater. The aim of this paper is to review the kinetics and products of the oxidation of nitrogen-containing inorganic (ammonia, hydroxylamine, hydrazine, and azide) and organic (amines, amino acids, anilines, sulfonamides, macrolides, and dyes) compounds by ferrate(VI) in order to demonstrate the feasibility of ferrate(VI) treatment of polluted waters of various origins. Several of the compounds can be degraded in seconds to minutes by ferrate(VI) with the formation of non-hazardous products. The mechanism of oxidation involves either one-electron or two-electron processes to yield oxidation products. Future research directions critical for the implementation of the ferrate(VI)-based technology for wastewater and industrial effluent treatment are recommended.
Sharma, V.K. (2011) Oxidation of inorganic contaminants by ferrates (VI, V, and IV)-kinetics and mechanisms: A review. Journal of Environmental Management 92(4), 1051-1073.
Stability of ferrate; absorption spectra for many Fe species; many rate constants for inorganics. Inorganic contaminants are found in water, wastewaters, and industrial effluents, and their oxidation using iron-based oxidants is of great interest because such oxidants possess multi-functional properties and are environmentally benign. This review makes a critical assessment of the kinetics and mechanisms of oxidation reactions by ferrates ($\mathrm{FeO_4^{2-}}$, $\mathrm{FeO_4^{3-}}$, and Fe(IV)). The rate constants ($k$, $\mathrm{M^{-1}\,s^{-1}}$) for a series of inorganic compounds by ferrates are correlated with thermodynamic oxidation potentials. Correlations agree with the mechanisms of oxidation involving both one-electron and two-electron transfer processes to yield intermediates and products of the reactions. Case studies are presented which demonstrate that inorganic contaminants can be degraded in seconds to minutes by ferrate(VI) with the formation of non-toxic products.
Yu, X. and Licht, S. (2008) Advances in electrochemical Fe(VI) synthesis and analysis. Journal of Applied Electrochemistry 38(6), 731-742.
Covers both electrochemical synthetic and analytical methods. Hexavalent iron species (Fe(VI)) have been known for over a century, and have long been investigated as oxidants for water purification, as catalysts in organic synthesis and, more recently, as cathodic charge storage materials. Conventional Fe(VI) syntheses include solution-phase oxidation (by hypochlorite) of Fe(III), and the synthesis of less soluble super-irons by dissolution of $\mathrm{FeO_4^{2-}}$ and precipitation with alternate cations. This paper reviews a new electrochemical Fe(VI) synthesis route including both in situ and ex situ syntheses of Fe(VI) salts. The optimized electrolysis conditions for electrochemical Fe(VI) synthesis are summarized. Direct electrochemical synthesis of Fe(VI) compounds has several advantages: shorter synthesis time, simplicity, reduced costs (no chemical oxidant is required) and a possible pathway towards more electro-active and thermally stable Fe(VI) compounds. The Fe(VI) analytical methodologies summarized in this paper are a range of electrochemical techniques. Fe(VI) compounds have been explored as energy storage cathode materials in both aqueous and non-aqueous phases in "super-iron" battery configurations. In this paper, electrochemical synthesis of a reversible Fe(VI/III) thin film towards a rechargeable super-iron cathode is also summarized.
Macova, Z., Bouzek, K., Hives, J., Sharma, V.K., Terryn, R.J. and Baum, J.C. (2009) Research progress in the electrochemical synthesis of ferrate(VI). Electrochimica Acta 54(10), 2673-2683.
Sharma's review on electrochemical synthesis. There is renewed interest in the +6 oxidation state of iron, ferrate(VI) ($\mathrm{FeO_4^{2-}}$), because of its potential as a benign oxidant for organic synthesis, as a chemical in developing cleaner ("greener") technology for remediation processes, and as an alternative for environment-friendly battery cathodes. This interest has led many researchers to focus their attention on the synthesis of ferrate(VI). Of the three synthesis methods, electrochemical, wet chemical and thermal, electrochemical synthesis has received the most attention due to its ease and the high purity of the product. Moreover, electrochemical processes use the electron as a so-called clean chemical, thus avoiding the use of any harmful chemicals to oxidize iron to the +6 oxidation state. This paper reviews the development of electrochemical methods to synthesize ferrate(VI). The approaches chosen by different laboratories to overcome some of the difficulties associated with the electrochemical synthesis of ferrate(VI) are summarized. Special attention is paid to parameters such as temperature, anolyte, and anode material composition. Spectroscopic work to understand the mechanism of ferrate(VI) synthesis is included. Recent advances in two new approaches, the use of an inert electrode and molten hydroxide salts, in the synthesis of ferrate(VI) are also reviewed. Progress made in the commercialization of continuous ferrate(VI) production is briefly discussed as well.
Licht, S., Wang, B.H. and Ghosh, S. (1999) Energetic iron(VI) chemistry: The super-iron battery. Science 285(5430), 1039-1042.
Science article; pertains to use in batteries, not water treatment. Higher-capacity batteries based on an unusual stabilized iron(VI) chemistry are presented. The storage capacities of alkaline and metal hydride batteries are largely cathode limited, and both use a potassium hydroxide electrolyte. The new batteries are compatible with the alkaline and metal hydride battery anodes but have higher cathode capacity and are based on available, benign materials. Iron(VI/III) cathodes can use low-solubility $\mathrm{K_2FeO_4}$ and $\mathrm{BaFeO_4}$ salts with respective capacities of 406 and 313 milliampere-hours per gram. Super-iron batteries have a 50 percent energy advantage compared to conventional alkaline batteries. A cell with an iron(VI) cathode and a metal hydride anode is significantly (75 percent) rechargeable.
Sharma, V. K. (2013). "Ferrate(VI) and ferrate(V) oxidation of organic compounds: Kinetics and mechanism." Coordination Chemistry Reviews 257(2): 495-510.
Lee, Y. and von Gunten, U. (2010) Oxidative transformation of micropollutants during municipal wastewater treatment: Comparison of kinetic aspects of selective (chlorine, chlorine dioxide, ferrate(VI), and ozone) and non-selective oxidants (hydroxyl radical). Water Research 44(2), 555-566.
Chemical oxidation processes have been widely applied to water treatment and may serve as a tool to minimize the release of micropollutants (e.g. pharmaceuticals and endocrine disruptors) from municipal wastewater effluents into the aquatic environment. The potential of several oxidants for the transformation of selected micropollutants such as atenolol, carbamazepine, 17 alpha-ethinylestradiol (EE2), ibuprofen, and sulfamethoxazole was assessed and compared. The oxidants include chlorine, chlorine dioxide, ferrate(VI), and ozone as selective oxidants versus hydroxyl radicals as non-selective oxidant. Second-order rate constants ($k$) for the reaction of each oxidant show that the selective oxidants react only with some electron-rich organic moieties (ERMs), such as phenols, anilines, olefins, and deprotonated amines. In contrast, hydroxyl radicals show a nearly diffusion-controlled reactivity with almost all organic moieties ($k > 10^9\ \mathrm{M^{-1}\,s^{-1}}$). Due to a competition for oxidants between a target micropollutant and the wastewater matrix (i.e. effluent organic matter, EfOM), a higher reaction rate with a target micropollutant does not necessarily translate into more efficient transformation. For example, transformation efficiencies of EE2, a phenolic micropollutant, in a selected wastewater effluent at pH 8 varied only within a factor of 7 among the selective oxidants, even though the corresponding $k$ for the reaction of each selective oxidant with EE2 varied over four orders of magnitude. In addition, for the selective oxidants, the competition disappears rapidly after the ERMs present in EfOM are consumed. In contrast, for hydroxyl radicals, the competition remains practically the same during the entire oxidation. Therefore, for a given oxidant dose, the selective oxidants were more efficient than hydroxyl radicals for transforming ERM-containing micropollutants, while hydroxyl radicals are capable of transforming micropollutants even without ERMs. Besides EfOM, ammonia, nitrite, and bromide were found to affect the micropollutant transformation efficiency during chlorine or ozone treatment.
Sharma, V.K. (2010) Oxidation of Inorganic Compounds by Ferrate(VI) and Ferrate(V): One-Electron and Two-Electron Transfer Steps. Environmental Science & Technology 44(13), 5148-5152.
Ferrate(VI) ($\mathrm{FeO_4^{2-}}$, Fe(VI)) and ferrate(V) ($\mathrm{FeO_4^{3-}}$, Fe(V)) have a high oxidizing power and upon decomposition form a non-toxic by-product, Fe(III), which makes them environmentally friendly oxidants in water and wastewater treatment. The kinetics of the reaction between Fe(VI) and I$^-$ was determined using a stopped-flow technique. The second-order rate constants ($k$, $\mathrm{M^{-1}\,s^{-1}}$) for the oxidation of I$^-$ and other inorganic compounds by protonated ferrate(VI) ($\mathrm{HFeO_4^-}$) and ferrate(V) ($\mathrm{FeO_4^{3-}}$) ions were correlated with thermodynamic reduction potentials to understand the reaction mechanisms. A linear relationship was found between $\log k$ and 1-e$^-$ reduction potentials for iodide, cyanides, and superoxide, while oxy-compounds of nitrogen, sulfur, selenium, and arsenic demonstrated a linear relationship for 2-e$^-$ reduction potentials. The estimated limits for the reduction potentials of the couples are $E^0$(Fe(VI)/Fe(V)) $\geq$ 0.76 V and $E^0$(Fe(VI)/Fe(IV)) $\geq$ -0.18 V. Conclusions drawn from the correlations are consistent with the mechanisms postulated from stoichiometries, intermediates, and products of the reactions. Implications of the kinetic results for treatment using ferrate(VI) are briefly discussed.
Johnson Michael, D., J. Hornstein Brooks, and J. Wischnewsky. 2008. Ferrate(VI) Oxidation of Nitrogenous Compounds, p. 177-188, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society.
Hydrazines, hydroxylamines, azo compounds, anilines. The oxidation kinetics of a series of nitrogen-containing compounds by ferrate(VI), $\mathrm{FeO_4^{2-}}$, is described. Each of these reactions was studied at 25 °C using spectrophotometric techniques. These included stopped-flow, rapid-scanning spectrophotometry and conventional diode-array spectrophotometry. Mechanistic schemes are proposed for each system studied, along with potential intermediates when observed or required by kinetic data.
Carr James, D. 2008. Kinetics and Product Identification of Oxidation by Ferrate(VI) of Water and Aqueous Nitrogen Containing Solutes, p. 189-196, In V. K. Sharma, ed. Ferrates, ACS Symp. 985.
Water is shown to be oxidized by ferrate(VI) via a pathway dominated by protonated ferrate and leading to molecular oxygen via hydrogen peroxide. Nitrogen-containing solutes are oxidized in competition with water oxidation. Products of oxidation are determined by a variety of analytical methods. Iron(II) is shown to be an intermediate state of iron which originated as Fe(VI) but Fe(III) is the final product in the absence of a trapping reagent for Fe(II). Nitroxyl is shown to be an important intermediate species in the oxidation of azide and hydroxylamine.
Lee, Y., Yoon, J. and Von Gunten, U. (2005) Kinetics of the oxidation of phenols and phenolic endocrine disruptors during water treatment with ferrate (Fe(VI)). Environmental Science & Technology 39(22), 8978-8984.
The ability of ferrate (Fe(VI)) to oxidize phenolic endocrine-disrupting chemicals (EDCs) and phenols during water treatment was examined by determining the apparent second-order rate constants ($k_{app}$) for the reaction of Fe(VI) with selected environmentally relevant phenolic EDCs (17 alpha-ethinylestradiol, beta-estradiol, and bisphenol A) and 10 substituted phenols at pH values ranging from 6 to 11. The three selected groups of EDCs showed appreciable reactivity with Fe(VI) ($k_{app}$ at pH 7 ranged from $6.4 \times 10^2$ to $7.7 \times 10^2\ \mathrm{M^{-1}\,s^{-1}}$). The $k_{app}$ for the substituted phenols studied at pH 7 ranged from 6.6 to $3.6 \times 10^3\ \mathrm{M^{-1}\,s^{-1}}$, indicating that many other potential phenolic EDCs can be oxidized by Fe(VI) during water treatment. Hammett-type correlations were determined for the reaction between $\mathrm{HFeO_4^-}$ and the undissociated ($\log k = 2.24 - 2.27\sigma^+$) and dissociated phenol ($\log k = 4.33 - 3.60\sigma^+$). A comparison of the Hammett correlation obtained for the reaction between $\mathrm{HFeO_4^-}$ and dissociated phenol with those obtained for other drinking water oxidants revealed that $\mathrm{HFeO_4^-}$ is a relatively mild oxidant of phenolic compounds. The effectiveness of Fe(VI) for the oxidative removal of phenolic EDCs was also confirmed in both natural water and wastewater.
Sharma, V.K., Li, X.Z., Graham, N. and Doong, R.A. (2008) Ferrate(VI) oxidation of endocrine disruptors and antimicrobials in water. Journal of Water Supply Research and Technology-Aqua 57(6), 419-426.
Estrogens, sulfas, and BPA, including BPA degradation pathways. Potassium ferrate(VI) ($\mathrm{K_2FeO_4}$) has advantageous properties such as a dual function as an oxidant and disinfectant with a non-toxic by-product, iron(III), which makes it an environmentally friendly chemical for water treatment. This paper presents an assessment of the potential of ferrate(VI) to oxidize representative endocrine disruptors (EDs) and antimicrobials during water treatment using information about reaction kinetics and products. Selected EDs were bisphenol A (BPA), 17 alpha-ethynylestradiol (EE2), estrone (E1), 17 beta-estradiol (E2), and estriol (E3), and sulfonamides and tetracycline were representative pharmaceuticals. The second-order rate constants, $k$, of the oxidation reactions at neutral pH were in the ranges $6.50{-}11.8 \times 10^2\ \mathrm{M^{-1}\,s^{-1}}$ and $0.79{-}15.0 \times 10^2\ \mathrm{M^{-1}\,s^{-1}}$ for EDs and sulfonamides, respectively. At a 10 mg L$^{-1}$ $\mathrm{K_2FeO_4}$ dose, half-lives of the oxidation reaction would be in seconds at neutral pH. The values of $k$, and the reaction half-lives, varied with pH. Oxidation products from the reaction with BPA and sulfamethoxazole (SMX) at molar ratios of $\sim$5:1 were found to be relatively less toxic. Overall, ferrate(VI) oxidation could be an effective treatment method for the purification of waters containing these particular EDs and antimicrobials.
Lee, C., Lee, Y., Schmidt, C., Yoon, J. and Von Gunten, U. (2008) Oxidation of suspected N-nitrosodimethylamine (NDMA) precursors by ferrate (VI): Kinetics and effect on the NDMA formation potential of natural waters. Water Research 42(1-2), 433-441.
The potential of ferrate (Fe(VI)) oxidation to remove N-nitrosodimethylamine (NDMA) precursors during water treatment was assessed. Apparent second-order rate constants ($k_{app}$) for the reactions of NDMA and its suspected precursors (dimethylamine (DMA) and 7 tertiary amines with a DMA functional group) with Fe(VI) were determined in the range of pH 6-12. Four model NDMA precursors (dimethyldithiocarbamate, dimethylaminobenzene, 3-(dimethylaminomethyl)indole and 4-dimethylaminoantipyrine) showed high reactivity toward Fe(VI), with $k_{app}$ values at pH 7 between $2.6 \times 10^2$ and $3.2 \times 10^5\ \mathrm{M^{-1}\,s^{-1}}$. The other NDMA precursors (DMA, trimethylamine, dimethylethanolamine, dimethylformamide) and NDMA had $k_{app}$ values ranging from 0.55 to 9.1 $\mathrm{M^{-1}\,s^{-1}}$ at pH 7. In the second part of the study, the NDMA formation potentials (NDMA-FP) of the model NDMA precursors and natural waters were measured with and without pre-oxidation by Fe(VI). For most of the NDMA precursors, with the exception of DMA, a significant reduction of the NDMA-FP (>95%) was observed after complete transformation of the NDMA precursor. This result was supported by low yields of DMA from the Fe(VI) oxidation of tertiary amine NDMA precursors. Pre-oxidation of several natural waters (rivers Rhine, Neckar and Pfinz) with a high dose of Fe(VI) (0.38 mM = 21 mg L$^{-1}$ as Fe) led to removals of the NDMA-FP of 46-84%. This indicates that the NDMA precursors in these waters have a low reactivity toward Fe(VI), because it has been shown that for fast-reacting NDMA precursors Fe(VI) doses of 20 $\mu$M (1.1 mg L$^{-1}$ as Fe) are sufficient to completely oxidize the precursors.
Sharma, V. K., F. Liu, et al. (2013). "Oxidation of beta-lactam antibiotics by ferrate(VI)." Chemical Engineering Journal 221: 446-451.
Antibiotics. Amoxicillin (AMX) and ampicillin (AMP), penicillin-class beta-lactam antibiotics, have been detected in wastewater effluents, and their release into the environment may involve long-term risks such as toxicity to aquatic organisms and endocrine disruption in higher organisms. This paper demonstrates the removal of AMX and AMP by ferrate(VI) (Fe(VI)) through kinetics and stoichiometric experiments. The dependence of the second-order rate constants of the reaction between Fe(VI) and AMX (or AMP) on pH was explained using acid-base equilibria of Fe(VI) and the organic molecules. The kinetics study with the model compound, 6-aminopenicillanic acid, and the pH-dependence behavior suggested that Fe(VI) reacts with the amine moieties of the studied beta-lactams. The reactivity of different oxidants with AMX has been shown to follow the sequence: $^{\bullet}$OH $\approx$ $\mathrm{SO_4^{\bullet-}}$ > bromine > ozone > chlorine > Fe(VI). The required molar stoichiometric ratios ([Fe(VI)]:[beta-lactam]) for the complete removal of AMX and AMP by Fe(VI) were about 4.5 and 3.5, respectively. Fe(VI) is able to eliminate AMX and AMP and hence is likely to also oxidize other beta-lactams effectively.
Casbeer, E. M., V. K. Sharma, et al. (2013). "Kinetics and Mechanism of Oxidation of Tryptophan by Ferrate(VI)." Environmental Science & Technology 47(9): 4572-4580.
Tryptophan. Kinetics of the oxidation of tryptophan (Trp) and kynurenine (Kyn), precursors of nitrogenous disinfection by-products (N-DBP), by ferrate(VI) ($\mathrm{FeO_4^{2-}}$, Fe(VI)) were investigated over the acidic to basic pH range. The second-order rate constants decreased with increasing pH, which could be described by the speciation of Fe(VI) and Trp (or Kyn). The trend of pH dependence of rates for Trp (i.e., an aromatic alpha-amino acid) differs from that for glycine (i.e., an aliphatic alpha-amino acid). A nonlinear relationship between transformation of Trp and the added amount of Fe(VI) was found. This suggests that the formed intermediate oxidized products (OPs), identified by LC-PDA and LC-MS techniques, could compete with Trp to react with Fe(VI). N-Formylkynurenine (NFK) at pH 7.0, and 4-hydroxyquinoline (4-OH Q) and kynurenic acid (Kyn-A) at pH 9.0, were the major OPs. Tryptophan radical formation during the reaction was confirmed by rapid-freeze-quench EPR experiments. The oxygen atom transfer from Fe(VI) to NFK was demonstrated by reacting the $^{18}$O-labelled $\mathrm{FeO_4^{2-}}$ ion with Trp. A proposed mechanism explains the identified OPs at both neutral and alkaline pH. Kinetics and OPs by Fe(VI) were compared with other oxidants (chlorine, $\mathrm{ClO_2^{\bullet}}$, $\mathrm{O_3}$, and $^{\bullet}$OH).
Anquandah, G. A. K., V. K. Sharma, et al. (2013). "Ferrate(VI) oxidation of propranolol: Kinetics and products." Chemosphere 91(1): 105-109.
Propranolol. The oxidation of propranolol (PPL), a beta-blocker, by ferrate(VI) (Fe(VI)) was studied by performing kinetics, stoichiometry, and analysis of the reaction products. The rate law for the oxidation of PPL by Fe(VI) was first-order with respect to each reactant. The dependence of the second-order rate constants of the reaction of Fe(VI) and PPL on pH was explained using the acid-base equilibria of Fe(VI) and PPL. The required molar stoichiometry for the complete removal of PPL was determined to be 6:1 ([Fe(VI)]:[PPL]). The products identified using liquid chromatography-tandem mass spectrometry were oxidized product (OP)-292, OP-308, and OP-282. The formed OPs could possibly compete with the parent molecule to react with Fe(VI), which resulted in a non-linear relationship between degradation of PPL and the added amount of Fe(VI). Rate and removal studies indicate that Fe(VI) is able to oxidize PPL and hence can also oxidize other beta-blockers, e.g., atenolol and metoprolol.
Sharma, V. K., K. Siskova, et al. (2012). Mechanism of Oxidation of Cysteine and Methionine by Ferrate(VI): Mössbauer Investigation. Mössbauer Spectroscopy in Materials Science - 2012. J. Tucek and L. Machala. 1489: 139-144.
Cysteine & methionine. Oxidation of organosulfur compounds (S) by ferrate(VI) ($\mathrm{FeO_4^{2-}}$, Fe(VI)) proceeds by the transfer of an oxygen atom to S. A vast amount of literature proposed oxygen atom transfer (OAT) via a 2-e$^-$ transfer process in which Fe(IV) acts as an intermediate, and Fe(II) was also proposed to be an intermediate or the final reduced iron species of Fe(VI) (Fe(VI) $\rightarrow$ Fe(IV) $\rightarrow$ Fe(II)). In this paper, Mössbauer spectroscopy was applied to explore intermediate iron species in the oxidation of cysteine (Cys) and methionine (Met) by Fe(VI). In the oxidation of Cys, both Fe(II) and Fe(III) were observed, while only Fe(III) was seen in the oxidation of Met by Fe(VI). These results support that no Fe(II) species was formed in the oxidation of Met before forming Fe(III). These results are consistent with the possibility of an initial 1-e$^-$ transfer with the formation of an Fe(V) species and a subsequent 2-e$^-$ transfer to yield Fe(III) (Fe(VI) $\rightarrow$ Fe(V) $\rightarrow$ Fe(III)).
Yang, B., Ying, G.G., Zhao, J.L., Liu, S., Zhou, L.J. and Chen, F. (2012) Removal of selected endocrine disrupting chemicals (EDCs) and pharmaceuticals and personal care products (PPCPs) during ferrate(VI) treatment of secondary wastewater effluents. Water Research 46(7), 2194-2204.
Sharma, V.K., Sohn, M., Anquandah, G.A.K. and Nesnas, N. (2012) Kinetics of the oxidation of sucralose and related carbohydrates by ferrate(VI). Chemosphere 87(6), 644-648.
The kinetics of the oxidation of sucralose, an emerging contaminant, and related monosaccharides and disaccharides by ferrate(VI) (Fe(VI)) were studied as a function of pH (6.5-10.1) at 25 °C. Reducing sugars (glucose, fructose, and maltose) reacted faster with Fe(VI) than did the non-reducing sugar sucrose or its chlorinated derivative, sucralose. Second-order rate constants of the reactions of Fe(VI) with sucralose and disaccharides decreased with an increase in pH. The pH dependence was modeled by considering the reactivity of the species of Fe(VI) ($\mathrm{HFeO_4^-}$ and $\mathrm{FeO_4^{2-}}$) with the studied substrates. Second-order rate constants for the reaction of Fe(VI) with monosaccharides displayed an unusual variation with pH, explained by considering the involvement of hydroxide in catalyzing the ring-opening of the cyclic form of the carbohydrate at increased pH. The rate constants for the reactions of carbohydrates with Fe(VI) were compared with those for other oxidant species used in water treatment and briefly discussed.
Zimmermann, S.G., Schmukat, A., Schulz, M., Benner, J., von Gunten, U. and Ternes, T.A. (2012) Kinetic and Mechanistic Investigations of the Oxidation of Tramadol by Ferrate and Ozone. Environmental Science & Technology 46(2), 876-884.
The kinetics and oxidation products (OPs) of tramadol (TRA), an opioid, were investigated for its oxidation with ferrate (Fe(VI)) and ozone ($\mathrm{O_3}$). The kinetics could be explained by the speciation of the tertiary amine moiety of TRA, with apparent second-order rate constants of 7.4 ($\pm$ 0.4) $\mathrm{M^{-1}\,s^{-1}}$ (Fe(VI)) and 4.2 ($\pm$ 0.3) $\times 10^4\ \mathrm{M^{-1}\,s^{-1}}$ ($\mathrm{O_3}$) at pH 8.0, respectively. In total, six OPs of TRA were identified for both oxidants using Qq-LIT-MS, LTQ-FT-MS, GC-MS, and moiety-specific chemical reactions. In excess of oxidants, these OPs can be further transformed to unidentified OPs. Kinetics and OP identification confirmed that the lone electron pair of the amine N is the predominant site of oxidant attack. An oxygen transfer mechanism can explain the formation of N-oxide-TRA, while a one-electron transfer may result in the formation of N-centered radical cation intermediates, which could lead to the observed N-dealkylation and to the identified formamide and aldehyde derivatives via several intermediate steps. The proposed radical intermediate mechanism is favored for Fe(VI), leading predominantly to N-desmethyl-TRA (ca. 40%), whereas the proposed oxygen transfer prevails for $\mathrm{O_3}$ attack, resulting in N-oxide-TRA as the main OP (ca. 90%).
Yang, B., Ying, G.G., Zhang, L.J., Zhou, L.J., Liu, S. and Fang, Y.X. (2011) Kinetics modeling and reaction mechanism of ferrate(VI) oxidation of benzotriazoles. Water Research 45(6), 2261-2269.
Benzotriazoles (BTs) are high-production-volume chemicals with broad application in various industrial processes and in households, and have been found to be omnipresent in aquatic environments. We investigated the oxidation of five benzotriazoles (BT: 1H-benzotriazole; 5MBT: 5-methyl-1H-benzotriazole; DMBT: 5,6-dimethyl-1H-benzotriazole hydrate; 5CBT: 5-chloro-1H-benzotriazole; HBT: 1-hydroxybenzotriazole) by aqueous ferrate (Fe(VI)) to determine reaction kinetics as a function of pH (6.0-10.0), and interpreted the reaction mechanism of Fe(VI) with BTs by using a linear free-energy relationship. The $pK_a$ values of BT and DMBT were also determined using a UV-visible spectroscopic method in order to calculate the species-specific rate constants; they were $8.37 \pm 0.01$ and $8.98 \pm 0.08$, respectively. Each of the BTs reacted moderately with Fe(VI), with $k_{app}$ ranging from 7.2 to 103.8 $\mathrm{M^{-1}\,s^{-1}}$ at pH 7.0 and 24 $\pm$ 1 °C. When the molar ratio of Fe(VI) to BTs increased up to 30:1, the removal of BTs reached >95% in buffered Milli-Q water or secondary wastewater effluent. The electrophilic oxidation mechanism of the above reaction was illustrated by using a linear free-energy relationship between the pH dependence of species-specific rate constants and substituent effects ($\sigma_p$). Fe(VI) reacts initially with BTs by electrophilic attack at the 1,2,3-triazole moiety of BT, 5MBT, DMBT and 5CBT, and at the N-OH bond of HBT. Moreover, for BT, 5MBT, DMBT and 5CBT, the reactions with the species $\mathrm{HFeO_4^-}$ predominantly controlled the reaction rates. For HBT, the species $\mathrm{H_2FeO_4}$ with dissociated HBT played the major role in the reaction. The results showed that Fe(VI) has the ability to degrade benzotriazoles in water.
Remsberg Jarrett, R., P. Rice Clifford, H. Kim, O. Arikan, and C. Moon. 2008. Removal of Estrogenic Compounds in Dairy Waste Lagoons by Ferrate(VI), p. 420-433, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society.
BPA & hormones. Ferrate(VI) was used to remove steroidal estrogens (SE) from dairy waste lagoon effluent (DWLE). Dairy lagoon sites were sampled for estrogenic content (EC) and assayed using high-performance liquid chromatography coupled to triple-quadrupole mass spectrometry. Effects of varying amounts of ferrate(VI) and ferric chloride treatments on the EC of these DWLE samples were determined. Of the compounds measured, 17$\beta$-estradiol, at 19.7 $\mu$g/L, was the most abundant, and estriol, at 2.10 $\mu$g/L, the least abundant. When DWLE was treated with a high concentration (0.84%) of ferrate(VI) there was a significant decrease (>50%) in 17$\beta$-estradiol content. Ferrate(VI) treatment of DWLE may be an environmentally sound approach to reduce estrogenic compounds.
Zhang, P.Y., Zhang, G.M., Dong, J.H., Fan, M.H. and Zeng, G.M. (2012) Bisphenol A oxidative removal by ferrate (Fe(VI)) under a weak acidic condition. Separation and Purification Technology 84, 46-51.
Recently there have been increasing concerns about the widespread occurrence of endocrine-disrupting chemicals (EDCs) in the aquatic environment. Bisphenol A (BPA) in water was oxidized as a target EDC by $\mathrm{K_2FeO_4}$ (Fe(VI)) in this study. The results showed that BPA was effectively removed within a broad initial water pH range of 5.0-9.5, especially under weakly acidic conditions between initial pH 5 and 6. When the initial BPA concentration was about 1.5 mg L$^{-1}$, BPA could be completely removed with an oxidation time of 30 min and a Fe(VI)/BPA molar ratio of 3.0. After Fe(VI) oxidation, the UV$_{254}$ of the water samples significantly increased, indicating that BPA degradation intermediates and end products still contained the phenyl ring. Further online UV scanning showed that the UV absorbance changed markedly within the UV ranges of 190-215 nm and 230-300 nm during Fe(VI) oxidation. The DOC of the water samples decreased with increasing Fe(VI) dosage and prolonged oxidation time, and about 50% of BPA was mineralized after Fe(VI) oxidation at a Fe(VI)/BPA molar ratio of 4.0. The influences of coexisting constituents such as humic acids, $\mathrm{SiO_3^{2-}}$, $\mathrm{HCO_3^-}$ and tert-butanol were studied. The results showed that humic acids and $\mathrm{SiO_3^{2-}}$ notably inhibited BPA removal; tert-butanol slightly decreased BPA removal; and the presence of $\mathrm{HCO_3^-}$ slightly enhanced BPA removal.
Anquandah, G.A.K., Sharma, V.K., Knight, D.A., Batchu, S.R. and Gardinali, P.R. (2011) Oxidation of Trimethoprim by Ferrate(VI): Kinetics, Products, and Antibacterial Activity. Environmental Science & Technology 45(24), 10575-10581.
Kinetics, stoichiometry, and products of the oxidation of trimethoprim (TMP), one of the most commonly detected antibacterial agents in surface waters and municipal wastewaters, by ferrate(VI) (Fe(VI)) were determined. The pH-dependent second-order rate constants of the reactions of Fe(VI) with TMP were examined using the acid-base properties of Fe(VI) and TMP. The kinetics of the reactions of diaminopyrimidine (DAP) and trimethoxytoluene (TMT) with Fe(VI) were also determined to understand the reactivity of Fe(VI) with TMP. Oxidation products of the reactions of Fe(VI) with TMP and DAP were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Reaction pathways for the oxidation of TMP by Fe(VI) are proposed to demonstrate the cleavage of the TMP molecule, ultimately resulting in 3,4,5-trimethoxybenzaldehyde and 2,4-dinitropyrimidine among the final identified products. The oxidized product mixture exhibited no antibacterial activity against E. coli after complete consumption of TMP. Removal of TMP from secondary effluent by Fe(VI) was achieved.
Yang, B., Ying, G.G., Zhao, J.L., Zhang, L.J., Fang, Y.X. and Nghiem, L.D. (2011) Oxidation of triclosan by ferrate: Reaction kinetics, products identification and toxicity evaluation. Journal of Hazardous Materials 186(1), 227-235.
The oxidation of triclosan by commercial-grade aqueous ferrate (Fe(VI)) was investigated and the reaction kinetics as a function of pH (7.0-10.0) were experimentally determined. Intermediate products of the oxidation process were characterized using both GC-MS and RRLC-MS/MS techniques. Changes in toxicity during the oxidation of triclosan by Fe(VI) were investigated using Pseudokirchneriella subcapitata growth inhibition tests. The results show that triclosan reacted rapidly with Fe(VI), with the apparent second-order rate constant, $k_{app}$, being 754.7 $\mathrm{M^{-1}\,s^{-1}}$ at pH 7. At a stoichiometric ratio of 10:1 (Fe(VI):triclosan), complete removal of triclosan was achieved. Species-specific rate constants, $k$, were determined for the reaction of Fe(VI) with both the protonated and deprotonated triclosan species. The value of $k$ determined for neutral triclosan was $6.7 (\pm 1.9) \times 10^2\ \mathrm{M^{-1}\,s^{-1}}$, while that measured for anionic triclosan was $7.6 (\pm 0.6) \times 10^3\ \mathrm{M^{-1}\,s^{-1}}$. The proposed mechanism for the oxidation of triclosan by Fe(VI) involves scission of the ether bond and a phenoxy-radical addition reaction. Coupling reactions may also occur during Fe(VI) degradation of triclosan. Overall, the degradation of triclosan resulted in a significant decrease in algal toxicity. The toxicity tests showed that the Fe(VI) dosed in the reaction did not itself inhibit green algae growth.
Yang, S.-f., and R.-a. Doong. 2008. Preparation of Potassium Ferrate for the Degradation of Tetracycline, p. 404-419, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. Tetracycline Tetracycline antibiotics are widely used in veterinary medicine and as growth promoters for the treatment and/or prevention of infectious disease because of their broad-spectrum activity and cost benefit. Tetracycline is rather persistent, and a large fraction of a single dose, up to 75%, can be excreted in non-metabolized form in manure. In addition, potassium ferrate (Fe(VI)) is a powerful oxidant over a wide pH range and can be used as an environmentally friendly chemical in treated and natural waters. Therefore, the ability of ferrate(VI) to oxidize tetracycline in aqueous solution was examined in this study. The stability of Fe(VI) was monitored by the 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonate) (ABTS) method. Results showed that the decomposition of Fe(VI) is highly pH dependent. A minimum decomposition rate constant of 1 × 10^-4 s-1 was observed at pH 9.2. A stoichiometry of 1.4:1 was observed for the reaction of tetracycline with Fe(VI) at pH 9.2. In addition, tetracycline was rapidly transformed in the presence of low concentrations of Fe(VI) in the pH range 8.3-10.0. The degradation efficiency of tetracycline is affected by both pH and initial Fe(VI) concentration. The degradation of tetracycline by Fe(VI) was also confirmed by ESI-MS and total organic carbon (TOC) analyses.
de Luca, S.J., Pegorer, M.G. and de Luca, M.A. (2010) Aqueous oxidation of microcystin-LR by ferrate(VI). Engenharia Sanitaria E Ambiental 15(1), 5-10.
Algal toxins are becoming a severe problem in the water treatment industry, especially for water intended for human and animal consumption. Traditional treatment processes have failed to comply with water supply standards. Potassium ferrate(VI) is a powerful oxidant, disinfectant and, also, a coagulant. In this paper, the results of microcystin-LR oxidation by the ferrate(VI) ion are presented. Kinetic and jar tests showed an average value of 0.012 min-1 for the pseudo-first-order reaction rate constant, for MC-LR concentrations of 100 and 200 μg L-1. Ferrate(VI) dosages between 1.6 and 5.0 mg L-1 suggest that water supply standards for MC-LR can be reached, which means that the oxidant may be employed as a coadjuvant in water treatment.
Hu, L., Martin, H.M., Arce-Bulted, O., Sugihara, M.N., Keating, K.A. and Strathmann, T.J. (2009) Oxidation of Carbamazepine by Mn(VII) and Fe(VI): Reaction Kinetics and Mechanism. Environmental Science & Technology 43(2), 509-515.
Experimental studies were conducted to examine the oxidation of carbamazepine, an anticonvulsant drug widely detected in surface waters and sewage treatment effluent, by potassium salts of permanganate (Mn(VII); KMnO4) and ferrate (Fe(VI); K2FeO4). Results show that both Mn(VII) and Fe(VI) rapidly oxidize carbamazepine by electrophilic attack at an olefinic group in the central heterocyclic ring, leading to ring-opening and a series of organic oxidation products. Reaction kinetics follow a generalized second-order rate law, with apparent rate constants at pH 7.0 and 25 °C of 3.0 (±0.3) × 10^2 M-1 s-1 for Mn(VII) and 70 (±3) M-1 s-1 for Fe(VI). Mn(VII) reaction rates exhibit no pH dependence, whereas Fe(VI) reaction rates increase dramatically with decreasing pH, due to the changing acid-base speciation of Fe(VI). Further studies with Mn(VII) show that most common nontarget water constituents, including natural organic matter, have no significant effect on rates of carbamazepine oxidation; reduced metals and (bi)sulfide exert a stoichiometric Mn(VII) demand that can be incorporated into the kinetic model. The removal of carbamazepine in two utility source waters treated with KMnO4 agrees closely with predictions from the kinetic model that was parametrized using experiments conducted in deionized water at much higher reagent concentrations.
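Note: to put these constants in treatment terms, under excess oxidant the second-order rate law collapses to pseudo-first-order, giving a contaminant half-life of t1/2 = ln 2 / (k · [oxidant]). A short illustrative calculation (Python); the constant 10 μM oxidant residual is an assumption for illustration, not a condition from the study:

    import math

    # Apparent second-order rate constants at pH 7.0 and 25 C (from the abstract)
    k = {"Mn(VII)": 3.0e2, "Fe(VI)": 70.0}  # M^-1 s^-1

    oxidant_conc = 1.0e-5  # M; a 10 uM residual assumed constant for illustration

    for oxidant, k2 in k.items():
        half_life_s = math.log(2) / (k2 * oxidant_conc)
        print(f"{oxidant}: carbamazepine half-life ~ {half_life_s / 60:.1f} min")
    # -> roughly 4 min for Mn(VII) and 17 min for Fe(VI) under these assumptions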
Anquandah, G.A.K. and Sharma, V.K. (2009) Oxidation of octylphenol by ferrate(VI). Journal of Environmental Science and Health Part a-Toxic/Hazardous Substances & Environmental Engineering 44(1), 62-66.
The rates of the oxidation of octylphenols (OP) by potassium ferrate(VI) (K2FeO4) in water were determined as a function of pH (8.0-10.9) at 25 °C. The rate law for the oxidation of OP by Fe(VI) was found to be first order with respect to each reactant. The observed second-order rate constants, k_obs, for the oxidation of alkylphenols decreased with an increase in pH. The speciation of the Fe(VI) (HFeO4- and FeO42-) and OP (OP-OH and OP-O-) species was used to determine individual rate constants of the reactions. Rate constants and half-lives for the oxidation of OP by Fe(VI) were compared with those for nonylphenol (NP) and bisphenol-A (BPA) to demonstrate that Fe(VI) efficiently oxidizes environmentally relevant alkylphenols in water.
Yu, M., G. Park, and H. Kim. 2008. Oxidation of Nonylphenol Using Ferrate, p. 389-403, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. Nonylphenol Public concern about nonylphenol (NP) and nonylphenol ethoxylates (NPEOs) is growing because they are frequently detected in the aquatic environment and are proven endocrine disrupting compounds (EDCs). Since these compounds cannot be completely degraded biologically, chemical oxidation has frequently been applied to degrade NP and NPEOs. In this study, ferrate(VI) (Fe(VI)) was used to oxidize NP and its oxidation kinetics were evaluated. It should, however, be noted that the first-order rate was evaluated using data collected only after the initial degradation (ID) phase, in which 50-70% of the NP was degraded. In fact, the NP and Fe(VI) concentrations during the ID phase could not be quantified since the oxidation was too fast. The effect of hydrogen peroxide (H2O2) on the NP oxidation by Fe(VI) was also evaluated. In general, the initial destruction of NP by Fe(VI) was more significant at lower pH than at higher pH (i.e., 26% at pH 9.0 versus 71% at pH 6.0). H2O2 addition did not have much impact on the NP oxidation. When applied to the oxidation of NP in natural water, Fe(VI) showed lower removal efficiency, possibly due to the presence of dissolved organics in the water.
Sharma, V.K., Noorhasan, N.N., Mishra, S.K. and Nesnas, N. 2008. Ferrate(VI) Oxidation of Recalcitrant Compounds: Removal of Biological Resistant Organic Molecules by Ferrate(VI), p. 339-349, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. EDTA & Sulfamethoxazole The oxidation of the recalcitrant compounds ethylenediaminetetraacetate (EDTA) and sulfamethoxazole (SMX) by ferrate(VI) (Fe(VI), FeO42-) is presented. Kinetics of the reactions were determined as a function of pH at 25 °C by a stopped-flow technique. The rate law for the oxidation of these compounds by Fe(VI) is first-order with respect to each reactant. The observed second-order rate constants for the reaction of Fe(VI) with SMX decreased non-linearly with increasing pH and are possibly related to protonation of Fe(VI) and the compounds. The individual rate constants, k (M-1 s-1), of the Fe(VI) species HFeO4- and FeO42- with the protonated and deprotonated forms of the compounds were estimated. The HFeO4- species reacts much faster with these compounds than does the FeO42- species. The results showed that Fe(VI) has the potential to serve as an environmentally friendly oxidant for removing biologically resistant organic molecules within minutes and converting them to relatively less toxic by-products in water.
Sharma, V.K. and Mishra, S.K. (2006) Ferrate(VI) oxidation of ibuprofen: A kinetic study. Environmental Chemistry Letters 3(4), 182-185.
The kinetics of the ferrate(VI) (FeO42-, Fe(VI)) oxidation of an antiphlogistic drug, ibuprofen (IBP), as a function of pH (7.75-9.10) and temperature (25-45 °C) were investigated to assess the applicability of Fe(VI) for removing this drug from water. The rates decrease with an increase in pH, and the rates are related to the protonation of ferrate(VI). The rates increase with an increase in temperature. The activation energy, Ea, of the reaction at pH 9.10 was calculated as 65.4 ± 6.4 kJ mol-1. The rate constant for the reaction of HFeO4- with ibuprofen is lower than that with the sulfur drug sulfamethoxazole. The use of Fe(VI) to remove ibuprofen is briefly discussed.
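Note: the reported activation energy fixes how strongly the rate constant grows over the studied temperature window via the Arrhenius relation k2/k1 = exp[-(Ea/R)(1/T2 - 1/T1)]. A worked check (Python), using only the Ea quoted above:

    import math

    Ea = 65.4e3  # J mol^-1, activation energy at pH 9.10 (from the abstract)
    R = 8.314    # J mol^-1 K^-1, gas constant
    T1, T2 = 298.15, 318.15  # 25 C and 45 C in kelvin

    ratio = math.exp(-(Ea / R) * (1.0 / T2 - 1.0 / T1))
    print(f"k(45 C) / k(25 C) ~ {ratio:.1f}")  # roughly a five-fold increase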
Li, X.Z., B.L. Yuan, and N. Graham. 2008. Degradation of Dibutyl Phthalate in Aqueous Solution by a Combined Ferrate and Photocatalytic Oxidation Process, p. 364-377, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. Dibutyl Phthalate The aim of the present work was to study the interaction of ferrate oxidation and photocatalytic oxidation in terms of dibutyl phthalate (DBP) degradation in aqueous solution, in which three sets of experiments were carried out: (1) ferrate oxidation alone, (2) photocatalytic oxidation alone, and (3) the combination of ferrate oxidation and photocatalytic oxidation. Laboratory experiments demonstrated that ferrate oxidation and photocatalytic oxidation of DBP in aqueous solution are relatively slow processes. However, the presence of TiO2 and ferrate together under UV illumination accelerated the DBP degradation significantly. Since ferrate was reduced quickly due to the presence of TiO2 and UV irradiation, the DBP degradation reaction can be divided into two phases. During the first 30 min (Phase 1) the DBP was degraded by both photocatalytic oxidation and ferrate oxidation, and by interactive reactions. After 30 min (Phase 2), the ferrate residual had declined to a very low level and the photocatalytic reaction was the dominant mechanism of further DBP degradation. The influence of three main factors, ferrate dosage, TiO2 dosage and pH, on the DBP degradation was investigated in order to understand the reaction mechanism and kinetics.
Graham, N., Jiang, C.C., Li, X.Z., Jiang, J.Q. and Ma, J. (2004) The influence of pH on the degradation of phenol and chlorophenols by potassium ferrate. Chemosphere 56(10), 949-956.
This paper presents information concerning the influence of solution pH on the aqueous reaction between potassium ferrate and phenol and three chlorinated phenols: 4-chlorophenol (CP), 2,4-dichlorophenol (DCP), and 2,4,6-trichlorophenol (TCP). The redox potential and aqueous stability of the ferrate ion, and the reactivity of dissociating compounds, are known to be pH dependent. Laboratory tests were undertaken over a wide range of pH (5.8-11) and reactant concentrations (ferrate:compound molar ratios of 1:1 to 8:1). The reactivity of trichloroethylene was also investigated as a reference compound owing to its non-dissociating nature. The extent of compound degradation by ferrate was found to be highly pH dependent, and the optimal pH (maximum degradation) decreased in the order phenol/CP, DCP, TCP; at the optimal pH the degree of degradation of these compounds was similar. The results indicate that for the group of phenol and chlorophenols studied, the presence of an increasing number of chlorine substituent atoms corresponds to an increasing reactivity of the undissociated compound, and a decreasing reactivity of the dissociated compound.
Goff, H. and Murmann, R.K. (1971) Studies on the Mechanism of Isotopic Oxygen Exchange and Reduction of Ferrate(VI) Ion (FeO42-). Journal of the American Chemical Society 93(23), 6058.
Gombos, E., Felfoldi, T., Barkacs, K., Vertes, C., Vajna, B. and Zaray, G. (2012) Ferrate treatment for inactivation of bacterial community in municipal secondary effluent. Bioresource Technology 107, 116-121. HPC and chlorine-resistant bacteria This paper demonstrates the effect of ferrate [a Fe(VI) compound], an environmentally friendly multi-purpose reagent, in municipal secondary effluent treatment. The purpose was to study the inactivation capability of ferrate and, for the first time, to compare the effect and efficiency of Fe(VI) with the widely used disinfectant, chlorine gas, on the indigenous bacterial community of secondary effluents. The most probable number (MPN) technique was applied for the determination of cultivable heterotrophic bacterial abundance, and terminal restriction fragment length polymorphism (T-RFLP) analysis for comparing bacterial communities. The study demonstrated that (i) ferrate and chlorine had different effects on the total bacterial community of secondary effluents, (ii) a low ferrate dose [5 mg L-1 Fe(VI)] was sufficient for >99.9% reduction of indigenous bacteria, and (iii) a similar dosage was also effective in the inactivation of chlorine-resistant bacteria.
Makky, E.A., Park, G.-S., Choi, I.-W., Cho, S.-I. and Kim, H. (2011) Comparison of Fe(VI) (FeO(4)(2-)) and ozone in inactivating Bacillus subtilis spores. Chemosphere 83(9), 1228-1233.
used pH 9 borate-buffered ferrate made according to Thompson et al. (1951) Protozoan parasites such as Cryptosporidium parvum and Giardia lamblia have been recognized as a frequent cause of recent waterborne disease outbreaks because of their strong resistance to chlorine disinfection. In this study, ozone and Fe(VI) (i.e., FeO42-) were compared in terms of inactivation efficiency for Bacillus subtilis spores, which are commonly utilized as an indicator of protozoan pathogens. Spore inactivation by both oxidants depended strongly on water pH and temperature. Since the redox potential of Fe(VI) is almost the same as that of ozone, the spore inactivation efficiency of Fe(VI) was expected to be similar to that of ozone. However, it was found that ozone was definitely superior to Fe(VI): at pH 7 and 20 °C, ozone at a concentration × contact time (CT) product of 10 mg L-1 min inactivated more than 99.9% of the spores within 10 min, while Fe(VI) at a CT of 30 mg L-1 min could inactivate 90% of the spores. The large difference between ozone and Fe(VI) in spore inactivation was attributed mainly to the Fe(III) produced from Fe(VI) decomposition at the spore coat layer, which might coagulate spores and make it difficult for free Fe(VI) to attack live spores.
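Note: the comparison above is framed in CT units (disinfectant residual × contact time). As a quick illustration of what those CT values imply operationally, a minimal sketch (Python) of the contact time needed to reach a target CT at an assumed constant residual; real residuals decay, so this underestimates the required time:

    def contact_time_min(target_ct, residual_mg_per_L):
        """Contact time (min) to reach a target CT (mg L^-1 min) at a constant residual."""
        return target_ct / residual_mg_per_L

    # CT values reported above at pH 7 and 20 C: ~10 mg L^-1 min for >99.9%
    # (3-log) inactivation with ozone, ~30 mg L^-1 min for 90% (1-log) with Fe(VI).
    for name, ct in (("ozone, 3-log", 10.0), ("Fe(VI), 1-log", 30.0)):
        print(f"{name}: {contact_time_min(ct, 2.0):.0f} min at an assumed 2.0 mg/L residual")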
Hu, L., Page, M., Marinas, B., Shisler, J.L. and Strathmann, T.J. (2008) Treatment of Emerging Pathogens and Micropollutants with Potassium Ferrate(VI), WQTC Proceedings, Cincinnati. This study reports on recent work examining the effectiveness of Fe(VI) for oxidation of pharmaceutically active compounds (PhACs) and inactivation of viral pathogens (surrogate pathogen: coliphage MS2) during water treatment. Twelve PhACs from representative compound classes were screened to assess potential reactivity with Fe(VI) on timescales of interest to water utilities. Eight of the 12 PhACs surveyed, including the antiepileptic drug carbamazepine and phenolic endocrine disruptor compounds (EDCs), were found to have moderate to high reactivity with Fe(VI). Results also show that Fe(VI) is an effective disinfectant for MS2 phage. The CT value for 99% inactivation of MS2 is ~2 mg-min L-1 as Fe at pH 7 and 25 °C. Both rates of PhAC oxidation and virus inactivation are highly dependent upon solution pH, increasing with decreasing pH as Fe(VI) speciation shifts towards the more reactive protonated species (HFeO4- and H2FeO4). Kinetic models that consider the changing speciation of both Fe(VI) and the reacting PhAC or virus, illustrated for the case of carbamazepine, were developed to account for pH-dependent reactivity trends.
Prucek, R., J. Tucek, et al. (2013). "Ferrate(VI)-Induced Arsenite and Arsenate Removal by In Situ Structural Incorporation into Magnetic Iron(III) Oxide Nanoparticles." Environmental Science & Technology 47(7): 3283-3292. Arsenic removal by in situ formation of nano-Fe2O3 We report the first example of arsenite and arsenate removal from water by incorporation of arsenic into the structure of nanocrystalline iron(III) oxide. Specifically, we show the capability to trap arsenic into the crystal structure of gamma-Fe2O3 nanoparticles that are formed in situ during treatment of arsenic-bearing water with ferrate(VI). In water, decomposition of potassium ferrate(VI) yields nanoparticles having a core-shell nanoarchitecture with a gamma-Fe2O3 core and a gamma-FeOOH shell. High-resolution X-ray photoelectron spectroscopy and in-field Fe-57 Mössbauer spectroscopy give unambiguous evidence that a significant portion of the arsenic is embedded in the tetrahedral sites of the gamma-Fe2O3 spinel structure. Microscopic observations also demonstrate the principal effect of As doping on crystal growth, as reflected by the considerably reduced average particle size and narrower size distribution of the "in-situ" sample with embedded arsenic compared to the "ex-situ" sample with arsenic exclusively sorbed on the iron oxide nanoparticle surface. Generally, the presented results highlight ferrate(VI) as one of the most promising candidates for advanced arsenic treatment technologies, mainly due to its environmentally friendly character, its in situ applicability for treatment of both arsenites and arsenates, and, contrary to all known competitive technologies, the firmly bound portion of arsenic that is prevented from leaching back into the environment. Moreover, the As-containing gamma-Fe2O3 nanoparticles are strongly magnetic, allowing their separation from the environment by application of an external magnet.
Graham, N.J.D., Khoi, T.T. and Jiang, J.Q. (2010) Oxidation and coagulation of humic substances by potassium ferrate. Water Science and Technology 62(4), 929-936.
Tien Khoi, T., N. Graham, and J.-Q. Jiang. 2008. Evaluating the Coagulation Performance of Ferrate: A Preliminary Study, p. 292-305, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. a preliminary version of the WS&T 2010 paper Ferrate is cited as having a dual role in water treatment, both as oxidant and coagulant. Few studies have considered the coagulation effect in detail, mainly because of the difficulty of separating the oxidation and coagulation effects. This paper summarises some preliminary results from an ongoing laboratory-based project that is investigating the coagulation reaction, dynamically via a PDA instrument, between ferrate and a suspension of kaolin powder at different doses and pH values, and comparing the observations with the use of ferric chloride. The PDA output gives a comparative measure of the rate of floc growth and the magnitude of floc formation. The results of the tests show some similarities and significant differences in the pattern of behaviour between ferrate and ferric chloride. This paper presents and discusses these observations and provides some comparative information on the strength of flocs formed.
Jiang, J.Q., Lloyd, B. and Grigore, L. (2001) Preparation and evaluation of potassium ferrate as an oxidant and coagulant for potable water treatment. Environmental Engineering Science 18(5), 323-328.
proprietary preparation of ferrate; some THMFP destruction data Potassium ferrate was prepared using a new method, and its disinfection/coagulation performance for water treatment has been evaluated. The study results demonstrated that ferrate could act as a dual-function chemical reagent (i.e., oxidant and coagulant) for drinking-water treatment, and performs better than ferric sulphate at lower doses for treating upland colored water. Ferrate can effectively remove natural organic matter (as UV254) and turbidity, and kills total coliforms (100%) at very low doses. In addition, under the optimal study conditions, the residual iron concentration and trihalomethane formation potential (THMFP) of the ferrate-treated water are below drinking water standards.
Jiang, J.Q., Stanford, C. and Alsheyab, M. (2009) The online generation and application of ferrate(VI) for sewage treatment-A pilot scale trial. Separation and Purification Technology 68(2), 227-231.
Ferrate(VI) is an oxidant; under acidic conditions, the redox potential of ferrate(VI) salts is the strongest among all oxidants used for water and wastewater treatment. Ferrate(VI) is also a coagulant; during the oxidation process, ferrate(VI) ions are reduced to Fe(III) ions or ferric hydroxide, which simultaneously generates a coagulant in a single dosing and mixing unit process. The superior performance of ferrate(VI) as an oxidant/disinfectant and coagulant in water and wastewater treatment has been extensively studied. However, challenges have existed for the implementation of ferrate(VI) technology in water and wastewater treatment practice due to the instability of ferrate(VI) solutions and the high preparation cost of solid ferrate(VI). An ideal approach would be to generate ferrate(VI) in situ and apply the generated ferrate(VI) directly for wastewater treatment. This paper reports the online preparation and use of ferrate(VI) for sewage treatment at a pilot scale in a wastewater treatment plant in the UK. The technology has been demonstrated to be promising in terms of removing suspended solids, phosphate, COD and BOD at a very low dose range, 0.005-0.04 mg l-1 as Fe(VI), in comparison with a normal coagulant, ferric sulphate, at high doses ranging between 25 and 50 mg l-1 as Fe(III). In terms of the similar sewage treatment performance achieved, the required dose with ferrate(VI) was 100 times less than that with ferric sulphate. However, the full operating cost needs further assessment before the ferrate(VI) technology can be implemented in full-scale water or wastewater treatment.
Jiang, J.Q. and Wang, S. (2003) Enhanced coagulation with potassium ferrate(VI) for removing humic substances. Environmental Engineering Science 20(6), 627-633.
Potassium ferrate (K2FeO4) has the strongest oxidation strength among all the oxidants/disinfectants practically used for water and wastewater treatment. Apart from this, in the oxidation process, ferrate(VI) ions are reduced to Fe(III) ions or ferric hydroxide, and this simultaneously generates a coagulant in a single dosing and mixing unit process. This paper demonstrates that potassium ferrate can perform better than ferric sulphate at lower doses for treating waters containing humic and fulvic acids in terms of removing UV254 absorbance and dissolved organic carbon (DOC) and lowering the trihalomethane formation potential (THMFP), indicating that the potassium ferrate is a potential water treatment chemical for "enhanced coagulation."
Lee, Y., Um, I.H. and Yoon, J. (2003) Arsenic(III) oxidation by iron(VI) (ferrate) and subsequent removal of arsenic(V) by iron(III) coagulation. Environmental Science & Technology 37(24), 5750-5756.
We investigated the stoichiometry, kinetics, and mechanism of arsenite [As(III)] oxidation by ferrate [Fe(VI)] and performed arsenic removal tests using Fe(VI) as both an oxidant and a coagulant. As(III) was oxidized to As(V) (arsenate) by Fe(VI), with a stoichiometry of 3:2 [As(III):Fe(VI)]. Kinetic studies showed that the reaction of As(III) with Fe(VI) was first-order with respect to both reactants, and its observed second-order rate constant at 25 °C decreased nonlinearly from (3.54 ± 0.24) × 10^5 to (1.23 ± 0.01) × 10^3 M-1 s-1 with an increase of pH from 8.4 to 12.9. A reaction mechanism by oxygen transfer has been proposed for the oxidation of As(III) by Fe(VI). Arsenic removal tests with river water showed that, with a minimum of 2.0 mg L-1 Fe(VI), the arsenic concentration can be lowered from an initial 517 to below 50 μg L-1, which is the regulatory level for As in Bangladesh. From this result, Fe(VI) was demonstrated to be very effective in the removal of arsenic species from water at a relatively low dose level (2.0 mg L-1). In addition, the combined use of a small amount of Fe(VI) (below 0.5 mg L-1) and Fe(III) as the major coagulant was found to be a practical and effective method for arsenic removal.
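Note: the 3:2 As(III):Fe(VI) stoichiometry implies a very small oxidant demand relative to the doses used, consistent with most of the 2.0 mg L-1 dose acting as coagulant. A back-of-the-envelope check (Python); the molar masses are standard values, and expressing the dose "as Fe" follows the abstract's convention:

    M_As = 74.92  # g mol^-1, molar mass of arsenic
    M_Fe = 55.85  # g mol^-1, molar mass of iron

    as_iii = 517e-6 / M_As               # mol L^-1 of As(III) (517 ug/L initial)
    fe_vi_needed = as_iii * (2.0 / 3.0)  # mol L^-1, from the 3:2 As(III):Fe(VI) stoichiometry
    dose_as_fe = fe_vi_needed * M_Fe * 1e3  # mg L^-1 as Fe

    print(f"Stoichiometric Fe(VI) demand ~ {dose_as_fe:.2f} mg/L as Fe")
    # ~0.26 mg/L, well below the 2.0 mg/L found necessary, since the reduced
    # Fe(III) must also serve as the coagulant for As(V) removal.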
Licht, S., X. Yu, and D. Qu. 2008. Electrochemical Fe(VI) Water Purification and Remediation, p. 268-291, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. Fe(VI) generation and oxidation of N species A novel on-line electrochemical Fe(VI) water purification methodology is introduced, which can quickly oxidize and remove a wide range of both inorganic and organic water contaminants. Fe(VI) is an unusual and strongly oxidizing form of iron, which provides a potentially less hazardous water-purifying agent than chlorine. However, a means of Fe(VI) addition to the effluent had been a barrier to its effective use in water remediation, as solid Fe(VI) salts require complex (costly) synthesis steps, and solutions of Fe(VI) decompose. On-line electrochemical Fe(VI) water purification avoids these limitations: Fe(VI) is prepared directly in solution from an iron anode as the FeO42- ion and is added to the contaminant stream. The added FeO42- decomposes by oxidizing water contaminants. Demonstration of this methodology is performed with the inorganic contaminants sulfides, cyanides and arsenic, and with water-soluble organic contaminants including phenol, aniline and hydrazine. In addition, removal of the oxidation products by an activated carbon filter downstream of the on-line configuration is also presented.
Ma, J., W. Liu, Y. Zhang, and C. Li. 2008. Enhanced Removal of Cadmium and Lead from Water by Ferrate Preoxidation in the Process of Coagulation, p. 456-465, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. removal of Cd & Pb by coagulation This paper discussed the effect of ferrate preoxidation on enhanced removal of cadmium and lead from water in the process of coagulation. Some factors affecting the removal of heavy metals were discussed such as pH value, the dosage of ferrate and the water quality condition etc. The results showed that ferrate preoxidation could effectively increase the removal efficiency of lead, whilst a little increase of removal efficiency of cadmium; the removal efficiency increased with the increase of pH. The presence of humic acid greatly affected the removal efficiency of lead in the process of coagulation, but hardly affected the removal efficiency of cadmium. The combined effect of adsorption by intermediate iron species formed in the process of ferrate oxidation and the enhanced coagulation of iron colloids co-precipitated with heavy metals might be responsible for the effective removal of heavy metals.
Licht, S. and Yu, X.W. (2005) Electrochemical alkaline Fe(VI) water purification and remediation. Environmental Science & Technology 39(20), 8071-8076.
Fe(VI) is an unusual and strongly oxidizing form of iron, which provides a potentially less hazardous water-purifying agent than chlorine. A novel on-line electrochemical Fe(VI) water purification methodology is introduced. Fe(VI) addition had been a barrier to its effective use in water remediation, because solid Fe(VI) salts require complex (costly) synthesis steps and solutions of Fe(VI) decompose. On-line electrochemical Fe(VI) water purification avoids these limitations: Fe(VI) is prepared directly in solution from an iron anode as the FeO42- ion and is added to the contaminant stream. The added FeO42- decomposes by oxidizing a wide range of water contaminants including sulfides (demonstrated in this study) and other sulfur-containing compounds, cyanides (demonstrated in this study), arsenic (demonstrated in this study), ammonia and other nitrogen-containing compounds (previously demonstrated), a wide range of organics (phenol demonstrated in this study), algae, and viruses (each previously demonstrated).
Lim, M. and Kim, M.-J. (2010) Effectiveness of Potassium Ferrate (K(2)FeO(4)) for Simultaneous Removal of Heavy Metals and Natural Organic Matters from River Water. Water Air and Soil Pollution 211(1-4), 313-322.
This study investigated how to simultaneously remove both heavy metals (Cu, Mn, and Zn) and natural organic matter (NOM; humic acid and fulvic acid) from river water using potassium ferrate (K2FeO4), a multipurpose chemical acting as oxidant, disinfectant, and coagulant. In water samples containing 0.1 mM of each heavy metal, removal efficiencies ranged over 28-99% for Cu, 22-73% for Mn, and 18-100% for Zn at ferrate(VI) doses of 0.03-0.7 mM (as Fe). The removal efficiency of each heavy metal increased with increasing pH, whereas temperature had no significant effect on the reaction between the heavy metals and ferrate(VI). High efficiency was achieved in the simultaneous treatment of heavy metals (0.1 mM) and NOM (10 mg/l) at ferrate(VI) doses of 0.03-0.7 mM (as Fe): 87-100% (Cu), 31-81% (Mn), 11-100% (Zn), and 33-86% (NOM). In the single-heavy-metal solutions, the optimum ferrate dose for treating 0.1 mM Cu or Mn was 0.1 mM (as Fe), while that for treating 0.1 mM Zn was 0.3 mM (as Fe). In the mixture of the three heavy metals and NOM, on the other hand, 0.5 mM (as Fe) ferrate(VI) was determined to be the optimum dose for removing both the 0.1 mM heavy metals (Cu, Mn, and Zn) and 10 mg/l NOM. Prior to the addition of ferrate(VI) to the solution of heavy metals and NOM (HA or FA), complexes were formed by the reaction between the divalent heavy metal cations and the negatively charged functional groups of NOM, enhancing the removal of both heavy metals and NOM by ferrate(VI).
Liu, W., and Y.-M. Liang. 2008. Use of Ferrate(VI) in Enhancing the Coagulation of Algae-Bearing Water: Effect and Mechanism Study, p. 434-445, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. impacts on algae removal by coagulation and UV absorbance This study found that ferrate preoxidation significantly enhanced algae removal in alum coagulation. A very short preoxidation time, e.g. several minutes, was enough to achieve substantial enhancement of algae removal by ferrate. It was also found that ferrate preoxidation was much more powerful than pre-chlorination in enhancing the coagulation of algae-bearing water. Ferrate oxidation had obvious impacts on the surface architecture of algal cells. Upon oxidation with ferrate, the cells were inactivated and some intracellular and extracellular components were released into the water, which acted as coagulant aids. The coagulation was also improved by the increased particle concentration in the water, owing to the formation of intermediate precipitant iron species during preoxidation. In addition, it was also observed that ferrate preoxidation caused algae agglomerate formation before the addition of coagulant; the subsequent application of alum resulted in further coagulation. Ferrate preoxidation also improved the reduction of residual organic matter in algae-bearing water.
Ma, J. and Liu, W. (2002) Effectiveness and mechanism of potassium ferrate(VI) preoxidation for algae removal by coagulation. Water Research 36(4), 871-878.
Jar tests were conducted to evaluate the effectiveness of potassium ferrate preoxidation on algae removal by coagulation. Laboratory studies demonstrated that pretreatment with potassium ferrate markedly enhanced algae removal by coagulation with alum [Al2(SO4)3·18H2O]. Algae removal efficiency increased remarkably when the water was pretreated with ferrate. A very short preoxidation time was enough to achieve substantial algae removal efficiency, and the effectiveness was further increased with a prolonged pretreatment time. Pretreatment with ferrate resulted in a reduction of the alum dosage required for efficient coagulation of algae. An obvious impact on cell architecture by potassium ferrate was found through scanning electron microscopy. Upon oxidation with ferrate, the cells were inactivated and some intracellular and extracellular components were released into the water, which may aid coagulation through a bridging effect. Efficient removal of algae by potassium ferrate preoxidation is believed to be a consequence of several process mechanisms. Ferrate preoxidation inactivated the algae and induced the formation of coagulant aids, namely the cellular components secreted by the algal cells. The coagulation was also improved by the increased particle concentration in the water, owing to the formation of intermediate precipitant iron species during preoxidation. In addition, it was also observed that ferrate preoxidation caused algae agglomerate formation before the addition of coagulant; the subsequent application of alum resulted in further coagulation.
Ma, J., C. Li, Y. Zhang, and R. Ju. 2008. Combined Process of Ferrate Preoxidation and Biological Activated Carbon Filtration for Upgrading Water Quality, p. 446-455, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. impacts on coagulation wrt NOM removal and size Preoxidation of polluted surface water with ferrate was studied with respect to its impact on the subsequent biofiltration. It was found that preoxidation with ferrate promoted the biodegradation of organics, with substantial reductions in chemical oxygen demand (CODMn) and UV254 absorbance. It was also found that the removal of NH4+-N in the biological activated carbon (BAC) process was substantially improved compared with the cases without ferrate preoxidation and with ozone preoxidation. In addition, experiments were conducted on the effect of potassium ferrate oxidation of raw water from the Songhua River on its molecular weight distribution, in order to further investigate the enhancement by ferrate preoxidation of organics removal. The results indicated that the concentrations of organics with molecular weights (MW) of 10k-100k and of less than 0.5k were substantially increased after the raw water was coagulated with ferrate preoxidation, which suggests that these oxidation products are readily removed by subsequent biofiltration.
White, D.A., and G.S. Franklin. 1998. A preliminary investigation into the use of sodium ferrate in water treatment. Environmental Technology 19:1157-1160. Mn and color removal This paper describes some preliminary tests on the use of sodium ferrate as a reagent for drinking water treatment. The manufacture and purification of the reagent by a low-temperature method is described. Tests were carried out for colour and manganese removal using the ferrate as a flocculant. There is some indication that much less ferrate than ferric flocculant is needed for colour removal. Sodium ferrate is also an effective remover of manganese.
Ramseier, M.K., Peter, A., Traber, J. and von Gunten, U. (2011) Formation of assimilable organic carbon during oxidation of natural waters with ozone, chlorine dioxide, chlorine, permanganate, and ferrate. Water Research 45(5), 2002-2010. AOC formation & discussion on pathways and oxalate yields Five oxidants - ozone, chlorine dioxide, chlorine, permanganate, and ferrate - were studied with regard to the formation of assimilable organic carbon (AOC) and oxalate in the absence and presence of cyanobacteria in lake water matrices. Ozone and ferrate formed significant amounts of AOC, i.e. more than 100 μg/L AOC was formed with 4.6 mg/L ozone or ferrate in water containing 3.8 mg/L dissolved organic carbon. In the same water samples chlorine dioxide, chlorine, and permanganate produced no or only limited AOC. When cyanobacterial cells (Aphanizomenon gracile) were added to the water, an AOC increase was detected with ozone, permanganate, and ferrate, probably due to cell lysis. This was confirmed by the increase of extracellular geosmin, a substance found in the selected cyanobacterial cells. AOC formation by chlorine and chlorine dioxide was not affected by the presence of the cells. The formation of oxalate upon oxidation was found to be a linear function of the oxidant consumption for all five oxidants. The following molar yields were measured in three different water matrices, based on oxidant consumed: 2.4-4.4% for ozone, 1.0-2.8% for chlorine dioxide and chlorine, 1.1-1.2% for ferrate, and 11-16% for permanganate. Furthermore, oxalate was formed in similar concentrations as trihalomethanes during chlorination (yield ~1% based on chlorine consumed). Oxalate formation kinetics and stoichiometry did not correspond to the AOC formation. Therefore, oxalate cannot be used as a surrogate for AOC formation during oxidative water treatment.
Schuck, C.A., De Luca, S.J., Ruaro Peralba, M.d.C. and De Luca, M.A. (2006) Sodium ferrate (IV) and sodium hypochlorite in disinfection of biologically treated effluents. Ammonium nitrogen protection against THMs and HAAs. Journal of Environmental Science and Health Part a-Toxic/Hazardous Substances & Environmental Engineering 41(10), 2329-2343.
ferrate added to treated WW - not much effect The work described in this paper presents an evaluation of disinfection by-product generation in four different biological treatment plant effluents, making use of sodium hypochlorite and sodium ferrate (IV) at varying concentrations and reaction times. Correlations between pH, chemical oxygen demand, total organic carbon, ammonium nitrogen, combined chlorine, trihalomethanes (THMs) and haloacetic acids (HAAs) were carried out. Disinfection by-product generation showed a direct relation with sodium hypochlorite concentration and reaction time. For the highest hypochlorite concentration employed (20 mg L-1) and the longest reaction time (168 h), total THMs did not exceed 312.96 μg L-1, a value that lies below the Brazilian emission standard for treated effluents (1 mg L-1 of chloroform). The THMs presented an inverse correlation with ammonium nitrogen when inverse (R^2 = 0.646; P < 0.001) and exponential (R^2 = 0.707; P < 0.001) functions were used. For the HAAs the same relation was observed for logarithmic (R^2 = 0.0397; P < 0.001) and exponential (R^2 = 0.508; P < 0.001) functions. The more nitrified the effluent, the greater the chlorinated disinfection by-product generation. The disinfectant sodium ferrate (IV) does not lead to halogenated by-product formation.
Jiang, J.-Q. and Sharma, V.K. 2008. The Use of Ferrate(VI) Technology in Sludge Treatment, p. 306-325, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. some disinfection data, control of metals, odors Sludge is generated in large quantities as a byproduct of wastewater treatment processes. Various approaches have been taken to treat sludge, such as land-filling, ocean dumping, or recycling for beneficial purposes. In the USA, about 60% of the sludge generated is land-applied as a soil conditioner or fertilizer. Due to increasing public concern about the safety of land-applied sludge, various sludge treatment technologies are being developed or are under evaluation in order to improve the quality of sludge in terms of pathogen content, odor characteristics, and accumulated organic micro-pollutants. This paper summarizes the results of various reported or ongoing studies on the potential use of ferrate [Fe(VI)O42-] as a conditioning agent for sludge. Ferrate(VI) has high oxidizing potential and selectivity, and upon decomposition produces a non-toxic by-product, Fe(III), which is a conventional coagulant; ferrate(VI) is thus considered to be an environmentally friendly oxidant. Rates of oxidation reactions increase with decreasing pH. Oxidation of sulfur- and amine-containing contaminants in sludge by Fe(VI) can be accomplished in seconds to minutes with formation of non-hazardous products. Ferrate(VI) can also coagulate toxic metals and disinfect a wide range of microorganisms including human pathogens. With its multifunctional properties, ferrate(VI) has potential for sludge treatment.
Stanford, C., Jiang, J.Q. and Alsheyab, M. (2010) Electrochemical Production of Ferrate (Iron VI): Application to the Wastewater Treatment on a Laboratory Scale and Comparison with Iron (III) Coagulant. Water Air and Soil Pollution 209(1-4), 483-488.
removal of SS, BOD, COD & P in WWT This paper presents a comparative study of the performance of ferrate(VI), FeO42-, and ferric iron, Fe(III), in wastewater treatment. The ferrate(VI) was produced by electrochemical synthesis, using steel electrodes in a 16 M NaOH solution. Domestic wastewater collected from Hailsham North Wastewater Treatment Works was treated with ferrate(VI) and ferric sulphate (Fe(III)). Samples were analysed for suspended solids, chemical oxygen demand (COD), biochemical oxygen demand (BOD) and P removal. Results for low doses of Fe(VI) were validated via a reproducibility study. Removal of phosphorus reached 40% with a Fe(VI) dose as low as 0.01 mg/L, compared to 25% removal with 10 mg/L of Fe(III). For lower doses (< 1 mg/L as Fe), Fe(VI) achieved between 60% and 80% removal of SS and COD, while Fe(III) performed even worse than the control sample, to which no iron chemical was dosed. The ferrate solution was found to be stable for a maximum of 50 min, beyond which Fe(VI) is reduced to less oxidizing species. This defines the maximum allowable storage time of the electrochemically produced ferrate(VI) solution. The results demonstrated that low additions of ferrate(VI) lead to good removal of P, BOD, COD and suspended solids from wastewater compared to ferric addition, and further studies could optimise the dosage and treatment.
Filip, J., Yngard, R.A., Siskova, K., Marusak, Z., Ettler, V., Sajdl, P., Sharma, V.K. and Zboril, R. (2011) Mechanisms and Efficiency of the Simultaneous Removal of Metals and Cyanides by Using Ferrate(VI): Crucial Roles of Nanocrystalline Iron(III) Oxyhydroxides and Metal Carbonates. Chemistry-a European Journal 17(36), 10097-10105.
study of precipitates, XPS, TEM, FTIR data The reaction of potassium ferrate(VI), K(2)FeO(4), with weak-acid dissociable cyanides-namely, K(2)[Zn(CN)(4)], K(2)[Cd(CN)(4)], K(2)[Ni(CN)(4)], and K(3)[Cu(CN)(4)]-results in the formation of iron(III) oxyhydroxide nanoparticles that differ in size, crystal structure, and surface area. During cyanide oxidation and the simultaneous reduction of iron(VI), zinc(II), copper(II), and cadmium(II), metallic ions are almost completely removed from solution due to their co-precipitation with the iron(III) oxyhydroxides including 2-line ferrihydrite, 7-line ferrihydrite, and/or goethite. Based on the results of XRD, Mossbauer and IR spectroscopies, as well as TEM, X-ray photoelectron emission spectroscopy, and Brunauer-Emmett-Teller measurements, we suggest three scavenging mechanisms for the removal of metals including their incorporation into the ferrihydrite crystal structure, the formation of a separate phase, and their adsorption onto the precipitate surface. Zn and Cu are preferentially and almost completely incorporated into the crystal structure of the iron(III) oxyhydroxides; the formation of the Cd-bearing, X-ray amorphous phase, together with Cd carbonate is the principal mechanism of Cd removal. Interestingly, Ni remains predominantly in solution due to the key role of nickel(II) carbonate, which exhibits a solubility product constant several orders of magnitude higher than the carbonates of the other metals. Traces of Ni, identified in the iron(III) precipitate, are exclusively adsorbed onto the large surface area of nanoparticles. We discuss the relationship between the crystal structure of iron(III) oxyhydroxides and the mechanism of metal removal, as well as the linear relationship observed between the rate constant and the surface area of precipitates.
Joshi, U.M., Balasubramanian, R. and Sharma, V.K. 2008. Potential of Ferrate(VI) in Enhancing Urban Runoff Water Quality, p. 466-476, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society. review of treatment effectiveness with focus on runoff Urban development and increasing water demand are putting considerable stress on existing water resources around the world. A great deal of attention is now paid to alternative sources of water such as stormwater catchment systems, as they serve multi-purpose functions. However, human activities introduce a variety of contaminants into stormwater catchments, which affect the quality of the water to be used for both potable and non-potable purposes. Environmentally friendly treatment technologies are needed to treat and use urban runoff without negative impacts on the environment. Ferrate(VI) technology has the potential to be one of the most environmentally friendly water treatment technologies of the twenty-first century. Ferrate(VI) has advantages in treating heavy metals (e.g., Pb2+, Cd2+, Cr3+, Hg2+, Cu2+, and Mn2+), suspended particles, synthetic/natural organic matter (present as TOC, BOD and COD), and microorganisms (e.g., bacteria and viruses), without producing chlorinated by-products. An Ames test on ferrate(VI)-treated water gave negative results, suggesting no mutagenic by-products. Uniquely, ferrate(VI) performs distinctly different treatment functions (oxidation, coagulation, flocculation, and disinfection) from the application of a single dose, thus providing a simplified and cost-effective process.
Licht, S., Yu, X., (2008) Recent Advances in Fe(VI) Synthesis, Ferrates, 985, ACS Symposium Series, pp. 2-51. American Chemical Society.
extensive review of chemical and electrochemical methods The synthesis and analysis of a range of Fe(VI) compounds are presented. Fe(VI) compounds have also been variously referred to as ferrates or super-iron compounds. Fe(VI) salts with detailed syntheses in this paper include the alkali Fe(VI) salts high purity Cs2FeO4, Rb2FeO4, and KxNa(2-x)FeO4, low purity Li2FeO4, as well as high purity alkali earth Fe(VI) salts BaFeO4, SrFeO4, and also Ag2FeO4. Two conventional, as well as two improved Fe(VI) synthetic routes are presented. The conventional syntheses include solution phase oxidation (by hypochlorite) of Fe(III), and the synthesis of less soluble super-irons by dissolution of FeO42-, and precipitation with alternate cations. The new routes include a solid synthesis route for Fe(VI) salts and the electrochemical synthesis (include in-situ & ex-situ synthesis) of Fe(VI) salts. Fe(VI) analytical methodologies summarized are FTIR, ICP, titrimetric, UV/VIS, XRD, Mössbauer and a range of electrochemical analyses. Fe(VI) compounds have been explored as energy storage cathode materials in both aqueous and non-aqueous phase in "super-iron" battery configurations, as well as novel oxidants for synthesis and water treatment purification. Preparation of reversible Fe(VI/III) thin film towards a rechargeable super-iron cathode is also presented. In addition, the preparation of unusual KMnO4 and zirconia coatings on Fe(VI) salts, via organic solvent deposition, is summarized. These coatings can stabilize and activate Fe(VI) salts in contact with alkaline media.
Ninane, L., Kanari, N., Criado, C., Jeannot, C., Evrard, O., Neveux, N., (2008) New Processes for Alkali Ferrate Synthesis, Ferrates, 985, ACS Symposium Series, pp. 102-111. method for use in water treatment A new process for the synthesis of the potassium ferrate(VI) salt was developed at the University of Nancy, France, under an EEC program started in 2001. This program had the objective of synthesizing a large quantity of ferrate in order to feed large-scale applications in the field of water treatment such as drinking water, municipal wastewater, and industrial wastewater treatment. The raw materials used were ferrous sulphate, potassium hydroxide, and calcium hypochlorite (or chlorine). In this process, the mixing of three solids took place in a mixer in which the potassium ferrate(VI) salt was stabilized. Another objective of the program was to develop the synthesis of solid sodium ferrate(VI), which is cheaper to produce because its preparation requires less expensive materials: caustic soda instead of potassium hydroxide and sodium hypochlorite (or chlorine gas) instead of calcium hypochlorite. The final objective was to develop a better technology, which could be cheaper and easier to scale up. This chapter also describes successful results of lab and small pilot tests.
Alsheyab, M., Jiang, J.Q. and Stanford, C. (2009) On-line production of ferrate with an electrochemical method and its potential application for wastewater treatment - A review. Journal of Environmental Management 90(3), 1350-1356.
A number of studies on the oxidation of various organic/inorganic contaminants by ferrate(VI) were reported in the 1980s and 1990s. The exploration of the use of ferrate(VI) for water and wastewater treatment has been well addressed recently. However, challenges have existed for the implementation of ferrate(VI) technology in practice due to the instability of ferrate solutions and the high production cost of solid ferrate products. Research has been carried out aiming at the generation and application of ferrate(VI) in situ. This paper thus reviews ferrate chemistry and its overall performance as a water treatment chemical, discusses the factors affecting the ferrate yield efficiency of the electrochemical method, and finally summarises the work on the production and use of ferrate in situ that is currently under study.
Alsheyab, M., Jiang, J.Q. and Stanford, C. (2010) Engineering Aspects of Electrochemical Generation of Ferrate: A Step Towards Its Full Scale Application for Water and Wastewater Treatment. Water Air and Soil Pollution 210(1-4), 203-210. The objective of this paper is to design a pilot plant electrochemical reactor and to prove the operational concept of the electrochemical production of ferrate in situ and its online application for sewage treatment. To that end, the first part of this paper focuses on the analysis of the main engineering aspects of the reactor and the electrochemical process that affect the ferrate production, using laboratory scale experiments such as the interelectrode gap, the space-time yield, the area/volume (A/V) ratio, the current efficiency, and the energy consumption. The second part focuses on the production of ferrate using a pilot plant scale to prove the operational concept of the electrochemical generation of ferrate in situ and its online application as a step towards its full scale application for water and wastewater treatment.
Alsheyab, M., Jiang, J.Q. and Stanford, C. (2010) Electrochemical generation of ferrate (VI): Determination of optimum conditions. Desalination 254(1-3), 175-178.
The optimum conditions for the electrochemical generation of ferrate were analysed using a laboratory-scale electrochemical reactor with NaOH as the electrolyte and a reaction duration of 25 min. The criteria used for comparison were the current efficiency and the energy consumption. The experiments conducted in this project showed that, of the 10 applied currents studied, the optimum current density was 36 A/m^2; of the four NaOH concentrations studied, the optimum was 16 M; and of the three steels studied, the optimum carbon content (C%) was 0.11%.
Chengchun, J., Chen, L., Shichao, W., (2008) Preparation of Potassium Ferrate by Wet Oxidation Method Using Waste Alkali: Purification and Reuse of Waste Alkali, Ferrates, 985, ACS Symposium Series, pp. 94-101. A new method of preparing potassium ferrate using waste alkali is developed in this report. After preparation of potassium ferrate by the wet oxidation method, the waste alkali was purified and reused for further preparation runs. The purification of the waste alkali and the temperature for the purification were studied. The results indicated that the waste alkali can be used for preparing potassium ferrate, and the purity and yield of the potassium ferrate product remained steadily above 90% and 60%, respectively, after ten recycles of the waste alkali. Therefore, due to the use of waste alkali, the cost is reduced sharply, and a green synthesis of potassium ferrate is achieved.
Benová, M., Híveš, J., Bouzek, K., Sharma, V.K., (2008) Electrochemical Ferrate(VI) Synthesis: A Molten Salt Approach, Ferrates, 985, ACS Symposium Series, pp. 68-80. The electrochemical synthesis of ferrate(VI) was studied for the first time in a molten salt environment. A eutectic NaOH-KOH melt at a temperature of 200 °C was selected as the most appropriate system for the synthesis. Cyclic voltammetry was used to characterize the processes taking place on stationary platinum (gold) or iron electrodes. The identified anodic current peak corresponding to ferrate(VI) production was close to the potential region at which oxygen evolution begins. During the reverse potential scan, a well-defined cathodic current peak corresponding to ferrate(VI) reduction appears. However, the peak was shifted to a less cathodic potential than that observed for electrolysis in aqueous solutions. This indicates less progressive anode inactivation in a molten salt environment.
Li, C., Li, X.Z. and Graham, N. (2005) A study of the preparation and reactivity of potassium ferrate. Chemosphere 61(4), 537-543.
preferred synthetic method for Graham's group In the context of water treatment, the ferrate ([FeO4](2-)) ion has long been known for its strong oxidizing power and for producing a coagulant from its reduced form (i.e. Fe(III)). However, it has not been studied extensively owing to difficulties with its preparation and its instability in water. This paper describes an improved procedure for preparing solid phase potassium ferrate of high purity (99%) and with a high yield (50-70%). The characteristics of solid potassium ferrate were investigated and from XRD spectra it was found that samples of the solid have a tetrahedral structure with a space group of D-2h (Pnma) and a = 7.705 angstrom, b = 5.863 angstrom, and c = 10.36 angstrom. The aqueous stability of potassium ferrate at various pH values and different concentrations was investigated. It was found that potassium ferrate solution had a maximum stability at pH 9-10 and that ferrate solution at low concentration (0.25 mM) was more stable than at high concentration (0.51 mM). The aqueous reaction of ferrate with bisphenol A (BPA), a known endocrine disrupter compound, was also investigated with a molar ratio of Fe(VI):BPA in the range of 1:1-5:1. The optimal pH for BPA degradation was 9.4, and at this pH and a Fe(VI):BPA molar ratio of 5: 1, approximately 90% of the BPA was degraded after 60 s.
Sanchez-Carretero, A., Saez, C., Canizares, P., Cotillas, S. and Rodrigo, M.A. (2011) Improvements in the Electrochemical Production of Ferrates with Conductive Diamond Anodes Using Goethite as Raw Material and Ultrasound. Industrial & Engineering Chemistry Research 50(11), 7073-7076.
The improvement of the electrochemical production of ferrates with conductive diamond anodes using goethite as the raw material and ultrasound was studied. Conductive-diamond electrolysis of goethite suspensions leads to a higher ferrate concentration than that of Fe(OH)3 suspensions or an iron-powder bed. With this raw material, the ferrate concentration increases continuously at a constant production rate, after a first, more efficient stage in which the results are similar to those of the other two raw materials. In addition, the ferrate production rate is significantly improved in the sonoelectrochemical system, suggesting that the amount of soluble iron ready to be oxidized to ferrate increases with the ultrasound effect. The initial iron concentration only seems to influence the first stage of the process. On the other hand, the ferrate concentration depends strongly on the hydroxide ion concentration: the higher the hydroxide concentration, the higher the ferrate concentration.
Zheng, H.-l., Deng, L.-l., Ji, F.-y., Jiang, S.-j. and Zhang, P. (2010) A New Method for the Preparation of Potassium Ferrate and Spectroscopic Characterization. Spectroscopy and Spectral Analysis 30(10), 2646-2649.
XRD, IR, UV characterization - in Chinese Calcium hypochlorite was used as the raw material for the preparation of high-purity potassium ferrate. The study examined the effects of reaction temperature, recrystallization temperature, reaction time, and Ca(ClO)2 dosage on the yield. It was determined that when the reaction temperature was 25 degrees C, the recrystallization temperature 0 degrees C and the reaction time 40 min, the yield was more than 75%. The purity was determined by a direct spectrophotometric method to be more than 92%. The product was characterized by infrared (IR), X-ray diffraction (XRD) and ultraviolet (UV) spectroscopic methods and confirmed to be potassium ferrate prepared from calcium hypochlorite as the raw material.
Delaude, L. and Laszlo, P. (1996) A novel oxidizing reagent based on potassium ferrate(VI). Journal of Organic Chemistry 61(18), 6360-6370.
synthetic method used by Strathmann & several Korean groups A new, efficient preparation has been devised for potassium ferrate(VI) (K2FeO4). The ability of this high-valent iron salt to oxidize organic substrates in nonaqueous media was studied. Using benzyl alcohol as a model, the catalytic activity of a wide range of microporous adsorbents was ascertained. Among numerous solid supports of the aluminosilicate type, the K10 montmorillonite clay was found to be best at achieving quantitative formation of benzaldehyde, without any overoxidation to benzoic acid. The roles of the various parameters (reaction time and temperature, nature of the solvent, method of preparation of the solid reagent) were investigated. The evidence points to a polar reaction mechanism. The ensuing procedure was applied successfully, at room temperature, to the oxidation of a series of alcohols to aldehydes and ketones, to the oxidative coupling of thiols to disulfides, and to the oxidation of nitrogen derivatives. At 75 degrees C, the reagent has the capability of oxidizing both activated and nonactivated hydrocarbons. Toluene is turned into benzyl alcohol (and benzaldehyde). Cycloalkanes are also oxidized, in significant (30-40%) yields, to the respective cycloalkanols (and cycloalkanones). Thus, potassium ferrate, used in conjunction with an appropriate heterogeneous catalyst, is a strong and environmentally friendly oxidant.
Machala, L., R. Zboril, V. K. Sharma, J. Filip, O. Schneeweiss, J. Madarasz, Z. Homonnay, G. Pokol, and R. Yngard. 2008. Thermal Stability of Solid Ferrates(VI): A Review, p. 124-144, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society.
This review critically summarizes currently known results concerning the thermal decomposition of the most frequently used ferrate(VI) salts (K2FeO4, BaFeO4, Cs2FeO4). Parameters important in the thermal decomposition of solid ferrates(VI) include the initial purity of the sample, the presence of adsorbed and/or crystal water, reaction atmosphere and temperature, crystallinity, phase transitions, and secondary transformation of the decomposition products. The confirmation and identification of metastable phases formed during thermal treatment can be difficult using standard approaches, and in some cases an in-situ experimental approach is necessary to better understand the decomposition mechanism. Generally, solid ferrates(VI) were found to be unstable at temperatures above 200 °C, where a one-step reduction accompanied by oxygen evolution usually proceeds. The best known and most used ferrate(VI) salt, potassium ferrate(VI) (K2FeO4), decomposes at high temperatures to potassium orthoferrate(III) (KFeO2) and potassium oxides. The resulting phase composition of a sample heated in air can be affected by accompanying secondary reactions involving atmospheric CO2 and H2O. The thermal decomposition of barium ferrate(VI) (BaFeO4), however, is not sensitive to the constituents of air; it mostly reduces to non-stoichiometric BaFeOx (2.5 < x < 3) perovskite-like phases that are stable under ordinary conditions. Such phases contain iron atoms in the +4 oxidation state, which is the main difference between the decomposition mechanisms of K2FeO4 and BaFeO4.
Lee, Y., Yoon, J. and von Gunten, U. (2005) Spectrophotometric determination of ferrate (Fe(VI)) in water by ABTS. Water Research 39(10), 1946-1953.
Preferred analytical method for ferrate in treated waters?
A new method for the determination of low concentrations (0.03-35 µM) of aqueous ferrate (Fe(VI)) was developed. The method is based on the reaction of Fe(VI) with 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonate) (ABTS), which forms a green radical cation (ABTS•+) that can be measured spectrophotometrically at 415 nm (ABTS method). The reaction of Fe(VI) with ABTS has a 1:1 stoichiometry in excess ABTS (73 µM). The increase in absorbance at 415 nm from ABTS•+ generation was linear with respect to the Fe(VI) added (0.03-35 µM) in buffered solutions (acetate/phosphate buffer at pH 4.3), with a slope of (3.40 ± 0.05) × 10^4 M^-1 cm^-1. The reaction of Fe(VI) with ABTS was very rapid, with a half-life below 0.01 s at pH 4.3 and 73 µM ABTS. This enables the ABTS method to measure Fe(VI) selectively. The residual absorbance of ABTS•+ was found to be stable in several water matrices (synthetic buffer solution and natural waters), and concentrations of Fe(VI) spiked into natural waters could be determined with high accuracy. The ABTS method can also be used as a tool to determine rate constants of Fe(VI) reactions. The second-order rate constant for the reaction of phenol with Fe(VI) was determined to be 90 M^-1 s^-1 at pH 7.
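The calibration above amounts to a Beer-Lambert calculation. A minimal sketch, assuming the reported 1:1 stoichiometry and the 3.40 × 10^4 M^-1 cm^-1 slope at 415 nm; the function name and the 1 cm default path length are illustrative, not from the paper:

```python
# Estimate aqueous Fe(VI) from the ABTS assay of Lee et al. (2005),
# taking the reported calibration slope at face value.

EPSILON_ABTS = 3.40e4  # M^-1 cm^-1, effective slope at 415 nm

def ferrate_conc_uM(absorbance_415: float, path_cm: float = 1.0) -> float:
    """Return the Fe(VI) concentration in micromolar via Beer-Lambert."""
    conc_M = absorbance_415 / (EPSILON_ABTS * path_cm)
    return conc_M * 1e6

# Example: A = 0.34 in a 1 cm cell corresponds to ~10 uM Fe(VI).
print(f"{ferrate_conc_uM(0.34):.1f} uM")
```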
Golovko, D.A., Sharma, V.K., Pavlova, O.V., Belyanovskaya, E.A., Golovko, I.D., Suprunovich, V.I. and Zboril, R. (2011) Determination of submillimolar concentration of ferrate(VI) in alkaline solutions by amperometric titration. Central European Journal of Chemistry 9(5), 808-812.
A new amperometric titration method was developed for the quantitative determination of ferrate(VI) (FeO4^2-) in the 7.06 × 10^-5 to 5.73 × 10^-3 M concentration range. Chromium(III) hydroxide solution was used as the titrant. The diffusion current (Id) was linearly related to the ferrate(VI) concentration, with slopes dependent on the NaOH concentration. Amperometric titration could detect lower concentrations of ferrate(VI) than potentiometric and colorimetric titrations. The method was applied successfully to determine concentrations of electrochemically generated ferrate(VI) in strongly alkaline solutions.
Licht, S., Naschitz, V., Halperin, L., Halperin, N., Lin, L., Chen, J.J., Ghosh, S. and Liu, B. (2001) Analysis of ferrate(VI) compounds and super-iron Fe(VI) battery cathodes: FTIR, ICP, titrimetric, XRD, UV/VIS, and electrochemical characterization. Journal of Power Sources 101(2), 167-176.
Chemical and electrochemical techniques are presented for the analysis of Fe(VI) compounds used in super-iron electrochemical storage cells. The Fe(VI) analytical methodologies summarized are FTIR, ICP, titrimetric, UV/VIS, XRD, potentiometric, galvanostatic, cyclic voltammetry, and constant-load, constant-current or constant-power electrochemical discharge probes. The investigated FTIR methodology becomes quantitative with the introduction of an internal standard such as added barium sulfate. Electrochemical techniques that utilize a solid cathode, and spectroscopic techniques that utilize a solid sample, are preferred over solution-phase techniques. The titrimetric methodology (chromite analysis) has been detailed and adjusted to determine the extent of Fe(VI -> III) oxidation power in both unmodified and coated Fe(VI) compounds. Fe(VI) compounds have also been variously referred to as ferrates or super-iron compounds, and include K2FeO4 and BaFeO4. Such compounds are highly oxidizing, and in the aqueous phase the full three-electron cathodic charge capacity has been realized, as summarized by reactions such as $\mathrm{FeO_4^{2-}} + \tfrac{5}{2}\mathrm{H_2O} + 3e^- \rightarrow \tfrac{1}{2}\mathrm{Fe_2O_3} + 5\mathrm{OH^-}$.
Noorhasan, N. N., Sharma, V. K. and Baum, J. C. (2008) A Fluorescence Technique to Determine Low Concentrations of Ferrate(VI), Ferrates, 985, ACS Symposium Series, pp. 145-155. American Chemical Society.
A fluorescence technique to determine low concentrations of aqueous ferrate(VI) (FeO4^2-) in water was developed over a wide pH range, using the reaction of ferrate(VI) with the reagent scopoletin. The rates of the reaction of ferrate(VI) with scopoletin as a function of pH at 25 °C were determined using the stopped-flow technique, showing that the reaction is rapid (< 1 min). Spectral measurements on scopoletin at different pH showed that the absorption maximum varies with pH while the emission maximum is independent of pH. The absorbance measurements were used to determine the acid dissociation constant of scopoletin, Ka = (1.55 ± 0.01) × 10^-9 (pKa = 8.81 ± 0.05). The fluorescence intensity of scopoletin decreases linearly with increasing ferrate(VI) concentration, which indicates the suitability of the method. Moreover, a relatively large decrease in intensity per micromolar of ferrate(VI) was observed, especially at low pH, which makes fluorescence a sensitive technique for determining low ferrate(VI) concentrations.
Sharma, V. K., and B. V. N. Chenay. 2008. Heterogeneous Photocatalytic Reduction of Iron(VI): Effect of Ammonia and Formic Acid, p. 350-363, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society.
Photo-TiO2.
Ammonia is a potential pollutant that can contribute to the eutrophication of rivers and lakes, and its removal is thus becoming an important issue. Formic acid is a byproduct of many industrial processes, and its removal from wastewater is of great interest. The removal of ammonia and formic acid was sought by studying their heterogeneous photocatalytic oxidation in UV-irradiated TiO2 suspensions with and without Fe(VI) (FeO4^2-) at pH 9.0. The kinetics of the reactions were determined by monitoring both the reduction of Fe(VI) and the oxidation of ammonia/formic acid. The initial rate of Fe(VI) reduction (R) can be expressed as R = k_Fe(VI) [Fe(VI)]^m, where m = 1.25 ± 0.03 for ammonia and 0.70 ± 0.06 for formic acid. The rate constant k_Fe(VI) depends on the concentration of ammonia or formic acid. For the oxidation of ammonia, k_Fe(VI) = [Ammonia]/(a[Ammonia] + b), with a = 6.0 × 10^3 µM^0.25 s and b = 4.1 × 10^6 µM^1.25 s. For the photocatalytic oxidation of formic acid, k_Fe(VI) showed a positive linear dependence, k_Fe(VI) = 2.41 × 10^-3 + 1.58 × 10^-7 [Formic Acid]. The rates of oxidation of ammonia and formic acid in TiO2/UV suspensions were enhanced in the presence of Fe(VI). The results suggest the photocatalytic production of a highly reactive species, Fe(V), a powerful oxidant, which oxidizes ammonia and formic acid. A combination of Fe(VI) and the TiO2 photocatalyst has the potential to enhance the oxidation of pollutants in the aquatic environment.
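The empirical rate expressions quoted in this abstract can be evaluated directly. A minimal sketch, taking the fitted constants at face value with concentrations in micromolar; the function names are illustrative, and the output units follow whatever units the fitted constants carry:

```python
# Empirical photocatalytic rate expressions from the abstract above
# (Sharma & Chenay, 2008). Concentrations are in micromolar.

def k_fe6_ammonia(ammonia_uM: float) -> float:
    """k_Fe(VI) for NH3 oxidation: k = [NH3] / (a*[NH3] + b)."""
    a, b = 6.0e3, 4.1e6
    return ammonia_uM / (a * ammonia_uM + b)

def k_fe6_formic(formic_uM: float) -> float:
    """k_Fe(VI) for formic acid: positive linear dependence."""
    return 2.41e-3 + 1.58e-7 * formic_uM

def initial_rate(k: float, fe6_uM: float, m: float) -> float:
    """Initial Fe(VI) reduction rate R = k * [Fe(VI)]^m."""
    return k * fe6_uM ** m

# Example: 500 uM ammonia, 100 uM Fe(VI), m = 1.25
print(initial_rate(k_fe6_ammonia(500.0), fe6_uM=100.0, m=1.25))
```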
Cabelli, D. E., and V. K. Sharma. 2008. Aqueous Ferrate(V) and Ferrate(IV) in Alkaline Medium: Generation and Reactivity, p. 158-166, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society.
Fe(IV) and Fe(V).
This chapter reviews the generation of ferrate(V) and ferrate(IV) complexes in basic solutions. Ferrate(V) (FeO4^3-) is easily produced by the one-electron reduction of the relatively stable FeO4^2- ion. By comparison, generation of a ferrate(IV) complex via one-electron oxidation of Fe(III) is rather difficult, owing to the relative insolubility of Fe(III) hydroxides and the slow oxidation rate; this has limited studies of ferrate(IV) reactivity. The most studied aquated ferrate(IV) complex is ferrate(IV)-pyrophosphate. The reactivity of ferrate(IV) and ferrate(V) complexes with inorganic and organic substrates in alkaline solution is presented. The reactions of the ferrate(IV)-pyrophosphate complex with pyrophosphate complexes of divalent metal ions likely occur through inner-sphere electron transfer. Ferrate(V) reacts with substrates predominantly via a two-electron transfer to Fe(III). The only known example of one-electron reduction of ferrate(V) is its reaction with cyanide, in which sequential reduction of Fe(V) to Fe(IV) to Fe(III) was demonstrated. The reaction of Fe(V) with cyanide thus provides an opportunity for the selective and unambiguous production of quantitative amounts of Fe(IV) in aqueous media.
Pestovsky, O., and A. Bakac. 2008. Identification and Characterization of Aqueous Ferryl(IV) Ion, p. 167-176, In V. K. Sharma, ed. Ferrates, Vol. 985. American Chemical Society.
Fe(IV).
The reaction between ferrous ions and ozone in acidified aqueous solution generates a short-lived species (t½ ≈ 7 s), identified as the high-spin pentaaquairon(IV) oxo dication (ferryl) by UV-Vis, Mössbauer, and XAS spectroscopies, DFT calculations, 18O isotopic labeling, and conductometric kinetic studies. Kinetic and 18O isotopic labeling studies were used to determine the rate constant for oxo-group exchange between ferryl and solvent water, k_ex = 1400 s^-1. Oxidation of alcohols, aldehydes, and ethers by ferryl occurs by simultaneous hydrogen atom and hydride transfer mechanisms. Ferryl was also found to be an efficient oxygen atom transfer reagent in reactions with sulfoxides, a water-soluble phosphine, and a thiolato complex of cobalt(III). A quantitative and fast reaction between ferryl and DMSO (k_DMSO = 1.3 × 10^5 M^-1 s^-1) produces dimethyl sulfone. This and other findings unambiguously rule out ferryl as a Fenton intermediate.
Anquandah, George Aloysius Kofi. "Ferrate(VI) Oxidation of Trimethoprim, Atenolol, Propranolol, Nonylphenol and Octylphenol." Florida Institute of Technology, 2011.
Kinetic studies of the reactions of ferrate(VI) (FeO4^2-, Fe(VI)) with these compounds were carried out in the acidic to basic pH range at 25 °C, in order to assess their removal in water. The reactions were first order with respect to the concentrations of both Fe(VI) and the compounds; hence the overall reactions followed second-order kinetics. The second-order rate constants were pH dependent, with values in the ranges (1.9-0.01) × 10^2 M^-1 s^-1, (4.3-12.0) × 10^0 M^-1 s^-1, (1.7-1.4) × 10^1 M^-1 s^-1, (2.70-0.4) × 10^2 M^-1 s^-1, and (3.2-0.1) × 10^2 M^-1 s^-1 for TMP, ATL, PPL, NP, and OP, respectively. The pH dependence was explained by considering the speciation of Fe(VI) and of the individual compounds. The half-lives for Fe(VI) oxidation of the compounds ranged from ~10 seconds to ~30 minutes at pH 7.0 using a 10 mg/L Fe(VI) dose. The stoichiometries of Fe(VI) to TMP and PPL were molar ratios of 5:1 and 4:1, respectively. Oxidation products of the reactions of Fe(VI) with TMP were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Reaction pathways for the oxidation of TMP by Fe(VI) are proposed, showing cleavage of the TMP molecule to ultimately give 3,4,5-trimethoxybenzaldehyde and 2,4-dinitropyrimidine among the final identified products. The oxidized product mixture exhibited no antibacterial activity against Escherichia coli after complete consumption of TMP. Removal of TMP in secondary effluent by Fe(VI) was achieved.
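The quoted half-lives follow from pseudo-first-order behaviour when Fe(VI) is in excess: t1/2 = ln 2 / (k2 [Fe(VI)]). A minimal sketch, assuming the 10 mg/L dose is expressed as Fe (55.85 g/mol); adjust the molar mass if the dose is as FeO4^2- or K2FeO4:

```python
import math

# Pseudo-first-order half-life of a trace pollutant under excess Fe(VI).

M_FE = 55.85  # g/mol, assumes the dose is expressed as Fe

def half_life_s(k2_M_s: float, dose_mg_L: float = 10.0) -> float:
    """t_1/2 = ln 2 / (k2 * [Fe(VI)]), with the dose converted to mol/L."""
    fe6_M = dose_mg_L / 1000.0 / M_FE
    return math.log(2) / (k2_M_s * fe6_M)

# Example: k2 = 190 M^-1 s^-1 (the upper end quoted for TMP) -> ~20 s,
# consistent with the "~10 seconds to ~30 minutes" range above.
print(f"{half_life_s(190.0):.0f} s")
```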
Hu, Lanhua. "Oxidative Treatment of Emerging Micropollutants and Viral Pathogens by Potassium Permanganate and Ferrate: Kinetics and Mechanisms." University of Illinois at Urbana-Champaign, 2011.
Survey tests show that permanganate and ferrate are both selective oxidants that target compounds with specific electron-rich moieties, including olefin, phenol, amine, cyclopropyl, thioether, and alkyne groups. Detailed kinetic studies were undertaken to characterize Mn(VII) oxidation of five representative PhACs that exhibit moderate to high reactivity (carbamazepine, CBZ; ciprofloxacin, CPR; lincomycin, LCM; trimethoprim, TMP; and 17α-ethinylestradiol, EE2), Fe(VI) oxidation of one representative PhAC (CBZ), and Fe(VI) inactivation of MS2 phage (Fe(VI) reactions with other PhACs were not examined because recent literature reports had addressed the topic). The Mn(VII) and Fe(VI) reactions with the PhACs and MS2 phage examined were found to follow generalized second-order rate laws, first order in oxidant concentration and first order in target contaminant concentration. The temperature dependence of the reaction rate constants followed the Arrhenius equation. Changing the solution pH had varying effects on reaction rates, attributed to changes in electron density on the target reactive groups upon protonation/deprotonation. The effects of pH on reaction rates were quantitatively described by kinetic models considering parallel reactions between the individual contaminant species and the individual oxidant species. For Mn(VII) reactions, removal of PhACs in drinking water utility source waters was generally well predicted by kinetic models that include temperature, KMnO4 dosage, pH, and source water oxidant demand as input parameters.
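The parallel-species model described here has a simple closed form: the apparent second-order rate constant is a speciation-weighted sum over the acid-base species of oxidant and target. A minimal sketch; the pKa values and species rate constants below are placeholders for illustration, not fitted values from this work:

```python
# Apparent second-order rate constant as a speciation-weighted sum
# over oxidant species i and contaminant species j: k_app = sum a_i*b_j*k_ij.

def fraction_acid(pH: float, pKa: float) -> float:
    """Fraction of the protonated form of a monoprotic acid."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

def k_apparent(pH, pKa_ox, pKa_sub, k_species):
    a = fraction_acid(pH, pKa_ox)    # e.g. protonated oxidant (HFeO4-)
    b = fraction_acid(pH, pKa_sub)   # e.g. neutral contaminant
    alphas = {0: a, 1: 1.0 - a}
    betas = {0: b, 1: 1.0 - b}
    return sum(alphas[i] * betas[j] * k for (i, j), k in k_species.items())

# Hypothetical species rate constants, M^-1 s^-1
k_ij = {(0, 0): 1e2, (0, 1): 1e4, (1, 0): 1e0, (1, 1): 1e1}
print(k_apparent(7.0, pKa_ox=7.2, pKa_sub=10.0, k_species=k_ij))
```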
A large number of reaction products from Mn(VII) oxidation of CBZ, CPR, LCM, TMP, and EE2 and from Fe(VI) oxidation of CBZ were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Structures of the reaction products were proposed based on the MS spectral data along with information from proton nuclear magnetic resonance (1H-NMR), chromatographic retention times, and the reported literature on Mn(VII) reactions with specific organic functional groups. Mn(VII) and Fe(VI) rapidly oxidize CBZ by electrophilic attack at the olefinic group on the central heterocyclic ring. Mn(VII) oxidation of CPR was found to occur primarily at the tertiary aromatic amine group on the piperazine ring, with minor reactions at the aliphatic amine and the cyclopropyl group. LCM was oxidized by Mn(VII) through the aliphatic amine group on the pyrrolidine ring and the thioether group attached to the pyranose ring. TMP oxidation by Mn(VII) was proposed to occur at the C=C bonds on the pyrimidine ring and at the bridging methylene group. EE2 oxidation by Mn(VII) resulted in several types of products, including dehydrogenated EE2, hydroxylated EE2, phenolic ring cleavage products, and products with structural modifications of the ethynyl group. Although little mineralization of PhAC solutions was observed after Mn(VII) treatment, results from bioassay tests of three antibiotics show that the antibacterial activity was effectively removed upon reaction with Mn(VII), demonstrating that incomplete oxidation of PhACs during Mn(VII) treatment will likely be sufficient to eliminate the pharmaceutical activity of impacted source waters. Overall, the results show that reactions with Mn(VII) likely contribute to the fate of many PhACs in water treatment plants that currently use Mn(VII), and the kinetic model developed in this study can be used to predict the extent of PhAC removal by Mn(VII) treatment. For water contaminated with highly Mn(VII)-reactive PhACs (e.g., carbamazepine, estradiol), specific application of Mn(VII) may be warranted. The results suggest that Fe(VI) may be a useful disinfecting agent, but more work is needed to characterize its activity and mode of inactivation of other pathogens of concern.
Yngard, Ria Astrid. "Ferrate(VI) and Ferrate(V) Oxidation of Weak-Acid Dissociable Cyanides." Florida Institute of Technology, 2007.
Free cyanide and its weak-acid dissociable (WAD) metal complexes, generated by different industries, are toxic and must be removed from effluents. This work presents the kinetics and stoichiometry of the oxidation of the WAD tetracyano complexes M(CN)4^2- (M = Cd, Zn, and Ni) by ferrate(VI) (FeO4^2-) and ferrate(V) (FeO4^3-). The rates of the reactions of Fe(VI) with the metal WAD cyanides were measured as a function of pH (9.1-10.5) and temperature (15-45 °C) using a stopped-flow technique. The rate laws for the oxidation of M(CN)4^2- by Fe(VI) may be written as $-d[\mathrm{Fe(VI)}]/dt = k[\mathrm{Fe(VI)}][\mathrm{M(CN)_4^{2-}}]^n$, where n = 0.5 for Cd2+ and Zn2+ and n = 1 for Ni2+. The oxidation rates decreased with increasing pH and were primarily related to the reactivity of M(CN)4^2- with HFeO4^-. The activation energies were determined as 47.8 ± 1.4, 45.7 ± 1.4, and 42.6 ± 1.4 kJ mol^-1 for Cd(CN)4^2-, Zn(CN)4^2-, and Ni(CN)4^2-, respectively. The stoichiometry of the oxidation of M(CN)4^2- by Fe(VI) was $4\mathrm{HFeO_4^-} + \mathrm{M(CN)_4^{2-}} + 6\mathrm{H_2O} \rightarrow 4\mathrm{Fe(OH)_3} + \mathrm{M^{2+}} + 4\mathrm{NCO^-} + \mathrm{O_2} + 4\mathrm{OH^-}$. Except for Ni, the metal ions formed in the reaction are subsequently removed from water, possibly by iron(III) hydroxide. Mechanisms are proposed that agree with the observed rate laws and stoichiometries of the oxidation of WAD cyanides by Fe(VI). Overall, the results indicate that Fe(VI) can effectively destroy WAD cyanides in effluents.
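The reported activation energies slot directly into the Arrhenius equation. A minimal sketch of the temperature scaling over the study's 15-45 °C range, using the value for Cd(CN)4^2-; the reference rate constant is a placeholder:

```python
import math

# Arrhenius scaling k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref)).

R = 8.314  # J mol^-1 K^-1

def k_at_T(k_ref: float, T_ref_K: float, T_K: float, Ea_J_mol: float) -> float:
    """Rate constant at temperature T_K given a reference point."""
    return k_ref * math.exp(-Ea_J_mol / R * (1.0 / T_K - 1.0 / T_ref_K))

# Ea = 47.8 kJ/mol (Cd(CN)4^2-): ~6.6x faster at 45 C than at 15 C.
ratio = k_at_T(1.0, 288.15, 318.15, 47.8e3)
print(f"k(45 C) / k(15 C) = {ratio:.1f}")
```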
Rates of the reactions of Fe(V) with Cd(CN)4^2- and Zn(CN)4^2- were measured at pH 10.0-12.0 using a premix pulse radiolysis technique. Kinetic and spectral studies of the reduction of Fe(V) by Cd(CN)4^2- and Zn(CN)4^2- suggest two steps, in which an intermediate Fe(IV)-cyano complex is formed through a precursor Fe(V)-cyano complex in the first step. In the second step, the intermediate is further converted to Fe(III). If this hypothesis is correct, this is the first example of such aqueous high-valent iron complexes. The rate constant for the oxidation of Cu(CN)4^3- by Fe(V) was determined at pH 12.0 and 22.0 °C as (1.35 ± 0.02) × 10^7 M^-1 s^-1.
Lee, Changyoul. "Oxidation of Humic Substances by Environmentally Friendly Oxidants Iron(VI) and Hydrogen Peroxide." Florida Institute of Technology, 2005.
The kinetics of the reactions between humic and fulvic acids (HA and FA) and potassium ferrate were investigated under pseudo-first-order conditions. The rates of HA and FA oxidation with ferrate(VI) were determined at pH 9.0 and 25 °C. The reactions were found to be first order in each reactant. The oxidation of HA and FA with Fe(VI) occurred by two different pathways, giving two different pseudo-first-order rate constants. Rate constant values for an estuarine FA (Mullet Creek) were very similar to those for SRFA (Suwannee River Fulvic Acid), increasing the likelihood that the values are applicable to other natural aqueous systems. The degradation of HA and FA in aqueous solution by a photolytic method was also studied. FA and HA oxidation by H2O2 followed first-order kinetics in HS (humic substances) and fractional-order kinetics in H2O2, respectively. The optimum H2O2 dose for the oxidation of HS was found in this study to be 86 mM. The oxidation efficiency of Fe(VI) and the oxidation by-products were analyzed by TOC, NMR, and GC/MS. The TOC experiments measured organic carbon content after the oxidation of FA. The results showed that the oxidation of HS by Fe(VI) proceeded efficiently and led to 48% mineralization within 30 min. The NMR results showed a decrease in aromaticity and carbohydrates after oxidation, and the GC/MS analysis positively identified small carboxylic acid and polycarboxylic acid residues. The heterogeneous photocatalytic degradation of FA and HA by Fe(VI) with UV/TiO2 was investigated at pH 9.0 and 25 °C. The oxidation of HS by Fe(VI) was enhanced by the addition of TiO2 in the presence of UV light.
Dormedy, Derek Frank. "I. Kinetics and Mechanism of the Ferrate Oxidation of Dimethyl Sulfide in Aqueous Solution. II. A Temporal Profile of Triazine and Acetanilide Herbicides in Surface and Ground Water Near Ashland, Nebraska." The University of Nebraska - Lincoln, 2000.
I. This work studies the reaction of ferrate with dimethyl sulfide (DMS). Reactants, intermediates, and products were measured in a mechanistic study of this reaction. The source and destination of the oxygen atoms involved in this aqueous reaction were tracked using 18O-labeled water, by monitoring the masses of dimethyl sulfoxide (DMSO and DMS18O) by GC-MS. Information concerning intermediates was obtained by quenching the reaction at various times with hydroxylamine. The effects of different buffers and of dissolved oxygen on the reaction were studied at pH 10.2 and found to have no significant influence on the kinetics or product distribution. Resolved rate constants were calculated to be 11 s^-1 and 426 s^-1 for FeO4^2- and HFeO4^-, respectively, reacting with DMS.
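The two resolved rate constants combine into an observed rate constant through ferrate speciation. A minimal sketch, assuming a typical literature pKa of about 7.2 for HFeO4^- (an assumption, not a value from this thesis):

```python
# Observed rate constant for DMS oxidation as a speciation-weighted
# sum of the species-resolved constants quoted above.

PKA_HFEO4 = 7.2  # assumed literature value for HFeO4- <-> FeO4^2- + H+

def k_obs(pH: float, k_feo4: float = 11.0, k_hfeo4: float = 426.0) -> float:
    """Weight each species' rate constant by its equilibrium fraction."""
    frac_hfeo4 = 1.0 / (1.0 + 10 ** (pH - PKA_HFEO4))
    return k_hfeo4 * frac_hfeo4 + k_feo4 * (1.0 - frac_hfeo4)

# At pH 10.2 (the study's conditions) FeO4^2- dominates, so k_obs ~ 11.
for pH in (7.0, 10.2):
    print(f"pH {pH}: k_obs = {k_obs(pH):.1f}")
```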
Isotope tracing with 18O-labeled water, GC-MS detection of the products, and oxygen quantitation provided evidence that an oxygen atom transfer from Fe(VI) to DMS is a step in the mechanism. It was also found that some solvent oxygen was incorporated into DMSO, probably via hydrogen peroxide or hydroxyl radical formed by the oxidation of water by ferrate.
II. Atrazine, cyanazine, alachlor, acetochlor, and metolachlor were detected in the Platte River and surrounding ground water after the first thunderstorm following herbicide application. Maximum concentrations of atrazine in the Platte River varied from 2 ppb to 12 ppb over the years 1995 through 1998. Many tributaries join the Platte River, but mixing of these waters is incomplete even miles downstream from their confluence. Induced recharge pulls surface water that may be contaminated with herbicides into the drinking-water aquifer. Concentrations of analytes in the ground water near the Platte River in eastern Nebraska can be between 0 and 100% of the surface-water concentrations.
The lag time of this process has been estimated for wells at different distances from the Platte River. The lag times for analytes to enter production wells on an island in the river were between 5 and 9 days, and to enter a monitoring well on the mainland between 11 and 21 days. Because of these differences in lag times, analytes that degrade faster (the acetanilides) were rarely detected in water samples taken from the mainland. Water samples taken from island wells regularly contained detectable concentrations of all the analytes.
Hornstein, Brooks John. "Reaction Mechanisms of Hypervalent Iron: The Oxidation of Amines and Hydroxylamines by Potassium Ferrate." New Mexico State University, 1999.
The reaction kinetics of aniline, hydroxylamine, N-methylaniline and benzylamine with potassium ferrate have been studied. For each oxidation there is evidence for the formation of an iron-substrate intermediate that is subject to different fates depending on the reaction conditions. The lifetime of the reactive intermediate is dependent upon the nature of the substrate and in some situations these intermediates are observed spectrally. This work not only provides information about the oxidation mechanisms associated with ferrate but also lends insight into the development of new high oxidation state iron compounds.
Gai, Huifa. "A Kinetic Study of the Reduction of Ferrate by Water and the Oxidation of Alcohols by Ferrate." The University of Regina (Canada), 1996.
The kinetics of the reduction of potassium ferrate by water have been investigated. When the pH is lower than 8, the reaction is second order with respect to the tetraoxoferrate(VI) ion. When the pH is higher than 10, the reaction is first order with respect to the ferrate ion. A mixed order is found between pH 8 and 10. The ferrate ion is kinetically most stable at a pH of approximately 9.5.
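The pH-dependent switch between first- and second-order decay can be captured in a single mixed-order rate law, -d[Fe]/dt = k1[Fe] + k2[Fe]^2. A minimal numerical sketch; the rate constants are placeholders chosen only to illustrate the crossover, not values from this thesis:

```python
# Explicit-Euler integration of mixed-order Fe(VI) self-decay:
# the k2 term dominates when k2*[Fe] >> k1 (low pH / high [Fe]),
# the k1 term when the solution is dilute or strongly alkaline.

def decay(fe0_M: float, k1: float, k2: float, t_end_s: float,
          dt: float = 0.01) -> float:
    """Return [Fe(VI)] after t_end_s seconds of mixed-order decay."""
    fe, t = fe0_M, 0.0
    while t < t_end_s:
        fe -= (k1 * fe + k2 * fe * fe) * dt
        t += dt
    return fe

# Example: 1 mM Fe(VI) with placeholder k1 = 1e-3 s^-1, k2 = 10 M^-1 s^-1
print(f"{decay(1e-3, k1=1e-3, k2=10.0, t_end_s=60.0):.2e} M")
```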
The kinetics of the oxidation of 2-propanol and mandelic acid have been studied. The rate laws for the oxidation of these alcohols are first order in alcohol and first order in ferrate ion. The small influence of ionic strength on the reaction rate suggests that there is no significant charge build-up in the transition state. The determination of a primary deuterium isotope effect, and the observation that the rates of oxidation of ethers and alcohols are comparable, suggest that the reaction is initiated by reaction of the $\alpha$-C-H bond with the ferrate ion. Open-chain products observed from the oxidation of cyclobutanol indicate the generation of free radicals during the reaction. Mechanisms consistent with these observations are considered. The oxidation of alcohols, alkenes, and other substrates by ferrate under heterogeneous conditions has also been investigated. The products of the oxidation of secondary alcohols are the corresponding ketones. The products of the oxidation of primary alcohols are the corresponding aldehydes, or aldehydes with one or more fewer carbons. The products of the oxidation of double bonds are the corresponding aldehydes or ketones. There is no significant selectivity between double bonds and hydroxyl functional groups in the system investigated. The usefulness of this reaction for organic synthesis is evaluated.
Erickson, John Edward. "The Oxidation of Water and Inorganic Nitrogen Compounds by Potassium Ferrate (VI)." The University of Nebraska - Lincoln, 1988.
The oxidations of azide, hydroxylamine, hydrazine, and ammonia by potassium ferrate(VI) were investigated in aqueous solutions from neutral to basic pH conditions. The study included both reaction kinetics and product determination. The rate of disappearance of the ferrate(VI) ion was monitored spectrophotometrically at 505 nm using both stopped-flow and conventional instruments. Gas chromatography, ion chromatography, and differential pulse polarography were used to identify and quantitate the products of these reactions. Dinitrogen, nitrous oxide, nitrite, and nitrate were the most often encountered nitrogen-containing products. A mass balance between the products formed and the ferrate(VI) reacted was calculated for each reductant. Based on the kinetic results and the product analysis data, the order of reactivity of these solutes toward ferrate(VI) at pH 10 is $N_2H_4 > NH_2OH \gg NH_3 > N_3^-$. At pH 7, the reactivity order is $N_2H_5^+ > NH_2OH \approx N_3^- \gg NH_4^+$. A mechanism consistent with the kinetic data and the product distributions is proposed for each reaction. The reactive species nitroxyl (HNO or $NO^-$) is suggested as a likely intermediate in a number of the proposed mechanisms. In addition, the stoichiometry and products of the oxidation of solvent water by ferrate(VI) in the pH range 4 to 8.5 were reinvestigated. Gas chromatography and differential pulse polarography were used to determine the products of the reaction. Molecular oxygen was found to be the major product of this oxidation, with small amounts of hydrogen peroxide also detected. A mechanism by which these products are formed has been proposed.
Potassium ferrate(VI) was shown to easily oxidize aqueous nitrite ions to nitrate over the pH range 2.5 to 6.8. Stopped-flow and conventional spectroscopy were used to monitor the rate of disappearance of the purple color of potassium ferrate(VI) solutions at 505 nm. The parallel oxidation of solvent water was also monitored using a calibrated dissolved oxygen probe. A selective nitrate electrode was used to determine product formation potentiometrically. The overall reaction was determined to be mixed order, with a hydronium ion dependency of 0.58. The rate of the reaction was also shown to depend on the fraction of the diprotonated ferrate(VI) species. Potassium ferrate(VI) was also able to oxidize chlorite ions to chlorate and hypochlorite to chlorite. Evidence is also presented showing, to a slight extent, the oxidation of chlorate to perchlorate. Attempts were made to oxidize chloride to higher oxidation states, but no oxidation was detected. Chlorate oxidation rates were studied at pH 2.5 to 6.8, chlorite rates from pH 8.0 to 10.1, hypochlorite oxidation over the pH range 6.4 to 8.4, and possible chloride oxidation from pH 3.5 to 8.2. Possible solute oxidation products were identified by ion chromatography of stock solutions that had been reacted with potassium ferrate(VI). The parallel oxidation of solvent water to oxygen was also monitored with a dissolved oxygen probe. For chlorate-containing solutions, the water oxidation pathway of potassium ferrate(VI) was much more favorable than the chlorate oxidation pathway; the same result was obtained with chloride in solution. However, with chlorite in solution, the solvent oxidation pathway was less favored, and the rate of water oxidation was inversely related to the chlorite concentration. When hypochlorite was in solution with potassium ferrate(VI), the amount of water being oxidized increased with increasing hypochlorite. This was explained by the oxidation of hypochlorite to chlorite, with a subsequent reaction that produces chlorine dioxide.
Bartzatt, Ronald Lee. "THE KINETICS OF OXIDATION OF VARIOUS ORGANIC SUBSTRATES BY POTASSIUM FERRATE." The University of Nebraska - Lincoln, 1982.
The ferrate oxidation of diethyl sulfide and 2,2'-thiodiethanol was studied in the pH range 8.90 to 9.90, and the ferrate oxidation of chloral in the pH range 7.90 to 9.30. Chloral exists in aqueous solution in its hydrated form. The oxidation of acetaldehyde by potassium ferrate was studied at three different temperatures, as was that of neopentyl alcohol. An increase in reaction temperature does not bring about an increase in k1(obs). Using the data obtained for acetaldehyde, observations concerning the thermodynamics can be made, including calculations of the activation energy. A study of water oxidation by potassium ferrate was made. The values of k1(obs) for water oxidation by potassium ferrate do not change significantly as the temperature of the reaction solutions varies from 35 °C to 18 °C. Graphs of log k1(obs) versus pH and log k2(obs) versus pH are linear.
De Luca, Sergio Joao. "REMOVAL OF ORGANIC COMPOUNDS BY OXIDATION-COAGULATION WITH POTASSIUM FERRATE." North Carolina State University, 1981.
Aqueous solutions of five organic "priority pollutants" were selected for oxidation and coagulation studies using potassium ferrate. Naphthalene and trichloroethylene were successfully removed from waste streams with proper pH control. Nitrobenzene was oxidized very slowly by ferrate. Bromodichloromethane and 1,2-dichlorobenzene were not removed from aqueous streams under the oxidation test conditions. The Ames test results indicate that the potassium ferrate treatment did not produce products detectable as mutagenic by the Ames test.
A comparative jar-test study of the efficiency of removal of organic pollutants by coagulation with ferrate and with alum was conducted. Potassium ferrate oxidation-coagulation is not a satisfactory process for the removal of nitrobenzene, 1,2-dichlorobenzene, or bromodichloromethane. Ferrate by itself does not form a floc surface on which the priority pollutants might be sorbed; it is necessary to provide a coagulant aid. Alum coagulation is not a satisfactory process for removal of those pollutants either. The flocculation efficiency of ferrate and alum using paddles versus gas (N2) mixing was compared. With gas mixing, the removal efficiencies of naphthalene, trichloroethylene, bromodichloromethane, and 1,2-dichlorobenzene were higher than those obtained using paddles. However, it is stripping that accounts for the removal of the volatile compounds, which cannot be effectively removed by either ferrate or alum.
Kelter, Paul B. "KINETIC METHODS AND KINETICS OF THE OXIDATION OF WATER, NITRILOTRIACETIC ACID, AND RELATED ORGANIC SUBSTANCES BY POTASSIUM FERRATE." The University of Nebraska - Lincoln, 1980.
It was found that many factors can affect the observed rate constants of the oxidation of organic substrates by the ferrate(VI) ion. The choice of buffer will either increase or decrease the observed rate constants, depending upon the buffer chosen. Increasing the total ionic strength increases the observed rate constants, while the effect of changing the initial potassium ferrate concentration is not altogether clear. The presence of Fe(III) generated from the reduction of Fe(VI) in solution enhances the rate of oxidation, while adding Fe(III) to the unreacted solution does not affect the oxidation rate. Dual-wavelength spectroscopy was used to try to eliminate any effect on the calculated rate constants due to absorption or scattering of light by the Fe(III) product; the results were identical to those obtained with a single-wavelength, dual-beam spectrophotometer. The kinetics of the oxidation of water by ferrate(VI) were examined in the pH range 4.80 to 7.78. The data were shown to fit a mixed-order kinetic model. The second-order process was by far the largest contribution to the rate; as a result, the second-order rate constants showed a much clearer trend than the corresponding first-order rate constants. The reactions showed a strong pH dependence, with half-lives ranging from about 100 ms at pH 4.80 to about 50 s at pH 7.78. The results at higher pH (7.5 and above) were in good agreement with those obtained by other workers. The kinetics and products of the reaction of nitrilotriacetic acid with ferrate(VI) were also examined.
Tabatabai, Ali Reza. "OXIDATION KINETICS OF ORGANIC MOLECULES BY POTASSIUM FERRATE (VI)." The University of Nebraska - Lincoln, 1980.
Chow, Victor Shui-Chiu. "Preparation and Spectroscopic Examination of Potassium Ferrate." Stephen F. Austin State University, 1974.
Abstract. We prove the existence of absolutely continuous spectrum for a class of discrete Schrödinger operators on tree-like graphs. We consider potentials whose radial behaviour is subject only to an $\ell^\infty$ bound. In the transverse direction the potential must satisfy a condition such as periodicity. The graphs we consider include binary trees and graphs obtained from a binary tree by adding edges, possibly with weights. Our methods are motivated by the one-dimensional transfer matrix method, interpreted as a discrete dynamical system on the hyperbolic plane. This is extended to more general graphs, leading to a formula for the Green's function. Bounds on the Green's function then follow from the contraction properties of the transformations that arise in this generalization. The bounds imply the existence of absolutely continuous spectrum.
Are there three ordinary elliptic curves $E$, $E_1$, $E_2$ such that $E^2 \cong E_1 \!\times\! E_2$?
Is a supersingular Kummer surface $k$-unirational in characteristic 2?
What is the Artin invariant of an elliptic supersingular K3 surface?
Can a toric surface be an elliptic surface?
Is there a description of the moduli space of elliptic surfaces?
Can monodromy be described by the same matrix for chosen generators in case of the same singularity type?
Spectral sequence associated to elliptic fibration degenerates?
Example of non-modular elliptic surface?
Mordell-Weil of an elliptic surface after adjoining a nontorsion section: as small as possible?
A question on existence of degeneration of Enriques surface.
Topology of K3 as a sum of two abelian fibrations.
Singular fibers of an elliptically fibered K3 surface.
A K3 over $P^1$ with six singular $A_1$-fibers?