In my previous post, I walked through the task of formally deducing one lemma from another in Lean 4. The deduction was deliberately chosen to be short and only showcased a small number of Lean tactics.

Let $A$ be a non-empty finite set. If $X$ is a random variable taking values in $A$, the Shannon entropy $H[X]$ of $X$ is defined as
$$ H[X] = \sum_{a \in A} \mathbf{P}[X = a] \log \frac{1}{\mathbf{P}[X = a]}. $$
There is a nice variational formula that lets one compute logs of sums of exponentials in terms of this entropy:

Lemma 1 (Gibbs variational formula) Let $f: A \to \mathbf{R}$ be a function. Then
$$ \log \sum_{a \in A} \exp(f(a)) = \sup_{X} \left( \mathbf{E} f(X) + H[X] \right), \qquad (1) $$
where the supremum ranges over all random variables $X$ taking values in $A$.

Proof: Note that shifting $f$ by a constant affects both sides of (1) the same way, so we may normalize $\sum_{a \in A} \exp(f(a)) = 1$. Then $\exp \circ f$ is now the probability distribution of some random variable $Y$, and the claim can be rewritten as
$$ \mathbf{E} f(X) + H[X] \leq 0, $$
with equality when $X$ has the same distribution as $Y$. But this is precisely the Gibbs inequality
$$ \sum_{a \in A} \mathbf{P}[X = a] \log \frac{1}{\mathbf{P}[X = a]} \leq \sum_{a \in A} \mathbf{P}[X = a] \log \frac{1}{\mathbf{P}[Y = a]}. $$
(The expression inside the supremum can also be written as $\log \sum_{a \in A} \exp(f(a)) - D_{KL}(X \| Y)$, where $D_{KL}$ denotes Kullback–Leibler divergence and $Y$ is distributed according to the Gibbs measure $\mathbf{P}[Y = a] = \exp(f(a)) / \sum_{a' \in A} \exp(f(a'))$. One can also interpret this inequality as a special case of the Fenchel–Young inequality relating the conjugate convex functions $p \mapsto \sum_{a \in A} p(a) \log p(a)$ and $f \mapsto \log \sum_{a \in A} \exp(f(a))$.)

In this note I would like to use this variational formula (which is also known as the Donsker–Varadhan variational formula) to give another proof of the following inequality of Carbery.

Theorem 2 (Generalized Cauchy–Schwarz inequality) Let, let be finite non-empty sets, and let be functions for each. Then

where is the set of all tuples such that for.

Thus for instance, the identity is trivial for; for the inequality reads

which is easily proven by Cauchy–Schwarz, while for the inequality reads

which can also be proven by elementary means. However even for, the existing proofs require the “tensor power trick” in order to reduce to the case when the are step functions (in which case the inequality can be proven elementarily, as discussed in the above paper of Carbery).

If we take logarithms in the inequality to be proven and apply Lemma 1, the inequality becomes

where ranges over random variables taking values in, range over tuples of random variables taking values in, and range over random variables taking values in. Comparing the suprema, the claim now reduces to the following lemma.

Lemma 3 (Conditional expectation computation) Let be an -valued random variable. Then there exists a -valued random variable, where each has the same distribution as, and

Now suppose that, and the claim has already been proven for, thus one has already obtained a tuple with each having the same distribution as, and

By hypothesis, has the same distribution as. For each value attained by, we can take conditionally independent copies of and conditioned to the events and respectively, and then concatenate them to form a tuple in, with a further copy of that is conditionally independent of relative to. One can then use the entropy chain rule to compute

and the claim now follows from the induction hypothesis.

With a little more effort, one can replace by a more general measure space (and use differential entropy in place of Shannon entropy), to recover Carbery’s inequality in full generality; we leave the details to the interested reader.
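As a quick sanity check of Lemma 1 in the simplest non-trivial case (the two-point set and the notation $a$, $b$, $p$ below are chosen just for this illustration), take $A = \{1,2\}$, write $f(1) = a$ and $f(2) = b$, and parametrize a random variable $X$ taking values in $A$ by $p = \mathbf{P}[X = 1]$. Then (1) reads
$$ \log(e^a + e^b) = \sup_{0 \leq p \leq 1} \left( p a + (1-p) b + p \log \frac{1}{p} + (1-p) \log \frac{1}{1-p} \right), $$
and one can check by differentiating in $p$ that the supremum is attained at the Gibbs distribution $p = e^a/(e^a + e^b)$.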
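For the reader's convenience, here is also the standard one-line derivation of the Gibbs inequality used in the proof of Lemma 1 (nothing here is specific to Carbery's argument): by the concavity of the logarithm (Jensen's inequality),
$$ \sum_{a \in A} \mathbf{P}[X=a] \log \frac{\mathbf{P}[Y=a]}{\mathbf{P}[X=a]} \leq \log \sum_{a \in A} \mathbf{P}[X=a] \frac{\mathbf{P}[Y=a]}{\mathbf{P}[X=a]} \leq \log 1 = 0, $$
where the sums are restricted to those $a$ with $\mathbf{P}[X=a] > 0$; this rearranges to $H[X] \leq \sum_{a \in A} \mathbf{P}[X=a] \log \frac{1}{\mathbf{P}[Y=a]}$.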
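Finally, the entropy chain rule invoked in the inductive step is the standard identity (recorded here in the notation of this note; $X$ and $Y$ stand for generic random variables on finite sets, not the specific ones from the induction)
$$ H[X, Y] = H[Y] + H[X | Y], \qquad H[X | Y] := \sum_{y} \mathbf{P}[Y = y]\, H[X | Y = y], $$
which follows by expanding both sides using the definition of Shannon entropy and the identity $\mathbf{P}[X = x, Y = y] = \mathbf{P}[Y = y]\, \mathbf{P}[X = x | Y = y]$.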
Terence "Terry" Tao FAA FRS (simplified Chinese: 陶哲轩; traditional Chinese: 陶哲軒; pinyin: Táo Zhéxuān) is an Australian-American mathematician who has worked in various areas of mathematics. He currently focuses on harmonic analysis, partial differential equations, algebraic combinatorics, arithmetic combinatorics, geometric combinatorics, compressed sensing and analytic number theory. As of 2015, he holds the James and Carol Collins chair in mathematics at the University of California, Los Angeles. Tao was a co-recipient of the 2006 Fields Medal and the 2014 Breakthrough Prize in Mathematics.