Consider the set $\{0,1\}^n$ of all words $x=x_1x_2\dots x_n$ of length $n$ over the alphabet $\{0,1\}$ (the vertices of the Boolean cube). We naturally multiply the words by concatenation: if $u\in \{0,1\}^k$, $v\in \{0,1\}^m$, then $uv\in \{0,1\}^{k+m}$. For $x\in \{0,1\}^n$ and $i=1,\dots,n$ denote by $\sigma_i x$ the word which differs from $x$ only in the $i$-th position. Clearly the operators $\sigma_1,\dots,\sigma_n$ are involutive and mutually commute. We further use the same notations for different values of $n$ (namely, $\sigma_i$ acts both on $\{0,1\}^n$ and on $\{0,1\}^{n-1}$.) We say that $x,y\in \{0,1\}^n$ are joined by an edge if $y=\sigma_i x$ for some $i$. This graph (the skeleton of the Boolean cube) is bipartite by the chessboard coloring, i.e. by the parity of the number of ones. In particular, it contains an independent subset of size $2^{n-1}$.
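As a quick sanity check (not part of the argument), one can build the skeleton of the Boolean cube for a small $n$ and confirm that the chessboard classes are independent sets of size $2^{n-1}$; `sigma` below is the flip operator $\sigma_i$, written 0-based:

```python
from itertools import product

n = 4
vertices = list(product((0, 1), repeat=n))

def sigma(i, x):
    # flip the i-th coordinate (0-based) of the word x
    return x[:i] + (1 - x[i],) + x[i + 1:]

# edges of the cube skeleton, as unordered pairs
edges = {frozenset((x, sigma(i, x))) for x in vertices for i in range(n)}

# chessboard coloring: parity of the number of ones
even = [x for x in vertices if sum(x) % 2 == 0]
assert len(even) == 2 ** (n - 1)
# no edge joins two vertices of the same parity, so each class is independent
assert all(sum(x) % 2 != sum(y) % 2 for (x, y) in map(tuple, edges))
```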

The recent sensational result by Hao Huang https://arxiv.org/pdf/1907.00847 is that if we take a subset $A\subset \{0,1\}^n$ with $|A|=2^{n-1}+1$, the induced subgraph on this subset already contains a vertex of degree at least $\sqrt{n}$. That is sharp by the earlier example of F. R. K. Chung, Z. Füredi, R. L. Graham and P. Seymour https://faculty.math.illinois.edu/~z-furedi/PUBS/furedi_chung_graham_seymour_cube.pdf

Everybody is very excited, since this estimate settles the so-called sensitivity conjecture, but that is not what I am going to talk about (because of my complete ignorance.)

The aim of this post is to give an explicit exposition of Huang's argument without using any theorems but the following: a homogeneous linear system of $2^n-1$ equations in $2^n$ variables has a non-trivial solution.

Following Huang, we take a not-everywhere-zero function $f:\{0,1\}^n\to \mathbb{R}$ which vanishes outside $A$ and satisfies some (other) linear relations. Such a function exists by the aforementioned theorem. For describing these relations, we define $\varepsilon_i(x)=(-1)^{x_1+\dots+x_{i-1}}$ for $x=x_1\dots x_n\in \{0,1\}^n$. Clearly $\varepsilon_i(\sigma_j x)=\varepsilon_i(x)$ for $j\geqslant i$ and $\varepsilon_i(\sigma_j x)=-\varepsilon_i(x)$ for $j<i$, and the straightforward but crucial property is
$$\varepsilon_i(x)\,\varepsilon_j(\sigma_i x)+\varepsilon_j(x)\,\varepsilon_i(\sigma_j x)=0\quad \text{for all } i\ne j. \qquad (*)$$
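The property $(*)$ is easy to check by hand (for $i<j$ the first summand flips sign and the second does not), but it can also be verified mechanically for a small $n$; this is a sketch, with `eps` and `sigma` standing for $\varepsilon_i$ and $\sigma_i$, written 0-based:

```python
from itertools import product, combinations

n = 4

def sigma(i, x):
    # flip the i-th coordinate (0-based)
    return x[:i] + (1 - x[i],) + x[i + 1:]

def eps(i, x):
    # epsilon_i(x) = (-1)^(x_1 + ... + x_{i-1}), 0-based
    return (-1) ** sum(x[:i])

# exhaustive check of (*) over the whole cube
for x in product((0, 1), repeat=n):
    for i, j in combinations(range(n), 2):
        assert eps(i, x) * eps(j, sigma(i, x)) + eps(j, x) * eps(i, sigma(j, x)) == 0
```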

Now we are ready to formulate the relations on $f$; they are the following:
$$\sqrt{n}\, f(x)=\sum_{i=1}^n \varepsilon_i(x)\, f(\sigma_i x)\quad \text{for all } x\in \{0,1\}^n. \qquad (1)$$
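What makes the eigenvalue $\sqrt{n}$ in (1) possible is that the operator $B$ defined by $(Bf)(x)=\sum_i \varepsilon_i(x) f(\sigma_i x)$ satisfies $B^2=nI$; the diagonal of $B^2$ gives $\sum_i \varepsilon_i(x)^2=n$ and the off-diagonal entries vanish by $(*)$. A sketch checking this as a matrix identity for small $n$:

```python
import numpy as np
from itertools import product

n = 4
vertices = list(product((0, 1), repeat=n))
index = {x: k for k, x in enumerate(vertices)}

def sigma(i, x):
    return x[:i] + (1 - x[i],) + x[i + 1:]

def eps(i, x):
    return (-1) ** sum(x[:i])

# B is the signed adjacency matrix of the cube: B[x, sigma_i x] = eps_i(x)
N = 2 ** n
B = np.zeros((N, N))
for x in vertices:
    for i in range(n):
        B[index[x], index[sigma(i, x)]] = eps(i, x)

assert np.array_equal(B @ B, n * np.eye(N))
```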

You may say that the number of relations is $2^n$, which together with the $2^{n-1}-1$ vanishing conditions is too many, but actually the relations (1) for the words $x=0u$, $u\in \{0,1\}^{n-1}$, starting from 0, which read as:
$$\sqrt{n}\, f(0u)=f(1u)+\sum_{i=2}^n \varepsilon_i(0u)\, f(\sigma_i(0u)), \qquad (2)$$

imply relations (1) for $x=1u$:
$$\sqrt{n}\, f(1u)=f(0u)+\sum_{i=2}^n \varepsilon_i(1u)\, f(\sigma_i(1u)). \qquad (3)$$

This reduces to $(*)$ after we express the values $f(1v)$ via the values $f(0v)$, $v\in \{0,1\}^{n-1}$, using (2). Indeed, substituting $f(1v)=\sqrt{n}\, f(0v)-\sum_{j=2}^n \varepsilon_j(0v) f(\sigma_j(0v))$, the Right Hand Side of (3) reads as
$$f(0u)+\sum_{i=2}^n \varepsilon_i(1u)\Big(\sqrt{n}\, f(\sigma_i(0u))-\sum_{j=2}^n \varepsilon_j(\sigma_i(0u))\, f(\sigma_j\sigma_i(0u))\Big);$$

recalling that $\varepsilon_i(1u)=-\varepsilon_i(0u)$ for $i\geqslant 2$, the coefficient of $f(0u)$ in this expression equals $1+(n-1)=n$ (the terms with $j=i$ contribute $\varepsilon_i(0u)\varepsilon_i(\sigma_i(0u))=\varepsilon_i(0u)^2=1$ each). The coefficient of $f(\sigma_i(0u))$ is $-\sqrt{n}\,\varepsilon_i(0u)$ for all $i=2,\dots,n$. Finally for $i\ne j$ the coefficient of $f(\sigma_j\sigma_i(0u))$ equals $\varepsilon_i(0u)\varepsilon_j(\sigma_i(0u))+\varepsilon_j(0u)\varepsilon_i(\sigma_j(0u))=0$ by $(*)$. Therefore the Right Hand Side of (3) equals
$$n f(0u)-\sqrt{n}\sum_{i=2}^n \varepsilon_i(0u)\, f(\sigma_i(0u))=\sqrt{n}\, f(1u)$$
by (2), as (3) claims.
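The implication "half of the equations force the other half" can also be tested numerically: writing the relations (1) as rows of $\sqrt{n}I-B$, every null vector of the block of rows with $x_1=0$ should kill the block with $x_1=1$ as well. A sketch using numpy's SVD:

```python
import numpy as np
from itertools import product

n = 4
vertices = list(product((0, 1), repeat=n))
index = {x: k for k, x in enumerate(vertices)}

def sigma(i, x):
    return x[:i] + (1 - x[i],) + x[i + 1:]

def eps(i, x):
    return (-1) ** sum(x[:i])

N = 2 ** n
B = np.zeros((N, N))
for x in vertices:
    for i in range(n):
        B[index[x], index[sigma(i, x)]] = eps(i, x)

M = np.sqrt(n) * np.eye(N) - B                 # rows of M are the relations (1)
rows0 = [index[x] for x in vertices if x[0] == 0]   # the relations (2)
rows1 = [index[x] for x in vertices if x[0] == 1]   # the relations (3)

# orthonormal basis of the null space of the x_1 = 0 block
_, s, vt = np.linalg.svd(M[rows0])
null_basis = vt[np.sum(s > 1e-9):]
assert null_basis.shape[0] == N // 2            # 2^(n-1) free dimensions
# every solution of (2) automatically satisfies (3)
assert np.allclose(M[rows1] @ null_basis.T, 0)
```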

Now if $x\in A$ is chosen so that $|f(x)|$ is maximal, we have $f(x)\ne 0$, and looking at the right hand side of (1) we conclude that at least $\sqrt{n}$ words of the form $\sigma_i x$ must belong to the support of $f$, thus to $A$: indeed, $\sqrt{n}\,|f(x)|\leqslant \sum_i |f(\sigma_i x)|$, and each non-zero summand is at most $|f(x)|$. The proof is complete.
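The whole argument can be watched end to end on a concrete (here randomly chosen) $A$ of size $2^{n-1}+1$: impose the relations (2) plus vanishing outside $A$, pick a non-trivial solution $f$ from the null space, and check that the vertex maximizing $|f|$ has at least $\sqrt{n}$ neighbours inside the support. This is only an illustration, not part of the proof:

```python
import numpy as np
from itertools import product

n = 4
vertices = list(product((0, 1), repeat=n))
index = {x: k for k, x in enumerate(vertices)}

def sigma(i, x):
    return x[:i] + (1 - x[i],) + x[i + 1:]

def eps(i, x):
    return (-1) ** sum(x[:i])

N = 2 ** n
B = np.zeros((N, N))
for x in vertices:
    for i in range(n):
        B[index[x], index[sigma(i, x)]] = eps(i, x)

rng = np.random.default_rng(0)
A = rng.choice(N, size=N // 2 + 1, replace=False)   # |A| = 2^(n-1) + 1
outside = [k for k in range(N) if k not in set(A)]

M = np.sqrt(n) * np.eye(N) - B
rows = [index[x] for x in vertices if x[0] == 0]    # the relations (2)
system = np.vstack([M[rows], np.eye(N)[outside]])   # 2^n - 1 equations in 2^n unknowns

_, s, vt = np.linalg.svd(system)
f = vt[-1]                                          # non-trivial solution
assert np.linalg.norm(system @ f) < 1e-9

x_star = vertices[int(np.argmax(np.abs(f)))]
support = {vertices[k] for k in range(N) if abs(f[k]) > 1e-9}
degree = sum(sigma(i, x_star) in support for i in range(n))
assert degree >= np.sqrt(n)                         # the vertex of large degree
```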

Of course the reason why equations (2) imply the other equations (3) is that the operator corresponding to (1), namely $(Bf)(x)=\sum_{i=1}^n \varepsilon_i(x) f(\sigma_i x)$, has an eigensubspace of dimension $2^{n-1}$ for the eigenvalue $\sqrt{n}$. Or vice versa, if you prefer. I cannot say that I understand well where this operator comes from.
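The spectral statement behind this rank count follows from $B^2=nI$ and $\operatorname{tr} B=0$: the only eigenvalues are $\pm\sqrt{n}$, with equal multiplicities $2^{n-1}$. A sketch confirming it for small $n$:

```python
import numpy as np
from itertools import product

n = 4
vertices = list(product((0, 1), repeat=n))
index = {x: k for k, x in enumerate(vertices)}

def sigma(i, x):
    return x[:i] + (1 - x[i],) + x[i + 1:]

def eps(i, x):
    return (-1) ** sum(x[:i])

N = 2 ** n
B = np.zeros((N, N))
for x in vertices:
    for i in range(n):
        B[index[x], index[sigma(i, x)]] = eps(i, x)

# B is symmetric, so eigvalsh applies; eigenvalues sorted ascending
eigs = np.linalg.eigvalsh(B)
assert np.allclose(eigs[: N // 2], -np.sqrt(n))   # multiplicity 2^(n-1)
assert np.allclose(eigs[N // 2:], np.sqrt(n))     # multiplicity 2^(n-1)
```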

# Low-level variant of Huang’s argument for sensitivity conjecture


Do you know how to give a nice eigenbasis for this operator? I tried but did not succeed.


Well, you may partition the cube into two parallel facets, take your favourite basis for functions on one facet (for example, one value is 1 and the others are 0), and the values on the other facet are determined uniquely by (2). I do not know whether it is nice, but at least it is quite explicit and consists of functions with small support. In general, if the operator $B$ satisfies an equation like $B^2=n$, the kernel of $B-\sqrt{n}$ is the same as the image of $B+\sqrt{n}$, and that guy has a basis consisting of the columns of the corresponding matrix.
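The general recipe from this reply is easy to test: since $B^2=nI$, every column of $B+\sqrt{n}I$ is an eigenvector of $B$ with eigenvalue $\sqrt{n}$, and together the columns span the full $2^{n-1}$-dimensional eigenspace. A sketch:

```python
import numpy as np
from itertools import product

n = 4
vertices = list(product((0, 1), repeat=n))
index = {x: k for k, x in enumerate(vertices)}

def sigma(i, x):
    return x[:i] + (1 - x[i],) + x[i + 1:]

def eps(i, x):
    return (-1) ** sum(x[:i])

N = 2 ** n
B = np.zeros((N, N))
for x in vertices:
    for i in range(n):
        B[index[x], index[sigma(i, x)]] = eps(i, x)

P = B + np.sqrt(n) * np.eye(N)
# B P = B^2 + sqrt(n) B = nI + sqrt(n) B = sqrt(n) P, so columns are eigenvectors
assert np.allclose(B @ P, np.sqrt(n) * P)
# together they span the whole sqrt(n)-eigenspace, of dimension 2^(n-1)
assert np.linalg.matrix_rank(P) == 2 ** (n - 1)
```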


It looks like sqrt(n) is missing in the left hand side of (2) and (3).


OK, I see, I was wrong, nothing is missing.


The description you give is really interesting (tasty, I would like to say). I did not know a word about the topic prior to reading this post, but I could still understand some parts of it quite clearly. That is a powerful explanation, sir. It's really good.


Dear Fedor, the sentence, “This reduces to $(*)$ after we express the values $f(1v)$ via the values $f(0v)$” is not clear.

Can you add a few sentences of explanation to make the post self-contained? (Also, what is $(*)$?)


Dear Gil, glad to see you! $(*)$ is the equation after the words “crucial property is”. I added the detailed calculation to the place you ask about.
