
# Low-level variant of Huang’s argument for sensitivity conjecture

Consider the set $\Omega_n$ of all $2^n$ words $t_1t_2\ldots t_n,\, t_i\in \{0,1\}$, over the alphabet $\{0,1\}$ (the vertices of the Boolean cube). We naturally multiply words by concatenation: if $x\in \Omega_n, y\in \Omega_k$, then $xy\in \Omega_{n+k}$. For $i=1,2,\ldots,n$ and $x\in \Omega_n$ denote by $T_ix$ the word which differs from $x$ only in the $i$-th position. Clearly the operators $T_i,i=1,\ldots,n$ are involutive and mutually commute. We further use the same notation $T_i$ for different values of $n$ (namely, both for $n$ and $n-1$). We say that $x,y$ are joined by an edge if $y=T_ix$ for some $i=1,\ldots,n$. This graph (the skeleton of the Boolean cube) is bipartite by the chessboard coloring. In particular, it contains an independent subset of size $2^{n-1}$.
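As a quick sanity check, these definitions can be verified by brute force for a small $n$ (a Python sketch; the names `cube`, `T`, and `even` are mine):

```python
from itertools import product

n = 4
# Vertices of the Boolean cube as bit tuples t_1 ... t_n.
cube = list(product((0, 1), repeat=n))

def T(i, x):
    """Flip the i-th coordinate (1-indexed, as in the post)."""
    return x[:i-1] + (1 - x[i-1],) + x[i:]

# The T_i are involutions and mutually commute.
assert all(T(i, T(i, x)) == x for x in cube for i in range(1, n + 1))
assert all(T(i, T(j, x)) == T(j, T(i, x))
           for x in cube for i in range(1, n + 1) for j in range(1, n + 1))

# Chessboard coloring: the vertices of even coordinate sum form an
# independent set of size 2^(n-1), since every edge flips one bit.
even = [x for x in cube if sum(x) % 2 == 0]
assert len(even) == 2 ** (n - 1)
assert all(T(i, x) not in even for x in even for i in range(1, n + 1))
```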

The recent sensational result by Hao Huang is that if we take a subset $A\subset \Omega_n, |A|=2^{n-1}+1$, the induced subgraph on this subset already contains a vertex of degree at least $\sqrt{n}$. That is sharp by the earlier example of F. R. K. Chung, Z. Füredi, R. L. Graham and P. Seymour.

Everybody is very excited, since this estimate settles the so-called sensitivity conjecture, but that is not what I am going to talk about (out of complete ignorance).

The aim of this post is to give an explicit exposition of Huang’s argument without using any theorems but the following: a homogeneous linear system of $m$ equations in $m+1$ variables has a non-trivial solution.

Following Huang, we take a not-everywhere-zero function $f:\Omega_n\to \mathbb{R}$ which vanishes outside $A$ (that is $2^{n-1}-1$ linear conditions) and satisfies $2^{n-1}$ further linear relations: in total $2^n-1$ homogeneous equations in $2^n$ unknowns, so such a function exists by the aforementioned theorem. For describing these relations, we define $w_i(x)=(-1)^{t_1+\ldots+t_{i-1}}$ for $x=t_1\ldots t_k\in \Omega_{k},i=1,\ldots,k$. Clearly $w_i(x)=w_i(T_ix)$, and the straightforward but crucial property is $\displaystyle w_i(x)w_j(x)w_j(T_ix)w_i(T_jx)=-1\,\,\text{for}\,i\ne j. \quad \quad\quad (\star)$
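Both the invariance $w_i(x)=w_i(T_ix)$ and the sign relation $(\star)$ are easy to confirm by an exhaustive check for a small $n$ (a sketch; the helper names `w` and `T` are mine):

```python
from itertools import product

def w(i, x):
    """w_i(x) = (-1)^(t_1 + ... + t_{i-1})."""
    return (-1) ** sum(x[:i-1])

def T(i, x):
    """Flip the i-th coordinate (1-indexed)."""
    return x[:i-1] + (1 - x[i-1],) + x[i:]

n = 4
cube = list(product((0, 1), repeat=n))

# w_i(x) = w_i(T_i x): flipping position i does not touch t_1..t_{i-1}.
assert all(w(i, x) == w(i, T(i, x)) for x in cube for i in range(1, n + 1))

# The crucial property (*): w_i(x) w_j(x) w_j(T_i x) w_i(T_j x) = -1 for i != j.
assert all(w(i, x) * w(j, x) * w(j, T(i, x)) * w(i, T(j, x)) == -1
           for x in cube
           for i in range(1, n + 1) for j in range(1, n + 1) if i != j)
```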

Now we are ready to formulate the $2^{n-1}$ relations on $f$; they are the following: $\displaystyle \sqrt{n}f(y)=\sum_{i=1}^{n}w_i(y)f(T_i y),\,\,\text{for all}\, y\in \Omega_{n}.\quad (1)$

You may say that the number of relations is $2^{n}$, but actually the relations (1) for $y$ starting with $0$, say $y=0x,x\in \Omega_{n-1}$, which read as $\displaystyle f(1x)=\sqrt{n}f(0x)-\sum_{i=1}^{n-1}w_i(x)f(0T_ix),\quad x\in \Omega_{n-1},\quad (2)$

imply relations (1) for $y=1x,x\in \Omega_{n-1}$: $\displaystyle f(0x)=\sqrt{n}f(1x)+\sum_{i=1}^{n-1}w_i(x)f(1T_ix),x\in \Omega_{n-1}.\quad (3)$

This reduces to $(\star)$ after we express $f(1x),f(1T_ix)$ via the values $f(0x),f(0T_ix),f(0T_jT_ix)$. Indeed, the right-hand side of (3) reads as $\sqrt{n}\Big(\sqrt{n}f(0x)-\sum_{i=1}^{n-1}w_i(x)f(0T_ix)\Big)+\sum_{i=1}^{n-1}w_i(x)\Big(\sqrt{n}f(0T_ix)-\sum_{j=1}^{n-1}w_j(T_ix)f(0T_jT_ix)\Big).$

The coefficient of $f(0x)$ in this expression equals $\sqrt{n}\cdot \sqrt{n}-\sum_{i=1}^{n-1} w_i(x)w_i(T_ix)=n-(n-1)=1$. The coefficient of $f(0T_ix)$ is $-\sqrt{n}w_i(x)+\sqrt{n}w_i(x)=0$ for all $i=1,\ldots,n-1$. Finally, for $i\ne j$ the coefficient of $f(0T_iT_jx)$ equals $-w_i(x)w_j(T_ix)-w_j(x)w_i(T_jx)=0$ by $(\star)$.
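One can also confirm numerically that the relations (2) force the relations (3): pick the values $f(0x)$ at random, define $f(1x)$ by (2), and (3) holds identically (a sketch for $n=4$; all names are mine):

```python
import math
import random
from itertools import product

def w(i, x):
    return (-1) ** sum(x[:i-1])

def T(i, x):
    return x[:i-1] + (1 - x[i-1],) + x[i:]

n = 4
half = list(product((0, 1), repeat=n - 1))  # the words x in Omega_{n-1}

# Choose the values f(0x) freely, then define f(1x) by relation (2).
random.seed(0)
f = {(0,) + x: random.uniform(-1, 1) for x in half}
for x in half:
    f[(1,) + x] = (math.sqrt(n) * f[(0,) + x]
                   - sum(w(i, x) * f[(0,) + T(i, x)] for i in range(1, n)))

# Relation (3) then holds automatically for every x.
for x in half:
    lhs = f[(0,) + x]
    rhs = (math.sqrt(n) * f[(1,) + x]
           + sum(w(i, x) * f[(1,) + T(i, x)] for i in range(1, n)))
    assert abs(lhs - rhs) < 1e-9
```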

Now if $y\in \Omega_n$ is chosen so that $|f(y)|$ is maximal, then $f(y)\ne 0$, hence $y\in A$, and looking at the right-hand side of (1) we conclude that at least $\sqrt{n}$ of the words $T_i y$ must belong to the support of $f$, thus to $A$. The proof is complete.
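The whole argument can be replayed numerically for, say, $n=4$: choose a subset $A$ of size $2^{n-1}+1$ at random, find a nontrivial $f$ in the null space of the $2^n-1$ homogeneous constraints, and check that the point of maximal $|f|$ has at least $\sqrt{n}$ neighbours in the support of $f$ (a sketch using NumPy; all names and the random choice of $A$ are mine):

```python
import numpy as np
from itertools import product

def w(i, x):
    return (-1) ** sum(x[:i-1])

def T(i, x):
    return x[:i-1] + (1 - x[i-1],) + x[i:]

n, rng = 4, np.random.default_rng(0)
cube = list(product((0, 1), repeat=n))
idx = {x: k for k, x in enumerate(cube)}

# An arbitrary subset A of size 2^(n-1) + 1.
A = [cube[k] for k in rng.choice(2**n, size=2**(n - 1) + 1, replace=False)]

# Homogeneous system: f vanishes outside A, and relation (1) holds for
# all y starting with 0 (the relations (2)).  That is 2^n - 1 equations
# in 2^n unknowns, so a nontrivial solution exists.
rows = []
for x in cube:
    if x not in A:
        row = np.zeros(2**n)
        row[idx[x]] = 1.0
        rows.append(row)
for x in product((0, 1), repeat=n - 1):
    y = (0,) + x
    row = np.zeros(2**n)
    row[idx[y]] = np.sqrt(n)
    for i in range(1, n + 1):
        row[idx[T(i, y)]] -= w(i, y)
    rows.append(row)
M = np.array(rows)

# A nontrivial null vector: the right singular vector for the zero
# singular value (M has fewer rows than columns).
f = np.linalg.svd(M)[2][-1]

y = cube[int(np.argmax(np.abs(f)))]  # point where |f| is maximal
deg = sum(abs(f[idx[T(i, y)]]) > 1e-9 for i in range(1, n + 1))
assert abs(f[idx[y]]) > 1e-9 and y in A
assert deg >= np.sqrt(n)  # at least sqrt(n) neighbours inside A
```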

Of course, the reason why the $2^{n-1}$ equations (2) imply the other $2^{n-1}$ equations (3) is that the operator corresponding to (1), $f(y)\mapsto \sum_{i=1}^{n}w_i(y)f(T_i y)$, has an eigenspace of dimension $2^{n-1}$ for the eigenvalue $\sqrt{n}$. Or vice versa, if you prefer. I cannot say that I understand well where this operator comes from.
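Writing the operator as a matrix makes the eigenvalue claim concrete: by $(\star)$ its square is $nI$, its trace is zero, so the eigenvalues $\pm\sqrt{n}$ each occur with multiplicity $2^{n-1}$ (a NumPy sketch; the matrix name `B` is mine):

```python
import numpy as np
from itertools import product

def w(i, x):
    return (-1) ** sum(x[:i-1])

def T(i, x):
    return x[:i-1] + (1 - x[i-1],) + x[i:]

n = 4
cube = list(product((0, 1), repeat=n))
idx = {x: k for k, x in enumerate(cube)}

# Matrix of (Bf)(y) = sum_i w_i(y) f(T_i y): a signed adjacency matrix
# of the cube; it is symmetric because w_i(x) = w_i(T_i x).
B = np.zeros((2**n, 2**n))
for y in cube:
    for i in range(1, n + 1):
        B[idx[y], idx[T(i, y)]] = w(i, y)

assert np.allclose(B @ B, n * np.eye(2**n))  # B^2 = nI, thanks to (*)
assert np.trace(B) == 0
eig = np.linalg.eigvalsh(B)
assert np.sum(np.isclose(eig, np.sqrt(n))) == 2**(n - 1)
assert np.sum(np.isclose(eig, -np.sqrt(n))) == 2**(n - 1)
```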

## Join the Conversation

1. Do you know how to give a nice eigenbasis for this operator? I tried but did not succeed.


1. Well, you may partition the cube into two parallel facets, take your favourite basis for functions on one facet (for example, one value is 1 and the others are 0), and the values on the other facet are determined uniquely by (2). I do not know whether it is nice, but at least it is quite explicit and consists of functions with small support. In general, if the operator satisfies an equation like $X^2-nI=(X-\sqrt{n}I)(X+\sqrt{n}I)=0$, the kernel of $X-\sqrt{n}I$ is the same as the image of $X+\sqrt{n}I$, and that guy has a basis consisting of the columns of the corresponding matrix.


2. Alexander Smal says:

It looks like sqrt(n) is missing in the left hand side of (2) and (3).


1. Alexander Smal says:

OK, I see, I was wrong, nothing is missing.


3. adityaguharoy says:

The description put by you is really interesting (tasty I would like to say). I did not know a word about the topic prior to reading this post. But I could still understand some parts of the post quite clearly. That’s powerful explanation sir. It’s really good.


4. Dear Fedor, the sentence “This reduces to $(\star)$ after we express $f(1x),f(1T_ix)$ via the values $f(0T_iT_jx)$” is not clear.
Can you add a few sentences of explanation to make the post self-contained? (Also, what is $(\star)$?)


1. Dear Gil, glad to see you! $(\star)$ is the equation after the words “crucial property is”. I added the detailed calculation to the place you ask about.
