1 Preliminaries
When $f: A \to B$ is a map and $S \subseteq A$, we use $f(S)$ to denote the set $\{ f(a) \mid a \in S \}$.
1.1 Structures, homomorphisms, and company
A signature is a set of relation symbols; each relation symbol $R$ has an associated arity (a natural number), denoted by $\mathrm{ar}(R)$. A structure $\mathbf{B}$ over signature $\sigma$ consists of a universe $B$, which is a set, and an interpretation $R^{\mathbf{B}} \subseteq B^{\mathrm{ar}(R)}$ for each relation symbol $R \in \sigma$. We use $\|\mathbf{B}\|$ to denote the total size of $\mathbf{B}$, defined as $|B| + \sum_{R \in \sigma} \mathrm{ar}(R) \cdot |R^{\mathbf{B}}|$. We will in general use the symbols $\mathbf{A}, \mathbf{B}, \mathbf{C}, \ldots$ to denote structures, and the symbols $A, B, C, \ldots$ to denote their respective universes. In this article, we assume that all signatures under discussion are finite, and assume that all structures under discussion are finite; a structure is finite if its universe is finite.
Let $\mathbf{B}$ be a structure over signature $\sigma$. When $S \subseteq B$, we define $\mathbf{B}[S]$ as the structure with universe $S$ and where $R^{\mathbf{B}[S]} = R^{\mathbf{B}} \cap S^{\mathrm{ar}(R)}$ for each $R \in \sigma$. We define an induced substructure of $\mathbf{B}$ to be a structure of the form $\mathbf{B}[S]$, where $S \subseteq B$. Observe that a structure $\mathbf{B}$ has $2^{|B|}$ induced substructures. We define a deduct of $\mathbf{B}$ to be a structure obtained from $\mathbf{B}$ by removing tuples from relations of $\mathbf{B}$; that is, a structure $\mathbf{C}$ (over signature $\sigma$) is a deduct of $\mathbf{B}$ if $C = B$ and, for each $R \in \sigma$, it holds that $R^{\mathbf{C}} \subseteq R^{\mathbf{B}}$.
Let $\mathbf{A}$ and $\mathbf{B}$ be structures over the same signature $\sigma$. A homomorphism from $\mathbf{A}$ to $\mathbf{B}$ is a map $h: A \to B$ such that for each relation symbol $R \in \sigma$, it holds that $h(R^{\mathbf{A}}) \subseteq R^{\mathbf{B}}$, where $h$ is applied to tuples coordinatewise. A surjective homomorphism from $\mathbf{A}$ to $\mathbf{B}$ is a homomorphism $h$ such that $h(A) = B$, that is, such that $h$ is surjective as a mapping from the set $A$ to the set $B$. A condensation from $\mathbf{A}$ to $\mathbf{B}$ is a surjective homomorphism $h$ satisfying the condition that, for each relation symbol $R \in \sigma$, it holds that $h(R^{\mathbf{A}}) = R^{\mathbf{B}}$. This condition is sometimes referred to as edge-surjectivity in graph-theoretic contexts. (We remark that some authors use the term surjective homomorphism to refer to what we refer to as a condensation. Notions similar to the notion of condensation have been studied in the literature: notably, the term compaction is sometimes used (for example, in [9]) to refer to a homomorphism between graphs that maps the edge relation of the first graph surjectively onto the relation that contains the non-loop edges of the second graph.)
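The three notions just defined can be realized directly by brute force on small structures. The following sketch is our own illustration, not notation from the text: a structure is modelled as a pair (universe, rels), where rels maps each relation symbol to a set of tuples.

```python
from itertools import product

def all_maps(A, B):
    """All maps from the universe of A to the universe of B, as dicts."""
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    # h(R^A) must be contained in R^B for every relation symbol R
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def hom(A, B):
    return sum(1 for h in all_maps(A, B) if is_hom(h, A, B))

def sur(A, B):
    # surjective homomorphisms: h(A) = B
    return sum(1 for h in all_maps(A, B)
               if is_hom(h, A, B) and set(h.values()) == set(B[0]))

def cond(A, B):
    # condensations: surjective maps with h(R^A) = R^B for every R
    def image(h):
        return {R: {tuple(h[x] for x in t) for t in A[1][R]} for R in A[1]}
    return sum(1 for h in all_maps(A, B)
               if set(h.values()) == set(B[0]) and image(h) == B[1])

# Example: the directed 3-vertex path into the 2-cycle, over one binary symbol E.
P3 = ([0, 1, 2], {'E': {(0, 1), (1, 2)}})
C2 = ([0, 1], {'E': {(0, 1), (1, 0)}})
print(hom(P3, C2), sur(P3, C2), cond(P3, C2))  # prints: 2 2 2
```

On this example the three counts coincide: the only homomorphisms alternate between the two vertices of the cycle, and each of them hits both vertices and both edges.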
Two structures $\mathbf{A}$, $\mathbf{B}$ are homomorphically equivalent if there exists a homomorphism from $\mathbf{A}$ to $\mathbf{B}$ and there exists a homomorphism from $\mathbf{B}$ to $\mathbf{A}$.
Throughout, we tacitly use the fact that the composition of a homomorphism from $\mathbf{A}$ to $\mathbf{B}$ and a homomorphism from $\mathbf{B}$ to $\mathbf{C}$ is a homomorphism from $\mathbf{A}$ to $\mathbf{C}$.
1.2 Computational problems
We now define the computational problems to be studied. For each structure $\mathbf{B}$ over signature $\sigma$:
- Define $\#\mathrm{HOM}(\mathbf{B})$ to be the problem of computing, given a structure $\mathbf{A}$ over signature $\sigma$, the number of homomorphisms from $\mathbf{A}$ to $\mathbf{B}$.
- Define $\#\mathrm{SHOM}(\mathbf{B})$ to be the problem of computing, given a structure $\mathbf{A}$ over signature $\sigma$, the number of surjective homomorphisms from $\mathbf{A}$ to $\mathbf{B}$.
- Define $\#\mathrm{COND}(\mathbf{B})$ to be the problem of computing, given a structure $\mathbf{A}$ over signature $\sigma$, the number of condensations from $\mathbf{A}$ to $\mathbf{B}$.
2 Linear combinations of homomorphisms
Our development is strongly inspired by and based on the framework of Curticapean, Dell, and Marx [7], which in turn was based on work of Lovász [12, 11]. It is also informed by the theory developed by the current author with Mengel [3, 4, 5, 6]. In these works, a dual setup is considered, where one fixes the structure $\mathbf{A}$ from which homomorphisms originate, and counts the number of homomorphisms that an input structure receives from $\mathbf{A}$. Many of our observations and results can be seen to have duals in the cited works.
For each signature $\sigma$, let $\mathcal{S}_\sigma$ denote the class of all structures over $\sigma$, and fix $\mathcal{C}_\sigma$ to be a subclass of $\mathcal{S}_\sigma$ that contains exactly one structure from each isomorphism class of structures contained in $\mathcal{S}_\sigma$.
For structures $\mathbf{A}$, $\mathbf{B}$ over the same signature, we use:
- $\mathrm{hom}(\mathbf{A},\mathbf{B})$ to denote the number of homomorphisms from $\mathbf{A}$ to $\mathbf{B}$,
- $\mathrm{sur}(\mathbf{A},\mathbf{B})$ to denote the number of surjective homomorphisms from $\mathbf{A}$ to $\mathbf{B}$,
- $\mathrm{cond}(\mathbf{A},\mathbf{B})$ to denote the number of condensations from $\mathbf{A}$ to $\mathbf{B}$,
- $\mathrm{ind}(\mathbf{A},\mathbf{B})$ to denote the number of induced substructures of $\mathbf{B}$ that are isomorphic to $\mathbf{A}$, and
- $\mathrm{ded}(\mathbf{A},\mathbf{B})$ to denote the number of deducts of $\mathbf{B}$ that are isomorphic to $\mathbf{A}$.
We use $\mathrm{hom}(\cdot,\mathbf{B})$ to denote the mapping that sends a structure $\mathbf{A}$ to $\mathrm{hom}(\mathbf{A},\mathbf{B})$, and use $\mathrm{sur}(\cdot,\mathbf{B})$, $\mathrm{cond}(\cdot,\mathbf{B})$, etc. analogously.
Observe that
(1) $\mathrm{hom}(\mathbf{A},\mathbf{B}) = \sum_{\mathbf{C} \in \mathcal{C}_\sigma} \mathrm{sur}(\mathbf{A},\mathbf{C}) \cdot \mathrm{ind}(\mathbf{C},\mathbf{B})$.
We briefly justify this as follows. Each homomorphism $h$ from $\mathbf{A}$ to $\mathbf{B}$ is a surjective homomorphism from $\mathbf{A}$ onto an induced substructure of $\mathbf{B}$, namely, onto $\mathbf{B}[h(A)]$. Let $\mathbf{C} \in \mathcal{C}_\sigma$ be isomorphic to an induced substructure of $\mathbf{B}$, and let us count the number of homomorphisms $h$ from $\mathbf{A}$ to $\mathbf{B}$ such that $\mathbf{B}[h(A)]$ is isomorphic to $\mathbf{C}$. Let $\mathbf{D}_1,\ldots,\mathbf{D}_\ell$ be a list of all induced substructures of $\mathbf{B}$ that are isomorphic to $\mathbf{C}$. Then, we have $\ell = \mathrm{ind}(\mathbf{C},\mathbf{B})$ and $\mathrm{sur}(\mathbf{A},\mathbf{D}_1) = \cdots = \mathrm{sur}(\mathbf{A},\mathbf{D}_\ell) = \mathrm{sur}(\mathbf{A},\mathbf{C})$, so the desired number is $\ell \cdot \mathrm{sur}(\mathbf{A},\mathbf{C})$, which is equal to $\mathrm{sur}(\mathbf{A},\mathbf{C}) \cdot \mathrm{ind}(\mathbf{C},\mathbf{B})$.
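This counting argument can be checked numerically. The following check is our illustration: every homomorphism from $\mathbf{A}$ to $\mathbf{B}$ is a surjective homomorphism onto exactly one induced substructure of $\mathbf{B}$, so summing $\mathrm{sur}(\mathbf{A},\mathbf{B}[S])$ over all subsets $S$ of $\mathbf{B}$'s universe recovers $\mathrm{hom}(\mathbf{A},\mathbf{B})$.

```python
from itertools import product, combinations

def all_maps(A, B):
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def hom(A, B):
    return sum(1 for h in all_maps(A, B) if is_hom(h, A, B))

def sur(A, B):
    return sum(1 for h in all_maps(A, B)
               if is_hom(h, A, B) and set(h.values()) == set(B[0]))

def induced(B, S):
    # the induced substructure B[S]
    return (sorted(S), {R: {t for t in B[1][R] if set(t) <= set(S)}
                        for R in B[1]})

A = ([0, 1], {'E': {(0, 1)}})                     # a single directed edge
B = ([0, 1, 2], {'E': {(0, 1), (1, 2), (2, 0)}})  # a directed 3-cycle

lhs = hom(A, B)
rhs = sum(sur(A, induced(B, S))
          for r in range(len(B[0]) + 1)
          for S in combinations(B[0], r))
print(lhs, rhs)  # prints: 3 3
```

Here the three homomorphisms of the edge into the 3-cycle are partitioned according to their images: each two-element subset of the cycle containing an edge receives exactly one of them.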
Observe that
(2) $\mathrm{sur}(\mathbf{A},\mathbf{B}) = \sum_{\mathbf{C} \in \mathcal{C}_\sigma} \mathrm{cond}(\mathbf{A},\mathbf{C}) \cdot \mathrm{ded}(\mathbf{C},\mathbf{B})$.
The justification for this equation has the same flavor as that of the previous equation. Each surjective homomorphism $h$ from $\mathbf{A}$ to $\mathbf{B}$ is a condensation from $\mathbf{A}$ to a deduct of $\mathbf{B}$, namely, the deduct whose relations are given by $R \mapsto h(R^{\mathbf{A}})$; when $\mathbf{C} \in \mathcal{C}_\sigma$ is isomorphic to a deduct of $\mathbf{B}$, the product $\mathrm{cond}(\mathbf{A},\mathbf{C}) \cdot \mathrm{ded}(\mathbf{C},\mathbf{B})$ is the number of surjective homomorphisms from $\mathbf{A}$ to $\mathbf{B}$ that are condensations onto a deduct of $\mathbf{B}$ isomorphic to $\mathbf{C}$.
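A similar numerical check (again our illustration) applies here: every surjective homomorphism from $\mathbf{A}$ to $\mathbf{B}$ is a condensation onto exactly one deduct of $\mathbf{B}$, so summing $\mathrm{cond}(\mathbf{A},\mathbf{B}')$ over all deducts $\mathbf{B}'$ recovers $\mathrm{sur}(\mathbf{A},\mathbf{B})$.

```python
from itertools import product

def all_maps(A, B):
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def sur(A, B):
    return sum(1 for h in all_maps(A, B)
               if is_hom(h, A, B) and set(h.values()) == set(B[0]))

def cond(A, B):
    def image(h):
        return {R: {tuple(h[x] for x in t) for t in A[1][R]} for R in A[1]}
    return sum(1 for h in all_maps(A, B)
               if set(h.values()) == set(B[0]) and image(h) == B[1])

def deducts(B):
    # all structures obtained from B by removing tuples from relations
    syms = sorted(B[1])
    def subsets(s):
        s = sorted(s)
        for mask in range(2 ** len(s)):
            yield {s[i] for i in range(len(s)) if mask >> i & 1}
    for choice in product(*(list(subsets(B[1][R])) for R in syms)):
        yield (B[0], dict(zip(syms, choice)))

A = ([0, 1, 2], {'E': {(0, 1), (1, 2)}})  # directed 3-vertex path
B = ([0, 1], {'E': {(0, 1), (1, 0)}})

lhs = sur(A, B)
rhs = sum(cond(A, D) for D in deducts(B))
print(lhs, rhs)  # prints: 2 2
```

In this example, both surjective homomorphisms hit both edges of the target, so only the full deduct contributes to the sum.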
It is direct from Equation 1 that
(3) $\mathrm{sur}(\mathbf{A},\mathbf{B}) = \mathrm{hom}(\mathbf{A},\mathbf{B}) - \sum_{\mathbf{C} \in \mathcal{C}_\sigma:\, |C| < |B|} \mathrm{sur}(\mathbf{A},\mathbf{C}) \cdot \mathrm{ind}(\mathbf{C},\mathbf{B})$,
since the unique $\mathbf{C} \in \mathcal{C}_\sigma$ that is isomorphic to $\mathbf{B}$ contributes the term $\mathrm{sur}(\mathbf{A},\mathbf{B}) \cdot 1$, and every other $\mathbf{C}$ with $\mathrm{ind}(\mathbf{C},\mathbf{B}) \neq 0$ satisfies $|C| < |B|$.
From this, one can straightforwardly verify by induction on $\|\mathbf{B}\|$ that the function $\mathrm{sur}(\cdot,\mathbf{B})$ can be expressed as a linear combination of functions each having the form $\mathrm{hom}(\cdot,\mathbf{B}')$; moreover, such a linear combination is computable from $\mathbf{B}$. We formalize this as follows.
Proposition 2.1
There exists an algorithm that, given as input a structure $\mathbf{B}$ over signature $\sigma$, outputs a list $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k) \in \mathbb{Q} \times \mathcal{C}_\sigma$, where the values $\alpha_i$ are nonzero and the structures $\mathbf{B}_i$ are pairwise non-isomorphic, and such that, for all structures $\mathbf{A}$, it holds that $\mathrm{sur}(\mathbf{A},\mathbf{B}) = \sum_{i=1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$.
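The recursion underlying this proposition can be sketched as follows. For simplicity, this sketch (ours, not the paper's algorithm) indexes coefficients by subsets of $\mathbf{B}$'s universe rather than by isomorphism-class representatives, so it omits the final step of merging isomorphic structures and summing their coefficients; it repeatedly unfolds Equation 3 in its induced-substructure form.

```python
from itertools import product, combinations

def all_maps(A, B):
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def hom(A, B):
    return sum(1 for h in all_maps(A, B) if is_hom(h, A, B))

def sur(A, B):
    return sum(1 for h in all_maps(A, B)
               if is_hom(h, A, B) and set(h.values()) == set(B[0]))

def induced(B, S):
    return (sorted(S), {R: {t for t in B[1][R] if set(t) <= set(S)}
                        for R in B[1]})

def sur_as_homs(B):
    """Coefficients c with sur(A, B) = sum_S c[S] * hom(A, B[S]) for all A,
    obtained by unfolding sur(., B[T]) = hom(., B[T]) - sum_{S < T} sur(., B[S])."""
    U = list(B[0])
    subsets = [frozenset(S) for r in range(len(U) + 1)
               for S in combinations(U, r)]   # smaller subsets come first
    table = {}
    for T in subsets:
        c = {T: 1}
        for S in subsets:
            if S < T:  # proper subset: subtract the expansion of sur(., B[S])
                for S2, v in table[S].items():
                    c[S2] = c.get(S2, 0) - v
        table[T] = c
    return table[frozenset(U)]

B = ([0, 1], {'E': {(0, 1), (1, 0)}})
coeffs = sur_as_homs(B)
print({tuple(sorted(S)): v for S, v in coeffs.items()})

# check the linear combination against brute force on two inputs
for A in (([0, 1, 2], {'E': {(0, 1), (1, 2)}}),
          ([0, 1], {'E': {(0, 1)}})):
    assert sur(A, B) == sum(v * hom(A, induced(B, S))
                            for S, v in coeffs.items())
```

For this two-element template the computed coefficients are the inclusion-exclusion signs $(-1)^{|B| - |S|}$, in line with the Möbius-inversion view of Remark 2.3.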
In an analogous fashion, it is direct from Equation 2 that
(4) $\mathrm{cond}(\mathbf{A},\mathbf{B}) = \mathrm{sur}(\mathbf{A},\mathbf{B}) - \sum_{\mathbf{C} \in \mathcal{C}_\sigma:\, \|\mathbf{C}\| < \|\mathbf{B}\|} \mathrm{cond}(\mathbf{A},\mathbf{C}) \cdot \mathrm{ded}(\mathbf{C},\mathbf{B})$.
One can verify by induction that the function $\mathrm{cond}(\cdot,\mathbf{B})$ can be expressed as a linear combination of functions each having the form $\mathrm{sur}(\cdot,\mathbf{B}')$; such a linear combination is computable from $\mathbf{B}$, and so, in conjunction with Proposition 2.1, we obtain the following.
Proposition 2.2
There exists an algorithm that, given as input a structure $\mathbf{B}$ over signature $\sigma$, outputs a list $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k) \in \mathbb{Q} \times \mathcal{C}_\sigma$, where the values $\alpha_i$ are nonzero and the structures $\mathbf{B}_i$ are pairwise non-isomorphic, and such that, for all structures $\mathbf{A}$, it holds that $\mathrm{cond}(\mathbf{A},\mathbf{B}) = \sum_{i=1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$.
Remark 2.3
We can write Equation 1 in the following form:
$\mathrm{hom}(\mathbf{A},\mathbf{B}) = \sum_{\mathbf{B}'} \mathrm{sur}(\mathbf{A},\mathbf{B}')$,
where the sum is over all induced substructures $\mathbf{B}'$ of $\mathbf{B}$; analogously, we can write Equation 2 in the following form:
$\mathrm{sur}(\mathbf{A},\mathbf{B}) = \sum_{\mathbf{B}'} \mathrm{cond}(\mathbf{A},\mathbf{B}')$,
where the sum is over all deducts $\mathbf{B}'$ of $\mathbf{B}$. From these forms, one can use Möbius inversion on posets to express $\mathrm{sur}(\cdot,\mathbf{B})$ as a linear combination of functions $\mathrm{hom}(\cdot,\mathbf{B}')$; and likewise to express $\mathrm{cond}(\cdot,\mathbf{B})$ as a linear combination of functions $\mathrm{sur}(\cdot,\mathbf{B}')$, which linear combination can then be expressed as a linear combination of functions $\mathrm{hom}(\cdot,\mathbf{B}')$.
Remark 2.4
Equations 1 and 2 can be conceived of as matrix identities. Let $\mathrm{Hom}$ denote the restriction of $\mathrm{hom}$ to pairs in $\mathcal{C}_\sigma \times \mathcal{C}_\sigma$, and view it as an infinite matrix whose indices are such pairs and having entries in $\mathbb{N}$; define and view $\mathrm{Sur}$, $\mathrm{Cond}$, $\mathrm{Ind}$, and $\mathrm{Ded}$ analogously. Then Equation 1, in matrix notation, is expressed by
$\mathrm{Hom} = \mathrm{Sur} \cdot \mathrm{Ind}$.
Analogously, Equation 2, in matrix notation, is expressed by
$\mathrm{Sur} = \mathrm{Cond} \cdot \mathrm{Ded}$.
Suppose that, for the indexing, the structures in $\mathcal{C}_\sigma$ are ordered in a way that respects total size, that is, whenever $\mathbf{C}$ comes before $\mathbf{C}'$, it holds that $\|\mathbf{C}\| \leq \|\mathbf{C}'\|$. Then, the matrices $\mathrm{Ind}$ and $\mathrm{Ded}$ are readily seen to be upper triangular and to have all diagonal entries equal to $1$; it can be verified that they are invertible.
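The first matrix identity can be checked concretely on a small fragment of $\mathcal{C}_\sigma$. The following sketch (our illustration) takes $\sigma$ to contain one unary symbol, lists all structures with at most two elements up to isomorphism in size-respecting order, and verifies $\mathrm{Hom} = \mathrm{Sur} \cdot \mathrm{Ind}$ on this fragment together with the triangularity of $\mathrm{Ind}$; the truncation is harmless here because $\mathrm{ind}(\mathbf{C},\mathbf{B}) = 0$ whenever $\mathbf{C}$ has more elements than $\mathbf{B}$.

```python
from itertools import product, combinations, permutations

def all_maps(A, B):
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def hom(A, B):
    return sum(1 for h in all_maps(A, B) if is_hom(h, A, B))

def sur(A, B):
    return sum(1 for h in all_maps(A, B)
               if is_hom(h, A, B) and set(h.values()) == set(B[0]))

def induced(B, S):
    return (sorted(S), {R: {t for t in B[1][R] if set(t) <= set(S)}
                        for R in B[1]})

def isomorphic(A, B):
    if len(A[0]) != len(B[0]):
        return False
    return any(all({tuple(dict(zip(A[0], p))[x] for x in t) for t in A[1][R]}
                   == B[1][R] for R in A[1])
               for p in permutations(B[0]))

def ind(C, B):
    return sum(1 for r in range(len(B[0]) + 1)
               for S in combinations(B[0], r)
               if isomorphic(induced(B, S), C))

def size(C):  # total size: |universe| plus total length of all tuples
    return len(C[0]) + sum(len(t) for R in C[1] for t in C[1][R])

# all structures over one unary symbol 'U' with <= 2 elements, up to
# isomorphism, ordered so that total size never decreases
reps = sorted(((list(range(n)), {'U': {(i,) for i in range(k)}})
               for n in range(3) for k in range(n + 1)), key=size)

m = len(reps)
Hom = [[hom(A, B) for B in reps] for A in reps]
Sur = [[sur(A, B) for B in reps] for A in reps]
Ind = [[ind(C, B) for B in reps] for C in reps]

assert all(Hom[i][j] == sum(Sur[i][l] * Ind[l][j] for l in range(m))
           for i in range(m) for j in range(m))        # Hom = Sur * Ind
assert all(Ind[l][l] == 1 for l in range(m))           # unit diagonal
assert all(Ind[l][j] == 0 for l in range(m) for j in range(m)
           if size(reps[j]) < size(reps[l]))           # upper triangular
print("checked", m, "representatives")
```

The identity $\mathrm{Sur} = \mathrm{Cond} \cdot \mathrm{Ded}$ could be checked in the same style, with deducts in place of induced substructures.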
3 The space of template parameters
We now study the space of linear combinations of functions $\mathrm{hom}(\cdot,\mathbf{B})$. Fix $\sigma$ to be a signature. Define a template function to be a function $f: \mathcal{C}_\sigma \to \mathbb{Q}$ such that there exists a structure $\mathbf{B}$ where, for each $\mathbf{A} \in \mathcal{C}_\sigma$, it holds that $f(\mathbf{A}) = \mathrm{hom}(\mathbf{A},\mathbf{B})$. Define a template parameter to be a function $g: \mathcal{C}_\sigma \to \mathbb{Q}$ that can be expressed as a finite linear combination of template functions. Template parameters naturally form a vector space over $\mathbb{Q}$, and this space is clearly spanned by the template functions. We prove that the template functions are linearly independent, and hence form a basis for this vector space.
Theorem 3.1
Let $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k) \in \mathbb{Q} \times \mathcal{C}_\sigma$ be such that the structures $\mathbf{B}_i$ are pairwise distinct. Suppose that, for all structures $\mathbf{A}$, it holds that $\sum_{i=1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i) = 0$. Then $\alpha_1 = \cdots = \alpha_k = 0$.
We first establish a lemma.
Lemma 3.2
Suppose that $\mathbf{B}_1,\ldots,\mathbf{B}_k \in \mathcal{C}_\sigma$ are pairwise distinct, but all homomorphically equivalent. Then, there exists a structure $\mathbf{A}$ such that the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A},\mathbf{B}_k)$ are nonzero and pairwise distinct.
For two structures $\mathbf{A}$, $\mathbf{A}'$, we use $\mathbf{A} + \mathbf{A}'$ to denote their disjoint union; and, for $n \geq 1$, we use $n\mathbf{A}$ to denote the $n$-fold disjoint union of $\mathbf{A}$ with itself. The identity $\mathrm{hom}(\mathbf{A} + \mathbf{A}',\mathbf{B}) = \mathrm{hom}(\mathbf{A},\mathbf{B}) \cdot \mathrm{hom}(\mathbf{A}',\mathbf{B})$ is known and straightforwardly verified.
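A quick check of this identity (our illustration): a map out of a disjoint union is exactly a pair of maps out of the two parts, and the homomorphism condition splits over the parts.

```python
from itertools import product

def all_maps(A, B):
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def hom(A, B):
    return sum(1 for h in all_maps(A, B) if is_hom(h, A, B))

def disjoint_union(A, Ap):
    # tag the elements so that the two universes are disjoint
    U = [(0, x) for x in A[0]] + [(1, x) for x in Ap[0]]
    rels = {R: {tuple((0, x) for x in t) for t in A[1][R]} |
               {tuple((1, x) for x in t) for t in Ap[1][R]}
            for R in A[1]}
    return (U, rels)

A  = ([0, 1], {'E': {(0, 1)}})                     # directed edge
Ap = ([0, 1, 2], {'E': {(0, 1), (1, 2)}})          # directed 3-vertex path
B  = ([0, 1, 2], {'E': {(0, 1), (1, 2), (2, 0)}})  # directed 3-cycle
print(hom(disjoint_union(A, Ap), B), hom(A, B) * hom(Ap, B))  # prints: 9 9
```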
Proof. We prove this by induction on $k$. In the case that $k = 1$, one can simply take $\mathbf{A} = \mathbf{B}_1$.
Suppose that $k > 1$. By induction, there exists $\mathbf{A}_0$ such that $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_{k-1})$ are nonzero and pairwise distinct. Let us assume for the sake of notation that $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_1) < \cdots < \mathrm{hom}(\mathbf{A}_0,\mathbf{B}_{k-1})$. Since the structures $\mathbf{B}_1,\ldots,\mathbf{B}_k$ are homomorphically equivalent, we have $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_k) \geq 1$. If $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_k)$ is distinct from each of the values $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_{k-1})$, we are done. Otherwise, there exists a unique index $j$ such that $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_j) = \mathrm{hom}(\mathbf{A}_0,\mathbf{B}_k)$. By Lovász's theorem [12], there exists a structure $\mathbf{A}$ such that $\mathrm{hom}(\mathbf{A},\mathbf{B}_j) \neq \mathrm{hom}(\mathbf{A},\mathbf{B}_k)$; observe that since $\mathbf{B}_j$ and $\mathbf{B}_k$ are homomorphically equivalent, both of these values are nonzero; indeed, all of the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A},\mathbf{B}_k)$ are nonzero.
We claim that, for all sufficiently large values $n$, the structure $\mathbf{A}_0 + n\mathbf{A}$ has the desired property that the values $\mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_k)$ are nonzero and pairwise distinct. This is indeed straightforward to verify. We have $\mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_i) = \mathrm{hom}(\mathbf{A}_0,\mathbf{B}_i) \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)^n$, and since the structures are homomorphically equivalent, we obtain that the values are nonzero. Let us now argue pairwise distinctness. When $i, i'$ are such that $\mathrm{hom}(\mathbf{A},\mathbf{B}_i) < \mathrm{hom}(\mathbf{A},\mathbf{B}_{i'})$, for sufficiently large values of $n$, it will hold that $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_i) \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)^n < \mathrm{hom}(\mathbf{A}_0,\mathbf{B}_{i'}) \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_{i'})^n$, from which it follows that $\mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_i) \neq \mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_{i'})$. In a similar way, one sees that when $\mathrm{hom}(\mathbf{A},\mathbf{B}_i) = \mathrm{hom}(\mathbf{A},\mathbf{B}_{i'})$ and $\{i,i'\} \neq \{j,k\}$, it holds for all $n$ that $\mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_i) \neq \mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_{i'})$, since in this case $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_i) \neq \mathrm{hom}(\mathbf{A}_0,\mathbf{B}_{i'})$. Finally, we have for all $n \geq 1$ that $\mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_j) \neq \mathrm{hom}(\mathbf{A}_0 + n\mathbf{A},\mathbf{B}_k)$, as a consequence of $\mathrm{hom}(\mathbf{A}_0,\mathbf{B}_j) = \mathrm{hom}(\mathbf{A}_0,\mathbf{B}_k)$ and $\mathrm{hom}(\mathbf{A},\mathbf{B}_j) \neq \mathrm{hom}(\mathbf{A},\mathbf{B}_k)$.
Proof. (Theorem 3.1) We prove this by induction on $k$. It is clear for $k = 0$, so suppose that $k \geq 1$.
We assume for the sake of notation that $\mathbf{B}_1$ is extremal, in that for each other structure $\mathbf{B}_i$, either $\mathbf{B}_i$ is homomorphically equivalent to $\mathbf{B}_1$, or $\mathbf{B}_1$ does not admit a homomorphism to $\mathbf{B}_i$. We assume further that $\mathbf{B}_1,\ldots,\mathbf{B}_m$ is a list of the structures among $\mathbf{B}_1,\ldots,\mathbf{B}_k$ that are homomorphically equivalent to $\mathbf{B}_1$.
Applying Lemma 3.2 to $\mathbf{B}_1,\ldots,\mathbf{B}_m$, we obtain a structure $\mathbf{A}$ such that the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A},\mathbf{B}_m)$ are nonzero and pairwise distinct. (Inspection of the proof of Lemma 3.2 shows that $\mathbf{A}$ is homomorphically equivalent to $\mathbf{B}_1$; hence, by the extremality of $\mathbf{B}_1$, it holds that $\mathrm{hom}(\mathbf{A},\mathbf{B}_i) = 0$ for each $i > m$.) Consider the structures $\mathbf{A}_1, \mathbf{A}_2, \ldots$ defined by $\mathbf{A}_n = n\mathbf{A}$. For each $n \geq 1$, we have $0 = \sum_{i=1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A}_n,\mathbf{B}_i)$, which implies $0 = \sum_{i=1}^{m} \alpha_i \cdot \mathrm{hom}(\mathbf{A}_n,\mathbf{B}_i)$, which in turn implies $0 = \sum_{i=1}^{m} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)^n$. Now, form a system of equations by taking this last equation over $n = 1,\ldots,m$; view it as a system of equations over unknowns $x_i = \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$, where $i$ ranges from $1$ to $m$. The corresponding matrix is a Vandermonde matrix, implying that $x_1 = \cdots = x_m = 0$. Since the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_i)$ are all nonzero, we infer that $\alpha_1 = \cdots = \alpha_m = 0$. By applying induction to $(\alpha_{m+1},\mathbf{B}_{m+1}),\ldots,(\alpha_k,\mathbf{B}_k)$, we obtain that $\alpha_{m+1} = \cdots = \alpha_k = 0$.
4 The complexity of template parameters
We now study the complexity of computing template parameters, showing in essence that computing a template parameter has the same complexity as computing all of its constituent functions $\mathrm{hom}(\cdot,\mathbf{B}_i)$.
Theorem 4.1
Let $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k) \in \mathbb{Q} \times \mathcal{C}_\sigma$ be such that the values $\alpha_i$ are nonzero and such that the structures $\mathbf{B}_i$ are pairwise non-isomorphic.
- Let $h$ be the function defined by $h(\mathbf{A}) = \sum_{i=1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$.
- Let $c$ be the function defined by $c(i,\mathbf{A}) = \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$, for each $i \in \{1,\ldots,k\}$ and each structure $\mathbf{A}$.
The functions $h$ and $c$ are equivalent under polynomial-time Turing reduction.
For functions $f$, $g$, we use $f \leq g$ to indicate that $f$ polynomial-time Turing reduces to $g$.
Proof. It is clear that $h \leq c$, so we prove that $c \leq h$, by induction on $k$; the result is clear for $k = 0$. By rearranging indices if necessary, let us assume that the structures $\mathbf{B}_1,\ldots,\mathbf{B}_k$ are as described in the second paragraph of the proof of Theorem 3.1. Let $c_1$ be the restriction of $c$ to pairs $(i,\mathbf{A})$ with $i \in \{1,\ldots,m\}$, and let $c_2$ be the restriction of $c$ to pairs $(i,\mathbf{A})$ with $i \in \{m+1,\ldots,k\}$. Let $h_2$ be the function defined by $h_2(\mathbf{A}) = \sum_{i=m+1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$.
Let us show $c_1 \leq h$. By applying Lemma 3.2 to $\mathbf{B}_1,\ldots,\mathbf{B}_m$, we obtain a structure $\mathbf{A}^*$ such that the values $\mathrm{hom}(\mathbf{A}^*,\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A}^*,\mathbf{B}_m)$ are nonzero and pairwise distinct. Given a pair $(i,\mathbf{A})$ as input, the reduction constructs the structures $\mathbf{A}_1,\ldots,\mathbf{A}_m$ defined by $\mathbf{A}_n = \mathbf{A} + n\mathbf{A}^*$, and then computes the various values $h(\mathbf{A}_1),\ldots,h(\mathbf{A}_m)$. We have, for each $n \in \{1,\ldots,m\}$, $\mathrm{hom}(\mathbf{A}_n,\mathbf{B}_{i'}) = \mathrm{hom}(\mathbf{A},\mathbf{B}_{i'}) \cdot \mathrm{hom}(\mathbf{A}^*,\mathbf{B}_{i'})^n$; from this, and since $\mathrm{hom}(\mathbf{A}^*,\mathbf{B}_{i'}) = 0$ when $i' > m$, we obtain $h(\mathbf{A}_n) = \sum_{i'=1}^{m} \alpha_{i'} \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_{i'}) \cdot \mathrm{hom}(\mathbf{A}^*,\mathbf{B}_{i'})^n$. Viewing this as a system of equations over unknowns $x_{i'} = \alpha_{i'} \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_{i'})$, the corresponding matrix is Vandermonde. Hence, we may solve for these unknowns, and then from their solution compute the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A},\mathbf{B}_m)$. We then output the desired value $\mathrm{hom}(\mathbf{A},\mathbf{B}_i)$.
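The interpolation step in this argument can be simulated concretely. The following toy run is our illustration, with small hypothetical templates, of the case $k = m = 2$: given only oracle access to $h(\mathbf{A}) = \alpha_1 \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_1) + \alpha_2 \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_2)$ for two homomorphically equivalent templates, it recovers both $\mathrm{hom}$ values by querying $h$ on $\mathbf{A} + n\mathbf{A}^*$ and solving the resulting Vandermonde system over the rationals.

```python
from fractions import Fraction
from itertools import product

def all_maps(A, B):
    U, V = list(A[0]), list(B[0])
    for image in product(V, repeat=len(U)):
        yield dict(zip(U, image))

def is_hom(h, A, B):
    return all(tuple(h[x] for x in t) in B[1][R]
               for R in A[1] for t in A[1][R])

def hom(A, B):
    return sum(1 for h in all_maps(A, B) if is_hom(h, A, B))

def disjoint_union(A, Ap):
    U = [(0, x) for x in A[0]] + [(1, x) for x in Ap[0]]
    rels = {R: {tuple((0, x) for x in t) for t in A[1][R]} |
               {tuple((1, x) for x in t) for t in Ap[1][R]}
            for R in A[1]}
    return (U, rels)

# B1: one looped vertex; B2: two looped vertices.  These are homomorphically
# equivalent, and the isolated vertex Astar separates them:
# hom(Astar, B1) = 1 and hom(Astar, B2) = 2.
B1 = ([0], {'E': {(0, 0)}})
B2 = ([0, 1], {'E': {(0, 0), (1, 1)}})
Astar = ([0], {'E': set()})

a1, a2 = Fraction(3), Fraction(-1)  # nonzero coefficients, chosen arbitrarily

def oracle(A):  # the only access the reduction has to the templates
    return a1 * hom(A, B1) + a2 * hom(A, B2)

def recover(A):
    """Compute hom(A, B1) and hom(A, B2) using the oracle alone."""
    t1, t2 = Fraction(hom(Astar, B1)), Fraction(hom(Astar, B2))
    A1 = disjoint_union(A, Astar)   # A + 1*Astar
    A2 = disjoint_union(A1, Astar)  # A + 2*Astar
    y1, y2 = oracle(A1), oracle(A2)
    # solve x1*t1^n + x2*t2^n = y_n (n = 1, 2) for x_i = a_i * hom(A, B_i)
    x2 = (y2 - y1 * t1) / (t2 * t2 - t1 * t2)
    x1 = (y1 - x2 * t2) / t1
    return x1 / a1, x2 / a2

A = ([0, 1], {'E': {(0, 1)}})  # a directed edge
print(recover(A))              # prints: (Fraction(1, 1), Fraction(2, 1))
```

The system is solvable precisely because the values $\mathrm{hom}(\mathbf{A}^*,\mathbf{B}_i)$ are nonzero and pairwise distinct, which is what Lemma 3.2 provides.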
We now argue that $h_2 \leq h$. Given a structure $\mathbf{A}$ as input, the reduction first computes $h(\mathbf{A})$. Since we just showed that $c_1 \leq h$, the reduction may also compute the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_1),\ldots,\mathrm{hom}(\mathbf{A},\mathbf{B}_m)$. By subtracting $\sum_{i=1}^{m} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$ from $h(\mathbf{A})$, the desired value $h_2(\mathbf{A})$ is computed.
We obtain $c_2 \leq h_2$ by induction; it follows that $c_2 \leq h$.
As we established that $c_1 \leq h$ and $c_2 \leq h$, it is immediate that $c \leq h$.
5 Complexity results
Previous work established a complexity dichotomy for the family of problems $\#\mathrm{HOM}(\mathbf{B})$ [1, 8]. Let $\mathsf{FP}$ denote the functional version of polynomial time. A criterion was presented that distinguishes the structures $\mathbf{B}$ for which $\#\mathrm{HOM}(\mathbf{B})$ is in the class $\mathsf{FP}$ from those for which $\#\mathrm{HOM}(\mathbf{B})$ is complete for $\#\mathsf{P}$. Here, we refer to this criterion as the tractability condition; we refer the reader to [8] for a precise formulation of this criterion. The dichotomy can be made precise as follows.
Theorem 5.1
[1, 8] Let $\mathbf{B}$ be a structure. If $\mathbf{B}$ satisfies the tractability condition, then $\#\mathrm{HOM}(\mathbf{B})$ is in $\mathsf{FP}$; otherwise, $\#\mathrm{HOM}(\mathbf{B})$ is $\#\mathsf{P}$-complete under polynomial-time Turing reducibility.
The following was also established.
Theorem 5.2
[8] The tractability condition is decidable.
Define the $\mathrm{sur}$-tractability condition to be satisfied by a structure $\mathbf{B}$ iff the algorithm of Proposition 2.1, invoked on $\mathbf{B}$, returns a list $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k)$ such that each structure $\mathbf{B}_i$ satisfies the tractability condition. (We remark here that all algorithms behaving as described in Proposition 2.1 will output the same list, up to permutation, due to Theorem 3.1.) We obtain the following.
Theorem 5.3
Let $\mathbf{B}$ be any structure. If $\mathbf{B}$ satisfies the $\mathrm{sur}$-tractability condition, then the problem $\#\mathrm{SHOM}(\mathbf{B})$ is in $\mathsf{FP}$; otherwise, it is $\#\mathsf{P}$-complete under polynomial-time Turing reducibility. Moreover, the $\mathrm{sur}$-tractability condition is decidable.
Proof. Let $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k)$ be the list obtained by invoking the algorithm of Proposition 2.1 on $\mathbf{B}$.
Suppose that $\mathbf{B}$ satisfies the $\mathrm{sur}$-tractability condition. Let us argue that $\#\mathrm{SHOM}(\mathbf{B})$ is in $\mathsf{FP}$. The algorithm is given a structure $\mathbf{A}$ as input. By assumption, each $\mathbf{B}_i$ satisfies the tractability condition, and so each of the values $\mathrm{hom}(\mathbf{A},\mathbf{B}_i)$ can be computed in polynomial time. The algorithm outputs the sum $\sum_{i=1}^{k} \alpha_i \cdot \mathrm{hom}(\mathbf{A},\mathbf{B}_i)$, which is equal to $\mathrm{sur}(\mathbf{A},\mathbf{B})$.
Suppose that $\mathbf{B}$ does not satisfy the $\mathrm{sur}$-tractability condition. There exists an index $j$ such that $\mathbf{B}_j$ does not satisfy the tractability condition, so $\#\mathrm{HOM}(\mathbf{B}_j)$ is $\#\mathsf{P}$-complete by Theorem 5.1. Let $h$ and $c$ be the functions described in the statement of Theorem 4.1. Clearly, $\#\mathrm{HOM}(\mathbf{B}_j) \leq c$. Since $c \leq h$ by Theorem 4.1, and $h$ is precisely the function $\mathrm{sur}(\cdot,\mathbf{B})$ computed by $\#\mathrm{SHOM}(\mathbf{B})$, we obtain that $\#\mathrm{SHOM}(\mathbf{B})$ is $\#\mathsf{P}$-complete, as desired.
Decidability of the $\mathrm{sur}$-tractability condition is immediate from its definition and Theorem 5.2.
Define the $\mathrm{cond}$-tractability condition to be satisfied by a structure $\mathbf{B}$ iff the algorithm of Proposition 2.2, invoked on $\mathbf{B}$, returns a list $(\alpha_1,\mathbf{B}_1),\ldots,(\alpha_k,\mathbf{B}_k)$ such that each structure $\mathbf{B}_i$ satisfies the tractability condition. We have the following; the proof is analogous to that of Theorem 5.3.
Theorem 5.4
Let $\mathbf{B}$ be any structure. If $\mathbf{B}$ satisfies the $\mathrm{cond}$-tractability condition, then the problem $\#\mathrm{COND}(\mathbf{B})$ is in $\mathsf{FP}$; otherwise, it is $\#\mathsf{P}$-complete under polynomial-time Turing reducibility. Moreover, the $\mathrm{cond}$-tractability condition is decidable.
We would like to present further consequences of our theory. From Equation 3, it can be elementarily verified that, for any structure $\mathbf{B}$, the expression of $\mathrm{sur}(\cdot,\mathbf{B})$ as a linear combination of functions $\mathrm{hom}(\cdot,\mathbf{B}')$ gives a coefficient of $1$ to $\mathrm{hom}(\cdot,\mathbf{B})$. The same fact holds for $\mathrm{cond}(\cdot,\mathbf{B})$ in place of $\mathrm{sur}(\cdot,\mathbf{B})$, as can be elementarily seen from Equations 4 and 3. (That $\mathrm{hom}(\cdot,\mathbf{B})$ receives a coefficient of $1$ in these expressions is also immediate from Möbius inversion.) We thus obtain the following, via Theorem 4.1.
Corollary 5.5
For each structure $\mathbf{B}$, the problem $\#\mathrm{HOM}(\mathbf{B})$ reduces to $\#\mathrm{SHOM}(\mathbf{B})$.
Corollary 5.6
For each structure $\mathbf{B}$, the problem $\#\mathrm{HOM}(\mathbf{B})$ reduces to $\#\mathrm{COND}(\mathbf{B})$.
In the setting of graphs, results similar to these two corollaries were obtained by Focke, Goldberg, and Živný [9]. (See their Theorem 30 and Theorem 13; we remark that their Theorem 13 concerns compactions, and that in their setup, inputs are irreflexive graphs.) We would like to emphasize that here, these two corollaries fall out as very simple consequences of a more general theory.
The work [9] also presented classifications of undirected graph templates with respect to the problems of counting surjective homomorphisms and of counting compactions.
Let us mention that, for the decision problem of checking existence of a surjective homomorphism, a complexity classification of templates currently appears to be elusive, although there is work in this direction (see for example [2, 10] and the references therein).
Acknowledgements.
The author is grateful to Radu Curticapean and Holger Dell for discussions about and clear explanations of their joint work [7] with Dániel Marx. The author thanks Stefan Mengel for his collaboration on database queries [3, 4, 5, 6], in which one can see effects similar to those in the present work. This work was supported by the Spanish Project MINECO COMMAS TIN2013-46181-C2-R, Basque Project GIU15/30, and Basque Grant UFI11/45.
References
 [1] Andrei A. Bulatov. The complexity of the counting constraint satisfaction problem. J. ACM, 60(5):34, 2013.
 [2] Hubie Chen. An algebraic hardness criterion for surjective constraint satisfaction. Algebra Universalis, 72(4):393–401, 2014.
 [3] Hubie Chen and Stefan Mengel. A trichotomy in the complexity of counting answers to conjunctive queries. CoRR, abs/1408.0890, 2014.
 [4] Hubie Chen and Stefan Mengel. A trichotomy in the complexity of counting answers to conjunctive queries. In 18th International Conference on Database Theory, ICDT 2015, March 23-27, 2015, Brussels, Belgium, pages 110–126, 2015.
 [5] Hubie Chen and Stefan Mengel. Counting answers to existential positive queries: A complexity classification. In Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pages 315–326, 2016.
 [6] Hubie Chen and Stefan Mengel. The logic of counting query answers. In 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2017, Reykjavik, Iceland, June 20-23, 2017, pages 1–12, 2017.
 [7] Radu Curticapean, Holger Dell, and Dániel Marx. Homomorphisms are a good basis for counting small subgraphs. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 210–223, 2017.
 [8] Martin E. Dyer and David Richerby. An effective dichotomy for the counting constraint satisfaction problem. SIAM J. Comput., 42(3):1245–1274, 2013.
 [9] Jacob Focke, Leslie Ann Goldberg, and Stanislav Živný. The complexity of counting surjective homomorphisms and compactions. CoRR, abs/1706.08786, 2017.
 [10] Benoit Larose, Barnaby Martin, and Daniel Paulusma. Surjective H-colouring over reflexive digraphs, 2017.
 [11] László Lovász. Large Networks and Graph Limits, volume 60 of Colloquium Publications. American Mathematical Society, 2012.
 [12] L. Lovász. Operations with structures. Acta Mathematica Academiae Scientiarum Hungarica, 18(3-4):321–328, 1967.