I have a very general (and slightly weird) question on assessing moderating/interaction effects.

Here's the deal: I have a research question that focuses primarily on the effect of an explanatory variable X on an outcome Y, with a number of moderator variables Z that, according to my hypotheses, influence X's effect on Y. From descriptive/qualitative analyses it seems likely that there is indeed an effect of X on Y. Unfortunately, there is no good real-world dataset in which X takes different values. However, there is a good dataset in which X can be considered constant, say at X=1, while the Zs and Ys vary. Though I obviously cannot test the effect of X on Y here, I want to find out what level of influence the different Zs have on the outcome Y when X=1.

Given that mathematically there is no difference between the Zs and X, I would think that I can simply treat the Zs as my explanatory variables and run a regression model Y ~ Z1 + Z2 + Z3, ignoring the constant X. The fact that X is constant at X=1 would be treated as a limitation on the population of cases to which the model applies. The results would of course tell me nothing directly about the interaction between X and Z. However, I would argue that, *under the assumption* that X affects Y, the effects of the various Zs on Y provide *tentative* indications of the types of cases in which X has a higher chance of affecting the outcome (Y is dichotomous, by the way, so we're talking about a logistic regression). For example, if I find that Z1 has a strong negative effect on the likelihood of Y=1, this would suggest that, in a scenario where Z1 moderates the effect of X on Y, it will probably reduce X's positive effect on the likelihood of Y=1.

Is that correct or is there a flaw somewhere in this reasoning?

Thanks so much!

Janix