Multicollinearity between predictors affects all algorithmic attribution models, including sophisticated ones with thousands of predictors, such as Convertro’s multi-touch algorithmic model.
To make sure we don’t fall into the collinearity trap, we use a modeling technique called regularization.
Regularization can, in the simplest terms, be thought of as a generalization of Occam’s razor to model fitting: we balance the quality of the model’s fit to the data against the model’s complexity, penalizing more “complex” models and preferring the simplest explanation of the data. In exchange for a small reduction in fit, we gain stability to outliers and errors, better generalization (i.e., less overfitting), and a handle on the collinearity problem. The specific approach we use is elastic-net regularization of logistic regression.
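To make this concrete, here is a minimal sketch (not Convertro’s actual implementation) of elastic-net regularized logistic regression, fit by gradient descent on hypothetical synthetic data with two nearly collinear predictors. The penalty form, the `lam` and `alpha` parameters, and the data are illustrative assumptions; the L1 term uses a simple subgradient, which is adequate for a sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: x2 is an almost exact copy of x1,
# mimicking the collinearity problem described above.
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
X = np.column_stack([x1, x2])
y = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(float)

def fit_logistic(X, y, lam=0.0, alpha=0.5, lr=0.1, steps=2000):
    """Gradient descent on logistic loss plus an elastic-net penalty.

    Penalty: lam * (alpha * ||w||_1 + (1 - alpha) / 2 * ||w||_2^2).
    lam=0 recovers plain (unregularized) logistic regression; alpha
    mixes the L1 (lasso) and L2 (ridge) parts of the penalty.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / len(y)         # logistic-loss gradient
        # Subgradient of the elastic-net penalty
        grad += lam * (alpha * np.sign(w) + (1 - alpha) * w)
        w -= lr * grad
    return w

w_plain = fit_logistic(X, y, lam=0.0)   # no regularization
w_enet = fit_logistic(X, y, lam=0.1)    # elastic net

print("no regularization:", w_plain)
print("elastic net:      ", w_enet)
```

The regularized fit trades a small amount of in-sample accuracy for smaller, more stable coefficients on the two collinear predictors, which is exactly the trade-off the paragraph above describes.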