Linear Regression MLE
Machine Learning
Part 1

Consider the linear regression setting in which you are given a training set $\mathcal{D} := \{ (x_1, y_1), \dots, (x_N, y_N) \}$ consisting of $N$ input-output pairs, where $y_i$ and $y_j$ are conditionally independent given their inputs $x_i, x_j$. Let $\mathcal{X} := \{x_1, \dots, x_N\}$ and $\mathcal{Y} := \{y_1, \dots, y_N\}$. Our goal is to find the parameters $\theta^*$ of the linear regression model.
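
For concreteness, the standard setup behind this question (an assumption, since the problem does not state a noise model) takes each observation to be a linear function of its input corrupted by i.i.d. Gaussian noise, which also yields the conditional independence asserted above:

\begin{equation}
y_i = x_i^\top \theta + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, \sigma^2), \qquad \text{so that} \quad p(y_i \mid x_i, \theta) = \mathcal{N}\big(y_i \mid x_i^\top \theta, \sigma^2\big).
\end{equation}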

One approach for finding these parameters is maximum likelihood estimation, in which we maximize the likelihood of the data given the parameters. We obtain the MLE parameters as:

\begin{equation}
\theta_{MLE} \in \arg\max_{\theta} \, p(\mathcal{Y} \mid \mathcal{X}, \theta)
\end{equation}
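
Because the $y_i$ are conditionally independent given their inputs, this likelihood factorizes over the training set; this is exactly the structure that the log transformation in the hint below turns into a sum:

\begin{equation}
p(\mathcal{Y} \mid \mathcal{X}, \theta) = \prod_{i=1}^{N} p(y_i \mid x_i, \theta)
\end{equation}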

To find the parameters $\theta_{MLE}$, we typically perform gradient descent. However, a closed-form solution also exists. Derive the closed-form solution for $\theta_{MLE}$.

Hint: Instead of maximizing the likelihood directly, think about how we can use the log transformation to simplify this derivation.
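
As a sanity check on whatever closed-form expression you derive, here is a minimal NumPy sketch (not part of the original question) that assumes the Gaussian noise model above, stacks the inputs into an $N \times D$ design matrix $X$, and compares the normal-equations candidate $\theta_{MLE} = (X^\top X)^{-1} X^\top \mathcal{Y}$ against NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data under the assumed Gaussian noise model:
# y_i = x_i^T theta_true + eps_i, with eps_i ~ N(0, sigma^2).
N, D, sigma = 500, 3, 0.5
X = rng.normal(size=(N, D))          # design matrix stacking the inputs x_i
theta_true = np.array([1.5, -2.0, 0.7])
y = X @ theta_true + sigma * rng.normal(size=N)

# Closed-form MLE via the normal equations: theta = (X^T X)^{-1} X^T y.
# Solving the linear system is preferred over forming an explicit inverse.
theta_mle = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares solver, which minimizes
# ||X theta - y||^2 -- the objective the negative log-likelihood reduces to.
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(theta_mle)
assert np.allclose(theta_mle, theta_lstsq)
```

If the two solutions agree (they should, up to floating-point tolerance), your derivation has reproduced the least-squares solution that maximizing the Gaussian log-likelihood implies.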