Bayesian Linear Regression completed

This commit is contained in:
paul-loedige 2022-02-15 23:27:27 +01:00
parent 2e793e36af
commit 4a994c767f


For Bayesian Linear Regression, the posterior and the predictive distribution can be computed without resorting to approximations.
The following components are required (a numerical sketch follows the list):
\begin{itemize}
\item Likelihood (single sample): \tabto{6cm}$p(y|\bm x,\bm w) = \nomeq{gaussian_distribution}(y|\bm w^T \nomeq{vector_valued_function},\nomeq{variance})$
\item Likelihood (entire dataset): \tabto{6cm}$p(\bm y|\bm X,\bm w) = \prod_i \nomeq{gaussian_distribution}(y_i|\bm w^T \bm\phi(\bm x_i), \nomeq{variance})$
\item Gaussian Prior: \tabto{6cm}$p(\bm w) = \nomeq{gaussian_distribution}(\bm w|\bm 0,\nomeq{regularization_factor}^{-1}\nomeq{identity_matrix})$
\end{itemize}
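To make these components concrete, the following is a minimal sketch in Python/NumPy. It assumes polynomial features $\bm\phi(x)=(1,x,\dots,x^d)^T$, illustrative values for $\nomeq{regularization_factor}$ and $\sigma_{\bm y}^2$, and a toy dataset; all names and values are assumptions for illustration, not taken from the lecture.
\begin{verbatim}
import numpy as np

# Illustrative hyperparameters (assumed values, not from the lecture)
lam = 1.0       # prior precision factor lambda
sigma_y = 0.1   # observation noise standard deviation
d = 3           # polynomial degree

def phi(x):
    """Polynomial feature vector phi(x) = (1, x, ..., x^d)."""
    return np.array([x**k for k in range(d + 1)])

# Toy dataset (assumed): noisy samples of a sine function
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=20)
y = np.sin(np.pi * X) + sigma_y * rng.standard_normal(20)

# Design matrix Phi: one feature vector phi(x_i) per row
Phi = np.stack([phi(x) for x in X])
\end{verbatim}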
The regression then proceeds according to the steps of \nameref{cha:Bayesian Learning}:
\begin{enumerate}
\item Compute the posterior:
\begin{equation} \label{eq:bayesian_linear_regression_posterior}
p(\bm w|\bm X,\bm y) = \frac{p(\bm y|\bm X,\bm w)p(\bm w)}{p(\bm y|\bm X)}
= \frac{p(\bm y|\bm X,\bm w)p(\bm w)}{\int p(\bm y|\bm X,\bm w)p(\bm w)d\bm w}
\end{equation}
The second Gaussian Bayes Rule (\cref{sec:Gaussian Bayes Rules}) can be used for this, as sketched below\\
(with $\bm\mu_{\bm x}=\bm 0$, $\nomeq{covariance}_{\bm x} = \nomeq{regularization_factor}^{-1}\nomeq{identity_matrix}$, $\bm F = \bm\Phi$, and observation noise $\sigma_{\bm y}^2$)
\begin{equation} \label{eq:bayesian_linear_regression_posterior_gaussian_bayes_rule}
p(\bm w|\bm X,\bm y) = \nomeq{gaussian_distribution}(\bm w|\bm\mu_{\bm w|\bm X,\bm y},\nomeq{covariance}_{\bm w|\bm X,\bm y})
\end{equation}
\begin{itemize}
\item $\bm\mu_{\bm w|\bm X,\bm y} = (\bm\Phi^T\bm\Phi + \sigma_{\bm y}^2\nomeq{regularization_factor}\nomeq{identity_matrix})^{-1}\bm\Phi^T\bm y$
\item $\nomeq{covariance}_{\bm w|\bm X,\bm y} = \sigma_{\bm y}^2(\bm\Phi^T\bm\Phi + \sigma_{\bm y}^2\nomeq{regularization_factor}\nomeq{identity_matrix})^{-1}$
\end{itemize}
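Continuing the sketch from above, these two expressions can be evaluated directly (using a linear solve rather than an explicit inverse where possible):
\begin{verbatim}
# Posterior p(w | X, y) = N(w | mu_w, Sigma_w), continuing the sketch above
A = Phi.T @ Phi + sigma_y**2 * lam * np.eye(d + 1)
mu_w = np.linalg.solve(A, Phi.T @ y)      # posterior mean
Sigma_w = sigma_y**2 * np.linalg.inv(A)   # posterior covariance
\end{verbatim}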
\item Compute the predictive distribution:
\begin{align} \label{eq:bayesian_linear_regression_predictive_distribution}
p(y^*|\bm x^*,\bm X,\bm y) &= \int p(y^*|\bm w,\bm x^*)p(\bm w|\bm X,\bm y)d\bm w \\
&= \int \nomeq{gaussian_distribution}(y^*|\bm\phi(\bm x^*)^T\bm w,\sigma_{\bm y}^2)\nomeq{gaussian_distribution}(\bm w|\bm\mu_{\bm w|\bm X,\bm y},\nomeq{covariance}_{\bm w|\bm X,\bm y}) d\bm w
\end{align}
This integral can be solved using \nameref{sec:Gaussian Propagation} (\cref{sec:Gaussian Propagation}); a numerical sketch follows this list:
\begin{itemize}
\item $\nomeq{mean}(\bm x^*) = \bm\phi(\bm x^*)^T(\bm\Phi^T\bm\Phi + \nomeq{regularization_factor}\sigma_{\bm y}^2\nomeq{identity_matrix})^{-1}\bm\Phi^T\bm y$
\item $\nomeq{variance}(\bm x^*) = \sigma_{\bm y}^2(1+\bm\phi(\bm x^*)^T(\bm\Phi^T\bm\Phi + \nomeq{regularization_factor}\sigma_{\bm y}^2\nomeq{identity_matrix})^{-1}\bm\phi(\bm x^*))$
\end{itemize}
\end{enumerate}
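Continuing the sketch, the predictive mean and variance can be evaluated at a query point $\bm x^*$; the two printed values illustrate the observation below that the variance depends on where $\bm x^*$ lies relative to the training data.
\begin{verbatim}
# Predictive distribution p(y* | x*, X, y), continuing the sketch above
def predict(x_star):
    ph = phi(x_star)
    mean = ph @ np.linalg.solve(A, Phi.T @ y)              # same mean as Ridge Regression
    var = sigma_y**2 * (1.0 + ph @ np.linalg.solve(A, ph))
    return mean, var

# The variance grows for query points far from the training inputs
print(predict(0.0))  # inside the data range [-1, 1]
print(predict(3.0))  # far outside the data range
\end{verbatim}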
CONTINUE ON SLIDE 398
It is notable that $\nomeq{mean}(\bm x^*)$ is unchanged compared to \nameref{sub:Ridge Regression}.
However, $\nomeq{variance}(\bm x^*)$ now depends on the input data.