{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Machine Learning (ML) - Exercise 3\n",
"# Perceptrons and Multilayer Perceptrons\n",
"# 3.2 Loss Function, Backpropagation, and Gradient Descent\n",
"\n",
"In this exercise, the update function (learning algorithm) from the lecture is replaced by backpropagation and gradient descent.\n",
"\n",
"You are asked to adapt the perceptron that has already been implemented. The corresponding weight update is sketched in the note below."
]
},
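{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Reference sketch (assumption: squared-error loss; the lecture may define the loss differently):* for a single sigmoid unit with loss $L(\\mathbf{w}) = \\tfrac{1}{2}\\big(y - \\sigma(\\mathbf{w}^\\top \\mathbf{x})\\big)^2$ and sigmoid $\\sigma(z) = \\frac{1}{1 + e^{-z}}$, gradient descent with learning rate $\\eta$ updates the weights as\n",
"\n",
"$$\\mathbf{w} \\leftarrow \\mathbf{w} - \\eta\\,\\nabla_{\\mathbf{w}} L = \\mathbf{w} + \\eta\\,(y - \\hat{y})\\,\\hat{y}\\,(1 - \\hat{y})\\,\\mathbf{x}, \\qquad \\hat{y} = \\sigma(\\mathbf{w}^\\top \\mathbf{x}).$$"
]
},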
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# matplotlib: module for plotting data\n",
"from matplotlib import pyplot as plt\n",
"\n",
"# numpy: math library\n",
"import numpy as np\n",
"import pandas as pd\n",
"import time"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Exercise 1:** Replace the previous activation in the *predict* method with the sigmoid function (a possible numpy implementation of the sigmoid is sketched in the following cell)."
]
},
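{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (not part of the original exercise skeleton): one common way to\n",
"# implement the sigmoid activation with numpy. The exercise asks you to use it\n",
"# inside Perceptron.predict below; the helper name `sigmoid` is our own choice.\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"# quick check: sigmoid(0) should be 0.5\n",
"print(sigmoid(np.array([-2.0, 0.0, 2.0])))"
]
},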
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Exercise 2:** Implement the described gradient-descent learning procedure with backpropagation in the *fit* method (a single-unit update step is sketched in the following cell for reference)."
]
},
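{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (assumption: squared-error loss, as in the note above; the lecture\n",
"# may define the loss differently). One stochastic gradient-descent step for a\n",
"# single sigmoid unit with a bias weight, meant as a reference for implementing\n",
"# Perceptron.fit below, not as the finished solution.\n",
"def sgd_step(weights, x, y, eta):\n",
"    \"\"\"One gradient-descent step on a single example (x, y).\"\"\"\n",
"    x_b = np.insert(x, 0, 1.0)                            # prepend 1 for the bias weight\n",
"    y_hat = 1.0 / (1.0 + np.exp(-np.dot(weights, x_b)))   # sigmoid activation\n",
"    grad = (y_hat - y) * y_hat * (1.0 - y_hat) * x_b      # dL/dw for L = 0.5 * (y - y_hat)**2\n",
"    return weights - eta * grad\n",
"\n",
"# tiny demonstration on a single hand-made example\n",
"w = np.zeros(3)\n",
"w = sgd_step(w, np.array([0.0, 1.0]), 0.0, eta=0.1)\n",
"print(w)"
]
},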
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Exercise 3:** Look at the output of the perceptron (perceptron.predict(.)) on one of the data sets used so far (e.g. *AND*, Iris). What do you notice compared to a perceptron with the signum function as activation? What does this mean for its use as a binary classifier?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class Perceptron(object):\n",
"    def __init__(self, number_of_inputs, epochs, eta):\n",
"        \"\"\"\n",
"        Example call of the constructor:\n",
"        >>> Perceptron(2, 100, 0.1)\n",
"        \"\"\"\n",
"        ### Your code goes here:\n",
"        \n",
"        ##########################\n",
"        pass\n",
"    \n",
"    def predict(self, inputs):\n",
"        \"\"\"\n",
"        Example call:\n",
"        >>> inputs = np.array([0, 1])\n",
"        >>> h = perceptron.predict(inputs)\n",
"        \"\"\"\n",
"        # Your code goes here:\n",
"        \n",
"        ##########################\n",
"        pass\n",
"\n",
"    def fit(self, training_inputs, labels):\n",
"        \"\"\"\n",
"        Example call:\n",
"        >>> perceptron.fit(train_input, labels)\n",
"        \"\"\"\n",
"        # Your code goes here:\n",
"        \n",
"        ##########################\n",
"        pass\n",
"    \n",
"    def status(self):\n",
"        \"\"\"\n",
"        The status(...) method prints the current weights.\n",
"\n",
"        Example call and output:\n",
"        >>> perceptron.status()\n",
"        Perceptron weights: [0. 1. 1.]\n",
"        \"\"\"\n",
"        print(\"Perceptron weights: \", self.weights)\n",
"    \n",
"    def getWeights(self):\n",
"        return self.weights"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# AND data set\n",
"train_input = np.array([\n",
"    [0, 0],\n",
"    [0, 1],\n",
"    [1, 0],\n",
"    [1, 1]\n",
"])\n",
"\n",
"labels_AND = np.array([0, 0, 0, 1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Example with OR\n",
"labels_OR = np.array([0, 1, 1, 1])"
]
},
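{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Usage sketch: this only produces meaningful output once the Perceptron class\n",
"# above has been completed (Exercises 1 and 2). The constructor arguments\n",
"# (2 inputs, 100 epochs, eta = 0.1) follow the example in the class docstring.\n",
"perceptron = Perceptron(2, 100, 0.1)\n",
"perceptron.fit(train_input, labels_AND)\n",
"perceptron.status()\n",
"\n",
"for x in train_input:\n",
"    print(x, \"->\", perceptron.predict(x))"
]
},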
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load and prepare the Iris data set (see Exercise 2)\n",
"\n",
"# Load the data set\n",
"names = [\"sepal-length\", \"sepal-width\", \"petal-length\", \"petal-width\", \"class\"]\n",
"iris_data = pd.read_csv(\"iris.csv\", names = names)\n",
"\n",
"# Select classes (change if needed)\n",
"iris_data = iris_data.loc[lambda x: x['class'] != 'Iris-setosa']\n",
"\n",
"# Select features (change if needed)\n",
"iris_features = ['petal-length', 'petal-width']\n",
"X = iris_data[iris_features]\n",
"# Convert the pandas data format into a plain array\n",
"X = X.values\n",
"\n",
"# Prepare the labels\n",
"from sklearn.preprocessing import LabelEncoder\n",
"lb_make = LabelEncoder()\n",
"iris_data[\"class_code\"] = lb_make.fit_transform(iris_data[\"class\"])\n",
"y = iris_data.class_code\n",
"y = y.values\n",
"\n",
"# Train/test split with a fixed random_state for reproducibility\n",
"from sklearn.model_selection import train_test_split\n",
"X_train, X_test, y_train, y_test = (\n",
"    train_test_split(X, y, test_size=0.2, random_state=42))"
]
},
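{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Evaluation sketch: train and evaluate on the Iris split prepared above,\n",
"# assuming the Perceptron class has been completed. Because the sigmoid output\n",
"# lies in (0, 1), it is thresholded at 0.5 here to obtain class labels\n",
"# (compare Exercise 3).\n",
"iris_perceptron = Perceptron(X_train.shape[1], 100, 0.1)\n",
"iris_perceptron.fit(X_train, y_train)\n",
"\n",
"predictions = np.array([iris_perceptron.predict(x) for x in X_test])\n",
"accuracy = np.mean((predictions >= 0.5).astype(int) == y_test)\n",
"print(\"Test accuracy:\", accuracy)"
]
}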
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}