# Basics
Here is a basic TensorFlow program.
```python
import numpy as np
from tensorflow import keras

# callback to stop training after reaching accuracy (class defined below)
callbacks = endTrainingOnAccuracy()
# Simplest neural network: 1 layer, 1 neuron
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
# "Compile" the neural network with a loss and optimizer function
model.compile(optimizer="sgd", loss="mean_squared_error")
# some simple data (y = 2x - 1)
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
# train model
model.fit(xs, ys, epochs=500, callbacks=[callbacks])
```
To stop training once the network reaches a certain level of accuracy (not sure if this is a good idea), use a [[Tensorflow-Advanced#Callbacks|callback]]
## Early stopping
```python
class endTrainingOnAccuracy(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get("loss") < 0.4:
            print("\nLoss is low so canceling training")
            self.model.stop_training = True
```
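Keras also ships a built-in early-stopping callback; unlike the threshold check above, it stops when the monitored metric stops improving. A minimal sketch:

```python
from tensorflow import keras

# stop if "loss" has not improved for 3 consecutive epochs
early_stop = keras.callbacks.EarlyStopping(monitor="loss", patience=3)
# model.fit(xs, ys, epochs=500, callbacks=[early_stop])
```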
## Learning rate finder
```python
import tensorflow as tf

# sweep the learning rate upward from 1e-8, multiplying by 10 every 20 epochs
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10 ** (epoch / 20)
)
# 100 epochs sweeps from 1e-8 up to 1e-3; keep the history for plotting
history = model.fit(xs, ys, epochs=100, callbacks=[lr_scheduler])
```
Plot learning rate against loss like this:
```python
import matplotlib.pyplot as plt
import numpy as np

# one learning rate per epoch, matching the scheduler above
lrs = 1e-8 * (10 ** (np.arange(100) / 20))
plt.semilogx(lrs, history.history["loss"])
plt.axis([1e-8, 1e-3, 0, 300])
```
Pick the learning rate where the loss is lowest and still stable, just before it starts to diverge.
![[Pasted image 20210119001348.png]]
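Instead of eyeballing the plot, the rate at the minimum loss can be read off programmatically. A sketch, with a toy loss curve standing in for `history.history["loss"]` from the sweep above:

```python
import numpy as np

# learning rates used during the sweep (one per epoch)
lrs = 1e-8 * (10 ** (np.arange(100) / 20))
# stand-in loss curve with a minimum at lr = 1e-5 (replace with real history)
losses = np.abs(np.log10(lrs) + 5) + 0.1
# rate at which the loss was lowest during the sweep
best_lr = lrs[int(np.argmin(losses))]
```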
## Clear Session
`tf.keras.backend.clear_session()`
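This resets the global Keras state (layer name counters, the backend graph), which is useful when building models repeatedly, e.g. in a hyperparameter loop. A sketch, assuming a `build_model` helper:

```python
import tensorflow as tf
from tensorflow import keras

def build_model():
    # free state left over from previously built models
    tf.keras.backend.clear_session()
    model = keras.Sequential([
        keras.layers.Dense(units=1, input_shape=[1])
    ])
    model.compile(optimizer="sgd", loss="mean_squared_error")
    return model
```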
## Conv2d
`tf.keras.layers.Conv2D` expects 4-D input of shape `(n, h, w, c)` - `n` is the number of images, `h` and `w` are the image height and width, and `c` is the number of channels; in the layer's output, the last dimension is the number of filters.
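A minimal sketch of that shape contract, using a random batch of 8 RGB 32x32 images:

```python
import numpy as np
import tensorflow as tf

# batch of 8 RGB images, 32x32: shape (n, h, w, c) = (8, 32, 32, 3)
images = np.random.rand(8, 32, 32, 3).astype("float32")
conv = tf.keras.layers.Conv2D(filters=16, kernel_size=3, padding="same")
out = conv(images)
# "same" padding keeps h and w; the channel axis becomes the filter count
print(out.shape)  # (8, 32, 32, 16)
```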