Interactive Deep Learning Labs • ANN • CNN • RNN • Autoencoders
Hands-on labs covering neural networks, convolutional models, sequence models, and generative architectures. Choose a lab to begin.
Feedforward neural networks: layers, activations, learning rate, overfitting.
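As a minimal sketch of what this lab covers, here is a forward pass through a tiny 2-2-1 feedforward network in pure Python. The weights are illustrative values, not trained parameters:

```python
import math

def relu(z):
    # Elementwise ReLU activation: max(0, z)
    return [max(0.0, v) for v in z]

def sigmoid(z):
    # Logistic sigmoid, squashes a scalar into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def dense(x, weights, biases):
    # Fully connected layer: one output per (weight row, bias) pair
    return [sum(xi * wi for xi, wi in zip(x, w_row)) + b
            for w_row, b in zip(weights, biases)]

# Toy network: 2 inputs -> 2 hidden (ReLU) -> 1 output (sigmoid)
x = [1.0, 2.0]
hidden = relu(dense(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]))
output = sigmoid(dense(hidden, [[1.0, 1.0]], [0.0])[0])
```

The first hidden unit computes 1·1 + 2·(−1) = −1 and is zeroed by ReLU; the second computes 1.5, so the output is sigmoid(1.5).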
Weight initialization, activation impact, gradients & training dynamics.
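One initialization scheme explored in labs like this is Glorot/Xavier uniform, which picks the sampling range from the layer's fan-in and fan-out to keep activation variance roughly constant across layers. A stdlib-only sketch (function name and seed are our own choices):

```python
import math
import random

def xavier_uniform(fan_in, fan_out, seed=0):
    # Glorot/Xavier uniform init: sample W uniformly from
    # [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out))
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = xavier_uniform(256, 128)
```

With naive large-variance initialization, activations and gradients can explode or vanish as depth grows; this bound keeps early training dynamics stable.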
L2, Dropout, BatchNorm, early stopping, and decision boundaries.
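Of the regularizers listed, dropout is easy to sketch directly. Below is inverted dropout in pure Python: each unit is zeroed with probability p during training, and survivors are scaled by 1/(1−p) so the expected activation is unchanged at inference time (the `seed` parameter is only for reproducibility in this sketch):

```python
import random

def dropout(x, p, training=True, seed=None):
    # Inverted dropout: zero each unit with probability p,
    # scale survivors by 1/(1-p) so E[output] == input
    if not training or p == 0.0:
        return list(x)
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in x]

activations = dropout([1.0] * 1000, p=0.5, seed=0)
```

At evaluation time (`training=False`) the layer is the identity, which is why no rescaling is needed at test time.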
SGD, Momentum, RMSProp, Adam, LR schedules & convergence behavior.
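Adam, the last optimizer in this list, combines Momentum's first-moment estimate with RMSProp's second-moment scaling plus bias correction. A single-parameter sketch (hyperparameter defaults are the common ones; the quadratic objective is just a demo):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.1,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient (m)
    # and squared gradient (v), bias-corrected, then a scaled step
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x**2 (gradient 2x) starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because bias correction makes m_hat/sqrt(v_hat) ≈ ±1 on the first step, the initial update size is approximately the learning rate regardless of the gradient's scale.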
Model capacity, data fraction, label noise, and U-shaped regularization curves.
Convolutions, pooling, filters, dropout, augmentation, Grad-CAM.
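The convolution at the heart of this lab can be sketched in a few lines of pure Python. Note that deep learning frameworks actually compute cross-correlation (no kernel flip), which is what this "valid"-mode sketch does:

```python
def conv2d_valid(image, kernel):
    # "Valid" 2D cross-correlation: slide the kernel over the image
    # with no padding, so the output shrinks by (kernel size - 1)
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            row.append(sum(image[i + u][j + v] * kernel[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

# A 2x2 all-ones kernel sums each 2x2 patch of the image
result = conv2d_valid([[1, 1, 1], [1, 1, 1], [1, 1, 1]], [[1, 1], [1, 1]])
```

Replacing the all-ones kernel with, say, a horizontal-difference kernel turns the same loop into an edge detector, which is the intuition behind learned filters.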
Simple RNNs for text, sentiment, and time-series; perplexity & hidden states.
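The hidden-state recurrence that this lab visualizes is h_t = tanh(W_x·x_t + W_h·h_{t−1} + b). A one-unit sketch with illustrative weights:

```python
import math

def rnn_step(x, h, Wx, Wh, b):
    # Simple (Elman) RNN cell: new hidden state mixes the current
    # input with the previous hidden state, squashed by tanh
    return [math.tanh(
                sum(wx * xi for wx, xi in zip(wx_row, x)) +
                sum(wh * hi for wh, hi in zip(wh_row, h)) + bj)
            for wx_row, wh_row, bj in zip(Wx, Wh, b)]

# Feed the same input twice; the hidden state carries history forward
h1 = rnn_step([1.0], [0.0], [[1.0]], [[0.5]], [0.0])
h2 = rnn_step([1.0], h1, [[1.0]], [[0.5]], [0.0])
```

Because tanh keeps each unit in (−1, 1), repeated application bounds the state, which is also why gradients through many steps can vanish, motivating the LSTM/GRU comparison in the next lab.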
Compare LSTM/GRU across IMDB, SST-2, sine-wave, and Twinkle Melody tasks.
Dense autoencoders, denoising AE, latent space, t-SNE & morphing.