**Module 1: Fundamentals of Neural Networks**

**2024 Paper:**

- **Q1 a)** Design an AND gate using a Perceptron.
- **Q1 b)** Suppose we have N input-output pairs. Our goal is to find the parameter that predicts the output (y) from the input (x) according to the function y = xw. Write the sum-of-squared-error function (E) between the predictions (xw) and the outputs (y). The parameter w can be determined iteratively using gradient descent. For this error function E, derive the gradient descent update rule w ← w − α ∂E/∂w.
- **Q1 e)** Describe the sequence learning problem.
- **Q2 a)** What are Feed Forward Neural Networks?
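As a study aid for Q1 a): a perceptron with a step activation realizes the AND gate once suitable weights are chosen. The weights below (w1 = w2 = 1, bias = −1.5) are one common hand-picked choice, not the only valid one:

```python
# Perceptron computing AND with a step activation.
# With w = (1, 1) and bias b = -1.5, the weighted sum
# exceeds 0 only when both inputs are 1.

def perceptron_and(x1, x2, w=(1.0, 1.0), b=-1.5):
    s = w[0] * x1 + w[1] * x2 + b
    return 1 if s > 0 else 0

for a in (0, 1):
    for c in (0, 1):
        print(a, c, "->", perceptron_and(a, c))
# prints 1 only for input (1, 1)
```

Any weights satisfying w1 + w2 + b > 0 with w1 + b ≤ 0 and w2 + b ≤ 0 work equally well, which is worth stating in an exam answer.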

**Module 2: Training, Optimization and Regularization of Deep Neural Network**

**2024 Paper:**

- **Q1 b)** Explain Gradient Descent in Deep Learning.
- **Q1 c)** Explain dropout. How does it solve the problem of overfitting?
- **Q2 a)** Explain the three classes of deep learning.
- **Q3 a)** What are the different types of Gradient Descent methods? Explain any three of them.
- **Q5 a)** What are the L1 and L2 regularization methods?
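For the gradient-descent questions above, a minimal numeric sketch shows the update rule in action. It assumes the one-parameter model y = xw from Module 1, a hand-picked learning rate, and toy data generated by w = 2:

```python
# Batch gradient descent for the one-parameter model y_hat = x * w,
# minimizing E = 0.5 * sum((x*w - y)^2). The gradient is
# dE/dw = sum((x*w - y) * x), giving the update w <- w - alpha * dE/dw.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated by the true parameter w = 2

w, alpha = 0.0, 0.01
for _ in range(200):
    grad = sum((x * w - y) * x for x, y in zip(xs, ys))
    w -= alpha * grad        # the gradient descent update rule

print(round(w, 4))  # -> 2.0
```

Stochastic gradient descent differs only in computing the gradient from one (or a few) samples per update instead of the full sum; momentum additionally accumulates a running average of past gradients.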

**2023 Paper:**

- **Q1 c)** Explain dropout. How does it solve the problem of overfitting?
- **Q3 b)** Explain early stopping, batch normalization, and data augmentation.
- **Q5 a)** Explain Stochastic Gradient Descent and momentum-based gradient descent optimization techniques.

**Module 3: Autoencoders (Unsupervised Learning)**

**2024 Paper:**

- **Q1 d)** What are Undercomplete Autoencoders?
- **Q3 b)** Explain the main components of an Autoencoder and its architecture.
- **Q5 b)** Explain any three types of Autoencoders.

**2023 Paper:**

- **Q1 d)** Explain the denoising autoencoder model.

**Module 4: Convolutional Neural Networks (CNN)**

**2024 Paper:**

- **Q1 e)** Explain the Pooling operation in CNN.
- **Q2 b)** Explain the architecture of CNN with the help of a diagram.

**2023 Paper:**

- **Q3 a)** Explain the CNN architecture in detail. Suppose we have an input volume of 32×32×3 for a layer in a CNN, and there are ten 5×5 filters with stride 1 and padding 2; calculate the number of parameters in this layer of the CNN.
- **Q6 a)** Describe the LeNet architecture.
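For the parameter-count part of Q3 a): each 5×5 filter spans all 3 input channels and carries one bias term, so the layer has (5·5·3 + 1) · 10 = 760 parameters. A quick check:

```python
# Parameters in a convolutional layer:
# (filter_h * filter_w * in_channels + 1 bias) per filter,
# times the number of filters. Stride and padding affect only
# the output spatial size, never the parameter count.
filter_h, filter_w, in_channels, num_filters = 5, 5, 3, 10
params = (filter_h * filter_w * in_channels + 1) * num_filters
print(params)  # -> 760
```

Noting explicitly that stride 1 and pad 2 are irrelevant to this count (they determine the 32×32×10 output volume instead) is the key insight the question tests.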

**Module 5: Recurrent Neural Networks (RNN)**

**2024 Paper:**

- **Q4 a)** Explain the LSTM model and how it overcomes the limitations of RNNs.

**2023 Paper:**

- **Q2 a)** Explain the Gated Recurrent Unit in detail.
- **Q4 a)** Explain the RNN architecture in detail.
- **Q5 b)** Explain the LSTM architecture.
- **Q6 b)** Explain vanishing and exploding gradients in RNNs.

**Module 6: Recent Trends and Applications**

**2024 Paper:**

- **Q4 b)** What are the issues faced by Vanilla GAN models?
- **Q6 b)** What are Generative Adversarial Networks? Comment on their applications.

**2023 Paper:**

- **Q4 b)** Explain the working of a Generative Adversarial Network.