
CS7840 Spring 21/22 Soft Computing Assignment 2 Create a model to optimize prediction of the IRIS dataset using a perceptron

Computer Science

CS7840 Spring 21/22

Soft Computing

Assignment 2

Create a model to optimize prediction of the IRIS dataset using a perceptron.

  1. Load the IRIS dataset and split it into two sets, with 50% for training and 50% for testing.
  2. Display the first four rows of the dataset showing the feature names.
  3. Create a Perceptron class and instantiate a new Perceptron. Fit the data to the model for 10 training iterations. Compute the prediction.
  4. Use the first three features of the data and create a 3-D scatter plot of the three types of flowers (a sketch of these steps appears after this list).
  5. Plot the prediction for epochs 1, 10, 100, and 200.
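A minimal sketch of tasks 1-4, assuming scikit-learn's built-in Perceptron is acceptable (the course may instead expect a hand-written Perceptron class); the random seed is arbitrary:

```python
# Hedged sketch of tasks 1-4; uses scikit-learn's Perceptron rather than a
# hand-written class, and an arbitrary random seed for the 50/50 split.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

iris = load_iris()

# Task 2: first four rows with the feature names as column headers
df = pd.DataFrame(iris.data, columns=iris.feature_names)
print(df.head(4))

# Task 1: 50% training / 50% testing split
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.5, random_state=0)

# Task 3: fit the perceptron for 10 training iterations and compute predictions
clf = Perceptron(max_iter=10, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Test accuracy:", clf.score(X_test, y_test))

# Task 4: 3-D scatter plot of the first three features, coloured by species
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(iris.data[:, 0], iris.data[:, 1], iris.data[:, 2], c=iris.target)
ax.set_xlabel(iris.feature_names[0])
ax.set_ylabel(iris.feature_names[1])
ax.set_zlabel(iris.feature_names[2])
plt.show()
```

For task 5, the same fit can be repeated with the iteration count set to 1, 10, 100, and 200, plotting the test accuracy after each run.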

Using the IRIS dataset again, build a multilayer perceptron with a depth of 2, a 20% input drop rate, and a 50% hidden layer drop rate. For the non-linear activation function, use the leaky rectify function for the first dense layer and softmax for the second dense layer (a sketch of this network follows the task list below).

Leaky Rectify Nonlinearity Function

The Leaky Rectifier has a non-zero gradient for negative inputs which often helps convergence:

\varphi(v) = \begin{cases} v & \text{if } v \ge 0 \\ \alpha \cdot v & \text{otherwise} \end{cases} \qquad (1)

The slope for the negative input is α, a value between 0 and 1 that measures the "leakiness". A leakiness of 0 recovers the standard rectifier, and a leakiness of 1 gives a linear activation function.
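As a quick illustration of Equation (1), the leaky rectifier can be written in a few lines of NumPy; the default α = 0.01 here is an assumption, not a value given in the assignment:

```python
import numpy as np

def leaky_rectify(v, alpha=0.01):
    """Leaky rectifier from Eq. (1): v for v >= 0, alpha * v otherwise."""
    return np.where(v >= 0, v, alpha * v)

print(leaky_rectify(np.array([-2.0, -0.5, 0.0, 1.5])))  # [-0.02 -0.005 0. 1.5]
```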

 

  1. Plot the performance for the first 10 epochs.
  2. Use a different activation function. What is the result?
  3. Add more layers to the multilayer perceptron. Use 4 layers instead of 2. What is the result?
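One way to sketch the two-dense-layer network described above. The wording ("leaky rectify", drop rates, dense layers) suggests Lasagne-style terminology, but this sketch uses Keras instead; the hidden-layer width, optimizer, and leakiness value are assumptions:

```python
# Hedged sketch of the 2-layer MLP: 20% input dropout, 50% hidden dropout,
# leaky rectify (LeakyReLU) on the first dense layer, softmax on the second.
# Built with Keras; hidden width, optimizer, and alpha are assumptions.
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.5, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dropout(0.2),                    # 20% input drop rate
    tf.keras.layers.Dense(100),
    tf.keras.layers.LeakyReLU(alpha=0.01),           # leaky rectify
    tf.keras.layers.Dropout(0.5),                    # 50% hidden drop rate
    tf.keras.layers.Dense(3, activation="softmax"),  # softmax output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Task 1: performance for the first 10 epochs
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_test, y_test), verbose=0)
print(history.history["val_accuracy"])
```

Tasks 2 and 3 are variations on the same sketch: swap the LeakyReLU layer for another activation, or add two more Dense/Dropout pairs to reach a depth of 4.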

Load the Store Data file provided on Pilot and create a data frame of the list of items in the file.

  1. For the 7,500 records, implement the Association Rule algorithm with a minimum support of 0.0045, a minimum confidence of 0.2, and a minimum lift of 3 (an implementation sketch follows this list).

  2. Print the first 10 rules along with their Support, Confidence, and Lift values.
  3. How long did it take to run the implementation for the 7,500 records?
  4. What are some other datasets for which this type of algorithm would be useful?
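A hedged sketch of the mining step using the apyori package; the file name "store_data.csv" and its headerless CSV layout are assumptions about the Store Data file provided on Pilot:

```python
# Hedged sketch: Apriori association rules with the thresholds given above.
# "store_data.csv" and its headerless layout are assumptions about the file.
import time
import pandas as pd
from apyori import apriori

df = pd.read_csv("store_data.csv", header=None)
transactions = [
    [str(item) for item in row if pd.notna(item)]
    for row in df.values
]

start = time.time()
rules = list(apriori(transactions,
                     min_support=0.0045,
                     min_confidence=0.2,
                     min_lift=3,
                     min_length=2))
print(f"Mined {len(rules)} rules in {time.time() - start:.2f} s")  # timing for task 3

# Task 2: first 10 rules with their Support, Confidence, and Lift values
for rule in rules[:10]:
    stat = rule.ordered_statistics[0]
    print(list(rule.items), rule.support, stat.confidence, stat.lift)
```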

Open Orange 3 either from the command line or from the Anaconda Navigator. Using the IRIS dataset again, create a scatter plot of the data.

  1. What are the correlation values of all the combinations?
  2. What are the evaluation results for SVM, Random Forest, and Logistic Regression? (A scikit-learn cross-check sketch follows this list.)
  3. Show the confusion matrix for the three models.
  4. Repeat the process for the zoo.tab dataset.
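The questions above map to Orange's Scatter Plot, Correlations, Test & Score, and Confusion Matrix widgets. As a cross-check outside the GUI, the same numbers can be sketched with pandas and scikit-learn; the 5-fold cross-validation and default hyperparameters here are assumptions rather than Orange's exact settings:

```python
# Cross-check of the Orange workflow in code; CV folds and model
# hyperparameters are assumptions, so values may differ slightly from Orange.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

iris = load_iris()

# Q1: pairwise correlations of all feature combinations
print(pd.DataFrame(iris.data, columns=iris.feature_names).corr())

# Q2/Q3: evaluation results and confusion matrices for the three models
models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pred = cross_val_predict(model, iris.data, iris.target, cv=5)
    print(name, "accuracy:", accuracy_score(iris.target, pred))
    print(confusion_matrix(iris.target, pred))
```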

Load the breast cancer dataset from Scikit-Learn.

  1. What are the 13 features of this dataset?
  2. What are the labels of the data?
  3. Split the data into 70% training and 30% testing.
  4. Create an SVM classifier and train the model.
  5. Predict the output using the testing data.
  6. What are the accuracy, precision, and recall scores?
  7. Load this data into Orange, visualize the dataset, and find informative projections (a scikit-learn sketch of steps 3-6 follows this list).
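A minimal sketch of steps 1-6 with scikit-learn; note that load_breast_cancer actually exposes 30 features. The linear kernel and random seed are assumptions:

```python
# Hedged sketch: SVM on the scikit-learn breast cancer dataset with a
# 70/30 split; the kernel choice and random seed are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

data = load_breast_cancer()
print(data.feature_names)   # the dataset's features
print(data.target_names)    # the labels: 'malignant', 'benign'

# 70% training / 30% testing split
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

# Train the SVM classifier and predict on the test data
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall:", recall_score(y_test, y_pred))
```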

Using the IRIS dataset again, use the decision tree classifier to fit the data and plot the tree.

  1. Display the decision surface of the tree for the first two features.

Decision Surface

In a statistical classification problem, the decision surface is the hypersurface that partitions the underlying vector space into one region per class. It shows where the algorithm separates the data of each class.

  1. Is there a difference in the results from using SVM? What are the advantages and disadvantages when compared to SVM for this dataset?
  2. Try displaying the decision surface using different features. Is there a difference? Is there better separation? (A plotting sketch appears below.)
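A sketch of the scikit-learn part of this exercise: fit a decision tree on IRIS, plot the tree, and draw the decision surface over the first two features. The mesh step size and random seed are assumptions:

```python
# Sketch: fit a decision tree on iris, plot the tree, and draw the decision
# surface over the first two features; mesh step and seed are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()

# Fit the tree on all features and plot it
tree = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
plot_tree(tree, feature_names=iris.feature_names,
          class_names=iris.target_names, filled=True)
plt.show()

# Decision surface for the first two features only
X2 = iris.data[:, :2]
tree2 = DecisionTreeClassifier(random_state=0).fit(X2, iris.target)
x_min, x_max = X2[:, 0].min() - 1, X2[:, 0].max() + 1
y_min, y_max = X2[:, 1].min() - 1, X2[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))
Z = tree2.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.4)
plt.scatter(X2[:, 0], X2[:, 1], c=iris.target, edgecolor="k")
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```

Swapping the column indices in X2 re-draws the surface for other feature pairs, which answers the separation question directly.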

b. Use Orange to do the same.

In Orange, use the Breast Cancer dataset with a Random Forest model.

  1. Visualize the results using the Pythagorean Forest.
  2. What pre-processing did you do to reduce the data?
  3. What are the features that produce the best results? What can you use to make this simpler? (A feature-importance sketch follows this list.)
  4. How do you interpret the results?
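The Pythagorean Forest itself is an Orange widget, but the "best features" question can be cross-checked in code. A hedged scikit-learn sketch ranking breast-cancer features by Random Forest importance; the tree count is an assumption:

```python
# Cross-check outside Orange: rank breast-cancer features by Random Forest
# importance; the tree count (100) and seed are assumptions.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(data.data, data.target)

importances = pd.Series(forest.feature_importances_, index=data.feature_names)
print(importances.sort_values(ascending=False).head(10))  # top-10 features
```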
