Decision Tree Learning with Information Gain and Chi-Squared Pruning
Write a program to implement the decision-tree learning algorithm given in Figure 19.5 of Chapter 19 of the AIMA textbook, using information gain as the importance function (described on page 662). Print the resulting decision tree.
Prune the resulting decision tree using the chi-squared pruning technique described in Section 19.3.4 (you may use existing libraries for the chi-square test). Set the alpha value for the chi-square test to 0.05.
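As a starting point, here is a minimal sketch of the importance function (entropy and information gain). It assumes examples are stored as dicts keyed by attribute name, with the class label under the key 'WillWait' (an assumption about the column naming; adjust it to match however you read the CSV):

import math
from collections import Counter

def entropy(examples, target='WillWait'):
    # Shannon entropy of the class label over a set of examples.
    counts = Counter(e[target] for e in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, target='WillWait'):
    # Expected reduction in entropy from splitting on `attribute`.
    total = len(examples)
    remainder = 0.0
    for value in set(e[attribute] for e in examples):
        subset = [e for e in examples if e[attribute] == value]
        remainder += (len(subset) / total) * entropy(subset, target)
    return entropy(examples, target) - remainder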
Training Data
https://github.com/aimacode/aima-data/blob/master/restaurant.csv
Programming Language
Python
Grading Rubric
• Read the CSV into a data structure for training => 10
• Decision tree implementation => 20
• Recursively split the dataset => 10
• Information gain calculation => 10
• Print the resulting decision tree => 10
• Chi-square pruning => 10
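The rubric items above fit together roughly as follows. This is a hedged sketch, not a reference implementation: the column names are taken from the attribute lists further down, and it assumes restaurant.csv has no header row (verify against the file you download). It reuses the entropy/information_gain helpers sketched above.

import csv
from collections import Counter

COLUMNS = ['Alt', 'Bar', 'Fri', 'Hun', 'Pat', 'Price',
           'Rain', 'Res', 'Type', 'Est', 'WillWait']

def load_examples(path):
    # Read the CSV into a list of dicts keyed by column name.
    with open(path) as f:
        return [dict(zip(COLUMNS, (cell.strip() for cell in row)))
                for row in csv.reader(f) if row]

def plurality_value(examples, target='WillWait'):
    # Most common class label among the examples.
    return Counter(e[target] for e in examples).most_common(1)[0][0]

def decision_tree_learning(examples, attributes, parent_examples, target='WillWait'):
    if not examples:
        return plurality_value(parent_examples, target)
    classes = set(e[target] for e in examples)
    if len(classes) == 1:
        return classes.pop()
    if not attributes:
        return plurality_value(examples, target)
    # max() keeps the first maximal element, which implements the
    # "pick attributes in dataset order" tie-break described below.
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    tree = {best: {}}
    rest = [a for a in attributes if a != best]
    for value in sorted(set(e[best] for e in examples)):
        subset = [e for e in examples if e[best] == value]
        tree[best][value] = decision_tree_learning(subset, rest, examples, target)
    return tree

examples = load_examples('restaurant.csv')
tree = decision_tree_learning(examples, COLUMNS[:-1], examples)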
Expected Output
The expected output is a decision tree similar to the one shown in Figure 18.6 (3rd edition) or Figure 19.6 (4th edition) of the textbook.
The exact decision tree depends on how you break ties between attributes with the same information gain.
To get the same decision tree as Figure 18.6 (19.6 in the 4th edition), select attributes in the order they appear in the dataset.
You can use any formatting that clearly marks the levels and branches of the tree; one possible rendering is sketched below.
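As one concrete option (not the only acceptable one), a simple indented printer over the nested-dict representation sketched above might look like this:

def print_tree(tree, indent=''):
    # Leaves are class labels; internal nodes are {attribute: {value: subtree}}.
    if not isinstance(tree, dict):
        print(indent + '-> ' + str(tree))
        return
    attribute, branches = next(iter(tree.items()))
    print(indent + attribute + '?')
    for value, subtree in branches.items():
        print(indent + '  [' + str(value) + ']')
        print_tree(subtree, indent + '    ')

On a tree rooted at the Patrons attribute this prints something like:

Pat?
  [Full]
    Hun?
      ...
  [None]
    -> No
  [Some]
    -> Yes

with the deeper branches continuing in the same pattern.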
Here are the information gains at each level of the tree, so you can test your implementation.
1st split
Attributes => [Alt, Bar, Fri, Hun, Pat, Price, Rain, Res, Type, Est]
Information Gain => [0, 0, 0.02, 0.2, 0.54, 0.2, 0, 0.02, 0, 0.21]
Highest information gain => Patrons (0.54)
2nd split
Attributes => [Alt, Bar, Fri, Hun, Price, Rain, Res, Type, Est]
Information Gain => [0.11, 0, 0.11, 0.25, 0.25, 0.11, 0.25, 0.25, 0.25]
Here we have a tie for the maximum information gain (0.25); choosing the first occurrence, we split on Hun.
3rd split
Attributes => [Alt, Bar, Fri, Price, Rain, Res, Type, Est]
Information Gain => [0, 0, 0.31, 0.31, 0, 0.31, 0.5, 0]
Highest information gain => Type (0.5)
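A quick way to check your numbers against the table above, reusing the hypothetical load_examples, COLUMNS, and information_gain helpers from the earlier sketches:

examples = load_examples('restaurant.csv')
for a in COLUMNS[:-1]:
    print(a, round(information_gain(examples, a), 2))
# Pat (Patrons) should come out on top at roughly 0.54.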
Output After Pruning (graduate students only)
{'Patrons': {'Full': 'No', 'None': 'No', 'Some': 'Yes'}}
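Note that the expected output above spells the attribute 'Patrons' while the attribute lists abbreviate it 'Pat'; whichever name you load the columns under will appear in your output. Here is a hedged sketch of bottom-up chi-squared pruning over the same nested-dict trees, using scipy.stats.chi2 for the critical value; treating 'Yes' as the positive label is an assumption about the dataset's label values:

from scipy import stats

def chi_squared_prune(tree, examples, target='WillWait', alpha=0.05):
    if not isinstance(tree, dict):
        return tree  # leaf: nothing to prune
    attribute, branches = next(iter(tree.items()))
    # Prune bottom-up: handle the children first.
    for value in list(branches):
        subset = [e for e in examples if e[attribute] == value]
        branches[value] = chi_squared_prune(branches[value], subset, target, alpha)
    # Only a node whose children are all leaves is a candidate for collapsing.
    if any(isinstance(sub, dict) for sub in branches.values()):
        return tree
    p = sum(1 for e in examples if e[target] == 'Yes')
    n = len(examples) - p
    delta = 0.0
    for value in branches:
        subset = [e for e in examples if e[attribute] == value]
        pk = sum(1 for e in subset if e[target] == 'Yes')
        nk = len(subset) - pk
        # Expected counts if the attribute carried no information.
        p_hat = p * len(subset) / len(examples)
        n_hat = n * len(subset) / len(examples)
        if p_hat:
            delta += (pk - p_hat) ** 2 / p_hat
        if n_hat:
            delta += (nk - n_hat) ** 2 / n_hat
    critical = stats.chi2.ppf(1 - alpha, len(branches) - 1)
    if delta < critical:  # split not significant at alpha: collapse to a leaf
        return plurality_value(examples, target)
    return tree

pruned = chi_squared_prune(tree, examples)
print(pruned)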