VTU OLD QP@Az Documents.in
Software Engineering (CS530)
Visvesvaraya Technological University
AI PART
MODULE 1
Explain the different characteristics of an AI problem that are analyzed in order to choose the most appropriate solution method. (8M) (July 2018)
A water jug problem states: “You are provided with two jugs, the first with a 4-gallon capacity and the second with a 3-gallon capacity. Neither has any measuring marker on it.” How can you get exactly 2 gallons of water into the 4-gallon jug? a. Write down the production rules for the above problem. b. Write any one solution to the above problem. (8M) (July 2018)
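For reference, the production rules asked for above (fill a jug, empty a jug, pour one jug into the other) can be explored mechanically. The following is a minimal illustrative sketch in Python — a breadth-first search over (jug1, jug2) states — not the prescribed answer format for the exam:

```python
from collections import deque

def water_jug(target=2, caps=(4, 3)):
    """Breadth-first search over (jug1, jug2) states generated by the
    production rules: fill a jug, empty a jug, or pour one into the other."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == target:                       # goal: `target` gallons in the 4-gallon jug
            path, s = [], (x, y)
            while s is not None:              # walk parents back to the start
                path.append(s)
                s = parent[s]
            return path[::-1]
        for s in [(caps[0], y), (x, caps[1]),                          # fill either jug
                  (0, y), (x, 0),                                      # empty either jug
                  (min(x + y, caps[0]), max(0, x + y - caps[0])),      # pour jug2 -> jug1
                  (max(0, x + y - caps[1]), min(x + y, caps[1]))]:     # pour jug1 -> jug2
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)
    return None

path = water_jug()
print(path)   # a shortest 6-move sequence from (0, 0) to a state with 2 gallons in the 4-gallon jug
```

Because BFS expands states level by level, the first goal state found corresponds to a shortest solution, such as (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).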
Explain the Best First Search algorithm with an example. (6M) (July 2018)
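A common way to illustrate Best First Search is the greedy formulation, which always expands the open node with the lowest heuristic value. The sketch below uses a hypothetical graph and hypothetical heuristic values (none of these appear in the question paper):

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: repeatedly expand the open node with the
    lowest heuristic value h(n). Returns the path found (not necessarily
    optimal, since edge costs are ignored)."""
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # node with smallest h
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
    return None

# hypothetical graph and heuristic values for illustration only
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['G'], 'E': ['G']}
h = {'A': 10, 'B': 6, 'C': 4, 'D': 3, 'E': 2, 'G': 0}
print(best_first_search(graph, h, 'A', 'G'))   # ['A', 'C', 'E', 'G']
```

Here C (h=4) is expanded before B (h=6), and then E (h=2) before D (h=3), which is exactly the "most promising node first" behaviour the algorithm is named for.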
List various task domains of AI. (4M) (July 2018)
Explain how AND-OR graphs are used in problem reduction. (6M) (July 2018)
Define artificial intelligence and list the task domains of artificial intelligence. (6M) (Jan 2019)
State and explain the algorithm for Best First Search with an example. (6M) (Jan 2019)
Explain production system. (4M) (Jan 2019)
Write a note on water jug problem using production rules. (8M) (Jan 2019)
Explain simulated annealing. (4M) (Jan 2019)
Explain problem reduction with respect to AND-OR graphs. (4M) (Jan 2019)
What is AI technique? List less desirable properties and representation of knowledge. (8M) (July 2019)
Explain production system with components and characteristics. List the requirements for good control strategies. (8M) (July 2019)
List and explain the AI problem characteristics. (8M) (July 2019)
Explain constraint satisfaction and solve the cryptarithmetic problem: CROSS + ROADS = DANGER. (8M) (July 2019)
Define artificial intelligence. Classify the task domains of artificial intelligence. (4M) (Sept 2020)
List the properties of knowledge. (4M) (Sept 2020)
Discuss the production rules for solving the water-jug problem. (8M) (Sept 2020)
Briefly discuss any four problems characteristics. (6M) (Sept 2020)
Write an algorithm for a. Steepest-Ascent hill climbing with example. b. Best-First Search with example. (10M) (Sept 2020)
Solve the following cryptarithmetic problem DONALD + GERALD = ROBERT. (10M) (Sept 2020)
Develop AO* algorithm for AI applications. (10M) (Sept 2020)
Solve water jug problem using production rule system. (10M) (Sept 2020)
What is an AI technique? Explain in terms of knowledge representation. (5M) (Feb
Distinguish Breadth First Search and Depth First Search. (6M) (Feb 2021)
Write an algorithm for simple Hill Climbing. (5M) (Feb 2021)
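A minimal sketch of simple hill climbing, using a hypothetical integer objective for illustration (the algorithm accepts the first improving neighbour, unlike steepest-ascent, which picks the best one):

```python
def simple_hill_climbing(state, value, neighbors):
    """Simple hill climbing: move to the FIRST neighbor better than the
    current state; stop when no neighbor improves (a local maximum)."""
    while True:
        for n in neighbors(state):
            if value(n) > value(state):
                state = n
                break                 # accept the first improvement, not the best
        else:
            return state              # no neighbor improved: local maximum

# toy objective (hypothetical): maximize f(x) = -(x - 7)^2 over the integers
f = lambda x: -(x - 7) ** 2
moves = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, f, moves))    # climbs to the peak at x = 7
```

On this single-peaked objective the algorithm reaches the global maximum; on multi-modal objectives it can stop at a local maximum, plateau, or ridge — the standard limitations discussed in the answer.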
On what dimensions problems are analyzed? (8M) (Feb 2021)
Mention issues in design of search problem. (3M) (Feb 2021)
Write a note on constraint satisfaction. (5M) (Feb 2021)
Explain and illustrate unification algorithm. (6M) (Feb 2021)
What are the properties of a good system for the representation of knowledge? (4M) (Feb 2021)
Discuss how forward reasoning is different from backward reasoning. (6M) (Feb
With an illustration explain the process of converting well-formed formulas to clause form. (8M) (Feb 2021)
Write a note on: a. Conflict resolution b. Logic programming.
Define artificial intelligence. Describe the four categories under which AI is classified with. (6M) (Feb 2021)
Describe Briefly the various problem characteristics. (7M) (Feb 2021)
Describe the process of simulated annealing with an example. (7M) (Feb 2021)
List and explain various task domains of AI. (6M) (Feb 2021)
Discuss the A* and AO* algorithms and the various observations about the algorithms briefly. (7M) (Feb 2021)
Explain in detail about the means–end analysis procedure with example. (7M) (Feb
What is Artificial Intelligence? Explain. (6M) (July 2021)
A water jug problem: Two Jugs of 4L and 3L capacity (No marker on it). How can you get exactly 2L of water into 4L jug? Write both production rule and solution. (10M) (July 2021)
What is meant by uninformed search? Explain the Depth-First Search strategy. (4M) (July
What is an AI technique? Explain. (6M) (July 2021)
Write a note on Production System. (6M) (July 2021)
Cryptarithmetic problem: SEND + MORE = MONEY. Constraints: no two letters have the same value, and the sum of the digits must be shown. (8M) (July 2021)
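The exam answer is worked out by constraint propagation (e.g. M = 1 from the carry), but the solution can also be checked by exhaustive search. This brute-force sketch is for verification only, not the constraint-satisfaction derivation the question expects:

```python
from itertools import permutations

def solve_send_more_money():
    """Exhaustive cryptarithmetic search: try every assignment of distinct
    digits to the eight letters S,E,N,D,M,O,R,Y; leading letters (S, M)
    may not be zero."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())   # (9567, 1085, 10652) -- the unique solution
```

The search confirms the well-known unique assignment 9567 + 1085 = 10652 (S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2).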
(iv) Anything anyone eats and isn’t killed by is food. (v) Bill eats peanuts and is still alive. (vi) Sue eats everything Bill eats.
Translate these sentences into formulas in predicate logic. (8M) (Sept 2020)
- In brief, discuss forward and backward reasoning. (10M) (Sept 2020)
- Write a resolution algorithm for predicate logic. (6M) (Sept 2020)
- Consider the following set of well-formed formulas in predicate logic:
(10M) (Sept 2020)
19. Write the propositional resolution algorithm. (10M) (Sept 2020)
20. Write the algorithm for conversion to clause form. (10M) (Sept 2020)
21. Distinguish forward and backward reasoning with an example. (10M) (Sept 2020)
22. Discuss resolution in brief with an example. (6M) (Feb 2021)
23. Write the algorithm to unify (L1, L2). (7M) (Feb 2021)
24. Describe the issues in knowledge representation. (7M) (Feb 2021)
25. Discuss resolution in brief with an example. (6M) (Feb 2021)
26. Illustrate in detail forward and backward reasoning with an example. (7M) (Feb 2021)
27. What is “matching” in a rule-based system? Briefly explain different proposals for matching. (7M) (Feb 2021)
28. Explain mapping between facts and representation with an example. (5M) (July 2021)
29. Explain forward and backward reasoning. (5M) (July 2021)
30. Translate the following into first-order logic: (i) All Pompeians were Romans. (ii) All Romans were either loyal to Caesar or hated him. (iii) Everyone is loyal to someone. (iv) Was Marcus loyal to Caesar? (v) All Pompeians died when the volcano erupted in 79 AD. (10M) (July 2021)
31. Explain inheritable knowledge. (6M) (July 2021)
32. Consider the following sentences: (i) John likes all kinds of food. (ii) Apples and chicken are food. (iii) Anything anyone eats and is not killed by is food. (iv) Bill eats peanuts and is still alive. (v) Sue eats everything Bill eats.
Using resolution prove that “John likes Peanuts”. (10M) (July 2021)
- Write a note on Matching. (4M) (July 2021)
ML PART
MODULE 2
INTRODUCTION, CONCEPT LEARNING
- Specify the learning task for “A Checkers learning problem”. (3M)(JAN 19)
- Discuss the following with respect to the above a. Choosing the training experience b. Choosing the target function c. Choosing a function approximation algorithm. (9M)(JAN 19)
- Comment on the issues in machine learning. (4M) (JAN 19)
- Write the candidate elimination algorithm. Apply the algorithm to obtain the final version space for the training examples given below:

Sl. No. | Sky | AirTemp | Humidity | Wind | Water | Forecast | EnjoySport
1 | Sunny | Warm | Normal | Strong | Warm | Same | Yes
2 | Sunny | Warm | High | Strong | Warm | Same | Yes
3 | Rainy | Cool | High | Strong | Warm | Change | No
4 | Sunny | Warm | High | Strong | Cool | Change | Yes

(10M) (JAN 19)
5. Discuss about an unbiased learner. (6M) (JAN 19)
6. Define machine learning. Describe the steps in designing a learning system. (8M) (JULY 19)
7. Write the Find-S algorithm and explain it with an example. (4M) (JULY 19)
8. Explain the List-Then-Eliminate algorithm. (4M) (JULY 19)
9. List out any 5 applications of machine learning. (5M) (JULY 19)
10. What do you mean by hypothesis space, instance space, and version space? (3M) (JULY 19)
11. Find the maximally general hypothesis and the maximally specific hypothesis for the training examples given in the table using the candidate elimination algorithm.

Sl. No. | Sky | AirTemp | Humidity | Wind | Water | Forecast | EnjoySport
1 | Sunny | Warm | Normal | Strong | Warm | Same | Yes
2 | Sunny | Warm | High | Strong | Warm | Same | Yes
3 | Rainy | Cool | High | Strong | Warm | Change | No
4 | Sunny | Warm | High | Strong | Cool | Change | Yes

(08M) (JULY 19)
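For reference, the maximally specific boundary of the version space for the EnjoySport table above coincides with the Find-S result, which a short sketch can compute (the table values come from the question; the code itself is illustrative, not the exam-prescribed trace):

```python
def find_s(examples):
    """Find-S: start from the first positive example and generalize the
    maximally specific hypothesis just enough to cover each subsequent
    positive example; negative examples are ignored."""
    positives = [x for x, label in examples if label == 'Yes']
    h = list(positives[0])
    for x in positives[1:]:
        # keep matching attribute values, generalize mismatches to '?'
        h = [hi if hi == xi else '?' for hi, xi in zip(h, x)]
    return h

# EnjoySport training examples from the table above
examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), 'Yes'),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), 'Yes'),
    (('Rainy', 'Cool', 'High', 'Strong', 'Warm', 'Change'), 'No'),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), 'Yes'),
]
print(find_s(examples))  # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```

This reproduces the standard result S = ⟨Sunny, Warm, ?, Strong, ?, ?⟩; candidate elimination additionally maintains the general boundary G by specializing against the negative example.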
- Apply the candidate elimination algorithm and obtain the version space considering the training examples given in Table Q1 (c).

Table Q1 (c):
Eyes | Nose | Head | FColor | Hair? | Smile? (TC)
Round | Triangle | Round | Purple | Yes | Yes
Square | Square | Square | Green | Yes | No
Square | Triangle | Round | Yellow | Yes | Yes
Round | Triangle | Round | Green | No | No
Square | Square | Round | Yellow | Yes | Yes
(8M) (Feb 2021)
30. Explain the following with respect to designing a learning system: a. Choosing the training experience b. Choosing the target function c. Choosing a representation for the target function. (9M) (Feb 2021)
31. Write the Find-S algorithm. Apply Find-S to Table Q1 (c) to find the maximally specific hypothesis. (6M) (Feb 2021)
32. Explain the concept of inductive bias. (5M) (Feb 2021)
33. Explain the designing of a learning system in detail. (10M) (Jul 2021)
34. Define learning. Specify the learning problem in handwriting recognition and robot driving. (5M) (Jul 2021)
35. Explain the issues in machine learning. (5M) (Jul 2021)
36. Write the steps involved in the Find-S algorithm. (5M) (Jul 2021)
37. Apply the candidate elimination algorithm to obtain the final version space for the training set shown in Table Q2 (b), to infer which books or articles the user reads based on keywords supplied in the article.

Table Q2 (b):
Article | Crime | Academes | Local | Music | Reads
a1 | True | False | False | True | True
a2 | True | False | False | False | True
a3 | False | True | False | False | False
a4 | False | False | True | False | False
a5 | True | True | False | False | True

(10M) (Jul 2021)
38. State the inductive bias of the rote-learner, candidate elimination, and Find-S algorithms. (5M) (Jul 2021)
MODULE 3
DECISION TREE LEARNING
ARTIFICIAL NEURAL NETWORKS
Decision Tree Learning
What is a decision tree? Discuss the use of a decision tree for classification with an example. (8M) (JAN 19)
Write and explain the decision tree for the following transactions.

Tid | Refund | MaritalStatus | TaxableIncome | Cheat
1 | Yes | Single | 125K | No
2 | No | Married | 100K | No
3 | No | Single | 70K | No
4 | Yes | Married | 120K | No
5 | No | Divorced | 95K | Yes
6 | No | Married | 60K | No
7 | Yes | Divorced | 220K | No
8 | No | Single | 85K | Yes
9 | No | Married | 75K | No
10 | No | Single | 90K | Yes

(8M) (JAN 19)
For the transactions shown in the table, compute the following: a. Entropy of the collection of transaction records of the table with respect to classification. b. What is the information gain of a1 and a2 relative to the transactions of the table?

Instance | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
a1 | T | T | T | F | F | F | F | T | F
a2 | T | T | F | F | T | T | F | F | T
TargetClass | + | + | - | + | - | - | - | + | -

(8M) (JAN 19)
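The entropy and information-gain computations for the a1/a2 table above can be checked with a short sketch of the standard formulas, Entropy(S) = −Σ pᵢ log₂ pᵢ and Gain(S, A) = Entropy(S) − Σᵥ (|Sᵥ|/|S|)·Entropy(Sᵥ):

```python
from math import log2

def entropy(labels):
    """Entropy of a label collection: -sum p_i * log2(p_i)."""
    total = len(labels)
    probs = [labels.count(v) / total for v in set(labels)]
    return -sum(p * log2(p) for p in probs)

def information_gain(rows, labels, attr_index):
    """Gain(S, A) = Entropy(S) - sum_v (|S_v|/|S|) * Entropy(S_v)."""
    total = len(labels)
    gain = entropy(labels)
    for v in set(r[attr_index] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr_index] == v]
        gain -= len(subset) / total * entropy(subset)
    return gain

# (a1, a2) values and target classes for instances 1..9 from the table above
rows = [('T', 'T'), ('T', 'T'), ('T', 'F'), ('F', 'F'), ('F', 'T'),
        ('F', 'T'), ('F', 'F'), ('T', 'F'), ('F', 'T')]
labels = ['+', '+', '-', '+', '-', '-', '-', '+', '-']
print(round(entropy(labels), 4))                     # ≈ 0.9911
print(round(information_gain(rows, labels, 0), 4))   # Gain(a1) ≈ 0.2294
print(round(information_gain(rows, labels, 1), 4))   # Gain(a2) ≈ 0.0072
```

With 4 positive and 5 negative instances the collection entropy is about 0.9911 bits, and a1 (gain ≈ 0.2294) is a far more informative split than a2 (gain ≈ 0.0072).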
Discuss the decision learning algorithm. (4M) (JAN 19)
List the issues of decision tree learning. (4M) (JAN 19)
Construct the decision tree for the following data using the ID3 algorithm.

Day | A1 | A2 | A3 | Classification
1 | True | Hot | High | No
2 | True | Hot | High | No
3 | False | Hot | High | Yes
4 | False | Cool | Normal | Yes
D6 | Rain | Cool | Normal | Strong | No
D7 | Overcast | Cool | Normal | Strong | Yes
D8 | Sunny | Mild | High | Weak | No
D9 | Sunny | Cool | Normal | Weak | Yes
D10 | Rain | Mild | Normal | Weak | Yes
D11 | Sunny | Mild | Normal | Strong | Yes
D12 | Overcast | Mild | High | Strong | Yes
D13 | Overcast | Hot | Normal | Weak | Yes
D14 | Rain | Mild | High | Strong | No

(10M) (SEP 2020)
17. Explain the issues in decision tree learning. (6M) (SEP 2020)
18. Define decision tree learning. List and explain appropriate problems for decision tree learning. (6M) (Feb 2021)
19. Explain the basic decision tree learning algorithm. (5M) (Feb 2021)
20. Describe the hypothesis space search in decision tree learning. (5M) (Feb 2021)
21. Define inductive bias. Explain inductive bias in decision tree learning. (6M) (Feb 2021)
22. Give the differences between the hypothesis space search in the decision tree and candidate elimination algorithms. (4M) (Feb 2021)
23. List and explain issues in decision tree learning. (6M) (Feb 2021)
24. Explain the concept of decision tree learning. Discuss the necessary measures required to select the attribute for building a decision tree using the ID3 algorithm. (11M) (Feb 2021)
25. Explain the following with respect to decision tree learning: a. Incorporating continuous-valued attributes b. Alternative measures for selecting attributes c. Handling training examples with missing attribute values. (9M) (Feb 2021)
26. Construct a decision tree using ID3 considering the following training examples.
Weekend | Weather | Parental Availability | Wealthy | Decision (Class)
H1 | Sunny | Yes | Rich | Cinema
H2 | Sunny | No | Rich | Tennis
H3 | Windy | Yes | Rich | Cinema
H4 | Rainy | Yes | Poor | Cinema
H5 | Rainy | No | Rich | Home
H6 | Rainy | Yes | Poor | Cinema
H7 | Windy | No | Poor | Cinema
H8 | Windy | No | Rich | Shopping
H9 | Windy | Yes | Rich | Cinema
H10 | Sunny | No | Rich | Tennis
(12M) (Feb 2021)
- Discuss the issues of avoiding overfitting the data and handling attributes with different costs. (8M) (Feb 2021)
- Define the following terms with an example for each: a. Decision tree b. Entropy c. Information gain d. Restriction bias e. Preference bias (10M) (Jul 2021)
- Construct a decision tree for the data set shown in Table Q3 (b) to find whether a seed is poisonous or not.

Table Q3 (b):
Example | Color | Toughness | Fungus | Appearance | Poisonous
1 | Green | Soft | Yes | Wrinkled | Yes
2 | Green | Hard | Yes | Smooth | No
3 | Brown | Soft | No | Wrinkled | No
4 | Brown | Soft | Yes | Wrinkled | Yes
5 | Green | Soft | Yes | Smooth | Yes
6 | Green | Hard | No | Wrinkled | No
7 | Orange | Soft | Yes | Wrinkled | Yes

(10M) (Jul 2021)
- Explain ID3 algorithm. Give an example. (10M) (Jul 2021)
- Explain the issues and solutions to those issues in decision tree learning. (10M) (Jul
Artificial Neural Networks
Draw the perceptron network with its notation. Derive the equation of the gradient descent rule to minimize the error. (8M) (JAN 19)
Explain the importance of the terms: (i) Hidden Layer (ii) Generalization (iii) Overfitting (iv) Stopping criterion (8M) (JAN 19)
Discuss the application of neural network which is used for learning to steer an autonomous vehicle. (6M) (JAN 19)
Write an algorithm for BACKPROPAGATION which uses the stochastic gradient descent method. Comment on the effect of adding momentum to the network. (10M) (JAN 19)
Explain the artificial neural network based on the perceptron concept, with a diagram. (6M) (JULY 19)
What are gradient descent and the delta rule? Why is a stochastic approximation to gradient descent needed? (4M) (JULY 19)
Describe the multilayer neural network. Explain why BACKPROPAGATION algorithm is required. (6M)(JULY 19)
Derive the BACKPROPAGATION rule considering the output layer and training rule for output unit weights. (8M)(JULY 19)
Derive expressions for training rule of output and hidden unit weights for back propagation algorithm. (10M) (Jul 2021)
MODULE 4
BAYESIAN LEARNING
- What are Bayes theorem and the maximum a posteriori hypothesis? (4M) (JAN 19)
- Derive an equation for MAP hypothesis using Bayes theorem. (4M) (JAN 19)
- Consider a football match between two rival teams: Team 0 and Team 1. Suppose Team 0 wins 95% of the time and Team 1 wins the remaining matches. Among the games won by Team 0, only 30% of them come from playing on Team 1’s field. On the other hand, 75% of the victories for Team 1 are obtained while playing at home. If Team 1 is to host the next match between the two teams, which team will most likely emerge as the winner? (8M) (JAN 19)
- Describe the Brute-Force learning algorithm. (4M) (JAN 19)
- Discuss the Naïve Bayes Classifier. (4M)(JAN 19)
- The following table gives a data set about stolen vehicles. Using the Naïve Bayes classifier, classify the new instance (Red, SUV, Domestic).

Color | Type | Origin | Stolen
Red | Sports | Domestic | Yes
Red | Sports | Domestic | No
Red | Sports | Domestic | Yes
Yellow | Sports | Domestic | No
Yellow | Sports | Imported | Yes
Yellow | SUV | Imported | No
Yellow | SUV | Imported | Yes
Yellow | SUV | Domestic | No
Red | SUV | Imported | No
Red | Sports | Imported | Yes

(8M) (JAN 19)
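The hand computation for the stolen-vehicle question above can be verified with a minimal Naïve Bayes sketch using simple relative-frequency estimates (no smoothing); the data rows come from the question, the code itself is illustrative:

```python
from collections import Counter

def naive_bayes_classify(rows, labels, query):
    """Naive Bayes with relative-frequency estimates (no smoothing):
    pick the class c maximizing P(c) * prod_i P(x_i | c)."""
    scores = {}
    class_counts = Counter(labels)
    for c, nc in class_counts.items():
        score = nc / len(labels)                   # prior P(c)
        for i, value in enumerate(query):
            match = sum(1 for r, l in zip(rows, labels)
                        if l == c and r[i] == value)
            score *= match / nc                    # likelihood P(x_i | c)
        scores[c] = score
    return max(scores, key=scores.get), scores

# stolen-vehicle data set from the table above
rows = [('Red', 'Sports', 'Domestic'), ('Red', 'Sports', 'Domestic'),
        ('Red', 'Sports', 'Domestic'), ('Yellow', 'Sports', 'Domestic'),
        ('Yellow', 'Sports', 'Imported'), ('Yellow', 'SUV', 'Imported'),
        ('Yellow', 'SUV', 'Imported'), ('Yellow', 'SUV', 'Domestic'),
        ('Red', 'SUV', 'Imported'), ('Red', 'Sports', 'Imported')]
labels = ['Yes', 'No', 'Yes', 'No', 'Yes', 'No', 'Yes', 'No', 'No', 'Yes']
label, scores = naive_bayes_classify(rows, labels, ('Red', 'SUV', 'Domestic'))
print(label, scores)   # 'No' wins: 0.5*(2/5)*(3/5)*(3/5)=0.072 vs 0.5*(3/5)*(1/5)*(2/5)=0.024
```

So the new instance (Red, SUV, Domestic) is classified as not stolen, matching the manual calculation.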
- Explain the maximum a posteriori (MAP) hypothesis using Bayes theorem. (6M) (JULY
- Estimate the conditional probabilities of each attribute {Color, Legs, Height, Smelly} for the species classes {M, H} using the data given in the table. Using these probabilities, estimate the probability values for the new instance (Color=Green, Legs=2, Height=Tall, Smelly=No).

No. | Color | Legs | Height | Smelly | Species
1 | White | 3 | Short | Yes | M
2 | Green | 2 | Tall | No | M
3 | Green | 3 | Short | Yes | M
4 | White | 3 | Short | Yes | M
5 | Green | 2 | Short | No | H
6 | White | 2 | Tall | No | H
7 | White | 2 | Tall | No | H
8 | White | 2 | Short | Yes | H

(10M) (JULY 19)
9. Explain the Naïve Bayes Classifier and Bayesian Belief Networks. (10M) (JULY 19)
10. Prove how maximum likelihood (Bayesian learning) can be used in any learning algorithm that minimizes the squared error between the actual output hypothesis and the predicted output hypothesis. (6M) (JULY 19)
11. Explain the Naïve Bayes Classifier. (8M) (JAN 2020)
12. Explain the Brute-Force MAP learning algorithm. (8M) (JAN 2020)
13. Discuss the Minimum Description Length principle in brief. (8M) (JAN 2020)
14. Explain Bayesian Belief Networks and conditional independence with an example. (8M) (JAN 2020)
15. Explain the Naïve Bayes Classifier. (10M) (SEP 2020)
16. Explain Bayesian Belief Networks. (6M) (SEP 2020)
17. Explain the EM algorithm. (8M) (SEP 2020)
18. Explain the derivation of the K-Means algorithm. (8M) (SEP 2020)
19. List and explain features of Bayesian learning methods. (6M) (Feb 2021)
20. Explain the Brute-Force MAP learning algorithm. (5M) (Feb 2021)
21. Explain maximum likelihood and least-squared error hypotheses. (5M) (Feb 2021)
22. Describe maximum likelihood hypotheses for predicting probabilities. (5M) (Feb 2021)
23. Define Bayesian Belief Networks. Explain with an example. (6M) (Feb 2021)
24. Explain the EM algorithm. (5M) (Feb 2021)
25. Explain Bayes theorem and mention the features of Bayesian learning. (7M) (Feb 2021)
26. Prove that a maximum likelihood hypothesis can be used to predict probabilities. (8M) (Feb 2021)
27. Explain the Naïve Bayes classifier. (6M) (Feb 2021)
28. Describe the MAP learning algorithm. (8M) (Feb 2021)
29. Classify the test data {Red, SUV, Domestic} using the Naïve Bayes classifier for the dataset shown in Table Q8 (b).

Table Q8 (b):
Color | Type | Origin | Stolen
Red | Sports | Domestic | Yes
Red | Sports | Domestic | No
Red | Sports | Domestic | Yes
Yellow | Sports | Domestic | No
Yellow | Sports | Imported | Yes
Yellow | SUV | Imported | No
D10 | Rain | Mild | Normal | Weak | Yes
D11 | Sunny | Mild | Normal | Strong | Yes
D12 | Overcast | Mild | High | Strong | Yes
D13 | Overcast | Hot | Normal | Weak | Yes
D14 | Rain | Mild | High | Strong | No

Use the Naïve Bayes classifier and the training data from the table to classify the following novel instance: <Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong> (10M) (Jul 2021)
MODULE 5
EVALUATING HYPOTHESIS, INSTANCE-BASED
LEARNING, REINFORCEMENT LEARNING
Write short notes on the following: a. Estimating hypothesis accuracy b. Binomial distribution (8M)(JAN 19)
Discuss the method of comparing two algorithms. Justify with the paired t-test method. (8M) (JAN 19)
Discuss the k-nearest neighbor algorithm. (4M)(JAN 19)
Discuss locally weighted regression. (4M)(JAN 19)
Discuss the learning tasks and Q learning in the context of reinforcement learning. (8M)(JAN 19).
Explain locally weighted linear regression. (8M) (JULY 19)
What do you mean by reinforcement learning? How does the reinforcement learning problem differ from other function approximation tasks? (5M) (JULY 19)
Write down Q-learning algorithm. (3M)(JULY 19).
What is instance-based learning? Explain k-nearest neighbor learning. (8M) (JULY
Explain sample error, true error, confidence intervals and Q-learning function. (8M)(JULY 19).
Define: (i) Sample Error (ii) True Error. (4M) (JAN 2020)
Explain k-Nearest Neighbor learning problem. (8M) (JAN 2020)
What is reinforcement learning? (4M) (JAN 2020)
Define expected value, variance, standard deviation, and estimate bias of a random variable. (4M) (JAN 2020)
Explain locally weighted linear regression. (8M) (JAN 2020)
Write a note on Q-Learning. (4M) (JAN 2020)
Explain k-Nearest Neighbor learning algorithm with example. (10M) (SEP 2020)
Explain case-based reasoning with example. (6M) (SEP 2020)
Write short note on: a. Q-Learning b. Radial Basis function c. Locally Weighted Regression d. Sampling Theory (20M) (SEP 2020)
Define the following with examples: a. Sample error b. True error c. Mean d. Variance (8M) (Feb 2021)
Explain central limit theorem. (4M) (Feb 2021)
Explain K-nearest neighbor algorithm. (4M) (Feb 2021)
Explain case-based reasoning. (6M) (Feb 2021)
List and explain the important differences between reinforcement learning and other function approximation tasks. (4M) (Feb 2021)
Explain Q-learning algorithm. (6M) (Feb 2021)
Define a. Sample error b. True error c. Confidence intervals (6M) (Feb 2021)
Explain K-nearest neighbor learning algorithm. (8M) (Feb 2021)
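A minimal k-nearest neighbour sketch — majority vote among the k closest training points under Euclidean distance. The 2-D training points are hypothetical, chosen only to illustrate the vote:

```python
from collections import Counter
from math import dist

def knn_classify(train, query, k=3):
    """k-nearest neighbor: majority vote among the k training points
    closest to the query point (Euclidean distance)."""
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# hypothetical 2-D training points with two class labels
train = [((1, 1), 'A'), ((1, 2), 'A'), ((2, 1), 'A'),
         ((6, 6), 'B'), ((6, 7), 'B'), ((7, 6), 'B')]
print(knn_classify(train, (2, 2)))   # 'A' -- all 3 nearest neighbors are class A
print(knn_classify(train, (6, 5)))   # 'B' -- all 3 nearest neighbors are class B
```

Note the lazy-learning character the exam answer should stress: all computation is deferred to query time, and the "model" is just the stored training set.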
Write a note on Q-learning. (8M) (Feb 2021)
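The Q-learning update rule, Q(s,a) ← Q(s,a) + α·(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)), can be sketched on a toy deterministic environment. The corridor environment and all parameter values below are illustrative assumptions, not part of any question paper:

```python
import random

def q_learning(step, n_states=4, actions=('left', 'right'),
               episodes=1000, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning with an epsilon-greedy behavior policy:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s is not None:                      # None marks a terminal transition
            if random.random() < epsilon:         # explore
                a = random.choice(actions)
            else:                                 # exploit current estimates
                a = max(actions, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            future = max(Q[(s2, b)] for b in actions) if s2 is not None else 0.0
            Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
            s = s2
    return Q

# toy corridor: states 0..3; reaching state 3 yields reward +10 and ends the episode
def step(s, a):
    s2 = s + 1 if a == 'right' else max(s - 1, 0)
    if s2 == 3:
        return None, 10.0
    return s2, 0.0

random.seed(0)
Q = q_learning(step)
print(Q[(0, 'right')])   # converges toward gamma^2 * 10 = 8.1
```

The learned values show the discounting structure the note should explain: Q(2, right) ≈ 10, Q(1, right) ≈ γ·10 = 9, and Q(0, right) ≈ γ²·10 = 8.1, so the greedy policy moves right everywhere.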
Define mean value, variance, standard deviation and estimation bias of a random variable. (4M) (Feb 2021)
Explain locally weighted linear regression and radial basis function. (10M) (Feb 2021)
What is reinforcement learning? How does it differ from other function approximation tasks? (6M) (Feb 2021)
Explain binomial distribution and write the expressions for its probability distribution, mean, variance and standard deviation. (4M) (Jul 2021)
Define the following terms: a. Sample error b. True error c. N% confidence interval d. Random variable e. Expected value f. Variance (6M) (Jul 2021)